CN110263654A - A kind of flame detecting method, device and embedded device - Google Patents

A kind of flame detecting method, device and embedded device

Info

Publication number
CN110263654A
CN110263654A (application number CN201910435666.2A)
Authority
CN
China
Prior art keywords
flame
candidate
block
region
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910435666.2A
Other languages
Chinese (zh)
Inventor
曾杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Clp Smart Security Polytron Technologies Inc
Original Assignee
Shenzhen Clp Smart Security Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Clp Smart Security Polytron Technologies Inc filed Critical Shenzhen Clp Smart Security Polytron Technologies Inc
Priority to CN201910435666.2A priority Critical patent/CN110263654A/en
Publication of CN110263654A publication Critical patent/CN110263654A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

Embodiments of the present invention relate to the fields of image processing and machine learning, and disclose a flame detection method, a flame detection device and an embedded device. The method includes: acquiring a video stream of a target area; determining discrete motion region blocks from the video stream of the target area; fusing the discrete motion region blocks to generate fused region blocks; detecting the fused region blocks to determine candidate flame region blocks; and inputting the candidate flame region blocks into a pre-trained flame detection model to locate the firing area of the target area. In this way, the invention solves the technical problems of low detection accuracy and poor adaptability of existing flame detection methods, improves the accuracy of flame detection, can detect small flames, and can run detection in real time on an embedded device.

Description

Flame detection method and device and embedded equipment
Technical Field
The invention relates to the field of image processing and machine learning, in particular to a flame detection method, a flame detection device and embedded equipment.
Background
Traditional fire detection mainly relies on smoke detectors, infrared detectors, ultraviolet detectors and the like, which raise an alarm when the measured quantity exceeds a given threshold. However, smoke detectors and ultraviolet detectors are unsuitable for tall, large-space buildings and open areas (such as grasslands, forests, tunnels, airports, shopping malls and large warehouses), and they suffer from slow response, high false-alarm rates and other problems.
Video-based flame detection uses computer vision and artificial intelligence to monitor an area in real time and find where flames appear, so that the ignition point can be accurately located and the outbreak and spread of fire avoided. The advantages of this technique are that it can monitor flames over a wide scene, responds quickly, and causes little environmental pollution.
Existing image-based flame detection methods identify flames by characteristics such as color, contour, blur and texture. Because flames are complex and irregular, these methods suffer from poor interference immunity, high false-alarm rates, poor adaptability and other defects.
Disclosure of Invention
The embodiment of the invention aims to provide a flame detection method, a flame detection device and embedded equipment, which solve the technical problems of low detection accuracy and poor adaptability of the conventional flame detection method and improve the flame detection accuracy.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solutions:
in a first aspect, an embodiment of the present invention provides a flame detection method, including:
acquiring a video stream of a target area;
determining discrete motion region blocks according to the video stream of the target region;
fusing the discrete motion region blocks to generate fused region blocks;
detecting the fusion region block, and determining a candidate flame region block;
and inputting the candidate flame area block into a pre-trained flame detection model, and positioning the ignition area of the target area.
In this embodiment of the present invention, the determining discrete motion region blocks according to the video stream of the target region includes:
and acquiring each frame image in the video stream of the target area, and determining discrete motion area blocks by a frame difference method.
In this embodiment of the present invention, the fusing the discrete motion region blocks to generate a fused region block specifically includes:
and fusing the motion region blocks through a morphological dilation operation to generate fused region blocks.
In an embodiment of the present invention, the detecting the fusion region block and determining a candidate flame region block includes:
converting the image of the fusion region block from an RGB space to a YIQ space;
determining candidate flame pixel points in the image of the fusion region block according to the pixel points of the image of the fusion region block in the YIQ space;
determining candidate fire points according to the candidate flame pixel points;
and determining a candidate flame area block according to the candidate fire point.
In an embodiment of the present invention, the determining a candidate flame region block according to the candidate fire point includes:
determining the number of the candidate fire points and the sum of pixel points of the fusion region block;
calculating the ratio of the number of the candidate fire points to the sum of the pixel points of the fusion region block;
and if the proportion value is larger than a preset proportion threshold value, determining the fusion region block as a candidate flame region block.
In an embodiment of the present invention, before inputting the candidate flame region block into a pre-trained flame detection model, the method further includes:
adjusting the size of the candidate flame region block.
In a second aspect, embodiments of the present invention provide a flame detection apparatus, the apparatus comprising:
the video stream unit is used for acquiring a video stream of the target area;
a motion region block unit for determining discrete motion region blocks according to the video stream of the target region;
a fusion region block unit for fusing the discrete motion region blocks to generate a fusion region block;
the flame area block unit is used for detecting the fusion area block and determining a candidate flame area block;
and the ignition area unit is used for inputting the candidate flame area block into a flame detection model trained in advance and positioning the ignition area of the target area.
In an embodiment of the present invention, the motion region block unit is specifically configured to:
and acquiring each frame image in the video stream of the target area, and determining discrete motion area blocks by a frame difference method.
In an embodiment of the present invention, the fusion region block unit is specifically configured to:
and fusing the motion region blocks through a morphological dilation operation to generate fused region blocks.
In an embodiment of the present invention, the flame zone block unit is specifically configured to:
converting the image of the fusion region block from an RGB space to a YIQ space;
determining candidate flame pixel points in the image of the fusion region block according to the pixel points of the image of the fusion region block in the YIQ space;
determining candidate fire points according to the candidate flame pixel points;
and determining a candidate flame area block according to the candidate fire point.
In an embodiment of the present invention, the determining a candidate flame region block according to the candidate fire point includes:
determining the number of the candidate fire points and the sum of pixel points of the fusion region block;
calculating the ratio of the number of the candidate fire points to the sum of the pixel points of the fusion region block;
and if the proportion value is larger than a preset proportion threshold value, determining the fusion region block as a candidate flame region block.
In an embodiment of the present invention, the flame detection device further includes:
and the flame area block adjusting unit is used for adjusting the size of the candidate flame area block.
In a third aspect, an embodiment of the present invention provides an embedded device, including:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a flame detection method as described above.
In a fourth aspect, embodiments of the present invention also provide a non-transitory computer-readable storage medium having stored thereon computer-executable instructions for enabling an embedded device to perform a flame detection method as described above.
The embodiment of the invention has the beneficial effects that: in contrast to the prior art, an embodiment of the present invention provides a flame detection method, including: acquiring a video stream of a target area; determining discrete motion region blocks according to the video stream of the target region; fusing the discrete motion region blocks to generate fused region blocks; detecting the fusion region block, and determining a candidate flame region block; and inputting the candidate flame area block into a pre-trained flame detection model, and positioning the ignition area of the target area. Through the mode, the embodiment of the invention solves the technical problems of low detection accuracy and poor adaptability of the existing flame detection method, and improves the accuracy of flame detection.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not drawn to scale unless otherwise specified.
FIG. 1 is a system architecture diagram according to an embodiment of the present invention;
FIG. 2 is a flowchart of an algorithm of a flame detection method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a method for detecting flame according to an embodiment of the invention;
FIG. 4 is a detailed flowchart of step S40 in FIG. 3;
FIG. 5 is a detailed flowchart of step S44 in FIG. 4;
FIG. 6 is a flow chart illustrating a method for identifying a fire zone according to an embodiment of the present invention;
FIG. 7 is a schematic view of a flame detection device provided by an embodiment of the invention;
fig. 8 is a schematic structural diagram of an embedded device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In transfer learning, the weights of the layers of a trained network are transferred to a brand-new network, for example: train a base network → copy its first n layers into the first n layers of the target network → randomly initialize the remaining layers of the target network → begin training on the target task. During backpropagation there are two options: (1) freeze the migrated n layers, i.e. keep their values unchanged while training the target task; or (2) do not freeze the first n layers but keep adjusting their values, which is called fine-tuning. The choice depends mainly on the size of the target data set and the number of parameters in the first n layers: if the target data set is small but the number of parameters is large, freezing is usually adopted to prevent overfitting; otherwise, fine-tuning is used.
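A minimal sketch of these two options, assuming tf.keras and an arbitrary freeze depth; the patent itself names no API, so every identifier here is illustrative:

```python
import tensorflow as tf

# Base network whose first layers are "migrated" to the new task.
base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))

N_FROZEN = 20  # hypothetical freeze depth

# Option (1): freeze the migrated n layers; their weights stay fixed
# while the target task is trained.
for layer in base.layers[:N_FROZEN]:
    layer.trainable = False

# Option (2), fine-tune: leave all layers trainable so training keeps
# adjusting the copied weights (uncomment to use instead of option 1).
# for layer in base.layers:
#     layer.trainable = True

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new task head
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```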
A convolutional neural network is a neural network in which, in at least one layer, matrix multiplication is replaced by a convolution operation. The nature of the convolution operation makes such a network well suited to processing data with a grid-like structure. The most typical grid data is the digital image: whether grayscale or color, it is a set of scalars or vectors defined on a two-dimensional grid of pixels. Convolutional neural networks have therefore been widely used in image and text recognition since their inception, and are gradually expanding into other fields such as natural language processing.
Convolution is a mathematical operation performed on two functions, with different interpretations in different disciplines. In a convolutional network, two functions that participate in the operation are called input and kernel functions (kernel functions), respectively. Essentially, convolution is the process of weighting and summing the inputs by using the kernel function as the weighting coefficient.
The input layer converts the image to be processed into one or more pixel matrices; the convolutional layer extracts features from the pixel matrices with one or more convolution kernels; the resulting feature maps are passed through a nonlinear function and sent to the pooling layer, which performs dimension reduction. Alternating convolutional and pooling layers lets the convolutional neural network extract image features at different levels. Finally, the obtained features serve as input to the fully connected layer, and the classifier of the fully connected layer outputs the classification result.
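A toy tf.keras model mirroring this layer sequence (all sizes illustrative, not taken from the patent):

```python
import tensorflow as tf

# Convolution -> nonlinearity -> pooling, repeated, then a fully
# connected classifier, matching the layer sequence described above.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(300, 300, 3)),  # pixel matrix in
    tf.keras.layers.MaxPooling2D(2),                    # dimension reduction
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # deeper features
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),     # classifier output
])
```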
In the training of convolutional neural networks, the parameter to be trained is a convolutional kernel, i.e., a matrix of weight coefficients in a convolutional layer. The training also adopts a back propagation method, and the continuous updating of the parameters can improve the accuracy of image feature extraction.
The current methods that detect flames with deep learning have the following drawbacks. Small flames in an image cannot be detected, so a small flame far from the camera is missed: the neural networks used in deep learning require a fixed input resolution, such as 300 x 300 or 128 x 128, while cameras in real scenes produce resolutions such as 1920 x 1080 or 1280 x 720. Processing a camera image directly with a deep neural network therefore requires reducing its resolution, and once the resolution is reduced the flame region blocks in the image shrink with it, so they can no longer be detected and the flame detection error rate rises. The requirement of real-time flame detection on embedded devices also cannot be met: the deep neural network models in current use are relatively complex and computationally heavy, so an ordinary embedded device takes a long time to process one frame of image and cannot detect in real time.
In the embodiment of the invention, the flame detection model is an SSD detection model, a feedforward convolutional neural network. SSD is the abbreviation of Single Shot MultiBox Detector, a multi-target detection algorithm that directly predicts the category and position of targets.
Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention;
as shown in fig. 1, at least one monitoring camera is connected to the embedded device by wire or wirelessly, for example through data lines, serial lines, Bluetooth, a wireless local area network, an ad hoc network, 3G, 4G or 5G. The embedded device performs detection on the video stream sent by the monitoring camera and generates a detection result, such as a detected firing area. The embedded device is in turn connected, by wire or wirelessly, to a monitoring platform and sends the detection result to it, so that the monitoring platform can raise an alarm accordingly, for example reminding the person responsible for fire safety to carry out fire-safety treatment.
In the embodiment of the invention, the embedded device may be an intelligent terminal or a mobile terminal with a communication module and an image processing module. The embedded device may be connected to one monitoring camera or to several, and may also be integrated with the monitoring camera. The monitoring platform may be a computer terminal, a tower server, a rack server, a blade server or a cloud server.
Specifically, the following describes an embodiment of the present invention specifically by taking an example in which the embedded device is an intelligent terminal connected to a monitoring camera.
Referring to fig. 2, fig. 2 shows the algorithm flow of a flame detection method according to an embodiment of the present invention. As shown in fig. 2, a frame difference method is applied to the acquired video stream to obtain discrete motion region blocks; the discrete motion region blocks are fused to generate fused region blocks; color information model detection is applied to the fused region blocks to generate candidate flame region blocks; and flame detection model detection is applied to the candidate flame region blocks to determine the firing region, as sketched below.
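An end-to-end sketch of this pipeline, wiring together the helper functions that are sketched one by one in the steps below; motion_blocks, fuse_mask, candidate_fire_mask, is_candidate_region, resize_block and locate_fire are illustrative names introduced by this description, not by the patent:

```python
# End-to-end sketch of the fig. 2 pipeline under the assumptions above.
# Each helper is defined in the per-step sketches later in this document.
def detect_fire(prev_frame, curr_frame):
    mask, blocks = motion_blocks(prev_frame, curr_frame)  # frame difference
    if not blocks:
        return []                      # no motion: keep reading the stream
    results = []
    for (x, y, w, h) in fuse_mask(mask):                  # fused region blocks
        block = curr_frame[y:y + h, x:x + w]
        fire_mask = candidate_fire_mask(block[..., ::-1]) # BGR -> RGB
        if is_candidate_region(fire_mask):                # color information model
            results.append(locate_fire(resize_block(block)))  # flame model
    return results
```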
Specifically, referring to fig. 3 again, fig. 3 is a schematic flow chart of a flame detection method according to an embodiment of the invention;
as shown in fig. 3, the method is applied to an embedded device, and the method includes:
step S10: acquiring a video stream of a target area;
specifically, before acquiring the video stream of the target area, the method further includes: the method comprises the steps that a flame detection model is trained in advance, the flame detection model is a MobileNet _ SSD detection model, and in order to enable the MobileNet _ SSD detection model to be capable of detecting an ignition area, the MobileNet _ SSD detection model needs to be trained in advance by using a large number of images and then can be deployed in a real application scene.
In an embodiment of the present invention, the flame detection model is the open-source single-shot detector (SSD), a feedforward convolutional neural network composed mainly of a base network followed by a series of convolution kernels. The base network is the MobileNetV1 neural network, a lightweight convolutional neural network designed for classification and suited to mobile terminals. In the MobileNet_SSD detection model, MobileNetV1 is truncated to remove its classification layer and used as a feature extractor; the extracted features are fed into a series of convolution kernels in the SSD for object detection, where each convolution kernel corresponds to a classifier responsible for detecting the class of an object and its position.
In an embodiment of the present invention, the training flame detection model includes:
making a flame training sample and inputting the flame training sample into a pre-trained MobileNet _ SSD model for training. Specifically, making the flame training sample includes: the method comprises the steps of collecting flame training samples from channels such as the Internet and monitoring videos, or manually synthesizing images containing flames, processing the collected images to increase the sample amount, manually marking out a flame external rectangular frame in the images, and obtaining the flame training samples. Because the number of parameters of the MobileNet _ SSD model is large, the SSD is trained on a small number of images and is easily over-fitted, so that the trained SSD has poor popularization capability. To reduce the risk of over-fitting, one typically chooses to increase the number of images that are collected to increase the sample size, including: performing operations such as scaling and turning on the collected image, for example: and randomly turning horizontally, wherein the horizontal turning refers to turning the image up and down. And the images are amplified through random cropping and random horizontal turnover, so that the number of the images is increased, and the overfitting risk is reduced.
Specifically, inputting the flame training samples into a pre-trained MobileNet_SSD model for training includes: based on the TensorFlow framework, feeding the obtained flame training samples into a pre-trained MobileNet_SSD model for training, finally obtaining a model capable of recognizing flames.
Since the SSD is a general detection model, when applied to different specific scenarios, the SSD needs to be modified appropriately according to the requirements of the application scenarios. Since the present invention requires detection of one class, the output class of the SSD is modified to one.
In the embodiment of the invention, the whole training process is completed on Google's open-source object detection platform, the TensorFlow Object Detection API.
Step S20: determining discrete motion region blocks according to the video stream of the target region;
the target area is provided with at least one monitoring camera, video stream of the target area is collected through the monitoring camera, and the video stream is composed of multiple frames of images.
Specifically, the determining a discrete motion region block according to the video stream of the target region includes:
and acquiring each frame image in the video stream of the target area, and determining discrete motion area blocks by a frame difference method.
If no motion region block is detected by the frame difference method, the video stream continues to be acquired for image analysis; if a motion region block is detected, the process proceeds to step S30.
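A minimal frame-difference sketch (OpenCV 4.x assumed; the threshold value is an illustrative choice, not one specified by the patent):

```python
import cv2

def motion_blocks(prev_bgr, curr_bgr, thresh=25):
    """Return the binary motion mask and the discrete motion region blocks."""
    g1 = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)                      # frame difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return mask, [cv2.boundingRect(c) for c in contours]  # (x, y, w, h)
```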
Step S30: fusing the discrete motion region blocks to generate fused region blocks;
discrete blocks can occur in the motion region block acquired by the frame difference method, for example: a large fire in the image can be dispersed into a plurality of small fire blocks, and the dispersed motion region blocks are fused to generate a fusion region block.
Specifically, the fusing the discrete motion region blocks to generate a fused region block specifically includes:
and fusing the motion region blocks through a morphological dilation operation to generate fused region blocks. It will be appreciated that, since the distance between discrete motion region blocks is not fixed, multiple fused region blocks may result from the fusion process: fusion may produce 1 to N large region blocks, the number being related to the distances between the discrete motion region blocks. If the distance between discrete motion region blocks is small, the dilation operation merges the closely spaced motion region blocks into a larger fused region block, as sketched below.
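A sketch of this fusion step under the same OpenCV assumption; the kernel size and iteration count are illustrative:

```python
import cv2

def fuse_mask(motion_mask):
    """Dilate the motion mask so nearby blocks merge, then re-extract blocks."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    fused = cv2.dilate(motion_mask, kernel, iterations=2)
    contours, _ = cv2.findContours(fused, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]  # fused region blocks
```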
Step S40: detecting the fusion region block, and determining a candidate flame region block;
and detecting the fusion region block through a color information model, and judging whether the fusion region block belongs to a candidate flame region block.
Referring back to fig. 4, fig. 4 is a detailed flowchart of step S40 in fig. 3;
as shown in fig. 4, the detecting the fusion region block and determining a candidate flame region block includes:
step S41: converting the image of the fusion region block from an RGB space to a YIQ space;
specifically, the R value, the G value, and the B value of the RGB color model of the fusion region block are obtained, and the R value, the G value, and the B value of the RGB color model of the fusion region block are converted into the Y value, the I value, and the Q value of the YIQ color model. Specifically, the image of the fusion region block is converted from an RGB space to a YIQ space by formula (1), where formula (1) is as follows:
step S42: determining candidate flame pixel points in the image of the fusion region block according to the pixel points of the image of the fusion region block in the YIQ space;
specifically, whether the Y value, the I value, and the Q value of the YIQ color model satisfy formula (2) is judged by converting the R value, the G value, and the B value of the RGB color model of the fusion region block into the Y value, the I value, and the Q value of the YIQ color model, and if so, it is determined that a pixel point in the image of the fusion region block is a candidate flame pixel point, where the formula (2) is as follows:
step S43: determining candidate fire points according to the candidate flame pixel points;
specifically, if the R value, the G value, and the B value in the RGB color model corresponding to the candidate flame pixel point satisfy preset conditions, the candidate flame pixel point is determined as a candidate fire point, where the preset conditions are that the R value, the G value, and the B value in the RGB color model corresponding to the candidate flame pixel point satisfy formula (3), and the formula (3) is as follows:
step S44: and determining a candidate flame area block according to the candidate fire point.
Specifically, please refer to fig. 5, fig. 5 is a detailed flowchart of step S44 in fig. 4;
as shown in fig. 5, the determining a candidate flame region block according to the candidate fire point includes:
step S441: determining the number of the candidate fire points and the sum of pixel points of the fusion region block;
specifically, the number of the candidate fire points and the sum of the pixel points of the fusion region block are counted.
Step S442: calculating the ratio of the number of the candidate fire points to the sum of the pixel points of the fusion region block;
specifically, a ratio of the number of candidate fire points in the fusion image block to the total sum of pixel points of the fusion region block is calculated, for example: the number of the candidate fire points is represented by NUM, the sum of pixel points of the fusion area block is represented by NUM,and expressing the ratio of the number of the candidate fire points to the sum of the pixel points.
Step S443: and if the proportion value is larger than a preset proportion threshold value, determining the fusion region block as a candidate flame region block.
Specifically, with η denoting the preset proportion threshold, it is judged whether the ratio num/NUM is larger than the preset proportion threshold η. If so, the fused region block is determined to be a candidate flame region block; if not, the remaining fused region blocks continue to be examined. Since fusion may produce several fused region blocks, every fused region block must be examined to decide whether it belongs to the candidate flame region blocks; if none does, the process returns to step S10 to acquire the video stream of the target area. In other words, it is determined whether the number of candidate fire points meets the preset candidate condition of formula (4); if it does, the fused region block is a candidate flame region block, and otherwise the process returns to step S10 to acquire the video stream of the target area. Formula (4) is:

num / NUM > η        (4)

where num is the number of candidate fire points in the fused image block, NUM is the total number of pixel points in the fused region block, num/NUM is the ratio of the number of candidate fire points to the total number of pixel points, and η is the preset proportion threshold. It will be appreciated that the preset proportion threshold η can be set specifically, for example to 0.1 or 0.2; preferably, η is set to 0.1. A minimal sketch of this decision follows.
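A minimal sketch of the formula (4) decision, with η = 0.1 matching the preferred value above; fire_mask is assumed to be a boolean array such as the one produced in the step S41-S43 sketch:

```python
def is_candidate_region(fire_mask, eta=0.1):
    """Formula (4): the block is a candidate flame region if num/NUM > eta."""
    num = int(fire_mask.sum())   # number of candidate fire points in the block
    NUM = fire_mask.size         # total number of pixel points in the block
    return num / NUM > eta
```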
In an embodiment of the present invention, before inputting the candidate flame region block into a pre-trained flame detection model, the method further includes:
adjusting the size of the candidate flame region block.
Specifically, because candidate flame region blocks are generally small, adjusting the size of the candidate flame region block solves the problem that small flames in the image cannot be detected, so that small flames far from the monitoring camera can be found. Specifically, adjusting the size of the candidate flame region block includes scaling it to a preset resolution, for example 300 x 300. It will be appreciated that the preset resolution can also be adjusted to specific needs, for example set to 500 x 500, and so on; a minimal resize call is sketched below.
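A minimal resize sketch (OpenCV assumed; the patent does not name a library):

```python
import cv2

def resize_block(block, size=(300, 300)):
    """Scale a candidate flame region block to the model's input resolution."""
    return cv2.resize(block, size, interpolation=cv2.INTER_LINEAR)
```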
Step S50: inputting the candidate flame area block into a pre-trained flame detection model, and positioning the ignition area of the target area;
referring to fig. 6, fig. 6 is a flowchart illustrating a process of identifying a fire area according to an embodiment of the present invention;
as shown in fig. 6, because of the computational cost and complexity of the MobileNet_SSD model, and so as not to lose information from each frame, the candidate flame region block first has its size adjusted, for example to a resolution of 300 x 300. The flame detection model is a deep neural network model; the candidate flame region block is input into the pre-trained flame detection model, which recognizes and locates the firing region of the target area. For example, the adjusted candidate flame region block is input into the flame detection model, and the model outputs the position of the target (the firing region) within the candidate flame region block. The position is given by two coordinate points, which determine a minimum bounding rectangle: the two points are the coordinates of the rectangle's upper-left and lower-right corners, and the position of this rectangle fixes the position of the firing region, as sketched below.
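A hedged inference sketch in the style of a TensorFlow Object Detection API TF2 SavedModel export; the model path and the output dictionary keys are assumptions about a typical export, not details given by the patent:

```python
import numpy as np
import tensorflow as tf

# Assumed path to an exported MobileNet_SSD SavedModel.
detect_fn = tf.saved_model.load("exported_mobilenet_ssd/saved_model")

def locate_fire(block_rgb_300: np.ndarray):
    """Run detection on one resized block; return the top box and its score."""
    inp = tf.convert_to_tensor(block_rgb_300[np.newaxis, ...], tf.uint8)
    out = detect_fn(inp)
    boxes = out["detection_boxes"][0].numpy()    # [ymin, xmin, ymax, xmax], normalized
    scores = out["detection_scores"][0].numpy()  # confidence scores
    # The top box's two corner points are the minimum bounding rectangle
    # of the firing region, as described above.
    return boxes[0], float(scores[0])
```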
In an embodiment of the present invention, the method further comprises: outputting a confidence score for the ignition region through the flame detection model, wherein the higher the confidence score, the higher the probability that the ignition region exists in the target region.
In the embodiment of the present invention, the discrete motion region blocks may instead be determined by Gaussian background modeling, and the candidate flame region blocks by a similar color detection model; flames may also be detected by an object detection method based on a deep neural network. All of these fall within the protection scope of the present invention.
In an embodiment of the invention, a flame detection method is provided, which is applied to an embedded device, and comprises the following steps: acquiring a video stream of a target area; determining discrete motion region blocks according to the video stream of the target region; fusing the discrete motion region blocks to generate fused region blocks; detecting the fusion region block, and determining a candidate flame region block; and inputting the candidate flame area block into a pre-trained flame detection model, and positioning the ignition area of the target area. Through the mode, the embodiment of the invention solves the technical problems of low detection accuracy and poor adaptability of the existing flame detection method, and improves the accuracy of flame detection.
Compared with traditional sensor-based fire detection, the method has a wider monitoring range and is insensitive to environmental factors such as temperature and humidity. Because it identifies fire with deep learning, its recognition accuracy is high and its false-detection rate low; compared with traditional image recognition, detection based on a deep neural network adapts better to complex application scenes and can detect small flames far from the camera, so a fire can be discovered as early as possible. At the same time, the method can run on embedded devices, reducing equipment cost, and can detect fire in real time with a response time within 1 s, which helps staff respond promptly and prevents the consequences of a fire from spreading.
Referring to fig. 7, fig. 7 is a schematic view of a flame detection device according to an embodiment of the invention; the flame detection device can be applied to embedded equipment,
as shown in fig. 7, the flame detection device 70 includes:
a video stream unit 71, configured to obtain a video stream of the target area;
a motion region block unit 72 for determining discrete motion region blocks from the video stream of the target region;
a fusion region block unit 73 for fusing the discrete motion region blocks to generate fusion region blocks;
a flame region block unit 74, configured to detect the fusion region block and determine a candidate flame region block;
and a fire area unit 75, configured to input the candidate flame area block into a flame detection model trained in advance, and locate a fire area of the target area.
In the embodiment of the present invention, the motion region block unit 72 is specifically configured to:
and acquiring each frame image in the video stream of the target area, and determining discrete motion area blocks by a frame difference method.
In the embodiment of the present invention, the fusion area block unit 73 is specifically configured to:
and fusing the motion region blocks through a morphological dilation operation to generate fused region blocks.
In the embodiment of the present invention, the flame region block unit 74 is specifically configured to:
converting the image of the fusion region block from an RGB space to a YIQ space;
determining candidate flame pixel points in the image of the fusion region block according to the pixel points of the image of the fusion region block in the YIQ space;
determining candidate fire points according to the candidate flame pixel points;
and determining a candidate flame area block according to the candidate fire point.
In an embodiment of the present invention, the determining a candidate flame region block according to the candidate fire point includes:
determining the number of the candidate fire points and the sum of pixel points of the fusion region block;
calculating the ratio of the number of the candidate fire points to the sum of the pixel points of the fusion region block;
and if the proportion value is larger than a preset proportion threshold value, determining the fusion region block as a candidate flame region block.
In an embodiment of the present invention, the flame detection device 70 further includes:
a flame zone block adjusting unit (not shown) for adjusting the size of the candidate flame zone block.
Since the apparatus embodiment and the method embodiment are based on the same concept, the contents of the apparatus embodiment may refer to the method embodiment on the premise that the contents do not conflict with each other, and are not described herein again.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embedded device according to an embodiment of the present invention. The embedded device can be an electronic device which can perform flame detection, such as an intelligent terminal and a mobile terminal.
As shown in fig. 8, the embedded device 80 includes one or more processors 81 and a memory 82. In fig. 8, one processor 81 is taken as an example.
The processor 81 and the memory 82 may be connected by a bus or other means, and fig. 8 illustrates the connection by a bus as an example.
The memory 82, which is a non-volatile computer-readable storage medium, may be used for storing non-volatile software programs, non-volatile computer-executable programs, and modules, such as the units corresponding to a flame detection method in the embodiment of the present invention (for example, the units described in fig. 7). The processor 81 executes various functional applications of the flame detection method and data processing, i.e. the functions of the various modules and units of the method embodiment described above and the apparatus embodiment described above, by running non-volatile software programs, instructions and modules stored in the memory 82.
The memory 82 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 82 may optionally include memory located remotely from the processor 81, which may be connected to the processor 81 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The modules are stored in the memory 82 and, when executed by the one or more processors 81, perform the flame detection method of any of the method embodiments described above, e.g., performing the various steps shown in fig. 3, 4, 5 described above; the functions of the respective modules or units described in fig. 7 can also be implemented.
The embedded device of the embodiment of the present invention exists in various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication capability and are primarily aimed at providing voice and data communication. This class includes smart phones (e.g., the iPhone), multimedia phones, feature phones and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also support mobile Internet access. This class includes PDA, MID and UMPC devices, such as the iPad.
(3) Portable entertainment devices: such devices can display and play video content and generally also support mobile Internet access. This class includes video players, handheld game consoles, intelligent toys and portable car navigation devices.
(4) And other electronic equipment with a video playing function and an internet surfing function.
Embodiments of the present invention also provide a non-transitory computer storage medium storing computer-executable instructions, which are executed by one or more processors, such as one of the processors 81 in fig. 8, to enable the one or more processors to perform the flame detection method in any of the method embodiments, such as performing the steps shown in fig. 3, 4, and 5 described above; the functions of the various units described in fig. 7 may also be implemented.
The above-described embodiments of the apparatus or device are merely illustrative, wherein the unit modules described as separate parts may or may not be physically separate, and the parts displayed as module units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the technical solutions mentioned above may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the method according to each embodiment or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of flame detection, the method comprising:
acquiring a video stream of a target area;
determining discrete motion region blocks according to the video stream of the target region;
fusing the discrete motion region blocks to generate fused region blocks;
detecting the fusion region block, and determining a candidate flame region block;
and inputting the candidate flame area block into a pre-trained flame detection model, and positioning the ignition area of the target area.
2. The method of claim 1, wherein determining discrete motion region blocks from the video stream of the target region comprises:
and acquiring each frame image in the video stream of the target area, and determining discrete motion area blocks by a frame difference method.
3. The method according to claim 1, wherein the fusing the discrete motion region blocks to generate fused region blocks specifically comprises:
and fusing the motion region blocks through a morphological dilation operation to generate fused region blocks.
4. The method according to claim 1, wherein the detecting the fusion region block and determining a candidate flame region block comprises:
converting the image of the fusion region block from an RGB space to a YIQ space;
determining candidate flame pixel points in the image of the fusion region block according to the pixel points of the image of the fusion region block in the YIQ space;
determining candidate fire points according to the candidate flame pixel points;
and determining a candidate flame area block according to the candidate fire point.
5. The method of claim 4, wherein the determining a candidate flame region block according to the candidate fire point comprises:
determining the number of the candidate fire points and the sum of pixel points of the fusion region block;
calculating the ratio of the number of the candidate fire points to the sum of the pixel points of the fusion region block;
and if the proportion value is larger than a preset proportion threshold value, determining the fusion region block as a candidate flame region block.
6. The method of claim 1, wherein prior to inputting the candidate flame region block into a pre-trained flame detection model, the method further comprises:
adjusting the size of the candidate flame region block.
7. A flame detection device, the device comprising:
the video stream unit is used for acquiring a video stream of the target area;
a motion region block unit for determining discrete motion region blocks according to the video stream of the target region;
a fusion region block unit for fusing the discrete motion region blocks to generate a fusion region block;
the flame area block unit is used for detecting the fusion area block and determining a candidate flame area block;
and the ignition area unit is used for inputting the candidate flame area block into a flame detection model trained in advance and positioning the ignition area of the target area.
8. The apparatus according to claim 7, wherein the fusion area block unit is specifically configured to:
and fusing the motion region blocks through a morphological dilation operation to generate fused region blocks.
9. The apparatus of claim 7, wherein the flame area block unit is specifically configured to:
converting the image of the fusion region block from an RGB space to a YIQ space;
determining candidate flame pixel points in the image of the fusion region block according to the pixel points of the image of the fusion region block in the YIQ space;
determining candidate fire points according to the candidate flame pixel points;
and determining a candidate flame area block according to the candidate fire point.
10. An embedded device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
CN201910435666.2A 2019-05-23 2019-05-23 A kind of flame detecting method, device and embedded device Pending CN110263654A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910435666.2A CN110263654A (en) 2019-05-23 2019-05-23 A kind of flame detecting method, device and embedded device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910435666.2A CN110263654A (en) 2019-05-23 2019-05-23 A kind of flame detecting method, device and embedded device

Publications (1)

Publication Number Publication Date
CN110263654A true CN110263654A (en) 2019-09-20

Family

ID=67915274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910435666.2A Pending CN110263654A (en) 2019-05-23 2019-05-23 A kind of flame detecting method, device and embedded device

Country Status (1)

Country Link
CN (1) CN110263654A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017084094A1 (en) * 2015-11-20 2017-05-26 富士通株式会社 Apparatus, method, and image processing device for smoke detection
CN106845443A (en) * 2017-02-15 2017-06-13 福建船政交通职业学院 Video flame detecting method based on multi-feature fusion
CN107609470A (en) * 2017-07-31 2018-01-19 成都信息工程大学 The method of outdoor fire disaster early-stage smog video detection
CN107704818A (en) * 2017-09-28 2018-02-16 韦彩霞 A kind of fire detection system based on video image
CN108765454A (en) * 2018-04-25 2018-11-06 深圳市中电数通智慧安全科技股份有限公司 A kind of smog detection method, device and device end based on video
CN109035666A (en) * 2018-08-29 2018-12-18 深圳市中电数通智慧安全科技股份有限公司 A kind of fire-smoke detection method, apparatus and terminal device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853287A (en) * 2019-09-26 2020-02-28 华南师范大学 Flame real-time monitoring system and method based on Internet of things distributed architecture
CN110956611A (en) * 2019-11-01 2020-04-03 武汉纺织大学 Smoke detection method integrated with convolutional neural network
CN110975191A (en) * 2019-12-24 2020-04-10 尹伟 Fire extinguishing method for unmanned aerial vehicle
CN113836967A (en) * 2020-06-08 2021-12-24 阿里巴巴集团控股有限公司 Data processing method, data processing device, storage medium and computer equipment
CN111797726A (en) * 2020-06-18 2020-10-20 浙江大华技术股份有限公司 Flame detection method and device, electronic equipment and storage medium
CN112801148A (en) * 2021-01-14 2021-05-14 西安电子科技大学 Fire recognition and positioning system and method based on deep learning
CN112733766A (en) * 2021-01-15 2021-04-30 北京锐马视讯科技有限公司 Video flame detection method, device and equipment based on pixel technology
CN113408479A (en) * 2021-07-12 2021-09-17 重庆中科云从科技有限公司 Flame detection method and device, computer equipment and storage medium
CN114155457A (en) * 2021-11-16 2022-03-08 华南师范大学 Control method and control device based on flame dynamic identification

Similar Documents

Publication Publication Date Title
CN110263654A (en) A kind of flame detecting method, device and embedded device
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN108229307B (en) Method, device and equipment for object detection
CN108509859B (en) Non-overlapping area pedestrian tracking method based on deep neural network
CN110909630B (en) Abnormal game video detection method and device
CN108416902A (en) Real-time object identification method based on difference identification and device
CN109815881A (en) Training method, the Activity recognition method, device and equipment of Activity recognition model
CN111652181B (en) Target tracking method and device and electronic equipment
CN113642431A (en) Training method and device of target detection model, electronic equipment and storage medium
CN111582116A (en) Video erasing trace detection method, device, equipment and storage medium
CN114253647B (en) Element display method and device, electronic equipment and storage medium
CN113065379B (en) Image detection method and device integrating image quality and electronic equipment
WO2023142912A1 (en) Method and apparatus for detecting left behind object, and storage medium
CN116434325A (en) Method, device, equipment and storage medium for detecting specific action
CN115620054A (en) Defect classification method and device, electronic equipment and storage medium
KR102637342B1 (en) Method and apparatus of tracking target objects and electric device
CN111274985B (en) Video text recognition system, video text recognition device and electronic equipment
CN117372928A (en) Video target detection method and device and related equipment
CN115082758B (en) Training method of target detection model, target detection method, device and medium
CN115294162B (en) Target identification method, device, equipment and storage medium
CN101567088B (en) Method and device for detecting moving object
CN111723614A (en) Traffic signal lamp identification method and device
CN112084815A (en) Target detection method based on camera focal length conversion, storage medium and processor
CN105787963A (en) Video target tracking method and device
CN114511877A (en) Behavior recognition method and device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190920