CN111986436B - Comprehensive flame detection method based on ultraviolet and deep neural networks - Google Patents

Comprehensive flame detection method based on ultraviolet and deep neural networks

Info

Publication number
CN111986436B
CN111986436B (application CN202010908392.7A)
Authority
CN
China
Prior art keywords
flame
neural network
training
ultraviolet
alarm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010908392.7A
Other languages
Chinese (zh)
Other versions
CN111986436A (en)
Inventor
王思维
范峥荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Shidao Information Technology Co ltd
Original Assignee
Chengdu Shidao Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Shidao Information Technology Co ltd filed Critical Chengdu Shidao Information Technology Co ltd
Priority to CN202010908392.7A
Publication of CN111986436A
Application granted
Publication of CN111986436B
Active legal-status: Current
Anticipated expiration legal-status

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 17/00 Fire alarms; Alarms responsive to explosion
    • G08B 17/12 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B 17/125 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions, by using a video camera to detect fire or smoke
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Fire Alarms (AREA)
  • Photometry And Measurement Of Optical Pulse Characteristics (AREA)

Abstract

The invention provides a comprehensive flame detection method based on ultraviolet and deep neural networks, built around a fire detection system comprising an ultraviolet detector, a camera, a controller and an alarm device. The data output end of the ultraviolet detector is connected with the first data input end of the controller, the trigger end of the controller is connected with the trigger end of the camera, the data output end of the camera is connected with the second data input end of the controller, and the alarm output end of the controller is connected with the alarm input end of the alarm device. The controller judges whether the data collected by the ultraviolet detector meets the triggering condition; if so, it triggers the camera to collect data, judges the environmental condition from the data collected by the camera, and gives an alarm through the alarm device. The invention can assess fire spread and raise an alarm accordingly.

Description

Comprehensive flame detection method based on ultraviolet and deep neural networks
Technical Field
The invention relates to the field of deep-learning target detection, in particular to a comprehensive flame detection method based on ultraviolet and deep neural networks.
Background
Fire has played an important role in the development of society, but it is a double-edged sword. To limit the impact of fire on life, property, society and the economy, a fire must be contained while it is still very small, which makes fire detection technology very important. With the development of electronic technology, fire detection devices have proliferated; by detection principle, they can be classified into smoke-sensing, temperature-sensing, photosensitive and image-based fire detectors.
1. A temperature-sensing fire detector is a sensor that responds to changes in ambient temperature, converting the temperature signal in the environment into an electrical signal. A smoke-sensing fire detector responds to smoke particles in the ambient air, converting changes in smoke concentration at the detection site into an electrical signal.
2. In general, temperature-sensing fire detectors are more reliable than smoke detectors and place fewer demands on the environment; however, they respond slowly to an incipient fire and are unsuitable for places where smoke and dust may be generated. Smoke-sensing fire detectors can catch dangerous situations at the earliest stage of a fire, but they suit only places where a fire produces heavy smoke or where smoldering is likely, and they are of limited use in environments with rapid ventilation or routinely heavy smoke. Both traditional temperature-sensing and smoke-sensing detectors suffer from high false alarm rates and long response times.
3. Photosensitive flame detectors are generally divided into ultraviolet and infrared detectors; traditional designs include single-ultraviolet, single-infrared, dual-infrared and triple-infrared detectors. Ultraviolet detectors respond quickly and are insensitive to people and high-temperature objects, but they have background noise and are easily disturbed by lightning, electric arcs and the like; infrared detectors are easily disturbed by high-temperature objects, people, sunlight and the like. Single-ultraviolet and single-infrared detection therefore readily produce false alarms.
4. An image-based fire detector analyzes environmental pictures captured by a camera to judge whether a fire exists in the current environment. Existing image-processing algorithms can detect fire, but most apply only to certain fixed scenes, so it is difficult to raise the detection rate and lower the false detection rate at the same time.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly creatively provides a comprehensive flame detection method based on an ultraviolet and deep neural network.
In order to achieve the above object, the present invention provides a comprehensive flame detection system based on ultraviolet and deep neural networks, comprising an ultraviolet detector, a camera, a controller and an alarm device;
the data output end of the ultraviolet detector is connected with the first data input end of the controller, the trigger end of the controller is connected with the trigger end of the camera, the data output end of the camera is connected with the second data input end of the controller, and the alarm output end of the controller is connected with the alarm input end of the alarm device;
the controller judges whether the data collected by the ultraviolet detector meet the triggering conditions or not according to the data collected by the ultraviolet detector, if so, the controller triggers the camera to collect the data, and the controller judges the environmental condition according to the data collected by the camera and gives an alarm through the alarm device.
The invention also discloses a comprehensive flame detection method based on ultraviolet and deep neural networks, which comprises the following steps:
S1, acquiring image data of multiple imaging modes in the environment where flames need to be detected, including sample images with flames and sample images without fire;
S2, processing the flame image data set and training a model based on Yolo_v4 to construct a deep neural network model for identifying flame;
S3, detecting in real time, through the ultraviolet detector, the ultraviolet spectrum in the solar-blind band of flame radiation in the monitored environment; when flame appears in the environment, the controller triggers the camera to snapshot the scene and obtain current environment image data;
S4, identifying the flame condition in the image data through the deep neural network model, thereby confirming the possible fire detected by the ultraviolet detector, outputting the flame area and the fire source position, and calculating the flame spread rate by comparing the flame areas of consecutive frames.
In a preferred embodiment of the present invention, step S2 comprises:
S2-1, preprocessing the acquired flame image data, classifying images as containing flame or not, marking the fire source positions in the flame images to obtain flame images with fire source labels, and dividing the image data set into a training set and a test set;
S2-2, presetting the Yolo_v4 hyper-parameters, such as learning rate and iteration count, and inputting the training set into the neural network for classification training;
S2-3, after iterative training, calculating Loss, Precision, Recall, F1, AP and mAP according to the formulas, setting thresholds for Loss and mAP, and retaining the weight file of the neural network when the final training result meets the thresholds;
S2-4, if the training result does not meet the threshold condition, retraining by enlarging the data set, resetting the hyper-parameters and so on, until the training result meets the thresholds;
S2-5, inputting the test set into the neural network model trained in S2-3 and calculating its detection precision on the test set; if the precision meets the set value, training has succeeded; if not, changing parameters and training iteratively again.
In a preferred embodiment of the present invention, step S3 comprises:
S3-1, detecting, through the ultraviolet detector, the ultraviolet spectrum in the solar-blind band of flame radiation in the environment, measuring the ultraviolet spectrum once every 30 ms, and setting a counter to record the number of detected environmental anomalies;
S3-2, when the ultraviolet signal in the environment is detected not to reach the set threshold, the environment is normal, the ultraviolet detector returns first data, and the counter is cleared;
S3-3, when the ultraviolet signal in the environment is detected to reach the set threshold, starting counting; when the count reaches the set value, the ultraviolet detector returns second data and the camera is started for a snapshot (a code sketch of this polling loop follows below).
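The counter-gated trigger of S3-1 to S3-3 can be sketched in a few lines of Python. This is an illustration only: read_uv_level() and trigger_camera() are hypothetical stand-ins for the detector and camera interfaces, and the threshold value is assumed, not specified by the patent.

```python
import time

UV_THRESHOLD = 100    # assumed detector reading that counts as "abnormal"
TRIGGER_COUNT = 10    # consecutive abnormal readings before snapshot (S3-3)
POLL_INTERVAL = 0.03  # one measurement every 30 ms (S3-1)

def monitor_uv(read_uv_level, trigger_camera):
    """Poll the ultraviolet detector; clear the counter when the environment
    is normal (S3-2), trigger the camera when the count reaches the set value."""
    counter = 0
    while True:
        if read_uv_level() < UV_THRESHOLD:
            counter = 0               # first data: environment normal, clear counter
        else:
            counter += 1              # second data path: anomaly counted
            if counter >= TRIGGER_COUNT:
                trigger_camera()      # start the camera for a snapshot
                counter = 0
        time.sleep(POLL_INTERVAL)
```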
In a preferred embodiment of the present invention, step S4 comprises:
S4-1, when the ultraviolet detector triggers the camera to capture flame image data of the current environment, sending the image into the feature extraction network and outputting feature maps at three scales;
S4-2, inputting the feature maps into the target detection layer and detecting the fire source position on the feature maps at the three scales;
S4-3, if it is detected that fire indeed exists in the image, the network outputs a flame prediction bounding box, including the position of its upper-left corner point in the image and the length and width of the box; when flame information is detected in ten consecutive images, it is determined that a fire has occurred, and alarm information is output to the alarm device.
S4-4, calculating the flame area in the image from the flame prediction bounding box information output by the neural network, comparing it with the flame area in the previous frame input to the neural network, and calculating the flame spread rate.
$$\mathrm{Area}_k = \sum_{i=1}^{N} L_i \times W_i$$

$$\mathrm{Rate} = \frac{\mathrm{Area}_k - \mathrm{Area}_{k-1}}{\mathrm{time}}$$

where Area_k is the total flame area in the current frame image; N is the number of fire sources in the image; L_i and W_i are the length and width of the flame prediction bounding box of the i-th fire source; Area_{k-1} is the total flame area in the previous frame image; time is the interval between the two frame inputs; the subscript k denotes the current frame input to the neural network and k-1 the previous frame.

If the spread rate Rate is greater than or equal to a preset first spread-rate threshold, the alarm device issues third-level alarm information; if Rate is greater than or equal to a preset second spread-rate threshold, which is greater than the first, the alarm device issues second-level alarm information; if Rate is greater than or equal to a preset third spread-rate threshold, which is greater than the second, the alarm device issues first-level alarm information. The first-level alarm is an audible-and-visual alarm; the second-level alarm is a light alarm; the third-level alarm is an audible alarm.
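For illustration, the area, spread-rate and alarm-tier logic above can be sketched in Python as follows; the box sizes, frame interval and threshold values in the example are assumed for demonstration and do not come from the patent.

```python
def flame_area(boxes):
    """Area_k: sum of L_i * W_i over the N predicted flame boxes."""
    return sum(length * width for (length, width) in boxes)

def spread_rate(area_k, area_k_1, interval):
    """Rate = (Area_k - Area_{k-1}) / time."""
    return (area_k - area_k_1) / interval

def alarm_level(rate, first, second, third):
    """Map Rate to the three alarm tiers (first < second < third)."""
    if rate >= third:
        return 1  # first-level: audible-and-visual alarm
    if rate >= second:
        return 2  # second-level: light alarm
    if rate >= first:
        return 3  # third-level: audible alarm
    return 0      # no alarm

# Illustrative run: two fire sources, frames 0.5 s apart.
prev = flame_area([(40, 30), (20, 15)])    # Area_{k-1} = 1500
curr = flame_area([(50, 36), (25, 18)])    # Area_k = 2250
rate = spread_rate(curr, prev, 0.5)        # 1500 area units per second
print(alarm_level(rate, 500, 1000, 2000))  # -> 2, second-level alarm
```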
In conclusion, owing to the above technical scheme, the ultraviolet detector offers fast response and insensitivity to people and high-temperature objects, while the Yolo_v4 neural network algorithm offers good detection precision and speed. The invention is a fire detection system designed by combining an ultraviolet detector with the Yolo_v4 neural network algorithm: the ultraviolet detector detects the 180-240 nm solar-blind band of the ultraviolet spectrum in the environment, the Yolo_v4 neural network model then confirms the detected possible fire and obtains its area, and the fire spreading trend is calculated by comparing the flame areas of consecutive frames. The method avoids the false detections and missed detections that traditional temperature, smoke and light sensors suffer in some complex scenes. And whereas current traditional fire detectors cannot measure the fire spread rate, the invention adds measurement of the fire spreading trend on top of fire detection; in scenes where flames are expected, it can also monitor whether the flame is out of control, giving firefighters more detailed fire information.
The invention detects fire by combining an ultraviolet fire detector with the Yolo_v4 neural network algorithm; compared with traditional smoke-, temperature- and light-sensing fire detectors, it is more robust, reduces the probability of false detection and improves detection speed. Compared with traditional fire detectors, it can also measure the fire spread rate in real time during detection, providing more fire information to the outside.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of the system of the present invention.
FIG. 2 is a flow chart of the method of the present invention.
FIG. 3 is a flow chart of neural network detection in accordance with the present invention.
Fig. 4 is an image of an environment in which the present invention is used for testing.
FIG. 5 is a diagram of the neural network of the present invention identifying flame effects.
FIG. 6 is a graph of the location, area and propagation velocity of a flame as identified by the neural network of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 1, the ultraviolet detector monitors the ultraviolet spectrum signal of flames in the environment in real time; the ultraviolet detector triggers the camera to capture an image of the environment, the captured image information is input into the Yolo_v4 neural network for detection, and a warning is given when a fire is detected.
As shown in fig. 2, the system monitors the ultraviolet spectrum in the environment in real time through the ultraviolet detector. When the flame ultraviolet spectrum is detected, the ultraviolet detector triggers the camera to snapshot the environment, the captured image information is passed into the Yolo_v4 neural network, and the image is checked for flame. If flame is detected, the neural network outputs a flame prediction box, from which the position and area of the flame in the image are obtained; whether a fire has occurred is then judged by whether flame information is detected in ten consecutive frames. If a fire does exist, the flame spread rate is calculated from the flame areas detected in the two most recent frames and the time interval between their inputs to the neural network, and finally the position, area and spread rate of the flame are sent to the alarm device.
Yolo_v4 neural network model training
1. Production of data sets
a) Arrange a camera in the environment where fire conditions need to be monitored, and manually simulate various flame conditions in that environment.
b) Use the camera to take a large number of snapshots of the environment, collecting image data with no flame, with flame, and with multiple fire sources, keeping the numbers of the different image types roughly equal.
c) Divide the images into two classes, without flame and with flame; mark the positions and areas of the flames in the flame images with a labeling tool, and generate the text files required for training, including image names, detection categories and the like.
d) Divide the labeled image data into a training set and a test set for subsequent neural network model training and performance testing; a minimal split script is sketched below.
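The following sketch is for illustration only: the directory layout, the .jpg extension and the 80/20 split ratio are assumptions, not values from the patent.

```python
import random
from pathlib import Path

def split_dataset(image_dir, train_ratio=0.8, seed=42):
    """Split the labeled flame images into train/test file lists (step d).
    Assumes Yolo-style .txt label files sit next to each image with the
    same stem, so listing the image paths is enough for training tools."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    cut = int(len(images) * train_ratio)
    Path("train.txt").write_text("\n".join(str(p) for p in images[:cut]))
    Path("test.txt").write_text("\n".join(str(p) for p in images[cut:]))
    return cut, len(images) - cut  # number of training and test images
```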
2. Training and testing network models
a) Preset the hyper-parameters of the Yolo_v4 neural network training, such as learning rate and iteration count, and input the training set into the neural network for classification training;
b) After iterative training, calculate the Loss value and the Precision, Recall, F1, AP and mAP values according to the formulas below, set thresholds for Loss and mAP, and retain the weight file of the neural network when the final training result meets the thresholds;
$$\mathrm{Precision} = \frac{T_1}{T_1 + F_1}$$

$$\mathrm{Recall} = \frac{T_1}{T_1 + F_2}$$

$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$

$$\mathrm{AP} = \int_0^1 P(R)\,\mathrm{d}R$$

where T_1 is the number of images in which flame is correctly identified, F_1 is the number of images in which flame is wrongly identified, and F_2 is the number of flame images not identified; mAP is obtained by calculating the AP for each category and then averaging the AP values.
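In code, these image-level metrics reduce to a few lines; the counts in the example are made up for illustration.

```python
def precision_recall_f1(t1, f1, f2):
    """Metrics from the counts above: t1 images with flame correctly
    identified, f1 with flame wrongly identified, f2 flame images missed."""
    precision = t1 / (t1 + f1)
    recall = t1 / (t1 + f2)
    f1_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f1_score

def mean_ap(per_category_ap):
    """mAP: the AP of each category, averaged."""
    return sum(per_category_ap) / len(per_category_ap)

# Assumed counts: 90 correct, 5 false detections, 10 missed flames.
p, r, f = precision_recall_f1(90, 5, 10)  # ~0.947, 0.900, ~0.923
```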
c) If the training result does not meet the threshold condition, retrain by enlarging the data set, resetting the hyper-parameters and so on, until the training result meets the thresholds;
d) Input the test set into the trained neural network model and calculate its detection precision on the test set; if the precision meets the set value, training has succeeded; if not, change parameters and train iteratively again.
the main process of the system is as follows:
1. environmental monitoring
A flame emits visible radiation that the human eye can see and ultraviolet radiation, distributed over 105-380 nm, that it cannot. When a substance burns, the flame emits strong ultraviolet radiation, so an ultraviolet phototube is used to detect the ultraviolet spectrum signal in the environment. When a sufficient ultraviolet signal appears in the environment, the phototube generates a current, which a capacitor charge-discharge and shaping circuit turns into a pulse signal sent to the controller. The controller then counts the pulse signal, reading it every 30 ms; if the count exceeds ten consecutive readings, flame really exists in the environment, and the camera is triggered to capture image information of the environment and pass it into the Yolo_v4 neural network.
2. Identifying flames in an environment
As shown in fig. 3, after a frame of the environment image is input into the neural network, the image first enters the feature extraction layer of the Yolo_v4 neural network, which outputs feature maps at three scales for target detection at different scales; the three feature maps are then input into the target detection layer to detect whether flame exists in the image.
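For orientation, the three scales correspond to the standard Yolo detection strides of 8, 16 and 32; the sketch below assumes a 416x416 input, a common choice that the patent does not specify.

```python
def yolo_grid_sizes(input_size=416, strides=(8, 16, 32)):
    """Grid resolutions of the three detection scales: for a 416x416
    input, 52x52, 26x26 and 13x13 cells, used for small, medium and
    large targets respectively."""
    return [input_size // s for s in strides]

print(yolo_grid_sizes())  # [52, 26, 13]
```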
3. Outputting the position of the fire source, calculating the area and the spreading speed of the flame
As shown in figs. 4 to 6, when it is detected that flame really exists in the environment, the Yolo_v4 neural network outputs a flame prediction box, including the position of the upper-left corner of the box, its length and width, and the detection confidence of the flame target.
The area of the flame prediction box is taken as the predicted flame area, which is then compared with the flame area in the previous frame input to the neural network to calculate the flame spread rate.
$$\mathrm{Area}_k = \sum_{i=1}^{N} L_i \times W_i$$

$$\mathrm{Rate} = \frac{\mathrm{Area}_k - \mathrm{Area}_{k-1}}{\mathrm{time}}$$

where Area is the total flame area in the image, N is the number of fire sources in the image, L and W are the length and width of the flame prediction bounding box of each fire source, Rate is the flame spread rate, time is the interval between the two frame inputs, the subscript k denotes the current frame input to the neural network, and k-1 denotes the previous frame.
If the flame spread rate is greater than or equal to the preset spread-rate threshold, the controller sends an alarm signal to the alarm device to alert the staff.
4. Confirming that a fire has occurred
If flame is detected in ten consecutive frames input into the neural network, it is judged that a fire has occurred, and the device sends the flame information in the current environment, including flame position, area, spread rate and the like, to the alarm device.
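The ten-consecutive-frame rule is a simple sliding window; in the Python sketch below, the class name and the parameterized window size are illustrative and do not come from the patent.

```python
from collections import deque

class FireConfirmer:
    """Declare a fire only when flame has been detected in ten
    consecutive frames fed to the neural network."""
    def __init__(self, window=10):
        self.window = window
        self.recent = deque(maxlen=window)

    def update(self, flame_detected):
        """Record one frame's result; return True once the last
        `window` results are all detections."""
        self.recent.append(bool(flame_detected))
        return len(self.recent) == self.window and all(self.recent)

# Example: nine detections are not enough; the tenth confirms the fire.
confirmer = FireConfirmer()
results = [confirmer.update(True) for _ in range(10)]
print(results[-2], results[-1])  # False True
```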
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (2)

1. A comprehensive flame detection method based on ultraviolet and deep neural networks is characterized by comprising a flame detection system, wherein the flame detection system comprises an ultraviolet detector, a camera, a controller and an alarm device;
the data output end of the ultraviolet detector is connected with the first data input end of the controller, the trigger end of the controller is connected with the trigger end of the camera, the data output end of the camera is connected with the second data input end of the controller, and the alarm output end of the controller is connected with the alarm input end of the alarm device;
the controller judges whether the data collected by the ultraviolet detector meets the triggering condition; if so, the controller triggers the camera to collect data, judges the environmental condition from the data collected by the camera, and gives an alarm through the alarm device;
the flame detection method comprises the following steps:
S1, acquiring image data of multiple imaging modes in the environment where flames need to be detected, including sample images with flames and sample images without fire;
S2, processing the flame image data set and training a model based on Yolo_v4 to construct a deep neural network model for identifying flame; the training of the model comprises the following steps:
S2-1, preprocessing the acquired flame image data, classifying images as containing flame or not, marking the fire source positions in the flame images to obtain flame images with fire source labels, and dividing the image data set into a training set and a test set;
S2-2, presetting the hyper-parameters of Yolo_v4, and inputting the training set into the neural network for classification training;
S2-3, after iterative training, retaining the weight file of the neural network when the final training result meets the threshold;
S2-4, if the training result does not meet the threshold condition, retraining by enlarging the data set and/or resetting the hyper-parameters until the training result meets the threshold;
S2-5, inputting the test set into the neural network model trained in S2-3 and calculating its detection precision on the test set; if the detection precision meets the set value, the training is successful; if not, changing parameters and carrying out iterative training again;
Further, the training process of the model comprises the following steps:
a) Presetting the hyper-parameters of the Yolo_v4 neural network training, such as learning rate and iteration count, and inputting the training set into the neural network for classification training;
b) After iterative training, calculating the Loss value and the Precision, Recall, F1, AP and mAP values according to the formulas below, setting thresholds for Loss and mAP, and retaining the weight file of the neural network when the final training result meets the thresholds;
$$\mathrm{Precision} = \frac{T_1}{T_1 + F_1}$$

$$\mathrm{Recall} = \frac{T_1}{T_1 + F_2}$$

$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$

$$\mathrm{AP} = \int_0^1 P(R)\,\mathrm{d}R$$

wherein T_1 is the number of images in which flame is correctly identified, F_1 is the number of images in which flame is wrongly identified, and F_2 is the number of flame images not identified; mAP is obtained by calculating the AP for each category and then averaging the AP values;
c) If the training result does not meet the threshold condition, retraining by enlarging the data set, resetting the hyper-parameters and so on, until the training result meets the threshold;
S3, detecting in real time, through the ultraviolet detector, the ultraviolet spectrum in the solar-blind band of flame radiation in the monitored environment; when flame appears in the environment, the controller triggers the camera to snapshot the scene and obtain current environment image data; this specifically comprises the following steps:
S3-1, detecting, through the ultraviolet detector, the ultraviolet spectrum in the solar-blind band of flame radiation in the environment, measuring the ultraviolet spectrum once every 30 ms, and setting a counter to record the number of detected environmental anomalies;
S3-2, when the ultraviolet signal in the environment is detected not to reach the set threshold, the environment is normal, the ultraviolet detector returns first data, and the counter is cleared;
S3-3, when the ultraviolet signal in the environment is detected to reach the set threshold, starting counting; when the count reaches the set value, the ultraviolet detector returns second data and the camera is started for a snapshot;
S4, identifying the flame condition in the image data through the deep neural network model, thereby confirming the possible fire detected by the ultraviolet detector, outputting the flame area and the fire source position, calculating the flame spread rate by comparing the flame areas of consecutive frames, and giving an alarm; the alarm issued includes:
if the spread rate Rate is greater than or equal to a preset first spread-rate threshold, the alarm device issues third-level alarm information;
if the spread rate Rate is greater than or equal to a preset second spread-rate threshold, the preset second spread-rate threshold being greater than the preset first spread-rate threshold, the alarm device issues second-level alarm information;
if the spread rate Rate is greater than or equal to a preset third spread-rate threshold, the preset third spread-rate threshold being greater than the preset second spread-rate threshold, the alarm device issues first-level alarm information;
the first-level alarm information is an audible-and-visual alarm; the second-level alarm information is a light alarm; the third-level alarm information is an audible alarm.
2. The comprehensive flame detection method based on ultraviolet and deep neural networks as claimed in claim 1, wherein step S4 includes:
S4-1, when the ultraviolet detector triggers the camera to capture flame image data of the current environment, sending the image into the feature extraction network and outputting feature maps at three scales;
S4-2, inputting the feature maps into the target detection layer and detecting the fire source position on the feature maps at the three scales;
S4-3, if it is detected that fire indeed exists in the image, the network outputs a flame prediction bounding box, including the position of its upper-left corner point in the image and/or the length and width of the box; when flame information is detected in ten consecutive images, it is determined that a fire has occurred, and alarm information is output to the alarm device;
S4-4, calculating the flame area in the image from the flame prediction bounding box information output by the neural network, comparing it with the flame area in the previous frame input to the neural network, and calculating the flame spread rate;
$$\mathrm{Area}_k = \sum_{i=1}^{N} L_i \times W_i$$

$$\mathrm{Rate} = \frac{\mathrm{Area}_k - \mathrm{Area}_{k-1}}{\mathrm{time}}$$

wherein Area is the total flame area in the image, N is the number of fire sources in the image, L and W are the length and width of the flame prediction bounding box of each fire source, Rate is the flame spread rate, time is the interval between the two frame inputs, the subscript k denotes the current frame input to the neural network, and k-1 denotes the previous frame.
CN202010908392.7A 2020-09-02 2020-09-02 Comprehensive flame detection method based on ultraviolet and deep neural networks Active CN111986436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010908392.7A CN111986436B (en) 2020-09-02 2020-09-02 Comprehensive flame detection method based on ultraviolet and deep neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010908392.7A CN111986436B (en) 2020-09-02 2020-09-02 Comprehensive flame detection method based on ultraviolet and deep neural networks

Publications (2)

Publication Number Publication Date
CN111986436A CN111986436A (en) 2020-11-24
CN111986436B (en) 2022-12-13

Family

ID=73448170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010908392.7A Active CN111986436B (en) 2020-09-02 2020-09-02 Comprehensive flame detection method based on ultraviolet and deep neural networks

Country Status (1)

Country Link
CN (1) CN111986436B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465119A (en) * 2020-12-08 2021-03-09 武汉理工光科股份有限公司 Fire-fighting dangerous case early warning method and device based on deep learning
CN112907885B (en) * 2021-01-12 2022-08-16 中国计量大学 Distributed centralized household image fire alarm system and method based on SCNN
CN114998843B (en) * 2022-08-04 2023-01-06 深圳市海清视讯科技有限公司 Fire detection method and related device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236684A (en) * 2008-03-06 2008-08-06 中国人民解放军第二炮兵装备研究院第三研究所 Fire disaster detector and detection positioning method
CN104021644A (en) * 2014-06-13 2014-09-03 淄博安益矿用设备有限公司 False triggering prevention device and method for ultraviolet flame sensor
CN104199117A (en) * 2014-08-20 2014-12-10 湖北捷讯光电有限公司 Infrared ultraviolet composite flame detector and detection method thereof
CN107301653A (en) * 2017-05-18 2017-10-27 常州市武进区半导体照明应用技术研究院 Video image fire disaster flame detection method based on BP neural network
CN107941331A (en) * 2017-12-27 2018-04-20 四川大学 The energy-probe design method of ArF excimer laser Light Energies uniformity detection
CN110852347A (en) * 2019-10-17 2020-02-28 天津大学 Fire detection method using improved YOLO v3
CN111105587A (en) * 2019-12-31 2020-05-05 广州思瑞智能科技有限公司 Intelligent flame detection method and device, detector and storage medium
CN111179279A (en) * 2019-12-20 2020-05-19 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet and binocular vision

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7991187B2 (en) * 2007-08-29 2011-08-02 Billy Hou Intelligent image smoke/flame sensor and detection system
CN101315667B (en) * 2008-07-04 2011-01-19 南京航空航天大学 Multi-characteristic synthetic recognition method for outdoor early fire disaster
CN101840571B (en) * 2010-03-30 2012-03-28 杭州电子科技大学 Flame detection method based on video image
CN102819926B (en) * 2012-08-24 2015-04-29 华南农业大学 Fire monitoring and warning method on basis of unmanned aerial vehicle
CN103337123B (en) * 2013-06-17 2015-09-30 西安石油大学 Based on the field joint stations fire alarm method of video image
US9459142B1 (en) * 2015-09-10 2016-10-04 General Monitors, Inc. Flame detectors and testing methods
CN106251568A (en) * 2016-10-26 2016-12-21 山东大学 A kind of fire alarm safety-protection system based on ultraviolet and image detecting technique and method
JP6980188B2 (en) * 2017-08-04 2021-12-15 モリタ宮田工業株式会社 Fire identification device
CN110263622A (en) * 2019-05-07 2019-09-20 平安科技(深圳)有限公司 Train fire monitoring method, apparatus, terminal and storage medium
CN110516609B (en) * 2019-08-28 2022-04-22 南京邮电大学 Fire disaster video detection and early warning method based on image multi-feature fusion
CN110660052B (en) * 2019-09-23 2023-04-07 武汉科技大学 Hot-rolled strip steel surface defect detection method based on deep learning
CN111091072A (en) * 2019-11-29 2020-05-01 河海大学 YOLOv 3-based flame and dense smoke detection method
CN111062350B (en) * 2019-12-23 2023-08-18 合肥众兴智讯科技有限公司 Artificial intelligence based firework recognition algorithm
CN111401148B (en) * 2020-02-27 2023-06-20 江苏大学 Road multi-target detection method based on improved multi-stage YOLOv3
CN111275918B (en) * 2020-03-05 2020-12-11 深圳市君利信达科技有限公司 Flame detection analysis early warning system
CN111415345B (en) * 2020-03-20 2023-09-12 山东文多网络科技有限公司 Intelligent substation ultraviolet image inspection method and device based on deep learning

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236684A (en) * 2008-03-06 2008-08-06 中国人民解放军第二炮兵装备研究院第三研究所 Fire disaster detector and detection positioning method
CN104021644A (en) * 2014-06-13 2014-09-03 淄博安益矿用设备有限公司 False triggering prevention device and method for ultraviolet flame sensor
CN104199117A (en) * 2014-08-20 2014-12-10 湖北捷讯光电有限公司 Infrared ultraviolet composite flame detector and detection method thereof
CN107301653A (en) * 2017-05-18 2017-10-27 常州市武进区半导体照明应用技术研究院 Video image fire disaster flame detection method based on BP neural network
CN107941331A (en) * 2017-12-27 2018-04-20 四川大学 The energy-probe design method of ArF excimer laser Light Energies uniformity detection
CN110852347A (en) * 2019-10-17 2020-02-28 天津大学 Fire detection method using improved YOLO v3
CN111179279A (en) * 2019-12-20 2020-05-19 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet and binocular vision
CN111105587A (en) * 2019-12-31 2020-05-05 广州思瑞智能科技有限公司 Intelligent flame detection method and device, detector and storage medium

Also Published As

Publication number Publication date
CN111986436A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN111986436B (en) Comprehensive flame detection method based on ultraviolet and deep neural networks
CN106448023B (en) Fire smoke alarm with storage function
US8294567B1 (en) Method and system for fire detection
CN106251568A (en) A kind of fire alarm safety-protection system based on ultraviolet and image detecting technique and method
CN112002095A (en) Fire early warning method in mine tunnel
CN113299035A (en) Fire identification method and system based on artificial intelligence and binocular vision
CN113516820B (en) Fire early warning method and fire early warning system
CN115077627B (en) Multi-fusion environmental data supervision method and supervision system
CN114120171A (en) Fire smoke detection method, device and equipment based on video frame and storage medium
CN209433517U (en) It is a kind of based on more flame images and the fire identification warning device for combining criterion
JP4111660B2 (en) Fire detection equipment
CN108538011B (en) Laser radar fire detection method
CN110379118A (en) Fire prevention intelligent monitor system and method under train vehicle
Ho et al. Nighttime fire smoke detection system based on machine vision
CN111882800B (en) Fire-fighting early warning method and system based on multi-dimensional data linkage
JPH07200961A (en) Fire alarm system for early detection of fire
CN113516091B (en) Method for identifying electric spark image of transformer substation
CN113362560B (en) Photoelectric smoke sensing detection method for accurately identifying fire smoke
KR100627483B1 (en) An Exhaust Smoke Recognition and Alarm Device and Method using Picture Image Analysis
JP5309069B2 (en) Smoke detector
EP3029600A1 (en) Thermal image object detection
CN117636565B (en) Multispectral flame detection system based on spectral feature data fusion
CN116298459B (en) Intelligent lightning current on-line monitoring system
KR20230075647A (en) Ensemble supervised learning-based fire situation detection method
CN115620102A (en) Flame detection system based on multi-mode information fusion technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 109, 1st Floor, Building 2, No. 11, Tianying Road, High tech Zone, Chengdu, Sichuan 611700

Applicant after: Chengdu Shidao Information Technology Co.,Ltd.

Address before: 611731 floor 2, No. 4, Xinhang Road, West Park, high tech Zone (West Zone), Chengdu, Sichuan

Applicant before: CHENGDU ZHIMA TECHNOLOGY CO.,LTD.

GR01 Patent grant