CN115620102A - Flame detection system based on multi-mode information fusion technology - Google Patents

Flame detection system based on multi-mode information fusion technology

Info

Publication number
CN115620102A
CN115620102A (application CN202211179951.0A)
Authority
CN
China
Prior art keywords
expert
flame
detection system
class
system based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211179951.0A
Other languages
Chinese (zh)
Inventor
黄河
王剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Jingxin Photoelectric Technology Co ltd
Original Assignee
Jiangsu Jingxin Photoelectric Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Jingxin Photoelectric Technology Co ltd filed Critical Jiangsu Jingxin Photoelectric Technology Co ltd
Priority to CN202211179951.0A priority Critical patent/CN115620102A/en
Publication of CN115620102A publication Critical patent/CN115620102A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G06V10/811Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/817Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level by voting
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00Fire alarms; Alarms responsive to explosion
    • G08B17/12Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The invention discloses a flame detection system based on multi-mode information fusion technology. It relates to flame detection and addresses problems of the prior art such as poor stability and susceptibility of the fusion to interference. The system comprises an infrared thermal imaging module, a binocular camera, and a ToF sensor. The thermal infrared image generated by the infrared thermal imaging module is sent to a temperature signal processing module. The binocular camera collects visible-light images and outputs a first visual depth map; the visible-light image is fed into a GMM background modeling module, which extracts moving flame candidate regions, and each candidate region is passed to a color expert. The ToF sensor generates a second depth map; the first visual depth map and the second depth map are input into a distance-difference processing module, which computes their depth difference and passes the result to a gas expert. Finally, a cooperative multi-expert system performs a comprehensive judgment and decides the class to which each flame candidate region belongs. The system achieves objective, true, and reliable detection and evaluation of flame combustion quality.

Description

Flame detection system based on multi-mode information fusion technology
Technical Field
The invention relates to a flame detection system, in particular to a flame detection system based on a multi-mode information fusion technology.
Background
Fire is one of the most frequent and widespread major disasters threatening public safety and social development. Fires occur more often than most other kinds of disaster each year, and the losses they cause are second only to those of floods and droughts, amounting to roughly five times the direct losses caused by earthquakes. Fires therefore pose a serious threat to people's lives and property and to socio-economic development. The ability to detect a fire accurately and give early warning is thus an important means of ensuring fire safety.
The main technologies in current fire detection systems include smoke detection, gas detection, infrared detection, and video flame detection based on image processing. In addition, to improve the reliability of flame detection, multiple detectors are often used simultaneously to monitor the same area: the flame feature information detected by each is analyzed, the extracted information is fused directly or indirectly, and a judgment is then made as to whether a fire is present.
However, a smoke detector infers fire from the smoke particle concentration, expressed as the magnitude of the change in an ion current, and this concentration is affected by the height of the space. When fire smoke rises tens of meters or more, its concentration gradually decreases as it is diluted by indoor air, so by the time the smoke reaches the roof, a ceiling-mounted smoke detector may fail to raise an alarm because the measured concentration does not reach the fire-smoke threshold. Smoke sensors are also affected by high dust concentrations; if a smoke detector operates for a long time in a humid, dusty environment, its detection accuracy degrades and, in severe cases, the detector fails altogether.
the gas sensor can only detect the fire caused by specific gas, has low sensitivity and is particularly suitable for fire detection in large scenes such as outdoor scenes or warehouses; the temperature-sensing fire detector cannot detect the temperature below the advanced threshold temperature, so that misjudgment is easily made, the sensed temperature is influenced by factors such as large space, air flow and the like, and the temperature is reduced when the space is high or the air flow is accelerated, so that the fire temperature cannot be regulated, and the false alarm is generated; the infrared imaging device detects infrared heat radiation and is sensitive to a high-power heat source or strong light, so that error identification and false alarm are easily generated; the single video flame detection still has the defects of universal applicability, susceptibility to background noise, interference of illumination change and the like. These factors can result in low flame detection rate, severe false detection and few detection situations, and it is difficult to simultaneously meet the requirements of flame detection on accuracy, robustness, real-time performance, and the like.
Combining video flame detection equipment with infrared thermal imaging equipment, i.e., detecting both the flame image and its thermal information, has a certain merit: in principle it combines two modalities, image information and thermal information, so that the modalities supplement each other, the coverage of the input data is broadened, the precision of the prediction improves, and the robustness of the prediction system increases. In practice, however, both methods are susceptible to interference from high-temperature, high-radiation sources and intense light, so their accuracy and robustness still fall short of practical application requirements.
Therefore, given the uncertainty and variability of fires, current technology cannot meet the application requirements of fire detection under the combined influence of factors such as the area, height, temperature, humidity, dust, and airflow of the monitored site.
Disclosure of Invention
The invention aims to provide a flame detection system based on multi-mode information fusion technology that uses three different sensors, a visual sensor, a temperature sensor, and a depth sensor, to detect and comprehensively judge three of the most universal characteristics of flame: its color, its temperature, and its gaseous state. The system solves the problems of the prior art, such as poor stability and susceptibility of the fusion to interference, and can objectively, truly, and reliably detect and evaluate flame combustion quality.
The technical purpose of the invention is realized by the following technical scheme:
a flame detection system based on a multi-mode information fusion technology comprises an infrared thermal imaging module, a binocular camera and a TOF sensor,
the thermal infrared image generated by the infrared thermal imaging module is sent to the temperature signal processing module, and the temperature signal processing module outputs the processed thermal infrared image to a temperature expert;
the binocular camera is used for collecting visible light images and outputting a first visual depth map; inputting the visible light image into a GMM background modeling module, then extracting a moving flame alternative region, and inputting the visible light image into a color expert by the flame alternative region;
the ToF sensor generates a second depth map, the first visual depth map and the second depth map are input into the distance difference processing module for depth difference, and the distance difference processing module outputs the distance difference map to a gas expert;
and comprehensively judging the temperature expert, the color expert and the gas expert through a cooperative multi-expert system, and deciding and judging the class c of the flame candidate region.
Further, the decision rule of the color expert CE is as follows:

$$C_{CE}(b)=\begin{cases}F, & R>R_T \ \text{and}\ S>S_T\\ \bar{F}, & \text{otherwise}\end{cases}$$

where $R$ is the red component, $R_T$ is a preset red threshold, $S$ is the saturation, $S_T$ is a preset saturation threshold, $F$ denotes fire, and $\bar{F}$ denotes non-fire.
Further, the decision rule of the gas expert GE is as follows:

$$C_{GE}(b)=\begin{cases}F, & \mathrm{Deep_{diff}}>D_T\\ \bar{F}, & \text{otherwise}\end{cases}$$

where, within the blob region, the depth map obtained by the binocular camera and the depth map obtained by the ToF sensor are differenced, and the average of all the differences is calculated to obtain $\mathrm{Deep_{diff}}$; $D_T$ is a preset distance-difference threshold, $F$ denotes fire, and $\bar{F}$ denotes non-fire.
Further, the decision rule of the temperature expert TE is as follows:

$$C_{TE}(b)=\begin{cases}F, & T>T_T\\ \bar{F}, & \text{otherwise}\end{cases}$$

where $T$ is the average temperature of the candidate region, $T_T$ is a temperature threshold, $F$ denotes fire, and $\bar{F}$ denotes non-fire.
Further, the kth expert, $k \in \{TE, CE, GE\}$, receives the blob $b$ as input and outputs a class label $C_k(b)$;
the vote of expert $k$ for class $i$ can be expressed as:

$$V_k(i)=\begin{cases}1, & C_k(b)=i\\ 0, & \text{otherwise}\end{cases}$$
further, the weight ω k (i) And carrying out dynamic evaluation through a Bayesian formula to obtain the highest identification rate of the MES.
Further, a classification matrix $C^k$ is calculated for the kth expert in the training step; $\omega_k(i)$ is determined by evaluating the probability that the kth expert assigns a tested blob belonging to the ith class to the correct class, expressed as:

$$\omega_k(i)=\frac{C^k_{(ii)}}{\sum_{j=1}^{M} C^k_{(ij)}}$$

where $M$ is the number of classes and $C^k_{(ij)}$ is the value of the classification matrix at position $(i, j)$.
Further, the reliability of the blob belonging to the ith class is calculated as a weighted sum of the votes:

$$R(i)=\sum_{k\in\{TE,CE,GE\}}\omega_k(i)\,V_k(i)$$
furthermore, the decided class c is finally obtained by maximizing the reliability of the different classes,
Figure BDA0003866316460000051
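As an illustration, the three expert decision rules above can be sketched in Python. All threshold values here (`r_t`, `s_t`, `d_t`, `t_t`) are placeholder assumptions for demonstration; the patent specifies only that preset thresholds exist, not their values.

```python
# Sketch of the three expert decision rules (TE, CE, GE) from the text above.
# All thresholds are placeholder assumptions; the patent does not fix values.

F, NON_F = "F", "not_F"  # fire / non-fire class labels

def color_expert(red_mean: float, saturation_mean: float,
                 r_t: float = 180.0, s_t: float = 0.4) -> str:
    """CE: vote fire when red component and saturation both exceed thresholds."""
    return F if red_mean > r_t and saturation_mean > s_t else NON_F

def gas_expert(deep_diff: float, d_t: float = 0.5) -> str:
    """GE: vote fire when the mean binocular-vs-ToF depth difference exceeds D_T."""
    return F if deep_diff > d_t else NON_F

def temperature_expert(avg_temp: float, t_t: float = 60.0) -> str:
    """TE: vote fire when the candidate region's average temperature exceeds T_T."""
    return F if avg_temp > t_t else NON_F

print(color_expert(200.0, 0.6), gas_expert(1.2), temperature_expert(25.0))
# -> F F not_F
```

Each function mirrors the corresponding threshold rule; the cooperative multi-expert system then combines the three votes as described below.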
in conclusion, the invention has the following beneficial effects:
based on a multi-mode information fusion technology, three flame inherent characteristics of flame vision, temperature and gas are adopted for information acquisition, analysis and judgment;
the flame visual information is used for extracting the color characteristics of the flame, and the temperature characteristics are used for generating obvious temperature difference between the flame and the surrounding environment when the flame is burnt;
the flame gas detection is a brand new concept, the flame is essentially a state or phenomenon, and burning combustible gas emits light, generates heat, flickers and rises upwards, so that the flame can be observed, but meanwhile, the flame can not be passively measured due to the fact that the flame is the combustible gas, particularly, parameters needing to be measured by a reflection principle are commonly used, namely laser and sonar;
the invention utilizes the characteristic of the gas state of the flame, adopts 2 distance measurement principles to measure the distance of the flame, one is binocular distance measurement, and the other is ToF distance measurement; the binocular distance measurement is to directly measure the distance of a front scene (a range shot by an image) by calculating the parallax of two images, and a light signal emitted by flame is captured; the ToF ranging principle is that the distance between a measured object and a camera is calculated by continuously emitting light pulses (generally invisible light) to the measured object, receiving the light pulses reflected back from the object and detecting the flight (round trip) time of the light pulses;
therefore, the 2 distance measurement methods can generate distance difference when measuring the distance of the same flame target, because the flame can independently emit light, the distance from a binocular camera to the flame can be detected by a binocular distance measurement method; toF ranging requires reflection of light pulses, which are a gas combustion phenomenon in the flame itself, and thus cannot reflect light pulses and pass directly through the flame. This creates a significant difference between the two ranging data and the present invention utilizes this principle to detect the gaseous nature of the flame.
In summary, the invention uses three different sensors, a visual sensor, a temperature sensor, and a depth sensor, to detect three of the most universal flame characteristics, namely color, temperature, and gaseous state, and judges them comprehensively through multi-modal fusion.
Drawings
FIG. 1 is a schematic diagram of the structural principle of the multi-information fusion intelligent flame detection device of the present invention.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings, and the present embodiment is not to be construed as limiting the invention.
A flame detection system based on multi-mode information fusion technology, as shown in FIG. 1, comprises a binocular camera, a ToF sensor, and a low-resolution infrared thermal imaging module.
the infrared thermal imaging module (infrared thermal imaging camera) generates a low-resolution thermal infrared image, sends the low-resolution thermal infrared image to the temperature signal processing module, the temperature signal processing module amplifies the thermal infrared image with the resolution of 240 × 180 to the resolution of 640 × 480 through up-sampling, and outputs the processed thermal infrared image to a temperature expert;
the binocular camera is used for collecting visible light images and outputting a first visual depth map with 640 × 480 resolution; inputting a visible light image into a GMM background modeling module, then extracting a moving flame alternative area, inputting the visible light image into a color expert by the flame alternative area, and simultaneously sending the visible light image to different experts for confirmation (a background modeling algorithm obtains a foreground image area blob, namely a moving area in the image, and the theory is that flames are necessarily moving);
the ToF sensor generates a second depth map, the first visual depth map and the second depth map are input into the distance difference processing module for depth difference, and the distance difference processing module outputs the distance difference map to a gas expert;
according to the scheme, the problems of low accuracy and poor robustness of a method for detecting the fire by single flame feature extraction are solved by collecting and judging multiple flame features. Compared with the flame detection schemes with multi-mode information fusion in the market, the scheme adopts different sensors to identify the most obvious characteristics of the flame, greatly improves the precision and the robustness, and can use various application scenes.
The multi-feature fusion system adopts a cooperative Multi-Expert System (MES). Different modal features of the same object are judged jointly through individual expert mechanisms; three expert mechanisms, TE, CE, and GE, are selected in this project to determine whether flame is present. As shown in FIG. 1, temperature, color, and the gaseous characteristic of flame serve as the three experts in the MES: TE denotes the temperature expert, CE the color expert, and GE the gas expert.
The temperature expert, the color expert, and the gas expert are comprehensively judged by the cooperative multi-expert system, which decides the class c to which the flame candidate region belongs.
The main principle of the MES is that each expert casts its own vote, weighted in proportion to its recognition rate for each class on the training set. For example, if both TE and GE classify a candidate region as flame, and the two experts correctly detected 80% and 70% of the flames in the training set respectively, then their voting weights for the flame class are 0.8 and 0.7.
One of the main factors determining MES performance is the combination rule. Each expert casts a vote: the kth expert, $k \in \{TE, CE, GE\}$, receives the blob $b$ as input and outputs a class label $C_k(b)$,
where the blob is a foreground image region obtained by the background modeling algorithm, and $C_k(b)$ is the class the kth expert assigns to the given blob. In the invention there are two classes, $F$ (flame) and $\bar{F}$ (non-flame); that is, the kth expert judges whether flame is present in the candidate region according to its own decision mechanism.
For example, $C_{CE}(b)$ denotes the judgment of the color expert CE on the candidate region. The expert judgment mechanism is as follows: each expert applies its own decision function $C_k$ to determine whether the blob contains flame and outputs a class $i$ (in the invention $i$ has only the two labels $F$ and $\bar{F}$).
Thus, the vote of expert $k$ for class $i$ can be expressed as:

$$V_k(i)=\begin{cases}1, & C_k(b)=i\\ 0, & \text{otherwise}\end{cases}$$

That is, if the expert's output corresponds to class $i$ (in this embodiment $i=1$ denotes $F$ and $i=2$ denotes $\bar{F}$), the vote is 1; otherwise it is 0.
The weight $\omega_k(i)$ is evaluated dynamically through a Bayesian formula to obtain the highest recognition rate of the MES. A classification matrix $C^k$ is calculated for the kth expert in the training step (row $i$ corresponds to the true class, here $F$ or $\bar{F}$; column $j$ corresponds to the assigned class $C_k(b)$, likewise $F$ or $\bar{F}$); the matrix $C^k$ gives the probability that the kth expert assigns a tested blob truly belonging to the ith class to the jth class.
The voting weight $\omega_k(i)$ is determined by evaluating the probability that the kth expert assigns a tested blob belonging to the ith class to the correct class, expressed as:

$$\omega_k(i)=\frac{C^k_{(ii)}}{\sum_{j=1}^{M} C^k_{(ij)}}$$

where $M$ is the number of classes and $C^k_{(ij)}$ is the value of the classification matrix at position $(i, j)$. The final class is decided by maximizing the reliability of the whole MES; the reliability of the blob belonging to class $i$ is calculated as a weighted sum of the votes.
In particular, the reliability is

$$R(i)=\sum_{k\in\{TE,CE,GE\}}\omega_k(i)\,V_k(i)$$

and finally the decided class $c$ is the class $i$ with the maximum reliability:

$$c=\arg\max_{i} R(i)$$
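The weighted-voting fusion described above can be sketched in Python. The classification (confusion) matrices below are illustrative assumptions, not trained values; class 0 stands for F (fire) and class 1 for non-fire.

```python
import numpy as np

# Sketch of the MES weighted-voting combination rule. The classification
# (confusion) matrices are illustrative assumptions, not trained values:
# row i = true class, column j = assigned class, with 0 = F (fire), 1 = non-fire.
CLASS_F, CLASS_NOT_F = 0, 1

conf = {
    "TE": np.array([[0.85, 0.15], [0.10, 0.90]]),
    "CE": np.array([[0.80, 0.20], [0.25, 0.75]]),
    "GE": np.array([[0.70, 0.30], [0.05, 0.95]]),
}

def weight(k: str, i: int) -> float:
    """omega_k(i) = C_ii / sum_j C_ij for expert k's classification matrix."""
    c = conf[k]
    return float(c[i, i] / c[i].sum())

def mes_decide(votes: dict) -> tuple:
    """votes maps expert name -> voted class; returns (argmax class, reliabilities)."""
    reliability = [sum(weight(k, i) for k, cls in votes.items() if cls == i)
                   for i in (CLASS_F, CLASS_NOT_F)]
    return int(np.argmax(reliability)), reliability

# TE and GE vote fire, CE votes non-fire: the weighted vote still decides fire.
cls, rel = mes_decide({"TE": CLASS_F, "GE": CLASS_F, "CE": CLASS_NOT_F})
print(cls)  # 0 (fire)
```

Because each expert's vote is scaled by its per-class recognition rate, two reliable experts agreeing on fire outweigh one dissenting expert, which is exactly the behaviour of the reliability maximization $c=\arg\max_i R(i)$.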
of the other three expert decisions, the decision rule of the color expert CE is as follows:
Figure BDA0003866316460000085
wherein R is the red component, R T Is a preset red threshold, S is saturation, S is T Is a preset saturation threshold, F denotes fire,
Figure BDA0003866316460000091
represents non-fire.
The decision rule of the gas expert GE is as follows:

$$C_{GE}(b)=\begin{cases}F, & \mathrm{Deep_{diff}}>D_T\\ \bar{F}, & \text{otherwise}\end{cases}$$

where, within the blob region, the depth map obtained by the binocular camera and the depth map obtained by the ToF sensor are differenced, and the average of all the differences is calculated to obtain $\mathrm{Deep_{diff}}$; $D_T$ is a preset distance-difference threshold, $F$ denotes fire, and $\bar{F}$ denotes non-fire.
The decision rule of the temperature expert TE is as follows:

$$C_{TE}(b)=\begin{cases}F, & T>T_T\\ \bar{F}, & \text{otherwise}\end{cases}$$

where $T$ is the average temperature of the blob (candidate region), $T_T$ is a temperature threshold, $F$ denotes fire, and $\bar{F}$ denotes non-fire.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention.

Claims (9)

1. A flame detection system based on a multi-mode information fusion technology is characterized in that:
it comprises an infrared thermal imaging module, a binocular camera, and a ToF sensor;
the thermal infrared image generated by the infrared thermal imaging module is sent to a temperature signal processing module, which outputs the processed thermal infrared image to a temperature expert;
the binocular camera collects visible-light images and outputs a first visual depth map; the visible-light image is input into a GMM background modeling module, which extracts moving flame candidate regions, and each flame candidate region is input to a color expert;
the ToF sensor generates a second depth map; the first visual depth map and the second depth map are input into a distance-difference processing module, which computes their depth difference and outputs the distance-difference map to a gas expert;
and the temperature expert, the color expert, and the gas expert are comprehensively judged by a cooperative multi-expert system, which decides the class c to which the flame candidate region belongs.
2. The flame detection system based on multi-mode information fusion technology as claimed in claim 1, wherein the decision rule of the color expert CE is as follows:

$$C_{CE}(b)=\begin{cases}F, & R>R_T \ \text{and}\ S>S_T\\ \bar{F}, & \text{otherwise}\end{cases}$$

where $R$ is the red component, $R_T$ is a preset red threshold, $S$ is the saturation, $S_T$ is a preset saturation threshold, $F$ denotes fire, and $\bar{F}$ denotes non-fire.
3. A flame detection system based on multi-mode information fusion technology according to claim 1 or 2, characterized in that the decision rule of the gas expert GE is as follows:

$$C_{GE}(b)=\begin{cases}F, & \mathrm{Deep_{diff}}>D_T\\ \bar{F}, & \text{otherwise}\end{cases}$$

where, within the blob region, the depth map obtained by the binocular camera and the depth map obtained by the ToF sensor are differenced, and the average of all the differences is calculated to obtain $\mathrm{Deep_{diff}}$; $D_T$ is a preset distance-difference threshold, $F$ denotes fire, and $\bar{F}$ denotes non-fire.
4. The flame detection system based on multi-mode information fusion technology as claimed in claim 1, characterized in that the decision rule of the temperature expert TE is as follows:

$$C_{TE}(b)=\begin{cases}F, & T>T_T\\ \bar{F}, & \text{otherwise}\end{cases}$$

where $T$ is the average temperature of the candidate region, $T_T$ is a temperature threshold, $F$ denotes fire, and $\bar{F}$ denotes non-fire.
5. The flame detection system based on multi-mode information fusion technology as claimed in claim 1, characterized in that:
the kth expert, $k \in \{TE, CE, GE\}$, receives the blob $b$ as input and outputs a class label $C_k(b)$;
the vote of expert $k$ for class $i$ can be expressed as:

$$V_k(i)=\begin{cases}1, & C_k(b)=i\\ 0, & \text{otherwise}\end{cases}$$
6. A flame detection system based on multi-mode information fusion technology according to claim 5, characterized in that: the weight $\omega_k(i)$ is evaluated dynamically through a Bayesian formula to obtain the highest recognition rate of the MES.
7. A flame detection system based on multi-mode information fusion technology according to claim 6, characterized in that: a classification matrix $C^k$ is calculated for the kth expert in the training step, and $\omega_k(i)$ is determined by evaluating the probability that the kth expert assigns a tested blob belonging to the ith class to the correct class, expressed as:

$$\omega_k(i)=\frac{C^k_{(ii)}}{\sum_{j=1}^{M} C^k_{(ij)}}$$

where $M$ is the number of classes and $C^k_{(ij)}$ is the value of the classification matrix at position $(i, j)$.
8. The flame detection system based on multi-mode information fusion technology as claimed in claim 7, characterized in that: the reliability of the blob belonging to class $i$ is calculated as a weighted sum of the votes,

$$R(i)=\sum_{k\in\{TE,CE,GE\}}\omega_k(i)\,V_k(i)$$
9. The flame detection system based on multi-mode information fusion technology as claimed in claim 8, characterized in that: the decided class $c$ is finally obtained by maximizing the reliability over the different classes,

$$c=\arg\max_{i} R(i)$$
CN202211179951.0A 2022-09-27 2022-09-27 Flame detection system based on multi-mode information fusion technology Pending CN115620102A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211179951.0A CN115620102A (en) 2022-09-27 2022-09-27 Flame detection system based on multi-mode information fusion technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211179951.0A CN115620102A (en) 2022-09-27 2022-09-27 Flame detection system based on multi-mode information fusion technology

Publications (1)

Publication Number Publication Date
CN115620102A true CN115620102A (en) 2023-01-17

Family

ID=84860895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211179951.0A Pending CN115620102A (en) 2022-09-27 2022-09-27 Flame detection system based on multi-mode information fusion technology

Country Status (1)

Country Link
CN (1) CN115620102A (en)

Similar Documents

Publication Publication Date Title
US7991187B2 (en) Intelligent image smoke/flame sensor and detection system
CN1950718B (en) Cargo sensing system
CN111739250B (en) Fire detection method and system combining image processing technology and infrared sensor
CN101751744B (en) Detection and early warning method of smoke
CN101334924B (en) Fire detection system and fire detection method thereof
US20200348446A1 (en) Early-Warning Fire Detection System Based on a Multivariable Approach
US9720086B1 (en) Thermal- and modulated-light-based passive tracking system
US20180143321A1 (en) Modulated-Light-Based Passive Tracking System
CN108389359B (en) Deep learning-based urban fire alarm method
CN111986436B (en) Comprehensive flame detection method based on ultraviolet and deep neural networks
KR102521726B1 (en) Fire detection system that can predict direction of fire spread based on artificial intelligence and method for predicting direction of fire spread
CN107025753B (en) Wide area fire alarm device based on multispectral image analysis
KR102220328B1 (en) System and method for predicting damages of building fire
CN201091014Y (en) Fire detecting device
CN114386493A (en) Fire detection method, system, device and medium based on flame vision virtualization
CN209433517U (en) It is a kind of based on more flame images and the fire identification warning device for combining criterion
KR101679148B1 (en) Detection System of Smoke and Flame using Depth Camera
KR101574176B1 (en) Method for detecting fire regions removed error and method for fire suppression
Li Research on target information fusion identification algorithm in multi-sky-screen measurement system
CN116307740B (en) Fire point analysis method, system, equipment and medium based on digital twin city
CN115620102A (en) Flame detection system based on multi-mode information fusion technology
CN113362560B (en) Photoelectric smoke sensing detection method for accurately identifying fire smoke
CN106226239A (en) Big angle of visual field intelligent image type fire detector and Intelligent Fire Detection method thereof
CN115862296A (en) Fire risk early warning method, system, equipment and medium for railway construction site
CN115311601A (en) Fire detection analysis method based on video analysis technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination