CN110276228B - Multi-feature fusion video fire disaster identification method - Google Patents


Info

Publication number
CN110276228B
CN110276228B (application CN201810208155.2A)
Authority
CN
China
Prior art keywords
image
flame
fire
value
suspected
Prior art date
Legal status
Active
Application number
CN201810208155.2A
Other languages
Chinese (zh)
Other versions
CN110276228A (en)
Inventor
吴海彬 (Wu Haibin)
金肖 (Jin Xiao)
叶锦华 (Ye Jinhua)
Current Assignee
Fuzhou University
Original Assignee
Fuzhou University
Priority date
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN201810208155.2A
Publication of CN110276228A
Application granted
Publication of CN110276228B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a multi-feature fusion video fire identification method. First, an improved adaptive background-updating model is used to extract suspected-flame targets from infrared video images and eliminate background interference; next, static and dynamic characteristics of the flame are analyzed to ensure accurate feature extraction; finally, fire identification is performed by a flame multi-feature fusion method based on the analytic hierarchy process.

Description

Multi-feature fusion video fire disaster identification method
Technical Field
The invention relates to the technical field of fire-fighting equipment, in particular to a multi-feature fusion video fire disaster identification method.
Background
Fire brings light and warmth to humans and has driven social progress. With the development of society, however, the disasters caused by the use of fire have also escalated: fire is among the most serious disasters threatening human survival and development worldwide, occurs frequently, spans large areas and time scales, and readily causes heavy property losses and serious casualties. Research on fire detection technology has never stopped, and non-contact fire detection is one of its most important branches. Conventional non-contact fire detectors, however, have short detection distances and are strongly affected by the environment. Flame detection based on video images can avoid these problems, but existing video flame detection algorithms are easily disturbed by complex scenes and illumination conditions, suffer from poor real-time performance and reliability, and are prone to false alarms and missed detections, so they perform poorly in practical use.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a multi-feature fusion video fire disaster identification method which can effectively improve the identification efficiency of flames and ensure the real-time accurate judgment of fires.
In order to achieve the above purpose, the present invention adopts the following technical scheme. A multi-feature fusion video fire identification method comprises the following processing steps: S1: acquiring infrared video information of a monitoring area in real time through a CCD camera and an optical filter, wherein the optical filter is mounted in front of the CCD camera lens and the CCD camera is connected to a computer through a USB data line for collecting video information of the monitoring area in real time; S2: the computer extracts suspected fire regions by using an adaptive background-updating method that combines the inter-frame difference method and the background difference method; S3: the computer performs filtering, edge enhancement, and morphological opening preprocessing on the suspected flame region; S4: extracting static and dynamic characteristics of the flame from the suspected flame region; S5: assigning weights to the flame characteristics using the analytic hierarchy process; S6: computing the flame suspected probability as a weighted sum of the characteristics and comparing it with a global flame characteristic evaluation value set by the user; when the suspected probability is larger than the global evaluation value, a fire is deemed to have occurred, otherwise no fire is deemed to have occurred.
In an embodiment of the present invention, a cut-off wavelength of the optical filter is 800nm.
In an embodiment of the present invention, the adaptive background-updating method includes the following steps: BS1: opening up two memories, a dynamic memory and a static memory; BS2: starting image acquisition and defining an image-sequence counting parameter j; the 1st image is acquired and, if j equals 1, stored in the static memory as the background template, otherwise the next 3 images are stored in the dynamic memory; BS3: differencing each image in the dynamic memory against the background template image; BS4: performing binarization and edge-enhancement preprocessing on the gray difference image; BS5: measuring the size of the suspected fire region from the pixel value of the gray difference image obtained by subtracting the background template image from the dynamic-memory image, comparing it with preset pixel thresholds G1 and G2, and, according to the comparison result, either updating the background or raising an alarm for a suspected fire region.
Further, step BS5 comprises the following specific steps: if the pixel value bj1 of the gray difference image obtained by subtracting the background template image from the 1st image in the dynamic memory is smaller than G1, no fire has occurred in the monitored area; the judgment ends and the next image is read for judgment. If bj1 is larger than G1, the image is kept in the dynamic memory and the 2nd image is acquired. If the pixel difference bj2 between the 2nd image and the background template image is smaller than G1, no response is made and the next image is read; if it is larger than G1, the 2nd image is stored in another slot of the dynamic memory. If the 1st and 2nd images are consecutive in the dynamic memory, the pixel value bj2 of the difference between the 2nd image and the background template image is recorded; if they are not consecutive, an alarm is immediately raised for a suspected fire region. In the same way it is judged whether the 3rd image is consecutive with the first two; if so, the pixel difference bj3 between the 3rd image and the background template image is recorded, otherwise an alarm is raised for a suspected fire region. The 3rd image is then differenced against the 2nd, and the 2nd against the 1st, and the resulting pixel values are recorded; if both values are greater than the threshold G2, the 3rd image in the dynamic memory replaces the original background image in the static memory as the new background template, otherwise an alarm is raised for a suspected fire region.
In one embodiment of the invention, the extracting static and dynamic characteristics of the flame includes: circularity, sharp corner features, centroid movement, similarity, roughness, and area growth.
In an embodiment of the present invention, the weight-assignment process of the analytic hierarchy process for the flame features is: CS1: establishing a flame-feature importance evaluation table and defining the flame-feature qualitative-rule judgment matrix from it; CS2: calculating the consistency ratio of the judgment matrix; if the ratio is smaller than a specified value, the judgment matrix meets the consistency check condition, otherwise the values of the evaluation table are adjusted and steps CS1 and CS2 are repeated until the check condition is met; CS3: obtaining the eigenvector corresponding to the maximum eigenvalue of the judgment matrix and normalizing it to obtain the weight vector, in which each value corresponds one-to-one to the weight of a flame feature.
Further, CS2 includes the following steps: calculating the maximum eigenvalue λmax of the judgment matrix A; calculating the consistency index CI of the judgment matrix A:
CI = (λmax - n)/(n - 1)
where n is the dimension of the matrix; the consistency standard RI of the judgment matrix A is obtained by querying the average random consistency index table; the consistency ratio CR of the judgment matrix A is then calculated:
CR = CI/RI
if CR <0.1, the matrix A is considered to meet the test condition, otherwise, the evaluation table value is adjusted, and the steps CS1 and CS2 are repeated until the test condition is met.
In an embodiment of the present invention, the calculation of the flame suspected probability is: DS1: associating each flame feature u with an indicator (recorder) I(u); when the flame in the video image satisfies feature u, I(u) = 1, otherwise I(u) = 0; DS2: multiplying the weight of each flame feature by its indicator value and summing the products to obtain the flame suspected-probability value.
Compared with the prior art, the invention has the following advantages:
1. by applying adaptive background updating, flame multi-feature extraction, and the analytic-hierarchy-based flame multi-feature fusion method to the video of the monitored area, video fires are identified quickly and efficiently; when a fire occurs an alarm can be raised in time, effectively reducing property losses and avoiding casualties;
2. the influence of irrelevant backgrounds on fire flames is eliminated through an improved self-adaptive background updating model, the integrity and accuracy of the extraction of possible flame areas are improved, and more accurate video image information is provided for subsequent fire identification;
3. extracting a plurality of static and dynamic characteristics of the flame, wherein the flame information coverage is high;
4. the method is simple, effective and good in real-time, and finally provides a decision basis in a quantitative form for fire discrimination;
5. by using the infrared filter, the background interference can be effectively reduced, and the fire disaster recognition efficiency is further improved.
Drawings
Fig. 1 is a flow chart of fire disaster identification in a multi-feature fusion video fire disaster identification method according to an embodiment of the invention.
Fig. 2 is a flowchart of background adaptive updating and fire suspected region extraction in the multi-feature fusion video fire identification method according to an embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following description in conjunction with the accompanying drawings and specific embodiments.
The invention provides a multi-feature fusion video fire disaster identification method, and a main flow diagram is shown in fig. 1. Which comprises the following steps:
s1: video information of the monitoring area is collected in real time through the CCD camera and the optical filter.
Specifically, an infrared filter having a cutoff wavelength of 800nm is used, and a filter collar having a size suitable for the infrared filter is fixed to the CCD camera. The CCD camera is connected with the computer through a USB data line, so that video information of the monitoring area can be acquired in real time;
s2: extracting a suspected fire region by using an adaptive background updating method combining an inter-frame difference method and a background difference method; the main flow diagram is shown in fig. 2.
Specifically, the steps are as follows:
BS1: opening up two memories which are respectively dynamic memories and static memories;
BS2: starting image acquisition and defining an image-sequence counting parameter j; the 1st image is acquired and, if j equals 1, stored in the static memory as the background template, denoted BT1; the next 3 images are then stored in the dynamic memory, denoted RT_i, where i = 1, 2, 3;
BS3: each image in the dynamic memory is differenced against the background template image to obtain a gray-level difference image CZ_i:
CZ_i = |RT_i - BT1|
BS4: binarization and edge-enhancement preprocessing are performed on the gray difference image CZ_i;
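Steps BS3 and BS4 can be sketched in NumPy (a minimal illustration: the binarization threshold and array sizes are assumptions, and the edge-enhancement step is omitted here):

```python
import numpy as np

def gray_difference(frame, background):
    # BS3: gray-level difference image CZ_i = |RT_i - BT1|
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)).astype(np.uint8)

def binarize(diff, thresh=40):
    # BS4: fixed-threshold binarization (the threshold value 40 is assumed)
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

background = np.zeros((4, 4), dtype=np.uint8)   # background template BT1
frame = np.full((4, 4), 120, dtype=np.uint8)    # current dynamic-memory image RT_i
mask = binarize(gray_difference(frame, background))
print(int(mask.sum() // 255))  # 16: all 16 pixels changed beyond the threshold
```

In a real pipeline the binary mask would then go through the edge enhancement and opening operation named in steps BS4 and S3.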
BS5: the size of the suspected fire region is measured from the pixel value bj1 of the gray difference image obtained by subtracting the background template image from the dynamic-memory image; it is then compared with the preset pixel thresholds G1 and G2, and according to the comparison result the background is updated or an alarm is raised for a suspected fire region.
Specifically, the steps are as follows:
If bj1 is smaller than G1, no fire has occurred in the monitored area; the judgment ends and the next image is read for judgment. If bj1 is larger than G1, the image is kept in the dynamic memory and the 2nd image is acquired and differenced against the background template image. If the pixel difference is smaller than G1, a heat source may have entered momentarily or specularly reflected sunlight may have entered the monitored area; no response is made and the next image is read. If the difference is larger than G1, the 2nd image is stored in another slot of the dynamic memory. If the 1st and 2nd images are consecutive in the dynamic memory, the pixel value bj2 of the difference between the 2nd image and the background template image is recorded. If they are not consecutive, an alarm is immediately raised for a suspected fire region; this may be caused by the fluctuating area of a flame. In the same way it is judged whether the 3rd image is consecutive with the first two; if so, the pixel value bj3 of the difference between the 3rd image and the background image is recorded, otherwise an alarm is raised for a suspected fire region;
The 3rd image is then differenced against the 2nd, and the 2nd against the 1st, and the resulting pixel values are recorded. If both values are greater than the threshold G2, the 3rd image in the dynamic memory replaces the original background image in the static memory as the new background template; otherwise an alarm is raised for a suspected fire region.
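The three-frame decision logic of step BS5 can be condensed into a short sketch (the return labels and argument shapes are my own simplification; thresholds G1 and G2 are as in the text):

```python
def bs5_decision(bj, d21, d32, G1, G2, consecutive=True):
    """Simplified sketch of step BS5.
    bj       -- pixel differences of images 1..3 vs. the background template
    d21, d32 -- inter-frame differences (image2 vs image1, image3 vs image2)
    """
    if bj[0] < G1:
        return "no fire"            # monitored area unchanged vs. background
    if not consecutive:
        return "suspected fire"     # discontinuity: flame-area fluctuation
    if bj[1] < G1:
        return "transient"          # momentary heat source / reflected sunlight
    if d21 > G2 and d32 > G2:
        return "update background"  # persistent change: refresh the template
    return "suspected fire"

print(bs5_decision([5, 0, 0], 0, 0, G1=10, G2=20))  # no fire
```

The "update background" branch is what makes the model adaptive: a persistent, non-flickering change is absorbed into the background template instead of triggering an alarm.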
S3: filtering, edge enhancement and open operation pretreatment are carried out on the fire suspected area;
s4: extracting static and dynamic characteristics of flames from the suspected flame region;
specifically, the static and dynamic characteristics of the flame include the following: circularity, sharp corner features, centroid movement, similarity, roughness, and area growth.
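As one concrete example of a static feature, circularity is commonly defined as 4πA/P²; the patent does not give its formula, so this definition is an assumption:

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P^2: equals 1.0 for a perfect circle and drops toward 0 for
    # ragged, flame-like contours (definition assumed, not given in the patent)
    return 4 * math.pi * area / perimeter ** 2

r = 3.0
print(round(circularity(math.pi * r * r, 2 * math.pi * r), 3))  # 1.0 (circle)
print(round(circularity(1.0, 4.0), 3))                          # 0.785 (unit square)
```

A low circularity is thus one indicator that a bright region is a flickering flame rather than, say, a lamp.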
S5: the weight assignment process for the characteristics of the fire disaster by using the analytic hierarchy process comprises the following steps:
CS1: establishing a flame characteristic importance evaluation table, and defining a flame characteristic qualitative rule judgment matrix according to the evaluation table;
specifically, according to the research experience on fire, the flame characteristic importance evaluation table is obtained as follows:
Flame feature        Circularity  Sharp corners  Centroid movement  Similarity  Roughness  Area growth
Circularity          1            3              3                  3           5          5
Sharp corners        1/3          1              3/2                2           3          3
Centroid movement    1/3          2/3            1                  3/2         2          3
Similarity           1/3          1/2            2/3                1           3/2        2
Roughness            1/5          1/3            1/2                2/3         1          1
Area growth          1/5          1/3            1/3                1/2         1          1
The importance assessment table is then converted into a judgment matrix a as follows:
A =
[ 1     3     3     3     5     5
  1/3   1     3/2   2     3     3
  1/3   2/3   1     3/2   2     3
  1/3   1/2   2/3   1     3/2   2
  1/5   1/3   1/2   2/3   1     1
  1/5   1/3   1/3   1/2   1     1 ]
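The judgment matrix can be transcribed into NumPy and its principal eigenvalue and weight vector computed as a check (a sketch; the entries are taken from the importance table above):

```python
import numpy as np

# Rows/columns: circularity, sharp corners, centroid movement,
# similarity, roughness, area growth (transcribed from the table above)
A = np.array([
    [1,   3,   3,   3,   5,   5],
    [1/3, 1,   3/2, 2,   3,   3],
    [1/3, 2/3, 1,   3/2, 2,   3],
    [1/3, 1/2, 2/3, 1,   3/2, 2],
    [1/5, 1/3, 1/2, 2/3, 1,   1],
    [1/5, 1/3, 1/3, 1/2, 1,   1],
])

eigvals, eigvecs = np.linalg.eig(A)
lam_max = eigvals.real.max()                    # maximum eigenvalue λmax
w = eigvecs[:, eigvals.real.argmax()].real
w = w / w.sum()                                 # normalized weight vector
print(round(lam_max, 2))
```

For a perfectly consistent pairwise-comparison matrix λmax equals the dimension n = 6; the small excess over 6 is what the consistency check below quantifies.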
CS2: calculating the consistency ratio of the judgment matrix; if the ratio is smaller than a specified value, the judgment matrix meets the consistency check condition, otherwise the values of the evaluation table are adjusted and steps CS1 and CS2 are repeated until the check condition is met;
specifically, the maximum eigenvalue of the judgment matrix A is calculated as λmax = 6.0685;
Calculating a consistency index CI of the judgment matrix A:
CI = (λmax - n)/(n - 1) = (6.0685 - 6)/(6 - 1) = 0.0137
the consistency standard ri=1.24 of the judgment matrix a is obtained by querying an N-dimensional vector average random consistency index lookup table.
N (matrix dimension)   1   2   3      4      5      6      7      8
RI                     0   0   0.58   0.90   1.12   1.24   1.32   1.41
Calculating the consistency ratio CR of the judgment matrix A:
CR = CI/RI = 0.0137/1.24 ≈ 0.011
if CR <0.1, the inconsistency degree of the matrix A is considered acceptable, otherwise, the evaluation table value is adjusted, and the steps CS1 and CS2 are repeated until the inspection condition is met;
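The CI/CR computation in step CS2 can be reproduced directly from the numbers above:

```python
# Average random consistency index RI (standard AHP table, as reproduced above)
RI_TABLE = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def consistency_ratio(lam_max, n):
    ci = (lam_max - n) / (n - 1)   # consistency index CI
    return ci / RI_TABLE[n]        # consistency ratio CR = CI / RI

cr = consistency_ratio(6.0685, 6)
print(round(cr, 4))  # 0.011 -- well below the 0.1 acceptance threshold
```

Since CR < 0.1, the judgment matrix passes the consistency check and its weights can be used as-is.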
CS3: and obtaining a feature vector corresponding to the maximum feature value of the judgment matrix, and obtaining a weight vector after standardization, wherein each numerical value in the weight vector corresponds to the weight value of the flame features one by one.
Specifically, the feature vector corresponding to the maximum eigenvalue of the judgment matrix A is obtained as v = [0.815, 0.3937, 0.2256, 0.1432, 0.1286], which is then normalized to V = [0.4053, 0.1958, 0.1515, 0.1122, 0.0712, 0.0640]. The automatic weighting result based on the analytic hierarchy process is as follows: the circularity weight is 0.4053, the sharp-corner weight is 0.1958, the centroid-movement weight is 0.1515, the flame roughness weight is 0.1122, the flame similarity weight is 0.0712, and the area-growth weight is 0.0640.
S6: and (3) weighting the characteristics to calculate the suspected probability of the flame, comparing the suspected probability with a global flame characteristic evaluation value set by a user, and when the suspected probability is larger than the global flame characteristic evaluation value, recognizing that the fire disaster occurs, or else, judging that the fire disaster does not occur.
Specifically, the calculation process of the flame suspected probability is as follows:
DS1: matching each characteristic u of the flame with a recorder I (u), wherein when the flame in the video image meets a certain flame characteristic, the I (u) is 1, otherwise, the I (u) is 0;
DS2: the weight W(u) of each flame feature is multiplied by the corresponding indicator value I(u), and the products are summed to obtain the flame suspected-probability value I_F:
I_F = Σ_u W(u) · I(u)
And when the suspected probability of the flame is larger than the characteristic global evaluation value, the fire is considered to happen, otherwise, the fire is not considered to happen.
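Steps DS1 and DS2 reduce to a weighted sum of binary indicators; a sketch using the weights given above (the feature key names and the set of detected features are my own illustration):

```python
# AHP weights from the document; the dictionary keys are illustrative names
WEIGHTS = {
    "circularity": 0.4053, "sharp_corner": 0.1958, "centroid_movement": 0.1515,
    "roughness": 0.1122, "similarity": 0.0712, "area_growth": 0.0640,
}

def flame_probability(indicators):
    # I_F = sum over features u of W(u) * I(u), with each I(u) in {0, 1}
    return sum(w * indicators.get(u, 0) for u, w in WEIGHTS.items())

detected = {"circularity": 1, "sharp_corner": 1, "centroid_movement": 1}
p = flame_probability(detected)
print(round(p, 4))  # 0.7526 -- compared against the user-set global evaluation value
```

If the user's global evaluation value were, say, 0.6, this frame would be classified as fire; the threshold is deliberately left configurable, as the text notes below.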
The infrared filter can reduce the background interference of the fire environment, and further improve the fire identification efficiency.
The flame characteristic global evaluation value can be set according to actual application conditions, so that the algorithm has good adaptability.
In summary, the invention provides a multi-feature fusion video fire disaster identification method capable of efficiently detecting fire disasters.
While the invention has been described with reference to various embodiments, it will be understood that the invention is not limited to the embodiments described above, but that various obvious modifications and changes can be made by those skilled in the art to which the invention pertains without departing from the principles of the invention.

Claims (4)

1. The multi-feature fusion video fire disaster identification method is characterized by comprising the following processing steps:
s1: acquiring infrared video information of a monitoring area in real time through a CCD camera and an optical filter; the optical filter is sleeved in front of the CCD camera lens; the CCD camera is connected with the computer through a USB data line and is used for collecting video information of the monitoring area in real time;
s2: the computer extracts suspected fire areas by using an adaptive background updating method combining a difference method between video image frames and a background difference method;
s3: the computer carries out filtering, edge enhancement and open operation pretreatment on the suspected flame area;
s4: extracting static and dynamic characteristics of flames from the suspected flame region;
s5: performing weight assignment on a plurality of flame characteristics of the fire disaster by using an analytic hierarchy process;
s6: the method comprises the steps of carrying out weighted calculation on a plurality of characteristics to obtain a flame suspected probability, then comparing the flame suspected probability with a flame characteristic global evaluation value set by a user, and when the suspected probability is larger than the characteristic global evaluation value, recognizing that fire occurs, otherwise, judging that the fire does not occur;
the self-adaptive background updating method comprises the following steps:
BS1: opening up two memories which are respectively dynamic memories and static memories;
BS2: starting image acquisition, defining an image sequence counting parameter j, acquiring a 1 st image, judging whether j is equal to 1, if so, storing the image into a static memory as a background template, otherwise, continuously storing the next 3 images into a dynamic memory;
BS3: making difference between each image in the dynamic memory and the background template image;
BS4: binarization and edge enhancement pretreatment are carried out on the gray level difference image;
BS5: the size of the suspected fire region is measured from the pixel value of the gray difference image obtained by subtracting the background template image from the dynamic-memory image; the measured size is then compared with the preset pixel thresholds G1 and G2, and according to the comparison result the background is updated or an alarm is raised for a suspected fire region;
step BS5 comprises the following specific steps: if the pixel value bj1 of the gray difference image obtained by subtracting the background template image from the 1st image in the dynamic memory is smaller than G1, no fire has occurred in the monitored area; the judgment ends and the next image is read for judgment; if bj1 is larger than G1, the image is kept in the dynamic memory and the 2nd image is acquired;
if the pixel difference bj2 between the 2nd image and the background template image is smaller than G1, no response is made and the next image is read; if it is larger than G1, the 2nd image is stored in another slot of the dynamic memory; if the 1st and 2nd images are consecutive in the dynamic memory, the pixel value bj2 of the difference between the 2nd image and the background template image is recorded; if they are not consecutive, an alarm is immediately raised for a suspected fire region; in the same way it is judged whether the 3rd image is consecutive with the first two; if so, the pixel difference bj3 between the 3rd image and the background template image is recorded, otherwise an alarm is raised for a suspected fire region;
the 3rd image is differenced against the 2nd, and the 2nd against the 1st, and the resulting pixel values are recorded; if both values are larger than the threshold G2, the 3rd image stored in the dynamic memory replaces the original background image in the static memory as the new background template, otherwise an alarm is raised for a suspected fire region;
in the step S5, the weight assignment process of the analytic hierarchy process for the flame features is as follows:
CS1: establishing a flame characteristic importance evaluation table, and defining a flame characteristic qualitative rule judgment matrix A according to the evaluation table;
CS2: calculating the consistency ratio of the judgment matrix; if the ratio is smaller than a specified value, the judgment matrix meets the consistency check condition, otherwise the values of the evaluation table are adjusted and steps CS1 and CS2 are repeated until the check condition is met;
CS3: obtaining a feature vector corresponding to the maximum feature value of the judgment matrix, and obtaining a weight vector after standardization, wherein each numerical value in the weight vector corresponds to the weight value of a plurality of flame features one by one;
CS2 comprises the following steps:
calculating the maximum eigenvalue of the judgment matrix A as lambda max
Calculating a consistency index CI of the judgment matrix A:
CI = (λmax - n)/(n - 1)
where n is the dimension of the matrix; the consistency standard RI of the judgment matrix A is obtained by querying the average random consistency index table; the consistency ratio CR of the judgment matrix A is then calculated:
CR = CI/RI
if CR <0.1, matrix A is considered to satisfy the test condition, otherwise the evaluation table values are adjusted, and the CS1 and CS2 steps are repeated until the test condition is satisfied.
2. The multi-feature fusion video fire identification method of claim 1, wherein: the cut-off wavelength of the optical filter is 800nm.
3. The multi-feature fusion video fire identification method of claim 1, wherein: step S4 of extracting static and dynamic characteristics of the flame includes: circularity, sharp corner features, centroid movement, similarity, roughness, and area growth.
4. The multi-feature fusion video fire disaster identification method according to claim 1, wherein the calculating process of the flame suspected probability is as follows:
DS1: associating each flame feature u with an indicator (recorder) I(u); when the flame in the video image satisfies feature u, I(u) = 1, otherwise I(u) = 0;
DS2: and multiplying the weight of each flame characteristic by the corresponding recorder value, and then adding to obtain a flame suspected probability value.
CN201810208155.2A 2018-03-14 2018-03-14 Multi-feature fusion video fire disaster identification method Active CN110276228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810208155.2A CN110276228B (en) 2018-03-14 2018-03-14 Multi-feature fusion video fire disaster identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810208155.2A CN110276228B (en) 2018-03-14 2018-03-14 Multi-feature fusion video fire disaster identification method

Publications (2)

Publication Number Publication Date
CN110276228A CN110276228A (en) 2019-09-24
CN110276228B true CN110276228B (en) 2023-06-20

Family

ID=67958258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810208155.2A Active CN110276228B (en) 2018-03-14 2018-03-14 Multi-feature fusion video fire disaster identification method

Country Status (1)

Country Link
CN (1) CN110276228B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706444B (en) * 2019-10-22 2021-05-14 北京航天常兴科技发展股份有限公司 Comprehensive pyrolytic particle electrical fire monitoring method, device and system
CN111710125A (en) * 2020-05-26 2020-09-25 上饶市中科院云计算中心大数据研究院 Intelligent scenic spot fire prevention early warning method and system
CN113516091B (en) * 2021-07-27 2024-03-29 福建工程学院 Method for identifying electric spark image of transformer substation
CN115359615B (en) * 2022-08-15 2023-08-04 北京飞讯数码科技有限公司 Indoor fire alarm early warning method, system, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740866A (en) * 2016-01-22 2016-07-06 合肥工业大学 Rotary kiln sintering state recognition method with artificial feedback regulation mechanism
CN106845443A (en) * 2017-02-15 2017-06-13 福建船政交通职业学院 Video flame detecting method based on multi-feature fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009136895A1 (en) * 2008-05-08 2009-11-12 Utc Fire & Security System and method for video detection of smoke and flame

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740866A (en) * 2016-01-22 2016-07-06 合肥工业大学 Rotary kiln sintering state recognition method with artificial feedback regulation mechanism
CN106845443A (en) * 2017-02-15 2017-06-13 福建船政交通职业学院 Video flame detecting method based on multi-feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on multi-feature fusion video fire recognition; Jin Xiao; Machine Building & Automation; 2019-08-31 (No. 04); full text *

Also Published As

Publication number Publication date
CN110276228A (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN110276228B (en) Multi-feature fusion video fire disaster identification method
CN110929756B (en) Steel size and quantity identification method based on deep learning, intelligent equipment and storage medium
CN112528960A (en) Smoking behavior detection method based on human body posture estimation and image classification
CN106682635A (en) Smoke detecting method based on random forest characteristic selection
CN111626188B (en) Indoor uncontrollable open fire monitoring method and system
CN111179279A (en) Comprehensive flame detection method based on ultraviolet and binocular vision
CN112149512A (en) Helmet wearing identification method based on two-stage deep learning
CN108038867A (en) Fire defector and localization method based on multiple features fusion and stereoscopic vision
CN103903020B (en) A kind of fire image recognition methods and device based on CodeBook
CN112288778B (en) Infrared small target detection method based on multi-frame regression depth network
CN116386120B (en) A noninductive control management system for wisdom campus dormitory
CN112580430A (en) Power plant smoke and fire monitoring method, device and system based on RGB vision and storage medium
Tashakkori et al. Image processing for honey bee hive health monitoring
CN114120171A (en) Fire smoke detection method, device and equipment based on video frame and storage medium
JPWO2018173947A1 (en) Image retrieval device
CN109544535B (en) Peeping camera detection method and system based on optical filtering characteristics of infrared cut-off filter
CN111091586A (en) Rapid smoke dynamic shielding area detection and positioning method and application thereof
CN111126230A (en) Smoke concentration quantitative evaluation method and electronic equipment applying same
CN112507952B (en) Self-adaptive human body temperature measurement region screening method and forehead non-shielding region extraction method
CN114973104A (en) Dynamic flame detection algorithm and system based on video image
CN107403192A (en) A kind of fast target detection method and system based on multi-categorizer
CN113299034A (en) Flame identification early warning method suitable for multiple scenes
CN112668387A (en) Illegal smoking recognition method based on AlphaPose
CN112560672A (en) Fire image recognition method based on SVM parameter optimization
CN111401275A (en) Information processing method and device for identifying grassland edge

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant