CN110619293A - Flame detection method based on binocular vision - Google Patents

Flame detection method based on binocular vision

Info

Publication number
CN110619293A
CN110619293A CN201910840715.0A
Authority
CN
China
Prior art keywords
feature vector
visible light
vector
binocular vision
detection method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910840715.0A
Other languages
Chinese (zh)
Inventor
马胤刚
张冠男
王明威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Eye Chi Yun Mdt Infotech Ltd
Original Assignee
Shenyang Eye Chi Yun Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Eye Chi Yun Mdt Infotech Ltd filed Critical Shenyang Eye Chi Yun Mdt Infotech Ltd
Priority to CN201910840715.0A priority Critical patent/CN110619293A/en
Publication of CN110619293A publication Critical patent/CN110619293A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2132Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on discrimination criteria, e.g. discriminant analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The invention discloses a flame detection method based on binocular vision, which comprises the following steps: calibrating the infrared camera and the visible light camera with a calibration board to obtain the translation vector T and rotation matrix R from the infrared camera to the visible light camera; setting a temperature alarm threshold and thresholding the infrared image against it to obtain a binary image; labeling the connected components of the binary image to obtain the coordinates of the target rectangular frame; converting and expanding the rectangular-frame coordinates with the translation vector T and rotation matrix R to obtain the corresponding rectangular frame in the visible light image; taking that rectangular frame as the suspected target area; and extracting the color feature vector, contour feature vector and texture feature vector of the suspected target area, concatenating them into a one-dimensional column vector, and inputting the vector into an SVM to obtain the judgment result.

Description

Flame detection method based on binocular vision
Technical Field
The invention relates to the field of binocular vision fire detection, and particularly provides a fire detection method utilizing an infrared camera and a visible light camera.
Background
Fire causes immeasurable losses to human society, and current technology mostly uses binocular vision to detect it. Existing binocular vision detection methods generally fall into two types: 1. a black-and-white camera and a color camera simultaneously capture video of the monitored scene, and target recognition is performed on each stream separately; 2. an infrared camera detects high-temperature regions in the scene, and a visible light camera is started to detect fire after an anomaly is found. The first method cannot effectively use the temperature information of the flame, and the second cannot use the coordinate information of the infrared high-temperature region, so both methods have high false alarm rates.
Therefore, how to comprehensively utilize the flame temperature data and the high-temperature coordinate information of the infrared image to judge the flame so as to reduce the false alarm rate is a problem to be solved in the field.
Disclosure of Invention
In view of this, the present invention aims to provide a flame detection method based on binocular vision, so as to solve the problem that the flame temperature data and the infrared image high-temperature coordinates cannot be effectively fused in the prior art.
The technical scheme provided by the invention is as follows: the flame detection method based on binocular vision comprises the following steps:
s1: calibrating the infrared camera and the visible light camera in a binocular manner, and obtaining a translation vector T and a rotation matrix R from the infrared camera to the visible light camera;
s2: acquiring an infrared image in a field of view by using an infrared camera, and presetting a temperature alarm threshold value;
s3: traversing the temperature value corresponding to each pixel in the infrared image, setting the pixel value of the pixel with the temperature value exceeding the temperature alarm threshold value to be 255 and setting the pixel value of the pixel with the temperature value smaller than the temperature alarm threshold value to be 0 to obtain a binary image of the infrared image, and then marking and communicating the binary image to obtain the coordinates (r1, c1) of a rectangular frame outside the high-temperature region (r2, c 2);
s4: converting the (R1, c1) (R2, c2) by using the translation vector T and the rotation matrix R to obtain the coordinates (y1, x1) (y2, x2) of the visible light image corresponding to the high-temperature region;
s5: expanding the rectangular frame determined by (y1, x1) (y2, x2) to obtain new target area coordinates (R1, C1) (R2, C2);
s6: selecting a rectangular frame determined by coordinates (R1, C1) (R2, C2) in the visible light image as a suspected target area, and then extracting a color feature vector, a contour feature vector and a texture feature vector of the suspected target area;
s7: and connecting the color feature vector, the contour feature vector and the texture feature vector end to form a one-dimensional column vector, and inputting the one-dimensional column vector into the SVM to obtain a judgment result.
Preferably, S1 includes the steps of:
s11: the calibration board is placed in a binocular vision field of the infrared camera and the visible light camera, so that the calibration board is ensured to be complete and clear in images collected by the two cameras, and the proportion of the calibration board in the infrared image is as large as possible;
s12: calibrating the two cameras respectively to obtain respective internal reference matrixes and distortion coefficient matrixes of the two cameras;
s13: and calculating a translation vector T and a rotation matrix R from the infrared camera to the visible light camera.
More preferably, in S11, the ratio of the calibration plate in the infrared image is 60% or more.
Further preferably, in S2, the value range of the temperature alarm threshold is 65 ℃ to 200 ℃.
More preferably, in S5, the rectangular frame is extended by 1/n of itself in both the length direction and the width direction, wherein n is 2-4.
Further preferably, in S6, the color feature vector is obtained by LDA dimensionality reduction of the R component of the target region.
Further preferably, in S6, the contour feature vector is obtained as follows: the Sobel operator is used as a template and convolved with the target area to obtain a contour matrix, and LDA dimension reduction is then applied to the contour matrix to obtain the contour feature vector.
Further preferably, in S6, the texture feature vector is a 5-dimensional feature vector composed of energy, entropy, contrast, inverse variance, and correlation calculated from the gray level co-occurrence matrix.
According to the flame detection method based on binocular vision, the binocular system is first calibrated to obtain the translation vector T and rotation matrix R from the infrared camera to the visible light camera; the infrared image is then thresholded by temperature and its connected components are labeled to obtain the coordinates of the suspected target area; and those coordinates are converted and expanded with T and R to finally lock onto the suspected target area in the visible light image.
Detailed Description
The invention will be further explained with reference to specific embodiments, without limiting the invention.
The invention provides a fire detection method based on binocular vision, which comprises the following steps:
s1: calibrating an infrared camera and a visible light camera in a binocular mode, and obtaining a translation vector T and a rotation matrix R from the infrared camera to the visible light camera, wherein the binocular calibration method specifically comprises the following steps:
s11: the calibration plate is placed in a binocular visual field of the infrared camera and the visible light camera, the calibration plate is ensured to be complete and clear in images collected by the two cameras, the proportion of the calibration plate in the infrared image is as large as possible, and preferably, the proportion of the calibration plate in the infrared image is more than 60%;
s12: calibrating the two cameras respectively to obtain respective internal reference matrixes and distortion coefficient matrixes of the two cameras, wherein the calibration of the two cameras can be realized by using a Matlab tool box;
s13: calculating a translation vector T and a rotation matrix R from the infrared camera to the visible light camera, wherein the translation vector T and the rotation matrix R can be realized by using a Matlab tool box;
s2: acquiring an infrared image in a field of view by using an infrared camera, and presetting a temperature alarm threshold, wherein the temperature alarm threshold is set according to the scene and the ambient temperature, and the value range is 65-200 ℃;
s3: traversing the temperature value corresponding to each pixel in the infrared image, setting the pixel value of the pixel with the temperature value exceeding the temperature alarm threshold value to be 255 and setting the pixel value of the pixel with the temperature value smaller than the temperature alarm threshold value to be 0 to obtain a binary image of the infrared image, and then marking and communicating the binary image to obtain the coordinates (r1, c1) of a rectangular frame outside the high-temperature region (r2, c 2);
s4: converting the (R1, c1) (R2, c2) by using the translation vector T and the rotation matrix R to obtain the coordinates (y1, x1) (y2, x2) of the visible light image corresponding to the high-temperature region;
s5: expanding the rectangular frame determined by (y1, x1) (y2, x2) to obtain new target area coordinates (R1, C1) (R2, C2), wherein the length direction and the width direction of the rectangular frame are respectively expanded by 1/n of the rectangular frame, and preferably, n is 2-4;
wherein R1 = y1 - (y2 - y1)/(2n), R2 = y2 + (y2 - y1)/(2n);
C1 = x1 - (x2 - x1)/(2n), C2 = x2 + (x2 - x1)/(2n);
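The expansion of S5 follows directly from the formulas above; a small helper, with optional clamping to the image bounds added as an extra safeguard that the patent does not mention:

```python
def expand_box(y1, x1, y2, x2, n=2, img_shape=None):
    """Expand the visible-light box by 1/n of its size in each direction (S5)."""
    R1 = y1 - (y2 - y1) / (2 * n)
    R2 = y2 + (y2 - y1) / (2 * n)
    C1 = x1 - (x2 - x1) / (2 * n)
    C2 = x2 + (x2 - x1) / (2 * n)
    if img_shape is not None:                  # clamping is an added assumption
        h, w = img_shape[:2]
        R1, C1 = max(R1, 0), max(C1, 0)
        R2, C2 = min(R2, h - 1), min(C2, w - 1)
    return R1, C1, R2, C2
```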
s6: selecting a rectangular frame determined by coordinates (R1, C1) (R2, C2) in the visible light image as a suspected target area, and then extracting a color feature vector, a contour feature vector and a texture feature vector of the suspected target area;
preferably, the color feature vector is obtained by performing LDA dimension reduction on the R component of the target area;
the contour feature vector is obtained as follows: the Sobel operator is used as a template and convolved with the target area to obtain a contour matrix, and LDA dimension reduction is then applied to the contour matrix to obtain the contour feature vector;
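A hedged sketch of the color and contour features of S6. LDA is a supervised projection, so the code assumes LDA models fitted offline on labelled flame / non-flame patches (lda_color and lda_contour below are such pre-fitted scikit-learn LinearDiscriminantAnalysis objects) and a fixed patch size so the flattened vectors have constant length; both are assumptions beyond the patent text.

```python
import cv2
import numpy as np

PATCH = (64, 64)     # fixed size the suspected region is resized to (assumption)

def color_feature(region_bgr, lda_color):
    """LDA-reduced feature of the R component of the suspected region."""
    r = cv2.resize(region_bgr[:, :, 2], PATCH).astype(np.float32).ravel()
    return lda_color.transform(r.reshape(1, -1)).ravel()

def contour_feature(region_bgr, lda_contour):
    """Sobel contour matrix of the suspected region, then LDA reduction."""
    gray = cv2.cvtColor(cv2.resize(region_bgr, PATCH), cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    contour = cv2.magnitude(gx, gy).ravel()            # contour matrix, flattened
    return lda_contour.transform(contour.reshape(1, -1)).ravel()
```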
the texture feature vector is a 5-dimensional feature vector consisting of energy, entropy, contrast, inverse variance and correlation calculated by the gray level co-occurrence matrix;
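A sketch of the 5-dimensional GLCM texture vector using scikit-image. Entropy is computed directly from the normalised co-occurrence matrix, and homogeneity (the inverse difference moment) is used as a stand-in for the patent's "inverse variance"; the quantisation level and these substitutions are assumptions of this sketch.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_feature(region_bgr, levels=32):
    """Energy, entropy, contrast, homogeneity and correlation from the GLCM."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    gray = (gray // (256 // levels)).astype(np.uint8)          # quantise to keep the GLCM small
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    energy = graycoprops(glcm, "energy")[0, 0]
    contrast = graycoprops(glcm, "contrast")[0, 0]
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]       # stand-in for "inverse variance"
    correlation = graycoprops(glcm, "correlation")[0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([energy, entropy, contrast, homogeneity, correlation])
```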
s7: and connecting the color feature vector, the contour feature vector and the texture feature vector end to form a one-dimensional column vector, and inputting the one-dimensional column vector into the SVM to obtain a judgment result.
According to the flame detection method based on binocular vision, the binocular system is first calibrated to obtain the translation vector T and rotation matrix R from the infrared camera to the visible light camera; the infrared image is then traversed, thresholded and labeled to obtain the coordinates of the suspected target area; and those coordinates are converted and expanded with T and R to finally lock onto the suspected target area in the visible light image.
The embodiments of the present invention are described in a progressive manner, with each embodiment focusing on its differences from the others; for the parts the embodiments have in common, reference may be made between them.
While the embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.

Claims (8)

1. A fire detection method based on binocular vision, characterized by comprising the following steps:
s1: calibrating the infrared camera and the visible light camera in a binocular manner, and obtaining a translation vector T and a rotation matrix R from the infrared camera to the visible light camera;
s2: acquiring an infrared image in a field of view by using an infrared camera, and presetting a temperature alarm threshold value;
s3: traversing the temperature value corresponding to each pixel in the infrared image, setting the pixel value of the pixel with the temperature value exceeding the temperature alarm threshold value to be 255 and setting the pixel value of the pixel with the temperature value smaller than the temperature alarm threshold value to be 0 to obtain a binary image of the infrared image, and then marking and communicating the binary image to obtain the coordinates (r1, c1) of a rectangular frame outside the high-temperature region (r2, c 2);
s4: converting the (R1, c1) (R2, c2) by using the translation vector T and the rotation matrix R to obtain the coordinates (y1, x1) (y2, x2) of the visible light image corresponding to the high-temperature region;
s5: expanding the rectangular frame determined by (y1, x1) (y2, x2) to obtain new target area coordinates (R1, C1) (R2, C2);
s6: selecting a rectangular frame determined by coordinates (R1, C1) (R2, C2) in the visible light image as a suspected target area, and then extracting a color feature vector, a contour feature vector and a texture feature vector of the suspected target area;
s7: and connecting the color feature vector, the contour feature vector and the texture feature vector end to form a one-dimensional column vector, and inputting the one-dimensional column vector into the SVM to obtain a judgment result.
2. The binocular vision based fire detection method of claim 1, wherein: s1 includes the steps of:
s11: the calibration board is placed in a binocular vision field of the infrared camera and the visible light camera, so that the calibration board is ensured to be complete and clear in images collected by the two cameras, and the proportion of the calibration board in the infrared image is as large as possible;
s12: calibrating the two cameras respectively to obtain respective internal reference matrixes and distortion coefficient matrixes of the two cameras;
s13: and calculating a translation vector T and a rotation matrix R from the infrared camera to the visible light camera.
3. The binocular vision based fire detection method of claim 2, wherein: in S11, the proportion of the calibration plate in the infrared image is 60% or more.
4. The binocular vision based fire detection method of claim 1, wherein: in S2, the value range of the temperature alarm threshold is 65-200 ℃.
5. The binocular vision based fire detection method of claim 1, wherein: in S5, the length direction and the width direction of the rectangular frame are both expanded by 1/n of the rectangular frame, wherein n is 2-4.
6. The binocular vision based fire detection method of claim 1, wherein: in S6, the color feature vector is obtained by performing LDA dimension reduction on the R component of the target region.
7. The binocular vision based fire detection method of claim 1, wherein: in S6, the contour feature vector is obtained as follows: the Sobel operator is used as a template and convolved with the target area to obtain a contour matrix, and LDA dimension reduction is then applied to the contour matrix to obtain the contour feature vector.
8. The binocular vision based fire detection method of claim 1, wherein: in S6, the texture feature vector is a 5-dimensional feature vector composed of energy, entropy, contrast, inverse variance, and correlation calculated from the gray level co-occurrence matrix.
CN201910840715.0A 2019-09-06 2019-09-06 Flame detection method based on binocular vision Pending CN110619293A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910840715.0A CN110619293A (en) 2019-09-06 2019-09-06 Flame detection method based on binocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910840715.0A CN110619293A (en) 2019-09-06 2019-09-06 Flame detection method based on binocular vision

Publications (1)

Publication Number Publication Date
CN110619293A (en) 2019-12-27

Family

ID=68922362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910840715.0A Pending CN110619293A (en) 2019-09-06 2019-09-06 Flame detection method based on binocular vision

Country Status (1)

Country Link
CN (1) CN110619293A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651276A (en) * 2020-09-04 2021-04-13 江苏濠汉信息技术有限公司 Power transmission channel early warning system based on double-light fusion and early warning method thereof
CN113205562A (en) * 2021-05-31 2021-08-03 中国矿业大学(北京) Mine thermal power disaster identification and positioning method based on binocular vision
CN115494193A (en) * 2022-11-16 2022-12-20 常州市建筑科学研究院集团股份有限公司 Machine vision-based flame transverse propagation detection method and system for single body combustion test

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110342A (en) * 2009-12-24 2011-06-29 中国航天科工集团第三研究院第八三五八研究所 Fire detection method and device for realizing same
CN102567989A (en) * 2011-11-30 2012-07-11 重庆大学 Space positioning method based on binocular stereo vision
US20120314066A1 (en) * 2011-06-10 2012-12-13 Lee Yeu Yong Fire monitoring system and method using composite camera
CN104933723A (en) * 2015-07-21 2015-09-23 闽江学院 Tongue image segmentation method based on sparse representation
CN105488941A (en) * 2016-01-15 2016-04-13 中林信达(北京)科技信息有限责任公司 Double-spectrum forest fire disaster monitoring method and double-spectrum forest fire disaster monitoring device based on infrared-visible light image
CN107253485A (en) * 2017-05-16 2017-10-17 北京交通大学 Foreign matter invades detection method and foreign matter intrusion detection means
CN110135266A (en) * 2019-04-17 2019-08-16 浙江理工大学 A kind of dual camera electrical fire preventing control method and system based on deep learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110342A (en) * 2009-12-24 2011-06-29 中国航天科工集团第三研究院第八三五八研究所 Fire detection method and device for realizing same
US20120314066A1 (en) * 2011-06-10 2012-12-13 Lee Yeu Yong Fire monitoring system and method using composite camera
CN102567989A (en) * 2011-11-30 2012-07-11 重庆大学 Space positioning method based on binocular stereo vision
CN104933723A (en) * 2015-07-21 2015-09-23 闽江学院 Tongue image segmentation method based on sparse representation
CN105488941A (en) * 2016-01-15 2016-04-13 中林信达(北京)科技信息有限责任公司 Double-spectrum forest fire disaster monitoring method and double-spectrum forest fire disaster monitoring device based on infrared-visible light image
CN107253485A (en) * 2017-05-16 2017-10-17 北京交通大学 Foreign matter invades detection method and foreign matter intrusion detection means
CN110135266A (en) * 2019-04-17 2019-08-16 浙江理工大学 A kind of dual camera electrical fire preventing control method and system based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
蒋先刚: "Research on Flame and Smoke Detection Methods Based on Sparse Representation" (《基于稀疏表达的火焰与烟雾探测方法研究》), 31 August 2017 *
谢威 (chief ed.): "Smart Technology and Information Services" (《智慧科技与情报服务》), 30 November 2018, Beijing University of Posts and Telecommunications Press *
齐力 (ed.): "Big Data Technology and Applications for Public Security" (《公共安全大数据技术与应用》), 31 December 2017 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651276A (en) * 2020-09-04 2021-04-13 江苏濠汉信息技术有限公司 Power transmission channel early warning system based on double-light fusion and early warning method thereof
CN113205562A (en) * 2021-05-31 2021-08-03 中国矿业大学(北京) Mine thermal power disaster identification and positioning method based on binocular vision
CN113205562B (en) * 2021-05-31 2023-09-15 中国矿业大学(北京) Mine thermodynamic disaster identification and positioning method based on binocular vision
CN115494193A (en) * 2022-11-16 2022-12-20 常州市建筑科学研究院集团股份有限公司 Machine vision-based flame transverse propagation detection method and system for single body combustion test

Similar Documents

Publication Publication Date Title
WO2018076732A1 (en) Method and apparatus for merging infrared image and visible light image
CN110619293A (en) Flame detection method based on binocular vision
CN103400150B (en) A kind of method and device that road edge identification is carried out based on mobile platform
US8724885B2 (en) Integrated image processor
CN109447011B (en) Real-time monitoring method for infrared leakage of steam pipeline
CN102811863A (en) Automated Inspection Of A Printed Image
CN110879080A (en) High-precision intelligent measuring instrument and measuring method for high-temperature forge piece
CN103281513B (en) Pedestrian recognition method in the supervisory control system of a kind of zero lap territory
CN110751635B (en) Oral cavity detection method based on interframe difference and HSV color space
JP3486229B2 (en) Image change detection device
CN114581760B (en) Equipment fault detection method and system for machine room inspection
CN112053392A (en) Rapid registration and fusion method for infrared and visible light images
CN116797977A (en) Method and device for identifying dynamic target of inspection robot and measuring temperature and storage medium
CN104813341B (en) Image processing system and image processing method
CN110595397A (en) Grate cooler working condition monitoring method based on image recognition
CN112634179B (en) Camera shake prevention power transformation equipment image change detection method and system
CN111145234A (en) Fire smoke detection method based on binocular vision
Chowdhury et al. Robust human detection and localization in security applications
CN110580684A (en) image enhancement method based on black-white-color binocular camera
EP2176829B1 (en) Arrangement and method for processing image data
CN116343100B (en) Target identification method and system based on self-supervision learning
CN116563391B (en) Automatic laser structure calibration method based on machine vision
CN108254380A (en) PCB circuit board template matching method based on Digital Image Processing
JP4682782B2 (en) Image processing device
US9875549B2 (en) Change detection in video data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20191227)