CN106845410B - Flame identification method based on deep learning model - Google Patents

Flame identification method based on deep learning model

Info

Publication number
CN106845410B
Authority
CN
China
Prior art keywords
image
spherical
flame
fisheye
model
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710047239.8A
Other languages
Chinese (zh)
Other versions
CN106845410A (en)
Inventor
邓军 (Deng Jun)
秦学斌 (Qin Xuebin)
王伟峰 (Wang Weifeng)
Current Assignee
Xi'an University of Science and Technology
Original Assignee
Xi'an University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Xi'an University of Science and Technology
Priority to CN201710047239.8A
Publication of CN106845410A
Application granted
Publication of CN106845410B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06T3/08
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00Fire alarms; Alarms responsive to explosion
    • G08B17/12Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction

Abstract

The invention relates to a flame identification method based on a deep learning model, comprising the steps of: (1) acquiring video information, reading each frame image, and applying Gaussian filtering to obtain a filtered fisheye image; (2) correcting the internal parameters of the fisheye image; (3) correcting the external parameters of the fisheye image; (4) constructing a spherical model, projecting the corrected fisheye images onto the spherical model, and removing the repeated projected region to form a spherical image; (5) removing, with a smoke removal model, the interference of smoke with flame identification in the spherical image; (6) acquiring a dynamic region on the spherical image; (7) generating a perspective image of the dynamic region; (8) normalizing the perspective image, using it as the input of a trained seven-layer convolutional neural network, and identifying whether the dynamic region is flame; if it is flame, entering step (9), otherwise ending the operation; (9) displaying the identification result and generating alarm information.

Description

Flame identification method based on deep learning model
Technical Field
The invention belongs to the field of communication, and particularly relates to a flame identification method based on a deep learning model.
Background
With the continuous advance of industrialization and urbanization in China, large public buildings with modern facilities are developing toward large spaces, great depths and complex functions. This places higher demands on smoke and fire prevention for such spaces and on the reliable, stable and high-precision design and operation of fire-safety systems.
An important current research problem in the field of safety engineering is how to use technologies from the information field to improve the perception of the fire scene and the speed and accuracy of early-warning handling of fires.
Chen et al. designed a subway-train fire detection and alarm system based on fuzzy features of fire-smoke images, using an image spatial-domain differencing method to detect the outbreak of fire. Li et al. extracted the flame target from images filtered through an infrared filter, performing flame recognition through image segmentation, image enhancement and feature extraction. Wang et al. analyzed the dynamic and static characteristics of flames, including edge jitter, area variation, shape, color and texture. However, such flame recognition systems share the following problems:
(1) traditional methods have a high probability of false or missed detection;
(2) determining whether flame characteristics exist in a dynamic region of the scene requires algorithms of high complexity;
(3) if a detected region is not flame but has texture features similar to flame, false detection is likely;
(4) the observable fire field of view is small.
Disclosure of Invention
Purpose of the invention: to address the above problems in the prior art, the invention discloses a flame identification method based on a deep learning model.
The technical scheme is as follows: a flame identification method based on a deep learning model comprises the following steps (an illustrative code sketch of the overall loop follows the list):
(1) acquiring video information through two back-to-back fisheye cameras, reading each frame image of the acquired video information, and performing Gaussian filtering on the acquired fisheye image to obtain a filtered fisheye image;
(2) correcting the internal parameters of the fisheye image obtained in the step (1), and then entering the step (3), wherein the internal parameters comprise a tangential error, a radial error and an optical center error;
(3) correcting external parameters of the fisheye image, and entering the step (4);
(4) constructing a spherical model, projecting the fisheye image corrected in the step (3) onto the spherical model, removing a repeated region projected on the spherical model to form a spherical image, and then entering the step (5);
(5) removing, through a smoke removal model, the interference of smoke with flame identification in the spherical image obtained in step (4), and then entering step (6);
(6) acquiring a dynamic region on the spherical image by adopting an improved Codebook method, and then entering the step (7);
(7) generating a perspective image of the dynamic region part acquired in the step (6);
(8) normalizing the perspective image obtained in step (7), using it as the input of a trained seven-layer convolutional neural network, and identifying whether the dynamic region is flame; if it is flame, entering step (9), otherwise ending the operation;
(9) transmitting the identification result to a lower computer, which displays the result and generates alarm information.
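As a minimal sketch of the overall loop in steps (1) through (9), the Python/OpenCV fragment below shows one possible reading; the camera device indices, the 5 × 5 Gaussian kernel, and the MOG2 background subtractor (standing in for the improved Codebook method of step (6)) are illustrative assumptions, not the patented implementation.

```python
import cv2

def main():
    front = cv2.VideoCapture(0)   # assumed device index of the front fisheye camera
    back = cv2.VideoCapture(1)    # assumed device index of the back fisheye camera
    bg_model = cv2.createBackgroundSubtractorMOG2()  # stand-in for the improved Codebook

    while True:
        ok_f, frame_f = front.read()
        ok_b, frame_b = back.read()
        if not (ok_f and ok_b):
            break
        # Step (1): Gaussian filtering of each fisheye frame (5x5 kernel assumed).
        frame_f = cv2.GaussianBlur(frame_f, (5, 5), 0)
        frame_b = cv2.GaussianBlur(frame_b, (5, 5), 0)
        # Steps (2)-(5): intrinsic/extrinsic correction, spherical projection and
        # smoke removal are sketched separately in the detailed description; here
        # the filtered front frame stands in for the stitched spherical image.
        sphere = frame_f
        # Step (6): dynamic-region extraction on the spherical image.
        mask = bg_model.apply(sphere)
        if mask.any():
            pass  # steps (7)-(9): perspective patch, CNN classification, alarm

if __name__ == "__main__":
    main()
```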
Further, when the fisheye image in step (3) is converted into a 640 × 480 pixel image and the number of corner points detected on the fisheye image is greater than 300, step (3) includes the following steps:
(31) generating, on the spherical model, a normalized image block of fixed perspective angle and size centered on each detected feature point;
(32) calculating the correlation values of feature-point pairs within a certain range on two adjacent frame images;
(33) finally, obtaining the optimal matching pairs and optimizing the relative pose of the cameras.
Further, when the fisheye image in step (3) is converted into a 640 × 480 pixel image and the number of corner points detected on the fisheye image is less than 300, step (3) includes the following steps:
(31) obtaining vanishing-point pairs on the spherical model using the edges and geometric structures detected on the image;
(32) obtaining, on the basis of the vanishing-point pairs, a transformation relation of structures on the image, so as to calculate and optimize the relative pose of the cameras and further refine the external parameters.
Further, when the angle of the repeated field of view of the large-field-of-view image in step (3) is greater than 15 degrees, step (3) includes the following steps:
(31) generating a large-angle perspective image through the spherical model;
(32) calculating the degree of correlation over the repeated field of view of the perspective images, so as to adjust the camera pose by feedback, and regenerating the perspective images for optimization after the adjustment.
Beneficial effects: the flame identification method based on a deep learning model disclosed by the invention has the following advantages:
1. a low rate of false detection and missed detection;
2. a large observable field of view of the fire scene.
Drawings
FIG. 1 is a flow chart of a flame identification method based on a deep learning model according to the present disclosure;
FIG. 2 is a schematic view of a spherical model;
fig. 3 is a perspective image schematic diagram based on a spherical image.
Detailed Description
the following describes in detail specific embodiments of the present invention.
As shown in fig. 1 to 3, a flame recognition method based on a deep learning model includes the following steps:
(1) acquiring video information through two back-to-back fisheye cameras, reading each frame image of the acquired video information, and performing Gaussian filtering on the acquired fisheye image to obtain a filtered fisheye image;
(2) correcting the internal parameters of the fisheye image obtained in the step (1), and then entering the step (3), wherein the internal parameters comprise a tangential error, a radial error and an optical center error;
(3) correcting external parameters of the fisheye image, and entering the step (4);
(4) constructing a spherical model, projecting the fisheye image corrected in the step (3) onto the spherical model, removing a repeated region projected on the spherical model to form a spherical image, and then entering the step (5);
(5) removing, through a smoke removal model, the interference of smoke with flame identification in the spherical image obtained in step (4), and then entering step (6);
(6) acquiring a dynamic region on the spherical image by adopting an improved Codebook method, and then entering the step (7);
(7) generating a perspective image of the dynamic region part acquired in the step (6);
(8) normalizing the perspective image obtained in step (7), using it as the input of a trained seven-layer convolutional neural network (a sketch of one possible architecture follows these steps), and identifying whether the dynamic region is flame; if it is flame, entering step (9), otherwise ending the operation;
(9) transmitting the identification result to a lower computer, which displays the result and generates alarm information.
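The patent text does not spell out the layer composition of the seven-layer network of step (8). The PyTorch sketch below shows one plausible LeNet-style reading (three convolutional layers, two pooling layers and two fully connected layers); the 64 × 64 RGB input size and the binary flame/non-flame output are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FlameNet(nn.Module):
    """Assumed seven-layer CNN: 3 conv + 2 pool + 2 fully connected."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(),   # layer 1: conv, 64x64
            nn.MaxPool2d(2),                             # layer 2: pool -> 32x32
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(),  # layer 3: conv
            nn.MaxPool2d(2),                             # layer 4: pool -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # layer 5: conv
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),     # layer 6: fully connected
            nn.Linear(128, 2),                           # layer 7: flame / not flame
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: x = torch.randn(1, 3, 64, 64); logits = FlameNet()(x)
```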
Further, when the fisheye image in step (3) is converted into a 640 × 480 pixel image and the number of corner points detected on the fisheye image is greater than 300, step (3) includes the following steps, sketched in code below:
(31) generating, on the spherical model, a normalized image block of fixed perspective angle and size centered on each detected feature point;
(32) calculating the correlation values of feature-point pairs within a certain range on two adjacent frame images;
(33) finally, obtaining the optimal matching pairs and optimizing the relative pose of the cameras.
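The following OpenCV sketch illustrates this corner-based branch by matching patches between two adjacent frames with normalized cross-correlation. The patch size, search radius and acceptance threshold are illustrative assumptions, and the spherical normalization of the patches in step (31) is replaced by plain image patches for brevity.

```python
import cv2

def match_corners(img_a, img_b, max_corners=300, patch=15, search=20):
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray_a, max_corners, 0.01, 10)
    if corners is None:
        return []
    half = patch // 2
    matches = []
    for x, y in corners.reshape(-1, 2).astype(int):
        if not (half <= x < gray_a.shape[1] - half and
                half <= y < gray_a.shape[0] - half):
            continue
        tmpl = gray_a[y - half:y + half + 1, x - half:x + half + 1]
        # search window around the same location in the second frame
        x0, y0 = max(x - search, 0), max(y - search, 0)
        x1 = min(x + search + half + 1, gray_b.shape[1])
        y1 = min(y + search + half + 1, gray_b.shape[0])
        win = gray_b[y0:y1, x0:x1]
        if win.shape[0] < patch or win.shape[1] < patch:
            continue
        res = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:  # assumed acceptance threshold
            matches.append(((x, y), (x0 + loc[0] + half, y0 + loc[1] + half), score))
    return matches  # the best pairs feed the pose optimization of step (33)
```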
Further, when the fisheye image in step (3) is converted into a 640 × 480 pixel image and the number of corner points detected on the fisheye image is less than 300, step (3) includes the following steps, sketched in code below:
(31) obtaining vanishing-point pairs on the spherical model using the edges and geometric structures detected on the image;
(32) obtaining, on the basis of the vanishing-point pairs, a transformation relation of structures on the image, so as to calculate and optimize the relative pose of the cameras and further refine the external parameters.
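As a hedged sketch of this low-texture branch, candidate vanishing points can be taken as pairwise intersections of long line segments detected by a probabilistic Hough transform; lifting the points onto the spherical model and the pose optimization of step (32) are omitted, and all thresholds are assumptions.

```python
import cv2
import numpy as np

def vanishing_point_candidates(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=5)
    if lines is None:
        return []
    segs = lines.reshape(-1, 4).astype(float)
    candidates = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            # homogeneous line through each segment, then their intersection
            l1 = np.cross(np.append(segs[i][:2], 1.0), np.append(segs[i][2:], 1.0))
            l2 = np.cross(np.append(segs[j][:2], 1.0), np.append(segs[j][2:], 1.0))
            p = np.cross(l1, l2)
            if abs(p[2]) > 1e-6:
                candidates.append((p[0] / p[2], p[1] / p[2]))
    return candidates  # candidate vanishing points for the pose estimate of step (32)
```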
Further, when the angle of the repeated field of view of the large-field-of-view image in step (3) is greater than 15 degrees, step (3) includes the following steps, with the correlation score sketched in code below:
(31) generating a large-angle perspective image through the spherical model;
(32) calculating the degree of correlation over the repeated field of view of the perspective images, so as to adjust the camera pose by feedback, and regenerating the perspective images for optimization after the adjustment.
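A small sketch of the correlation score of step (32), computed as plain normalized cross-correlation between the two renderings of the repeated field of view; the rendering itself and the feedback adjustment of the camera pose are assumed to happen elsewhere, and the 0.9 acceptance bound is an assumption.

```python
import numpy as np

def overlap_correlation(view_a, view_b):
    """Normalized cross-correlation between two same-sized overlap renderings."""
    a = view_a.astype(np.float64).ravel()
    b = view_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# e.g. iterate: adjust the pose while overlap_correlation(va, vb) < 0.9
```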
Fig. 2 is a schematic view of the spherical model. In the calculation, each pixel point on the fisheye image is projected to an azimuth angle φ and an elevation angle θ on the spherical model. As shown in fig. 2, the spatial point P is projected to the point p on the spherical model, and the azimuth angle φ and the elevation angle θ are calculated from the position of the projection point. For any point (θ, φ) on the spherical surface, the coordinates (x_p, y_p, z_p) in three-dimensional space are calculated. In the processing, three-dimensional points are calculated for each of the two fisheye images. The front image is projected directly to the spatial coordinates (x_p, y_p, z_p) as in equation (1); for the back image, the spatial coordinates are transformed before projection onto the sphere as in equation (2), i.e. the coordinates are rotated 90 degrees counterclockwise:

x_p = cos θ · cos φ, y_p = cos θ · sin φ, z_p = sin θ (1)

(x_p, y_p, z_p) → (−y_p, x_p, z_p) (2)

It follows that a repeated range of (θ, φ) is obtained on both images; within this repeated range, only the pixel points of one image are taken for projection. This effectively solves the problem of the repeated region in the projection.
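A minimal sketch of equations (1) and (2) as reconstructed above, mapping a spherical sample (θ, φ) to unit-sphere coordinates and applying the 90-degree counterclockwise rotation for the back camera; the function names are illustrative.

```python
import numpy as np

def sphere_point(theta, phi):
    """Equation (1): elevation theta / azimuth phi to unit-sphere coordinates."""
    return np.array([np.cos(theta) * np.cos(phi),
                     np.cos(theta) * np.sin(phi),
                     np.sin(theta)])

def back_sphere_point(theta, phi):
    """Equation (2): the back camera's point after the 90-degree counterclockwise
    rotation about the z axis, (x, y, z) -> (-y, x, z)."""
    x, y, z = sphere_point(theta, phi)
    return np.array([-y, x, z])

# Within the repeated (theta, phi) range of the two cameras, pixels are taken
# from only one image, as the text prescribes.
```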
Fig. 3 is a schematic diagram of a perspective image based on the spherical image. O is the center of the sphere, and a perspective image block of fixed length and width is generated centered on an arbitrary point p on the spherical surface. P is the center point of the perspective image block, and O, p and P are collinear. The radius of the sphere is r_s; in the general case r_s = 1, i.e. a unit sphere. l_s is the arc length on the sphere from the projection of the perspective-image boundary to the point p, and φ is the angle between the projection line of the perspective-image boundary and the line through O, p and P. For a fixed perspective-image size, the larger φ is, the larger the field of view of the perspective image and the closer the image lies to the spherical surface; conversely it lies farther from the sphere. A zoom effect is achieved by adjusting the angle φ. With the spherical model established, perspective image blocks of variable focal length can be acquired in any viewpoint direction, covering 360 degrees without dead angles, which lays the foundation for the subsequent flame identification.
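The perspective-image generation of fig. 3 corresponds to a gnomonic projection from the unit sphere onto a tangent plane. Below is a hedged sketch assuming the spherical image is stored in equirectangular form; the half-angle parameter fov plays the role of the angle φ in the text (widening it enlarges the field of view), and the output size and bilinear sampling are illustrative choices.

```python
import cv2
import numpy as np

def perspective_patch(sphere_img, theta0, phi0, fov=np.deg2rad(30), size=256):
    """Gnomonic patch centered at elevation theta0 / azimuth phi0."""
    h, w = sphere_img.shape[:2]
    f = (size / 2) / np.tan(fov)                   # wider fov -> larger field of view
    u, v = np.meshgrid(np.arange(size) - size / 2,
                       np.arange(size) - size / 2)
    # ray through each patch pixel in the tangent-plane camera frame
    d = np.stack([u, -v, np.full(u.shape, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # orthonormal frame whose third axis points at the patch center
    ax = np.array([-np.sin(phi0), np.cos(phi0), 0.0])
    az = np.array([np.cos(theta0) * np.cos(phi0),
                   np.cos(theta0) * np.sin(phi0),
                   np.sin(theta0)])
    ay = np.cross(az, ax)
    world = d[..., 0:1] * ax + d[..., 1:2] * ay + d[..., 2:3] * az
    theta = np.arcsin(np.clip(world[..., 2], -1.0, 1.0))
    phi = np.arctan2(world[..., 1], world[..., 0])
    # look up the equirectangular spherical image with bilinear interpolation
    map_x = ((phi + np.pi) / (2 * np.pi) * w).astype(np.float32)
    map_y = ((np.pi / 2 - theta) / np.pi * h).astype(np.float32)
    return cv2.remap(sphere_img, map_x, map_y, cv2.INTER_LINEAR)
```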
The embodiments of the present invention have been described in detail. However, the present invention is not limited to the above-described embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (4)

1. A flame identification method based on a deep learning model is characterized by comprising the following steps:
(1) acquiring video information through two back-to-back fisheye cameras, reading each frame image of the acquired video information, and performing Gaussian filtering on the acquired fisheye image to obtain a filtered fisheye image;
(2) correcting the internal parameters of the fisheye image obtained in the step (1), and then entering the step (3), wherein the internal parameters comprise a tangential error, a radial error and an optical center error;
(3) correcting external parameters of the fisheye image, and entering the step (4);
(4) constructing a spherical model, projecting the fisheye image corrected in the step (3) onto the spherical model, removing a repeated region projected on the spherical model to form a spherical image, and then entering the step (5);
(5) removing, through a smoke removal model, the interference of smoke with flame identification in the spherical image obtained in step (4), and then entering step (6);
(6) acquiring a dynamic region on the spherical image by adopting an improved Codebook method, and then entering the step (7);
(7) generating a perspective image of the dynamic region part acquired in the step (6);
(8) normalizing the perspective image obtained in step (7), using it as the input of a trained seven-layer convolutional neural network, and identifying whether the dynamic region is flame; if it is flame, entering step (9), otherwise ending the operation;
(9) transmitting the identification result to a lower computer, which displays the result and generates alarm information.
2. The flame identification method based on the deep learning model as claimed in claim 1, wherein when the fisheye image in step (3) is converted into a 640 × 480 pixel map, if the number of corner points detected on the fisheye image is greater than 300, step (3) comprises the following steps:
(31) generating, on the spherical model, a normalized image block of fixed perspective angle and size centered on each detected feature point;
(32) calculating the correlation values of feature-point pairs within a certain range on two adjacent frame images;
(33) finally, obtaining the optimal matching pairs and optimizing the relative pose of the cameras.
3. The flame identification method based on the deep learning model as claimed in claim 1, wherein when the fisheye image in step (3) is converted into a 640 x 480 pixel map, if the number of corner points detected on the fisheye image is less than 300, step (3) comprises the following steps:
(31) obtaining vanishing-point pairs on the spherical model using the edges and geometric structures detected on the image;
(32) obtaining, on the basis of the vanishing-point pairs, a transformation relation of structures on the image, so as to calculate and optimize the relative pose of the camera and further optimize the external parameters.
4. The flame recognition method based on the deep learning model as claimed in claim 1, wherein when the angle of the repeated field of view of the large-field-of-view image in the corrected fisheye image in step (3) is greater than 15 degrees, step (3) comprises the following steps:
(31) generating a large-angle perspective image through the spherical model;
(32) calculating the degree of correlation over the repeated field of view of the perspective images, so as to adjust the camera pose by feedback, and regenerating the perspective images for optimization after the adjustment.
CN201710047239.8A 2017-01-22 2017-01-22 Flame identification method based on deep learning model Expired - Fee Related CN106845410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710047239.8A CN106845410B (en) 2017-01-22 2017-01-22 Flame identification method based on deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710047239.8A CN106845410B (en) 2017-01-22 2017-01-22 Flame identification method based on deep learning model

Publications (2)

Publication Number Publication Date
CN106845410A CN106845410A (en) 2017-06-13
CN106845410B true CN106845410B (en) 2020-08-25

Family

ID=59120999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710047239.8A Expired - Fee Related CN106845410B (en) 2017-01-22 2017-01-22 Flame identification method based on deep learning model

Country Status (1)

Country Link
CN (1) CN106845410B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830143A (en) * 2018-05-03 2018-11-16 深圳市中电数通智慧安全科技股份有限公司 A kind of video analytic system based on deep learning
CN108985374A (en) * 2018-07-12 2018-12-11 天津艾思科尔科技有限公司 A kind of flame detecting method based on dynamic information model
CN109165575B (en) * 2018-08-06 2024-02-20 天津艾思科尔科技有限公司 Pyrotechnic recognition algorithm based on SSD frame
US11927944B2 (en) * 2019-06-07 2024-03-12 Honeywell International, Inc. Method and system for connected advanced flare analytics
CN111310662B (en) * 2020-02-17 2021-08-31 淮阴工学院 Flame detection and identification method and system based on integrated deep network
CN113450530A (en) * 2021-06-03 2021-09-28 河北华电石家庄热电有限公司 Safety early warning system based on intelligent video analysis algorithm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441712A (en) * 2008-12-25 2009-05-27 北京中星微电子有限公司 Flame video recognition method and fire hazard monitoring method and system
CN101625789A (en) * 2008-07-07 2010-01-13 北京东方泰坦科技有限公司 Method for monitoring forest fire in real time based on intelligent identification of smoke and fire
CN102163358A (en) * 2011-04-11 2011-08-24 杭州电子科技大学 Smoke/flame detection method based on video image analysis
CN104200468A (en) * 2014-08-28 2014-12-10 中国矿业大学 Method for obtaining correction parameter of spherical perspective projection model
CN104581076A (en) * 2015-01-14 2015-04-29 国网四川省电力公司电力科学研究院 Mountain fire monitoring and recognizing method and device based on 360-degree panoramic infrared fisheye camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7202794B2 (en) * 2004-07-20 2007-04-10 General Monitors, Inc. Flame detection system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625789A (en) * 2008-07-07 2010-01-13 北京东方泰坦科技有限公司 Method for monitoring forest fire in real time based on intelligent identification of smoke and fire
CN101441712A (en) * 2008-12-25 2009-05-27 北京中星微电子有限公司 Flame video recognition method and fire hazard monitoring method and system
CN102163358A (en) * 2011-04-11 2011-08-24 杭州电子科技大学 Smoke/flame detection method based on video image analysis
CN104200468A (en) * 2014-08-28 2014-12-10 中国矿业大学 Method for obtaining correction parameter of spherical perspective projection model
CN104581076A (en) * 2015-01-14 2015-04-29 国网四川省电力公司电力科学研究院 Mountain fire monitoring and recognizing method and device based on 360-degree panoramic infrared fisheye camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Computer vision based method for real-time fire and flame detection; B. Uğur Töreyin et al.; Pattern Recognition Letters 27; 2005-08-26; pp. 49-58 *
Experimental study on flame recognition based on vision sensors and high-pressure fine-water-mist spraying; Ding Fei; China Masters' Theses Full-text Database, Engineering Science and Technology II; 2014-08-15 (No. 08); full text *

Also Published As

Publication number Publication date
CN106845410A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN106845410B (en) Flame identification method based on deep learning model
CN105225230B (en) A kind of method and device of identification foreground target object
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN109118523A (en) A kind of tracking image target method based on YOLO
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN111680588A (en) Human face gate living body detection method based on visible light and infrared light
CN109308718B (en) Space personnel positioning device and method based on multiple depth cameras
CN108596193B (en) Method and system for building deep learning network structure aiming at human ear recognition
JP2017534046A (en) Building height calculation method, apparatus and storage medium
CN110189375B (en) Image target identification method based on monocular vision measurement
CN111723801B (en) Method and system for detecting and correcting target in fisheye camera picture
CN106033614B (en) A kind of mobile camera motion object detection method under strong parallax
CN106875419A (en) Small and weak tracking of maneuvering target based on NCC matching frame differences loses weight detecting method
CN109086724A (en) A kind of method for detecting human face and storage medium of acceleration
CN107038714B (en) Multi-type visual sensing cooperative target tracking method
CN107560592A (en) A kind of precision ranging method for optronic tracker linkage target
JP2009288885A (en) Lane detection device, lane detection method and lane detection program
CN106295657A (en) A kind of method extracting human height's feature during video data structure
Xu et al. Dynamic obstacle detection based on panoramic vision in the moving state of agricultural machineries
CN106915303A (en) Automobile A-column blind area perspective method based on depth data and fish eye images
CN109919832A (en) One kind being used for unpiloted traffic image joining method
CN114998737A (en) Remote smoke detection method, system, electronic equipment and medium
CN112652020A (en) Visual SLAM method based on AdaLAM algorithm
CN111047636A (en) Obstacle avoidance system and method based on active infrared binocular vision
Chowdhury et al. Robust human detection and localization in security applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200825

Termination date: 20210122