CN106845410A - Flame identification method based on a deep learning model - Google Patents

Flame identification method based on a deep learning model

Info

Publication number
CN106845410A
Authority
CN
China
Prior art keywords
fisheye image
flame
spherical model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710047239.8A
Other languages
Chinese (zh)
Other versions
CN106845410B (en)
Inventor
邓军 (Deng Jun)
秦学斌 (Qin Xuebin)
王伟峰 (Wang Weifeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Science and Technology
Original Assignee
Xian University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Science and Technology filed Critical Xian University of Science and Technology
Priority to CN201710047239.8A priority Critical patent/CN106845410B/en
Publication of CN106845410A publication Critical patent/CN106845410A/en
Application granted granted Critical
Publication of CN106845410B publication Critical patent/CN106845410B/en
Legal status: Expired - Fee Related

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06T3/08
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00Fire alarms; Alarms responsive to explosion
    • G08B17/12Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction

Abstract

The present invention relates to a flame identification method based on a deep learning model. The method comprises: (1) collecting video information, reading each frame image, and applying Gaussian filtering to obtain filtered fisheye images; (2) correcting the internal parameters of the fisheye images; (3) correcting the external parameters of the fisheye images; (4) building a spherical model, projecting the corrected fisheye images onto the spherical model, and removing the regions projected onto the spherical model more than once to form a spherical image; (5) using a smoke-removal model to remove, from the spherical image, the interference that smoke causes to recognition of the flame region; (6) obtaining the dynamic regions of the spherical image; (7) generating perspective images from the dynamic regions; (8) normalizing the perspective images, feeding them to a trained seven-layer convolutional neural network, and deciding whether each dynamic region is flame: if it is, proceeding to step (9), otherwise ending the procedure; (9) displaying the recognition result and issuing an alarm.

Description

Flame identification method based on a deep learning model
Technical field
The invention belongs to the communications field, and relates in particular to a flame identification method based on a deep learning model.
Background technology
With the continuing industrialization and urbanization of China, modern large public buildings are developing toward large spaces, deep interiors, and complex, diversified functions. This places higher requirements on smoke prevention and fire prevention, and on the reliability, stability, and accuracy of the design and operation of fire-safety systems.
At present, using information technology to perceive the fire scene, and thereby to improve the speed and accuracy with which early warning and disposal are carried out at the scene of a fire, is an important research topic in the field of safety engineering.
Chen Wenhui et al. designed a metro-train fire detection and alarm system based on the image-blurring property of fire smoke; the method detects the outbreak of fire using a spatial-domain image differencing method. Li Shiwei et al. obtained filtered images with an infrared filter to extract flame targets. Wang et al. performed flame recognition through steps such as image segmentation, image enhancement, and feature extraction. Wang Zulong et al. analyzed dynamic and static flame features, including edge flicker, area change, shape, color, and texture. However, the above flame identification systems share the following common problems:
(1) traditional methods have relatively high rates of false alarms or missed detections;
(2) deciding whether a dynamic region at the scene exhibits flame characteristics leads to high algorithmic complexity;
(3) a detected region may not actually be flame, or may merely have texture similar to that of flame, which easily leads to false alarms;
(4) the observable field of view for fire monitoring is small.
Summary of the invention
Object of the invention: the present invention addresses the above problems of the prior art by disclosing a flame identification method based on a deep learning model.
Technical solution: a flame identification method based on a deep learning model comprises the following steps (an illustrative end-to-end sketch follows the step list):
(1) collect video information with two back-to-back fisheye cameras, read each frame image of the collected video, then apply Gaussian filtering to the acquired fisheye images to obtain filtered fisheye images;
(2) correct the internal parameters of the fisheye images obtained in step (1), then go to step (3); the internal parameters include tangential error, radial error, and optical-center error;
(3) correct the external parameters of the fisheye images, then go to step (4);
(4) build a spherical model, project the fisheye images corrected in step (3) onto the spherical model, then remove the regions projected onto the spherical model more than once to form a spherical image, then go to step (5);
(5) use a smoke-removal model to remove, from the spherical image obtained in step (4), the interference that smoke causes to recognition of the flame region, then go to step (6);
(6) obtain the dynamic regions of the spherical image using an improved Codebook method, then go to step (7);
(7) generate perspective images from the dynamic regions obtained in step (6);
(8) normalize the perspective images obtained in step (7), use them as the input of a trained seven-layer convolutional neural network, and decide whether each dynamic region is flame; if it is, go to step (9), otherwise end the procedure;
(9) transmit the recognition result to the slave computer, display the recognition result on the slave computer, and issue an alarm.
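For illustration only, the step list above can be organized as the following processing loop. This is a minimal sketch, not the patent's implementation: the helper functions for steps (2)-(5), (7) and (8) are hypothetical placeholders, OpenCV's MOG2 background subtractor is used merely as a stand-in for the improved Codebook method of step (6), and step (7) is approximated by a plain crop instead of a true perspective rendering from the spherical model.

```python
import cv2

# Hypothetical placeholders for steps (2)-(5), (7) and (8); in a real system these
# would implement intrinsic/extrinsic correction, the spherical projection with
# overlap removal, smoke removal, perspective generation and the trained CNN.
def correct_parameters(frame_front, frame_rear):      # steps (2)-(3)
    return frame_front, frame_rear

def project_to_sphere(frame_front, frame_rear):       # step (4)
    return frame_front                                 # placeholder: front view stands in for the sphere image

def remove_smoke(sphere_img):                          # step (5)
    return sphere_img

def classify_flame(patch):                             # step (8): trained 7-layer CNN in the patent
    return False                                        # placeholder decision

def run_pipeline(front_cam_id=0, rear_cam_id=1):
    front = cv2.VideoCapture(front_cam_id)             # step (1): two back-to-back fisheye cameras
    rear = cv2.VideoCapture(rear_cam_id)
    bg = cv2.createBackgroundSubtractorMOG2()          # stand-in for the improved Codebook method, step (6)
    while True:
        ok_f, frame_f = front.read()
        ok_r, frame_r = rear.read()
        if not (ok_f and ok_r):
            break
        frame_f = cv2.GaussianBlur(frame_f, (5, 5), 0)  # step (1): Gaussian filtering
        frame_r = cv2.GaussianBlur(frame_r, (5, 5), 0)
        frame_f, frame_r = correct_parameters(frame_f, frame_r)
        sphere_img = remove_smoke(project_to_sphere(frame_f, frame_r))   # steps (4)-(5)
        mask = bg.apply(sphere_img)                                       # step (6): dynamic regions
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            # step (7) approximated by a crop; the patent renders a perspective view instead
            patch = cv2.resize(sphere_img[y:y + h, x:x + w], (64, 64))    # step (8): normalize size
            if classify_flame(patch):
                print("Flame detected - raise alarm")                     # step (9)
                break
```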
Further, when the fisheye images in step (3) are converted to 640 × 480 pixel images and more than 300 corner points are detected on a fisheye image, step (3) comprises the following sub-steps (an illustrative matching sketch follows this list):
(31) on the spherical model, generate a normalized image patch of fixed perspective and determined size centered on each detected feature point;
(32) compute the correlation of candidate feature-point pairs within a determined range across adjacent frames;
(33) obtain the best matching pairs and optimize the relative pose of the cameras.
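A minimal sketch of sub-steps (31)-(33), under several assumptions not stated in the patent: Shi-Tomasi corners, fixed patch and search sizes, and normalized cross-correlation computed directly in the image plane rather than on normalized perspective patches generated on the spherical model. The pose-optimization step is only indicated by a comment.

```python
import cv2

PATCH = 15          # assumed half-size of the normalized image patch
SEARCH_RADIUS = 30  # assumed search range in the adjacent frame

def match_corners(prev_gray, curr_gray):
    """Sub-steps (31)-(33): detect corners, correlate patches, keep the best pairs."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return []
    matches = []
    for x, y in corners.reshape(-1, 2).astype(int):
        tpl = prev_gray[y - PATCH:y + PATCH + 1, x - PATCH:x + PATCH + 1]
        roi = curr_gray[y - SEARCH_RADIUS:y + SEARCH_RADIUS + 1,
                        x - SEARCH_RADIUS:x + SEARCH_RADIUS + 1]
        if tpl.size == 0 or roi.shape[0] <= tpl.shape[0] or roi.shape[1] <= tpl.shape[1]:
            continue   # skip corners too close to the image border
        # sub-step (32): correlation of the candidate pair within the search range
        res = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:    # sub-step (33): keep only confident best matches
            matches.append(((x, y),
                            (x - SEARCH_RADIUS + loc[0] + PATCH,
                             y - SEARCH_RADIUS + loc[1] + PATCH)))
    return matches   # these pairs would then feed the relative-pose optimization
```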
Further, when the fisheye images in step (3) are converted to 640 × 480 pixel images and fewer than 300 corner points are detected on a fisheye image, step (3) comprises the following sub-steps (an illustrative vanishing-point sketch follows this list):
(31) using the edges and geometric structures detected in the image, obtain vanishing-point pairs on the spherical model;
(32) derive the transformation relation of the structures in the image from the vanishing-point pairs, then compute and optimize the relative pose of the cameras, thereby optimizing the parameters.
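A minimal sketch of how a vanishing point might be estimated from detected edges, as in sub-step (31). The patent gives no algorithmic detail, so the Canny/Hough line detection and the least-squares intersection below are assumptions, and lifting the result onto the spherical model and the pose optimization of sub-step (32) are omitted.

```python
import cv2
import numpy as np

def estimate_vanishing_point(gray):
    """Rough vanishing-point estimate from detected edges (illustrative only)."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    if lines is None or len(lines) < 2:
        return None
    # Each segment (x1, y1)-(x2, y2) defines a line a*x + b*y = c.
    A, c = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        a, b = y2 - y1, x1 - x2
        A.append([a, b])
        c.append(a * x1 + b * y1)
    # Least-squares intersection of all detected lines approximates a dominant vanishing point.
    vp, *_ = np.linalg.lstsq(np.array(A, float), np.array(c, float), rcond=None)
    return vp   # (x, y) in image coordinates; would then be lifted onto the sphere
```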
Further, when the angular extent of the overlapping field of view of the wide field-of-view images in step (3) exceeds 15 degrees, step (3) comprises the following sub-steps (an illustrative correlation-feedback sketch follows this list):
(31) generate wide-angle perspective images from the spherical model;
(32) compute the correlation over the overlapping field of view of the perspective images, use it as feedback to adjust the camera pose, and regenerate the perspective images after adjustment so as to optimize the result.
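A minimal sketch of the correlation-feedback idea of sub-steps (31)-(32). `render_perspective(pose)` is a hypothetical function that would render the two perspective views of the overlapping field from the spherical model for a given camera pose (assumed here to be a NumPy array of pose parameters); the greedy perturbation search is only an illustration, not the patent's optimizer.

```python
import numpy as np

def overlap_correlation(view_a, view_b):
    """Normalized correlation of the two renderings of the overlapping field."""
    a = (view_a - view_a.mean()) / (view_a.std() + 1e-9)
    b = (view_b - view_b.mean()) / (view_b.std() + 1e-9)
    return float((a * b).mean())

def refine_pose(render_perspective, pose, step=0.5, iters=20):
    """Greedy feedback loop: perturb the pose, keep changes that raise the correlation."""
    best_score = overlap_correlation(*render_perspective(pose))
    for _ in range(iters):
        improved = False
        for axis in range(len(pose)):
            for delta in (+step, -step):
                candidate = pose.copy()
                candidate[axis] += delta
                score = overlap_correlation(*render_perspective(candidate))
                if score > best_score:
                    pose, best_score, improved = candidate, score, True
        if not improved:
            step *= 0.5   # shrink the search once no perturbation helps
    return pose, best_score
```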
Beneficial effects: the flame identification method based on a deep learning model disclosed by the invention has the following beneficial effects:
1. low false-alarm and missed-detection rates;
2. a large observable field of view for fire monitoring.
Brief description of the drawings
Fig. 1 is a flow chart of the flame identification method based on a deep learning model disclosed by the invention;
Fig. 2 is a schematic diagram of the spherical model;
Fig. 3 is a schematic diagram of a perspective image derived from the spherical image.
Specific embodiments:
Specific embodiments of the invention are described in detail below.
As shown in Figures 1 to 3, a flame identification method based on a deep learning model comprises the following steps:
(1) collect video information with two back-to-back fisheye cameras, read each frame image of the collected video, then apply Gaussian filtering to the acquired fisheye images to obtain filtered fisheye images;
(2) correct the internal parameters of the fisheye images obtained in step (1), then go to step (3); the internal parameters include tangential error, radial error, and optical-center error;
(3) correct the external parameters of the fisheye images, then go to step (4);
(4) build a spherical model, project the fisheye images corrected in step (3) onto the spherical model, then remove the regions projected onto the spherical model more than once to form a spherical image, then go to step (5);
(5) use a smoke-removal model to remove, from the spherical image obtained in step (4), the interference that smoke causes to recognition of the flame region, then go to step (6);
(6) obtain the dynamic regions of the spherical image using an improved Codebook method, then go to step (7);
(7) generate perspective images from the dynamic regions obtained in step (6);
(8) normalize the perspective images obtained in step (7), use them as the input of a trained seven-layer convolutional neural network (an assumed network sketch follows this step list), and decide whether each dynamic region is flame; if it is, go to step (9), otherwise end the procedure;
(9) transmit the recognition result to the slave computer, display the recognition result on the slave computer, and issue an alarm.
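The patent states only that a trained seven-layer convolutional neural network classifies each normalized perspective image as flame or non-flame; the architecture is not disclosed. The PyTorch module below is therefore purely an assumed example of what a seven-layer network (four convolutional plus three fully connected layers) might look like for 64 × 64 input patches; every layer size is a guess.

```python
import torch
import torch.nn as nn

class SevenLayerFlameNet(nn.Module):
    """Assumed 7-layer CNN: 4 convolutional + 3 fully connected layers (sizes are guesses)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 2),            # two outputs: flame / not flame
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: a normalized 64x64 RGB perspective patch -> flame probability
net = SevenLayerFlameNet().eval()
patch = torch.rand(1, 3, 64, 64)                 # placeholder input
prob_flame = torch.softmax(net(patch), dim=1)[0, 1].item()
```

In practice such a network would be trained on labelled flame and non-flame perspective patches before being used in step (8).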
Further, when the fisheye images in step (3) are converted to 640 × 480 pixel images and more than 300 corner points are detected on a fisheye image, step (3) comprises the following sub-steps:
(31) on the spherical model, generate a normalized image patch of fixed perspective and determined size centered on each detected feature point;
(32) compute the correlation of candidate feature-point pairs within a determined range across adjacent frames;
(33) obtain the best matching pairs and optimize the relative pose of the cameras.
Further, when the fisheye images in step (3) are converted to 640 × 480 pixel images and fewer than 300 corner points are detected on a fisheye image, step (3) comprises the following sub-steps:
(31) using the edges and geometric structures detected in the image, obtain vanishing-point pairs on the spherical model;
(32) derive the transformation relation of the structures in the image from the vanishing-point pairs, then compute and optimize the relative pose of the cameras, thereby optimizing the parameters.
Further, when the angular extent of the overlapping field of view of the wide field-of-view images in step (3) exceeds 15 degrees, step (3) comprises the following sub-steps:
(31) generate wide-angle perspective images from the spherical model;
(32) compute the correlation over the overlapping field of view of the perspective images, use it as feedback to adjust the camera pose, and regenerate the perspective images after adjustment so as to optimize the result.
Fig. 2 is a schematic diagram of the spherical model. During computation, each pixel of a fisheye image is projected onto the spherical model, giving an azimuth angle and an elevation angle θ. As shown in Fig. 2, a spatial point P projects to the point p on the spherical model, and the azimuth and elevation θ can be computed from the position of the projected point. For any point on the sphere, given its azimuth and elevation, the three-dimensional coordinates (xp, yp, zp) are computed. During processing, the three-dimensional points of the two fisheye images are computed separately. For the front image, the spatial coordinates (xp, yp, zp) are computed directly, as shown in formula (1); for the rear image projected onto the sphere, the spatial coordinate transformation is as shown in formula (2), in which the coordinates are rotated 90 degrees counter-clockwise.
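Formulas (1) and (2) are not reproduced in this text, so the sketch below only illustrates the general idea using a standard equidistant fisheye model and the usual spherical-coordinate relation; the symbol names and the equidistant assumption are illustrative choices, not taken from the patent.

```python
import numpy as np

def fisheye_pixel_to_sphere(u, v, cx, cy, focal, rear=False):
    """Map a fisheye pixel (u, v) to a point on the unit sphere (illustrative only).

    Assumes an equidistant fisheye model: the distance from the optical centre is
    proportional to the angle from the optical axis.
    """
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    angle_from_axis = r / focal             # equidistant model assumption
    azimuth = np.arctan2(dy, dx)
    # Unit-sphere coordinates with the optical axis along +z
    xp = np.sin(angle_from_axis) * np.cos(azimuth)
    yp = np.sin(angle_from_axis) * np.sin(azimuth)
    zp = np.cos(angle_from_axis)
    if rear:
        # Back-to-back mounting: flip the viewing direction (an assumption of this
        # sketch) and rotate the coordinates 90 degrees counter-clockwise, as the
        # text says formula (2) does.
        xp, yp, zp = -yp, xp, -zp
    return xp, yp, zp
```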
From this, the range of azimuth and elevation θ that is repeated on the two images can be determined; once this repeated range is obtained, only the pixels of one of the two images are projected within it. In this way the repeated region arising during projection can be effectively eliminated.
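A minimal sketch of this overlap-removal rule, assuming the spherical image is stored as an equirectangular (elevation × azimuth) array and that validity masks for the two projections are available; within the overlapping range only the front image's pixels are kept.

```python
import numpy as np

def merge_sphere_images(front_sph, rear_sph, front_valid, rear_valid):
    """Merge the two projections onto one spherical (equirectangular) image.

    front_sph / rear_sph   : HxWx3 arrays indexed by (elevation, azimuth).
    front_valid / rear_valid : HxW boolean masks of where each camera projects.
    In the overlapping range only the front image's pixels are used.
    """
    out = np.zeros_like(front_sph)
    overlap = front_valid & rear_valid
    out[front_valid] = front_sph[front_valid]      # front wins everywhere it is valid
    rear_only = rear_valid & ~front_valid          # rear fills the remaining area
    out[rear_only] = rear_sph[rear_only]
    return out, overlap   # the overlap mask is returned for inspection
```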
Fig. 3 is a schematic diagram of a perspective image derived from the spherical image. O is the sphere center, and a perspective image patch of fixed width and height is generated centered on an arbitrary point p on the sphere. P is the center point of the perspective image patch, and O, p, and P are collinear. The radius of the sphere is rs, normally rs = 1, i.e. a unit sphere. ls is the arc length from the point where the projection line of the perspective image boundary through the sphere center intersects the sphere to the point p, and φ is the angle between that boundary projection line and the line OpP. If the size of the perspective image is fixed, then the larger φ is, the larger the field of view of the perspective image and the closer the perspective image lies to the sphere; conversely, the perspective image moves away from the sphere. Adjusting the angle φ therefore achieves the effect of changing the focal length. With this spherical model, perspective image patches with variable zoom can be obtained in any viewing direction over 360 degrees without blind spots, which lays a good foundation for the subsequent flame identification.
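A minimal sketch of generating a perspective image patch centred on a point p of the unit sphere, with the boundary angle φ controlling the field of view as described above. The tangent-plane (gnomonic-style) construction and the equirectangular lookup are illustrative assumptions rather than the patent's exact formulation.

```python
import numpy as np

def perspective_patch(sphere_img, center_phi, center_theta, half_angle, size=256):
    """Sample a size x size perspective patch from an equirectangular spherical image.

    center_phi / center_theta : azimuth and elevation of the patch centre p (radians).
    half_angle : the boundary angle of the text - a larger value gives a wider field
                 of view, equivalent to shortening the focal length.
    """
    h, w = sphere_img.shape[:2]
    f = (size / 2) / np.tan(half_angle)      # pinhole focal length implied by the angle
    xs, ys = np.meshgrid(np.arange(size) - size / 2,
                         np.arange(size) - size / 2)
    # Orthonormal frame: 'forward' points at the patch centre, 'right'/'down' span it
    forward = np.array([np.cos(center_theta) * np.cos(center_phi),
                        np.cos(center_theta) * np.sin(center_phi),
                        np.sin(center_theta)])
    right = np.array([-np.sin(center_phi), np.cos(center_phi), 0.0])
    down = np.cross(forward, right)
    # World-space ray direction of every patch pixel (tangent-plane construction)
    d = xs[..., None] * right + ys[..., None] * down + f * forward
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # Convert rays back to azimuth/elevation, then to equirectangular pixel indices
    phi = np.arctan2(d[..., 1], d[..., 0])
    theta = np.arcsin(np.clip(d[..., 2], -1.0, 1.0))
    u = ((phi + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((theta + np.pi / 2) / np.pi * (h - 1)).astype(int)
    return sphere_img[v, u]
```

Because f = (size/2)/tan(φ) in this sketch, increasing φ shortens the implied focal length and widens the field of view, matching the zoom behaviour described above.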
The embodiments of the present invention have been described in detail above. However, the present invention is not limited to the above embodiments; within the knowledge possessed by a person of ordinary skill in the art, various changes may also be made without departing from the concept of the present invention.

Claims (4)

1. A flame identification method based on a deep learning model, characterized in that it comprises the following steps:
(1) collect video information with two back-to-back fisheye cameras, read each frame image of the collected video, then apply Gaussian filtering to the acquired fisheye images to obtain filtered fisheye images;
(2) correct the internal parameters of the fisheye images obtained in step (1), then go to step (3); the internal parameters include tangential error, radial error, and optical-center error;
(3) correct the external parameters of the fisheye images, then go to step (4);
(4) build a spherical model, project the fisheye images corrected in step (3) onto the spherical model, then remove the regions projected onto the spherical model more than once to form a spherical image, then go to step (5);
(5) use a smoke-removal model to remove, from the spherical image obtained in step (4), the interference that smoke causes to recognition of the flame region, then go to step (6);
(6) obtain the dynamic regions of the spherical image using an improved Codebook method, then go to step (7);
(7) generate perspective images from the dynamic regions obtained in step (6);
(8) normalize the perspective images obtained in step (7), use them as the input of a trained seven-layer convolutional neural network, and decide whether each dynamic region is flame; if it is, go to step (9), otherwise end the procedure;
(9) transmit the recognition result to the slave computer, display the recognition result on the slave computer, and issue an alarm.
2. The flame identification method based on a deep learning model according to claim 1, characterized in that, when the fisheye images in step (3) are converted to 640 × 480 pixel images and more than 300 corner points are detected on a fisheye image, step (3) comprises the following sub-steps:
(31) on the spherical model, generate a normalized image patch of fixed perspective and determined size centered on each detected feature point;
(32) compute the correlation of candidate feature-point pairs within a determined range across adjacent frames;
(33) obtain the best matching pairs and optimize the relative pose of the cameras.
3. The flame identification method based on a deep learning model according to claim 1, characterized in that, when the fisheye images in step (3) are converted to 640 × 480 pixel images and fewer than 300 corner points are detected on a fisheye image, step (3) comprises the following sub-steps:
(31) using the edges and geometric structures detected in the image, obtain vanishing-point pairs on the spherical model;
(32) derive the transformation relation of the structures in the image from the vanishing-point pairs, then compute and optimize the relative pose of the cameras, thereby optimizing the parameters.
4. The flame identification method based on a deep learning model according to claim 1, characterized in that, when the angular extent of the overlapping field of view of the wide field-of-view images in step (3) exceeds 15 degrees, step (3) comprises the following sub-steps:
(31) generate wide-angle perspective images from the spherical model;
(32) compute the correlation over the overlapping field of view of the perspective images, use it as feedback to adjust the camera pose, and regenerate the perspective images after adjustment so as to optimize the result.
CN201710047239.8A 2017-01-22 2017-01-22 Flame identification method based on deep learning model Expired - Fee Related CN106845410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710047239.8A CN106845410B (en) 2017-01-22 2017-01-22 Flame identification method based on deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710047239.8A CN106845410B (en) 2017-01-22 2017-01-22 Flame identification method based on deep learning model

Publications (2)

Publication Number Publication Date
CN106845410A 2017-06-13
CN106845410B 2020-08-25

Family

ID=59120999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710047239.8A Expired - Fee Related CN106845410B (en) 2017-01-22 2017-01-22 Flame identification method based on deep learning model

Country Status (1)

Country Link
CN (1) CN106845410B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060017578A1 (en) * 2004-07-20 2006-01-26 Shubinsky Gary D Flame detection system
CN101625789A (en) * 2008-07-07 2010-01-13 北京东方泰坦科技有限公司 Method for monitoring forest fire in real time based on intelligent identification of smoke and fire
CN101441712A (en) * 2008-12-25 2009-05-27 北京中星微电子有限公司 Flame video recognition method and fire hazard monitoring method and system
CN102163358A (en) * 2011-04-11 2011-08-24 杭州电子科技大学 Smoke/flame detection method based on video image analysis
CN104200468A (en) * 2014-08-28 2014-12-10 中国矿业大学 Method for obtaining correction parameter of spherical perspective projection model
CN104581076A (en) * 2015-01-14 2015-04-29 国网四川省电力公司电力科学研究院 Mountain fire monitoring and recognizing method and device based on 360-degree panoramic infrared fisheye camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
B. Uğur Töreyin et al.: "Computer vision based method for real-time fire and flame detection", Pattern Recognition Letters 27 *
Ding Fei: "Experimental study on flame recognition based on vision sensors and high-pressure water-mist spraying" (基于视觉传感器的火焰识别与高压细水雾喷淋实验研究), China Master's Theses Full-text Database, Engineering Science & Technology II *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830143A (en) * 2018-05-03 2018-11-16 深圳市中电数通智慧安全科技股份有限公司 A kind of video analytic system based on deep learning
CN108985374A (en) * 2018-07-12 2018-12-11 天津艾思科尔科技有限公司 A kind of flame detecting method based on dynamic information model
CN109165575A (en) * 2018-08-06 2019-01-08 天津艾思科尔科技有限公司 A kind of pyrotechnics recognizer based on SSD frame
CN109165575B (en) * 2018-08-06 2024-02-20 天津艾思科尔科技有限公司 Pyrotechnic recognition algorithm based on SSD frame
WO2020247358A1 (en) * 2019-06-07 2020-12-10 Honeywell International Inc. Method and system for connected advanced flare analytics
US11927944B2 (en) 2019-06-07 2024-03-12 Honeywell International, Inc. Method and system for connected advanced flare analytics
CN111310662A (en) * 2020-02-17 2020-06-19 淮阴工学院 Flame detection and identification method and system based on integrated deep network
CN113450530A (en) * 2021-06-03 2021-09-28 河北华电石家庄热电有限公司 Safety early warning system based on intelligent video analysis algorithm

Also Published As

Publication number Publication date
CN106845410B (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN106845410A (en) A kind of flame identification method based on deep learning model
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN106600888B (en) Automatic forest fire detection method and system
CN104123544B (en) Anomaly detection method and system based on video analysis
CN111680588A (en) Human face gate living body detection method based on visible light and infrared light
CN104992452B (en) Airbound target automatic tracking method based on thermal imaging video
CN103761514A (en) System and method for achieving face recognition based on wide-angle gun camera and multiple dome cameras
CN106707296A (en) Dual-aperture photoelectric imaging system-based unmanned aerial vehicle detection and recognition method
CN104301669A (en) Suspicious target detection tracking and recognition method based on dual-camera cooperation
CN111652082B (en) Face living body detection method and device
CN111985365A (en) Straw burning monitoring method and system based on target detection technology
CN102819847A (en) Method for extracting movement track based on PTZ mobile camera
CN108038866A (en) A kind of moving target detecting method based on Vibe and disparity map Background difference
TWI759767B (en) Motion control method, equipment and storage medium of the intelligent vehicle
CN109063625A (en) A kind of face critical point detection method based on cascade deep network
EP2813973A1 (en) Method and system for processing video image
WO2022161139A1 (en) Driving direction test method and apparatus, computer device, and storage medium
TWI726278B (en) Driving detection method, vehicle and driving processing device
CN106915303A (en) Automobile A-column blind area perspective method based on depth data and fish eye images
CN103544478A (en) All-dimensional face detection method and system
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
CN110703760A (en) Newly-increased suspicious object detection method for security inspection robot
CN114998737A (en) Remote smoke detection method, system, electronic equipment and medium
CN106815567A (en) A kind of flame detecting method and device based on video
WO2021248564A1 (en) Panoramic big data application monitoring and control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20200825
Termination date: 20210122