CN109815917A - Method for fire target recognition by a firefighting unmanned aerial vehicle - Google Patents

Method for fire target recognition by a firefighting unmanned aerial vehicle

Info

Publication number
CN109815917A
CN109815917A (application CN201910081863.9A)
Authority
CN
China
Prior art keywords
feature map
image
candidate box
fire
bounding box
Prior art date
Legal status
Pending
Application number
CN201910081863.9A
Other languages
Chinese (zh)
Inventor
孙刘云
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201910081863.9A
Publication of CN109815917A
Current legal status: Pending


Abstract

The present invention relates to a method for fire target recognition by a firefighting unmanned aerial vehicle (UAV), comprising: acquiring an image to be detected with a camera carried by the UAV; importing the image into a convolutional neural network to obtain a convolutional feature map; before the fully connected layers, processing the extracted convolutional feature map with a Region Proposal Network (RPN) to obtain the candidate-box feature information, i.e., the candidate regions; combining the convolutional-layer features with the candidate-box information and mapping the candidate-box coordinates in the input image onto the last layer of the convolutional feature map; then processing the ROIs with a minimum enclosing rectangle algorithm so that each ROI layer yields a fixed-size feature map, which is connected to the subsequent fully connected layers; after the fully connected layers, judging and extracting the class of each feature map using the detection class probabilities, adjusting the position of the candidate boxes belonging to a given class with a bounding-box regressor, and jointly training the classification probabilities and box regression to obtain the target region; and finally obtaining the target image of the fire location.

Description

Method for fire target recognition by a firefighting unmanned aerial vehicle
[Technical field]
The invention belongs to the fields of unmanned aerial vehicle (UAV) applications and computer vision, and more particularly to a method by which a firefighting UAV recognizes fire-source targets.
[Background art]
Target recognition is performed on the images acquired by a firefighting UAV so that the target position can be located accurately, which facilitates subsequent tracking and analysis of the fire situation and its characteristics. Target recognition refers to the process of distinguishing a specific target (or a type of target) from other targets (or other types of targets). The target here is the fire scene, i.e., the fire region identified in a complex image. At present, fire-source images are identified mainly according to characteristics of the fire-source image, such as color features, visible-light and infrared features, and the spreading trend, which are used to distinguish the fire source or the smoke area. One fire-image detection method applies a BP neural network to the color, texture, and shape features of the image; this method depends on the selection of hand-crafted features. Another method first segments the image and then recognizes, classifies, or detects the fire region using image characteristics. Although this method can recognize the image information, it suffers from high time complexity and limited robustness.
[Summary of the invention]
To solve the above problems and to overcome the technical shortcomings of existing fire-recognition algorithms in detection speed and detection accuracy, the present invention proposes a method for fire target recognition by a firefighting UAV.
The technical solution adopted by the invention is as follows:
A method for fire target recognition by a firefighting UAV, comprising the following steps:
Step S1: using a camera carried by the UAV, photograph the possible fire-source region and acquire the image to be detected;
Step S2: import the image to be detected into a convolutional neural network, perform feature extraction, and obtain a convolutional feature map;
Step S3: process the extracted convolutional feature map with a Region Proposal Network (RPN) to obtain the candidate-box feature information; combine the information of the convolutional feature map and the candidate boxes, and map the candidate-box coordinates in the input image onto the last layer of the convolutional feature map;
Step S4: process the candidate boxes with a minimum enclosing rectangle algorithm, i.e., combine the convolutional feature map with the candidate-box information, extract the candidate-box information and reduce the background information, so that each ROI layer yields a fixed-size feature map, which is then input into the fully connected layers;
Step S5: after the fully connected layers, judge and extract the class of each feature map using the detection class probabilities, adjust the position of the candidate boxes belonging to a given class with a bounding-box regressor, and jointly train the classification probabilities and box regression to obtain the target region;
Step S6: based on the target region, obtain the image with the fire target.
Further, step S2 specifically includes: converting the color image to be detected into a grayscale image, filtering the grayscale image with a filter to extract the feature map, applying a nonlinear unit by passing the corresponding region of the feature map through Leaky ReLU, and then performing max pooling to obtain the convolutional feature map.
Further, in step S3, the RPN slides an n×n window over the convolutional feature map obtained in step S2 and generates fully connected features of 256 or 512 dimensions.
Further, in step S4, the minimum enclosing rectangle algorithm is computed as follows: first determine the border region of the candidate box and record the length, width, and area of its enclosing rectangle; set a rotation center, rotate the box by a certain angle, and record the enclosing-rectangle parameters of its outline along the coordinate axes; the minimum enclosing rectangle is then found by computing and comparing the enclosing-rectangle areas.
Further, in step S5, the bounding-box regressor resizes the original object window, translating, shrinking, or enlarging it so that it is positioned correctly.
Further, in step S6, according to the target region, the target region is drawn on the original image to be detected, so as to obtain the image with the fire target.
The beneficial effects of the invention are: the detection speed and detection accuracy of the fire-target recognition algorithm are improved, and false alarms and missed detections are reduced.
[Description of the drawings]
The drawings described herein are intended to provide a further understanding of the invention and constitute a part of this application, but they do not constitute an improper limitation of the invention. In the drawings:
Fig. 1 is the basic flow chart of the method of the present invention.
Fig. 2 is a structural diagram of the convolutional neural network of the present invention.
[Detailed description of embodiments]
The present invention will be described in detail below with reference to the drawings and specific embodiments; the illustrative examples and explanations therein are only used to explain the invention and are not to be construed as limiting it.
Referring to Fig. 1, which shows the basic flow chart of the fire-target recognition method based on a firefighting UAV according to the present invention, the specific steps of the method are described as follows:
Step S1: using a camera carried by the UAV, photograph the possible fire-source region and acquire the image to be detected.
The UAV used in the present invention may be any UAV in the prior art, as long as a camera can be mounted on it; the invention is not limited in this regard. The UAV carries a camera to photograph the area where a fire may occur and acquires the image to be detected, so that the real fire target can be identified in the subsequent steps.
Step S2: import the image to be detected into the convolutional neural network, perform feature extraction, and obtain a convolutional feature map.
Specifically, the image captured by the camera is a color image. The color image is first converted into a grayscale image, the grayscale image is filtered with a filter to extract the feature map, a nonlinear unit is applied by passing the corresponding region of the feature map through Leaky ReLU, and max pooling is then performed to obtain the convolutional feature map.
The convolutional neural network contains M network layers, where M is a positive integer and M ≥ 2. As shown in Fig. 2, the convolutional neural network uses VGG16 as its basic structure, with 13 convolutional layers, 3 fully connected layers, and 4 pooling layers: Conv1 and Conv2 each contain two convolutional layers, Conv3, Conv4, and Conv5 each contain three convolutional layers, each convolutional layer is followed by an activation function, the 3 fully connected layers are attached after the activation function of Conv5, and an activation function is added after each fully connected layer.
The convolutional layers extract different local features of the input; the pooling layers perform dimensionality reduction, reducing the spatial size of the data and introducing non-linearity; and the fully connected layers assemble the local features extracted by the convolutional layers into a complete representation through a weight matrix. In the convolution operation, the filter is a 3 × 3 matrix; by sliding the filter over the image and computing dot products, the convolutional features are obtained, which can detect the edge features of the image. The nonlinear unit uses the Leaky ReLU function, whose expression is:
f(x) = x for x > 0, and f(x) = αx for x ≤ 0,
where α = 0.01.
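For illustration, the sketch below (assuming PyTorch; the channel widths, the single-channel grayscale input, and the placement of the four pooling layers are assumptions made for this example, not specifics stated in the patent) stacks 3 × 3 convolutions, Leaky ReLU with α = 0.01, and max pooling in the Conv1–Conv5 grouping described above:

```python
import torch
import torch.nn as nn

def conv_group(in_ch, out_ch, n_convs, pool):
    # n_convs 3x3 convolutions, each followed by Leaky ReLU (alpha = 0.01),
    # optionally closed by a 2x2 max-pooling layer.
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.LeakyReLU(negative_slope=0.01)]
    if pool:
        layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

class Backbone(nn.Module):
    """VGG16-style feature extractor: 13 conv layers, 4 pooling layers."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_group(1,   64,  2, pool=True),    # Conv1 (grayscale input assumed)
            conv_group(64,  128, 2, pool=True),    # Conv2
            conv_group(128, 256, 3, pool=True),    # Conv3
            conv_group(256, 512, 3, pool=True),    # Conv4
            conv_group(512, 512, 3, pool=False),   # Conv5: output is the conv feature map
        )

    def forward(self, x):
        return self.features(x)

# Example: one 600x600 grayscale image to be detected
feature_map = Backbone()(torch.randn(1, 1, 600, 600))   # shape (1, 512, 37, 37)
```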
Step S3: process the extracted convolutional feature map with the RPN to obtain the candidate-box feature information, i.e., the candidate regions; combine the information of the convolutional feature map and the candidate boxes, and map the candidate-box coordinates in the input image onto the last layer of the convolutional feature map.
Specifically, the RPN slides an n×n window over the convolutional feature map obtained in step S2 and generates fully connected features of 256 or 512 dimensions, thereby generating the candidate regions.
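To make the sliding-window step concrete, the sketch below (again assuming PyTorch; the anchor count of 9 and the 256-dimensional intermediate feature are illustrative choices consistent with the 256/512 dimensions mentioned above, not values fixed by the patent) realizes the n×n window as a 3 × 3 convolution over the feature map, followed by the usual objectness and box-offset heads of an RPN:

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    def __init__(self, in_channels=512, mid_channels=256, num_anchors=9):
        super().__init__()
        # The n x n sliding window, realised as a 3x3 convolution: every spatial
        # position of the feature map yields a 256-dimensional feature vector.
        self.sliding = nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1)
        self.relu = nn.LeakyReLU(0.01)
        # Per-anchor objectness score (candidate region vs. background).
        self.cls = nn.Conv2d(mid_channels, num_anchors * 2, kernel_size=1)
        # Per-anchor box offsets (dx, dy, dw, dh) for the candidate boxes.
        self.reg = nn.Conv2d(mid_channels, num_anchors * 4, kernel_size=1)

    def forward(self, feature_map):
        h = self.relu(self.sliding(feature_map))
        return self.cls(h), self.reg(h)

# feature_map: output of the backbone sketched above, shape (1, 512, 37, 37)
scores, offsets = RPNHead()(torch.randn(1, 512, 37, 37))
```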
Step S4: process the ROIs (i.e., the candidate boxes on the convolutional feature map) with the minimum enclosing rectangle algorithm, i.e., combine the information of the convolutional feature map and the candidate boxes, extract the candidate-box information and reduce the background information, so that each ROI layer yields a fixed-size feature map, which is connected to the subsequent fully connected layers.
The minimum enclosing rectangle algorithm is computed as follows: first determine the border region of the candidate box and record the length, width, and area of its enclosing rectangle; set a rotation center, rotate the box by a certain angle, and record the enclosing-rectangle parameters of its outline along the coordinate axes; the minimum enclosing rectangle is then found by computing and comparing the enclosing-rectangle areas.
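A minimal NumPy sketch of this rotation search (the 1° angle step, the 0–90° search range, and the use of the outline centroid as the rotation center are assumptions made for illustration): the candidate-box outline is rotated step by step, the axis-aligned enclosing rectangle is recorded at each angle, and the angle giving the smallest rectangle area is kept.

```python
import numpy as np

def min_enclosing_rect(outline, angle_step_deg=1.0):
    """outline: (N, 2) array of candidate-box boundary points."""
    pts = np.asarray(outline, dtype=float)
    center = pts.mean(axis=0)                       # assumed rotation center
    best = None
    for deg in np.arange(0.0, 90.0, angle_step_deg):
        t = np.deg2rad(deg)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        p = (pts - center) @ rot.T                  # rotate the outline by the angle
        w = p[:, 0].max() - p[:, 0].min()           # enclosing rectangle along the axes
        h = p[:, 1].max() - p[:, 1].min()
        if best is None or w * h < best[0]:
            best = (w * h, deg, w, h)               # keep the smallest-area rectangle
    return best                                     # (area, angle, width, height)

# Example: outline of a tilted candidate box
print(min_enclosing_rect([(0, 0), (4, 1), (3, 5), (-1, 4)]))
```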
Step S5: after the fully connected layers, judge and extract the class of each feature map using the detection class probabilities, adjust the position of the candidate boxes belonging to a given class with the regressor, and jointly train the classification probabilities and box regression to obtain the target region.
Specifically, the Softmax Loss (the detection class probability) is used to classify the candidate boxes and to judge whether a candidate region is a fire-source region. The mathematical formula of the Softmax Loss is:
S_j = e^(a_j) / Σ_k e^(a_k),  L = −Σ_j y_j · log(S_j),
where S_j is the j-th value of the softmax output vector S, a_j is the j-th raw score output by the network, and y_j is the corresponding ground-truth label.
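As a small numerical illustration of this classification step (a sketch assuming NumPy and the two-class case of background vs. fire region; variable names are illustrative):

```python
import numpy as np

def softmax(scores):
    # S_j = e^{a_j} / sum_k e^{a_k}; the max is subtracted for numerical stability.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def softmax_loss(scores, true_class):
    # Cross-entropy of the softmax output against the ground-truth class index.
    return -np.log(softmax(scores)[true_class])

# Raw scores of one candidate box for (background, fire-source region)
scores = np.array([0.3, 2.1])
print(softmax(scores))            # detection class probabilities
print(softmax_loss(scores, 1))    # loss when the box is truly a fire-source region
```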
The bounding-box regressor resizes the original object window, translating, shrinking, or enlarging it so that it is positioned correctly. A window is generally represented by a four-dimensional vector (x, y, w, h), denoting the center coordinates, width, and height of the window.
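The following sketch applies such an adjustment to a proposal window in the parameterization commonly used by Faster R-CNN-style detectors (center translation scaled by the window size, log-space scaling of width and height); the exact parameterization is an assumption, since the patent only states that the window is translated and scaled:

```python
import numpy as np

def apply_box_regression(window, offsets):
    """window: (x, y, w, h) center and size; offsets: (dx, dy, dw, dh) from the regressor."""
    x, y, w, h = window
    dx, dy, dw, dh = offsets
    new_x = x + dx * w              # translate the window center, scaled by its size
    new_y = y + dy * h
    new_w = w * np.exp(dw)          # shrink or enlarge the window in log space
    new_h = h * np.exp(dh)
    return new_x, new_y, new_w, new_h

# Example: nudge a 100x80 window slightly and enlarge its width by about 10%
print(apply_box_regression((320.0, 240.0, 100.0, 80.0), (0.05, -0.02, 0.1, 0.0)))
```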
Step S6: based on the target region, obtain the image with the fire target.
Specifically, according to the target region, the target region is drawn on the original image to be detected, so as to obtain the image with the fire-source target.
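For example, with OpenCV (a minimal sketch; the file names and the corner-format coordinates of the detected region are illustrative assumptions), the target region can be drawn on the original image as follows:

```python
import cv2

def draw_fire_target(image_path, box, out_path):
    """box: (x1, y1, x2, y2) pixel corners of the detected fire region."""
    img = cv2.imread(image_path)
    x1, y1, x2, y2 = box
    # Draw the target region as a rectangle on the original image to be detected.
    cv2.rectangle(img, (x1, y1), (x2, y2), color=(0, 0, 255), thickness=2)
    cv2.imwrite(out_path, img)

draw_fire_target("uav_frame.jpg", (120, 80, 360, 300), "uav_frame_fire.jpg")
```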
The above description covers only preferred embodiments of the present invention; all equivalent changes or modifications made according to the structure, features, and principles described in the patent claims of the present invention are included within the scope of the patent claims of the present invention.

Claims (6)

1. A method for fire target recognition by a firefighting unmanned aerial vehicle (UAV), characterized by comprising the following steps:
Step S1: using a camera carried by the UAV, photograph the possible fire-source region and acquire the image to be detected;
Step S2: import the image to be detected into a convolutional neural network, perform feature extraction, and obtain a convolutional feature map;
Step S3: process the extracted convolutional feature map with an RPN to obtain the candidate-box feature information; combine the information of the convolutional feature map and the candidate boxes, and map the candidate-box coordinates in the input image onto the last layer of the convolutional feature map;
Step S4: process the candidate boxes with a minimum enclosing rectangle algorithm, i.e., combine the convolutional feature map with the candidate-box information, extract the candidate-box information and reduce the background information, so that each ROI layer yields a fixed-size feature map, which is then input into the fully connected layers;
Step S5: after the fully connected layers, judge and extract the class of each feature map using the detection class probabilities, adjust the position of the candidate boxes belonging to a given class with a bounding-box regressor, and jointly train the classification probabilities and box regression to obtain the target region;
Step S6: based on the target region, obtain the image with the fire target.
2. The method according to claim 1, characterized in that step S2 specifically includes: converting the color image to be detected into a grayscale image, filtering the grayscale image with a filter to extract the feature map, applying a nonlinear unit by passing the corresponding region of the feature map through Leaky ReLU, and then performing max pooling to obtain the convolutional feature map.
3. The method according to any one of claims 1-2, characterized in that in step S3, the RPN slides an n×n window over the convolutional feature map obtained in step S2 and generates fully connected features of 256 or 512 dimensions.
4. The method according to any one of claims 1-3, characterized in that in step S4, the minimum enclosing rectangle algorithm is computed as follows: first determine the border region of the candidate box and record the length, width, and area of its enclosing rectangle; set a rotation center, rotate the box by a certain angle, and record the enclosing-rectangle parameters of its outline along the coordinate axes; the minimum enclosing rectangle is then found by computing and comparing the enclosing-rectangle areas.
5. The method according to any one of claims 1-4, characterized in that in step S5, the bounding-box regressor resizes, translates, shrinks, or enlarges the original object window so that it is positioned correctly.
6. The method according to any one of claims 1-5, characterized in that in step S6, according to the target region, the target region is drawn on the original image to be detected, so as to obtain the image with the fire target.
CN201910081863.9A 2019-01-28 2019-01-28 Method for fire target recognition by a firefighting unmanned aerial vehicle Pending CN109815917A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910081863.9A CN109815917A (en) 2019-01-28 2019-01-28 Method for fire target recognition by a firefighting unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910081863.9A CN109815917A (en) 2019-01-28 2019-01-28 Method for fire target recognition by a firefighting unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN109815917A true CN109815917A (en) 2019-05-28

Family

ID=66605530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910081863.9A Pending CN109815917A (en) Method for fire target recognition by a firefighting unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN109815917A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107792367A * 2016-08-30 2018-03-13 重庆成吉消防材料有限公司 Novel intelligent firefighting unmanned aerial vehicle capable of automatically identifying fire sources
CN106890419A * 2017-04-28 2017-06-27 成都谍翼科技有限公司 Unmanned aerial vehicle (UAV) controlled fire-fighting system
CN108830285A * 2018-03-14 2018-11-16 江南大学 Object detection method based on Faster-RCNN with reinforcement learning
CN109029641A * 2018-07-27 2018-12-18 华南理工大学 Automatic water meter detection method based on Faster-rcnn
CN109448307A * 2018-11-12 2019-03-08 哈工大机器人(岳阳)军民融合研究院 Fire target recognition method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JING WU et al.: "A Real-time Fire Detection Model Based on Cascade Strategy", 《INTERNATIONAL JOURNAL OF SOFTWARE & HARDWARE RESEARCH IN ENGINEERING》 *
伍伟明: "Research on object detection algorithms based on Faster R-CNN" (基于Faster_R_CNN的目标检测算法的研究), 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110975191A (en) * 2019-12-24 2020-04-10 尹伟 Fire extinguishing method for unmanned aerial vehicle

Similar Documents

Publication Publication Date Title
CN110929560B (en) Video semi-automatic target labeling method integrating target detection and tracking
CN108509859B (en) Non-overlapping area pedestrian tracking method based on deep neural network
CN109255322B Face liveness detection method and device
CN109829893B (en) Defect target detection method based on attention mechanism
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN109740413B (en) Pedestrian re-identification method, device, computer equipment and computer storage medium
US11763485B1 (en) Deep learning based robot target recognition and motion detection method, storage medium and apparatus
CN108334848B (en) Tiny face recognition method based on generation countermeasure network
CN105069472B Adaptive vehicle detection method based on convolutional neural networks
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN109102547A Robot grasping pose estimation method based on an object-recognition deep learning model
CN111126412B (en) Image key point detection method based on characteristic pyramid network
CN109766873B (en) Pedestrian re-identification method based on hybrid deformable convolution
CN113592911B (en) Apparent enhanced depth target tracking method
CN111079518B (en) Ground-falling abnormal behavior identification method based on law enforcement and case handling area scene
CN110490907A (en) Motion target tracking method based on multiple target feature and improvement correlation filter
CN113159043B (en) Feature point matching method and system based on semantic information
CN113312973B (en) Gesture recognition key point feature extraction method and system
CN109448307A Fire target recognition method and device
CN111046789A (en) Pedestrian re-identification method
CN114926747A (en) Remote sensing image directional target detection method based on multi-feature aggregation and interaction
CN115761627A (en) Fire smoke flame image identification method
CN111199245A (en) Rape pest identification method
CN114463619B (en) Infrared dim target detection method based on integrated fusion features
CN111260687A (en) Aerial video target tracking method based on semantic perception network and related filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20190528