CN111062434A - Multi-scale fusion detection method for unmanned aerial vehicle inspection - Google Patents

Multi-scale fusion detection method for unmanned aerial vehicle inspection Download PDF

Info

Publication number
CN111062434A
CN111062434A CN201911283201.6A
Authority
CN
China
Prior art keywords
scale
residual error
aerial vehicle
unmanned aerial
detection method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911283201.6A
Other languages
Chinese (zh)
Inventor
李映国
陈俊吉
周杰
殷树才
杨宏
毛昕儒
何涛
陈健欣
黄亮
蒋沁知
夏维建
杨洪椿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yongchuan Power Supply Co of State Grid Chongqing Electric Power Co Ltd
Original Assignee
Yongchuan Power Supply Co of State Grid Chongqing Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yongchuan Power Supply Co of State Grid Chongqing Electric Power Co Ltd filed Critical Yongchuan Power Supply Co of State Grid Chongqing Electric Power Co Ltd
Priority to CN201911283201.6A priority Critical patent/CN111062434A/en
Publication of CN111062434A publication Critical patent/CN111062434A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/086Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations

Abstract

The invention discloses a multi-scale fusion detection method for unmanned aerial vehicle inspection, comprising the following steps: enriching the data set, in which the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces; extracting scale-space features, in which image-block training is performed on the different scale spaces to obtain the residual output by the network; and fusing the scale-space residual information. By improving the convolutional neural network model and drawing on the idea of deep residual networks, all layers within each dense module are directly connected, so that the input of each layer contains the feature maps of all earlier layers. These inter-layer connections effectively alleviate the vanishing-gradient problem, strengthen feature propagation, make feature reuse in the convolutional network more effective, greatly reduce the number of parameters, and reduce the amount of computation.

Description

Multi-scale fusion detection method for unmanned aerial vehicle inspection
Technical Field
The invention relates to the technical field of image processing, in particular to a multi-scale fusion detection method for unmanned aerial vehicle routing inspection.
Background
Defect identification is performed with unmanned aerial vehicles; defects located at or above the tower neck, which are difficult to find manually, account for 78.5 percent of the total. Efficiency and quality are markedly improved, labour intensity is greatly reduced, inspection efficiency is raised, and the ability to operate and maintain power equipment is ensured. Unmanned aerial vehicles therefore provide an effective solution for the intelligent development of line inspection. Working in coordination with traditional manual inspection, they can be used for services such as daily power-grid inspection, collection of basic equipment data, fault inspection, investigation and evidence collection, disaster survey, equipment acceptance, survey and design, and foreign-object removal, and offer the advantages of speed, high working efficiency, independence from terrain, high inspection quality, and good safety.
However, because of hardware and cost limitations, the resolution of the unmanned aerial vehicle's camera is fixed, and during inspection, motion blur or factors such as lighting and the air environment blur the imaging of many pictures, leaving incomplete pictures from which the inspection information cannot be clearly read. For this reason, we provide a multi-scale fusion detection method for unmanned aerial vehicle inspection that precisely reconstructs large numbers of incomplete pictures, so that the content and information of the pictures can be displayed clearly and accurately, in order to solve the problems raised above.
Disclosure of Invention
The invention aims to provide a multi-scale fusion detection method for unmanned aerial vehicle inspection, to solve the problems that, owing to hardware and cost limitations, the resolution of the unmanned aerial vehicle's camera is fixed, and that during inspection motion blur or factors such as lighting and the air environment blur the imaging of many pictures, leaving incomplete pictures from which the inspection information cannot be clearly read.
In order to achieve this purpose, the invention provides the following technical scheme: a multi-scale fusion detection method for unmanned aerial vehicle inspection, comprising the following steps:
Step 1: enrich the data set: the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces;
Step 2: extract scale-space features: image-block training is performed on the different scale spaces to obtain the residual output by the network;
Step 3: fuse the scale-space residual information: the residual information from the different scale spaces is fused to obtain the fused residual and finally the reconstructed image P_F(x);
Step 4: loss function: the mean squared error is used as the loss function of the network to obtain the overall loss function;
Step 5: point-box drawing: the larger feature map is finally assigned the more accurate point boxes for the small targets.
Preferably, in step 1, four scale spaces are used, denoted S1, S2, S3 and S4.
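As an illustration only (the patent gives no code), step 1 — enriching the data set by flipping, translating, and adding noise, then forming four scale spaces — might be sketched in Python/NumPy as follows. The function names, the shift amount, the noise level, and the average-pooling scheme for building S1..S4 are all our assumptions:

```python
import numpy as np

def augment(img, rng):
    """Step 1 (sketch): enrich the data set by flipping, translating,
    and adding noise to a picture (values assumed in [0, 1])."""
    flipped_lr = np.fliplr(img)                          # horizontal flip
    flipped_ud = np.flipud(img)                          # vertical flip
    translated = np.roll(img, shift=5, axis=1)           # simple horizontal shift
    noisy = np.clip(img + rng.normal(0.0, 0.02, img.shape), 0.0, 1.0)
    return [img, flipped_lr, flipped_ud, translated, noisy]

def scale_spaces(img, factors=(1, 2, 4, 8)):
    """Form four scale spaces S1..S4 by average-pooling the image at
    increasing strides (the pooling factors are an assumption)."""
    spaces = []
    for f in factors:
        h = img.shape[0] // f * f                        # crop to a multiple of f
        w = img.shape[1] // f * f
        pooled = img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
        spaces.append(pooled)
    return spaces
```

A 16x16 input would yield scale spaces of size 16x16, 8x8, 4x4, and 2x2 under these assumed factors.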
Preferably, in step 2, the residual output by the network is given by: f_S(x) = W_S × H_S + b_S, where H_S is the feature map output by the residual learning network, W_S is the convolution weight, and b_S is the bias term.
Preferably, in step 3, the residual-information fusion formula is: f_F(x) = m × f_Si(x) + (1 − m) × f_Sj(x), where f_Si(x) and f_Sj(x) are the residuals predicted in two different scale spaces and m is the weight of the scale-space prediction residual.
Preferably, in step 3, the reconstructed image is given by: P_F(x) = f_F(x) + x.
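Step 3's fusion and reconstruction — a weighted combination of residuals from different scale spaces, f_F(x) = m × f_S(x) + (1 − m) × f_S'(x), followed by P_F(x) = f_F(x) + x — can be sketched numerically as follows. The function names are ours, and in practice the residuals would come from the trained residual learning network:

```python
import numpy as np

def fuse_residuals(res_a, res_b, m=0.5):
    """Step 3 (sketch): f_F(x) = m * f_S(x) + (1 - m) * f_S'(x),
    where res_a and res_b are residuals predicted in two scale spaces."""
    return m * res_a + (1.0 - m) * res_b

def reconstruct(x, fused_residual):
    """Reconstructed image: P_F(x) = f_F(x) + x."""
    return x + fused_residual
```

For example, with m = 0.25, residuals of 0.2 and 0.4 fuse to 0.25 × 0.2 + 0.75 × 0.4 = 0.35, and adding the fused residual back onto the input yields the reconstruction.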
Preferably, in step 4, the formula of the loss function is as follows:
[The loss-function formulas appear only as formula images in the original filing; the text identifies the loss as the mean squared error.]
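Since the original loss formulas survive only as images, the following is a hedged sketch of step 4 based solely on the text's statement that the mean squared error is used; averaging per-scale MSE terms into the overall loss is our assumption:

```python
import numpy as np

def mse_loss(pred, target):
    """Step 4 (sketch): mean squared error between the reconstructed
    image and the ground truth."""
    return float(np.mean((pred - target) ** 2))

def overall_loss(preds, targets):
    """Overall loss as the mean of per-scale MSE terms (an assumption;
    the original per-scale weighting is given only as formula images)."""
    return sum(mse_loss(p, t) for p, t in zip(preds, targets)) / len(preds)
```

With predictions [1, 2] against targets [1, 4], the squared errors are 0 and 4, so the MSE is 2.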
Compared with the prior art, the invention provides a multi-scale fusion detection method for unmanned aerial vehicle inspection with the following beneficial effects:
By improving the convolutional neural network model and drawing on the idea of deep residual networks, all layers within each dense module are directly connected, so that the input of each layer contains the feature maps of all earlier layers. These inter-layer connections effectively alleviate the vanishing-gradient problem, strengthen feature propagation, make feature reuse in the convolutional network more effective, greatly reduce the number of parameters, and reduce the amount of computation.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides the following technical scheme: a multi-scale fusion detection method for unmanned aerial vehicle inspection, comprising the following steps:
Step 1: enrich the data set: the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces;
Step 2: extract scale-space features: image-block training is performed on the different scale spaces to obtain the residual output by the network;
Step 3: fuse the scale-space residual information: the residual information from the different scale spaces is fused to obtain the fused residual and finally the reconstructed image P_F(x);
Step 4: loss function: the mean squared error is used as the loss function of the network to obtain the overall loss function;
Step 5: point-box drawing: the larger feature map is finally assigned the more accurate point boxes for the small targets.
The first embodiment is as follows:
Enrich the data set: the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces; extract scale-space features: image-block training is performed on the different scale spaces to obtain the residual output by the network; fuse the scale-space residual information: the residual information from the different scale spaces is fused to obtain the fused residual and finally the reconstructed image P_F(x); loss function: the mean squared error is used as the loss function of the network to obtain the overall loss function; point-box drawing: the larger feature map is finally assigned the more accurate point boxes for the small targets.
Example two:
In the first embodiment, the following steps are added:
In step 1, four scale spaces are used, denoted S1, S2, S3 and S4.
Enrich the data set: the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces; extract scale-space features: image-block training is performed on the different scale spaces to obtain the residual output by the network; fuse the scale-space residual information: the residual information from the different scale spaces is fused to obtain the fused residual and finally the reconstructed image P_F(x); loss function: the mean squared error is used as the loss function of the network to obtain the overall loss function; point-box drawing: the larger feature map is finally assigned the more accurate point boxes for the small targets.
Example three:
In the second embodiment, the following steps are added:
In step 2, the residual output by the network is given by: f_S(x) = W_S × H_S + b_S, where H_S is the feature map output by the residual learning network, W_S is the convolution weight, and b_S is the bias term.
Enrich the data set: the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces; extract scale-space features: image-block training is performed on the different scale spaces to obtain the residual output by the network; fuse the scale-space residual information: the residual information from the different scale spaces is fused to obtain the fused residual and finally the reconstructed image P_F(x); loss function: the mean squared error is used as the loss function of the network to obtain the overall loss function; point-box drawing: the larger feature map is finally assigned the more accurate point boxes for the small targets.
Example four:
In the third embodiment, the following steps are added:
In step 3, the residual-information fusion formula is: f_F(x) = m × f_Si(x) + (1 − m) × f_Sj(x), where f_Si(x) and f_Sj(x) are the residuals predicted in two different scale spaces and m is the weight of the scale-space prediction residual.
Enrich the data set: the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces; extract scale-space features: image-block training is performed on the different scale spaces to obtain the residual output by the network; fuse the scale-space residual information: the residual information from the different scale spaces is fused to obtain the fused residual and finally the reconstructed image P_F(x); loss function: the mean squared error is used as the loss function of the network to obtain the overall loss function; point-box drawing: the larger feature map is finally assigned the more accurate point boxes for the small targets.
Example five:
In the fourth embodiment, the following steps are added:
In step 3, the reconstructed image is given by: P_F(x) = f_F(x) + x.
Enrich the data set: the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces; extract scale-space features: image-block training is performed on the different scale spaces to obtain the residual output by the network; fuse the scale-space residual information: the residual information from the different scale spaces is fused to obtain the fused residual and finally the reconstructed image P_F(x); loss function: the mean squared error is used as the loss function of the network to obtain the overall loss function; point-box drawing: the larger feature map is finally assigned the more accurate point boxes for the small targets.
Example six:
In the fifth embodiment, the following steps are added:
In step 4, the loss function is formulated as:
[Formula image in the original filing; per the text, the loss is the mean squared error.]
Enrich the data set: the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces; extract scale-space features: image-block training is performed on the different scale spaces to obtain the residual output by the network; fuse the scale-space residual information: the residual information from the different scale spaces is fused to obtain the fused residual and finally the reconstructed image P_F(x); loss function: the mean squared error is used as the loss function of the network to obtain the overall loss function; point-box drawing: the larger feature map is finally assigned the more accurate point boxes for the small targets.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A multi-scale fusion detection method for unmanned aerial vehicle inspection, characterized in that it comprises the following steps:
Step 1: enrich the data set: the data set is enriched by flipping, translating, and adding noise to the pictures, forming different scale spaces;
Step 2: extract scale-space features: image-block training is performed on the different scale spaces to obtain the residual output by the network;
Step 3: fuse the scale-space residual information: the residual information from the different scale spaces is fused to obtain the fused residual and finally the reconstructed image P_F(x);
Step 4: loss function: the mean squared error is used as the loss function of the network to obtain the overall loss function;
Step 5: point-box drawing: the larger feature map is finally assigned the more accurate point boxes for the small targets.
2. The multi-scale fusion detection method for unmanned aerial vehicle inspection according to claim 1, characterized in that: in step 1, four scale spaces are used, denoted S1, S2, S3 and S4.
3. The multi-scale fusion detection method for unmanned aerial vehicle inspection according to claim 1, characterized in that: in step 2, the residual output by the network is given by: f_S(x) = W_S × H_S + b_S, where H_S is the feature map output by the residual learning network, W_S is the convolution weight, and b_S is the bias term.
4. The multi-scale fusion detection method for unmanned aerial vehicle inspection according to claim 1, characterized in that: in step 3, the residual-information fusion formula is: f_F(x) = m × f_Si(x) + (1 − m) × f_Sj(x), where f_Si(x) and f_Sj(x) are the residuals predicted in two different scale spaces and m is the weight of the scale-space prediction residual.
5. The multi-scale fusion detection method for unmanned aerial vehicle inspection according to claim 1, characterized in that: in step 3, the reconstructed image is given by: P_F(x) = f_F(x) + x.
6. The multi-scale fusion detection method for unmanned aerial vehicle inspection according to claim 1, characterized in that: in step 4, the formula of the loss function is:
[Formula image in the original filing; per the text, the loss is the mean squared error.]
CN201911283201.6A 2019-12-13 2019-12-13 Multi-scale fusion detection method for unmanned aerial vehicle inspection Pending CN111062434A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911283201.6A CN111062434A (en) 2019-12-13 2019-12-13 Multi-scale fusion detection method for unmanned aerial vehicle inspection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911283201.6A CN111062434A (en) 2019-12-13 2019-12-13 Multi-scale fusion detection method for unmanned aerial vehicle inspection

Publications (1)

Publication Number Publication Date
CN111062434A true CN111062434A (en) 2020-04-24

Family

ID=70301608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911283201.6A Pending CN111062434A (en) 2019-12-13 2019-12-13 Multi-scale fusion detection method for unmanned aerial vehicle inspection

Country Status (1)

Country Link
CN (1) CN111062434A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096561A (en) * 2016-06-16 2016-11-09 重庆邮电大学 Infrared pedestrian detection method based on image block degree of depth learning characteristic
CN106295655A (en) * 2016-08-03 2017-01-04 国网山东省电力公司电力科学研究院 A kind of transmission line part extraction method patrolling and examining image for unmanned plane
CN106599780A (en) * 2016-10-27 2017-04-26 国家电网公司 Power grid polling image intelligent identification method and device
CN108537731A (en) * 2017-12-29 2018-09-14 西安电子科技大学 Image super-resolution rebuilding method based on compression multi-scale feature fusion network
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning
CN109191411A (en) * 2018-08-16 2019-01-11 广州视源电子科技股份有限公司 A kind of multitask image rebuilding method, device, equipment and medium
CN109544501A (en) * 2018-03-22 2019-03-29 广东电网有限责任公司清远供电局 A kind of transmission facility defect inspection method based on unmanned plane multi-source image characteristic matching
CN110119749A (en) * 2019-05-16 2019-08-13 北京小米智能科技有限公司 Identify method and apparatus, the storage medium of product image
CN110188641A (en) * 2019-05-20 2019-08-30 北京迈格威科技有限公司 Image recognition and the training method of neural network model, device and system
CN110245644A (en) * 2019-06-22 2019-09-17 福州大学 A kind of unmanned plane image transmission tower lodging knowledge method for distinguishing based on deep learning

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096561A (en) * 2016-06-16 2016-11-09 重庆邮电大学 Infrared pedestrian detection method based on image block degree of depth learning characteristic
CN106295655A (en) * 2016-08-03 2017-01-04 国网山东省电力公司电力科学研究院 A kind of transmission line part extraction method patrolling and examining image for unmanned plane
CN106599780A (en) * 2016-10-27 2017-04-26 国家电网公司 Power grid polling image intelligent identification method and device
CN108537731A (en) * 2017-12-29 2018-09-14 西安电子科技大学 Image super-resolution rebuilding method based on compression multi-scale feature fusion network
CN109544501A (en) * 2018-03-22 2019-03-29 广东电网有限责任公司清远供电局 A kind of transmission facility defect inspection method based on unmanned plane multi-source image characteristic matching
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning
CN109191411A (en) * 2018-08-16 2019-01-11 广州视源电子科技股份有限公司 A kind of multitask image rebuilding method, device, equipment and medium
CN110119749A (en) * 2019-05-16 2019-08-13 北京小米智能科技有限公司 Identify method and apparatus, the storage medium of product image
CN110188641A (en) * 2019-05-20 2019-08-30 北京迈格威科技有限公司 Image recognition and the training method of neural network model, device and system
CN110245644A (en) * 2019-06-22 2019-09-17 福州大学 A kind of unmanned plane image transmission tower lodging knowledge method for distinguishing based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DING Wendong et al., "A Review of Visual Odometry for Mobile Robots", Acta Automatica Sinica *
DAI Weicong et al., "Improved YOLOv3 Real-Time Detection Algorithm for Aircraft in Remote-Sensing Images", Opto-Electronic Engineering *
WANG Jiaming et al., "Satellite-Image Super-Resolution Algorithm Based on Multi-Scale Residual Deep Neural Networks", Journal of Wuhan Institute of Technology *

Similar Documents

Publication Publication Date Title
CN110827251B (en) Power transmission line locking pin defect detection method based on aerial image
Rahman et al. Autonomous vision-based primary distribution systems porcelain insulators inspection using UAVs
WO2022111219A1 (en) Domain adaptation device operation and maintenance system and method
Mittal et al. Vision based railway track monitoring using deep learning
CN109859163A (en) A kind of LCD defect inspection method based on feature pyramid convolutional neural networks
CN112184711A (en) Photovoltaic module defect detection and positioning method and system
Balasubramani et al. Infrared thermography based defects testing of solar photovoltaic panel with fuzzy rule-based evaluation
CN114743119B (en) High-speed rail contact net hanger nut defect detection method based on unmanned aerial vehicle
CN110648310A (en) Weak supervision casting defect identification method based on attention mechanism
CN111223087B (en) Automatic bridge crack detection method based on generation countermeasure network
US20220046220A1 (en) Multispectral stereo camera self-calibration algorithm based on track feature registration
CN114332020A (en) Photovoltaic panel positioning and defect detection method and system based on visible light image
CN115861263A (en) Insulator defect image detection method based on improved YOLOv5 network
Liao et al. Using Drones for Thermal Imaging Photography and Building 3D Images to Analyze the Defects of Solar Modules
CN114943689A (en) Method for detecting components of steel cold-rolling annealing furnace based on semi-supervised learning
CN115995058A (en) Power transmission channel safety on-line monitoring method based on artificial intelligence
Kumar et al. Detection of concrete cracks using dual-channel deep convolutional network
CN113837994B (en) Photovoltaic panel defect diagnosis method based on edge detection convolutional neural network
CN115170816A (en) Multi-scale feature extraction system and method and fan blade defect detection method
CN114419421A (en) Subway tunnel crack identification system and method based on images
CN112837281B (en) Pin defect identification method, device and equipment based on cascade convolution neural network
CN114494875A (en) Visual detection method, system, equipment and medium for power grid equipment
Han et al. SSGD: A smartphone screen glass dataset for defect detection
CN111062434A (en) Multi-scale fusion detection method for unmanned aerial vehicle inspection
CN114596244A (en) Infrared image identification method and system based on visual processing and multi-feature fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination