CN113888358A - Overhead line engineering quality common fault detection method and system based on deep learning - Google Patents

Overhead line engineering quality common fault detection method and system based on deep learning

Info

Publication number
CN113888358A
Authority
CN
China
Prior art keywords
detection
data
prediction
picture
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111220069.1A
Other languages
Chinese (zh)
Inventor
于新民
聂克剑
李棣
刘沁
林瑞宗
刘志伟
陈远浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Fujian Electric Power Co Ltd
Economic and Technological Research Institute of State Grid Fujian Electric Power Co Ltd
Original Assignee
State Grid Fujian Electric Power Co Ltd
Economic and Technological Research Institute of State Grid Fujian Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Fujian Electric Power Co Ltd, Economic and Technological Research Institute of State Grid Fujian Electric Power Co Ltd filed Critical State Grid Fujian Electric Power Co Ltd
Priority to CN202111220069.1A priority Critical patent/CN113888358A/en
Publication of CN113888358A publication Critical patent/CN113888358A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Economics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a deep-learning-based method and system for detecting common quality faults in overhead line engineering, wherein the method comprises the following steps: acquiring and preprocessing pictures related to common quality faults of nuts and leads in power transmission and transformation tower projects; constructing a data set according to the requirements of the YOLOv5 target detection algorithm; tuning the training hyper-parameters of the YOLOv5 algorithm, adding optimization algorithms to the YOLOv5 algorithm to optimize the detection model, and then training on the data set to obtain the detection model; performing target detection on an input picture with the detection model to obtain a preliminary detection result; decoding the preliminary detection result, screening out the final detection result with a non-maximum suppression algorithm, and drawing detection boxes in the input picture; and judging the category of the common quality fault from the final detection result and placing the category result in the result picture. The method and the system can effectively identify and judge whether an overhead line has common quality faults.

Description

Overhead line engineering quality common fault detection method and system based on deep learning
Technical Field
The invention belongs to the field of image recognition and computer vision, and particularly relates to an overhead line engineering quality common fault detection method and system based on deep learning.
Background
As the normal operation of power systems becomes ever more important to national production and daily life, the industry is paying increasing attention to the various common quality problems that may arise in power transmission and transformation engineering construction. To ensure the efficient and sound progress of such construction, to implement responsibility for the prevention and control of common quality faults, and to raise the overall construction quality level, the State Grid Corporation has compiled prevention and control requirements and technical measures for common quality faults of power transmission and transformation projects in accordance with national and industry construction quality standards and specifications. These documents set out specific technical prevention and control measures for the common quality faults of power transmission and transformation construction and are continuously updated in light of treatment results and newly emerging problems. Preventing and controlling common construction quality faults in power transmission and transformation tower projects is an important part of project quality management. During tower construction, many common faults can cause quality problems; typical examples are damage and rust marks on leads, loose or missing anti-theft nuts, and inconsistent anti-theft nut specifications. Such quality problems can lead to circuit damage and even the breakdown of the entire power system, so workers should strengthen inspection of the power equipment during construction. At present, however, detecting whether power equipment has common quality problems relies mainly on manual inspection, which is easily disturbed by various factors, wastes considerable human resources, and is therefore relatively inefficient and unable to fully meet the actual needs of the safety supervision departments.
Disclosure of Invention
The invention aims to provide a method and a system for detecting common quality faults of an overhead line project based on deep learning, which can effectively identify and judge whether the overhead line has common quality faults.
In order to achieve the purpose, the invention adopts the technical scheme that: a method for detecting common faults of engineering quality of an overhead line based on deep learning comprises the following steps:
s1, acquiring pictures related to common quality faults of the transmission and transformation tower engineering nuts and the leads and preprocessing the pictures;
s2, constructing a common quality fault detection data set of the transmission and transformation tower engineering nut and the lead according to the requirements of a YOLOv5 target detection algorithm;
s3, optimizing the training hyper-parameters of the YOLOv5 algorithm, adding optimization algorithms into the YOLOv5 algorithm to optimize the detection model, and training the detection model on the data set to obtain a common quality fault detection model for the transmission and transformation tower engineering nuts and leads;
s4, carrying out target detection on the input picture according to the obtained detection model to obtain a primary detection result;
s5, decoding the preliminary detection result, screening out a final detection result by adopting a non-maximum suppression algorithm, and drawing a detection frame in the input picture;
and S6, judging the category of the common quality fault according to the final detection result, and placing the category result of the common quality fault in the result picture.
Further, the step S1 is specifically:
s11, acquiring data pictures related to common quality faults of the transmission and transformation tower engineering nuts and the leads, screening the data pictures and removing useless data;
s12, preprocessing the screened data picture by using a method comprising neighborhood denoising and data image normalization;
s13, determining different object types in the pictures with common quality problems, and labeling the preprocessed data pictures by using a LabelImg labeling tool to obtain and store labeling information;
and S14, performing data enhancement by adopting a method comprising geometric transformation, image mixing and Mixup, and expanding the data picture sample.
Further, the step S2 is specifically:
s21, naming all the data pictures in a unified format, and dividing all the data pictures into a training set, a testing set and a verification set according to the requirements of a YOLOv5 algorithm;
s22, carrying out normalization processing on the object coordinates of the image data labeling information, mapping object type information, and generating a file required by the training model.
Further, the step S3 is specifically:
s31, obtaining an optimal value of the hyper-parameter, and optimizing the hyper-parameter to enable the performance of the training model to be optimal;
s32, presetting the momentum parameter and the weight-decay regularization coefficient for momentum gradient descent in the YOLOv5 training configuration module, and adjusting the learning rate by the steps method;
s33, adding the STEM module from the PeleeNet network into the YOLOv5 network structure, and extracting different features through two routes, convolution and pooling, so as to improve the feature expression capability of the network without adding much computation time;
s34, calculating a prior frame anchor of the data set by using a k-means clustering algorithm, and normalizing the width and the height of the bounding box by using the width and the height of the data picture;
s35, adding a Transformer module into the YOLOv5 network structure, and overcoming the limitations of the convolutional inductive bias by using a self-attention mechanism;
s36, using a Focus module in a YOLOv5 network structure, carrying out slicing operation on an input image to obtain a plurality of feature maps, and then carrying out convolution operation of 32 convolution kernels on the feature maps once to obtain a feature map required by training;
and S37, starting to train the detection model based on the obtained characteristic diagram, and further obtaining the trained common quality detection model of the transmission and transformation tower engineering nut and the lead.
Further, in the step S34, the width and height of the data picture, w_img and h_img, are used to normalize the width and height of the bounding box, w_box and h_box, specifically:

W_Normalize = w_box / w_img, H_Normalize = h_box / h_img

wherein W_Normalize and H_Normalize are the normalized width and height;

let anchor = (w_anchor, h_anchor) and box = (w_box, h_box), wherein w_anchor and h_anchor are the width and height of the prior box anchor; the intersection ratio IOU, computed as if the box and the anchor share the same center, is used as the measure and is calculated as follows:

IOU(box, anchor) = [min(w_box, w_anchor) × min(h_box, h_anchor)] / [w_box × h_box + w_anchor × h_anchor - min(w_box, w_anchor) × min(h_box, h_anchor)]

the value of the IOU is between 0 and 1, and the more similar the two boxes are, the larger the IOU value; d is the final measure, and its calculation formula is as follows:
d=1-IOU(box,anchor)
randomly selecting k bounding boxes in the data set as initial anchors, using IOU measurement to allocate each bounding box to the anchor closest to the bounding box, traversing all the bounding boxes, calculating the average value of the width and the height of all the bounding boxes in each cluster, updating the anchors, and repeating the steps until the anchors are not changed or the maximum iteration number is reached.
Further, the step S4 is specifically:
s41, detecting pictures by using the trained model, and obtaining three feature maps with different sizes after input data pictures are processed by a feature extraction network;
and S42, convolving the three initial feature maps according to the extracted three feature maps with different sizes, wherein one part of the obtained results is used for outputting a prediction result corresponding to the feature map, and the other part of the obtained results is used for combining with other feature maps after deconvolution, so that the prediction results of the three effective feature maps are finally obtained.
Further, the step S5 is specifically:
s51, adjusting a preset prior frame according to the obtained prediction result to obtain the size and the position information of the prediction frame;
s52, processing the prediction frame after adjustment by using a non-maximum suppression algorithm, performing local search in the candidate target, searching the prediction frame with the highest confidence coefficient and suppressing the prediction frame with the low confidence coefficient;
and S53, obtaining a final detection frame after non-maximum value suppression, calculating the position information of the detection frame in the output picture according to the coordinates of the center point of the detection frame and the width and the height, and drawing the detection frame in the original picture to obtain a result picture.
Further, the step S51 is specifically:
s511, dividing the feature map into S × S grids, and then adjusting the preset prior boxes on the effective feature map;
s512, obtaining the prior box information x_offset, y_offset, w_anchor, h_anchor from the network prediction results, which respectively represent the offsets of the prediction box relative to the prior box on the x and y axes and the width and height of the prior box;
s513, processing the center-point coordinates of the prior box corresponding to the grid cell with a sigmoid function and adding the corresponding x_offset and y_offset to obtain the center of the prediction box, then using w_anchor and h_anchor to calculate the width and height of the prediction box, and finally obtaining the size and position information of the prediction box;
the step S52 specifically includes:
s521, when non-maximum value suppression is performed, sorting the prediction frames of the same target from large confidence level to small confidence level, and taking out the prediction frame with the highest confidence level to calculate the IOU with the rest prediction frames;
according to the process of finding the local maximum value with the intersection ratio IOU, two detection boxes B1 and B2 are set, and the intersection ratio IOU between the two is calculated as follows:

IOU(B1, B2) = area(B1 ∩ B2) / area(B1 ∪ B2)

and S522, if the calculation result is larger than the set threshold, the prediction frame is suppressed and is not output as a result; after all the prediction frames have been processed, the prediction frame with the highest confidence among the remaining prediction frames is taken out.
Further, the step S6 is specifically:
s61, judging whether the picture has the common quality problem according to the detection result obtained in the step S5;
and S62, performing RGB color space conversion on the picture, displaying a Chinese result of common quality detection in the picture, and outputting a final result picture.
The invention also provides an overhead line engineering quality common fault detection system based on deep learning, which comprises a memory, a processor and computer program instructions stored on the memory and executable by the processor, wherein when the processor executes the computer program instructions, the steps of the method described above are implemented.
Compared with the prior art, the invention has the following beneficial effects: the overhead line engineering quality common fault detection method and system based on deep learning are provided, can effectively identify and judge whether quality common fault problems exist in power equipment in power transmission and transformation tower engineering nuts and lead wires, have good generalization capability and robustness, and have good detection performance in a complex environment.
Drawings
FIG. 1 is a flow chart of a method implementation of an embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As shown in fig. 1, the embodiment provides a method for detecting common faults of engineering quality of an overhead line based on deep learning, which includes the following steps:
s1, obtaining pictures related to common quality faults of the transmission and transformation tower engineering nuts and the leads, such as data pictures of the leads, the anti-theft caps and the like, and preprocessing the pictures. The step S1 specifically includes:
s11, acquiring data pictures related to common quality faults of the transmission and transformation tower engineering nuts and the leads, screening the data pictures and removing useless data;
s12, preprocessing the screened data picture by using a method comprising neighborhood denoising and data image normalization;
s13, determining different object types in the pictures with common quality problems, and labeling the preprocessed data pictures by using a LabelImg labeling tool to obtain and store labeling information;
and S14, performing data enhancement by adopting a method comprising geometric transformation, image mixing and Mixup, and expanding the data picture sample.
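The patent gives no code for this augmentation step; purely as an illustration, a minimal Python sketch of the Mixup-style image mixing might look as follows (the alpha value, the random stand-in pictures and the choice to simply concatenate the two label lists are assumptions, not part of the original disclosure):

import numpy as np

def mixup(img_a, img_b, labels_a, labels_b, alpha=0.2):
    """Blend two equally sized pictures Mixup-style and merge their box labels.
    img_a, img_b: HxWx3 uint8 arrays; labels: lists of [class, x, y, w, h]."""
    lam = np.random.beta(alpha, alpha)                      # mixing ratio from a Beta distribution
    mixed = (lam * img_a.astype(np.float32)
             + (1.0 - lam) * img_b.astype(np.float32)).astype(np.uint8)
    return mixed, list(labels_a) + list(labels_b)           # keep both sets of boxes

if __name__ == "__main__":
    a = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)   # stand-ins for real data pictures
    b = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)
    img, labels = mixup(a, b, [[0, 0.5, 0.5, 0.2, 0.1]], [[1, 0.3, 0.4, 0.1, 0.1]])
    print(img.shape, labels)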
S2, constructing a common quality fault detection data set of the transmission and transformation tower engineering nut and the lead according to the requirements of a YOLOv5 target detection algorithm. The step S2 specifically includes:
s21, naming all the data pictures in a unified format, and dividing all the data pictures into a training set, a testing set and a verification set according to the requirements of a YOLOv5 algorithm;
s22, carrying out normalization processing on the object coordinates of the image data labeling information, mapping object type information, and generating a txt file required by a training model.
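As a sketch of the coordinate normalization and label-file generation in S22 (the corner-box input format, the class names and the output file name are assumptions added for illustration; the patent only states that coordinates are normalized, class information is mapped and a txt file is generated):

def to_yolo_line(class_id, xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-coordinate corner box into one normalized YOLO label line:
    'class x_center y_center width height', all values in [0, 1]."""
    x_c = (xmin + xmax) / 2.0 / img_w
    y_c = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Hypothetical class mapping for the defect categories (names are illustrative only).
CLASSES = {"wire_damage": 0, "nut_loose": 1, "nut_missing": 2}

if __name__ == "__main__":
    line = to_yolo_line(CLASSES["nut_loose"], 120, 80, 260, 200, img_w=1920, img_h=1080)
    with open("sample_picture.txt", "w") as f:   # one txt label file per data picture
        f.write(line + "\n")
    print(line)                                  # 1 0.098958 0.129630 0.072917 0.111111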
S3, optimizing the training hyper-parameters of the YOLOv5 algorithm, adding optimization algorithms into the YOLOv5 algorithm to optimize the detection model, and training the detection model on the data set to obtain a common quality fault detection model for the transmission and transformation tower engineering nuts and leads. The step S3 specifically includes:
and S31, obtaining the optimal value of the hyper-parameter, and optimizing the hyper-parameter to enable the performance of the training model to be optimal.
S32, presetting the momentum parameter and the weight-decay regularization coefficient for momentum gradient descent in the YOLOv5 training configuration module, and adjusting the learning rate by the steps method.
Specifically, the momentum parameter for momentum gradient descent is set to 0.9 in the YOLOv5 training configuration file, which effectively prevents the loss function from falling into a local minimum during network training and speeds up the convergence of the gradient towards the optimal value; the weight-decay regularization coefficient is set to 0.0005, which effectively prevents overfitting. If the learning rate is too large, the weights update quickly but the optimal value is easily missed; if it is too small, the weights update slowly and training is inefficient, so setting a suitable learning rate effectively improves both the training speed and the selection of the optimal value. The learning rate is adjusted with the steps method: whenever a set number of iterations is reached, the learning rate is decayed by a fixed multiple.
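A minimal sketch of such a steps schedule is given below (the base rate, the step boundaries and the decay factor 0.1 are illustrative assumptions; the patent fixes only the momentum and weight-decay values quoted above):

def steps_lr(iteration, base_lr=0.01, steps=(40000, 45000), gamma=0.1):
    """'steps' learning-rate policy: multiply the base rate by gamma each time
    a step boundary has been passed."""
    lr = base_lr
    for boundary in steps:
        if iteration >= boundary:
            lr *= gamma
    return lr

# The rate stays at 0.01 before iteration 40000, then drops to 0.001 and later to 0.0001.
assert steps_lr(10000) == 0.01
assert abs(steps_lr(42000) - 0.001) < 1e-9
assert abs(steps_lr(46000) - 0.0001) < 1e-9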
S33, adding the STEM module from the PeleeNet network into the YOLOv5 network structure, and extracting different features through two routes, convolution and pooling, so as to improve the feature expression capability of the network without adding much computation time.
Specifically, the STEM module from PeleeNet is added into the YOLOv5 network structure and different features are extracted along a convolution path and a pooling path: the convolution path extracts features with 1 × 1 and 3 × 3 convolution kernels, the pooling path extracts features with 2 × 2 maximum pooling, and the two sets of extracted features are finally fused.
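A possible PyTorch sketch of this two-branch stem is shown below; the channel widths, activation and normalization layers are assumptions modelled on the PeleeNet stem block rather than values taken from the patent:

import torch
import torch.nn as nn

class StemBlock(nn.Module):
    """Two-branch stem: a 1x1 -> 3x3 convolution branch and a 2x2 max-pool branch,
    fused by concatenation and a 1x1 convolution (PeleeNet-style, dimensions assumed)."""
    def __init__(self, in_ch=32, mid_ch=16):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.SiLU(inplace=True),
            nn.Conv2d(mid_ch, in_ch, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.SiLU(inplace=True),
        )
        self.pool_branch = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch * 2, in_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        a = self.conv_branch(x)                      # convolution route
        b = self.pool_branch(x)                      # pooling route
        return self.fuse(torch.cat([a, b], dim=1))   # fuse the two sets of features

# Shape check: a 32-channel 64x64 input becomes a 32-channel 32x32 output.
if __name__ == "__main__":
    y = StemBlock()(torch.randn(1, 32, 64, 64))
    print(y.shape)  # torch.Size([1, 32, 32, 32])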
S34, calculating the prior box anchors of the data set by using a k-means clustering algorithm, and normalizing the width and height of the bounding box, w_box and h_box, by the width and height of the data picture, w_img and h_img.

The method specifically comprises the following steps:

W_Normalize = w_box / w_img, H_Normalize = h_box / h_img

wherein W_Normalize and H_Normalize are the normalized width and height;

let anchor = (w_anchor, h_anchor) and box = (w_box, h_box), wherein w_anchor and h_anchor are the width and height of the prior box anchor; the intersection ratio IOU, computed as if the box and the anchor share the same center, is used as the measure and is calculated as follows:

IOU(box, anchor) = [min(w_box, w_anchor) × min(h_box, h_anchor)] / [w_box × h_box + w_anchor × h_anchor - min(w_box, w_anchor) × min(h_box, h_anchor)]

the value of the IOU is between 0 and 1, and the more similar the two boxes are, the larger the IOU value; d is the final measure, and its calculation formula is as follows:
d=1-IOU(box,anchor)
randomly selecting k bounding boxes in the data set as initial anchors, using IOU measurement to allocate each bounding box to the anchor closest to the bounding box, traversing all the bounding boxes, calculating the average value of the width and the height of all the bounding boxes in each cluster, updating the anchors, and repeating the steps until the anchors are not changed or the maximum iteration number is reached.
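The clustering procedure described above can be sketched in a few lines of numpy (the random initialization from the data set and the iteration cap are assumptions; the distance is d = 1 - IOU as defined above):

import numpy as np

def iou_wh(boxes, anchors):
    """IOU between (w, h) pairs, assuming each box and anchor share one center.
    boxes: (N, 2), anchors: (k, 2); returns an (N, k) matrix."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, max_iter=300):
    """k-means over normalized (w, h) pairs with distance d = 1 - IOU(box, anchor)."""
    anchors = boxes[np.random.choice(len(boxes), k, replace=False)]   # k random boxes as initial anchors
    assign = None
    for _ in range(max_iter):
        new_assign = np.argmax(iou_wh(boxes, anchors), axis=1)        # nearest anchor = largest IOU
        if assign is not None and np.all(new_assign == assign):
            break                                                     # anchors no longer change
        assign = new_assign
        anchors = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i) else anchors[i]
                            for i in range(k)])                       # mean width/height of each cluster
    return anchors

if __name__ == "__main__":
    wh = np.random.rand(500, 2) * 0.5 + 0.01      # stand-in for normalized bounding-box sizes
    print(kmeans_anchors(wh, k=9))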
S35, adding a Transformer module into the YOLOv5 network structure; the self-attention mechanism effectively overcomes the limitations of the convolutional inductive bias by computing attention weights between every pair of features to obtain an updated feature mapping. Each position then contains information about every other feature in the same image, which effectively improves the detection performance of the model.
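A compact sketch of such a self-attention block is given below; the embedding size, number of heads and the exact insertion point in the YOLOv5 backbone are assumptions, since the patent only states that a Transformer module with self-attention is added:

import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    """Single self-attention layer over flattened feature-map positions, so every
    position can attend to every other position in the same feature map."""
    def __init__(self, c, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(c)
        self.mlp = nn.Sequential(nn.Linear(c, 4 * c), nn.GELU(), nn.Linear(4 * c, c))
        self.norm2 = nn.LayerNorm(c)

    def forward(self, x):
        # x: (B, C, H, W) -> a sequence of H*W tokens of dimension C.
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)              # (B, H*W, C)
        attn_out, _ = self.attn(seq, seq, seq)          # pairwise attention weights
        seq = self.norm1(seq + attn_out)                # residual connection + norm
        seq = self.norm2(seq + self.mlp(seq))           # feed-forward + residual
        return seq.transpose(1, 2).reshape(b, c, h, w)  # back to feature-map layout

if __name__ == "__main__":
    out = TransformerLayer(c=64)(torch.randn(1, 64, 20, 20))
    print(out.shape)  # torch.Size([1, 64, 20, 20])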
S36, the Focus module is used in the YOLOv5 network structure: a slicing operation is performed on the input image to obtain several feature maps, and a single convolution with 32 convolution kernels is then applied to them to obtain the feature map required for training. Compared with obtaining a feature map directly with a Conv2d convolution, the Focus module effectively reduces FLOPs (floating-point operations) and the network layer depth, and improves the inference speed of the model.
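A sketch of the Focus slicing followed by a single 32-kernel convolution might look as follows (the kernel size, normalization and activation are assumptions in the style of YOLOv5; the 32 output channels follow the description above):

import torch
import torch.nn as nn

class Focus(nn.Module):
    """Slice the input into four pixel-interleaved sub-images, concatenate them
    along the channel axis, then apply one convolution with 32 kernels."""
    def __init__(self, in_ch=3, out_ch=32, k=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch * 4, out_ch, kernel_size=k, padding=k // 2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        # Take every second pixel in four phase-shifted patterns: this halves H and W,
        # quadruples the channel count, and loses no information.
        sliced = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                            x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(sliced)

if __name__ == "__main__":
    y = Focus()(torch.randn(1, 3, 640, 640))
    print(y.shape)  # torch.Size([1, 32, 320, 320])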
And S37, starting to train the detection model based on the obtained characteristic diagram, and further obtaining the trained common quality detection model of the transmission and transformation tower engineering nut and the lead.
And S4, carrying out target detection on the input picture according to the obtained detection model, and obtaining a preliminary detection result. The step S4 specifically includes:
s41, detecting pictures by using the trained model, and obtaining three feature maps with different sizes after input data pictures are processed by a feature extraction network;
and S42, convolving the three initial feature maps according to the extracted three feature maps with different sizes, wherein one part of the obtained results is used for outputting a prediction result corresponding to the feature map, and the other part of the obtained results is used for combining with other feature maps after deconvolution, so that the prediction results of the three effective feature maps are finally obtained.
And S5, decoding the preliminary detection result, screening out a final detection result by adopting a non-maximum suppression algorithm, and drawing a detection frame in the input picture. The step S5 specifically includes:
and S51, adjusting the preset prior frame according to the obtained prediction result to obtain the size and the position information of the prediction frame. The step S51 specifically includes:
s511, dividing each of the three obtained feature maps into S × S grids, and then adjusting the preset prior boxes on the effective feature maps;
s512, obtaining the prior box information x_offset, y_offset, w_anchor, h_anchor from the network prediction results, which respectively represent the offsets of the prediction box relative to the prior box on the x and y axes and the width and height of the prior box;
s513, processing the center-point coordinates of the prior box corresponding to the grid cell with a sigmoid function and adding the corresponding x_offset and y_offset to obtain the center of the prediction box, then using w_anchor and h_anchor to calculate the width and height of the prediction box, and finally obtaining the size and position information of the prediction box.
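The decoding in S511-S513 can be illustrated with the simplified single-cell sketch below; the exponential width/height scaling follows the classic YOLO formulation and the stride value is an assumption, so this should be read as an illustration rather than the exact YOLOv5 decoding:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_box(tx, ty, tw, th, grid_x, grid_y, w_anchor, h_anchor, stride):
    """Turn the raw outputs of one grid cell into a prediction box.
    tx, ty are squashed by a sigmoid so the centre stays inside the cell that
    predicted it; tw, th rescale the prior-box width and height."""
    cx = (grid_x + sigmoid(tx)) * stride    # prediction-box centre x in input-image pixels
    cy = (grid_y + sigmoid(ty)) * stride    # prediction-box centre y
    bw = w_anchor * np.exp(tw)              # width derived from the prior box
    bh = h_anchor * np.exp(th)              # height derived from the prior box
    return cx, cy, bw, bh

if __name__ == "__main__":
    # One cell of a 20x20 feature map with stride 32 and a 116x90 prior box.
    print(decode_box(0.2, -0.1, 0.05, 0.1, grid_x=7, grid_y=4, w_anchor=116, h_anchor=90, stride=32))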
And S52, processing the adjusted prediction frame by using a non-maximum suppression algorithm, performing local search in the candidate target, searching the prediction frame with the highest confidence coefficient and suppressing the prediction frame with the lower confidence coefficient. The step S52 specifically includes:
s521, when non-maximum value suppression is performed, sorting the prediction frames of the same target from large confidence level to small confidence level, and taking out the prediction frame with the highest confidence level to calculate the IOU with the rest prediction frames;
according to the process of finding the local maximum value with the intersection ratio IOU, two detection boxes B1 and B2 are set, and the intersection ratio IOU between the two is calculated as follows:

IOU(B1, B2) = area(B1 ∩ B2) / area(B1 ∪ B2)

And S522, if the calculation result is larger than the set threshold, the prediction frame is suppressed and is not output as a result; after all the prediction frames have been processed, the prediction frame with the highest confidence among the remaining prediction frames is taken out.
And S53, obtaining a final detection frame after non-maximum value suppression, calculating the position information of the detection frame in the output picture according to the coordinates of the center point of the detection frame and the width and the height, and drawing the detection frame in the original picture to obtain a result picture.
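A minimal numpy/OpenCV sketch of the confidence-sorted non-maximum suppression in S52 and the box drawing in S53 is given below (the 0.45 IOU threshold, the box colour and the sample values are assumptions for illustration):

import numpy as np
import cv2

def xywh_to_xyxy(b):
    """(cx, cy, w, h) boxes -> (x1, y1, x2, y2) corner boxes."""
    cx, cy, w, h = b[:, 0], b[:, 1], b[:, 2], b[:, 3]
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)

def iou(box, boxes):
    """IOU of one corner box against an array of corner boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(corner_boxes, scores, iou_thresh=0.45):
    """Sort by confidence, keep the best box, suppress boxes overlapping it, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        best, rest = order[0], order[1:]
        keep.append(int(best))
        order = rest[iou(corner_boxes[best], corner_boxes[rest]) <= iou_thresh]
    return keep

def draw_boxes(img, center_boxes, keep):
    """Convert kept (cx, cy, w, h) boxes to corners and draw them on a copy of the picture."""
    out = img.copy()
    for cx, cy, w, h in center_boxes[keep]:
        cv2.rectangle(out, (int(cx - w / 2), int(cy - h / 2)),
                      (int(cx + w / 2), int(cy + h / 2)), (0, 0, 255), 2)
    return out

if __name__ == "__main__":
    preds = np.array([[100, 100, 80, 60], [104, 98, 82, 58], [300, 200, 50, 50]], dtype=float)
    scores = np.array([0.9, 0.8, 0.7])
    keep = nms(xywh_to_xyxy(preds), scores)           # the overlapping second box is suppressed
    canvas = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for the input picture
    cv2.imwrite("result.jpg", draw_boxes(canvas, preds, keep))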
And S6, judging the category of the common quality fault according to the final detection result, and displaying the common quality fault result in the result picture in Chinese. The step S6 specifically includes:
s61, judging whether the picture has common quality problems or not by using methods such as variance calculation and the like according to the detection result obtained in the step S5;
and S62, performing RGB color space conversion on the picture, displaying the Chinese-language result of the common quality fault detection in the picture using the NotoSansCJK-Black font, and outputting the final result picture.
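Because OpenCV cannot render CJK characters directly, a common approach, sketched below, is to convert the BGR picture to an RGB PIL image, draw the Chinese label with a CJK-capable font, and convert back; the font path and the sample label are placeholders, not values from the patent:

import numpy as np
import cv2
from PIL import Image, ImageDraw, ImageFont

def put_chinese_text(img_bgr, text, xy, font_path="NotoSansCJK-Black.ttc", size=28):
    """Draw a Chinese label on an OpenCV BGR picture via PIL and return it in BGR."""
    rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)             # OpenCV stores BGR, PIL expects RGB
    pil_img = Image.fromarray(rgb)
    font = ImageFont.truetype(font_path, size)                 # CJK-capable font file (path is a placeholder)
    ImageDraw.Draw(pil_img).text(xy, text, font=font, fill=(255, 0, 0))
    return cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)  # back to BGR for further OpenCV use

if __name__ == "__main__":
    canvas = np.zeros((200, 400, 3), dtype=np.uint8)           # stand-in for a result picture
    out = put_chinese_text(canvas, "防盗螺母缺失", (10, 10))     # e.g. "anti-theft nut missing"
    cv2.imwrite("result.jpg", out)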
The embodiment also provides an overhead line engineering quality common fault detection system based on deep learning, which comprises a memory, a processor and computer program instructions stored on the memory and capable of being executed by the processor, wherein when the processor executes the computer program instructions, the steps of the method can be realized.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is directed to preferred embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. However, any simple modification, equivalent change and modification of the above embodiments according to the technical essence of the present invention are within the protection scope of the technical solution of the present invention.

Claims (10)

1. A method for detecting common faults of engineering quality of an overhead line based on deep learning is characterized by comprising the following steps:
s1, acquiring pictures related to common quality faults of the transmission and transformation tower engineering nuts and the leads and preprocessing the pictures;
s2, constructing a common quality fault detection data set of the transmission and transformation tower engineering nut and the lead according to the requirements of a YOLOv5 target detection algorithm;
s3, optimizing the training hyper-parameters of the YOLOv5 algorithm, adding optimization algorithms into the YOLOv5 algorithm to optimize the detection model, and training the detection model on the data set to obtain a common quality fault detection model for the transmission and transformation tower engineering nuts and leads;
s4, carrying out target detection on the input picture according to the obtained detection model to obtain a primary detection result;
s5, decoding the preliminary detection result, screening out a final detection result by adopting a non-maximum suppression algorithm, and drawing a detection frame in the input picture;
and S6, judging the category of the common quality fault according to the final detection result, and placing the category result of the common quality fault in the result picture.
2. The overhead line engineering quality common fault detection method based on deep learning of claim 1, wherein the step S1 specifically comprises:
s11, acquiring data pictures related to common quality faults of the transmission and transformation tower engineering nuts and the leads, screening the data pictures and removing useless data;
s12, preprocessing the screened data picture by using a method comprising neighborhood denoising and data image normalization;
s13, determining different object types in the pictures with common quality problems, and labeling the preprocessed data pictures by using a LabelImg labeling tool to obtain and store labeling information;
and S14, performing data enhancement by adopting a method comprising geometric transformation, image mixing and Mixup, and expanding the data picture sample.
3. The overhead line engineering quality common fault detection method based on deep learning of claim 1, wherein the step S2 specifically comprises:
s21, naming all the data pictures in a unified format, and dividing all the data pictures into a training set, a testing set and a verification set according to the requirements of a YOLOv5 algorithm;
s22, carrying out normalization processing on the object coordinates of the image data labeling information, mapping object type information, and generating a file required by the training model.
4. The overhead line engineering quality common fault detection method based on deep learning of claim 1, wherein the step S3 specifically comprises:
s31, obtaining an optimal value of the hyper-parameter, and optimizing the hyper-parameter to enable the performance of the training model to be optimal;
s32, presetting the momentum parameter and the weight-decay regularization coefficient for momentum gradient descent in the YOLOv5 training configuration module, and adjusting the learning rate by the steps method;
s33, adding the STEM module from the PeleeNet network into the YOLOv5 network structure, and extracting different features through two routes, convolution and pooling, so as to improve the feature expression capability of the network without adding much computation time;
s34, calculating a prior frame anchor of the data set by using a k-means clustering algorithm, and normalizing the width and the height of the bounding box by using the width and the height of the data picture;
s35, adding a Transformer module into the YOLOv5 network structure, and overcoming the limitations of the convolutional inductive bias by using a self-attention mechanism;
s36, using a Focus module in a YOLOv5 network structure, carrying out slicing operation on an input image to obtain a plurality of feature maps, and then carrying out convolution operation of 32 convolution kernels on the feature maps once to obtain a feature map required by training;
and S37, starting to train the detection model based on the obtained characteristic diagram, and further obtaining the trained common quality detection model of the transmission and transformation tower engineering nut and the lead.
5. The overhead line engineering quality common fault detection method based on deep learning of claim 4, wherein in the step S34, the width and height of the data picture, w_img and h_img, are used to normalize the width and height of the bounding box, w_box and h_box, specifically:

W_Normalize = w_box / w_img, H_Normalize = h_box / h_img

wherein W_Normalize and H_Normalize are the normalized width and height;

let anchor = (w_anchor, h_anchor) and box = (w_box, h_box), wherein w_anchor and h_anchor are the width and height of the prior box anchor; the intersection ratio IOU, computed as if the box and the anchor share the same center, is used as the measure and is calculated as follows:

IOU(box, anchor) = [min(w_box, w_anchor) × min(h_box, h_anchor)] / [w_box × h_box + w_anchor × h_anchor - min(w_box, w_anchor) × min(h_box, h_anchor)]

the value of the IOU is between 0 and 1, and the more similar the two boxes are, the larger the IOU value; d is the final measure, and its calculation formula is as follows:
d=1-IOU(box,anchor)
randomly selecting k bounding boxes in the data set as initial anchors, using IOU measurement to allocate each bounding box to the anchor closest to the bounding box, traversing all the bounding boxes, calculating the average value of the width and the height of all the bounding boxes in each cluster, updating the anchors, and repeating the steps until the anchors are not changed or the maximum iteration number is reached.
6. The overhead line engineering quality common fault detection method based on deep learning of claim 1, wherein the step S4 specifically comprises:
s41, detecting pictures by using the trained model, and obtaining three feature maps with different sizes after input data pictures are processed by a feature extraction network;
and S42, convolving the three initial feature maps according to the extracted three feature maps with different sizes, wherein one part of the obtained results is used for outputting a prediction result corresponding to the feature map, and the other part of the obtained results is used for combining with other feature maps after deconvolution, so that the prediction results of the three effective feature maps are finally obtained.
7. The overhead line engineering quality common fault detection method based on deep learning of claim 1, wherein the step S5 specifically comprises:
s51, adjusting a preset prior frame according to the obtained prediction result to obtain the size and the position information of the prediction frame;
s52, processing the prediction frame after adjustment by using a non-maximum suppression algorithm, performing local search in the candidate target, searching the prediction frame with the highest confidence coefficient and suppressing the prediction frame with the low confidence coefficient;
and S53, obtaining a final detection frame after non-maximum value suppression, calculating the position information of the detection frame in the output picture according to the coordinates of the center point of the detection frame and the width and the height, and drawing the detection frame in the original picture to obtain a result picture.
8. The overhead line engineering quality common fault detection method based on deep learning of claim 7, wherein the step S51 is specifically as follows:
s511, dividing the feature map into S × S grids, and then adjusting the preset prior boxes on the effective feature map;
s512, obtaining the prior box information x_offset, y_offset, w_anchor, h_anchor from the network prediction results, which respectively represent the offsets of the prediction box relative to the prior box on the x and y axes and the width and height of the prior box;
s513, processing the center-point coordinates of the prior box corresponding to the grid cell with a sigmoid function and adding the corresponding x_offset and y_offset to obtain the center of the prediction box, then using w_anchor and h_anchor to calculate the width and height of the prediction box, and finally obtaining the size and position information of the prediction box;
the step S52 specifically includes:
s521, when non-maximum value suppression is performed, sorting the prediction frames of the same target from large confidence level to small confidence level, and taking out the prediction frame with the highest confidence level to calculate the IOU with the rest prediction frames;
according to the process of finding the local maximum value with the intersection ratio IOU, two detection boxes B1 and B2 are set, and the intersection ratio IOU between the two is calculated as follows:

IOU(B1, B2) = area(B1 ∩ B2) / area(B1 ∪ B2)

and S522, if the calculation result is larger than the set threshold, the prediction frame is suppressed and is not output as a result; after all the prediction frames have been processed, the prediction frame with the highest confidence among the remaining prediction frames is taken out.
9. The overhead line engineering quality common fault detection method based on deep learning of claim 1, wherein the step S6 specifically comprises:
s61, judging whether the picture has the common quality problem according to the detection result obtained in the step S5;
and S62, performing RGB color space conversion on the picture, displaying a Chinese result of common quality detection in the picture, and outputting a final result picture.
10. An overhead line engineering quality common fault detection system based on deep learning, comprising a memory, a processor and computer program instructions stored on the memory and executable by the processor, the computer program instructions, when executed by the processor, being capable of implementing the method steps of any one of claims 1 to 9.
CN202111220069.1A 2021-10-20 2021-10-20 Overhead line engineering quality common fault detection method and system based on deep learning Pending CN113888358A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111220069.1A CN113888358A (en) 2021-10-20 2021-10-20 Overhead line engineering quality common fault detection method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111220069.1A CN113888358A (en) 2021-10-20 2021-10-20 Overhead line engineering quality common fault detection method and system based on deep learning

Publications (1)

Publication Number Publication Date
CN113888358A true CN113888358A (en) 2022-01-04

Family

ID=79003881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111220069.1A Pending CN113888358A (en) 2021-10-20 2021-10-20 Overhead line engineering quality common fault detection method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN113888358A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100495A (en) * 2022-07-08 2022-09-23 福州大学 Lightweight safety helmet detection method based on sub-feature fusion

Similar Documents

Publication Publication Date Title
CN108416307B (en) Method, device and equipment for detecting pavement cracks of aerial images
US11688057B2 (en) Method and system for quickly matching image features applied to mine machine vision
CN112598054B (en) Power transmission and transformation project quality common disease prevention and detection method based on deep learning
CN112967243A (en) Deep learning chip packaging crack defect detection method based on YOLO
CN110533022B (en) Target detection method, system, device and storage medium
CN114549563A (en) Real-time composite insulator segmentation method and system based on deep LabV3+
CN111222478A (en) Construction site safety protection detection method and system
CN110866872B (en) Pavement crack image preprocessing intelligent selection method and device and electronic equipment
CN108229524A (en) A kind of chimney and condensing tower detection method based on remote sensing images
CN115240075B (en) Construction and training method of electric power vision multi-granularity pre-training large model
CN114913606A (en) YOLO-based violation detection method for deep learning industrial field production work area
CN117190900B (en) Tunnel surrounding rock deformation monitoring method
CN111091101A (en) High-precision pedestrian detection method, system and device based on one-step method
CN115222727A (en) Method for identifying target for preventing external damage of power transmission line
CN113888358A (en) Overhead line engineering quality common fault detection method and system based on deep learning
CN116052094A (en) Ship detection method, system and computer storage medium
CN115830533A (en) Helmet wearing detection method based on K-means clustering improved YOLOv5 algorithm
CN117726991B (en) High-altitude hanging basket safety belt detection method and terminal
CN102496155A (en) Underwater optical image processing method for optimizing C-V (chan-vese) model
CN111310899B (en) Power defect identification method based on symbiotic relation and small sample learning
CN116523871A (en) Method and device for detecting defects of machined part, electronic equipment and storage medium
CN112818836B (en) Method and system for detecting personnel target of transformer substation scene
CN114359797A (en) Construction site night abnormity real-time detection method based on GAN network
CN112380985A (en) Real-time detection method for intrusion foreign matters in transformer substation
CN115760758A (en) Power grid infrastructure intelligent identification and evaluation method suitable for common quality faults of pole tower nuts

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination