CN115239697A - YOLO v5 model-based distribution network overhead line insulated terminal dirt identification method - Google Patents


Info

Publication number
CN115239697A
Authority
CN
China
Prior art keywords
image
stain
dirt
terminal
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210998021.1A
Other languages
Chinese (zh)
Inventor
王榆
黄毅标
龚杭章
程航
林川杰
冯振波
郑孝干
周云婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Fujian Electric Power Co Ltd
Fuzhou Power Supply Co of State Grid Fujian Electric Power Co Ltd
Original Assignee
State Grid Fujian Electric Power Co Ltd
Fuzhou Power Supply Co of State Grid Fujian Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Fujian Electric Power Co Ltd, Fuzhou Power Supply Co of State Grid Fujian Electric Power Co Ltd filed Critical State Grid Fujian Electric Power Co Ltd
Priority to CN202210998021.1A priority Critical patent/CN115239697A/en
Publication of CN115239697A publication Critical patent/CN115239697A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/766 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a YOLO v5 model-based method for identifying dirt on insulated terminals of distribution network overhead lines. Insulated terminal images are collected and manually annotated, and the insulated terminals are divided into stained and stain-free terminals according to whether the stain pixel area is smaller than the standard stain size. The position coordinates of stains in stained images are extracted, and the stains at those coordinates are cut out and copied into stain-free insulated terminal background images. Data enhancement and image blocking are performed to obtain feature maps, which are input into the Neck network for feature fusion to obtain detection maps; these are input into the prediction module, and the final trained model is obtained on the basis of a loss function. A camera carried by a mechanical arm collects images of the insulated terminals on the high-voltage line, and the collected pictures are detected with the trained YOLO v5 model to judge their degree of staining.

Description

Distribution network overhead line insulated terminal dirt identification method based on YOLO v5 model
Technical Field
The invention relates to a YOLO v5 model-based distribution network overhead line insulated terminal dirt identification method, and belongs to the technical field of image processing.
Background
Owing to salt fog and industrial pollution, coastal areas are mostly severe pollution zones with high temperature, high humidity and high salt fog, and the pollution level is high; pollution flashover faults therefore frequently occur on the surfaces of distribution lines, seriously threatening the safe and stable operation of power distribution network equipment. Serious, large-area, long-term or sudden dirt deposition is one of the important causes of frequent power failures of power distribution network equipment.
Pollution flashover of a distribution line is the phenomenon in which contaminants attached to the surface of the insulation, under the influence of environmental conditions, reduce the surface insulation performance so that discharge occurs continuously. A distribution line is assembled from many small parts, the main one being the insulated terminal. The insulated terminal is an insulating control component of a high-voltage distribution line; it plays an important insulating role in the overhead distribution line, supporting the conductor and preventing current backflow. Pollution flashover reduces the insulation level of the insulated terminal; in serious cases it impairs the terminal's insulating action, causes current backflow, affects the capacitive reactance of the wire and thereby increases current losses, and under special conditions, such as a lightning strike, can lead to serious consequences such as line tripping.
The traditional solution is to clean pollution flashover from the insulated terminal manually. Live working on a traditional high-voltage distribution line is completed by hand, which carries a high risk; the work environment is at height, so the danger is great and casualties are easily caused.
Manual work is not only inefficient but also prone to out-of-specification results that fall short of expectations, so that the high-voltage distribution line cannot work properly, creating unexpected risk.
Therefore, a cleaning mechanical arm is installed on the platform of an insulating bucket-arm vehicle; a camera carried by the mechanical arm collects images of the insulated terminal at the high-voltage wire and the degree of staining is judged, and if the detection result contains a stain, the mechanical arm is driven to the corresponding insulated terminal for cleaning and other work, reducing the risk of manual operation.
If cleaning is carried out directly, without image acquisition to judge the degree of staining, the workload is large and the work is inefficient, with some cleaning being unnecessary; if the degree of staining is judged manually from the collected images, judgment errors occur and the judgment is inefficient.
Disclosure of Invention
The invention aims to provide a YOLO v5 model-based distribution network overhead line insulated terminal dirt identification method, so as to solve the problem, raised in the background art, of judging the degree of staining before an insulated terminal is cleaned.
The technical scheme of the invention is as follows:
an insulating terminal stain image detection method based on a YOLO v5 model comprises the following steps:
collecting an insulating terminal image, and manually marking the insulating terminal image data, wherein the insulating terminal image data comprises a position coordinate P of a stain marking frame and a stain category;
calculating the area of the blocky dirt according to the size of the marking frame, and dividing the insulating terminal image into a dirt insulating terminal image and a dirt-free insulating terminal image according to whether the dirt area is smaller than the preset dirt standard size;
intercepting a stain image from a stained insulating terminal image according to the position coordinates of the stain marking frame, copying at least one intercepted stain image into a background image of a non-stained insulating terminal, and performing data amplification on the stained insulating terminal image;
performing data enhancement processing on the image of the dirt-containing insulating terminal after data amplification to obtain a high-resolution insulating terminal dirt image dataset;
inputting the images in the high-resolution insulating terminal stain image data set into the YOLO v5 model, and carrying out image blocking on the input image through the Backbone module, adopting a sliding window method with the window size and moving step given in advance; if the window contains a stain, the block image is kept as a feature map, and if the window does not contain a stain, the block image is not kept;
inputting the feature map into the Neck network for feature fusion processing to obtain a detection map;
inputting the detection map into the Head network for prediction, outputting the predicted stain position coordinate P' through the Head network, constructing a loss function, calculating the loss value between the predicted stain position coordinate P' and the position coordinate P of the stain marking frame in the corresponding input image, performing back propagation based on the calculated loss value to adjust the network weight parameters, and performing iterative training to obtain the insulated terminal stain detection model;
and carrying out dirt detection on the field insulation terminal by adopting an insulation terminal dirt detection model.
Preferably, the position coordinate of the stain labeling box is P = (x1, y1, x2, y2), where (x1, y1) and (x2, y2) represent the coordinates of the upper left corner and the lower right corner of the stain labeling box, respectively.
Preferably, the insulated terminal image data are manually annotated using the labelImg tool.
Preferably, the method for image blocking of the input image by the Backbone module is as follows: let the size of the input image be W1 × H1, the size of the sliding window be W2 × H2, the moving step of the window be d, and the position coordinate of the stain labeling box be P = (x1, y1, x2, y2). After the blocking operation, the position of the stain labeling box is P1 = (x1', y1', x2', y2'), calculated as:

x1' = x1 - i*d
y1' = y1 - j*d
x2' = x2 - i*d
y2' = y2 - j*d

where i is the number of steps the sliding window has slid to the right and j is the number of steps it has slid down.
Preferably, the training picture is divided into K × K grids during training, each grid is responsible for predicting B anchor boxes, and the conversion between the predicted values and the target position is:

x = σ(t_x) + c_x
y = σ(t_y) + c_y
w = a_w * e^(t_w)
h = a_h * e^(t_h)

where t_x and t_y are the offsets of the prediction box center from the upper-left corner of the grid cell, t_w and t_h are the scaling factors between the prediction box and the anchor box in width and height, a_w and a_h are the width and height of the anchor box, c_x and c_y are the coordinates of the upper-left corner of the grid cell containing the anchor box, the confidence Conf indicates whether a target exists, (x, y) is the center of the stain position predicted by the network, and w and h are the width and height of the prediction box.
Preferably, in the step of constructing a loss function to calculate the loss value between the predicted stain position coordinate P' and the position coordinate P of the stain labeling box in the corresponding input image, the GIoU Loss position regression loss function constructed for the stain labeling box position coordinate P is:

GIoU = IoU - |C - (A ∪ B)| / |C|
L_box = λ_coord * Σ_{i=0}^{K×K} Σ_{j=0}^{B} 1_{ij}^{obj} * (1 - GIoU)

where A and B are the prediction box and the real box and C is the smallest box enclosing both, λ_coord is the position loss factor, (x̂_i, ŷ_i) are the true center coordinates of the target, (ŵ_i, ĥ_i) are the true width and height of the target, and 1_{ij}^{obj} takes the value 1 if the anchor box at (i, j) contains a target and 0 otherwise.
Preferably, the collected field insulated terminal image is divided into pictures of a specified size before detection, using the moving window method, with the division size consistent with the training image size used in the image blocking of the high-resolution insulator stain data set. Let the size of the collected picture be W1 × H1, the divided size be W2 × H2, and the moving step of the window be d. The total number of divided pictures obtained from one collected image is M, calculated as:

M = ⌈(W1 - W2)/d + 1⌉ × ⌈(H1 - H2)/d + 1⌉

where ⌈X⌉ denotes rounding X up. The divided pictures are marked S_ij, with i = 1, …, ⌈(W1 - W2)/d + 1⌉ and j = 1, …, ⌈(H1 - H2)/d + 1⌉. During detection, the S_ij are input in batches into the insulated terminal stain detection model for detection.
Preferably, when a stain P2 with coordinates (x, y) is detected in picture S_ij, the divided pictures are stitched together and displayed on the image detection software interface, and the final stain position coordinate P2' = (x', y') is recorded, where x' = x + i*d and y' = y + j*d.
Preferably, the YOLO v5 model comprises an input end, a Backbone module, a Neck network, a Head network and an output end;
input end: the method is used for enhancing the collected insulated terminal pictures through Mosaic data, and splicing the pictures in the modes of random zooming, random collection, random arrangement and the like;
Backbone module: first, block slicing is performed on the insulated terminal picture processed by the input end through the Focus module; then, through the CSP1_X structure, the feature map is processed in two parts, one part undergoing a convolution operation and the other being concatenated (concat) with the result of the former convolution;
Neck network: information from different stages of the feature map is fused through the FPN-PAN structure;
Head network: GIOU_Loss is adopted as the loss function of the bounding box, comparing the aspect ratio, overlap area and center point distance of the prediction box and the actual box, thereby determining the position of the detected target.
The invention has the following beneficial effects:
the traditional YOLOv5 algorithm is improved aiming at the characteristics of the insulating terminal, so that the YOLOv5 algorithm has higher recognition accuracy on the stains of the insulating terminal;
A camera carried by the mechanical arm collects images of the insulated terminal at the high-voltage line, the collected pictures are detected with the model trained on YOLO v5, and the degree of staining is judged; if the detection result contains a stain, a path planning algorithm can drive the mechanical arm to the insulated terminal to perform cleaning and other work. Stains are identified from the collected images, the cleaning work is completed automatically, and the necessary picture data are transmitted to an engineer;
By applying the algorithm of this scheme, a robot cleans pollution flashover from the insulated terminal, avoiding both the huge workload of manually judging the degree of staining of the insulated terminal and the increased workload and low efficiency of cleaning without image recognition. Machine cleaning helps the cleaning meet the expected requirements, removes the risk of manually completing live working on a traditional high-voltage distribution line, and reduces the risk caused by manual work.
Detailed Description
The present invention will be described in detail with reference to specific examples.
YOLO v5 is a deep image processing model based on a convolutional neural network; it mainly comprises an input end, a Backbone module, a Neck network, a Head network and an output end.
The input end enhances the collected pictures through Mosaic data enhancement, splicing pictures by random scaling, random cropping and random arrangement, so that the model can correctly judge pictures obtained at different scales and angles. In this module the model also sets anchor boxes with initial length and width for different detection targets: during network training, prediction boxes are output on the basis of the initial anchor boxes and compared with the real boxes, the difference between them is calculated, reverse updating is then performed and the network parameters are iterated, and the optimal anchor box values for different training sets are computed adaptively, thereby determining the specific position and size of the anchor boxes. In addition, to allow detection of images with different lengths and widths, this module scales the original pictures to a standard size in a uniform manner before sending them to the detection network.
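YOLO v5 conventionally performs this uniform scaling as letterbox resizing; the following minimal sketch (Python with OpenCV; the function name letterbox, the 640-pixel standard size and the grey pad value 114 are illustrative assumptions, not values taken from this patent) applies one scale factor to both axes and pads the short side:

```python
import cv2
import numpy as np

def letterbox(img: np.ndarray, new_size: int = 640, pad_value: int = 114):
    """Scale an arbitrary-size picture to a square standard input size with a
    single uniform ratio, padding the short side so the aspect ratio is kept."""
    h, w = img.shape[:2]
    r = min(new_size / h, new_size / w)            # one scale for both axes
    nh, nw = int(round(h * r)), int(round(w * r))
    resized = cv2.resize(img, (nw, nh))
    canvas = np.full((new_size, new_size, 3), pad_value, dtype=img.dtype)
    top, left = (new_size - nh) // 2, (new_size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, r, (left, top)                  # ratio/offsets map boxes back
```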
The Backbone module first performs block slicing on the picture processed by the input end through the Focus module, expanding the input channels by a factor of 4 and improving computational efficiency without losing information. Then, through the CSP1_X structure, the feature map is processed in two parts: one part undergoes a convolution operation, and the other part is concatenated (concat) with the result of the former convolution, which effectively enhances the network's learning ability and reduces the amount of computation.
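The slicing step itself is only a few lines; the sketch below (PyTorch; the helper name focus_slice is assumed for illustration) shows how the four pixel sub-grids are stacked on the channel axis, turning (B, C, H, W) into (B, 4C, H/2, W/2) without discarding information:

```python
import torch

def focus_slice(x: torch.Tensor) -> torch.Tensor:
    """Focus-style block slicing: take every second pixel in each spatial
    direction and stack the four sub-images along the channel axis."""
    return torch.cat(
        [x[..., ::2, ::2],     # even rows, even columns
         x[..., 1::2, ::2],    # odd rows, even columns
         x[..., ::2, 1::2],    # even rows, odd columns
         x[..., 1::2, 1::2]],  # odd rows, odd columns
        dim=1,
    )

x = torch.randn(1, 3, 640, 640)
print(focus_slice(x).shape)  # torch.Size([1, 12, 320, 320])
```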
The Neck network fuses information from different stages of the feature map through the FPN-PAN structure: the FPN layer conveys strong semantic features from top to bottom, while the PAN structure conveys strong localization features from bottom to top. The CSP2 structure is also adopted, which strengthens the network's feature fusion capability and its ability to learn features.
The Head network adopts CIOU_Loss as the loss function of the bounding box, taking into account the aspect ratio, overlap area and center point distance of the prediction box and the actual box to determine the position of the detected target.
In step 1, the labelImg tool is used to manually annotate the insulated terminal image data, including the position coordinate P of the stain labeling box and the stain category, where P = (x1, y1, x2, y2), with (x1, y1) and (x2, y2) representing the coordinates of the upper left corner and the lower right corner of the stain labeling box, respectively.
The area of a block-shaped stain is calculated from the size of the stain labeling box as follows: when a collected picture is annotated, the labeling box is drawn to the size of the stain, so the box size is close to the stain size in the picture; the size of the labeling box is then computed, the pixel count occupied by the box in the picture is converted into the actual stain area, and the insulated terminals are divided into stained and stain-free insulated terminals according to whether the stain pixel area is smaller than the standard stain size. The position coordinates of the stains in a stained image are extracted, and the stains at those coordinates are cut out, completing the copying of the exact pixels of the target object. A prepared stain-free insulated terminal picture is taken as the background picture, and the copied stain objects are pasted onto it; the number of objects pasted into each background picture can be set freely.
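A minimal sketch of this copy-and-paste amplification follows (NumPy arrays; the function name paste_stains and the random placement policy are assumptions for illustration, since the patent specifies only that cut-out stain pixels are pasted onto a stain-free background and that the number of pasted objects is configurable):

```python
import random
import numpy as np

def paste_stains(background: np.ndarray, stained_img: np.ndarray,
                 stain_boxes, n_objects: int = 3):
    """Cut stain pixels out of a stained image by their labeled boxes
    (x1, y1, x2, y2) and paste them at random positions on a stain-free
    background, returning the augmented image and the new stain boxes."""
    out = background.copy()
    new_boxes = []
    bh, bw = out.shape[:2]
    chosen = random.sample(stain_boxes, min(n_objects, len(stain_boxes)))
    for x1, y1, x2, y2 in chosen:
        patch = stained_img[y1:y2, x1:x2]
        ph, pw = patch.shape[:2]
        if pw >= bw or ph >= bh:   # skip patches larger than the background
            continue
        px, py = random.randint(0, bw - pw), random.randint(0, bh - ph)
        out[py:py + ph, px:px + pw] = patch
        new_boxes.append((px, py, px + pw, py + ph))
    return out, new_boxes
```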
The high-resolution insulator stain data set obtained after data amplification is divided into image blocks (because the industrial cameras generally used for image acquisition have very high resolution, the obtained images are very large and direct processing is very slow, so blocking must be performed first). A sliding window method is adopted, with the window size and moving step given in advance; if a window contains a stain, the block image is kept for the final data set, and if not, it is not kept. Let the size of the original image be W1 × H1, the size of the sliding window be W2 × H2, the moving step of the window be d, and the position coordinate of the stain labeling box be P = (x1, y1, x2, y2). After the blocking operation, the coordinate position of the stain labeling box is P1 = (x1', y1', x2', y2'), calculated as:

x1' = x1 - i*d
y1' = y1 - j*d
x2' = x2 - i*d
y2' = y2 - j*d

where i is the number of steps the sliding window has slid to the right and j is the number of steps it has slid down.
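The blocking rule can be sketched as follows (plain Python; the function name block_labels and the containment test used to decide whether a window contains a stain are illustrative assumptions):

```python
def block_labels(boxes, W1, H1, W2, H2, d):
    """Slide a W2 x H2 window over a W1 x H1 image with step d. For the window
    that has slid i steps right and j steps down, shift each labeled box
    P = (x1, y1, x2, y2) to window-local coordinates P1 = (x1', y1', x2', y2')
    and keep only the blocks that still contain a stain."""
    kept = []
    for j in range((H1 - H2) // d + 1):        # windows overrunning the border
        for i in range((W1 - W2) // d + 1):    # are omitted in this sketch
            local = []
            for x1, y1, x2, y2 in boxes:
                p1 = (x1 - i * d, y1 - j * d, x2 - i * d, y2 - j * d)
                if p1[0] >= 0 and p1[1] >= 0 and p1[2] <= W2 and p1[3] <= H2:
                    local.append(p1)           # stain lies inside this window
            if local:                          # stain-free blocks are not kept
                kept.append(((i, j), local))
    return kept
```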
During training the network divides the training picture into K × K grids, each grid is responsible for predicting B anchor boxes, and the conversion between the predicted values and the target position is:

x = σ(t_x) + c_x
y = σ(t_y) + c_y
w = a_w * e^(t_w)
h = a_h * e^(t_h)

where t_x and t_y are the offsets of the prediction box center from the upper-left corner of the grid cell, t_w and t_h are the scaling factors between the prediction box and the anchor box in width and height, a_w and a_h are the width and height of the anchor box, c_x and c_y are the coordinates of the upper-left corner of the grid cell containing the anchor box, (x, y) is the center of the stain position predicted by the network, and w and h are the width and height of the prediction box.
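A sketch of this decoding step (PyTorch; the helper name decode_prediction is assumed for illustration):

```python
import torch

def decode_prediction(t_x, t_y, t_w, t_h, c_x, c_y, a_w, a_h):
    """Convert raw outputs (t_x, t_y, t_w, t_h) for one anchor into a predicted
    box: the sigmoid keeps the center inside the grid cell whose upper-left
    corner is (c_x, c_y); the exponential scales the anchor width and height."""
    x = torch.sigmoid(t_x) + c_x
    y = torch.sigmoid(t_y) + c_y
    w = a_w * torch.exp(t_w)
    h = a_h * torch.exp(t_h)
    return x, y, w, h

# one prediction in the grid cell at (3, 5) with a 2.0 x 1.5 anchor
t = [torch.tensor(v) for v in (0.2, -0.1, 0.3, 0.1)]
print(decode_prediction(*t, c_x=3.0, c_y=5.0, a_w=2.0, a_h=1.5))
```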
GIoU Loss is adopted as the position regression loss function for the position coordinate P of the stain labeling box, and the position regression loss L_box is calculated as:

GIoU = IoU - |C - (A ∪ B)| / |C|
L_box = λ_coord * Σ_{i=0}^{K×K} Σ_{j=0}^{B} 1_{ij}^{obj} * (1 - GIoU)

where A and B are the prediction box and the real box and C is the smallest box enclosing both, λ_coord is the position loss factor, (x̂_i, ŷ_i) are the true center coordinates of the target and (ŵ_i, ĥ_i) the true width and height of the target, and 1_{ij}^{obj} takes the value 1 if the anchor box at (i, j) contains a target and 0 otherwise.
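The GIoU term itself can be computed as in the sketch below (plain Python over corner-format boxes; this is the standard GIoU definition, with the summation over grids and anchors and the λ_coord weighting applied outside this function):

```python
def giou(box_p, box_t):
    """GIoU of a predicted box and a real box, both (x1, y1, x2, y2): IoU minus
    the fraction of the smallest enclosing box C not covered by the union, so
    even non-overlapping boxes receive a useful gradient."""
    ix1, iy1 = max(box_p[0], box_t[0]), max(box_p[1], box_t[1])
    ix2, iy2 = min(box_p[2], box_t[2]), min(box_p[3], box_t[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_t = (box_t[2] - box_t[0]) * (box_t[3] - box_t[1])
    union = area_p + area_t - inter
    # smallest axis-aligned box enclosing both boxes
    cx1, cy1 = min(box_p[0], box_t[0]), min(box_p[1], box_t[1])
    cx2, cy2 = max(box_p[2], box_t[2]), max(box_p[3], box_t[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 - 2/9 = -0.0794...
```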
Before detection, the collected insulated terminal image is divided into pictures of a specified size by the moving window method, with the division size consistent with the training image size used in the image blocking of the high-resolution insulator stain data set. Let the size of the collected picture be W1 × H1, the divided size be W2 × H2, and the moving step of the window be d. The total number of divided pictures obtained from one collected image is M, calculated as:

M = ⌈(W1 - W2)/d + 1⌉ × ⌈(H1 - H2)/d + 1⌉

where ⌈X⌉ denotes rounding X up. The divided pictures are marked S_ij, with i = 1, …, ⌈(W1 - W2)/d + 1⌉ and j = 1, …, ⌈(H1 - H2)/d + 1⌉. During detection, the S_ij are input into the network in batches for detection.
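A sketch of this division step (NumPy; the function name split_image and the clamping of the last window to the image border so every tile keeps the training size are illustrative assumptions, since the patent gives only the window size, the step d and the count M):

```python
import math
import numpy as np

def split_image(img: np.ndarray, W2: int, H2: int, d: int):
    """Divide a captured H1 x W1 field image into M tiles S_ij of size
    W2 x H2 with moving step d."""
    H1, W1 = img.shape[:2]
    n_i = math.ceil((W1 - W2) / d + 1)
    n_j = math.ceil((H1 - H2) / d + 1)
    tiles = {}
    for j in range(n_j):                 # indices are 0-based here,
        for i in range(n_i):             # while S_ij in the text starts at 1
            x0 = min(i * d, W1 - W2)     # clamp the last window to the border
            y0 = min(j * d, H1 - H2)
            tiles[(i, j)] = img[y0:y0 + H2, x0:x0 + W2]
    assert len(tiles) == n_i * n_j       # M = ceil((W1-W2)/d+1) * ceil((H1-H2)/d+1)
    return tiles
```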
When a stain P2 with coordinates (x, y) is detected in picture S_ij, the divided pictures need to be stitched together and displayed on the image detection software interface, and the final stain position coordinate P2' = (x', y') is recorded, where x' = x + i*d and y' = y + j*d.
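Mapping a tile-local detection back to the stitched image is then a single offset per axis, as in this sketch (the helper name to_global is assumed):

```python
def to_global(x, y, i, j, d):
    """A stain detected at (x, y) inside tile S_ij lies at
    P2' = (x + i*d, y + j*d) in the stitched full image."""
    return x + i * d, y + j * d

# a stain at (15, 40) in the tile that slid 2 steps right and 3 steps down,
# with moving step d = 320
print(to_global(15, 40, 2, 3, 320))  # (655, 1000)
```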
The above description is only one embodiment of the present invention and does not limit the patent scope of the invention; all equivalent structures or equivalent process transformations made using the content of this specification, applied directly or indirectly in other related technical fields, are likewise included within the protection scope of the invention.

Claims (9)

1. An insulating terminal stain image detection method based on a YOLO v5 model is characterized in that: the method comprises the following steps:
collecting an insulating terminal image, and manually marking the insulating terminal image data, wherein the insulating terminal image data comprises a position coordinate P of a stain marking frame and a stain category;
calculating the area of the blocky dirt according to the size of the labeling frame, and dividing the insulating terminal image into a dirt insulating terminal image and a dirt-free insulating terminal image according to whether the dirt area is smaller than a preset dirt standard size;
intercepting a stain image from a stained insulating terminal image according to the position coordinates of the stain marking frame, copying at least one intercepted stain image into a background image of a non-stained insulating terminal, and performing data amplification on the stained insulating terminal image;
performing data enhancement processing on the image of the dirt-containing insulating terminal after data amplification to obtain a high-resolution insulating terminal dirt image dataset;
inputting the images in the high-resolution insulating terminal stain image data set into the YOLO v5 model, and carrying out image blocking on the input image through the Backbone module, adopting a sliding window method with the window size and moving step given in advance; if the window contains a stain, the block image is kept as a feature map, and if the window does not contain a stain, the block image is not kept;
inputting the feature map into the Neck network for feature fusion processing to obtain a detection map;
inputting the detection map into the Head network for prediction, outputting the predicted stain position coordinate P' through the Head network, constructing a loss function, calculating the loss value between the predicted stain position coordinate P' and the position coordinate P of the stain marking frame in the corresponding input image, performing back propagation based on the calculated loss value to adjust the network weight parameters, and performing iterative training to obtain the insulated terminal stain detection model;
and carrying out stain detection on the field insulation terminal by adopting an insulation terminal stain detection model.
2. The distribution network overhead line insulated terminal dirt identification method based on the YOLO v5 model as claimed in claim 1, characterized in that: the position coordinate of the stain labeling box is P = (x1, y1, x2, y2), where (x1, y1) and (x2, y2) represent the coordinates of the upper left corner and the lower right corner of the stain labeling box, respectively.
3. The distribution network overhead line insulated terminal dirt identification method based on the YOLO v5 model as claimed in claim 2, characterized in that: the insulated terminal image data are manually annotated using the labelImg tool.
4. The distribution network overhead line insulated terminal dirt identification method based on the YOLO v5 model as claimed in claim 3, characterized in that: the method for image blocking of the input image by the Backbone module is specifically as follows: let the size of the input image be W1 × H1, the size of the sliding window be W2 × H2, the moving step of the window be d, and the position coordinate of the stain labeling box be P = (x1, y1, x2, y2); after the blocking operation, the position of the stain labeling box is P1 = (x1', y1', x2', y2'), calculated as:

x1' = x1 - i*d
y1' = y1 - j*d
x2' = x2 - i*d
y2' = y2 - j*d

where i is the number of steps the sliding window has slid to the right and j is the number of steps it has slid down.
5. The distribution network overhead line insulated terminal dirt identification method based on the YOLO v5 model as claimed in claim 4, characterized in that: during training, the training picture is divided into K × K grids, each grid is responsible for predicting B anchor boxes, and the conversion between the predicted values and the target position is:

x = σ(t_x) + c_x
y = σ(t_y) + c_y
w = a_w * e^(t_w)
h = a_h * e^(t_h)

where t_x and t_y are the offsets of the prediction box center from the upper-left corner of the grid cell, t_w and t_h are the scaling factors between the prediction box and the anchor box in width and height, a_w and a_h are the width and height of the anchor box, c_x and c_y are the coordinates of the upper-left corner of the grid cell containing the anchor box, (x, y) is the center of the stain position predicted by the network, and w and h are the width and height of the prediction box.
6. The distribution network overhead line insulated terminal dirt identification method based on the YOLO v5 model as claimed in claim 5, characterized in that: in the step of constructing a loss function to calculate the loss value between the predicted stain position coordinate P' and the position coordinate P of the stain labeling box in the corresponding input image, the GIoU Loss position regression loss function constructed for the stain labeling box position coordinate P is:

GIoU = IoU - |C - (A ∪ B)| / |C|
L_box = λ_coord * Σ_{i=0}^{K×K} Σ_{j=0}^{B} 1_{ij}^{obj} * (1 - GIoU)

where A and B are the prediction box and the real box and C is the smallest box enclosing both, λ_coord is the position loss factor, (x̂_i, ŷ_i) are the true center coordinates of the target, (ŵ_i, ĥ_i) are the true width and height of the target, and 1_{ij}^{obj} takes the value 1 if the anchor box at (i, j) contains a target and 0 otherwise.
7. The distribution network overhead line insulated terminal dirt identification method based on the YOLO v5 model as claimed in claim 6, characterized in that: the collected field insulated terminal images are divided into pictures of a specified size before detection, using the moving window method, with the division size consistent with the training image size used in the image blocking of the high-resolution insulator stain data set; let the size of the collected picture be W1 × H1, the divided size be W2 × H2, and the moving step of the window be d; the total number of divided pictures obtained from one collected image is M, calculated as:

M = ⌈(W1 - W2)/d + 1⌉ × ⌈(H1 - H2)/d + 1⌉

where ⌈X⌉ denotes rounding X up, and the divided pictures are marked S_ij, with i = 1, …, ⌈(W1 - W2)/d + 1⌉ and j = 1, …, ⌈(H1 - H2)/d + 1⌉; during detection, the S_ij are input in batches into the insulated terminal stain detection model for detection.
8. The distribution network overhead line insulated terminal dirt identification method based on the YOLO v5 model as claimed in claim 7, characterized in that: when a stain P2 with coordinates (x, y) is detected in picture S_ij, the divided pictures need to be stitched together and displayed on the image detection software interface, and the final stain position coordinate P2' = (x', y') is recorded, where x' = x + i*d and y' = y + j*d.
9. The distribution network overhead line insulated terminal dirt identification method based on the YOLO v5 model as claimed in any one of claims 1-8, characterized in that: the YOLO v5 model comprises an input end, a Backbone module, a Neck network, a Head network and an output end;
input end: the system is used for enhancing the collected insulating terminal pictures through Mosaic data, and splicing the pictures in the modes of random zooming, random collection, random arrangement and the like;
Backbone module: first, block slicing is performed on the insulated terminal picture processed by the input end through the Focus module; then, through the CSP1_X structure, the feature map is processed in two parts, one part undergoing a convolution operation and the other being concatenated (concat) with the result of the former convolution;
Neck network: information from different stages of the feature map is fused through the FPN-PAN structure;
Head network: GIOU_Loss is adopted as the loss function of the bounding box, comparing the aspect ratio, overlap area and center point distance of the prediction box and the actual box, thereby determining the position of the detected target.
CN202210998021.1A 2022-08-19 2022-08-19 YOLO v5 model-based distribution network overhead line insulated terminal dirt identification method Pending CN115239697A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210998021.1A CN115239697A (en) 2022-08-19 2022-08-19 YOLO v5 model-based distribution network overhead line insulated terminal dirt identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210998021.1A CN115239697A (en) 2022-08-19 2022-08-19 YOLO v5 model-based distribution network overhead line insulated terminal dirt identification method

Publications (1)

Publication Number Publication Date
CN115239697A true CN115239697A (en) 2022-10-25

Family

ID=83680398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210998021.1A Pending CN115239697A (en) 2022-08-19 2022-08-19 YOLO v5 model-based distribution network overhead line insulated terminal dirt identification method

Country Status (1)

Country Link
CN (1) CN115239697A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination