CN116403129A - Insulator detection method suitable for complex climate environment - Google Patents

Insulator detection method suitable for complex climate environment

Info

Publication number
CN116403129A
CN116403129A
Authority
CN
China
Prior art keywords
insulator
image
fog
feature
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310305929.4A
Other languages
Chinese (zh)
Inventor
曹忠
陈开鸿
尚文利
赵文静
陈俊佐
刘晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN202310305929.4A
Publication of CN116403129A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Insulators (AREA)

Abstract

The invention provides an insulator detection method adapted to complex climate environments, which comprises the following steps: expanding a public insulator data set by a random transformation method; obtaining an insulator data set containing complex climate environment information by means of a simulated-fog algorithm and a simulated-rain algorithm; manually labeling the training set to generate annotation files; performing iterative training of a yolov5 network model with the image data and annotation files of the training set, and saving the model weights with the highest accuracy; and performing model evaluation with the highest-accuracy model weights and the test set to obtain the insulator detection result. The insulator detection model obtained by the invention maintains high accuracy under complex climate conditions and addresses the detection difficulties caused by the heavy noise in aerial images acquired by unmanned aerial vehicle inspection in complex climate environments.

Description

Insulator detection method suitable for complex climate environment
Technical Field
The invention relates to the field of insulator detection, in particular to an insulator detection method suitable for a complex climate environment.
Background
Insulators are important components of power transmission lines, providing mechanical support and electrical insulation. Because an insulator is exposed to the natural environment for a long time, it is affected by environmental factors such as lightning strikes, temperature differences, ice and snow, and is subjected to strong electric fields and long-term working loads during operation, so damage occurs easily. Such damage may interrupt the local power supply of a transmission line, and in severe cases cause regional grid faults, directly threatening the safe and stable operation of the power grid system. Inspecting insulators regularly and finding defective insulators in time can prevent such accidents in advance, and is therefore of great significance to the safety and stability of the power grid system.
At present, insulator inspection is mainly carried out by traditional manual inspection, robot inspection and unmanned aerial vehicle inspection. Traditional manual inspection means that a maintainer climbs the transmission tower and inspects the insulators with the power cut off; this approach is costly, inefficient and dangerous. Robot inspection requires a special mechanical structure that can move along the wire and cross obstacles such as dampers; its drawbacks are the high requirements on robot operation and a complex process. Combining unmanned aerial vehicle inspection with deep learning has therefore become a research hotspot in transmission line inspection: aerial images are acquired by the camera on the unmanned aerial vehicle platform, and defective insulators are then classified and located by a target detection model. This approach is low-cost, efficient and fast, and copes with the difficulty of detecting targets in aerial images against complex backgrounds and under varying illumination conditions.
However, the combination of unmanned aerial vehicle inspection and deep learning still has shortcomings. Some regions of China frequently alternate between sunny, cloudy, foggy and rainy weather, and aerial insulator images taken in fog or rain contain considerable noise, which makes detection difficult; unmanned aerial vehicle inspection therefore struggles to adapt to complex climate environments, and detection efficiency suffers.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a method for detecting an insulator adapted to a complex climate environment, the method comprising:
S1, acquiring a public insulator data set photographed in sunny and cloudy weather environments, and expanding the public insulator data set through a random transformation method;
S2, processing the expanded public insulator data set with a simulated-fog algorithm and a simulated-rain algorithm to obtain an insulator data set containing complex climate environment information;
S3, dividing the insulator data set obtained in S2 into a training set and a test set in proportion, and annotating each picture in the training set with the EasyData platform to generate annotation files;
S4, performing iterative training on a yolov5 network model into which an attention mechanism has been introduced, using the image data and annotation files of the training set, and saving the model weights with the highest accuracy;
S5, performing model evaluation with the highest-accuracy model weights and the test set to obtain the insulator detection result.
Further, the random transformation method includes, but is not limited to, translation, rotation and mirroring operations.
Further, processing the expanded image data with the simulated-fog algorithm and the simulated-rain algorithm specifically comprises:
The simulated-fog algorithm uses a center-point fog synthesis method, which is based on the McCartney atmospheric scattering model:
I(x)=J(x)t(x)+A[1-t(x)]
where J(x) is the original fog-free image; I(x) is the synthesized foggy image; A is the brightness of the atmospheric light component; and t(x) is the transmittance. The brightness A lies in the range [0,1] and determines the color, i.e. the gray value, of the synthesized fog: the closer A is to 1, the whiter the synthesized fog; the closer A is to 0, the darker the synthesized fog.
The transmittance t(x) also lies in [0,1] and determines how much fog appears in the output image: when t(x) is set to 0, the output is a pure fog image; when t(x) is set to 1, the output is the original fog-free image.
After A is determined, a transmittance t(x) is set for each pixel of the original image, calculated as follows:
t(x)=e^(-δ·[-0.04·√((w-w_c)²+(h-h_c)²)+S])
where -0.04 is a default parameter; (w, h) is a pixel of the image; (w_c, h_c) is the center point of the image, i.e. the center of the fog; S is the size of the fog, taken as the square root of the maximum of the image width and height; and δ is the fog density: the larger δ, the smaller t(x); conversely, the smaller δ, the larger the transmittance t(x).
Further, the simulated-rain algorithm uses random noise and filtering to superimpose simulated raindrop trajectories on the image, specifically as follows:
using a filter2D function in OpenCV and a parameter v to generate random noise with different densities on an original image to simulate the density of raindrops, wherein the size of v controls the number of raindrops;
in order to simulate raindrops with different sizes and directions, parameters w, l and a are introduced to respectively control the width, length and inclination angle of the raindrops;
the getRotationMatrix2D, warpAffine, GaussianBlur, filter2D and normalize functions of OpenCV are used to obtain raindrop effects of different sizes and directions;
and superposing the generated raindrop noise and the original image to obtain the simulated raining scene.
Further, the process of using the filter2D function and the parameter v in OpenCV to generate random noise of different densities on the original image, simulating how dense the raindrops are, uses uniform random numbers and a threshold to control the noise level.
Further, the specific step of S3 includes:
the insulator data set obtained in step S2 is divided into a training set and a test set at a ratio of 8:2, and each picture of the data set is then manually annotated with the EasyData platform;
the manual labeling comprises the following specific steps:
creating two labels on the labeling interface, wherein the two labels correspond to the normal insulator and the defect insulator respectively;
marking the insulator targets of each picture by adopting a rectangular frame mode, and selecting a label corresponding to the targets;
and generating the xml annotation file from the annotation information.
Further, each xml annotation file corresponds to one annotated picture, and the annotation information of the xml annotation file comprises 4 position attributes and 1 category attribute; the position attributes are the upper-left corner coordinates (x_min, y_min) and the lower-right corner coordinates (x_max, y_max) of the rectangular box, and the category attribute is the label selected when the picture was annotated.
Further, the yolov5 network model into which the attention mechanism is introduced comprises a feature extraction layer, a feature fusion layer and an output layer;
the specific steps of the S4 are as follows:
preprocessing an input image, wherein the preprocessing is to enhance data of the image by using a Mosaic method;
the feature extraction layer performs multi-layer convolution feature extraction on the preprocessed image to form feature maps, wherein the feature extraction layer consists of a 6×6 convolution layer, a CSP structure, a spatial pyramid pooling structure and several other convolution layers;
the attention mechanism is implemented in insulator fault detection by adding an SE attention module into the CSP structure; the core of the SE module is a Squeeze operation and an Excitation operation, where the Squeeze operation generates the channel statistic Z_c using global average pooling, and the Excitation operation uses a fully connected layer and an activation function to generate a weight for each feature channel; the channel statistic Z_c is calculated as follows:
Z_c = (1/(W×H)) Σ_{i=1..W} Σ_{j=1..H} u_c(i, j)
where W and H are the width and height of the input feature map of the SE module, respectively, and u_c(i, j) is the value at pixel (i, j) of channel c of the input feature map.
The spatial pyramid pooling structure applies a group of pooling operators of different scales and splices the results, converting feature maps of arbitrary size into feature vectors of fixed size.
After the feature extraction layer extracts the features of the image, the feature fusion layer is used for realizing fusion of deep information and shallow information, so that the accuracy of target detection is improved;
the output layer of the yolov5 network is designed into three output heads with different scales to carry out multi-scale prediction, a feature graph after the feature fusion layer is connected with the output layer to obtain an output result, an nms non-maximum value suppression method is used for carrying out loss calculation, the weight is updated, an index reflecting the model precision is output, and the model weight with the highest precision is finally saved through iterative training of the model; for loss calculation, yolov5 uses the GIoU loss function L GIoU Loss function L GIoU Is defined as follows:
L_GIoU = 1 - GIoU
GIoU = IoU - (A_c - U)/A_c
IoU = I/U
where IoU is the intersection-over-union ratio, which measures the overlap between the predicted box and the real box; A_c is the area of the smallest rectangle enclosing the predicted box and the real box; I is the intersection area of the predicted box and the real box; and U is the sum of the areas of the predicted and real boxes minus their intersection area.
Further, the feature fusion layer comprises an FPN structure and a PAN structure;
the specific steps for realizing the fusion of the deep information and the shallow information by using the feature fusion layer are as follows: the FPN structure fuses the deep feature map with the shallow feature map through upsampling; the PAN structure fuses the shallow feature map with the deep feature map by downsampling.
Further, the accuracy evaluation indices of the yolov5 network are precision (Precision), recall (Recall), the F1 score and the mean average precision (mAP), calculated as follows:
Precision = TP/(TP + FP)
Recall = TP/(TP + FN)
F1 = 2 × Precision × Recall/(Precision + Recall)
mAP = (1/K) Σ_{k=1..K} AP_k
where TP is the number of detection boxes whose IoU with a target box exceeds the threshold IoU_threshold; FP is the number of detection boxes whose IoU with every target box is below IoU_threshold; FN is the number of target boxes that are not detected; mAP is the mean of the per-class AP values, where AP is the average precision; and K is the number of target detection classes.
Compared with the prior art, the invention has the advantages that:
according to the method, aiming at the problem of difficult detection caused by more noise in aerial images shot by unmanned aerial vehicle inspection in a complex climate environment, an insulator image is processed by using a simulated foggy day algorithm and a simulated rainy day algorithm, and an insulator data set containing complex climate environment information is obtained. Iterative training is carried out on the yolov5 network which is introduced into the SE attention mechanism by the data set and the corresponding annotation files, the obtained insulator detection model can keep higher detection precision in sunny, overcast, foggy and rainy weather environments, the influence of climate factors on unmanned aerial vehicle inspection is reduced, and dynamic detection in unmanned aerial vehicle inspection is further realized.
Drawings
The invention will be further described with reference to the accompanying drawings; the embodiments do not limit the invention in any way, and other drawings can be obtained from the following drawings by one of ordinary skill in the art without inventive effort.
Fig. 1 is a general flow chart of an insulator detection method adapted to a complex climate environment.
Fig. 2 is an internal frame of an insulator dataset containing complex climate environment information in accordance with an embodiment of the present invention.
Fig. 3 is a block diagram of a feature extraction layer in yolov5 according to an embodiment of the invention.
Fig. 4 is a block diagram of a feature fusion layer in yolov5 according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a SE attention module according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
Embodiment one:
the invention provides an insulator detection method adapting to complex climate environment, the general flow of the method is shown in figure 1, and the specific steps are as follows:
S1, acquiring a public insulator data set photographed in sunny and cloudy environments, and expanding the data set through random transformations such as translation, rotation and mirroring.
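The patent does not give code for these transformations; for illustration only (this code is not part of the patent text), the three random transformations could be implemented with OpenCV and NumPy roughly as follows, where the shift fraction, angle range and flip probability are assumed values:

import cv2
import numpy as np

def random_translate(img, max_shift=0.1):
    # Shift the image by a random fraction of its width and height.
    h, w = img.shape[:2]
    tx = np.random.uniform(-max_shift, max_shift) * w
    ty = np.random.uniform(-max_shift, max_shift) * h
    M = np.float32([[1, 0, tx], [0, 1, ty]])
    return cv2.warpAffine(img, M, (w, h))

def random_rotate(img, max_angle=30):
    # Rotate the image about its center by a random angle.
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), np.random.uniform(-max_angle, max_angle), 1.0)
    return cv2.warpAffine(img, M, (w, h))

def random_mirror(img):
    # Horizontal flip with probability 0.5.
    return cv2.flip(img, 1) if np.random.rand() < 0.5 else img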
S2, because the fog day insulator image and the rain day insulator image are difficult to collect, the expanded image data is processed through a fog day simulation algorithm and a rain day simulation algorithm, and an insulator data set containing complex climate environment information is obtained.
The simulated-fog algorithm adopts a center-point fog synthesis method: fog is synthesized and diffused from the center point of the picture, so the closer a pixel is to the center point, the stronger the fog synthesis effect; conversely, the farther from the center point, the weaker the effect. The center-point fog synthesis method is based on the McCartney atmospheric scattering model:
I(x)=J(x)t(x)+A[1-t(x)]
wherein J (x) represents the original haze-free image; i (x) represents a synthetic fog image; a represents the brightness of the atmospheric light component; t (x) represents transmittance.
The brightness A lies in the range [0,1] and represents the color, i.e. the gray value, of the synthesized fog: the closer A is to 1, the whiter the synthesized fog; the closer A is to 0, the darker the synthesized fog. The transmittance t(x) also lies in [0,1] and determines how much fog appears in the output image: when t(x) is set to 0, the output image is a pure fog image; when t(x) is set to 1, the output image is the original fog-free image.
After A is determined, an appropriate transmittance t(x) needs to be set for each pixel of the original image. The formula for t(x) is as follows:
t(x)=e^(-δ·[-0.04·√((w-w_c)²+(h-h_c)²)+S])
where -0.04 is a default parameter; (w, h) is a pixel of the image; (w_c, h_c) is the center point of the image, i.e. the center of the fog; S is the size of the fog, generally taken as the square root of the maximum of the image width and height; and δ is the fog density: the larger δ, the smaller t(x), i.e. the heavier the fog; conversely, the smaller δ, the larger t(x), i.e. the thinner the fog. Insulator image data containing foggy-weather information are obtained through the simulated-fog algorithm.
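For illustration only (not the patent's own code), the center-point fog synthesis described above could be written as follows with NumPy, assuming the input image is a float array scaled to [0,1]; the chosen values of A and δ are assumptions:

import numpy as np

def synthesize_fog(img, A=0.8, delta=0.06):
    # img: H x W x 3 float array in [0, 1]; A: atmospheric light brightness; delta: fog density.
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0                      # fog center (w_c, h_c)
    size = np.sqrt(max(w, h))                      # S: square root of max(width, height)
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    d = -0.04 * dist + size                        # distance term with the -0.04 default parameter
    t = np.clip(np.exp(-delta * d), 0.0, 1.0)[..., None]   # transmittance t(x), one value per pixel
    return img * t + A * (1.0 - t)                 # I(x) = J(x)t(x) + A[1 - t(x)]

With this sketch, a larger delta gives a smaller t(x) near the center and therefore heavier fog, matching the behaviour described above.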
The simulated-rain algorithm uses random noise and filtering to superimpose simulated raindrop trajectories on the image.
The specific process is as follows:
First, the filter2D function and the parameter v in OpenCV are used to generate random noise of different densities on the original image to simulate how dense the raindrops are; the magnitude of v controls the number of raindrops, and the process uses uniform random numbers and a threshold to control the noise level.
Secondly, to simulate raindrops of different sizes and directions, the parameters w, l and a are also introduced to control the width, length and inclination angle of the raindrops respectively. The getRotationMatrix2D, warpAffine, GaussianBlur, filter2D and normalize functions of OpenCV are then used to obtain raindrop effects of different sizes and directions.
Finally, the generated raindrop noise is applied to the pixels of each channel and superimposed on the original image to obtain the simulated rainy scene. Insulator image data containing rainy-weather information are obtained through the simulated-rain algorithm.
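Again for illustration only (not the patent's own code), the rain simulation could be assembled from the OpenCV functions named above roughly as follows; the default values of v, l, w, a and the blending weight are assumptions:

import cv2
import numpy as np

def synthesize_rain(img, v=500, l=21, w=1, a=-30, blend=0.8):
    # img: H x W x 3 uint8 image; v: number of raindrops; l/w: streak length/width; a: streak angle.
    h, wd = img.shape[:2]
    noise = np.zeros((h, wd), np.float32)
    ys = np.random.randint(0, h, v)
    xs = np.random.randint(0, wd, v)
    noise[ys, xs] = 255.0                                    # uniform random noise seeds the raindrops
    kernel = np.zeros((l, l), np.float32)                    # vertical line kernel of length l
    kernel[:, l // 2] = 1.0 / l
    M = cv2.getRotationMatrix2D((l / 2, l / 2), a, 1.0)      # tilt the kernel by angle a
    kernel = cv2.warpAffine(kernel, M, (l, l))
    streaks = cv2.filter2D(noise, -1, kernel)                # stretch each noise point into a streak
    streaks = cv2.GaussianBlur(streaks, (w * 2 + 1, w * 2 + 1), 0)   # soften streaks to width ~w
    streaks = cv2.normalize(streaks, None, 0, 255, cv2.NORM_MINMAX)
    rain = cv2.cvtColor(streaks.astype(np.uint8), cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(img, 1.0, rain, blend, 0.0)       # superimpose the streaks on the original image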
S3, the insulator data set obtained in the step S2 is processed according to the following steps of about 8: the scale of 2 is divided into training and test sets as shown in fig. 2. Wherein the training set comprises 16485 images, the test set comprises 4078 images, and the training set and the test set both comprise complex climate environment information. Then, each picture of the data set is manually marked by using an easy data platform, and the specific method is as follows:
two labels are created at the labeling interface and correspond to the normal insulator and the defect insulator respectively. And then, marking the insulator targets of each picture in a rectangular frame mode, and selecting the labels corresponding to the targets. And finally, generating the xml annotation file from the annotation information.
Each xml file corresponds to one annotated picture, and its annotation information comprises 4 position attributes and 1 category attribute. The position attributes are the upper-left corner coordinates (x_min, y_min) and the lower-right corner coordinates (x_max, y_max) of the rectangular box. The category attribute is the label selected when the picture was annotated. Together, the position and category attributes give the location of the insulator region in the picture and its category (i.e. normal insulator or defective insulator).
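The patent does not specify the exact xml schema; assuming a conventional VOC-style layout with object/name/bndbox tags (an assumption made purely for illustration), such a file could be read back for training roughly as follows:

import xml.etree.ElementTree as ET

def parse_annotation(xml_path):
    # Returns a list of (label, x_min, y_min, x_max, y_max) tuples for one annotated picture.
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        label = obj.find("name").text                 # e.g. "normal insulator" or "defective insulator"
        bb = obj.find("bndbox")
        coords = tuple(int(float(bb.find(tag).text)) for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label,) + coords)
    return boxes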
S4, a yolov5 network model into which an attention mechanism has been introduced (the model comprises a feature extraction layer, a feature fusion layer and an output layer) is iteratively trained with the image data and annotation files of the training set, and the model weights with the highest accuracy are saved. The specific method is as follows:
the input image is preprocessed. The image is data enhanced using the Mosaic approach. According to the method, different images are spliced in a random scaling, random cutting and random arrangement mode, so that the background of a detection target in the picture is richer. In addition, the adaptive picture scaling is performed on pictures of different sizes.
The feature extraction layer performs multi-layer convolution feature extraction on the preprocessed image to form feature maps. The feature extraction layer consists of a 6×6 convolution layer, a CSP structure, a spatial pyramid pooling structure (SPPF) and several other convolution layers, as shown in fig. 3. The 6×6 convolution layer downsamples the image without losing information, concentrating the information of the W and H dimensions of the image into the channel space. The CSP structure adopts the idea of residual neural networks, effectively increasing the depth of the network, mitigating vanishing gradients and improving the feature extraction capability of the network.
In actual transmission line insulator detection, the insulator targets to be detected are often set against complex backgrounds, so they are difficult to identify and easy to miss. An attention mechanism is therefore introduced into insulator fault detection so that the network can focus on the important features of the insulator and suppress or ignore irrelevant feature information in the background, effectively improving the accuracy of insulator fault detection.
A specific implementation of the attention mechanism is to add an SE attention module to the CSP structure, as shown in fig. 3. The core of the SE module is a compression (Squeeze) operation and an excitation (Excitation) operation. Concretely, global average pooling is applied to the input feature map, reducing each channel to a single value; the pooled result is then passed through a fully connected layer and a Sigmoid activation function to produce a weight for each feature map, and the weights are multiplied with the input features. As shown in fig. 5, the Squeeze operation generates the channel statistic Z_c using global average pooling, and the Excitation operation generates a weight value for each feature channel using the fully connected layer and the Sigmoid activation function. Z_c is calculated as follows:
Z_c = (1/(W×H)) Σ_{i=1..W} Σ_{j=1..H} u_c(i, j)
where W and H are the width and height of the input feature map of the SE module, respectively, and u_c(i, j) is the value at pixel (i, j) of channel c of the input feature map.
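For illustration only (not the patent's own code), a minimal PyTorch sketch of such an SE module could look as follows; the two-layer bottleneck with a reduction ratio is the common SE formulation and is an assumption here:

import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # Squeeze: global average pooling -> Z_c per channel
        self.fc = nn.Sequential(                       # Excitation: fully connected layers + Sigmoid
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        z = self.pool(x).view(b, c)                    # channel statistics, one value per channel
        weight = self.fc(z).view(b, c, 1, 1)           # per-channel weight in (0, 1)
        return x * weight                              # re-weight the input feature map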
SPPF applies a group of pooling operators of different scales and splices the results, converting a feature map of arbitrary size into a fixed-size feature representation. This structure removes the need to fix the input image size of the CNN, which would otherwise cause unnecessary loss of precision.
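For illustration only, the SPPF block as commonly used in yolov5 releases can be sketched in PyTorch as follows (three stacked 5×5 max poolings emulate pooling at several scales before concatenation); the channel sizes and kernel size are assumptions:

import torch
import torch.nn as nn

class SPPF(nn.Module):
    def __init__(self, in_channels, out_channels, k=5):
        super().__init__()
        hidden = in_channels // 2
        self.conv1 = nn.Conv2d(in_channels, hidden, kernel_size=1)
        self.conv2 = nn.Conv2d(hidden * 4, out_channels, kernel_size=1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.conv1(x)
        y1 = self.pool(x)                  # 5x5 pooling
        y2 = self.pool(y1)                 # stacked poolings act like 9x9 pooling
        y3 = self.pool(y2)                 # and like 13x13 pooling
        return self.conv2(torch.cat([x, y1, y2, y3], dim=1))   # splice the multi-scale results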
Next, the feature fusion layer is used to fuse deep information with shallow information, improving the accuracy of target detection. The feature fusion layer is composed of an FPN structure and a PAN structure, as shown in fig. 4; the fusion process is as follows:
because the deep feature map carries stronger semantic features and weaker positioning information, the shallow feature map carries stronger positioning information and weaker semantic features. The FPN structure fuses the deep feature map through upsampling and the shallow feature map, and transmits the deep semantic features to the shallow layer, so that semantic expression on multiple scales is enhanced. The PAN structure fuses the shallow feature map with the deep feature map through downsampling, and transmits shallow positioning information to the deep layer, so that positioning capability on multiple scales is enhanced. The fusion of deep information and shallow information can be realized through the combination of the FPN structure and the PAN structure.
For the output of the network, the output layer of yolov5 is designed with three output heads of different scales for multi-scale prediction. The feature maps produced by the feature fusion layer are fed to the output layer to obtain the output result. The nms non-maximum suppression method screens out the final target prediction boxes, the loss between these boxes and the annotated boxes in the annotation files is calculated, the weight parameters are updated by back propagation, and indices reflecting the model accuracy are output. Through iterative training of the model, the model weights with the highest accuracy are finally saved. For the loss calculation, yolov5 uses the GIoU loss function L_GIoU, which is defined as follows:
L_GIoU = 1 - GIoU
GIoU = IoU - (A_c - U)/A_c
IoU = I/U
where IoU is the intersection-over-union ratio, which measures the overlap between the predicted box and the real box; A_c is the area of the smallest rectangle enclosing the predicted box and the real box; I is the intersection area of the predicted box and the real box; and U is the sum of the areas of the predicted and real boxes minus their intersection area.
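For illustration only, the GIoU loss for a single pair of axis-aligned boxes, each given as (x_min, y_min, x_max, y_max), can be computed as in the following sketch:

def giou_loss(box_p, box_g):
    # box_p, box_g: (x_min, y_min, x_max, y_max) of the predicted box and the real (ground-truth) box.
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)                 # intersection area I
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    union = area_p + area_g - inter                                   # U
    cx1, cy1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    cx2, cy2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)                                # A_c: smallest enclosing rectangle
    iou = inter / union
    giou = iou - (area_c - union) / area_c                            # GIoU = IoU - (A_c - U)/A_c
    return 1.0 - giou                                                 # L_GIoU = 1 - GIoU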
The accuracy of the yolov5 network is mainly evaluated with precision (Precision), recall (Recall), the F1 score and the mean average precision (mAP). The calculation formulas are as follows:
Precision = TP/(TP + FP)
Recall = TP/(TP + FN)
F1 = 2 × Precision × Recall/(Precision + Recall)
mAP = (1/K) Σ_{k=1..K} AP_k
where TP is the number of detection boxes whose IoU with a target box exceeds the threshold IoU_threshold, IoU_threshold being the IoU threshold, typically 0.5; FP is the number of detection boxes whose IoU with every target box is below IoU_threshold; FN is the number of target boxes that are not detected; mAP is the mean of the AP values of all classes, where AP is the average precision. The AP is calculated as follows: for each distinct Recall value (including 0 and 1), take the maximum Precision among all points whose Recall is greater than or equal to that value, and then take the area under the resulting PR curve as the AP. K is the number of target detection classes.
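For illustration only, once TP, FP and FN have been counted by matching detections against the target boxes at a fixed IoU threshold, the first three indices and the mAP can be computed as in this small sketch:

def detection_metrics(tp, fp, fn):
    # tp, fp, fn: counts obtained by matching detection boxes to target boxes at a fixed IoU threshold.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

def mean_average_precision(ap_per_class):
    # ap_per_class: one AP value per class (K classes); mAP is their mean.
    return sum(ap_per_class) / len(ap_per_class)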
S5, model evaluation is carried out with the highest-accuracy model weights obtained in step S4 and the test set to obtain the insulator detection result. The specific results are as follows:
The precision and recall are maintained at about 94% and 99.5%, respectively; the F1 score reaches up to 96.7%; and the mAP reaches up to 99.5%. The detection results show that this insulator detection method maintains high detection accuracy under complex climate conditions.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. An insulator detection method adapting to a complex climate environment, which is characterized by comprising the following steps:
S1, acquiring a public insulator data set photographed in sunny and cloudy weather environments, and expanding the public insulator data set through a random transformation method;
S2, processing the expanded public insulator data set with a simulated-fog algorithm and a simulated-rain algorithm to obtain an insulator data set containing complex climate environment information;
S3, dividing the insulator data set obtained in S2 into a training set and a test set in proportion, and annotating each picture in the training set with the EasyData platform to generate annotation files;
S4, performing iterative training on a yolov5 network model into which an attention mechanism has been introduced, using the image data and annotation files of the training set, and saving the model weights with the highest accuracy;
and S5, performing model evaluation with the highest-accuracy model weights and the test set to obtain the insulator detection result.
2. A method of detecting an insulator adapted to a complex climatic environment according to claim 1, wherein the random transformation method includes, but is not limited to, a translation operation, a rotation operation and a mirror operation.
3. The method for detecting an insulator adapted to a complex climate according to claim 1, wherein the processing of the expanded image data by a simulated fog algorithm and a simulated rain algorithm comprises:
the simulated-fog algorithm uses a center-point fog synthesis method, which is based on the McCartney atmospheric scattering model with the following formula:
I(x)=J(x)t(x)+A[1-t(x)]
where J(x) is the original fog-free image; I(x) is the synthesized foggy image; A is the brightness of the atmospheric light component; and t(x) is the transmittance; the brightness A lies in the range [0,1] and represents the color, i.e. the gray value, of the synthesized fog: the closer A is to 1, the whiter the synthesized fog; the closer A is to 0, the darker the synthesized fog;
the transmittance t(x) also lies in [0,1] and determines how much fog appears in the output image: when t(x) is set to 0, the output image is a pure fog image; when t(x) is set to 1, the output image is the original fog-free image;
after A is determined, a transmittance t(x) is set for each pixel of the original image, and the formula of the transmittance t(x) is as follows:
t(x)=e^(-δ·[-0.04·√((w-w_c)²+(h-h_c)²)+S])
where -0.04 is a default parameter; (w, h) is a pixel of the image; (w_c, h_c) is the center point of the image, i.e. the center of the fog; S is the size of the fog, taken as the square root of the maximum of the image width and height; and δ is the fog density: the larger δ, the smaller t(x); conversely, the smaller δ, the larger the transmittance t(x).
4. The method for detecting the insulator adapting to the complex climate environment according to claim 1, wherein the principle of the rainy day simulation algorithm is that a random noise and filter method is used, and the motion trail of the simulated raindrops is superimposed for the image, and the specific steps are as follows:
using a filter2D function in OpenCV and a parameter v to generate random noise with different densities on an original image to simulate the density of raindrops, wherein the size of v controls the number of raindrops;
in order to simulate raindrops with different sizes and directions, parameters w, l and a are introduced to respectively control the width, length and inclination angle of the raindrops;
the getRotationMatrix2D, warpAffine, GaussianBlur, filter2D and normalize functions of OpenCV are used to obtain raindrop effects of different sizes and directions;
and superposing the generated raindrop noise and the original image to obtain the simulated raining scene.
5. The method according to claim 4, wherein the process of generating random noise of different densities on the original image using the filter2D function and the parameter v in OpenCV to simulate the degree of the rain drop density uses uniform random numbers and a threshold to control the noise level.
6. The method for detecting an insulator adapted to a complex climate according to claim 1, wherein the specific step of S3 comprises:
the insulator data set obtained in step S2 is divided into a training set and a test set at a ratio of 8:2, and each picture of the data set is then manually annotated with the EasyData platform;
the manual labeling comprises the following specific steps:
creating two labels on the labeling interface, wherein the two labels correspond to the normal insulator and the defect insulator respectively;
marking the insulator targets of each picture by adopting a rectangular frame mode, and selecting a label corresponding to the targets;
and generating the xml annotation file from the annotation information.
7. The method for detecting an insulator adapted to a complex climate environment according to claim 6, wherein each xml annotation file corresponds to one annotated picture, and the annotation information of the xml annotation file comprises 4 position attributes and 1 category attribute; the position attributes are the upper-left corner coordinates (x_min, y_min) and the lower-right corner coordinates (x_max, y_max) of the rectangular box, and the category attribute is the label selected when the picture was annotated.
8. The method for detecting an insulator adapted to a complex climate according to claim 1, wherein the yolov5 network model of the attention-introducing mechanism comprises a feature extraction layer, a feature fusion layer and an output layer;
the specific steps of the S4 are as follows:
preprocessing an input image, wherein the preprocessing is to enhance data of the image by using a Mosaic method;
the feature extraction layer performs multi-layer convolution feature extraction on the preprocessed image to form feature maps, wherein the feature extraction layer consists of a 6×6 convolution layer, a CSP structure, a spatial pyramid pooling structure and several other convolution layers;
an attention mechanism is introduced in the fault detection of the insulator;
the space pyramid pooling structure is operated by using a group of pooling operators with different scales, and then a result is spliced to convert a feature map with any size into a feature vector with fixed size;
after the feature extraction layer extracts the features of the image, the feature fusion layer is used for realizing fusion of deep information and shallow information, so that the accuracy of target detection is improved;
the output layer of the yolov5 network is designed with three output heads of different scales for multi-scale prediction; the feature maps produced by the feature fusion layer are fed to the output layer to obtain the output result, the nms non-maximum suppression method screens the predictions, the loss is calculated and the weights are updated, indices reflecting the model accuracy are output, and the model weights with the highest accuracy are finally saved through iterative training of the model.
9. The method for detecting an insulator adapted to a complex climate according to claim 8, wherein the feature fusion layer comprises a FPN structure and a PAN structure;
the specific steps for realizing the fusion of the deep information and the shallow information by using the feature fusion layer are as follows: the FPN structure fuses the deep feature map with the shallow feature map through upsampling; the PAN structure fuses the shallow feature map with the deep feature map by downsampling.
10. The method for detecting an insulator adapted to a complex climate environment according to claim 8, wherein the accuracy evaluation indices of the yolov5 network are precision (Precision), recall (Recall), the F1 score and the mean average precision (mAP), calculated as follows:
Precision = TP/(TP + FP)
Recall = TP/(TP + FN)
F1 = 2 × Precision × Recall/(Precision + Recall)
mAP = (1/K) Σ_{k=1..K} AP_k
where TP is the number of detection boxes whose IoU with a target box exceeds the threshold IoU_threshold; FP is the number of detection boxes whose IoU with every target box is below IoU_threshold; FN is the number of target boxes that are not detected; mAP is the mean of the per-class AP values, where AP is the average precision; and K is the number of target detection classes.
CN202310305929.4A 2023-03-24 2023-03-24 Insulator detection method suitable for complex climate environment Pending CN116403129A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310305929.4A CN116403129A (en) 2023-03-24 2023-03-24 Insulator detection method suitable for complex climate environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310305929.4A CN116403129A (en) 2023-03-24 2023-03-24 Insulator detection method suitable for complex climate environment

Publications (1)

Publication Number Publication Date
CN116403129A true CN116403129A (en) 2023-07-07

Family

ID=87008376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310305929.4A Pending CN116403129A (en) 2023-03-24 2023-03-24 Insulator detection method suitable for complex climate environment

Country Status (1)

Country Link
CN (1) CN116403129A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422689A (en) * 2023-10-31 2024-01-19 南京邮电大学 Rainy day insulator defect detection method based on improved MS-PReNet and GAM-YOLOv7
CN117422689B (en) * 2023-10-31 2024-05-31 南京邮电大学 Rainy day insulator defect detection method based on improved MS-PReNet and GAM-YOLOv7
CN117351022A (en) * 2023-12-06 2024-01-05 长沙能川信息科技有限公司 Transmission line insulator defect detection method based on complex environment
CN117351022B (en) * 2023-12-06 2024-03-08 长沙能川信息科技有限公司 Transmission line insulator defect detection method based on complex environment

Similar Documents

Publication Publication Date Title
CN116403129A (en) Insulator detection method suitable for complex climate environment
CN109376768B (en) Aerial image tower signboard fault diagnosis method based on deep learning
CN114743119B (en) High-speed rail contact net hanger nut defect detection method based on unmanned aerial vehicle
CN112184692A (en) Multi-target detection method for power transmission line
CN109712127B (en) Power transmission line fault detection method for machine inspection video stream
CN112365468A (en) AA-gate-Unet-based offshore wind power tower coating defect detection method
CN112101138A (en) Bridge inhaul cable surface defect real-time identification system and method based on deep learning
CN114494908A (en) Improved YOLOv5 power transmission line aerial image defect detection method
CN115984672B (en) Detection method and device for small target in high-definition image based on deep learning
CN116503709A (en) Vehicle detection method based on improved YOLOv5 in haze weather
CN115239710A (en) Insulator defect detection method based on attention feedback and double-space pyramid
Yuan et al. Identification method of typical defects in transmission lines based on YOLOv5 object detection algorithm
CN116385911A (en) Lightweight target detection method for unmanned aerial vehicle inspection insulator
CN115546664A (en) Cascaded network-based insulator self-explosion detection method and system
CN114820609A (en) Photovoltaic module EL image defect detection method
CN114627044A (en) Solar photovoltaic module hot spot detection method based on deep learning
CN114119454A (en) Device and method for smoke detection of power transmission line
CN113971666A (en) Power transmission line machine inspection image self-adaptive identification method based on depth target detection
CN117830216A (en) Lightning damage defect detection method and device for insulated terminal of power transmission line
CN112862766A (en) Insulator detection method and system based on image data expansion technology
CN116994161A (en) Insulator defect detection method based on improved YOLOv5
CN116452848A (en) Hardware classification detection method based on improved attention mechanism
CN115410154A (en) Method for identifying thermal fault of electrical equipment of wind power engine room
CN115049856A (en) Fan blade fault detection method and system based on improved YOLOv5
CN115311196A (en) Multi-vision fusion wind driven generator blade surface defect detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination