CN113807450A - Unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution picture - Google Patents

Unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution picture

Info

Publication number
CN113807450A
CN113807450A (application CN202111116257.XA)
Authority
CN
China
Prior art keywords
representing
picture
aerial vehicle
unmanned aerial
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111116257.XA
Other languages
Chinese (zh)
Inventor
陈雷平
贺达江
段意强
李妮菲
周妮
丁黎明
舒薇
牛红军
刘柏罕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaihua University
Original Assignee
Huaihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaihua University filed Critical Huaihua University
Priority to CN202111116257.XA priority Critical patent/CN113807450A/en
Publication of CN113807450A publication Critical patent/CN113807450A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses an unmanned aerial vehicle (UAV) power line patrol fault detection method based on ultra-high resolution pictures, comprising the following steps: step one, establishing a fault-type picture data set of transmission lines aerially photographed by a UAV; step two, establishing a YOLO detection model; step three, training the YOLO detection model with low-resolution crops and their corresponding xml files to obtain a trained YOLO detection model; and step four, inputting a fault-type picture of the transmission line to be detected, aerially photographed by the UAV, into the trained YOLO detection model to obtain the detection result. The method segments the picture with a sliding window, expands the original data set with Mosaic data enhancement, adaptively computes anchor boxes from the custom data set, performs adaptive picture scaling, and uses a new network architecture with finer-grained, denser features and a matching loss function, effectively improving the accuracy of transmission line fault detection.

Description

Unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution picture
Technical Field
The invention belongs to the field of automatic inspection, and particularly relates to an unmanned aerial vehicle power line inspection fault detection method based on an ultrahigh resolution picture.
Background
In recent years, China's economy has developed continuously, stably and rapidly, and the growing demand for electric energy has driven rapid growth of China's power grid. By 2011, the scale of the power grid in China ranked first in the world. At the founding of new China in 1949, transmission lines of 35 kV and above totalled 6,475 km; in 1978 the total was 230,000 km; in 2006 it broke through 1,000,000 km; in 2018 it reached 1,892,000 km, 291 times the 1949 figure; and by 2025 it is expected to exceed 2,000,000 km. In practice, a considerable portion of high-voltage transmission lines must cross mountains and rivers, which makes routine manual inspection extremely difficult and dangerous. High-voltage transmission lines and the electrical components installed on them are completely exposed to the natural environment and are affected by sunlight, wind, snow, thunderstorms and other natural forces. As time passes, the damage accumulates, causing abrasion, corrosion and even strand breakage. At present, the traditional line patrol modes for transmission lines in China include ground patrol on foot and tower-by-tower climbing inspection; these modes place high demands on personnel and depend on experienced inspectors.
The traditional power transmission line inspection mode has the following problems:
(1) Low inspection efficiency. Transmission lines often cross sparsely populated areas such as mountains and forests; inspectors must climb over ridges on foot and can only view the lines remotely with the naked eye. With trees blocking the view, inspection efficiency is low and false and missed inspections are frequent.
(2) Poor inspection quality. Because transport and communication are inconvenient and inspection efficiency is low, the inspection period of a transmission line is long and faults are not resolved in time; often one round of inspection is not yet finished when lines already inspected develop new problems, greatly reducing the value of the work.
(3) Worker safety is not guaranteed. Some transmission lines are distributed in traffic dead zones and communication blind zones; once an accident happens to an inspector, his or her life is in danger.
Nowadays UAV line patrol is fairly common in China: ground staff control the UAV to photograph the transmission lines and towers; after the patrol, professionally trained recognition personnel examine the photos and mark the local regions with potential faults, such as broken strands, hanging foreign matter and stained insulators, and the actual geographic position of a fault is determined from the longitude and latitude information carried by the picture itself. However, the pictures taken during UAV power line patrol are ultra-high resolution, up to 5472 × 3078; faulty power line components occupy less than 1% of the whole picture; the fault types are numerous; and since the lines span thousands of kilometres, the background semantic information of the pictures is very rich (forests, mountains, rivers, fields, roads and so on) and the interference is heavy. Manual judgment depends on the skill of the image recognition workers and easily causes missed and false detections, increasing the verification workload of field maintenance personnel; meanwhile, the number of UAV aerial pictures of transmission lines is very large, so manually detecting component faults in the pictures is time-consuming and labour-intensive.
To guarantee the normal operation of the grid and the personal safety of maintainers, effective artificial intelligence picture recognition of high-voltage transmission lines, using computers and UAV-mounted inspection equipment as the platform, is of great significance. At present, conventional recognition methods are used to identify problem lines, for example classification with Histogram of Oriented Gradients (HOG) features plus a Support Vector Machine (SVM), or with the Deformable Part Model (DPM) algorithm. These traditional methods suffer from low recognition precision, high professional requirements on the annotator, and low recognition speed.
With the rise of deep learning in recent years, computer vision technology has made great progress. Researchers have proposed using Fast R-CNN, SSD and YOLO to detect transmission line faults. Fast R-CNN typically receives 1000 × 600 pixel pictures, whereas SSD uses 300 × 300 or 512 × 512 pixel inputs and YOLO uses 416 × 416 or 640 × 640 pixel inputs. Although these frameworks achieve good performance on conventional target detection data sets, none of them can directly process 5472 × 3078 ultra-high resolution pictures. In addition, these network models typically use multiple downsampling layers to generate discriminative object features, which is problematic if the object of interest covers only a few or a few tens of pixels. For example, the default YOLO network architecture downsamples by a factor of 32 and returns a 13 × 13 prediction grid; if object centroids are spaced less than 32 pixels apart, the network can hardly learn the objects' feature information.
Because of its shooting angles, UAV aerial photography produces objects with arbitrary orientation, and such a limited range of rotation invariance is troublesome. We also note that ultra-high resolution pictures could, in principle, sidestep some of the problems above. For example, upsampling a picture can make the objects of interest large enough and dispersed enough to meet the requirements of standard architectures, but this brute-force approach is infeasible: it multiplies the runtime, places excessive demands on hardware and is uneconomical. Similarly, running a sliding-window classifier over the picture to search for objects of interest becomes computationally intractable, because each object size requires multiple window sizes; if the target is a 10-metre ship in a global digital picture, more than 1,000,000 sliding-window crops would have to be evaluated.
The main technical difficulties of unmanned aerial vehicle line patrol are as follows:
First, the ultra-high resolution picture is 5472 × 3078 while the faulty target objects are especially small. If the ultra-high resolution picture is simply reduced to the input size (hundreds of pixels) required by most algorithms, many small targets become undetectable, and UAV power patrol fault detection is mainly about detecting small target objects (the objects are not small in actual size, but on a 5472 × 3078 picture they are relatively small). If the original picture is fed in without scaling, too large a downsampling factor easily loses feature information, while too small a downsampling factor makes the GPU memory required by forward propagation enormous: a large number of feature maps must be kept in memory, GPU resources are heavily consumed, and the hardware cost of training the network model rises greatly.
Second, the faulty objects are small targets. In the ultra-high resolution pictures taken by the UAV, the objects of interest are very small and some are densely clustered, unlike in traditional data sets, where objects are large and salient. For example, a car in a conventional data set occupies roughly 20%-80% of the original picture's area, whereas for faults such as R-pin loss each object spans only about 20 to 100 pixels even at the highest resolution, that is, about 1% of the original picture's area or less.
Third, complete rotational invariance and occlusion. Because of shooting angles, complex backgrounds and other factors, fault objects in aerial pictures are often heavily occluded, and objects observed from high altitude can have any orientation.
Fourth, training data is relatively scarce. Little data on UAV power patrol fault detection exists at present: on one hand, traditional power line patrol is manual and the relevant fault data was not photographed and stored when faults occurred; on the other hand, because the matter touches on national security, transmission line fault data is generally not published on the internet. For these two reasons, the number of obtainable transmission line fault pictures is relatively small.
Disclosure of Invention
In order to solve the above problems, the invention discloses an unmanned aerial vehicle power line patrol fault detection method based on ultra-high resolution pictures. The method segments the picture with a sliding window, expands the original data set with Mosaic data enhancement, adaptively computes anchor boxes from the custom data set, performs adaptive picture scaling, and uses a new network architecture with finer-grained, denser features and a matching loss function, effectively improving the accuracy of transmission line fault detection.
In order to achieve the purpose, the technical scheme of the invention is as follows:
an unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution pictures comprises the following steps:
step one, establishing a fault-type picture data set of transmission lines aerially photographed by the unmanned aerial vehicle;
step two, establishing a YOLO detection model, wherein the YOLO detection model comprises a sliding-window cropping unit, and the sliding-window cropping unit crops the fault-type picture data set of the transmission lines aerially photographed by the unmanned aerial vehicle with a sliding window to obtain a plurality of low-resolution crops and corresponding xml files;
step three, training the YOLO detection model with the low-resolution crops and corresponding xml files to obtain a trained YOLO detection model;
and step four, inputting a fault-type picture of the transmission line to be detected, aerially photographed by the unmanned aerial vehicle, into the trained YOLO detection model to obtain the detection result.
In a further improvement, in the first step, the fault-type pictures of transmission lines aerially photographed by the unmanned aerial vehicle are obtained by combining autonomous acquisition with web crawling, and the data set is enhanced by an affine-transformation-based method.
In a further improvement, in the second step, the sliding-window cropping proceeds as follows: the ultra-high resolution original picture is first labelled with the LabelImg labeling software to obtain an xml label file, then cropped to obtain low-resolution pictures and the xml files corresponding to them, which are renamed.
In a further improvement, the low-resolution pictures have an overlap ratio of 20% and a length and width of 960 × 960.
In a further improvement, in the second step, CSPDarkNet53 in the YOLO detection model is improved into CSP×X, where the larger the value of X, the stronger the feature extraction capability and the longer the corresponding training and testing time; the model specification selected is X, the heaviest and largest model. The YOLO detection model comprises an input end, a feature extraction part, a feature fusion part and a result output part.
In a further improvement, the feature extraction part comprises the sliding-window cropping unit, which is of a Focus structure; a CSP structure is added to the DarkNet53 network, that is, a convolution branch is added alongside the residual blocks of the DarkNet53 network, and an SPP network is added at the end of the DarkNet53 network;
the feature fusion part adopts a feature pyramid network and a path aggregation network; the feature pyramid network consists of a top-down part and a bottom-up part: the top-down network extracts features from the aerial picture, and the bottom-up network fuses feature information of different scales; the path aggregation network creates a bottom-up path that shortens the information propagation path and conveys low-level basic information to the high layers, using the accurate localization information stored in the low-level feature layers to help classification and localization;
and the result output part adopts three feature layers of different scales to predict the final result.
In a further refinement, the three feature layers of different scales are 40 × 40 × (C+5) × 3, 80 × 80 × (C+5) × 3 and 160 × 160 × (C+5) × 3, where C represents the number of fault categories and 5 corresponds to the five elements (Xmin, Ymin, Xmax, Ymax, Confidence) describing a rectangular box: Xmin and Ymin respectively represent the abscissa and ordinate of the box's top-left corner, Xmax and Ymax the abscissa and ordinate of its bottom-right corner, and Confidence the probability that an object is present; 3 represents the 3 anchor boxes.
In a further improvement, when training is carried out in the third step, a classification loss function $L_{classification}$ is established to perform fault classification:

A probability value for the i-th category is obtained through the sigmoid activation function, whose formula is:

$$p_i = \frac{1}{1 + e^{-x_i}}$$

where $x_i$ represents the feature of the i-th sample, $e$ the natural constant, and $p_i$ the probability value corresponding to the i-th category.

The formula for BCELoss is:

$$L_{classification} = -\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\sum_{c \in classes}\left[\hat{p}_i(c)\log p_i(c) + \left(1-\hat{p}_i(c)\right)\log\left(1-p_i(c)\right)\right]$$

where $L_{classification}$ represents the classification loss function, $S$ the size of the feature layer, $i$ the i-th grid of the feature layer, $j$ the j-th anchor box, $classes$ the set of categories, $\hat{p}_i(c)$ the true value and $p_i(c)$ the predicted value.

A localization loss function is established:

$$L_{localization} = CIOU\_Loss = 1 - CIOU = 1 - \left(IOU - \frac{Distance\_2^2}{Distance\_C^2} - \frac{v^2}{(1 - IOU) + v}\right)$$

where CIOU_Loss represents the localization loss function, CIOU the complete intersection over union of the anchor box and the prediction box, IOU their intersection over union, Distance_2 the Euclidean distance between the two centre points, and Distance_C the diagonal distance of the smallest enclosing box;

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w^{p}}{h^{p}}\right)^2$$

where $\pi$ denotes the circumference ratio, $w^{gt}$ and $h^{gt}$ the width and height of the real box, and $w^{p}$ and $h^{p}$ the width and height of the prediction box.

A confidence loss function is established:

$$L_{confidence} = -\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\hat{C}_i^j\log C_i^j + \left(1-\hat{C}_i^j\right)\log\left(1-C_i^j\right)\right] - \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left[\hat{C}_i^j\log C_i^j + \left(1-\hat{C}_i^j\right)\log\left(1-C_i^j\right)\right]$$

where $L_{confidence}$ represents the confidence loss function, $S$ the size of the feature layer, $i$ the i-th grid of the feature layer, $j$ the j-th anchor box, $\mathbb{1}_{ij}^{obj}$ indicates that an object is present in the j-th anchor box of the i-th grid, $\mathbb{1}_{ij}^{noobj}$ that no object is present, $B$ the number of anchor boxes, $\hat{C}_i^j$ the true value of the j-th anchor box of the i-th grid, $C_i^j$ the corresponding predicted value, and $\lambda_{noobj}$ a hyper-parameter.

Training minimises the total loss function $L$ of the YOLO detection model:

$$L = L_{classification} + L_{localization} + L_{confidence}$$
in a further improvement, when the YOLO detection model performs target detection, the low-resolution cut graph is scaled, wherein the scaling is determined by the following steps:
scaling the low-resolution cut graph to a predefined size a 'b', wherein the original graph size corresponding to the low-resolution cut graph is a 'b'; calculating values of a/a 'and b/b', selecting a value with a smaller numerical value as a scaling coefficient of a low-resolution cut image, multiplying the length and the width of an original image corresponding to the low-resolution cut image by the scaling coefficient to obtain a scaled image, and subtracting the width of the scaled image from the length of the scaled image; and obtaining n pixels by adopting a mode of taking the remainder of np.mod in numpy, wherein the number of the filling pixels at the top and the bottom of the width of the zoomed picture is n/2.
The invention has the advantages that:
the method adopts the sliding window to segment the picture, adopts a method of Mosaic data enhancement to expand the original data set, adaptively calculates the anchor frame according to the user-defined data set, performs adaptive picture scaling and has a new network architecture and a loss function with finer granularity characteristic and denser density, and effectively improves the accuracy of the fault detection of the power transmission line.
Drawings
FIG. 1 is a diagram of a sliding window cut image;
FIG. 2 is a schematic diagram of a conventional filling method;
FIG. 3 is a schematic diagram of the improved filling method according to the present invention;
FIG. 4 is an image enhanced with Mosaic data;
FIG. 5 is a flowchart illustrating a slicing operation performed on an input picture;
FIG. 6 is an effect diagram of the output of an ultra-high resolution image;
FIG. 7 is a model performance PR curve (only the following faults are detected: damper rust (damper), insulator self-explosion (insulator), bird nest (nest), U-ring rust (U_rust) and triangle plate rust (rust));
FIG. 8 is a model performance PR curve (only the following fault is detected: R-pin loss (cotter)).
Detailed Description
The invention is further explained with reference to the drawings and the embodiments.
The invention discloses an unmanned aerial vehicle power line patrol fault detection method based on ultra-high resolution pictures. Its core is: sliding-window segmentation of the picture, expansion of the original data set with Mosaic data enhancement, adaptive anchor box computation on the custom data set, adaptive picture scaling, and a new network architecture with finer-grained, denser features.
(1) Establishing unmanned aerial vehicle aerial photography power transmission line fault type picture data set
(a) Data collection combining autonomous acquisition and web crawling
(b) Data set enhancement based on an affine transformation method
(2) Cutting pictures by sliding windows
First, the ultra-high resolution original pictures are labelled with the LabelImg labeling software to obtain xml label files (000001.jpg corresponds to 000001.xml); the pictures are then cropped to obtain low-resolution pictures and the xml files corresponding to them, and these are renamed (note: we use overlapping sliding-window cropping and do not save low-resolution crops that contain no fault type), e.g. 000001_768.jpg with 000001_768.xml and 000001_1536_768.jpg with 000001_1536_768.xml; network training is then performed with the low-resolution pictures and their corresponding xml files. Note: when cropping low-resolution pictures from the ultra-high resolution picture, the crop length and width (default 960 × 960) and the overlap ratio (default 20%) can be customised, allowing flexible manual adjustment to the actual data.
Sliding-window cropping with an overlap ratio is chosen to prevent objects from being cut in two at the boundary between two small pictures; for example, if the cropped picture is 960 × 960 pixels, the overlap can be set to 960 × 20% = 192 pixels, as shown in fig. 1.
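For illustration, a minimal Python sketch of the overlapping sliding-window crop (the function name, file layout and OpenCV usage are illustrative assumptions; remapping the xml boxes into tile coordinates and discarding fault-free tiles, described above, are omitted):

```python
import os
import cv2  # pip install opencv-python

def slide_crop(img_path, out_dir, tile=960, overlap=0.2):
    """Overlapping sliding-window crop of an ultra-high-resolution image.

    Tile size and overlap ratio are the defaults named above
    (960 x 960, 20%), giving a stride of 960 - 192 = 768 pixels.
    """
    img = cv2.imread(img_path)
    h, w = img.shape[:2]
    stride = int(tile * (1 - overlap))  # 768 px
    stem = os.path.splitext(os.path.basename(img_path))[0]
    ys = list(range(0, max(h - tile, 1), stride)) + [max(h - tile, 0)]
    xs = list(range(0, max(w - tile, 1), stride)) + [max(w - tile, 0)]
    for y in ys:
        for x in xs:
            crop = img[y:y + tile, x:x + tile]
            # name encodes the tile offset, e.g. 000001_1536_768.jpg
            cv2.imwrite(os.path.join(out_dir, f"{stem}_{x}_{y}.jpg"), crop)
```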
(3) Adaptive anchor frame computing
In the YOLO algorithm, 9 anchor boxes of different lengths and widths are predefined for a given training data set. During network training, the network outputs prediction boxes based on the predefined anchor boxes, compares them with the ground-truth boxes (obtained from the LabelImg annotation), computes the error between the two, and back-propagates to update the network parameters. The custom data set is analysed with K-means and a genetic learning algorithm to obtain preset anchor boxes suited to predicting the object bounding boxes in the custom data set. Adaptive anchor box computation is integrated into the model, so the anchors adapt when different parts are trained; traditional anchor computation requires running the K-means clustering algorithm separately, can only be determined after data preparation is finished, and cannot be adjusted adaptively during model training.
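A sketch of the K-means anchor fitting under the 1 − IoU distance commonly used for YOLO anchors (the function name is illustrative, and the genetic mutation refinement mentioned above is omitted):

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster labelled box sizes (N, 2) into k anchor shapes."""
    wh = np.asarray(wh, dtype=float)
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)].copy()
    for _ in range(iters):
        # IoU of every box against every anchor, as if sharing a corner
        inter = np.minimum(wh[:, None, :], anchors[None, :, :]).prod(2)
        union = wh.prod(1)[:, None] + anchors.prod(1)[None, :] - inter
        assign = (inter / union).argmax(1)      # nearest anchor by IoU
        for j in range(k):
            if (assign == j).any():
                anchors[j] = np.median(wh[assign == j], axis=0)
    return anchors[anchors.prod(1).argsort()]   # sorted by area
```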
(4) Adaptive picture scaling
In common target detection algorithms, pictures differ in length and width, so the usual approach is to scale the original pictures uniformly to a standard (user-defined) size and then feed them into the trained network model for detection. In practice many pictures have different aspect ratios, so after scaling and filling, the black borders at the two ends differ in size; excessive filling introduces information redundancy and slows down detection and inference. The conventional filling method is shown in fig. 2. We improve this by adaptively adding the fewest possible black borders to the original picture, as shown in fig. 3.
(a) Calculating the scaling coefficient
The first step:
scale = min(640/1200, 640/800) = min(0.5333, 0.8) = 0.5333
The target size is 640 × 640; dividing it by the original picture size (here 1200 × 800) gives the scaling factors 0.5333 and 0.8, of which the smaller is selected.
(b) Calculating the scaled size
The second step: multiplying the original picture's length and width by the minimum scaling coefficient 0.5333 gives a width of 640 and a height of 426.
(c) Calculating the fill value
The third step:
pad = np.mod(640 − 426, 32) / 2 = np.mod(214, 32) / 2 = 22 / 2 = 11
Otherwise 640 − 426 = 214 pixels of height would need to be filled; taking the remainder with np.mod in numpy yields 22 pixels, and dividing by 2 gives the number of pixels to fill at each end of the picture's height.
Training does not use this black-border-reducing mode; it uses the conventional filling mode, that is, scaling to 640 × 640. The border-reducing mode is used only in testing and model inference, improving the speed of target detection and inference.
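A short Python sketch reproducing the worked example above (the 1200 × 800 original size is inferred from the 0.5333 and 0.8 factors, and a network stride of 32 is assumed for np.mod):

```python
import numpy as np

def letterbox_shape(w0, h0, new=640, stride=32):
    """Minimal-padding resize: returns the scaled size and the padding
    per side, following steps (a)-(c) above."""
    r = min(new / w0, new / h0)        # smaller ratio, here 0.5333
    w, h = int(w0 * r), int(h0 * r)    # 640 x 426
    pad = np.mod(new - h, stride)      # 214 -> 22 px instead of 214
    return (w, h), pad / 2             # 11 px at top and bottom

print(letterbox_shape(1200, 800))      # ((640, 426), 11.0)
```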
(5) Enhancement with Mosaic data
As shown in fig. 4, four low-resolution crops are spliced by random scaling, random cropping and random arrangement. Mosaic data enhancement greatly enriches the training data; in particular, random scaling adds many small objects, making the network more robust. (Note: small objects are very common, and numerous, in UAV power line patrol pictures: on one hand the actual size of a fault element can be very small; on the other hand, because of the UAV's shooting distance and similar factors, even an element of large actual size may appear very small in the picture, only 40-50 pixels.) Furthermore, with Mosaic data enhancement the data of 4 pictures is computed at once during training, so the mini-batch size need not be large and a well-performing neural network model can be trained on a single GPU, at lower economic cost and with a wider range of application.
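A sketch of the 4-picture Mosaic splice (in Python with OpenCV; the random-centre layout follows the common YOLOv5 implementation, and the matching shift-and-clip of the label boxes is omitted):

```python
import random
import cv2
import numpy as np

def mosaic4(imgs, size=640):
    """Stitch four crops into one Mosaic training image: a random
    centre splits a 2*size canvas into four quadrants, and one
    randomly resized crop is placed in each."""
    canvas = np.full((2 * size, 2 * size, 3), 114, np.uint8)  # grey fill
    cx = random.randint(size // 2, 3 * size // 2)  # random split point
    cy = random.randint(size // 2, 3 * size // 2)
    quads = [(0, 0, cx, cy), (cx, 0, 2 * size, cy),
             (0, cy, cx, 2 * size), (cx, cy, 2 * size, 2 * size)]
    for img, (x1, y1, x2, y2) in zip(imgs, quads):
        canvas[y1:y2, x1:x2] = cv2.resize(img, (x2 - x1, y2 - y1))
    return canvas
```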
(6) Backbone network
A YOLOv5 model is constructed in which CSPDarkNet53 is improved into CSP×X; the larger the value of X, the stronger the feature extraction capability and the longer the corresponding training and testing time, and vice versa. The model specification selected is X, the heaviest and largest model. The model mainly comprises an Input end, a feature extraction part (BackBone), a feature fusion part (Neck) and a result output part (Head).
(a) BackBone part
The YOLOv5 model adds a Focus structure, which performs a slicing operation on the input picture followed by a convolution with 32 kernels. Assuming an original 640 × 640 × 3 picture enters the Focus structure, the slicing operation first turns it into a 320 × 320 × 12 feature map, and the convolution with 32 kernels then turns it into a 320 × 320 × 32 feature map, as shown schematically in fig. 5.
As a small example, a 4 × 4 × 3 picture is sliced into a 2 × 2 × 12 feature map.
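The slicing can be written in a few lines of PyTorch (a sketch; YOLOv5's actual Focus module wraps this in an nn.Module followed by the 32-kernel convolution):

```python
import torch

def focus_slice(x):
    """Focus slicing: (B, C, H, W) -> (B, 4C, H/2, W/2). Every second
    pixel goes to one of four channel groups, so 640 x 640 x 3 becomes
    320 x 320 x 12 before the 32-kernel convolution."""
    return torch.cat([x[..., ::2, ::2],     # top-left pixels
                      x[..., 1::2, ::2],    # bottom-left
                      x[..., ::2, 1::2],    # top-right
                      x[..., 1::2, 1::2]],  # bottom-right
                     dim=1)

print(focus_slice(torch.randn(1, 3, 640, 640)).shape)  # (1, 12, 320, 320)
```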
The subsequent network structure improves the traditional DarkNet53 by adding a CSP structure: a convolution branch is added alongside the Residual Blocks of DarkNet53, further improving the network's feature extraction capability. An SPP (Spatial Pyramid Pooling) network is added at the end of the network; it fuses the results of pooling with kernels of different sizes to enlarge the network's receptive field.
(b) Neck part
The YOLOv5 model employs a Feature Pyramid Network (FPN) and a Path Aggregation Network (PAN). The feature pyramid network consists of two parts, top-down and bottom-up: the top-down network extracts features from the aerial pictures, and the bottom-up network fuses feature information of different scales. The PAN creates a bottom-up path that shortens the information propagation path, conveying low-level basic information to the higher layers and using the accurate localization information stored in the low-level feature layers to help classify and localize better.
(c) Head part
The YOLOv5 model uses three feature layers of different scales to predict the final result. Taking an input picture of 640 × 640 as an example, the 32-fold downsampling of the original network model is changed to 16-fold downsampling, so the prediction result sizes are 40 × 40 × (C+5) × 3, 80 × 80 × (C+5) × 3 and 160 × 160 × (C+5) × 3, where C represents the number of fault categories, 5 corresponds to the five elements (Xmin, Ymin, Xmax, Ymax, Confidence), namely the top-left corner coordinates (Xmin, Ymin), the bottom-right corner coordinates (Xmax, Ymax) and the probability Confidence that an object is present, and 3 represents the 3 anchor boxes. The differently sized results predict objects of different sizes: 160 × 160 × (C+5) × 3 predicts small objects because its receptive field is smallest; 40 × 40 × (C+5) × 3 predicts large objects because its receptive field is largest; 80 × 80 × (C+5) × 3 predicts moderately sized objects because its receptive field is moderate. In the post-processing stage of target detection, Non-Maximum Suppression (NMS) is performed to screen the many anchor boxes of different confidences and suppress those with lower confidence. YOLOv5 uses weighted NMS: during screening, the confidences of the overlapping boxes are used as weights to compute a new rectangle, which is taken as the final predicted rectangle, and boxes with confidence below a threshold are then removed.
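A NumPy sketch of the confidence-weighted merging (function and variable names are illustrative, and the exact grouping rule in YOLOv5 may differ in detail):

```python
import numpy as np

def box_area(b):
    return (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])

def weighted_nms(boxes, scores, iou_thr=0.5):
    """Weighted NMS: boxes overlapping the current best are fused into
    one rectangle using their confidences as weights, instead of being
    simply discarded as in plain NMS.

    boxes: (N, 4) float array (x1, y1, x2, y2); scores: (N,).
    """
    out = []
    order = scores.argsort()[::-1]          # highest confidence first
    while order.size:
        i, rest = order[0], order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (box_area(boxes[[i]]) + box_area(boxes[rest]) - inter)
        group = np.concatenate(([i], rest[iou > iou_thr]))
        w = scores[group][:, None]          # confidences as weights
        out.append(((boxes[group] * w).sum(0) / w.sum(), scores[i]))
        order = rest[iou <= iou_thr]        # keep non-overlapping boxes
    return out
```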
(7) Objective function
(a) Classification loss function
For the classification task, the output labels are mutually exclusive. For example, a transmission line fault may be insulator damage, R-pin loss or damper rust, and a given fault falls into one of the three categories. The softmax function converts the three predicted values into probabilities summing to 1, and the sample is classified into the category with the highest probability.
When computing the classification loss we use the BCEWithLogitsLoss loss function. The formula of Sigmoid is:

$$p_i = \frac{1}{1 + e^{-x_i}}$$

where $x_i$ represents the feature of the i-th sample, $e$ the natural constant, and $p_i$ the probability value corresponding to the i-th category.

The formula for BCELoss is:

$$L_{classification} = -\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\sum_{c \in classes}\left[\hat{p}_i(c)\log p_i(c) + \left(1-\hat{p}_i(c)\right)\log\left(1-p_i(c)\right)\right]$$

where $L_{classification}$ represents the classification loss function, $S$ the size of the feature layer, $i$ the i-th grid of the feature layer, $j$ the j-th anchor box, $classes$ the set of categories, $\hat{p}_i(c)$ the true value and $p_i(c)$ the predicted value.

(b) A localization loss function is established:

$$L_{localization} = CIOU\_Loss = 1 - CIOU = 1 - \left(IOU - \frac{Distance\_2^2}{Distance\_C^2} - \frac{v^2}{(1 - IOU) + v}\right)$$

where CIOU_Loss represents the localization loss function, CIOU stands for Complete Intersection over Union, IOU for Intersection over Union, Distance_2 is the Euclidean distance between the two centre points, and Distance_C the diagonal distance of the smallest enclosing box;

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w^{p}}{h^{p}}\right)^2$$

where $\pi$ denotes the circumference ratio, $w^{gt}$ and $h^{gt}$ the width and height of the real box, and $w^{p}$ and $h^{p}$ the width and height of the prediction box.

(c) A confidence loss function is established:

$$L_{confidence} = -\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\hat{C}_i^j\log C_i^j + \left(1-\hat{C}_i^j\right)\log\left(1-C_i^j\right)\right] - \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left[\hat{C}_i^j\log C_i^j + \left(1-\hat{C}_i^j\right)\log\left(1-C_i^j\right)\right]$$

where $L_{confidence}$ represents the confidence loss function, $S$ the size of the feature layer, $i$ the i-th grid of the feature layer, $j$ the j-th anchor box, $\mathbb{1}_{ij}^{obj}$ indicates that an object is present in the j-th anchor box of the i-th grid, $\mathbb{1}_{ij}^{noobj}$ that no object is present, $B$ the number of anchor boxes, $\hat{C}_i^j$ the true value of the j-th anchor box of the i-th grid, $C_i^j$ the corresponding predicted value, and $\lambda_{noobj}$ a hyper-parameter.

Training minimises the total loss function $L$ of the YOLO detection model:

$$L = L_{classification} + L_{localization} + L_{confidence}$$
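For reference, a PyTorch sketch of the CIoU localization term (the classification and confidence terms are plain binary cross-entropy, e.g. torch.nn.BCEWithLogitsLoss; note that α·v with α = v/((1−IoU)+v) equals the v²/((1−IoU)+v) term in the formula above):

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """CIoU loss: 1 - IoU + d^2/c^2 + alpha*v, for (N, 4) boxes
    given as (x1, y1, x2, y2)."""
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wg, hg = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    iou = inter / (wp * hp + wg * hg - inter + eps)
    # squared centre distance over squared enclosing-box diagonal
    d2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2 +
          (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency term v and its weight alpha
    v = (4 / math.pi ** 2) * (torch.atan(wg / (hg + eps)) -
                              torch.atan(wp / (hp + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return (1 - iou + d2 / c2 + alpha * v).mean()
```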
(8) The YOLOv5 model comes in 4 versions, s, m, l and x, suited to different practical requirements: the s model is the fastest, least precise and smallest, and the x model is the slowest, most precise and largest. During training, an Adam optimizer is used to optimise the overall objective function. By training the network model with the training data, we obtain a transmission line fault detection model.
The main differences among the s, m, l and x models are as follows:
(a) Depth of the networks
The network model uses two different CSP structures, CSP1 and CSP2; the CSP1 structure is applied mainly in the BackBone part and the CSP2 structure mainly in the Neck part. Note: the depths of the CSP1 and CSP2 structures differ among the four YOLOv5 network structures.
For the CSP1 structure, the first CSP1 uses 1 residual component and is therefore named CSP1_1, and so on for the rest.
For the CSP2 structure, the first CSP2 uses 1 group of convolutions and is therefore named CSP2_1, and so on.
(b) Width of the networks
The four YOLOv5 network structures use different numbers of convolution kernels at each stage, which directly affects the thickness (channel count) of the convolved feature maps. For example, in the first Focus structure, the final convolution uses 32 kernels, so the feature map becomes 320 × 320 × 32 after the Focus structure.
(9) The test set data is used to verify the actual recognition effect of the trained YOLOv5 model.
(a) First, the ultra-high resolution picture to be detected is cropped with the overlapping sliding-window method to obtain several low-resolution crops;
(b) each crop is fed into the trained network model for detection and its individual detection result is saved; after all low-resolution crops have been detected, their results are converted back into the coordinates of the original ultra-high resolution picture, non-maximum suppression is performed on the combined result, the multiple duplicate boxes on a given target are removed, and only the best rectangle is kept. In this way many small fault targets can be detected. Test Time Augmentation (TTA) is also integrated in the test stage: the input test picture undergoes augmentation operations such as flipping and rotation, and the results of the different augmentations of the same sample are finally averaged according to the task requirements.
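A sketch of mapping per-tile detections back to the full frame (assuming PyTorch tensors and torchvision's batched_nms; the tuple layout of tile_results is an illustrative assumption):

```python
import torch
from torchvision.ops import batched_nms

def merge_tile_detections(tile_results, iou_thr=0.5):
    """Shift per-tile boxes by their tile offsets into the coordinates
    of the original 5472 x 3078 picture, then run class-wise NMS to
    remove boxes duplicated inside the 20% overlap bands.

    tile_results: list of (x_off, y_off, boxes, scores, classes),
    boxes being (N, 4) tensors in tile coordinates.
    """
    boxes, scores, classes = [], [], []
    for x_off, y_off, b, s, c in tile_results:
        boxes.append(b + torch.tensor([x_off, y_off, x_off, y_off]))
        scores.append(s)
        classes.append(c)
    boxes = torch.cat(boxes).float()
    scores, classes = torch.cat(scores), torch.cat(classes)
    keep = batched_nms(boxes, scores, classes, iou_thr)  # per-class NMS
    return boxes[keep], scores[keep], classes[keep]
```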
(10) Detailed experimental procedure
At present the invention detects only 6 fault types on the transmission line: damper rust (damper), insulator self-explosion (insulator), bird nest (nest), U-ring rust (U_rust), triangle plate rust (rust) and R-pin loss (cotter); the invention is not limited to these 6 faults. During the experiments, fault targets of different scales were found to be prone to false detection; this problem is addressed by fusing the results of detection models over images of different scales: one detection model is trained for R-pin loss and another for the other fault types, the two models using different input image scales, and at test time the detection results of the different models and the different low-resolution crops are merged to obtain the final output for the ultra-high resolution image.
The effect is shown in fig. 6.
Performance analysis:
as shown in the following graph, PR curves for both models and AP values for each fault category are shown. We model evaluation by Precision (Precision), Recall (Recall), AP and map for each failure category.
Some definitions of relevance.
True Positive (TP): the true result is a positive example and the predicted result is also positive (IOU ≥ 0.5 with the area of the manually labelled ground-truth box).
False Positive (FP): the true result is negative but the predicted result is positive (IOU < 0.5 with the area of the ground-truth box).
False Negative (FN): the true result is positive but the predicted result is negative (a missed ground-truth area).
True Negative (TN): the true result is negative and the predicted result is also negative (background area).
Wherein:

$$Precision = \frac{TP}{TP + FP}$$

$$Recall = \frac{TP}{TP + FN}$$
as can be seen in FIGS. 7 and 8, Recall tends to be lower when Precision is high; while Recall is high, Precision tends to be low, i.e., Precision and Recall are a pair of contradictory performance metrics. In general, we rank the resulting samples of the model prediction, with the samples ranked first for which the model considers "most likely" to be a good case and the samples ranked last for which the model considers "least likely" to be a good case. By predicting samples as positive examples one by one in this order, the current Precision and Recall can be calculated each time. Then, the PR curve is obtained by plotting Precision as the vertical axis and Recall as the horizontal axis. When recall is 1 and precision is 1, this means FN is 0 and FP is 0, the model effect is very perfect, so it can be known that the closer to the upper right corner, the better the model effect is.
The damper rust (damper) precision is 98.1%, the insulator self-explosion (insulator) precision 86.1%, the bird nest (nest) precision 78.8%, the U-ring rust (U_rust) precision 93.6% and the triangle plate rust (rust) precision 98.3%; the mAP@0.5 of this whole model is 91%. The model for R-pin loss (cotter) has a precision of 62.9% and an overall mAP@0.5 of 62.9%. Overall, the invention achieves good precision.
While embodiments of the invention have been disclosed above, the invention is not limited to the applications set forth in the specification and the embodiments; it is fully applicable to the various fields for which it is suited, and further modifications may readily be made by those skilled in the art. The invention is therefore not limited to the details shown and described herein, insofar as they do not depart from the general concept defined by the appended claims and their equivalents.

Claims (9)

1. An unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution pictures is characterized by comprising the following steps:
step one, establishing a fault-type picture data set of transmission lines aerially photographed by the unmanned aerial vehicle;
step two, establishing a YOLO detection model, wherein the YOLO detection model comprises a sliding-window cropping unit, and the sliding-window cropping unit crops the fault-type picture data set of the transmission lines aerially photographed by the unmanned aerial vehicle with a sliding window to obtain a plurality of low-resolution crops and corresponding xml files;
step three, training the YOLO detection model with the low-resolution crops and corresponding xml files to obtain a trained YOLO detection model;
and step four, inputting a fault-type picture of the transmission line to be detected, aerially photographed by the unmanned aerial vehicle, into the trained YOLO detection model to obtain the detection result.
2. The unmanned aerial vehicle power line patrol fault detection method based on the ultrahigh resolution picture of claim 1, wherein in the first step the fault-type pictures of transmission lines aerially photographed by the unmanned aerial vehicle are obtained by combining autonomous acquisition with web crawling, and the data set is enhanced by an affine-transformation-based method.
3. The unmanned aerial vehicle power line patrol fault detection method based on the ultrahigh resolution picture of claim 1, wherein in the second step the sliding-window cropping proceeds as follows: the ultra-high resolution original picture is first labelled with the LabelImg labeling software to obtain an xml label file, then cropped to obtain low-resolution pictures and the xml files corresponding to them, which are renamed.
4. The unmanned aerial vehicle power line patrol fault detection method based on the ultrahigh resolution picture of claim 2, wherein the low-resolution pictures have an overlap ratio of 20% and a length and width of 960 × 960.
5. The unmanned aerial vehicle power line patrol fault detection method based on the ultrahigh resolution picture of claim 1, wherein in the second step, CSPDarkNet53 in the YOLO detection model is improved into CSP×X, where the larger the value of X, the stronger the feature extraction capability and the longer the corresponding training and testing time; the model specification selected is X, the heaviest and largest model; and the YOLO detection model comprises an input end, a feature extraction part, a feature fusion part and a result output part.
6. The unmanned aerial vehicle power line patrol fault detection method based on the ultrahigh resolution picture of claim 5, wherein the feature extraction part comprises the sliding-window cropping unit, which is of a Focus structure; a CSP structure is added to the DarkNet53 network, that is, a convolution branch is added alongside the residual blocks of the DarkNet53 network, and an SPP network is added at the end of the DarkNet53 network;
the feature fusion part adopts a feature pyramid network and a path aggregation network; the feature pyramid network consists of a top-down part and a bottom-up part: the top-down network extracts features from the aerial picture, and the bottom-up network fuses feature information of different scales; the path aggregation network creates a bottom-up path that shortens the information propagation path and conveys low-level basic information to the high layers, using the accurate localization information stored in the low-level feature layers to help classification and localization;
and the result output part adopts three feature layers of different scales to predict the final result.
7. The unmanned aerial vehicle power line patrol fault detection method based on the ultrahigh resolution picture of claim 5, wherein the three feature layers of different scales are 40 × 40 × (C+5) × 3, 80 × 80 × (C+5) × 3 and 160 × 160 × (C+5) × 3, where C represents the number of fault categories and 5 corresponds to the five elements (Xmin, Ymin, Xmax, Ymax, Confidence) describing a rectangular box: Xmin and Ymin respectively represent the abscissa and ordinate of the box's top-left corner, Xmax and Ymax the abscissa and ordinate of its bottom-right corner, and Confidence the probability that an object is present; 3 represents 3 anchor boxes.
8. The unmanned aerial vehicle power line patrol fault detection method based on the ultrahigh resolution picture of claim 5, wherein during training in the third step a classification loss function $L_{classification}$ is established to perform fault classification:

a probability value for the i-th category is obtained through the sigmoid activation function, whose formula is:

$$p_i = \frac{1}{1 + e^{-x_i}}$$

where $x_i$ represents the feature of the i-th sample, $e$ the natural constant, and $p_i$ the probability value corresponding to the i-th category;

the formula for BCELoss is:

$$L_{classification} = -\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\sum_{c \in classes}\left[\hat{p}_i(c)\log p_i(c) + \left(1-\hat{p}_i(c)\right)\log\left(1-p_i(c)\right)\right]$$

where $L_{classification}$ represents the classification loss function, $S$ the size of the feature layer, $i$ the i-th grid of the feature layer, $j$ the j-th anchor box, $classes$ the set of categories, $\hat{p}_i(c)$ the true value and $p_i(c)$ the predicted value;

a localization loss function is established:

$$L_{localization} = CIOU\_Loss = 1 - CIOU = 1 - \left(IOU - \frac{Distance\_2^2}{Distance\_C^2} - \frac{v^2}{(1 - IOU) + v}\right)$$

where CIOU_Loss represents the localization loss function, CIOU the complete intersection over union of the anchor box and the prediction box, IOU their intersection over union, Distance_2 the Euclidean distance between the two centre points, and Distance_C the diagonal distance of the smallest enclosing box;

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w^{p}}{h^{p}}\right)^2$$

where $\pi$ denotes the circumference ratio, $w^{gt}$ and $h^{gt}$ the width and height of the real box, and $w^{p}$ and $h^{p}$ the width and height of the prediction box;

a confidence loss function is established:

$$L_{confidence} = -\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\hat{C}_i^j\log C_i^j + \left(1-\hat{C}_i^j\right)\log\left(1-C_i^j\right)\right] - \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left[\hat{C}_i^j\log C_i^j + \left(1-\hat{C}_i^j\right)\log\left(1-C_i^j\right)\right]$$

where $L_{confidence}$ represents the confidence loss function, $S$ the size of the feature layer, $i$ the i-th grid of the feature layer, $j$ the j-th anchor box, $\mathbb{1}_{ij}^{obj}$ indicates that an object is present in the j-th anchor box of the i-th grid, $\mathbb{1}_{ij}^{noobj}$ that no object is present, $B$ the number of anchor boxes, $\hat{C}_i^j$ the true value of the j-th anchor box of the i-th grid, $C_i^j$ the corresponding predicted value, and $\lambda_{noobj}$ a hyper-parameter;

training minimises the total loss function $L$ of the YOLO detection model:

$$L = L_{classification} + L_{localization} + L_{confidence}$$
9. The unmanned aerial vehicle power line patrol fault detection method based on the ultrahigh resolution picture of claim 1, wherein when the YOLO detection model performs target detection, the low-resolution crop is scaled, the scaling being determined by the following steps:
scaling the low-resolution crop to a predefined size a × b, the original picture size corresponding to the crop being a′ × b′; calculating the values a/a′ and b/b′ and selecting the smaller one as the scaling coefficient of the crop; multiplying the length and width of the original picture corresponding to the crop by the scaling coefficient to obtain the scaled picture; subtracting the width of the scaled picture from its length; and taking the remainder with np.mod in numpy to obtain n pixels, the number of padding pixels at each of the top and bottom of the scaled picture's width being n/2.
CN202111116257.XA 2021-09-23 2021-09-23 Unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution picture Pending CN113807450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111116257.XA CN113807450A (en) 2021-09-23 2021-09-23 Unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111116257.XA CN113807450A (en) 2021-09-23 2021-09-23 Unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution picture

Publications (1)

Publication Number Publication Date
CN113807450A true CN113807450A (en) 2021-12-17

Family

ID=78896419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111116257.XA Pending CN113807450A (en) 2021-09-23 2021-09-23 Unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution picture

Country Status (1)

Country Link
CN (1) CN113807450A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332697A (en) * 2021-12-19 2022-04-12 西安科技大学 Method, system, equipment and medium for detecting faults of multiple types of targets in power transmission line
CN114418898A (en) * 2022-03-21 2022-04-29 南湖实验室 Data enhancement method based on target overlapping degree calculation and self-adaptive adjustment
CN115497056A (en) * 2022-11-21 2022-12-20 南京华苏科技有限公司 Method for detecting lost articles in region based on deep learning
WO2024045030A1 (en) * 2022-08-29 2024-03-07 中车株洲电力机车研究所有限公司 Deep neural network-based obstacle detection system and method for autonomous rail rapid transit

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101088A (en) * 2020-07-27 2020-12-18 长江大学 Automatic unmanned aerial vehicle power inspection method, device and system
CN112233092A (en) * 2020-10-16 2021-01-15 广东技术师范大学 Deep learning method for intelligent defect detection of unmanned aerial vehicle power inspection
CN112287899A (en) * 2020-11-26 2021-01-29 山东捷讯通信技术有限公司 Unmanned aerial vehicle aerial image river drain detection method and system based on YOLO V5
CN112819804A (en) * 2021-02-23 2021-05-18 西北工业大学 Insulator defect detection method based on improved YOLOv5 convolutional neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101088A (en) * 2020-07-27 2020-12-18 长江大学 Automatic unmanned aerial vehicle power inspection method, device and system
CN112233092A (en) * 2020-10-16 2021-01-15 广东技术师范大学 Deep learning method for intelligent defect detection of unmanned aerial vehicle power inspection
CN112287899A (en) * 2020-11-26 2021-01-29 山东捷讯通信技术有限公司 Unmanned aerial vehicle aerial image river drain detection method and system based on YOLO V5
CN112819804A (en) * 2021-02-23 2021-05-18 西北工业大学 Insulator defect detection method based on improved YOLOv5 convolutional neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TIGERZ*: "目标检测算法——YOLOV5" ("Object detection algorithm: YOLOv5"), retrieved from the Internet: https://blog.csdn.net/u012863603/article/details/118393567 *
WILLIAM: "一文读懂YOLO V5 与 YOLO V4" ("Understanding YOLO V5 and YOLO V4 in one article"), retrieved from the Internet: https://zhuanlan.zhihu.com/p/161083602?tt_from=weixin&utm_id=0 *
江大白 (Jiang Dabai): "深入浅出Yolo系列之Yolov5核心基础知识完整讲解" ("A complete, accessible explanation of the core fundamentals of Yolov5 in the Yolo series"), pages 4-10, retrieved from the Internet: https://zhuanlan.zhihu.com/p/260400612?utm_id=0 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332697A (en) * 2021-12-19 2022-04-12 西安科技大学 Method, system, equipment and medium for detecting faults of multiple types of targets in power transmission line
CN114418898A (en) * 2022-03-21 2022-04-29 南湖实验室 Data enhancement method based on target overlapping degree calculation and self-adaptive adjustment
CN114418898B (en) * 2022-03-21 2022-07-26 南湖实验室 Data enhancement method based on target overlapping degree calculation and self-adaptive adjustment
WO2024045030A1 (en) * 2022-08-29 2024-03-07 中车株洲电力机车研究所有限公司 Deep neural network-based obstacle detection system and method for autonomous rail rapid transit
CN115497056A (en) * 2022-11-21 2022-12-20 南京华苏科技有限公司 Method for detecting lost articles in region based on deep learning

Similar Documents

Publication Publication Date Title
CN112380952B (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
CN113807450A (en) Unmanned aerial vehicle power line patrol fault detection method based on ultrahigh resolution picture
CN113240688A (en) Integrated flood disaster accurate monitoring and early warning method
KR102328734B1 (en) Method for automatically evaluating labeling reliability of training images for use in deep learning network to analyze images, and reliability-evaluating device using the same
CN112347895A (en) Ship remote sensing target detection method based on boundary optimization neural network
CN115294473A (en) Insulator fault identification method and system based on target detection and instance segmentation
KR20200091331A (en) Learning method and learning device for object detector based on cnn, adaptable to customers&#39; requirements such as key performance index, using target object merging network and target region estimating network, and testing method and testing device using the same to be used for multi-camera or surround view monitoring
CN109815800A (en) Object detection method and system based on regression algorithm
CN109829881A (en) Bird&#39;s Nest detection method and system based on deep learning
CN113591617B (en) Deep learning-based water surface small target detection and classification method
CN114743119A (en) High-speed rail contact net dropper nut defect detection method based on unmanned aerial vehicle
CN115239710A (en) Insulator defect detection method based on attention feedback and double-space pyramid
CN116503318A (en) Aerial insulator multi-defect detection method, system and equipment integrating CAT-BiFPN and attention mechanism
CN116385911A (en) Lightweight target detection method for unmanned aerial vehicle inspection insulator
Yang et al. Real-time object recognition algorithm based on deep convolutional neural network
WO2022219402A1 (en) Semantically accurate super-resolution generative adversarial networks
Manninen et al. Multi-stage deep learning networks for automated assessment of electricity transmission infrastructure using fly-by images
CN116580285B (en) Railway insulator night target identification and detection method
CN116682045A (en) Beam pumping unit fault detection method based on intelligent video analysis
CN111738312A (en) Power transmission line state monitoring method and device based on GIS and virtual reality fusion and computer readable storage medium
Di et al. An automatic and integrated self-diagnosing system for the silting disease of drainage pipelines based on SSAE-TSNE and MS-LSTM
CN115700737A (en) Oil spill detection method based on video monitoring
CN116883390B (en) Fuzzy-resistant semi-supervised defect detection method, device and storage medium
CN117808650B (en) Precipitation prediction method based on Transform-Flownet and R-FPN
Wang et al. Image classification of missing insulators based on EfficientNet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination