CN111784685B - Power transmission line defect image identification method based on cloud edge cooperative detection - Google Patents

Power transmission line defect image identification method based on cloud edge cooperative detection

Info

Publication number
CN111784685B
CN111784685B
Authority
CN
China
Prior art keywords
image
frame
transmission line
frames
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010691927.XA
Other languages
Chinese (zh)
Other versions
CN111784685A (en)
Inventor
吴晟
唐远富
徐晓晖
甘湘砚
肖剑
田建伟
徐先勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd, State Grid Hunan Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202010691927.XA
Publication of CN111784685A
Application granted
Publication of CN111784685B
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Quality & Reliability (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a power transmission line defect image identification method based on cloud-edge collaborative detection, comprising the following steps executed when an unmanned aerial vehicle or an inspection terminal serving as the edge performs an inspection operation: collecting a transmission line inspection image; classifying the collected image as a distant-view or close-view image with a classification model; if it is classified as a distant-view image, identifying and locating defects of large equipment components with a defect detection model deployed at the edge; if it is classified as a close-view image, calling a defect detection model deployed in the cloud to identify and locate defects of small equipment components. The invention balances recognition speed, recognition accuracy and localization accuracy, can comprehensively recognize multiple defect types, helps reduce the labor intensity of operators, and improves the efficiency, automation and intelligence of power transmission line inspection.

Description

Power transmission line defect image identification method based on cloud edge cooperative detection
Technical Field
The invention relates to the technical field of digital image recognition, in particular to intelligent detection of transmission line defect images based on deep learning, and specifically to a power transmission line defect image identification method based on cloud-edge collaborative detection.
Background
In recent years, unmanned aerial vehicle (UAV) technology has been widely applied in the power industry, especially in transmission line inspection: UAVs carrying high-definition cameras overcome many shortcomings of the traditional working mode and have become an important force in maintaining the safe and stable operation of the power grid. While UAVs greatly reduce the labor intensity of operators, they bring new problems. UAV inspection generates a large amount of image data whose volume keeps growing exponentially, yet there is currently no effective method to quickly screen and extract the information contained in these images, so only time-consuming and labor-intensive manual review can be used. The number and skill of operators can no longer meet current business needs, which severely restricts further improvement of operation efficiency.
With the rise of big data and artificial intelligence, some universities and enterprises have tried to solve this problem with image recognition technology based on deep neural networks and have developed corresponding recognition software. However, transmission lines contain numerous devices with complex structures, and different devices, or even different defect types of the same device, differ enormously in appearance and digital characterization; existing software therefore falls far short of practical requirements in recognition accuracy, coverage and other respects.
Disclosure of Invention
The technical problem to be solved by the invention: aiming at the problems of the prior art, the invention provides a power transmission line defect image identification method based on cloud-edge collaborative detection that balances recognition speed, recognition accuracy and localization accuracy, can comprehensively recognize multiple defect types, helps reduce the labor intensity of operators, and improves the efficiency, automation and intelligence of power transmission line inspection.
In order to solve the technical problems, the invention adopts the following technical scheme:
the method comprises the following steps executed when an unmanned aerial vehicle or an inspection terminal serving as the edge performs an inspection operation:
1) Collecting a transmission line inspection image;
2) Classifying the collected transmission line inspection image as a distant-view or close-view image with a classification model;
3) If classified as a distant-view image, identifying and locating defects of large equipment components with a defect detection model deployed at the edge; if classified as a close-view image, calling a defect detection model deployed in the cloud to identify and locate defects of small equipment components.
Optionally, the classification model in step 2) is a ResNet-50 classification model comprising five multi-block convolution stages and one fully connected layer; the output of the fully connected layer is converted by a sigmoid function into a two-class probability tensor over distant-view and close-view images, and the class with the larger probability is selected as the predicted class of the input image.
Optionally, the detailed steps of step 2) include: the input transmission line inspection image is processed by the 5 multi-block convolution stages to obtain a feature map downsampled 32 times, which is then classified as a distant-view or close-view image by the fully connected layer and the sigmoid function.
Optionally, step 2) is preceded by the step of training the ResNet-50 classification model: building training samples containing distant-view and close-view images; in each training iteration, the images in the training samples are processed by the 5 multi-block convolution stages into feature maps downsampled 32 times, which are classified as distant-view or close-view by the fully connected layer and the sigmoid function; the classification loss is constructed with a cross-entropy function and the network parameters are updated by stochastic gradient descent; training of the ResNet-50 classification model is completed after multiple iterations.
Optionally, the defect detection model deployed at the edge in step 3) is a YOLOv3 model, and the defect detection model deployed in the cloud is a Faster R-CNN model.
Optionally, step 3) is preceded by the following steps of training the YOLOv3 model:
3.1A) constructing training set samples from the distant-view images and their annotation files; clustering the sizes of all labelled boxes in the training images with the k-means method to produce 9 anchor box sizes of different scales;
3.2A) selecting a training image, scaling it to a uniform size, and extracting image features with a Darknet-53 backbone network to form three groups of feature maps downsampled 32, 16 and 8 times respectively; distributing the 9 anchor box sizes among the three groups of feature maps, 3 per group, with large anchor box sizes assigned to small feature maps and small anchor box sizes to large feature maps;
3.3A) generating a series of anchor boxes on the original image, using each feature-map pixel as an anchor point with the sizes assigned in step 3.2A), 3 boxes per pixel; computing the intersection-over-union (IoU) between each anchor box and the labelled boxes; the anchor box with the largest IoU is responsible for predicting the object contained in that labelled box; each anchor box predicts one bounding box containing 4 position parameters, namely the center-point abscissa x, the center-point ordinate y, the width w and the height h of the box, 1 confidence score for containing a target, and a conditional class probability score for each class;
3.4A) constructing the bounding-box regression loss with a mean-square-error function and the confidence and class-probability losses with a cross-entropy function, their sum being the total loss; judging whether the total loss is below a preset threshold: if not, back propagation computes the gradient of each network-layer parameter, the parameters are updated with the set learning rate, and execution jumps back to step 3.2A) for the next round of training; if so, the training of the YOLOv3 model is complete.
Optionally, the Faster R-CNN model comprises a feature extraction network, a candidate region extraction network, a candidate box screening layer, a region-of-interest pooling layer and a classification network connected in sequence, wherein the feature extraction network uses the convolutional part of a VGG-16 network as its backbone; the candidate region extraction network consists of two parallel 1×1 convolution layers; the candidate box screening layer re-sorts and screens the boxes output by the candidate region extraction network to obtain the candidate boxes most likely to contain targets; the region-of-interest pooling layer unifies candidate boxes of different sizes to the same size to meet the input requirement of the fully connected layers; the classification network consists of two parallel fully connected layers, one classifying the input candidate boxes into specific class labels and the other performing a second regression on the candidate boxes to obtain accurate position coordinates;
step 3) is preceded by the following steps of separately training the candidate region extraction network and the classification network:
3.1B) extracting features from the input image with the convolutional part of the VGG-16 network as backbone, taking the output of the last convolution layer of the VGG-16 network as the shared feature map;
3.2B) inputting the shared feature map into the candidate region extraction network and generating a series of anchor boxes on the original image, using each feature-map pixel as an anchor point, 9 boxes per pixel; computing the intersection-over-union (IoU) between each anchor box and the labelled boxes and assigning labels: an anchor box whose IoU is greater than 0.7, or which has the highest IoU with a labelled box, is labelled "1", indicating that it contains a foreground object; an anchor box whose IoU is less than 0.3 is labelled "0", indicating that it contains background; randomly selecting 128 class-"1" anchor boxes and 128 class-"0" anchor boxes and constructing the softmax two-class loss with a cross-entropy function; constructing the bounding-box regression loss for all class-"1" anchor boxes with a smooth-L1 function, and completing the training of the candidate region extraction network by minimizing the total loss;
3.3B) after the candidate region extraction network is trained, computing scores for the anchor boxes and converting them into foreground/background probabilities with a softmax function, and performing regression on the anchor boxes to obtain position-corrected boxes; taking the top M boxes by foreground probability, removing boxes that cross the image boundary or whose regions are too small, then removing duplicate boxes with the non-maximum suppression method NMS, and taking the top N remaining boxes by foreground probability as candidate boxes;
3.4B) inputting the candidate boxes extracted in step 3.3B) together with the shared feature map obtained in step 3.1B) into the region-of-interest pooling layer to obtain candidate-box feature maps of uniform size, which are then input into the classification network;
3.5B) the classification network computes the IoU between the candidate boxes and the labelled boxes and assigns each candidate box a specific class label: an IoU greater than 0.5 is labelled "1", indicating that the candidate box contains a foreground object, and an IoU between 0.1 and 0.5 is labelled "0", indicating that the candidate box contains background; randomly selecting 32 class-"1" candidate boxes and 96 class-"0" candidate boxes, constructing the softmax multi-class loss with a cross-entropy function and the bounding-box regression loss for all class-"1" candidate boxes with a smooth-L1 function, then computing the total loss of the classification network and completing its training by minimizing the total loss;
the method further comprises the following steps of training the Faster R-CNN model before step 3):
3.1C) constructing training set samples from the close-view images and their annotation files;
3.2C) initializing a VGG-16 network and training the candidate region extraction network;
3.3C) initializing a VGG-16 network and training the classification network with the candidate boxes output by the candidate region extraction network of step 3.2C);
3.4C) fixing the VGG-16 network of step 3.3C) and training the candidate region extraction network again;
3.5C) fixing the VGG-16 network of step 3.3C) and retraining the classification network with the candidate boxes output by the candidate region extraction network of step 3.4C).
In addition, the invention provides a power transmission line defect image recognition device based on cloud-edge collaborative detection. The device is an unmanned aerial vehicle or an inspection terminal comprising at least a microprocessor and a memory, the microprocessor being programmed or configured to execute the steps of the above power transmission line defect image identification method based on cloud-edge collaborative detection, or the memory storing a computer program programmed or configured to execute that method.
In addition, the invention provides a power transmission line defect image recognition system based on cloud-edge collaborative detection, comprising at least a microprocessor and a memory, the microprocessor being programmed or configured to execute the steps of the above method, or the memory storing a computer program programmed or configured to execute it.
In addition, the invention further provides a computer-readable storage medium storing a computer program programmed or configured to execute the above power transmission line defect image identification method based on cloud-edge collaborative detection.
Compared with the prior art, the invention has the following advantages:
1. The method comprises the following steps executed when the unmanned aerial vehicle or inspection terminal serving as the edge performs an inspection operation: collecting a transmission line inspection image; classifying it as a distant-view or close-view image with a classification model; if it is a distant-view image, identifying and locating defects of large equipment components with the defect detection model deployed at the edge; if it is a close-view image, calling the defect detection model deployed in the cloud to identify and locate defects of small equipment components. The invention balances recognition speed, recognition accuracy and localization accuracy, can comprehensively recognize multiple defect types, helps reduce the labor intensity of operators, and improves the efficiency, automation and intelligence of transmission line inspection.
2. The invention uses a classification model to classify the inspection images taken by the unmanned aerial vehicle or inspection terminal serving as the edge, and identifies and locates defects of large and small equipment components of the power transmission line based on the distant-view and close-view images respectively. This effectively reduces model complexity, makes the models easier to train, and improves their detection performance. In particular, the identification of large-component defects can be completed at the job site; once the computing power of the edge hardware and the performance of the algorithm models improve further, all detection models can be deployed entirely at the edge, so that defect identification is performed during edge line inspection and the whole workflow is completed without manual intervention.
3. The edge device in the method can be an unmanned aerial vehicle, an inspection terminal such as an inspection robot, or any of various embedded or portable terminal devices (such as smartphones), giving the method good universality.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present invention.
Detailed Description
As shown in FIG. 1, the power transmission line defect image identification method based on cloud-edge collaborative detection of this embodiment comprises the following steps executed when an unmanned aerial vehicle or an inspection terminal serving as the edge performs an inspection operation:
1) Collecting a transmission line inspection image;
2) Classifying the collected transmission line inspection image as a distant-view or close-view image with a classification model;
3) If classified as a distant-view image, identifying and locating defects of large equipment components with a defect detection model deployed at the edge; if classified as a close-view image, calling a defect detection model deployed in the cloud to identify and locate defects of small equipment components.
It should be noted that the large and small equipment components are determined by the corresponding defect detection models. Considering the limited computing power at the edge, the defect detection model deployed at the edge detects large equipment components, i.e. the components contained in distant-view images, such as channels, towers, ancillary facilities and foundations; the defect detection model deployed in the cloud detects small equipment components so as to exploit the cloud's greater computing power, i.e. the components contained in close-view images, such as insulator sheets, hanging points, conductors and fittings. The method thus adopts a cloud-edge collaborative mode, dividing the inspection images into two categories and detecting defects with different models, so that multiple defect types can be recognized comprehensively while balancing recognition speed, recognition accuracy and localization accuracy; this effectively reduces technical difficulty, improves detection accuracy, reduces the labor intensity of operators, and improves the efficiency, automation and intelligence of transmission line inspection. The cloud-deployed defect detection model called to identify and locate small-equipment-component defects can be invoked in real time, or the processing can be done after the field operation is finished.
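For illustration, the cloud-edge division of labour described above can be sketched as the following minimal Python/PyTorch routine; the model objects, the two-class sigmoid output and the cloud queue are assumptions made for this sketch, not part of the patented implementation.

```python
import torch

DISTANT, CLOSE = 0, 1  # class indices assumed for the two-class sigmoid output

def route_inspection_image(image, classifier, edge_yolov3, cloud_queue):
    """Classify one inspection image and dispatch it to the edge or the cloud."""
    with torch.no_grad():
        probs = torch.sigmoid(classifier(image.unsqueeze(0)))[0]  # [p_distant, p_close]
    if probs[DISTANT] >= probs[CLOSE]:
        # Distant view: large-component defects are detected on the edge device.
        return edge_yolov3(image)
    # Close view: queue the image for the cloud-side Faster R-CNN service,
    # invoked in real time or processed after the field operation.
    cloud_queue.append(image)
    return None
```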
The classification model in step 2) classifies the collected transmission line inspection images. In this embodiment it is a ResNet-50 classification model comprising five multi-block convolution stages and one fully connected layer; the output of the fully connected layer is converted by a sigmoid function into a two-class probability tensor over distant-view and close-view images, and the class with the larger probability is selected as the predicted class of the input image.
In this embodiment, the detailed steps of step 2) are: first scale a single inspection image taken by the unmanned aerial vehicle to a set size, then extract image features through the 5 multi-block convolution stages, output the two-class (distant-view or close-view) probabilities through 1 fully connected layer and a sigmoid function, and finally assign the class with the larger probability to the image as its label.
In this embodiment, step 2) is further preceded by the step of training the ResNet-50 classification model: building training samples containing distant-view and close-view images; in each training iteration, the images in the training samples are processed by the 5 multi-block convolution stages into feature maps downsampled 32 times, which are classified as distant-view or close-view by the fully connected layer and the sigmoid function; the classification loss is constructed with a cross-entropy function and the network parameters are updated by stochastic gradient descent; training of the ResNet-50 classification model is completed after multiple iterations.
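A minimal PyTorch training sketch of this procedure is given below; the data loader, epoch count and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(num_classes=2)        # 5 convolution stages + 1 fc layer
criterion = nn.BCEWithLogitsLoss()            # cross-entropy over sigmoid outputs
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(30):                       # multiple rounds of iteration
    for images, labels in train_loader:       # assumed loader of (image, 0/1 label) pairs
        logits = model(images)                # conv stages yield a 32x-downsampled feature map
        targets = nn.functional.one_hot(labels, num_classes=2).float()
        optimizer.zero_grad()
        loss = criterion(logits, targets)     # classification loss
        loss.backward()                       # gradients by back propagation
        optimizer.step()                      # stochastic gradient descent update
```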
In this embodiment, the defect detection model deployed at the edge in step 3) is a YOLOv3 model, and the defect detection model deployed in the cloud is a Faster R-CNN model.
The YOLOv3 model identifies a distant-view inspection image as follows: the distant-view inspection image is scaled to a set size and image features are extracted with a Darknet-53 backbone network to form three groups of feature maps downsampled 32, 16 and 8 times respectively. The 9 anchor box sizes are distributed among the three groups of feature maps, 3 per group, with large anchor box sizes assigned to small feature maps and small anchor box sizes to large feature maps. Using the feature-map pixels as anchor points, a series of anchor boxes (3 per pixel) is generated on the original image according to the assigned anchor box sizes. Each anchor box predicts one bounding box containing 4 position parameters (the center-point abscissa x, the center-point ordinate y, the width w and the height h of the box), 1 confidence score for containing a target, and a conditional class probability score for each class. All boxes are sorted by confidence, boxes below a threshold are removed, non-maximum suppression (NMS) is then applied to the remaining boxes per class to remove duplicate boxes, and the finally retained boxes are displayed on the original image to complete the identification (a sketch of this post-processing follows this paragraph). In this embodiment the YOLOv3 model detects defects of large equipment components rapidly, and the confidence constraint effectively ensures the accuracy of automatic identification; dividing the image into multiple regions also aids the rapid localization of large-component defects. The distant-view target detection model of this embodiment is based on the YOLOv3 algorithm and can identify and locate defects of large equipment components in distant-view images; YOLOv3 is a one-stage algorithm with a clear advantage in computation speed, which can be exploited fully when deployed at the edge, even achieving real-time detection. It also recognizes large targets well, making it the first choice for the edge-side algorithm.
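The confidence filtering and per-class NMS can be sketched as follows, using torchvision's nms; the 0.5 confidence and 0.45 IoU thresholds are illustrative assumptions.

```python
import torch
from torchvision.ops import nms

def yolo_postprocess(boxes, scores, classes, conf_thr=0.5, iou_thr=0.45):
    """boxes: (N, 4) in xyxy form; scores: (N,) confidences; classes: (N,) class ids."""
    keep = scores > conf_thr                       # remove frames below the threshold
    boxes, scores, classes = boxes[keep], scores[keep], classes[keep]
    kept = []
    for c in classes.unique():                     # non-maximum suppression per category
        idx = (classes == c).nonzero(as_tuple=True)[0]
        kept.append(idx[nms(boxes[idx], scores[idx], iou_thr)])
    kept = torch.cat(kept) if kept else torch.empty(0, dtype=torch.long)
    return boxes[kept], scores[kept], classes[kept]  # finally retained frames
```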
The close-view target detection model of this embodiment is based on the Faster R-CNN model and can identify and locate small, dense equipment defects in close-view images. Small, dense equipment components usually occupy a small proportion of the original image; after repeated pooling and downsampling, very limited information about them remains on the feature map, making accurate identification and localization difficult. Through two rounds of classification and bounding-box regression, the Faster R-CNN model can detect small-equipment-component defects accurately. It extracts candidate boxes with a region proposal network and then classifies the candidate regions, making it a two-stage algorithm: although slower than one-stage algorithms, it is more accurate, particularly for small and dense components. The most common transmission line defects, such as loose bolts, frayed conductor strands and missing cotter pins, are small-component defects and require a two-stage algorithm for targeted identification.
In this embodiment, UAV inspection images of the transmission line are collected; images of channels, whole towers, tower bodies, tower heads, signboards, foundations and the like are classed as distant views, while images of insulator strings, hanging points and the like are classed as close views. Images in both categories with standard framing and clear quality are screened and annotated, finally forming three training sample data sets. The annotations mainly record the picture name, picture category (distant or close view), defective component name, defect type, defect position (the horizontal and vertical coordinates of the top-left vertex and the bottom-right vertex of the labelled box), storage path and so on. The annotation information is written into a json file with the same name as the picture, and together they form the training sample data. Building the sample set provides training samples for the deep learning models and facilitates training of the classification model and the defect recognition models.
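An illustrative annotation record with the fields listed above might look as follows; the key names and values are assumptions, since the patent only enumerates the stored information.

```python
import json

annotation = {
    "image_name": "tower_0001.jpg",
    "image_class": "distant_view",           # distant view or close view
    "defect_part": "tower",                  # name of the defective component
    "defect_type": "missing_bolt",
    "bbox": [512, 340, 880, 655],            # top-left x, y and bottom-right x, y
    "path": "data/distant/tower_0001.jpg",   # storage path
}
# One json file per image, named after the picture.
with open("tower_0001.json", "w") as f:
    json.dump(annotation, f, ensure_ascii=False, indent=2)
```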
In this embodiment, step 3) is preceded by the following steps of training the YOLOv3 model:
3.1A) constructing training set samples from the distant-view images and their annotation files; clustering the sizes of all labelled boxes in the training images with the k-means method to produce 9 anchor box sizes of different scales (see the clustering sketch after these steps);
3.2A) selecting a training image, scaling it to a uniform size, and extracting image features with a Darknet-53 backbone network to form three groups of feature maps downsampled 32, 16 and 8 times respectively; distributing the 9 anchor box sizes among the three groups of feature maps, 3 per group, with large anchor box sizes assigned to small feature maps and small anchor box sizes to large feature maps;
3.3A) generating a series of anchor boxes on the original image, using each feature-map pixel as an anchor point with the sizes assigned in step 3.2A), 3 boxes per pixel; computing the intersection-over-union (IoU) between each anchor box and the labelled boxes; the anchor box with the largest IoU is responsible for predicting the object contained in that labelled box; each anchor box predicts one bounding box containing 4 position parameters, namely the center-point abscissa x, the center-point ordinate y, the width w and the height h of the box, 1 confidence score for containing a target, and a conditional class probability score for each class;
3.4A) constructing the bounding-box regression loss with a mean-square-error function and the confidence and class-probability losses with a cross-entropy function, their sum being the total loss; judging whether the total loss is below a preset threshold: if not, back propagation computes the gradient of each network-layer parameter, the parameters are updated with the set learning rate, and execution jumps back to step 3.2A) for the next round of training; if so, the training of the YOLOv3 model is complete.
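A minimal sketch of the k-means anchor clustering in step 3.1A) follows. Plain Euclidean k-means over (width, height) pairs is shown for brevity; YOLOv3's reference implementation clusters with a 1 − IoU distance instead.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchor_sizes(box_whs, k=9):
    """box_whs: (N, 2) array of labelled-box widths and heights, in pixels."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(box_whs)
    centers = km.cluster_centers_
    return centers[np.argsort(centers.prod(axis=1))]   # sorted by area, small to large

# Example: anchors = cluster_anchor_sizes(all_label_box_whs)  -> 9 (w, h) sizes,
# of which the 3 largest go to the most-downsampled (smallest) feature maps.
```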
In this embodiment, the ResNet-50 classification model and the YOLOv3 model can be trained on a computer in advance and then ported to a high-performance on-board microcomputer chip, which is mounted on the unmanned aerial vehicle body and integrated with the UAV through the OSDK. Preferably, the UAV's microcomputer chip is equipped with a GPU with no less than 6 GB of graphics memory, and the UAV's endurance is no less than 30 minutes with a 300 g payload. When the UAV performs a line inspection task and takes an inspection image, the microcomputer chip automatically reads the image according to the preset program and first calls the ResNet-50 classification model to classify it: a close-view image is skipped, while for a distant-view image the YOLOv3 model is then called to detect the large equipment components in it. When a defect is detected, the defective component is marked with a box, the image is pushed to a receiver to warn and be displayed to the operators, and the image and defect information are stored separately.
In this embodiment, the Faster R-CNN model and the associated computing hardware are deployed on a cloud platform to form a cloud application service. After the field line inspection finishes, the operator stores the high-definition images taken by the UAV on a local computer and applies for the cloud application service, which mobilizes software and hardware resources to detect the close-view images by cloud computing, identifies whether small equipment components are defective, and feeds results such as the defective component, defect type and defect position back to the operator. The hardware configured on the cloud platform can meet multi-channel concurrency requirements, with no less than 64 GB of CPU memory, no less than 32 GB of GPU memory and no less than 10 TB of disk.
In this embodiment, the Faster R-CNN model comprises a feature extraction network, a candidate region extraction network, a candidate box screening layer (Proposal layer), a region-of-interest pooling layer (ROI pooling layer) and a classification network connected in sequence; the feature extraction network uses the convolutional part of the VGG-16 network as its backbone; the candidate region extraction network consists of two parallel 1×1 convolution layers; the candidate box screening layer re-sorts and screens the boxes output by the candidate region extraction network to obtain the candidate boxes most likely to contain targets; the region-of-interest pooling layer unifies candidate boxes of different sizes to the same size to meet the input requirement of the fully connected layers; the classification network consists of two parallel fully connected layers, one classifying the input candidate boxes into specific class labels and the other performing a second regression on the candidate boxes to obtain accurate position coordinates. The Faster R-CNN model identifies a close-view inspection image as follows: features are extracted from the close-view image with the convolutional part of the VGG-16 network as backbone to form a 16×-downsampled shared feature map. The feature map is input into the candidate region extraction network, which generates a series of anchor boxes on the original image using each feature-map pixel as an anchor point, computes each anchor box's foreground/background probability through the classification branch and its regression offsets through the regression branch, thereby obtaining the corresponding boxes. The boxes are sorted by foreground probability, the top M are taken, boxes that cross the image boundary or are too small are removed, duplicate boxes are removed by non-maximum suppression, and the top N remaining boxes by foreground probability are taken as candidate boxes (a sketch of this screening follows this paragraph). The candidate boxes and the shared feature map are input together into the region-of-interest pooling layer to obtain candidate-box feature maps of uniform size, which are then input into the classification network. In the classification network, each candidate box's multi-class probabilities are computed through the classification branch, the class with the largest probability is taken as the candidate box's class label, and each candidate box's regression offsets are computed through the regression branch, thereby obtaining the corresponding precisely corrected box. The corrected boxes are sorted by class and probability, duplicates are removed by non-maximum suppression, and the final corrected boxes with their class labels and probability values are output according to a probability threshold and displayed on the original image to complete the classification and localization tasks.
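A minimal sketch of the candidate-box screening (Proposal layer) described above; M, N, the minimum box size and the NMS threshold are illustrative assumptions, as the patent leaves them unspecified.

```python
import torch
from torchvision.ops import nms, remove_small_boxes

def screen_proposals(boxes, fg_probs, img_h, img_w,
                     M=6000, N=300, iou_thr=0.7, min_size=16.0):
    """boxes: (A, 4) regressed anchor boxes in xyxy form; fg_probs: (A,) foreground probs."""
    order = fg_probs.argsort(descending=True)[:M]          # top M by foreground probability
    boxes, fg_probs = boxes[order], fg_probs[order]
    inside = (boxes[:, 0] >= 0) & (boxes[:, 1] >= 0) & \
             (boxes[:, 2] <= img_w) & (boxes[:, 3] <= img_h)  # drop boxes crossing the boundary
    boxes, fg_probs = boxes[inside], fg_probs[inside]
    keep = remove_small_boxes(boxes, min_size)             # drop regions that are too small
    boxes, fg_probs = boxes[keep], fg_probs[keep]
    keep = nms(boxes, fg_probs, iou_thr)[:N]               # remove duplicates, keep top N
    return boxes[keep], fg_probs[keep]
```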
In this embodiment, step 3) is further preceded by the following steps of separately training the candidate region extraction network and the classification network (the training of the feature extraction network is integrated into the training of these two networks):
3.1B) extracting features from the input image with the convolutional part of the VGG-16 network as backbone, taking the output of the last convolution layer of the VGG-16 network as the shared feature map;
3.2B) inputting the shared feature map into the candidate region extraction network and generating a series of anchor boxes on the original image, using each feature-map pixel as an anchor point, 9 boxes per pixel; computing the intersection-over-union (IoU) between each anchor box and the labelled boxes and assigning labels (see the labelling sketch after these steps): an anchor box whose IoU is greater than 0.7, or which has the highest IoU with a labelled box, is labelled "1", indicating that it contains a foreground object; an anchor box whose IoU is less than 0.3 is labelled "0", indicating that it contains background; randomly selecting 128 class-"1" anchor boxes and 128 class-"0" anchor boxes and constructing the softmax two-class loss with a cross-entropy function; constructing the bounding-box regression loss for all class-"1" anchor boxes with a smooth-L1 function, and completing the training of the candidate region extraction network by minimizing the total loss;
3.3B) after the candidate region extraction network is trained, computing scores for the anchor boxes and converting them into foreground/background probabilities with a softmax function, and performing regression on the anchor boxes to obtain position-corrected boxes; taking the top M boxes by foreground probability, removing boxes that cross the image boundary or whose regions are too small, then removing duplicate boxes with the non-maximum suppression method NMS, and taking the top N remaining boxes by foreground probability as candidate boxes;
3.4B) inputting the candidate boxes extracted in step 3.3B) together with the shared feature map obtained in step 3.1B) into the region-of-interest pooling layer to obtain candidate-box feature maps of uniform size, which are then input into the classification network;
3.5B) the classification network computes the IoU between the candidate boxes and the labelled boxes and assigns each candidate box a specific class label: an IoU greater than 0.5 is labelled "1", indicating that the candidate box contains a foreground object, and an IoU between 0.1 and 0.5 is labelled "0", indicating that the candidate box contains background; randomly selecting 32 class-"1" candidate boxes and 96 class-"0" candidate boxes, constructing the softmax multi-class loss with a cross-entropy function and the bounding-box regression loss for all class-"1" candidate boxes with a smooth-L1 function, then computing the total loss of the classification network and completing its training by minimizing the total loss;
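A minimal sketch of the anchor labelling in step 3.2B), using torchvision's box_iou; anchors labelled neither foreground nor background are ignored during loss computation.

```python
import torch
from torchvision.ops import box_iou

def label_rpn_anchors(anchors, gt_boxes, lo=0.3, hi=0.7):
    """anchors: (A, 4) and gt_boxes: (G, 4), both xyxy.
    Returns (A,) labels: 1 foreground, 0 background, -1 ignored."""
    iou = box_iou(anchors, gt_boxes)             # (A, G) intersection-over-union matrix
    labels = torch.full((anchors.size(0),), -1, dtype=torch.long)
    max_iou, _ = iou.max(dim=1)                  # best IoU of each anchor over labelled boxes
    labels[max_iou < lo] = 0                     # background ("0")
    labels[max_iou > hi] = 1                     # foreground ("1")
    labels[iou.argmax(dim=0)] = 1                # anchor with the highest IoU per labelled box
    return labels

# Training then randomly samples 128 class-"1" and 128 class-"0" anchors per image for
# the softmax two-class loss; the smooth-L1 regression loss uses all class-"1" anchors.
```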
The method further comprises the following steps of training the Faster R-CNN model before step 3), condensed in the sketch after these steps:
3.1C) constructing training set samples from the close-view images and their annotation files;
3.2C) initializing a VGG-16 network and training the candidate region extraction network;
3.3C) initializing a VGG-16 network and training the classification network with the candidate boxes output by the candidate region extraction network of step 3.2C);
3.4C) fixing the VGG-16 network of step 3.3C) and training the candidate region extraction network again;
3.5C) fixing the VGG-16 network of step 3.3C) and retraining the classification network with the candidate boxes output by the candidate region extraction network of step 3.4C).
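The four-step alternating training in 3.2C)-3.5C) can be condensed into the following pseudocode-style sketch; init_vgg16, train_rpn, train_classifier and propose are hypothetical helpers wrapping the losses of steps 3.2B) and 3.5B).

```python
backbone_a = init_vgg16()                                  # 3.2C) fresh VGG-16
rpn = train_rpn(backbone_a, train_set)                     #       train the RPN on it

backbone_b = init_vgg16()                                  # 3.3C) second fresh VGG-16
head = train_classifier(backbone_b, propose(rpn, train_set), train_set)

rpn = train_rpn(backbone_b, train_set,                     # 3.4C) backbone_b frozen,
                freeze_backbone=True)                      #       retrain the RPN

head = train_classifier(backbone_b, propose(rpn, train_set), train_set,
                        freeze_backbone=True)              # 3.5C) retrain the classifier
```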
In this embodiment, UAV inspection images of the transmission line are collected in advance; images of components such as channels, whole towers, tower bodies, tower heads, signboards and foundations are classed as distant views, and images of components such as insulators and hanging points as close views, forming data set 1 for training the classification model. Images in both categories with standard framing and clear quality are screened and annotated separately, the distant-view images mainly annotated with large-equipment-component defects and the close-view images with small-equipment-component defects. The annotation files are stored in json format and contain the defect type, defect coordinates (the top-left and bottom-right horizontal and vertical coordinates), image category, image name and other information; the defect images and annotation files together form data sets 2 and 3 for training the target detection models. Each of the three data sets is divided into a training set, a test set and a validation set in an 80%:10%:10% ratio, on which training of the ResNet-50 classification model, the YOLOv3 target detection model and the Faster R-CNN target detection model is completed respectively. During training, the network loss is computed by forward propagation and the gradient of each parameter by back propagation, after which the parameters are updated with the set learning step, completing one epoch; forward computation on the test set then checks the model's generalization through the network loss. Model training is completed over multiple iterations, and the recognition performance is finally checked on the validation set. The trained ResNet-50 classification model and YOLOv3 target detection model are deployed on a DJI Manifold 2 high-performance on-board chip; the Manifold 2 is mounted on a DJI M210 RTK UAV body and communicates with the UAV platform through a USB interface. When the UAV performs a line inspection task and takes an inspection image, the Manifold 2 automatically acquires it and calls the ResNet-50 classification model to classify it; if the image is a distant view, the YOLOv3 model is then called to detect whether it contains defects. When an equipment defect is detected, it is marked with a box and pushed to a receiver to be displayed and to alert the operators. The Faster R-CNN model is deployed on a cloud platform to form a cloud application service; to meet multi-channel concurrency requirements, the hardware configuration should be no less than 64 GB of CPU memory and 32 GB of GPU memory. After the field line inspection finishes, the operator stores the high-definition images taken by the UAV on a local computer and applies for the cloud application service, which mobilizes software and hardware resources to detect the close-view images by cloud computing and feeds the results back to the operator.
In summary, because the transmission line has a complex structure and numerous devices, and different devices, or even different defect types of the same device, differ enormously in appearance characterization, it is difficult to identify many defective components with a single algorithm model. For transmission line defect image identification, this embodiment provides a new identification method based on cloud-edge collaborative detection: an image classification method divides the inspection images into distant-view and close-view images, and different algorithm models detect the defective components in each. Larger components such as channels, towers, ancillary facilities and foundations are identified at the edge with the YOLOv3 model; smaller components such as insulator sheets, hanging points, conductors and fittings are identified in the cloud with the Faster R-CNN model. The method balances recognition speed, recognition accuracy and localization accuracy, can comprehensively recognize multiple defect types, helps reduce the labor intensity of operators, and improves the efficiency, automation and intelligence of power transmission line inspection.
In addition, this embodiment provides a power transmission line defect image recognition device based on cloud-edge collaborative detection. The device is an unmanned aerial vehicle or an inspection terminal comprising at least a microprocessor and a memory, the microprocessor being programmed or configured to execute the steps of the above power transmission line defect image identification method based on cloud-edge collaborative detection, or the memory storing a computer program programmed or configured to execute that method.
In addition, this embodiment provides a power transmission line defect image recognition system based on cloud-edge collaborative detection, comprising at least a microprocessor and a memory, the microprocessor being programmed or configured to execute the steps of the above method, or the memory storing a computer program programmed or configured to execute it.
In addition, this embodiment provides a computer-readable storage medium storing a computer program programmed or configured to execute the above power transmission line defect image identification method based on cloud-edge collaborative detection.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM and optical storage) containing computer-usable program code. The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application; each flow and/or block of the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, so that a series of operational steps are performed on the computer or other programmable apparatus to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples; all technical solutions falling under the concept of the present invention belong to its protection scope. It should be noted that modifications and adaptations made by those skilled in the art without departing from the principles of the present invention should also be regarded as within the protection scope of the present invention.

Claims (9)

1. A power transmission line defect image identification method based on cloud-edge collaborative detection, characterized by comprising the following steps executed when an unmanned aerial vehicle or an inspection terminal serving as the edge performs an inspection operation:
1) Collecting a transmission line inspection image;
2) Classifying the collected transmission line inspection image as a distant-view or close-view image with a classification model;
3) If classified as a distant-view image, identifying and locating defects of large equipment components with a defect detection model deployed at the edge; if classified as a close-view image, calling a defect detection model deployed in the cloud to identify and locate defects of small equipment components; the defect detection model deployed at the edge is a YOLOv3 model and the defect detection model deployed in the cloud is a Faster R-CNN model; the YOLOv3 model identifies a distant-view inspection image as follows: scaling the distant-view inspection image to a set size and extracting image features with a Darknet-53 backbone network to form three groups of feature maps downsampled 32, 16 and 8 times respectively; distributing 9 anchor box sizes among the three groups of feature maps, 3 per group, with large anchor box sizes assigned to small feature maps and small anchor box sizes to large feature maps; generating a series of anchor boxes on the original image, using the feature-map pixels as anchor points with the assigned anchor box sizes, each anchor box predicting one bounding box containing 4 position parameters, namely the center-point abscissa x, the center-point ordinate y, the width w and the height h of the box, 1 confidence score for containing a target, and a conditional class probability score for each class; sorting all boxes by confidence, removing boxes below a threshold, performing non-maximum suppression on the remaining boxes per class to remove duplicate boxes, and displaying the finally retained boxes on the original image to complete the identification of the image.
2. The method for identifying power transmission line defect images based on cloud edge cooperative detection according to claim 1, wherein the classification model in step 2) is a ResNet-50 classification model comprising five multi-block convolutional stages and a fully-connected layer; the output of the fully-connected layer is converted by a sigmoid function into a probability tensor over the distant-view and close-view classes, and the class with the larger probability value is selected as the predicted class of the input image.
3. The method for identifying power transmission line defect images based on cloud edge cooperative detection according to claim 2, wherein the detailed steps of step 2) comprise: processing the input transmission line inspection image through the 5 multi-block convolutional stages to obtain a feature map downsampled 32 times, and then classifying the feature map into a distant-view image or a close-view image through the fully-connected layer and the sigmoid function.
4. The method for identifying power transmission line defect images based on cloud edge cooperative detection according to claim 3, wherein step 2) is further preceded by the following step of training the ResNet-50 classification model: building training samples containing distant-view images and close-view images; in each round of iterative training, processing the images in the training samples through the 5 multi-block convolutional stages to obtain feature maps downsampled 32 times, classifying the feature maps into distant-view or close-view images through the fully-connected layer and the sigmoid function, constructing the classification loss with a cross-entropy function, and updating the network parameters by stochastic gradient descent; training of the ResNet-50 classification model is completed through multiple rounds of iteration.
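A minimal sketch of the scene classifier recited in claims 2 to 4, using the stock torchvision ResNet-50 (five convolutional stages plus a fully-connected layer); the two-logit head, the learning rate and momentum, and the use of torchvision are illustrative assumptions, not part of the claims.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Stock ResNet-50: five convolutional stages followed by a fully-connected layer.
model = resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # two outputs: distant view / close view

criterion = nn.CrossEntropyLoss()              # classification loss per claim 4
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, labels):
    """One iteration: forward pass, cross-entropy loss, SGD parameter update."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)    # images: (N, 3, H, W), labels: (N,)
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(image):
    """Sigmoid turns the two logits into per-class scores; the larger one wins."""
    model.eval()
    with torch.no_grad():
        scores = torch.sigmoid(model(image.unsqueeze(0)))[0]
    return "distant view" if scores[0] > scores[1] else "close view"
```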
5. The method for identifying power transmission line defect images based on cloud edge cooperative detection according to claim 1, wherein step 3) is preceded by the following steps of training the YOLOv3 model:
3.1A) constructing training set samples from the distant-view images and their annotation files; clustering the sizes of all labeled boxes in the training set sample images by a k-means clustering method to form 9 anchor box sizes of different scales;
3.2A) selecting a training sample image, scaling it to a uniform size, and extracting image features with a Darknet-53 network as the backbone network to form three groups of feature maps downsampled 32 times, 16 times and 8 times respectively; allocating the 9 anchor box sizes to the three groups of feature maps, 3 sizes per group, with the large anchor box sizes allocated to the small feature maps and the small anchor box sizes allocated to the large feature maps;
3.3A) according to the anchor box sizes allocated in step 3.2A), generating a series of anchor boxes on the original image with each pixel point of the feature maps as an anchor point, each pixel point generating 3 boxes; calculating the intersection-over-union (IOU) of each anchor box with the labeled boxes, and if an anchor box has the largest IOU with a labeled box, making that anchor box responsible for predicting the object contained in that labeled box; each anchor box predicts a bounding box, and each bounding box in turn contains 4 position parameters, namely the abscissa x of the box center point, the ordinate y of the box center point, the box width w and the box height h, 1 confidence score indicating that a target is contained, and a conditional class probability score for each class;
3.4A) constructing the box regression loss with a mean square error function and constructing the confidence and class probability losses with a cross-entropy function, the sum of the three being the total loss; judging whether the total loss is below a preset threshold; if not, computing the gradients of the parameters of each network layer by back propagation, updating the parameters according to a set learning rate, and then jumping to step 3.2A) to start the next round of training; if so, training of the YOLOv3 model is completed.
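Step 3.1A) clusters the labeled-box sizes into 9 anchor sizes with k-means; the sketch below assumes the YOLO convention of using 1 − IOU between (w, h) pairs as the clustering distance, which the claim itself does not fix, and the random sizes at the end merely stand in for real labeled boxes.

```python
import numpy as np

def wh_iou(wh, centers):
    """IOU between (w, h) pairs and cluster centers, assuming a shared corner."""
    inter = (np.minimum(wh[:, None, 0], centers[None, :, 0])
             * np.minimum(wh[:, None, 1], centers[None, :, 1]))
    union = (wh[:, 0] * wh[:, 1])[:, None] \
          + (centers[:, 0] * centers[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster labeled-box sizes into k anchor sizes, with 1 - IOU as distance."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(wh_iou(wh, centers), axis=1)   # nearest = highest IOU
        new = np.array([wh[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    # Sort by area: the 3 largest sizes go to the 32x-downsampled (smallest) map.
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]

# Random (w, h) pairs standing in for the labeled boxes of the training set:
print(kmeans_anchors(np.random.default_rng(1).uniform(10, 300, size=(500, 2))))
```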
6. The method for identifying power transmission line defect images based on cloud edge cooperative detection according to claim 1, wherein step 3) is further preceded by the following steps of separately training a candidate region extraction network and a classification network:
3.1B) performing feature extraction on the input image with the convolutional part of a VGG-16 network as the backbone network, and taking the output of the last convolutional layer of the VGG-16 network as the shared feature map;
3.2B) inputting the shared feature map into the candidate region extraction network, and generating a series of anchor boxes on the original image with each pixel point of the feature map as an anchor point, each pixel point generating 9 boxes; calculating the intersection-over-union (IOU) of each anchor box with the labeled boxes and assigning labels to the anchor boxes: a label of '1' is assigned when the IOU is greater than 0.7 or the anchor box has the highest IOU with a labeled box, indicating that the anchor box contains foreground, and a label of '0' is assigned when the IOU is less than 0.3, indicating that the anchor box contains background; randomly selecting 128 '1'-class anchor boxes and 128 '0'-class anchor boxes, and constructing the softmax two-class loss with a cross-entropy function; constructing the box regression loss with a smooth L1 function for all '1'-class anchor boxes, and completing training of the candidate region extraction network by minimizing the total loss;
3.3B) after training of the candidate region extraction network is completed, computing scores for the anchor boxes and converting them into foreground/background probabilities through a softmax function; performing regression on the anchor boxes to obtain position-corrected boxes; taking the top M boxes by foreground probability and removing those that exceed the image boundary or whose regions are too small; then removing duplicate boxes by non-maximum suppression (NMS), and taking the top N boxes by foreground probability as candidate boxes;
3.4B) inputting the candidate boxes extracted in step 3.3B) together with the shared feature map obtained in step 3.1B) into a region-of-interest pooling layer to obtain candidate-box feature maps of consistent size, which are then input into the classification network;
3.5B) the classification network calculates the intersection-over-union (IOU) of the candidate boxes with the labeled boxes and assigns a specific class label to each candidate box: a label of '1' is assigned when the IOU is greater than 0.5, indicating that the candidate box contains foreground, and a label of '0' is assigned when the IOU is between 0.1 and 0.5, indicating that the candidate box contains background; randomly selecting 32 '1'-class candidate boxes and 96 '0'-class candidate boxes, constructing the softmax multi-class loss with a cross-entropy function, constructing the box regression loss with a smooth L1 function for all '1'-class candidate boxes, then calculating the total loss of the classification network, and completing training of the classification network by minimizing the total loss;
The method further comprises the following steps of training the Faster R-CNN model before step 3):
3.1C) constructing training set samples from the close-view images and their annotation files;
3.2C) initializing a VGG-16 network and training the candidate region extraction network;
3.3C) initializing a VGG-16 network and training the classification network using the candidate boxes output by the candidate region extraction network in step 3.2C);
3.4C) fixing the VGG-16 network of step 3.3C) and training the candidate region extraction network again;
3.5C) fixing the VGG-16 network of step 3.3C) and retraining the classification network using the candidate boxes output by the candidate region extraction network in step 3.4C).
7. A power transmission line defect image identification device based on cloud edge cooperative detection, the device being an unmanned aerial vehicle or an inspection terminal and comprising at least a microprocessor and a memory, characterized in that the microprocessor is programmed or configured to perform the steps of the method for identifying power transmission line defect images based on cloud edge cooperative detection according to any one of claims 1 to 6, or the memory stores a computer program programmed or configured to perform the method for identifying power transmission line defect images based on cloud edge cooperative detection according to any one of claims 1 to 6.
8. A system for identifying a defect image of a power transmission line based on cloud edge cooperative detection, at least comprising a microprocessor and a memory, wherein the microprocessor is programmed or configured to perform the steps of the method for identifying a defect image of a power transmission line based on cloud edge cooperative detection according to any one of claims 1 to 6, or the memory stores a computer program programmed or configured to perform the method for identifying a defect image of a power transmission line based on cloud edge cooperative detection according to any one of claims 1 to 6.
9. A computer-readable storage medium, wherein a computer program programmed or configured to perform the method for identifying a transmission line defect image based on cloud-edge cooperative detection as claimed in any one of claims 1 to 6 is stored in the computer-readable storage medium.
CN202010691927.XA 2020-07-17 2020-07-17 Power transmission line defect image identification method based on cloud edge cooperative detection Active CN111784685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010691927.XA CN111784685B (en) 2020-07-17 2020-07-17 Power transmission line defect image identification method based on cloud edge cooperative detection


Publications (2)

Publication Number Publication Date
CN111784685A CN111784685A (en) 2020-10-16
CN111784685B true CN111784685B (en) 2023-08-18

Family

ID=72764232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010691927.XA Active CN111784685B (en) 2020-07-17 2020-07-17 Power transmission line defect image identification method based on cloud edge cooperative detection

Country Status (1)

Country Link
CN (1) CN111784685B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112367400B (en) * 2020-11-12 2022-04-29 广东电网有限责任公司 Intelligent inspection method and system for power internet of things with edge cloud coordination
CN112419401A (en) * 2020-11-23 2021-02-26 上海交通大学 Aircraft surface defect detection system based on cloud edge cooperation and deep learning
CN112183788B (en) * 2020-11-30 2021-03-30 华南理工大学 Domain adaptive equipment operation detection system and method
CN112734703A (en) * 2020-12-28 2021-04-30 佛山市南海区广工大数控装备协同创新研究院 PCB defect optimization method by utilizing AI cloud collaborative detection
CN112837282A (en) * 2021-01-27 2021-05-25 上海交通大学 Small sample image defect detection method based on cloud edge cooperation and deep learning
CN112966608A (en) * 2021-03-05 2021-06-15 哈尔滨工业大学 Target detection method, system and storage medium based on edge-side cooperation
CN113052820A (en) * 2021-03-25 2021-06-29 贵州电网有限责任公司 Circuit equipment defect identification method based on neural network technology
CN113326871A (en) * 2021-05-19 2021-08-31 天津理工大学 Cloud edge cooperative meniscus detection method and system
CN113515829B (en) * 2021-05-21 2023-07-21 华北电力大学(保定) Situation awareness method for transmission line hardware defects under extremely cold disasters
CN113408087B (en) * 2021-05-25 2023-03-24 国网湖北省电力有限公司检修公司 Substation inspection method based on cloud side system and video intelligent analysis
CN113255605A (en) * 2021-06-29 2021-08-13 深圳市城市交通规划设计研究中心股份有限公司 Pavement disease detection method and device, terminal equipment and storage medium
CN113486779A (en) * 2021-07-01 2021-10-08 国网北京市电力公司 Panoramic intelligent inspection system for power transmission line
CN113592839B (en) * 2021-08-06 2023-01-13 广东电网有限责任公司 Distribution network line typical defect diagnosis method and system based on improved fast RCNN
CN114359285B (en) * 2022-03-18 2022-07-29 南方电网数字电网研究院有限公司 Power grid defect detection method and device based on visual context constraint learning
CN114397306B (en) * 2022-03-25 2022-07-29 南方电网数字电网研究院有限公司 Power grid grading ring hypercomplex category defect multi-stage model joint detection method
CN114943904A (en) * 2022-06-07 2022-08-26 国网江苏省电力有限公司泰州供电分公司 Operation monitoring method based on unmanned aerial vehicle inspection
CN114972721A (en) * 2022-06-13 2022-08-30 中国科学院沈阳自动化研究所 Power transmission line insulator string recognition and positioning method based on deep learning
CN114926667B (en) * 2022-07-20 2022-11-08 安徽炬视科技有限公司 Image identification method based on cloud edge cooperation
CN115220479B (en) * 2022-09-20 2022-12-13 山东大学 Dynamic and static cooperative power transmission line refined inspection method and system
CN115272981A (en) * 2022-09-26 2022-11-01 山东大学 Cloud-edge co-learning power transmission inspection method and system
US11836968B1 (en) * 2022-12-08 2023-12-05 Sas Institute, Inc. Systems and methods for configuring and using a multi-stage object classification and condition pipeline
CN115965627B (en) * 2023-03-16 2023-06-09 中铁电气化局集团有限公司 Micro component detection system and method applied to railway operation
CN116579609B (en) * 2023-05-15 2023-11-14 三峡科技有限责任公司 Illegal operation analysis method based on inspection process
CN116703926B (en) * 2023-08-08 2023-11-03 苏州思谋智能科技有限公司 Defect detection method, defect detection device, computer equipment and computer readable storage medium


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102446228A (en) * 2010-09-30 2012-05-09 深圳市雅都软件股份有限公司 Three-dimensional space visualized display method and system of transmission line
CN104811608A (en) * 2014-01-28 2015-07-29 聚晶半导体股份有限公司 Image capturing apparatus and image defect correction method thereof
CN108810620A (en) * 2018-07-18 2018-11-13 腾讯科技(深圳)有限公司 Identify method, computer equipment and the storage medium of the material time point in video
CN109657596A (en) * 2018-12-12 2019-04-19 天津卡达克数据有限公司 A kind of vehicle appearance component identification method based on deep learning
WO2020134943A1 (en) * 2018-12-25 2020-07-02 阿里巴巴集团控股有限公司 Car insurance automatic payout method and system
CN110033453A (en) * 2019-04-18 2019-07-19 国网山西省电力公司电力科学研究院 Based on the power transmission and transformation line insulator Aerial Images fault detection method for improving YOLOv3
CN111400536A (en) * 2020-03-11 2020-07-10 无锡太湖学院 Low-cost tomato leaf disease identification method based on lightweight deep neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Aerial target detection based on improved Faster R-CNN; Feng Xiaoyu et al.; Acta Optica Sinica (《光学学报》), No. 6, pp. 250-258 *

Also Published As

Publication number Publication date
CN111784685A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN111784685B (en) Power transmission line defect image identification method based on cloud edge cooperative detection
CN107742093B (en) Real-time detection method, server and system for infrared image power equipment components
Wang et al. Road damage detection and classification with faster R-CNN
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
CN109902806A (en) Method is determined based on the noise image object boundary frame of convolutional neural networks
CN111179249A (en) Power equipment detection method and device based on deep convolutional neural network
CN111145174A (en) 3D target detection method for point cloud screening based on image semantic features
CN114743119B (en) High-speed rail contact net hanger nut defect detection method based on unmanned aerial vehicle
CN111767927A (en) Lightweight license plate recognition method and system based on full convolution network
CN112560675B (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN112581443A (en) Light-weight identification method for surface damage of wind driven generator blade
CN111222478A (en) Construction site safety protection detection method and system
CN109815800A (en) Object detection method and system based on regression algorithm
CN111223129A (en) Detection method, detection device, monitoring equipment and computer readable storage medium
CN110059539A (en) A kind of natural scene text position detection method based on image segmentation
CN110992307A (en) Insulator positioning and identifying method and device based on YOLO
CN110929795A (en) Method for quickly identifying and positioning welding spot of high-speed wire welding machine
CN114049572A (en) Detection method for identifying small target
CN111540203B (en) Method for adjusting green light passing time based on fast-RCNN
Yan et al. Insulator detection and recognition of explosion based on convolutional neural networks
CN115147383A (en) Insulator state rapid detection method based on lightweight YOLOv5 model
CN111524121A (en) Road and bridge fault automatic detection method based on machine vision technology
CN113901911B (en) Image recognition method, image recognition device, model training method, model training device, electronic equipment and storage medium
CN109657540A (en) Withered tree localization method and system
CN111597939B (en) High-speed rail line nest defect detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant