CN115100546A - MobileNet-based small target defect identification method and system for power equipment - Google Patents

MobileNet-based small target defect identification method and system for power equipment

Info

Publication number
CN115100546A
Authority
CN
China
Prior art keywords
layer
defect detection
target
defect
detection module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210478368.3A
Other languages
Chinese (zh)
Inventor
李孟轩
韩军科
杨知
刘彬
李丹煜
赵斌滨
赵彬
刘毅
王剑
汉京善
孔小昂
姬昆鹏
刘畅
张国强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
State Grid Tianjin Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
State Grid Tianjin Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, China Electric Power Research Institute Co Ltd CEPRI, State Grid Tianjin Electric Power Co Ltd
Priority to CN202210478368.3A
Publication of CN115100546A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a MobileNet-based method and system for identifying small-target defects in power equipment, comprising: acquiring image data of the power equipment to be detected; inputting the image data of the power equipment to be detected into a pre-trained defect detection network, and obtaining the defect target detection result output by the pre-trained defect detection network; wherein the defect detection network is a MobileNet-based target detection network with an attention mechanism. By building this network, the method obtains the defect target feature map more quickly, and it is simple to implement, highly accurate in detection, and widely applicable.

Description

MobileNet-based small target defect identification method and system for power equipment
Technical Field
The invention belongs to the field of power equipment defect detection, and particularly relates to a MobileNet-based method and system for identifying small-target defects in power equipment.
Background
Transmission lines are numerous and widely distributed, often running through sparsely populated hills, mountains, and geologically unfavorable areas, where operating conditions are extremely harsh. Under the long-term influence of external factors such as wind, ice coating, and temperature, power equipment is prone to defects or damage. Although such hidden dangers may not affect the equipment in the short term, if they are not found and handled in time they increase equipment losses and reduce efficiency, and in severe cases can cause serious accidents such as broken conductors, transformer damage, and distribution network outages. Regular inspection, or even uninterrupted monitoring, of power equipment, so that hidden dangers and faults are discovered and handled promptly, is therefore of great importance to social life and production safety.
Traditional power equipment inspection relies on human vision to find equipment anomalies: cameras mounted on towers, or unmanned aerial vehicles, take photographs and videos of the conductors, which background personnel then identify manually. This approach depends heavily on the operators' experience, consumes enormous manpower, and is inefficient. Moreover, image retrieval is still a manual operation by inspection personnel. Taking the UAV as an example, captured images are first stored locally on the UAV and uploaded to a server manually after the crew returns to the office, so timeliness is poor and automatic defect detection is lacking.
In recent years, image recognition technology has developed rapidly. Drawing on expert experience and prior knowledge, it uses computers and mathematical reasoning to automatically identify and evaluate shapes, patterns, curves, numbers, character formats, and graphics from large amounts of image information and data. Applying artificial-intelligence image recognition promises to solve the transmission line inspection problem and to greatly improve the breadth, depth, frequency, and reliability of inspection work. Deep learning models represent input data more abstractly by learning from raw data. Training requires a large number of labeled samples; the trained network is then used to process unlabeled samples. Deep learning is well suited to the change detection problem because of its strong feature learning and classification ability: image change detection is essentially image understanding and classification, and multi-layer abstract learning extracts abstract information from images while suppressing noise interference, so the model can distinguish changed from unchanged regions. However, because line defect pictures are affected by the shooting equipment, angle, weather, and background, the background is often complicated, and existing image recognition algorithms still suffer from low recognition rates.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a MobileNet-based method for identifying small-target defects in power equipment, comprising:
acquiring image data of the power equipment to be detected;
inputting the image data of the power equipment to be detected into a pre-trained defect detection network, and obtaining the defect target detection result output by the pre-trained defect detection network;
wherein the defect detection network is a MobileNet-based target detection network with an attention mechanism.
Preferably, the training process of the pre-trained defect detection network includes:
acquiring a sample image of power equipment and a power grid environment background image, extracting the defect target from the sample image by an image segmentation method, and mixing the defect target with the power grid environment background image to obtain primary image data;
performing image preprocessing on the primary image data to obtain secondary image data;
performing data annotation on the secondary image data to form a screenshot data set, which serves as the image data of the power equipment to be detected;
building a MobileNet network structure with an attention mechanism, training it with the screenshot data set, and taking the trained network structure as the defect detection network.
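As a toy sketch of the image-mixing step described above (numpy, grayscale arrays; the function and argument names are ours, not the patent's), a segmented defect patch can be pasted into a grid-environment background through its binary mask:

```python
import numpy as np

def paste_defect(background, patch, mask, top, left):
    """Paste `patch` into `background` at (top, left) where `mask` == 1.

    All arrays are 2-D grayscale; `mask` is 1 where the defect is present.
    Returns a new image; the background is left unmodified.
    """
    out = background.copy()
    h, w = patch.shape
    region = out[top:top + h, left:left + w]
    # Keep background pixels where the mask is 0, take defect pixels otherwise.
    out[top:top + h, left:left + w] = np.where(mask == 1, patch, region)
    return out
```

Varying (top, left) over many backgrounds yields the mixed primary image data with known defect locations.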
Preferably, performing image preprocessing on the primary image data to obtain secondary image data includes:
performing image enhancement on the primary image data by Laplacian sharpening to obtain the edge information of the enhanced image;
determining, from the edge information, the pixel change values corresponding to the defect target and the background in the primary image data, and thereby determining the secondary image data.
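The sharpening step can be sketched with numpy alone; the 4-neighbour kernel and the grayscale assumption are ours, since the patent does not specify them:

```python
import numpy as np

# 4-neighbour Laplacian kernel: responds at intensity transitions (edges).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_sharpen(img):
    """Sharpen a 2-D grayscale image (values 0-255).

    Convolves with the Laplacian and subtracts the response from the
    original, so defect/background transitions stand out.
    """
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    edges = np.zeros((h, w), dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            edges += LAPLACIAN[dy, dx] * padded[dy:dy + h, dx:dx + w]
    # Subtracting the (negative-centre) Laplacian response amplifies edges.
    return np.clip(img - edges, 0, 255)
```

Flat regions are left unchanged; pixels adjacent to an intensity step are pushed toward the extremes, which is what raises defect/background contrast.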
Preferably, performing data annotation on the secondary image data to form a screenshot data set includes:
acquiring the target center point data corresponding to the secondary image data based on a preset sample label;
cropping an image of preset length and width from the secondary image data around the target center point to obtain a cropped image;
obtaining the image label corresponding to the cropped image according to an equal-scaling principle, and building the screenshot data set from the image labels.
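A minimal sketch of the center-point cropping just described, assuming pixel-space (cx, cy, w, h) labels; the function name and the clamping policy are illustrative assumptions, not the patent's specification:

```python
def crop_with_label(img_w, img_h, box, crop_w, crop_h):
    """Crop a fixed-size window centred on a labelled target.

    box = (cx, cy, w, h) in pixels of the source image.
    Returns the crop origin (x0, y0) and the box remapped into the
    crop's coordinate frame.
    """
    cx, cy, w, h = box
    # Clamp the window so it stays inside the source image.
    x0 = min(max(int(cx - crop_w / 2), 0), img_w - crop_w)
    y0 = min(max(int(cy - crop_h / 2), 0), img_h - crop_h)
    # Equal scaling: the box keeps its size, only the origin shifts.
    return (x0, y0), (cx - x0, cy - y0, w, h)
```

For targets near the image border the window slides inward, so the remapped box always lies fully inside the crop.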
Preferably, the MobileNet target detection network with an attention mechanism comprises: a backbone network, a first feature fusion module, a second feature fusion module, a first defect detection module, a second defect detection module, a third defect detection module and a fourth defect detection module;
the backbone network consists of 1 convolutional layer, 6 Bottlenet layers and 1 Trans layer; the backbone network is used for extracting image features, and the Trans layer is used for capturing global and context information. The 6 Bottlenet layers comprise a first, second, third, fourth, fifth and sixth Bottlenet layer; the convolutional layer, the first through sixth Bottlenet layers and the Trans layer are connected in sequence;
the first feature fusion module comprises a convolutional layer, an upsampling layer, a Concat layer and a CBAM connected in sequence; the output of the fifth Bottlenet layer in the backbone network is connected to the input of the Concat layer in the first feature fusion module;
the second feature fusion module comprises a convolutional layer, an upsampling layer, a Concat layer and a CBAM connected in sequence; the output of the sixth Bottlenet layer in the backbone network is connected to the input of the Concat layer in the second feature fusion module; the output of the Trans layer in the backbone network is connected to the input of the convolutional layer in the second feature fusion module; the output of the CBAM in the second feature fusion module is connected to the input of the convolutional layer in the first feature fusion module;
the first defect detection module comprises a convolutional layer, an upsampling layer, a Concat layer, a Trans layer and a CBAM connected in sequence, and the output of the CBAM in the first feature fusion module is connected to the input of the convolutional layer in the first defect detection module; the output of the third Bottlenet layer in the backbone network is connected to the input of the Concat layer in the first defect detection module; the Trans layer in the first defect detection module outputs the first NMS-predicted target detection result;
the second defect detection module comprises a convolutional layer, a Concat layer, a Trans layer and a CBAM connected in sequence, and the output of the CBAM in the first defect detection module is connected to the input of the convolutional layer in the second defect detection module; the output of the convolutional layer in the first defect detection module is connected to the input of the Concat layer in the second defect detection module; the Trans layer in the second defect detection module outputs the second NMS-predicted target detection result;
the third defect detection module comprises a convolutional layer, a Concat layer, a Trans layer and a CBAM connected in sequence, and the output of the CBAM in the second defect detection module is connected to the input of the convolutional layer in the third defect detection module; the output of the convolutional layer in the first feature fusion module is connected to the input of the Concat layer in the third defect detection module; the Trans layer in the third defect detection module outputs the third NMS-predicted target detection result;
the fourth defect detection module comprises a convolutional layer, a Concat layer, a Trans layer and a CBAM connected in sequence, and the output of the CBAM in the third defect detection module is connected to the input of the convolutional layer in the fourth defect detection module; the output of the convolutional layer in the second feature fusion module is connected to the input of the Concat layer in the fourth defect detection module; the Trans layer in the fourth defect detection module outputs the fourth NMS-predicted target detection result.
Preferably, the Bottlenet layer comprises a first 1x1 convolutional layer, a depthwise separable convolutional layer, a second 1x1 convolutional layer and an addition layer connected in sequence; the first 1x1 convolutional layer is the input of the Bottlenet layer, and the addition layer is the output of the Bottlenet layer; the output of the first 1x1 convolutional layer is also connected to the input of the addition layer; all Bottlenet layers in the MobileNet target detection network with an attention mechanism adopt a residual network structure.
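To make the layer wiring concrete, here is a toy numpy sketch of such a block. The shapes and the omission of activations and normalization are simplifying assumptions on our part; only the wiring (1x1 conv, depthwise conv, 1x1 conv, residual add from the first 1x1 conv's output) follows the description:

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise convolution. x: (C_in, H, W); w: (C_out, C_in)."""
    return np.einsum("oc,chw->ohw", w, x)

def depthwise3x3(x, k):
    """Depthwise convolution, 'same' padding. x: (C, H, W); k: (C, 3, 3).

    One filter per channel -- the 'depth separable' spatial step.
    """
    c, h, w = x.shape
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += k[:, dy, dx][:, None, None] * p[:, dy:dy + h, dx:dx + w]
    return out

def bottlenet_block(x, w_expand, k_dw, w_project):
    """Bottlenet wiring per the description above."""
    skip = conv1x1(x, w_expand)    # first 1x1 convolutional layer
    y = depthwise3x3(skip, k_dw)   # depthwise separable convolutional layer
    y = conv1x1(y, w_project)      # second 1x1 convolutional layer
    return y + skip                # addition layer: residual from the first 1x1 conv
```

With identity weights and a delta depthwise kernel, the block reduces to doubling its expanded input, which makes the residual path easy to verify.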
Preferably, the loss function of the MobileNet target detection network with an attention mechanism is:
loss = l_box + l_obj + l_cls
where loss represents the total loss function, and the box term is built on the GIoU measure:
l_box = sum_{i=0}^{S*S} sum_{j=0}^{N} 1_{ij}^{obj} * [1 - IoU(A, B) + (A^C - (A ∪ B)) / A^C]
with A^C representing the minimum box area containing the prediction box and the real box;
l_obj = λ_obj * sum_{i=0}^{S*S} sum_{j=0}^{N} 1_{ij}^{obj} * (c_i - ĉ_i)^2 + λ_nobj * sum_{i=0}^{S*S} sum_{j=0}^{N} 1_{ij}^{nobj} * (c_i - ĉ_i)^2
l_cls = λ_class * sum_{i=0}^{S*S} 1_{i}^{obj} * sum_{c ∈ classes} (p_i(c) - p̂_i(c))^2
where i represents the grid index and j the anchor box index; 1_{ij}^{nobj} is 1 if the box at the i-th grid, j-th anchor box has no target, and 0 otherwise; 1_{ij}^{obj} is 1 if the box at the i-th grid, j-th anchor box has a target, and 0 otherwise; A represents the true rectangular box; B represents the predicted rectangular box; l_box represents the rectangular-box regression error; l_obj represents the target confidence error; l_cls represents the target classification error; λ_nobj represents the no-target weight; S represents the grid size; N represents the number of candidate boxes generated per grid; c_i represents the trained confidence of the i-th grid; ĉ_i represents the predicted confidence of the i-th grid; λ_obj represents the target weight; λ_class represents the category weight; c represents a category; classes represents the category set; p_i(c) represents the trained probability of category c in the i-th grid; p̂_i(c) represents the predicted probability of category c in the i-th grid.
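Since the original l_box equation is rendered only as an image, the GIoU quantity implied by the A^C definition can be sketched as follows; the decomposition into IoU minus an enclosing-box penalty is our reconstruction, with axis-aligned boxes given as (x1, y1, x2, y2):

```python
def giou(a, b):
    """Generalized IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes are disjoint).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    # A^C: smallest box enclosing both A and B.
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (area_c - union) / area_c
```

GIoU is 1 for identical boxes and goes negative for disjoint ones, so a loss of the form 1 - GIoU still gives a useful gradient when the prediction and the ground truth do not overlap.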
Preferably, after obtaining the defect target detection result output by the pre-trained defect detection network, the method further includes:
calculating the accuracy and the miss rate of the defect target detection result.
Preferably, the accuracy is expressed by the following relation:
acc = (1/N) * sum_{cls=1}^{N} ∫_0^1 p_cls(r) dr
where acc represents the defect detection accuracy, N represents the number of detected target categories, r represents the recall at the set IOU threshold, p represents the precision at the set IOU threshold, and cls indexes the target detection categories.
Preferably, the miss rate is expressed by the following relation:
omi = N_FN / (N_FP + N_FN + N_TN)
where omi represents the miss-rate calculation result of defect detection, N_FP is the number of false targets predicted as true targets, N_FN is the number of true targets predicted as false targets, and N_TN is the number of false targets predicted as false targets.
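These evaluation quantities can be exercised with a small sketch. Note that the miss-rate denominator follows our reconstruction of the patent's image-only equation and should be treated as an assumption; the precision/recall definitions are the standard ones at a fixed IOU threshold:

```python
def miss_rate(n_fp, n_fn, n_tn):
    """omi = N_FN / (N_FP + N_FN + N_TN), per the reconstruction above."""
    return n_fn / (n_fp + n_fn + n_tn)

def precision_recall(n_tp, n_fp, n_fn):
    """Precision p and recall r at a fixed IOU threshold."""
    return n_tp / (n_tp + n_fp), n_tp / (n_tp + n_fn)
```

For example, 8 true positives, 2 false positives, 2 false negatives and 16 true negatives give p = r = 0.8 and a miss rate of 0.1 under this definition.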
Based on the same inventive concept, the invention also provides a MobileNet-based small-target defect identification system for power equipment, comprising:
a data acquisition module, configured to acquire image data of the power equipment to be detected;
a defect detection module, configured to input the image data of the power equipment to be detected into a pre-trained defect detection network and to obtain the defect target detection result output by the pre-trained defect detection network;
wherein the defect detection network is a MobileNet-based target detection network with an attention mechanism.
Preferably, the training process of the defect detection network in the defect detection module includes:
acquiring a sample image of power equipment and a power grid environment background image, extracting the defect target from the sample image by an image segmentation method, and mixing the defect target with the power grid environment background image to obtain primary image data;
performing image preprocessing on the primary image data to obtain secondary image data;
performing data annotation on the secondary image data to form a screenshot data set, which serves as the image data of the power equipment to be detected;
building a MobileNet network structure with an attention mechanism, training it with the screenshot data set, and taking the trained network structure as the defect detection network.
Preferably, performing image preprocessing on the primary image data in the defect detection module to obtain secondary image data includes:
performing image enhancement on the primary image data by Laplacian sharpening to obtain the edge information of the enhanced image;
determining, from the edge information, the pixel change values corresponding to the defect target and the background in the primary image data, and thereby determining the secondary image data.
Preferably, performing data annotation on the secondary image data in the defect detection module to form a screenshot data set includes:
acquiring the target center point data corresponding to the secondary image data based on a preset sample label;
cropping an image of preset length and width from the secondary image data around the target center point to obtain a cropped image;
obtaining the image label corresponding to the cropped image according to an equal-scaling principle, and building the screenshot data set from the image labels.
Preferably, the MobileNet target detection network with an attention mechanism in the defect detection module comprises: a backbone network, a first feature fusion module, a second feature fusion module, a first defect detection module, a second defect detection module, a third defect detection module and a fourth defect detection module;
the backbone network consists of 1 convolutional layer, 6 Bottlenet layers and 1 Trans layer; the backbone network is used for extracting image features, and the Trans layer is used for capturing global and context information. The 6 Bottlenet layers comprise a first, second, third, fourth, fifth and sixth Bottlenet layer; the convolutional layer, the first through sixth Bottlenet layers and the Trans layer are connected in sequence;
the first feature fusion module comprises a convolutional layer, an upsampling layer, a Concat layer and a CBAM connected in sequence; the output of the fifth Bottlenet layer in the backbone network is connected to the input of the Concat layer in the first feature fusion module;
the second feature fusion module comprises a convolutional layer, an upsampling layer, a Concat layer and a CBAM connected in sequence; the output of the sixth Bottlenet layer in the backbone network is connected to the input of the Concat layer in the second feature fusion module; the output of the Trans layer in the backbone network is connected to the input of the convolutional layer in the second feature fusion module; the output of the CBAM in the second feature fusion module is connected to the input of the convolutional layer in the first feature fusion module;
the first defect detection module comprises a convolutional layer, an upsampling layer, a Concat layer, a Trans layer and a CBAM connected in sequence, and the output of the CBAM in the first feature fusion module is connected to the input of the convolutional layer in the first defect detection module; the output of the third Bottlenet layer in the backbone network is connected to the input of the Concat layer in the first defect detection module; the Trans layer in the first defect detection module outputs the first NMS-predicted target detection result;
the second defect detection module comprises a convolutional layer, a Concat layer, a Trans layer and a CBAM connected in sequence, and the output of the CBAM in the first defect detection module is connected to the input of the convolutional layer in the second defect detection module; the output of the convolutional layer in the first defect detection module is connected to the input of the Concat layer in the second defect detection module; the Trans layer in the second defect detection module outputs the second NMS-predicted target detection result;
the third defect detection module comprises a convolutional layer, a Concat layer, a Trans layer and a CBAM connected in sequence, and the output of the CBAM in the second defect detection module is connected to the input of the convolutional layer in the third defect detection module; the output of the convolutional layer in the first feature fusion module is connected to the input of the Concat layer in the third defect detection module; the Trans layer in the third defect detection module outputs the third NMS-predicted target detection result;
the fourth defect detection module comprises a convolutional layer, a Concat layer, a Trans layer and a CBAM connected in sequence, and the output of the CBAM in the third defect detection module is connected to the input of the convolutional layer in the fourth defect detection module; the output of the convolutional layer in the second feature fusion module is connected to the input of the Concat layer in the fourth defect detection module; the Trans layer in the fourth defect detection module outputs the fourth NMS-predicted target detection result.
Preferably, the Bottlenet layer in the defect detection module comprises a first 1x1 convolutional layer, a depthwise separable convolutional layer, a second 1x1 convolutional layer and an addition layer connected in sequence; the first 1x1 convolutional layer is the input of the Bottlenet layer, and the addition layer is the output of the Bottlenet layer; the output of the first 1x1 convolutional layer is also connected to the input of the addition layer; all Bottlenet layers in the MobileNet target detection network with an attention mechanism adopt a residual network structure.
Preferably, the loss function of the MobileNet target detection network with an attention mechanism in the defect detection module is:
loss = l_box + l_obj + l_cls
where loss represents the total loss function, and the box term is built on the GIoU measure:
l_box = sum_{i=0}^{S*S} sum_{j=0}^{N} 1_{ij}^{obj} * [1 - IoU(A, B) + (A^C - (A ∪ B)) / A^C]
with A^C representing the minimum box area containing the prediction box and the real box;
l_obj = λ_obj * sum_{i=0}^{S*S} sum_{j=0}^{N} 1_{ij}^{obj} * (c_i - ĉ_i)^2 + λ_nobj * sum_{i=0}^{S*S} sum_{j=0}^{N} 1_{ij}^{nobj} * (c_i - ĉ_i)^2
l_cls = λ_class * sum_{i=0}^{S*S} 1_{i}^{obj} * sum_{c ∈ classes} (p_i(c) - p̂_i(c))^2
where i represents the grid index and j the anchor box index; 1_{ij}^{nobj} is 1 if the box at the i-th grid, j-th anchor box has no target, and 0 otherwise; 1_{ij}^{obj} is 1 if the box at the i-th grid, j-th anchor box has a target, and 0 otherwise; A represents the true rectangular box; B represents the predicted rectangular box; l_box represents the rectangular-box regression error; l_obj represents the target confidence error; l_cls represents the target classification error; λ_nobj represents the no-target weight; S represents the grid size; N represents the number of candidate boxes generated per grid; c_i represents the trained confidence of the i-th grid; ĉ_i represents the predicted confidence of the i-th grid; λ_obj represents the target weight; λ_class represents the category weight; c represents a category; classes represents the category set; p_i(c) represents the trained probability of category c in the i-th grid; p̂_i(c) represents the predicted probability of category c in the i-th grid.
Preferably, after the defect detection module obtains the defect target detection result output by the pre-trained defect detection network, it further calculates the accuracy and the miss rate of the defect target detection result.
Preferably, the accuracy in the defect detection module is expressed by the following relation:
acc = (1/N) * sum_{cls=1}^{N} ∫_0^1 p_cls(r) dr
where acc represents the defect detection accuracy, N represents the number of detected target categories, r represents the recall at the set IOU threshold, p represents the precision at the set IOU threshold, and cls indexes the target detection categories.
Preferably, the miss rate in the defect detection module is expressed by the following relation:
omi = N_FN / (N_FP + N_FN + N_TN)
where omi represents the miss-rate calculation result of defect detection, N_FP is the number of false targets predicted as true targets, N_FN is the number of true targets predicted as false targets, and N_TN is the number of false targets predicted as false targets.
Compared with the closest prior art, the invention has the following beneficial effects:
1. The invention provides a MobileNet-based method and system for identifying small-target defects in power equipment, comprising: acquiring image data of the power equipment to be detected; inputting the image data of the power equipment to be detected into a pre-trained defect detection network, and obtaining the defect target detection result output by the pre-trained defect detection network. The defect detection network is a MobileNet-based target detection network with an attention mechanism; by building this network, the target feature map is obtained more quickly and the defect target is better attended to, improving the precision and accuracy of target detection;
2. Furthermore, the image segmentation method and Laplacian sharpening help improve the contrast between the defect target and the background; four prediction output ports designed into the target detection network allow power equipment defects of different sizes to be detected, improving the precision of defect target prediction; adding the attention mechanism model improves the robustness, stability and generalization ability of the deep learning model; and the full residual network structure helps improve the network's information fusion capability.
Drawings
Fig. 1 is a schematic flow chart of a method for identifying defects of small targets of power equipment based on mobilenet, provided by the invention;
fig. 2 is a schematic view of a target defect detection flow in the method for identifying a small target defect of power equipment based on a mobilenet provided by the present invention;
fig. 3 is a schematic overall framework diagram of the method for identifying small target defects of electrical equipment based on mobilenet provided by the present invention;
fig. 4 is a network structure diagram of the mobilenet with attention mechanism in the method for identifying defects of small targets of power equipment based on mobilenet provided by the present invention;
fig. 5 is a structural diagram of a Bottlenet layer in a network structure in the method for identifying defects of small targets of power equipment based on mobilenet provided by the invention;
fig. 6 is a structural diagram of a CBAM in a network structure in the method for identifying defects of small targets of power equipment based on mobilenet provided by the present invention;
fig. 7 is a structural diagram of a Trans layer in a network structure in the method for identifying the small target defect of the power device based on the mobilenet provided by the present invention;
fig. 8 is a schematic structural diagram of a power equipment small target defect identification system based on mobilenet provided by the present invention.
Detailed Description
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
Example 1:
the invention provides a method for identifying defects of small targets of power equipment based on mobilenet, which comprises the following steps:
Step 1: acquiring image data of electric equipment to be detected;
step 2: inputting the image data of the electric power equipment to be detected to a pre-trained defect detection network, and acquiring a defect target detection result output by the pre-trained defect detection network;
wherein the defect detection network is a target detection network based on a mobilenet with attention mechanism;
specifically, as shown in fig. 2 to 3, step 2 includes:
step 2-1: extracting a defect target from a sample image of the power equipment by using a graph cutting method, and mixing the defect target with an actual power grid environment background;
step 2-2: preprocessing the high-definition image by a Laplace sharpening method, and improving the contrast of a target and a background to obtain a data enhancement sample;
step 2-3: performing data annotation on the data enhancement sample;
step 2-4: intercepting a target image through a target center point interception method according to a preset sample label, generating a label corresponding to a target, and forming an intercepted image data set;
step 2-5, constructing a network of the mobilenet with the attention mechanism, training the network structure of the mobilenet with the attention mechanism by using the screenshot data set, and taking the trained network structure as a defect detection network;
step 2-6, inputting the screenshot data set into a network, training the network, and calculating the accuracy and the omission factor of target detection on a verification set;
specifically, the step 2-1 comprises the following steps: the defect target is extracted from the image of the power equipment by using a graph cut function, which helps obtain a more robust, stable and generalizable deep learning model; the defect target extracted by the graph cut algorithm is mixed with the actual power grid environment background to expand the sample set and balance the number of samples of normal targets and defect targets, wherein the graph cut function is defined as:
E(x) = Σ_{i∈X} E_1(x_i) + λ Σ_{(i,j)∈Y} E_2(x_i, x_j)
where i and j represent two adjacent nodes, x_i and x_j represent the labels of the two adjacent nodes, X represents all pixel nodes in the image, and Y represents the connecting edges between nodes; if node i belongs to the foreground, the pixel label x_i = 1; if node i belongs to the background, x_i = 0; E_1 represents the energy of assigning a node to the foreground or the background; E_2 represents the energy between two adjacent pixel nodes; λ is an energy balance parameter; minimizing the graph cut function E(x) separates the foreground from the background, and the separated target is fused with the actual power grid background to expand the sample set;
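The energy above can be evaluated directly on a label assignment. A minimal numpy sketch (hypothetical names; a Potts penalty on 4-connected neighbours stands in for E_2 and per-pixel foreground/background costs stand in for E_1, both illustrative assumptions rather than the patent's exact terms):

```python
import numpy as np

def graph_cut_energy(labels, unary_fg, unary_bg, lam=1.0):
    """Evaluate E(x) = sum_i E1(x_i) + lam * sum_(i,j) E2(x_i, x_j).

    labels   : HxW array of 0 (background) / 1 (foreground) assignments x_i.
    unary_fg : HxW array, E1 cost of labelling each pixel foreground.
    unary_bg : HxW array, E1 cost of labelling each pixel background.
    lam      : energy balance parameter (lambda in the formula above).
    """
    # Unary term E1: pick the foreground or background cost per pixel.
    e1 = np.where(labels == 1, unary_fg, unary_bg).sum()
    # Pairwise term E2: penalise 4-connected neighbours that disagree.
    e2 = (labels[:, 1:] != labels[:, :-1]).sum() + (labels[1:, :] != labels[:-1, :]).sum()
    return e1 + lam * e2

# A labelling that matches the unary evidence and is spatially smooth
# has lower energy than a noisy one.
evidence_fg = np.array([[0.1, 0.1, 0.9],
                        [0.1, 0.1, 0.9]])   # low cost => pixel looks like foreground
evidence_bg = 1.0 - evidence_fg
good = np.array([[1, 1, 0], [1, 1, 0]])
bad = np.array([[0, 1, 0], [1, 0, 1]])
```

Minimising this energy over all labellings (e.g. with a max-flow solver) yields the foreground/background separation described above.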
the step 2-2 comprises the following steps: the picture is enhanced by Laplacian sharpening, which highlights the edge details of the target in the image, improves the contrast between the target and the background, and thereby improves detection performance; the Laplacian sharpening function is defined as:
g(x, y) = f(x, y) - ∇²f(x, y)
wherein x and y represent the two gradient directions, f(x, y) represents the original image, ∇²f(x, y) represents the Laplacian of the image, and g(x, y) represents the sharpened image; the sharpened image effectively improves contrast and reduces the influence of image blurring;
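The sharpening step above is a single subtraction once the Laplacian is computed. A dependency-free numpy sketch (hypothetical names; the standard 4-neighbour Laplacian kernel is assumed and borders are left unchanged for simplicity):

```python
import numpy as np

def laplacian_sharpen(f):
    """Sharpen an image via g(x, y) = f(x, y) - laplacian(f)(x, y).

    f : 2-D float array (grayscale image).
    """
    lap = np.zeros_like(f)
    # 4-neighbour Laplacian: f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)
    lap[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
                       - 4.0 * f[1:-1, 1:-1])
    return f - lap

# A step edge gets overshoot on both sides, raising local contrast across it.
img = np.array([[0., 0., 0., 1., 1.],
                [0., 0., 0., 1., 1.],
                [0., 0., 0., 1., 1.]])
sharp = laplacian_sharpen(img)
```

The dark side of the edge dips below 0 and the bright side overshoots above 1, which is exactly the contrast boost the text describes.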
step 2-3, manually marking the preprocessed image, wherein the marked object is a defect target of the power equipment;
the steps 2-4 comprise: the process of intercepting the target image by the target center point interception method is as follows: let the target center point in the original image be (x_c, y_c), and let the width and length of the original image be W and H; the original image is randomly offset with the target center point as the center, taking
(x_c + x_shift, y_c + y_shift)
as the center point of the intercepted image, where x_shift represents the abscissa offset and y_shift represents the ordinate offset of the random offset; the size of the screenshot area is set to w and h, and the target label in the screenshot is obtained according to equal scaling; repeating this operation creates the screenshot data set.
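The interception procedure above can be sketched as follows (hypothetical names and (x_min, y_min, x_max, y_max) box convention; clamping the crop window to the image bounds is an added assumption so the patch always has the requested size):

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_around_target(image, box, crop_w, crop_h, max_shift=20):
    """Cut a (crop_h, crop_w) patch roughly centred on a target box and
    rescale the box label into patch coordinates by equal scaling.

    The random shift keeps the target off-centre so the network does not
    learn a 'target is always central' prior.
    """
    H, W = image.shape[:2]
    xc = (box[0] + box[2]) / 2.0
    yc = (box[1] + box[3]) / 2.0
    # Random offset of the crop centre, as in (x_c + x_shift, y_c + y_shift).
    xs = xc + rng.integers(-max_shift, max_shift + 1)
    ys = yc + rng.integers(-max_shift, max_shift + 1)
    # Clamp so the w x h crop window stays inside the image.
    x0 = int(np.clip(xs - crop_w / 2, 0, W - crop_w))
    y0 = int(np.clip(ys - crop_h / 2, 0, H - crop_h))
    patch = image[y0:y0 + crop_h, x0:x0 + crop_w]
    # Equal-scaling label: translate into crop coordinates, normalise by w, h.
    new_box = ((box[0] - x0) / crop_w, (box[1] - y0) / crop_h,
               (box[2] - x0) / crop_w, (box[3] - y0) / crop_h)
    return patch, new_box

image = np.zeros((480, 640, 3))
patch, label = crop_around_target(image, (300, 200, 340, 240), 128, 128)
```

Repeating this over every annotated target builds the screenshot data set.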
The network structure of steps 2-5 is shown in fig. 4 and includes: a backbone network consisting of 1 convolution layer, 6 Bottlenet layers and 1 Trans layer, used for extracting features; the Trans layer has attention characteristics and can capture global information and rich context information. The feature fusion module consists of a convolution layer, an upsampling layer, a Concat layer and a CBAM, where the CBAM provides channel and spatial attention. Feature maps after different numbers of downsampling steps have different receptive fields, so target features on different feature layers emphasize different information, and multi-scale fusion helps improve target detection accuracy. Since power equipment targets vary in size, 4 prediction output ports are designed in the network structure to detect targets of different sizes: the upper NMS target prediction detection layers are used for detecting small targets, and the lower NMS target prediction detection layers are used for detecting larger targets;
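The CBAM referred to above combines channel attention and spatial attention. A minimal numpy sketch under stated simplifications (hypothetical names; the real CBAM applies a 7x7 convolution to the concatenated mean/max spatial maps, which is replaced here by a plain average to keep the sketch dependency-free):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(x, w1, w2):
    """CBAM-style attention on a (C, H, W) feature map.

    Channel attention: average- and max-pooled channel vectors pass through
    a shared two-layer MLP (w1: C x C//r, w2: C//r x C), are summed,
    squashed by a sigmoid and used to rescale channels. Spatial attention:
    channel-wise mean and max maps gate each spatial position.
    """
    # --- channel attention ---
    avg = x.mean(axis=(1, 2))                 # (C,)
    mx = x.max(axis=(1, 2))                   # (C,)
    mlp = lambda v: np.maximum(v @ w1, 0.0) @ w2
    ch = sigmoid(mlp(avg) + mlp(mx))          # (C,) weights in (0, 1)
    x = x * ch[:, None, None]
    # --- spatial attention (simplified: average of mean and max maps) ---
    sp = sigmoid((x.mean(axis=0) + x.max(axis=0)) / 2.0)   # (H, W)
    return x * sp[None, :, :]

C, H, W = 8, 4, 4
rng = np.random.default_rng(1)
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // 2)) * 0.1   # reduction ratio r = 2 assumed
w2 = rng.standard_normal((C // 2, C)) * 0.1
y = cbam(x, w1, w2)
```

Both attention maps lie in (0, 1), so the module only reweights features, preserving the input shape for the Concat layers that follow it.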
as shown in fig. 5, unlike the Bottlenet in the conventional mobilenet, a full residual network structure is adopted, which increases the network information fusion capability; the CBAM and Trans layers in figs. 6 and 7 are conventional network layers, and target position detection is performed on the features fused at different scales; the loss function used by the whole network is as follows:
loss = l_box + l_obj + l_cls
wherein:
l_box = Σ_{i=0}^{S²} Σ_{j=0}^{N} I_ij^obj [ 1 - IoU(A, B) + (A^C - A∪B) / A^C ]
where A^C represents the area of the minimum box containing the prediction box and the real box;
l_obj = λ_obj Σ_{i=0}^{S²} Σ_{j=0}^{N} I_ij^obj (c_i - ĉ_i)² + λ_nobj Σ_{i=0}^{S²} Σ_{j=0}^{N} I_ij^noobj (c_i - ĉ_i)²
l_cls = λ_class Σ_{i=0}^{S²} Σ_{j=0}^{N} I_ij^obj Σ_{c∈classes} (p_i(c) - p̂_i(c))²
wherein i represents the serial number of the grid and j represents the serial number of the anchor box; I_ij^noobj indicates that the box at the i-th grid, j-th anchor box has no target, taking the value 1 and otherwise 0; I_ij^obj indicates that the box at the i-th grid, j-th anchor box has a target, taking the value 1 and otherwise 0; A represents the true rectangular box; B represents the prediction rectangular box; l_box represents the rectangular box regression error; l_obj represents the confidence error of the target; l_cls represents the classification error of the target; λ_nobj represents the no-target weight; S represents the size of the grid; N represents the number of candidate boxes generated for each grid; c_i represents the confidence trained at the i-th grid; ĉ_i represents the confidence predicted at the i-th grid; λ_obj represents the target weight; λ_class represents the category weight; c represents a category; classes represents the set of categories; p_i(c) represents the probability of category c trained in the i-th grid; p̂_i(c) represents the probability of category c predicted in the i-th grid;
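The l_box expression survives only as an image placeholder in the source; given that A^C is described as the minimum box enclosing the prediction and real boxes, a GIoU-style term is one plausible reading. A sketch under that assumption, for a single matched box pair (hypothetical names, plain (x_min, y_min, x_max, y_max) tuples):

```python
def giou_box_loss(A, B):
    """One-pair box loss: 1 - IoU(A, B) + (A^C - union) / A^C,
    where A^C is the smallest box enclosing truth A and prediction B."""
    def area(b):
        return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

    # Intersection rectangle of A and B.
    inter = area((max(A[0], B[0]), max(A[1], B[1]),
                  min(A[2], B[2]), min(A[3], B[3])))
    union = area(A) + area(B) - inter
    iou = inter / union if union > 0 else 0.0
    # Smallest enclosing box A^C.
    c = area((min(A[0], B[0]), min(A[1], B[1]),
              max(A[2], B[2]), max(A[3], B[3])))
    return 1.0 - iou + (c - union) / c

# Identical boxes give zero loss; disjoint boxes are penalised by how much
# empty space the enclosing box A^C contains.
```

In the full loss this term would be summed over all grid cells and anchor boxes that contain a target, weighted by the I_ij^obj indicator.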
when the initial training parameters of the constructed network are set: the number of epochs is set to 100; the initial learning rate is 0.001; the learning rate decay mechanism is set to cosine annealing; the batch size is adjusted according to the data volume; an early stopping mechanism is used: updating stops when the generalization error grows for 5 consecutive cycles;
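The training schedule above (cosine annealing from 0.001 over 100 epochs, early stop after 5 consecutive bad cycles) can be sketched as follows (hypothetical names; a minimum learning rate of 0 is assumed):

```python
import math

def cosine_lr(epoch, total_epochs=100, lr0=0.001, lr_min=0.0):
    """Cosine-annealed learning rate for the given epoch."""
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

class EarlyStopper:
    """Stop training once the generalization (validation) error has grown
    for `patience` consecutive epochs."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_error):
        if val_error < self.best:
            self.best = val_error
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience   # True => stop training

stopper = EarlyStopper(patience=5)
```

The learning rate starts at 0.001, decays smoothly, and approaches 0 at epoch 100; the stopper fires on the fifth consecutive epoch without improvement.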
the steps 2-6 comprise: the accuracy calculation formula is as follows:
Figure BDA0003626650580000114
wherein, N represents the number of the detected target categories, r represents the recall rate under the set IOU threshold, p represents the accuracy under the set IOU threshold, cls represents the category number of the target detection, the IOU takes values in 0.5-0.95 interval at the interval of 0.05, a series of p and r can be obtained, r is the abscissa, p is the ordinate, a curve can be obtained, the area between the coordinate axis and the curve is the average accuracy rate, and the area passes through
Figure BDA0003626650580000115
And (4) calculating.
The missing rate calculation formula is as follows:
omi = (N_FP + N_FN) / (N_FP + N_FN + N_TN)
wherein N_FP is the number of false targets predicted as true targets, N_FN is the number of true targets predicted as false targets, and N_TN is the number of false targets predicted as false targets.
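The two evaluation metrics above can be sketched as follows (hypothetical names; the miss-rate formula itself is an image in the source, so the share of wrong outcomes among the three listed counts is used as a clearly labelled placeholder):

```python
def average_precision(recalls, precisions):
    """Area between the recall axis and the p-r curve (trapezoidal rule)."""
    pts = sorted(zip(recalls, precisions))
    return sum(0.5 * (p0 + p1) * (r1 - r0)
               for (r0, p0), (r1, p1) in zip(pts, pts[1:]))

def detection_accuracy(pr_curves):
    """acc: mean of the per-category average precisions.

    pr_curves : list of (recalls, precisions) pairs, one per category,
    gathered over the IOU thresholds 0.5 to 0.95.
    """
    return sum(average_precision(r, p) for r, p in pr_curves) / len(pr_curves)

def miss_rate(n_fp, n_fn, n_tn):
    """omi from the three counts named in the text: false targets predicted
    true (n_fp), true targets predicted false (n_fn), false targets
    predicted false (n_tn). Placeholder formula, see lead-in."""
    return (n_fp + n_fn) / (n_fp + n_fn + n_tn)

# A detector with precision 1 at every recall level scores acc = 1.
perfect = [([i / 10 for i in range(11)], [1.0] * 11)] * 3
```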
Example 2:
based on the same concept, the invention provides a power equipment small target defect identification system based on mobilenet, the system structure is shown in fig. 8, and the system comprises:
a data acquisition module: used for acquiring image data of the power equipment to be detected;
a defect detection module: used for inputting the image data of the power equipment to be detected into a pre-trained defect detection network and acquiring the defect target detection result output by the pre-trained defect detection network;
wherein the defect detection network is a target detection network based on a mobilenet with attention mechanism;
specifically, the training process of the defect detection network in the defect detection module includes:
acquiring a sample image and a power grid environment background image of power equipment, extracting a defect target from the sample image by using an image segmentation method, and performing image mixing on the defect target and the power grid environment background image to acquire primary image data;
performing image preprocessing on the primary image data to obtain secondary image data;
carrying out data annotation on the secondary image data to form a screenshot data set; the screenshot data set is image data of the to-be-detected power equipment;
building a network structure of the mobilenet with an attention mechanism, training the network structure of the mobilenet with the attention mechanism by using the screenshot data set, and taking the trained network structure as a defect detection network;
the image preprocessing is carried out on the primary image data in the defect detection module to obtain secondary image data, and the method comprises the following steps:
performing image enhancement on the primary image data by adopting a Laplace sharpening method to obtain edge information of an enhanced image;
determining pixel change values corresponding to a defect target and a background in the primary image data according to the edge information, and determining secondary image data;
the defect detection module carries out data annotation aiming at the secondary image data to form a screenshot data set, and the method comprises the following steps:
acquiring target central point data corresponding to the secondary image data based on a preset sample label;
intercepting an image with a preset length and width in the secondary image data through the target central point data to obtain an intercepted image;
acquiring an image label corresponding to the intercepted image according to an equal scaling principle, and establishing a screenshot data set according to the image label;
the object detection network with attention mechanism for the mobilenet in the defect detection module comprises: the system comprises a backbone network, a first feature fusion module, a second feature fusion module, a first defect detection module, a second defect detection module, a third defect detection module and a fourth defect detection module;
the backbone network consists of 1 convolutional layer, 6 Bottlenet layers and 1 Trans layer; the backbone network is used for extracting image features, and the Trans layer is used for capturing global information and context information; the 6 Bottlenet layers comprise: a first, second, third, fourth, fifth and sixth Bottlenet layer; the convolutional layer, the first Bottlenet layer, the second Bottlenet layer, the third Bottlenet layer, the fourth Bottlenet layer, the fifth Bottlenet layer, the sixth Bottlenet layer and the Trans layer are connected in sequence;
the first feature fusion module comprises a convolution layer, an upsampling layer, a Concat layer and a CBAM which are connected in sequence; the output end of the fifth Bottlenet layer in the backbone network is connected with the input end of the Concat layer in the first feature fusion module;
the second feature fusion module comprises a convolution layer, an upsampling layer, a Concat layer and a CBAM which are connected in sequence; the output end of the sixth Bottlenet layer in the backbone network is connected with the input end of the Concat layer in the second feature fusion module; the output end of the Trans layer in the backbone network is connected with the input end of the convolution layer in the second feature fusion module; the output end of the CBAM in the second feature fusion module is connected with the input end of the convolution layer in the first feature fusion module;
the first defect detection module comprises a convolution layer, an upsampling layer, a Concat layer, a Trans layer and a CBAM which are connected in sequence, and the output end of the CBAM in the first feature fusion module is connected with the input end of the convolution layer in the first defect detection module; the output end of the third Bottlenet layer in the backbone network is connected with the input end of the Concat layer in the first defect detection module; the Trans layer in the first defect detection module outputs a first NMS predicted target detection result;
the second defect detection module comprises a convolution layer, a Concat layer, a Trans layer and a CBAM which are connected in sequence, and the output end of the CBAM in the first defect detection module is connected with the input end of the convolution layer in the second defect detection module; the output end of the convolution layer in the first defect detection module is connected with the input end of the Concat layer in the second defect detection module; the Trans layer in the second defect detection module outputs a second NMS predicted target detection result;
the third defect detection module comprises a convolution layer, a Concat layer, a Trans layer and a CBAM which are connected in sequence, and the output end of the CBAM in the second defect detection module is connected with the input end of the convolution layer in the third defect detection module; the output end of the convolution layer in the first feature fusion module is connected with the input end of the Concat layer in the third defect detection module; the Trans layer in the third defect detection module outputs a third NMS predicted target detection result;
the fourth defect detection module comprises a convolution layer, a Concat layer, a Trans layer and a CBAM which are connected in sequence, and the output end of the CBAM in the third defect detection module is connected with the input end of the convolution layer in the fourth defect detection module; the output end of the convolution layer in the second feature fusion module is connected with the input end of the Concat layer in the fourth defect detection module; the Trans layer in the fourth defect detection module outputs a fourth NMS predicted target detection result;
the Bottlenet layer in the defect detection module comprises a first 1x1 convolutional layer, a depth separable convolutional layer, a second 1x1 convolutional layer and an addition layer which are sequentially connected, wherein the first 1x1 convolutional layer is an input end of the Bottlenet layer, and the addition layer is an output end of the Bottlenet layer; the output end of the first 1x1 convolutional layer is connected with the input end of the addition layer; the Bottlenet layers in the target detection network with the attention mechanism of the mobilenet all adopt a residual error network structure;
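The paragraph above describes the Bottlenet as a 1x1 convolution, a depthwise separable convolution and a second 1x1 convolution with a residual add. A quick parameter count shows why this design is much lighter than a standard convolution (hypothetical names; an expansion factor of 1 and bias-free layers are assumptions for illustration):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def bottlenet_params(c_in, c_out, k=3, expand=1):
    """Weights in the Bottlenet layer described above: 1x1 conv (expand),
    k x k depthwise conv, 1x1 conv (project); the residual add is free."""
    c_mid = c_in * expand
    return (c_in * c_mid * 1 * 1      # first 1x1 convolutional layer
            + c_mid * k * k           # depthwise: one k x k filter per channel
            + c_mid * c_out * 1 * 1)  # second 1x1 convolutional layer

standard = conv_params(64, 64, 3)      # 64 * 64 * 9  = 36864 weights
separable = bottlenet_params(64, 64)   # 4096 + 576 + 4096 = 8768 weights
```

For 64 channels the Bottlenet carries roughly a quarter of the weights of a plain 3x3 convolution, which is what makes the network suitable for mobile deployment.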
the loss function of the target detection network with attention mechanism of the mobilenet in the defect detection module is as follows:
loss = l_box + l_obj + l_cls
where loss represents the loss function,
l_box = Σ_{i=0}^{S²} Σ_{j=0}^{N} I_ij^obj [ 1 - IoU(A, B) + (A^C - A∪B) / A^C ]
where A^C represents the area of the minimum box containing the prediction box and the real box;
l_obj = λ_obj Σ_{i=0}^{S²} Σ_{j=0}^{N} I_ij^obj (c_i - ĉ_i)² + λ_nobj Σ_{i=0}^{S²} Σ_{j=0}^{N} I_ij^noobj (c_i - ĉ_i)²
l_cls = λ_class Σ_{i=0}^{S²} Σ_{j=0}^{N} I_ij^obj Σ_{c∈classes} (p_i(c) - p̂_i(c))²
wherein i represents the serial number of the grid and j represents the serial number of the anchor box; I_ij^noobj indicates that the box at the i-th grid, j-th anchor box has no target, taking the value 1 and otherwise 0; I_ij^obj indicates that the box at the i-th grid, j-th anchor box has a target, taking the value 1 and otherwise 0; A represents the true rectangular box; B represents the prediction rectangular box; l_box represents the rectangular box regression error; l_obj represents the confidence error of the target; l_cls represents the classification error of the target; λ_nobj represents the no-target weight; S represents the size of the grid; N represents the number of candidate boxes generated for each grid; c_i represents the confidence trained at the i-th grid; ĉ_i represents the confidence predicted at the i-th grid; λ_obj represents the target weight; λ_class represents the category weight; c represents a category; classes represents the set of categories; p_i(c) represents the probability of category c trained in the i-th grid; p̂_i(c) represents the probability of category c predicted in the i-th grid;
after the defect detection module obtains the defect target detection result output by the pre-trained defect detection network, the method further comprises the following steps:
and calculating the accuracy and the omission factor of the detection result of the defect target.
Preferably, the relational expression of the accuracy in the defect detection module is as follows:
acc = (1/N) Σ_{cls=1}^{N} ∫_0^1 p(r) dr
wherein acc represents the accuracy calculation result of defect detection, N represents the number of detected target categories, r represents the recall rate under the set IOU threshold, p represents the accuracy under the set IOU threshold, and cls represents the category number of target detection;
the relationship expression of the missing rate in the defect detection module is shown as the following formula:
omi = (N_FP + N_FN) / (N_FP + N_FN + N_TN)
wherein omi represents the miss rate calculation result of defect detection, N_FP is the number of false targets predicted as true targets, N_FN is the number of true targets predicted as false targets, and N_TN is the number of false targets predicted as false targets.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting the protection scope thereof, and although the present invention has been described in detail with reference to the above-mentioned embodiments, those skilled in the art should understand that after reading the present invention, they can make various changes, modifications or equivalents to the specific embodiments of the application, but these changes, modifications or equivalents are all within the protection scope of the claims of the application.

Claims (12)

1. A method for identifying defects of small targets of electric power equipment based on mobilenet is characterized by comprising the following steps:
acquiring image data of electric equipment to be detected;
inputting the image data of the electric power equipment to be detected to a pre-trained defect detection network, and acquiring a defect target detection result output by the pre-trained defect detection network;
wherein the defect detection network is a mobilenet-based target detection network with attention mechanism.
2. The method of claim 1, wherein the training process of the pre-trained defect detection network comprises:
acquiring a sample image and a power grid environment background image of power equipment, extracting a defect target from the sample image by using an image segmentation method, and performing image mixing on the defect target and the power grid environment background image to acquire primary image data;
performing image preprocessing on the primary image data to obtain secondary image data;
carrying out data annotation on the secondary image data to form a screenshot data set; the screenshot data set is image data of the to-be-detected power equipment;
and constructing a network structure of the mobilenet with an attention mechanism, training the network structure of the mobilenet with the attention mechanism by using the screenshot data set, and taking the trained network structure as a defect detection network.
3. The method of claim 2, wherein the image pre-processing the primary image data to obtain secondary image data comprises:
performing image enhancement on the primary image data by adopting a Laplace sharpening method to obtain edge information of an enhanced image;
and determining pixel change values corresponding to the defect target and the background in the primary image data according to the edge information, and determining secondary image data.
4. The method of claim 2, wherein said data annotating said secondary image data to form a screenshot data set, comprises:
acquiring target central point data corresponding to the secondary image data based on a preset sample label;
intercepting an image with a preset length and width in the secondary image data through the target central point data to obtain an intercepted image;
and acquiring an image label corresponding to the intercepted image according to an equal scaling principle, and establishing a screenshot data set according to the image label.
5. The method of claim 2, wherein the object detection network of the mobilenet with attention mechanism comprises: the system comprises a backbone network, a first feature fusion module, a second feature fusion module, a first defect detection module, a second defect detection module, a third defect detection module and a fourth defect detection module;
the main network consists of 1 convolutional layer, 6 Bottlenet layers and 1 Trans layer; the main network is used for extracting image characteristics, and the Trans layer is used for capturing global information and context information; the 6 Bottlenet layers comprise: a first, second, third, fourth, fifth and sixth Bottlenet layer; the convolutional layer, the first Bottlenet layer, the second Bottlenet layer, the third Bottlenet layer, the fourth Bottlenet layer, the fifth Bottlenet layer, the sixth Bottlenet layer and the Trans layer are connected in sequence;
the first feature fusion module comprises a convolution layer, an upsampling layer, a Concat layer and a CBAM which are connected in sequence; the output end of the fifth Bottlenet layer in the backbone network is connected with the input end of the Concat layer in the first feature fusion module;
the second feature fusion module comprises a convolution layer, an upsampling layer, a Concat layer and a CBAM which are connected in sequence; the output end of the sixth Bottlenet layer in the backbone network is connected with the input end of the Concat layer in the second feature fusion module; the output end of the Trans layer in the backbone network is connected with the input end of the convolution layer in the second feature fusion module; the output end of the CBAM in the second feature fusion module is connected with the input end of the convolution layer in the first feature fusion module;
the first defect detection module comprises a convolution layer, an upsampling layer, a Concat layer, a Trans layer and a CBAM which are connected in sequence, and the output end of the CBAM in the first feature fusion module is connected with the input end of the convolution layer in the first defect detection module; the output end of the third Bottlenet layer in the backbone network is connected with the input end of the Concat layer in the first defect detection module; the Trans layer in the first defect detection module outputs a first NMS predicted target detection result;
the second defect detection module comprises a convolution layer, a Concat layer, a Trans layer and a CBAM which are connected in sequence, and the output end of the CBAM in the first defect detection module is connected with the input end of the convolution layer in the second defect detection module; the output end of the convolution layer in the first defect detection module is connected with the input end of the Concat layer in the second defect detection module; the Trans layer in the second defect detection module outputs a second NMS predicted target detection result;
the third defect detection module comprises a convolution layer, a Concat layer, a Trans layer and a CBAM which are connected in sequence, and the output end of the CBAM in the second defect detection module is connected with the input end of the convolution layer in the third defect detection module; the output end of the convolution layer in the first feature fusion module is connected with the input end of the Concat layer in the third defect detection module; the Trans layer in the third defect detection module outputs a third NMS predicted target detection result;
the fourth defect detection module comprises a convolution layer, a Concat layer, a Trans layer and a CBAM which are connected in sequence, and the output end of the CBAM in the third defect detection module is connected with the input end of the convolution layer in the fourth defect detection module; the output end of the convolution layer in the second feature fusion module is connected with the input end of the Concat layer in the fourth defect detection module; and the Trans layer in the fourth defect detection module outputs a fourth NMS predicted target detection result.
6. The method of claim 5, wherein the Bottlenet layer comprises a first 1x1 convolutional layer, a depth separable convolutional layer, a second 1x1 convolutional layer and an addition layer which are connected in sequence, the first 1x1 convolutional layer being the input end of the Bottlenet layer and the addition layer being the output end of the Bottlenet layer; the output end of the first 1x1 convolutional layer is connected with the input end of the addition layer; the Bottlenet layers in the target detection network of the mobilenet with the attention mechanism all adopt a residual network structure.
7. The method of claim 1, wherein the loss function of the mobilenet-based target detection network with attention mechanism is:

loss = l_box + l_obj + l_cls

where loss represents the total loss function;

l_box = Σ_{i=0}^{S^2} Σ_{j=0}^{N} 1_{ij}^{obj} [1 - IoU(A, B) + (A^C - A∪B) / A^C]

where A^C represents the area of the minimum box containing the prediction box and the real box;

l_obj = λ_obj Σ_{i=0}^{S^2} Σ_{j=0}^{N} 1_{ij}^{obj} (c_i - ĉ_i)^2 + λ_nobj Σ_{i=0}^{S^2} Σ_{j=0}^{N} 1_{ij}^{noobj} (c_i - ĉ_i)^2

l_cls = λ_class Σ_{i=0}^{S^2} Σ_{j=0}^{N} 1_{ij}^{obj} Σ_{c∈classes} (p_i(c) - p̂_i(c))^2

wherein i represents the serial number of the grid and j represents the serial number of the anchor box; 1_{ij}^{noobj} takes the value 1 when the box at the j-th anchor box of the i-th grid contains no target, and 0 otherwise; 1_{ij}^{obj} takes the value 1 when the box at the j-th anchor box of the i-th grid contains a target, and 0 otherwise; A represents the real rectangular box; B represents the prediction rectangular box; l_box represents the regression error of the rectangular box; l_obj represents the confidence error of the target; l_cls represents the classification error of the target; λ_nobj represents the no-target weight; S represents the size of the grid; N represents the number of candidate boxes generated for each grid; c_i represents the confidence trained for the i-th grid; ĉ_i represents the confidence predicted for the i-th grid; λ_obj represents the target weight; λ_class represents the category weight; c represents a category; classes represents the set of categories; p_i(c) represents the probability of category c trained in the i-th grid; p̂_i(c) represents the probability of category c predicted in the i-th grid.
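As a concrete check of the l_box term, the GIoU-style regression loss for one predicted/real box pair can be evaluated directly; the box coordinates below are illustrative (x1, y1, x2, y2) values, not taken from the patent:

```python
def giou_loss(a, b):
    """1 - IoU(A, B) + (A^C - union) / A^C for axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    # A^C: area of the minimum box containing the prediction box and the real box
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    enclose = (cx2 - cx1) * (cy2 - cy1)
    return 1.0 - inter / union + (enclose - union) / enclose

perfect = giou_loss((0, 0, 2, 2), (0, 0, 2, 2))   # identical boxes: loss 0
shifted = giou_loss((0, 0, 2, 2), (1, 1, 3, 3))   # partial overlap: IoU 1/7, penalty 2/9
```

The penalty term (A^C - union) / A^C is what distinguishes this from a plain IoU loss: it stays informative even when the two boxes do not overlap at all.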
8. The method of claim 1, wherein after obtaining the defect target detection result output by the pre-trained defect detection network, the method further comprises:
calculating the accuracy and the miss rate of the defect target detection result.
9. The method of claim 8, wherein the accuracy is expressed by the following relation:

acc = (1/N) Σ_{cls=1}^{N} ∫_0^1 p(r) dr

wherein acc represents the calculation result of the defect detection accuracy, N represents the number of detected target categories, r represents the recall rate under the set IOU threshold, p represents the precision under the set IOU threshold, and cls represents the serial number of the target detection category.
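The accuracy above is the per-class area under the precision-recall curve, averaged over the N categories. A small numerical sketch using trapezoidal integration over illustrative PR points:

```python
def average_precision(recalls, precisions):
    """Trapezoidal integral of p(r) over the recall axis."""
    ap = 0.0
    for k in range(1, len(recalls)):
        ap += (recalls[k] - recalls[k - 1]) * (precisions[k] + precisions[k - 1]) / 2.0
    return ap

def accuracy(pr_curves):
    """Mean of the per-class AP integrals over N = len(pr_curves) categories."""
    return sum(average_precision(r, p) for r, p in pr_curves) / len(pr_curves)

# Two illustrative categories: a perfect detector and a degrading one.
perfect = ([0.0, 0.5, 1.0], [1.0, 1.0, 1.0])    # AP = 1.0
degrading = ([0.0, 0.5, 1.0], [1.0, 0.8, 0.6])  # AP = 0.45 + 0.35 = 0.8
acc = accuracy([perfect, degrading])            # (1.0 + 0.8) / 2 = 0.9
```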
10. The method of claim 8, wherein the miss rate is expressed by the following relation:

omi = N_FN / (N_FN + N_TN)

wherein omi represents the calculation result of the defect detection miss rate, N_FP represents the number of false targets predicted as true targets, N_FN represents the number of true targets predicted as false targets, and N_TN represents the number of false targets predicted as false targets.
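Under a false-omission-rate reading of the miss rate (an assumption here, since the original formula is an image in the patent), omi follows directly from the counts defined above:

```python
def miss_rate(n_fp, n_fn, n_tn):
    """Fraction of predicted-negative boxes that were in fact true targets.

    n_fp: false targets predicted as true targets (unused in this reading),
    n_fn: true targets predicted as false targets,
    n_tn: false targets predicted as false targets.
    """
    return n_fn / (n_fn + n_tn)

# Illustrative counts: 5 missed defects among 50 predicted-negative boxes.
omi = miss_rate(n_fp=3, n_fn=5, n_tn=45)   # 5 / 50 = 0.1
```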
11. A small target defect identification system of electric power equipment based on mobilenet, characterized by comprising:
a data acquisition module, configured to acquire image data of the power equipment to be detected;
a defect detection module, configured to input the image data of the power equipment to be detected into a pre-trained defect detection network and to obtain the defect target detection result output by the pre-trained defect detection network;
wherein the defect detection network is a mobilenet-based target detection network with attention mechanism.
12. The system of claim 11, wherein the training process for the defect detection network in the defect detection module comprises:
acquiring a sample image and a power grid environment background image of power equipment, extracting a defect target from the sample image by using an image segmentation method, and performing image mixing on the defect target and the power grid environment background image to acquire primary image data;
performing image preprocessing on the primary image data to obtain secondary image data;
carrying out data annotation on the secondary image data to form a screenshot data set; the screenshot data set is image data of the power equipment to be detected;
building a network structure of the mobilenet with an attention mechanism, training the network structure of the mobilenet with the attention mechanism by using the screenshot data set, and taking the trained network structure as the defect detection network.
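The "image mixing" step of claim 12 amounts to copy-paste augmentation: a segmented defect patch is composited onto a grid-environment background at a chosen position. A minimal sketch on synthetic arrays, where threshold-based masking stands in for the unspecified image segmentation method:

```python
import numpy as np

def extract_defect(sample, threshold=128):
    """Binary mask of the defect target via simple intensity thresholding."""
    return sample > threshold

def mix_into_background(background, patch, mask, top, left):
    """Paste the masked defect patch onto a copy of the background image."""
    out = background.copy()
    h, w = patch.shape
    region = out[top:top + h, left:left + w]
    region[mask] = patch[mask]          # only defect pixels overwrite the background
    return out

background = np.zeros((64, 64), dtype=np.uint8)    # synthetic grid-environment image
patch = np.full((8, 8), 200, dtype=np.uint8)       # synthetic bright defect patch
mask = extract_defect(patch)
mixed = mix_into_background(background, patch, mask, top=10, left=20)
```

The bounding box of the pasted region ((top, left) plus the patch size) is known at mixing time, so the subsequent data annotation step can be generated automatically rather than drawn by hand.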
CN202210478368.3A 2022-05-05 2022-05-05 Mobile-based small target defect identification method and system for power equipment Pending CN115100546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210478368.3A CN115100546A (en) 2022-05-05 2022-05-05 Mobile-based small target defect identification method and system for power equipment


Publications (1)

Publication Number Publication Date
CN115100546A true CN115100546A (en) 2022-09-23

Family

ID=83287004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210478368.3A Pending CN115100546A (en) 2022-05-05 2022-05-05 Mobile-based small target defect identification method and system for power equipment

Country Status (1)

Country Link
CN (1) CN115100546A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993327A (en) * 2023-09-26 2023-11-03 国网安徽省电力有限公司经济技术研究院 Defect positioning system and method for transformer substation
CN116993327B (en) * 2023-09-26 2023-12-15 国网安徽省电力有限公司经济技术研究院 Defect positioning system and method for transformer substation


Legal Events

Date Code Title Description
PB01 Publication