CN113688857A - Method for detecting foreign matters in power inspection image based on generation countermeasure network - Google Patents

Method for detecting foreign matters in power inspection image based on generation countermeasure network Download PDF

Info

Publication number
CN113688857A
CN113688857A (application CN202110453727.5A)
Authority
CN
China
Prior art keywords: image, detected, network, reconstructed, loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110453727.5A
Other languages
Chinese (zh)
Inventor
胡川黔
姬鹏飞
郝军
邓海义
陈科羽
陈凤翔
徐梁刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Power Grid Co Ltd
Original Assignee
Guizhou Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Power Grid Co Ltd filed Critical Guizhou Power Grid Co Ltd
Priority to CN202110453727.5A priority Critical patent/CN113688857A/en
Publication of CN113688857A publication Critical patent/CN113688857A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Compared with other supervised deep learning models, the data set used by the method only needs to contain a large amount of unlabelled positive-sample image data; a large amount of labelled negative-sample data is not required. This solves the problems that negative examples in power inspection images are few in number, difficult to collect, and costly to label, and improves the feasibility and adaptability of the training process.

Description

Method for detecting foreign objects in power inspection images based on a generative adversarial network
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for detecting foreign objects in power inspection images based on a generative adversarial network.
Background
National development is inseparable from the development of electric power, and as transmission networks grow in scale, the manpower, material, and financial resources required for later-stage line maintenance increase accordingly. At the same time, China covers a vast territory whose regions differ greatly in geographical environment, and transmission lines span plains, mountains, swamps, lakes, and other terrain. Most transmission networks are therefore erected in natural environments and are highly vulnerable to foreign objects; if a foreign object on a line is not found and handled in time, it can cause an unpredictable power accident, affecting both industrial production and normal residential electricity use. Planned inspection and maintenance of transmission lines is therefore necessary. In this research, "foreign objects hanging on transmission lines" generally refers to kites, balloons, fragments of agricultural greenhouse film, and the like caught on overhead lines. Because most high-voltage overhead transmission lines use bare conductors, a foreign object on a line shortens the safe line-to-ground discharge distance and can easily injure pedestrians or damage vehicles or houses passing under the line. More seriously, when humidity in the air raises the conductivity of the foreign object beyond the insulation threshold and the object on the line remains in contact with the ground or with conductors such as the tower for a long time, a single-phase grounding short-circuit accident can occur.
Or, when wind or swaying causes a foreign object to bridge two or even three phase conductors simultaneously, a more serious two-phase or three-phase short circuit can result.
A literature search of the prior art shows that, both at home and abroad, research on foreign-object detection for transmission lines is relatively scarce compared with power-line extraction and power-inspection fault detection. On the one hand, foreign-object detection technology started late, and its development clearly lags behind other techniques. On the other hand, compared with power-inspection and power-line targets, foreign objects caught on power lines have the following characteristics. First, they are of many kinds, including kites, balloons, fragments of agricultural greenhouse film, and the like. Second, they vary in shape and colour, the background environments of transmission lines differ, and distinguishing foreground foreign objects from the background is difficult. Finding a widely applicable method that can detect foreign objects against complicated backgrounds is therefore very challenging.
Disclosure of Invention
In view of the above, the present invention provides a method for detecting foreign objects in power inspection images based on a generative adversarial network, in order to solve the problems set forth in the background art.
The purpose of the first aspect of the invention is realized by the following technical scheme:
the method for detecting foreign objects in power inspection images based on a generative adversarial network comprises the following steps:
Step S1: improving the generation network of a generative adversarial network with a U-Net structure, constructing a corresponding discrimination network, and defining a loss function L;
Step S2: constructing a data set from normal power inspection images as samples for training the improved foreign-object detection model;
Step S3: inputting the power inspection image to be detected into the trained model;
Step S4: the model reconstructs the power inspection image to be detected through the generation network, obtains a reconstructed image, and calculates an anomaly score;
Step S5: if the anomaly score is smaller than a certain threshold, the power inspection image to be detected is judged to be normal; if the anomaly score is larger than the threshold, a foreign object is judged to exist in the image.
Further, in step S1, the generation network comprises an encoder subnetwork G_E and a decoder subnetwork G_D; the discrimination network D comprises an encoder-like subnetwork D_E. The encoder subnetwork G_E and the decoder subnetwork G_D are structurally symmetrical to each other, and skip connections exist between layers of the same size, realizing direct information transmission between layers.
Further, the loss function L employs a weighted combination of three losses, the adversarial loss L_adv, the context loss L_con and the latent loss L_lat, as the objective function for training the generation network. The final loss function can be expressed as:

L = λ_adv·L_adv + λ_con·L_con + λ_lat·L_lat

where λ_adv, λ_con, λ_lat are coefficients used to balance the different loss functions;
Adversarial loss L_adv: the training process of the GAN is an adversarial process between the generator and the discriminator; after the generator outputs the reconstructed power inspection image, the discriminator judges whether the image comes from the generator or is the original image. The adversarial loss can be expressed as:

L_adv = E_{X∼p_X}[log D(X)] + E_{X∼p_X}[log(1 − D(X'))]

where X denotes the original image, X' the reconstructed image, D(X) the discriminator's result for the original image, D(X') the discriminator's result for the reconstructed image, and E_{X∼p_X} denotes the expectation over X sampled from the original data distribution p_X.
Context loss L_con: the adversarial loss forces the model to generate realistic samples, but does not guarantee that contextual information about the input is learned. To learn context information sufficiently to capture the data distribution of the input samples, a context loss is constructed. The context loss can be expressed as:

L_con = E_{X∼p_X} ‖X − X'‖₁

where X denotes the original image, X' the reconstructed image, and E_{X∼p_X} denotes the expectation over X sampled from the original data distribution p_X.
Latent loss L_lat: with the adversarial and context losses defined above, the model can generate realistic and contextually similar images. In addition to these goals, the input image X and the reconstructed image X' should have latent representations that are as similar as possible, to ensure that the network produces contextually sound latent representations for normal examples. The latent loss can be expressed as:

L_lat = E_{X∼p_X} ‖f(X) − f(X')‖₂

where X denotes the original image, X' the reconstructed image, f(X) the feature vector extracted from the original image by the encoder of the generation network, f(X') the feature vector extracted from the reconstructed image by the last convolution layer of the discriminator, and E_{X∼p_X} denotes the expectation over X sampled from the original data distribution p_X.
Further, in step S2, a large number of normal power inspection images are collected as a data set for training the improved foreign-object detection model. The generation network G learns the data distribution of the original sample images and reconstructs images, while the discrimination network D judges whether a reconstructed image is real or fake. The generation network G and the discrimination network D are trained adversarially and updated iteratively until the value of the objective function L converges to its minimum, at which point the images produced by the generation network G are almost identical to the original images and the discrimination network cannot distinguish real from generated images; the training of the model is then complete.
Further, in step S3, the image to be detected is input into the trained model. When a normal power inspection image is input, the trained encoder subnetwork G_E and decoder subnetwork G_D reconstruct the input original image according to the data distribution of the normal sample images, yielding a reconstructed image to be detected. In this case the reconstructed image restores the original image well, the error between the reconstructed and original images is smaller than a certain threshold, and the model judges the input power inspection image to be normal. When an abnormal power inspection image is input, the trained encoder subnetwork G_E and decoder subnetwork G_D still reconstruct the input image according to the normal-sample data distribution; but because the data distribution of the abnormal image differs from that of the normal samples, the reconstructed image cannot restore the original image well, the error between the reconstructed and original images is larger than the threshold, and the model judges the input power inspection image to be abnormal, i.e. a foreign object exists.
Further, in step S4, for all original images X to be detected that are input in the same batch, the trained generation network G sequentially generates the latent space vector Z and the reconstructed image X'; the trained discrimination network D, through its last convolution layer, sequentially generates the latent space vector Z' of each reconstructed image, and the distance between the latent space vector of the original image and that of the reconstructed image is calculated as the anomaly score. For an image X to be detected, the anomaly score can be expressed as:

A(X) = ω·R(X) + (1 − ω)·T(X)

R(X) = ‖X − X'‖₁

T(X) = ‖Z − Z'‖₂

X' = G(X)

Z = f(X)

Z' = f(X')

where X denotes the power inspection image to be detected, ω is a weight coefficient, X' denotes the reconstructed power inspection image, Z the latent space vector of the original image, and Z' the latent space vector of the reconstructed image. R(X), based on the formula of L_con, measures the contextual similarity between the input image and the reconstructed image; T(X), based on the formula of L_lat, measures the latent-space difference between them. G(X) denotes the image reconstructed by the generation network, f(·) the extracted latent space vector, and A(X) the anomaly score, which is finally scaled into [0, 1]. Further, in step S5: if the anomaly score is smaller than a certain threshold, the model judges the power inspection image to be detected to be normal; if it is larger than the threshold, the model judges that a foreign object exists in the image.
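As an illustration, the anomaly score defined above can be sketched in Python/NumPy. This is a minimal sketch, not the patented implementation: the value ω = 0.5 and the batch-wise min-max scaling into [0, 1] are assumptions consistent with the description, and the latent vectors stand in for f(X) and f(X').

```python
import numpy as np

def anomaly_scores(xs, xs_rec, zs, zs_rec, omega=0.5):
    """Batch anomaly scores A(X) = ω·R(X) + (1 − ω)·T(X), scaled to [0, 1].

    R(X): L1 reconstruction error between each image and its reconstruction.
    T(X): L2 distance between the latent vectors Z = f(X) and Z' = f(X').
    """
    n = len(xs)
    r = np.abs(xs - xs_rec).reshape(n, -1).mean(axis=1)           # R(X)
    t = np.sqrt(((zs - zs_rec) ** 2).reshape(n, -1).sum(axis=1))  # T(X)
    raw = omega * r + (1.0 - omega) * t
    lo, hi = raw.min(), raw.max()
    # scale the batch of raw scores into [0, 1] as the method describes
    return (raw - lo) / (hi - lo) if hi > lo else np.zeros_like(raw)
```

A well-reconstructed (normal) image then receives a score near 0, while a poorly reconstructed one scores near 1.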
It is an object of the second aspect of the invention to provide a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method described above when executing the computer program.
It is an object of a third aspect of the invention to provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method as previously described.
Compared with the prior art:
1) compared with other supervised deep learning models, the improved foreign matter detection model is adopted, and the data set used by the method only needs to contain a large amount of unlabelled positive sample image data and does not need a large amount of labeled negative sample data. When the power inspection image to be detected is input, if no foreign matter exists, the strong image generation capacity of the GAN network is utilized, the original image can be restored through the reconstructed image, and the error between the reconstructed image and the original image is smaller than a certain threshold value; if the foreign matter exists, the data distribution of the image to be detected and the data distribution of the sample image are different, so that the error between the image to be detected and the reconstructed image is larger than a certain threshold value, and the model judges that the foreign matter exists in the input power patrol inspection image. The problems that the number of load examples in the power inspection image is small, the load examples are difficult to collect, a large amount of effort is needed to label data and the like are solved, and the feasibility and the adaptability of the training process are improved.
2) The invention improves the generation network by adding skip connections between the encoder and decoder at corresponding scales; direct information transmission between layers preserves local and global multi-scale information, producing a better reconstruction effect and improving the final detection accuracy.
3) Considering that the discrimination network can serve both as a classifier and as a feature extractor, the invention uses the discrimination network to replace the second encoder in the foreign-object detection model, simplifying the model, reducing the number of parameters to be trained, and making training more efficient.
4) The invention uses the encoder of the generation network to extract the vector mapping the original image into the latent space, and the discrimination network to extract the vector mapping the reconstructed image into the latent space; the distance between the two latent space vectors expresses the error between the original and reconstructed images, replacing complex high-dimensional comparisons with one-dimensional vectors and reducing computational complexity.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the present invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings, in which:
FIG. 1 is a diagram of an overall model architecture;
FIG. 2 is a data connection diagram where network improvements are generated;
FIG. 3 is a training flow diagram;
fig. 4 is a flowchart of foreign matter detection.
Detailed Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be understood that the preferred embodiments are illustrative of the invention only and are not limiting upon the scope of the invention.
The invention discloses a method for detecting foreign objects in power inspection images based on a generative adversarial network, which comprises the following steps:
Step S1: improving the generation network of a generative adversarial network with a U-Net structure, constructing a corresponding discrimination network, and defining a loss function L;
Step S2: constructing a data set from normal power inspection images as samples for training the improved foreign-object detection model;
Step S3: inputting the power inspection image to be detected into the trained model;
Step S4: the model reconstructs the power inspection image to be detected through the generation network, obtains a reconstructed image, and calculates an anomaly score;
Step S5: if the anomaly score is smaller than a certain threshold, the power inspection image to be detected is judged to be normal; if the anomaly score is larger than the threshold, a foreign object is judged to exist in the image.
In step S1, the loss function is specifically constructed as follows: the invention adopts a weighted combination of the adversarial loss L_adv, the context loss L_con and the latent loss L_lat as the objective function for training the generation network, and the final loss function can be expressed as:

L = λ_adv·L_adv + λ_con·L_con + λ_lat·L_lat

where λ_adv, λ_con, λ_lat are coefficients used to balance the different loss functions.
Adversarial loss L_adv: the training process of the GAN is an adversarial process between the generator and the discriminator; after the generator outputs the reconstructed power inspection image, the discriminator is used to determine whether the image comes from the generator or is the original image. The adversarial loss can be expressed as:

L_adv = E_{X∼p_X}[log D(X)] + E_{X∼p_X}[log(1 − D(X'))]

where X denotes the original image, X' the reconstructed image, D(X) the discriminator's result for the original image, D(X') the discriminator's result for the reconstructed image, and E_{X∼p_X} denotes the expectation over X sampled from the original data distribution p_X.
Context loss L_con: the adversarial loss forces the model to generate realistic samples, but does not guarantee that contextual information about the input is learned. To learn context information sufficiently to capture the data distribution of the input samples, a context loss is constructed. The context loss can be expressed as:

L_con = E_{X∼p_X} ‖X − X'‖₁

where X denotes the original image, X' the reconstructed image, and E_{X∼p_X} denotes the expectation over X sampled from the original data distribution p_X.
Latent loss L_lat: with the adversarial and context losses defined above, the model can generate realistic and contextually similar images. In addition to these goals, the input image X and the reconstructed image X' should have latent representations that are as similar as possible, to ensure that the network produces contextually sound latent representations for normal examples. The latent loss can be expressed as:

L_lat = E_{X∼p_X} ‖f(X) − f(X')‖₂

where X denotes the original image, X' the reconstructed image, f(X) the feature vector extracted from the original image by the encoder of the generation network, f(X') the feature vector extracted from the reconstructed image by the last convolution layer of the discriminator, and E_{X∼p_X} denotes the expectation over X sampled from the original data distribution p_X.
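The three loss terms above can be sketched in plain Python/NumPy on toy arrays. This is a minimal illustration, not the training code of the invention: the λ weight values are illustrative defaults (the patent does not specify them), and scalar discriminator outputs stand in for D(X) and D(X').

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    # L_adv = E[log D(X)] + E[log(1 − D(X'))]
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def context_loss(x, x_rec):
    # L_con = E ‖X − X'‖₁  (mean absolute pixel error)
    return float(np.mean(np.abs(x - x_rec)))

def latent_loss(f_x, f_x_rec):
    # L_lat = E ‖f(X) − f(X')‖₂  (L2 distance between latent vectors)
    return float(np.sqrt(np.sum((f_x - f_x_rec) ** 2)))

def total_loss(x, x_rec, f_x, f_x_rec, d_real, d_fake,
               lam_adv=1.0, lam_con=40.0, lam_lat=1.0):
    # L = λ_adv·L_adv + λ_con·L_con + λ_lat·L_lat
    # the λ values here are illustrative, not specified by the patent
    return (lam_adv * adversarial_loss(d_real, d_fake)
            + lam_con * context_loss(x, x_rec)
            + lam_lat * latent_loss(f_x, f_x_rec))
```

A perfect reconstruction drives L_con and L_lat to zero, leaving only the adversarial term.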
In step S2, the specific steps of constructing the data set and the foreign-object detection model are:
(1) Constructing the data set: the model of the invention only needs to learn the data distribution of the samples during training, so a large number of normal power inspection images must be collected to make the training set. Meanwhile, to evaluate the performance of the model and improve the accuracy of foreign-object detection, a testing step is added to training; a test set is therefore needed, and it must contain both positive- and negative-sample images so the detection effect of the model can be tested. First, the overall training sample is made: because existing public data sets lack power inspection images, the training-set data in this embodiment are power inspection images of the samples captured by unmanned aerial vehicle. To expand the data, the original images are randomly flipped and rotated by 90° to enrich the samples, and the images are cut by centre cropping, finally yielding 1200 normal power inspection images. From the total training samples, 200 are randomly drawn into the test set. The test set also requires negative-sample image data, but power inspection images with foreign objects are few and difficult to collect. Finally, 400 test-set images are obtained, with 200 positive and 200 negative samples.
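The augmentation described above (random flipping, rotation by multiples of 90°, centre cropping) can be sketched as follows. This is a minimal NumPy illustration; the helper names and the 0.5 flip probabilities are assumptions, not details from the patent.

```python
import numpy as np

def augment(img, rng):
    # random flips plus a random multiple-of-90° rotation,
    # as used to enrich the training samples
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)   # horizontal flip
    if rng.random() < 0.5:
        img = np.flip(img, axis=0)   # vertical flip
    return np.rot90(img, k=int(rng.integers(0, 4)))

def center_crop(img, size):
    # cut a size x size patch from the centre of the image
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]
```

In practice the same operations would be expressed with torchvision transforms in the PyTorch pipeline the embodiment uses.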
(2) Constructing the foreign-object detection model: the overall framework of the improved foreign-object detection model is shown in fig. 1. The generative adversarial network can be divided into two main parts, a generation network and a discrimination network. The generation network G comprises an encoder subnetwork G_E and a decoder subnetwork G_D; the discrimination network D comprises an encoder-like subnetwork D_E. The encoder subnetwork G_E consists of eight convolution layers whose input is the original image X. The first seven convolutions pad the edge by 1 and use a stride of 2, realizing downsampling that halves the size; each is followed by a BN layer and a LeakyReLU activation layer. The final convolution has no padding and a stride of 1, and outputs the one-dimensional latent space vector Z of the original image. The detailed parameters of each layer of the encoder subnetwork G_E are shown in table 1 below:
Table 1: encoder subnetwork parameters (rendered as an image in the original publication; the layer parameters are not reproduced here).
The structure of the decoder subnetwork G_D is symmetrical with that of the encoder subnetwork G_E. Its first transposed convolution has no padding and a stride of 1, and its input is the one-dimensional latent space vector Z of the original image; the last four transposed convolutions pad the edge by 1 and use a stride of 2, realizing upsampling that doubles the size. The first seven transposed convolution layers are each followed by a BN layer and a ReLU activation layer; the last layer uses a tanh activation without BN and outputs the reconstructed image X'. Notably, the input to each layer of the decoder subnetwork G_D is the output of the previous convolution layer concatenated with the output of the corresponding layer of the encoder subnetwork G_E, as shown in the data connection diagram of fig. 2. The detailed parameters of each layer of the decoder subnetwork G_D are shown in table 2 below:
Table 2: decoder subnetwork parameters (rendered as an image in the original publication; the layer parameters are not reproduced here).
discriminating network routing class encoder subnetwork DEForm, class encoder subnetwork DEStructured like encoder subnetwork GEThe difference between the output of the last convolutional layer as the potential space vector Z' of the reconstructed image is the encoder-like subnetwork DEAnd a dense connecting layer is used after the last convolution layer, and the sigmoid function is used for activation, so that the effect of judging the authenticity of the reconstructed image is achieved.
In step S3, as shown in the training flowchart of fig. 3, the foreign-object detection model is trained by inputting the training-set data into the generation network G; the generation network G and the discrimination network D are trained adversarially and updated iteratively until the value of the objective function L converges to its minimum, at which point the images produced by the generation network G are almost identical to the original images and the discrimination network cannot distinguish real from generated images, completing the training of the model. The loss function L is recorded and observed during training, and the number of learning epochs and the learning rate are adjusted reasonably to prevent overfitting from harming the generalization ability of the model.
Objective evaluation of model performance: after each epoch of training, the test-set data are input into the model and the AUC value is calculated to evaluate the performance of the foreign-object detection model. AUC is defined as the area under the ROC curve. The ROC (receiver operating characteristic) curve is drawn with the true positive rate (sensitivity) as the ordinate and the false positive rate (1 − specificity) as the abscissa, over a series of different binary classification boundary values or decision thresholds. The AUC value is often used as an evaluation criterion because ROC curves frequently cannot show clearly which of two classifiers performs better, whereas the AUC is a single number: the classifier with the larger AUC performs better. AUC = 1 represents a perfect classifier, and 0.5 < AUC < 1 represents an advantage over a random classifier.
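The AUC described above can also be computed directly from its rank interpretation: the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative sample. A minimal pure-Python sketch (libraries such as scikit-learn provide the same value via a ROC-curve computation):

```python
def auc_score(labels, scores):
    # AUC as the probability that a random positive (foreign-object)
    # sample outscores a random negative (normal) sample; ties count 0.5
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect separation of the two classes yields 1.0, matching the definition in the text.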
Recording the experimental data: the experimental environment of this embodiment is a Win10 system with a Tesla P100 graphics card, and the deep learning framework is PyTorch 1.2. The test results of the improved foreign-object detection model on the power inspection data set are shown in table 3.
Table 3 detection results for different foreign substances:
type of foreign matter Kite Balloon with a sealing member Agricultural plastic greenhouse fragments
AUC 0.857 0.832 0.877
The model of the invention has excellent test results on the power inspection data set.
The detection flow of detecting the power inspection image in the step S4 is as shown in fig. 4, and the power inspection image X to be detected is input into the trained model, and the potential space vector Z of the original image to be detected and the reconstructed image X' to be detected are respectively generated in sequence; and the trained discrimination network D generates a potential space vector Z 'of the to-be-detected reconstructed image through the last convolution layer of the input to-be-detected reconstructed image X'. And calculating the distance between the potential space vector of the original image to be detected and the potential space vector of the reconstructed image to be detected, namely calculating the abnormal score. Wherein the anomaly score can be expressed as:
A(X) = ωR(X) + (1-ω)T(X)
R(X) = ||X - X'||_1
T(X) = ||Z - Z'||_2
X' = G(X)
Z = f(X)
Z' = f(X')
where X denotes the power inspection image to be detected, ω is a weight coefficient, X' denotes the reconstructed power inspection image, Z denotes the latent space vector of the original image, Z' denotes the latent space vector of the reconstructed image, R(X) is the reconstruction score measuring the contextual similarity between the input image and the reconstructed image (based on the L_con formula), T(X) is the score measuring the latent difference between the input image and the reconstructed image (based on the L_lat formula), G(X) denotes the image reconstructed by the generation network, f(X) denotes the latent space vector extracted from an image, and A(X) denotes the anomaly score, which is finally scaled to the range [0, 1]. In this embodiment, the final threshold obtained by calculation is about 0.2. If the anomaly score is smaller than the adaptive threshold, the power inspection image to be detected is judged normal; if the anomaly score is larger than the adaptive threshold, a foreign object is judged to be present in the power inspection image to be detected.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (8)

1. A foreign matter detection method in a power inspection image based on a generation countermeasure network is characterized in that: the method comprises the following steps:
step S1: utilizing the U-Net to improve a generation network for generating a countermeasure network, constructing a corresponding discrimination network, and defining a loss function L;
step S2: taking the normal power inspection image as a sample construction data set for training an improved foreign matter detection model;
step S3: inputting the power inspection image to be tested into the trained model;
step S4: the model reconstructs the power inspection image to be detected through a generation network, obtains a reconstructed image of the power inspection image to be detected, and calculates an abnormal score;
step S5: if the abnormal score is smaller than a certain threshold value, judging that the power patrol image to be detected is normal; and if the abnormal score is larger than a certain threshold value, judging that foreign matters exist in the power patrol inspection image to be detected.
2. The method for detecting the foreign matter in the power inspection image based on the generation countermeasure network as claimed in claim 1, wherein: in step S1, the generation network comprises an encoder sub-network G_E and a decoder sub-network G_D; the discrimination network D comprises an encoder-like sub-network D_E; the encoder sub-network G_E and the decoder sub-network G_D are structurally symmetric to each other, with skip connections between layers of the same size, enabling direct information transfer between those layers.
3. The method for detecting the foreign matter in the power inspection image based on the generation countermeasure network as claimed in claim 2, wherein: the loss function L is a weighted sum of the adversarial loss L_adv, the contextual loss L_con and the latent loss L_lat, used as the objective function for training the generation network; the final loss function can be expressed as:

L = λ_adv·L_adv + λ_con·L_con + λ_lat·L_lat

where λ_adv, λ_con, λ_lat are coefficients used to balance the different loss functions;
Adversarial loss L_adv: the training process of the GAN is an adversarial game between the generator and the discriminator. After the generator outputs the reconstructed power inspection image, the discriminator judges whether the image comes from the generator or is the original image. The adversarial loss can be expressed as:

L_adv = E_{X~pX}[log D(X) + log(1 - D(X'))]

where X denotes the original image, X' denotes the reconstructed image, D(X) denotes the discriminator's output for the original image, D(X') denotes the discriminator's output for the reconstructed image, and X ~ pX denotes that X is sampled from the original data distribution pX;
Contextual loss L_con: the adversarial loss forces the model to generate realistic samples but does not guarantee that contextual information about the input is learned. To learn the contextual information sufficiently and capture the data distribution of the input samples, a contextual loss is constructed, which can be expressed as:

L_con = E_{X~pX} ||X - X'||_1

where X denotes the original image, X' denotes the reconstructed image, and X ~ pX denotes that X is sampled from the original data distribution pX;
Latent loss L_lat: the adversarial loss and contextual loss defined above can generate realistic and contextually similar images; in addition to these objectives, to ensure that the network produces a sound latent representation for normal examples, the latent representations of the input image X and the reconstructed image X' should be as similar as possible. The latent loss can be expressed as:

L_lat = E_{X~pX} ||f(X) - f(X')||_2

where X denotes the original image, X' denotes the reconstructed image, f(X) denotes the feature vector extracted from the original image by the encoder of the generation network, f(X') denotes the feature vector extracted from the reconstructed image by the last convolutional layer of the discriminator, and X ~ pX denotes that X is sampled from the original data distribution pX.
4. The method for detecting the foreign matter in the power inspection image based on the generation countermeasure network as claimed in claim 1, wherein: in step S2, a large number of normal power inspection images are collected as the data set for training the improved foreign object detection model; the generation network G learns the data distribution of the original sample images and reconstructs the images, the discrimination network D judges whether a reconstructed image is true or false, and the generation network G and the discrimination network D undergo adversarial training, updating iteratively until the value of the objective function L converges to its minimum, so that the images generated by the generation network G are almost consistent with the original images and the discrimination network cannot distinguish whether a generated image is true or false, thereby completing the training of the model.
5. The method for detecting the foreign matter in the power inspection image based on the generation countermeasure network as claimed in claim 1, wherein: in step S3, the image to be detected is input into the trained model. When a normal power inspection image is input, the trained encoder sub-network G_E and decoder sub-network G_D reconstruct the input original image according to the data distribution of normal sample images; in this case the reconstructed image restores the original image well, the error between the reconstructed image and the original image is smaller than a certain threshold, and the model judges that the input power inspection image is normal. When an abnormal power inspection image is input, the trained encoder sub-network G_E and decoder sub-network G_D still reconstruct the input original image according to the data distribution of normal sample images; but because the data distribution of an abnormal image differs from that of the normal sample images, the reconstructed image cannot restore the original image well, the error between the reconstructed image and the original image is larger than a certain threshold, and the model judges that the input power inspection image is abnormal, i.e. that a foreign object is present.
6. The method for detecting the foreign matter in the power inspection image based on the generation countermeasure network as claimed in claim 1, wherein: in step S4, for all original images X to be detected input in the same batch, the trained generation network G sequentially generates the latent space vector Z and the reconstructed image X' of each original image to be detected; for all reconstructed images input in the same batch, the trained discrimination network D sequentially generates the latent space vector Z' of each reconstructed image through its last convolutional layer; the distance between the latent space vector of the original image and that of the reconstructed image is then computed, i.e. the anomaly score, which for an image X to be detected can be expressed as:
A(X) = ωR(X) + (1-ω)T(X)
R(X) = ||X - X'||_1
T(X) = ||Z - Z'||_2
X' = G(X)
Z = f(X)
Z' = f(X')
where X denotes the power inspection image to be detected, ω is a weight coefficient, X' denotes the reconstructed power inspection image, Z denotes the latent space vector of the original image, Z' denotes the latent space vector of the reconstructed image, R(X) is the reconstruction score measuring the contextual similarity between the input image and the reconstructed image (based on the L_con formula), T(X) is the score measuring the latent difference between the input image and the reconstructed image (based on the L_lat formula), G(X) denotes the image reconstructed by the generation network, f(X) denotes the latent space vector extracted from an image, and A(X) denotes the anomaly score, which is finally scaled to the range [0, 1]; if the anomaly score is smaller than a certain threshold, the model judges that the power inspection image to be detected is normal; if the anomaly score is larger than a certain threshold, the model judges that a foreign object is present in the power inspection image to be detected.
7. A computer apparatus comprising a memory, a processor, and a computer program stored on the memory and capable of running on the processor, wherein: the processor, when executing the computer program, implements the method of any of claims 1-6.
8. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, implements the method of any one of claims 1-6.
CN202110453727.5A 2021-04-26 2021-04-26 Method for detecting foreign matters in power inspection image based on generation countermeasure network Pending CN113688857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110453727.5A CN113688857A (en) 2021-04-26 2021-04-26 Method for detecting foreign matters in power inspection image based on generation countermeasure network

Publications (1)

Publication Number Publication Date
CN113688857A true CN113688857A (en) 2021-11-23

Family

ID=78576318


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984285A (en) * 2023-03-21 2023-04-18 上海仙工智能科技有限公司 Library bit state detection method and system based on generation countermeasure network and storage medium
CN116128676A (en) * 2023-04-14 2023-05-16 北京航空航天大学 Method, system, equipment and medium for detecting abnormality of civil monitoring data

Citations (4)

Publication number Priority date Publication date Assignee Title
CN110823576A (en) * 2019-11-18 2020-02-21 苏州大学 Mechanical anomaly detection method based on generation of countermeasure network
WO2020168731A1 (en) * 2019-02-19 2020-08-27 华南理工大学 Generative adversarial mechanism and attention mechanism-based standard face generation method
CN112184654A (en) * 2020-09-24 2021-01-05 上海电力大学 High-voltage line insulator defect detection method based on generation countermeasure network
CN112435221A (en) * 2020-11-10 2021-03-02 东南大学 Image anomaly detection method based on generative confrontation network model

Non-Patent Citations (2)

Title
孙旭日;李延真;彭博;李晓悦;周超群;: "基于生成对抗网络和深度残差神经网络的变电站异物检测", 电网与清洁能源, no. 09, pages 72 - 79 *
张焕坤;李军毅;张斌;: "基于改进型YOLO v3的绝缘子异物检测方法", 中国电力, no. 02, pages 54 - 60 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination