CN114612468A - Equipment external defect detection method based on positive sample - Google Patents

Equipment external defect detection method based on positive sample

Info

Publication number
CN114612468A
CN114612468A (application CN202210496827.0A)
Authority
CN
China
Prior art keywords
equipment
network model
external
image
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210496827.0A
Other languages
Chinese (zh)
Other versions
CN114612468B (en)
Inventor
孙自伟
华泽玺
陈玉洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Durui Sensing Technology Co ltd
Southwest Jiaotong University
Original Assignee
Sichuan Durui Sensing Technology Co ltd
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Durui Sensing Technology Co ltd, Southwest Jiaotong University filed Critical Sichuan Durui Sensing Technology Co ltd
Priority to CN202210496827.0A
Publication of CN114612468A
Application granted
Publication of CN114612468B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G06N 3/045 - Combinations of networks
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G06T 7/11 - Region-based segmentation
    • G06T 7/13 - Edge detection
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30164 - Workpiece; Machine component

Abstract

The invention relates to a positive-sample-based method for detecting external defects of equipment, comprising the following steps: training an instance segmentation network model and a generative adversarial network (GAN) model; inputting an image of the equipment to be detected into the trained instance segmentation network model to obtain an equipment exterior image, and fusing it with an edge detection algorithm to obtain a complete equipment exterior image; inputting the complete equipment exterior image into the trained generator of the GAN model to obtain a corresponding defect-free equipment exterior image; and computing the distance between the complete equipment exterior image obtained in step S3 and the correspondingly generated defect-free equipment exterior image with the Fréchet Markov Distance (FMD) algorithm, thereby determining whether the image of the equipment to be detected contains a defect and locating it. The invention automatically detects whether the exterior of the equipment has defects, provides technical support for automatic inspection of the exterior defects of unattended equipment, and raises the degree of intelligence of unattended operation.

Description

Equipment external defect detection method based on positive sample
Technical Field
The invention relates to the technical field of equipment external defect detection, in particular to an equipment external defect detection method based on a positive sample.
Background
Most existing methods for detecting equipment surface defects target industrial products, and there are few solutions for detecting the exterior defects of large-scale equipment. The existing methods for detecting exterior defects of equipment mainly fall into the following two categories:
(1) Conventional image processing algorithms. These methods detect exterior defects by analyzing hand-crafted characteristics of the defects, such as color and texture features. They are strongly affected by illumination, complex backgrounds and other environmental factors, and therefore lack robustness.
(2) Deep-learning-based object detection. These methods train an object detection network with labeled defect data to detect exterior defects of equipment. For example, publication CN 113724233 A, "Method for detecting defects in an appearance image of a power transformation device based on fusion data generation and transfer learning technology", inputs defect images together with normal images into a power transformation device appearance-defect detection model, trains the model, inputs the image to be detected into the trained model, and finally outputs the detection result. That document also uses defect images as the training set; such methods require a large amount of equipment-appearance defect data, yet in practice the defect types are numerous and the data are scarce, so they cannot detect equipment appearance defects accurately.
Disclosure of Invention
The invention aims to improve the precision and efficiency of equipment exterior defect detection while replacing manual inspection of equipment exteriors, and provides an equipment exterior defect detection method based on a positive sample.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
a positive sample-based device external defect detection method comprises the following steps:
step S1, inputting a plurality of device external images without defects and with backgrounds and device external instance segmentation labels corresponding to the device external images into an instance segmentation network model training frame for training, thereby obtaining a trained instance segmentation network model;
step S2, inputting a plurality of equipment external images without defects and backgrounds into a generation confrontation network model training frame for training, thereby obtaining a trained generation confrontation network model;
step S3, inputting the image of the equipment to be detected into the trained example segmentation network model to obtain an equipment external image, and fusing an edge detection algorithm to obtain a complete equipment external image;
step S4: inputting the complete equipment external graph into a trained generator for generating a confrontation network model to obtain a corresponding non-defective equipment external graph; and calculating the distance difference value of the complete equipment external image obtained in the step S3 and the correspondingly generated non-defective equipment external image by using a Frecher Markov distance algorithm, thereby determining and positioning the defect position of the image of the equipment to be detected.
In this scheme, defect detection is divided into a complete-exterior extraction stage and a defect detection stage, and the models must be trained before these two application stages: steps S1 and S2 are the training process, step S3 is the complete-exterior extraction stage, and step S4 is the defect detection stage, which finally yields the defect detection result for the image of the equipment to be detected.
Step S1 specifically comprises the following steps (a training-loop sketch is given after these steps):
Step S1-1: collect a number of defect-free, background-containing equipment exterior images and label the equipment exterior instance segmentation labels with labeling software; an equipment exterior instance segmentation label comprises the pixels of the equipment exterior;
Step S1-2: input the defect-free, background-containing equipment exterior images into the instance segmentation network model to obtain the instance segmentation output, input this output together with the corresponding labeled instance segmentation labels into an instance segmentation loss function to obtain a loss value, back-propagate the loss value, adjust the weight parameters of the instance segmentation network model with a gradient descent optimization algorithm, and thus train the instance segmentation network model;
Step S1-3: after training reaches the set number of steps or the loss converges, fix the weights of the instance segmentation network model, thereby obtaining the trained instance segmentation network model.
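For concreteness, the following is a minimal PyTorch-style sketch of the training loop described in steps S1-2 and S1-3. The choice of Mask R-CNN as the instance segmentation network, the data loader, and all hyperparameters are assumptions for illustration; the patent does not prescribe a specific architecture.

```python
# Sketch of steps S1-2 / S1-3: train the instance segmentation network on
# set A (defect-free, background-containing exterior images) and set B
# (their instance segmentation labels). Mask R-CNN, the optimizer and all
# hyperparameters are assumptions; the patent names no specific network.
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

# Two classes: background and equipment exterior.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_instance_segmentation(loader, max_steps=10_000):
    """loader yields (images, targets); each target holds boxes, labels, masks."""
    model.train()
    for step, (images, targets) in enumerate(loader):
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)     # instance segmentation loss terms
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()                        # back-propagate the loss value
        optimizer.step()                       # gradient-descent weight update
        if step + 1 >= max_steps:              # stop at the set step count
            break
    for p in model.parameters():               # fix the trained weights (S1-3)
        p.requires_grad = False
    return model
```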
Step S2 specifically comprises the following steps (a generator sketch is given after these steps):
Step S2-1: collect a number of defect-free, background-free equipment exterior images;
Step S2-2: the GAN model comprises a generator and a discriminator, and the generator consists of an encoder, a convolutional long short-term memory (ConvLSTM) network and a decoder; input the defect-free, background-free equipment exterior images into the encoder of the generator to extract a feature matrix, feed the feature matrix into the ConvLSTM network of the generator for storage and retrieval, pass it on to the decoder of the generator, and let the decoder generate a defect-free equipment exterior image from the feature matrix;
Step S2-3: label the defect-free, background-free equipment exterior images as positive samples and the defect-free equipment exterior images generated in step S2-2 as negative samples; input sample pictures, i.e. positive or negative samples together with their positive or negative labels, into the network model of the discriminator to obtain classification predictions; input each classification prediction and the label of the corresponding sample into a classification loss function to obtain a loss value, back-propagate the loss value, adjust the weight parameters of the discriminator network model with a gradient descent optimization algorithm, and train the discriminator network model so as to improve its ability to distinguish whether a defect-free equipment exterior image was generated by the generator;
Step S2-4: after training reaches the set number of steps or the loss converges, fix the weights of the discriminator network model, thereby obtaining the trained discriminator network model;
Step S2-5: input the defect-free, background-free equipment exterior images into the network model of the generator to generate defect-free equipment exterior images, and label them as positive samples; input the generated defect-free equipment exterior images into the trained discriminator network model whose weights were fixed in step S2-4 to obtain classification predictions, input each classification prediction together with the positive-sample label into the classification loss function to obtain a loss value, back-propagate the loss value, adjust the weight parameters of the generator network model with a gradient descent optimization algorithm, and train the generator network model, so as to improve the ability of the encoder and the ConvLSTM network in the generator to extract and store the features of defect-free equipment exterior images, and the ability of the decoder in the generator to generate and restore defect-free equipment exterior images;
Step S2-6: after training reaches the set number of steps or the loss converges, fix the weights of the generator network model, thereby obtaining the trained generator network model;
Step S2-7: repeat steps S2-2 to S2-6 until the generator can produce samples that the discriminator cannot distinguish from positive samples, at which point the trained GAN model is obtained.
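The following is a minimal sketch of the generator structure named in step S2-2 (encoder, ConvLSTM, decoder). PyTorch has no built-in ConvLSTM, so a small cell is written out; channel counts, kernel sizes and the input resolution are illustrative assumptions, not the patented configuration.

```python
# Sketch of the generator in step S2-2: encoder -> ConvLSTM -> decoder.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell; its cell state stores the encoded feature
    matrix before the hidden state is handed to the decoder."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class Generator(nn.Module):
    """Encoder (feature extraction) -> ConvLSTM (feature storage) -> decoder
    (restoration of a defect-free equipment exterior image)."""
    def __init__(self, hid_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, hid_ch, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.convlstm = ConvLSTMCell(hid_ch, hid_ch)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hid_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x, state=None):
        feat = self.encoder(x)                      # feature matrix
        if state is None:
            zeros = torch.zeros_like(feat)
            state = (zeros, zeros.clone())
        h, c = self.convlstm(feat, state)           # store / extract features
        return self.decoder(h), (h, c)              # defect-free reconstruction

# Example: reconstruction, state = Generator()(torch.randn(4, 3, 256, 256))
```

In this sketch the cell state plays the role the patent assigns to the ConvLSTM: it holds the encoded features between the encoder and the decoder.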
Step S3 specifically comprises the following steps (a fusion sketch is given after these steps):
Step S3-1: input the image of the equipment to be detected into the trained instance segmentation network model to obtain an equipment exterior image; at the same time, process the edges of the image of the equipment to be detected with an edge detection algorithm to obtain an equipment edge map;
Step S3-2: fuse the equipment exterior image with the equipment edge map to obtain a complete equipment exterior image.
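As an illustration of the fusion in steps S3-1 and S3-2, the sketch below combines a segmentation mask with a Canny edge map using OpenCV. Canny, the morphological closing and the thresholds are assumed choices; the patent only refers to "an edge detection algorithm".

```python
# Sketch of steps S3-1 / S3-2: fuse the instance segmentation mask with an
# edge map to obtain a smooth, complete equipment exterior.
import cv2
import numpy as np

def extract_complete_exterior(image_bgr, device_mask):
    """image_bgr: HxWx3 uint8 image to be detected; device_mask: HxW uint8
    {0, 255} mask predicted by the instance segmentation network."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                  # equipment edge map

    # Fuse mask and edge map, then close small gaps so the exterior boundary
    # becomes smooth and complete.
    fused = cv2.bitwise_or(device_mask, edges)
    fused = cv2.morphologyEx(fused, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(fused, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    complete = np.zeros_like(device_mask)
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        cv2.drawContours(complete, [biggest], -1, 255, thickness=-1)
    # Cut the complete exterior out of the original image.
    return cv2.bitwise_and(image_bgr, image_bgr, mask=complete)
```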
Step S4 specifically comprises the following steps (a scoring sketch is given after these steps):
Step S4-1: input the complete equipment exterior image into the trained generator; the encoder and the ConvLSTM network in the generator extract the corresponding defect-free exterior feature matrix, and the decoder in the generator generates and restores the corresponding defect-free equipment exterior image from the extracted feature matrix;
Step S4-2: input the complete equipment exterior image and the correspondingly generated defect-free equipment exterior image into the trained discriminator with its fully connected layer removed, to obtain their respective feature vectors;
Step S4-3: compute the distance between the feature vectors with the Fréchet Markov Distance algorithm and judge whether it exceeds a preset threshold; if it does, the image of the equipment to be detected is judged to contain a defect, and the defect position is determined from the image difference between the complete equipment exterior image and the correspondingly generated defect-free equipment exterior image; if not, the image of the equipment to be detected is judged defect-free.
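The patent names a Fréchet Markov Distance (FMD) without giving its formula. As an assumed stand-in, the sketch below computes the Fréchet distance between Gaussians fitted to the two discriminator feature sets (in the spirit of the Fréchet Inception Distance) and localizes the defect by image differencing; the threshold values are illustrative.

```python
# Sketch of steps S4-2 / S4-3 under the stated assumptions.
import numpy as np
from scipy import linalg

def frechet_distance(feat_real, feat_gen):
    """feat_*: (N, D) arrays of discriminator features (fully connected
    layer removed), e.g. one row per spatial position of the feature map."""
    mu1, mu2 = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    s1 = np.cov(feat_real, rowvar=False)
    s2 = np.cov(feat_gen, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))

def detect_defect(feat_real, feat_gen, img_real, img_gen, threshold=10.0):
    """img_*: uint8 images; returns (is_defective, defect_map or None)."""
    if frechet_distance(feat_real, feat_gen) <= threshold:
        return False, None                             # judged defect-free
    diff = np.abs(img_real.astype(np.float32) - img_gen.astype(np.float32))
    defect_map = (diff.mean(axis=-1) > 30).astype(np.uint8)  # localize by image difference
    return True, defect_map
```

In practice the distance threshold would be chosen so that defect-free validation images fall below it.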
Compared with the prior art, the invention has the following beneficial effects:
(1) The defect detection process comprises a complete-exterior extraction stage and a defect detection stage. In the extraction stage, the complete equipment exterior image of the image to be detected is extracted by fusing the instance segmentation network model with an edge detection operator, which removes the interference of complex backgrounds. In the defect detection stage, the complete equipment exterior image is input into the generative adversarial network model to generate a defect-free equipment exterior image, and the distance between the two images is computed to judge whether the image to be detected contains a defect and to locate it.
(2) The instance segmentation network model is fused with an edge detection operator in the complete-exterior extraction stage because the edge of the exterior image produced by instance segmentation alone is generally not smooth; after the edge detection operator is fused in, the edge of the exterior image becomes smooth and complete.
(3) The invention uses computer vision to automatically detect whether the exterior of the equipment has defects, provides technical support for automatic inspection of the exterior defects of unattended equipment, and raises the degree of intelligence of unattended operation.
Drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the invention and should therefore not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 is a flow chart of the defect detection method of the invention;
FIG. 2 is a flow chart of the complete-exterior extraction stage of the invention;
FIG. 3 is a flow chart of the defect detection stage of the invention;
FIG. 4 is a flow chart of the training of the instance segmentation network model of the invention;
FIG. 5 is a flow chart of the generator of the invention generating a defect-free equipment exterior image;
FIG. 6 is a flow chart of the training of the discriminator of the invention;
FIG. 7 is a flow chart of the training of the generator of the invention.
Detailed Description
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the drawings; the described embodiments are only a part of the embodiments of the invention, not all of them. The components of the embodiments generally described and illustrated in the figures may be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
It should be noted that like reference numbers and letters refer to like items in the figures, so once an item is defined in one figure it need not be defined or explained again in subsequent figures. In the description of the invention, terms such as "set A" and "set B" are used only to distinguish the descriptions and are not to be construed as indicating or implying any actual relationship or order between the entities or operations.
Embodiment:
The invention is realized by the following technical scheme. The positive-sample-based equipment exterior defect detection method comprises two application stages, as shown in FIG. 1 and FIG. 2: a complete-exterior extraction stage and a defect detection stage. In the complete-exterior extraction stage, the complete equipment exterior image of the image to be detected is extracted with the instance segmentation network model; in the defect detection stage, the complete exterior image is input into the generative adversarial network model to generate a corresponding defect-free equipment image, and the distance between the complete exterior image and the corresponding defect-free image is computed with the Fréchet Markov Distance (FMD) algorithm, thereby judging whether the image of the equipment to be detected contains a defect and determining its position.
Step S1: input a number of defect-free, background-containing equipment exterior images together with their corresponding instance segmentation labels into an instance segmentation network training framework for training, thereby obtaining a trained instance segmentation network model.
Before the two application stages, the instance segmentation network model and the generative adversarial network model must be trained. FIG. 4 shows the training process of the instance segmentation network model:
First, collect a large number of defect-free, background-containing equipment exterior images, denoted set A. Label set A with labeling software; the labeled equipment exterior instance segmentation labels form set B, and each label comprises, but is not limited to, the pixels of the equipment exterior.
Then input set A into the instance segmentation network model to obtain the instance segmentation output, input this output together with set B into the instance segmentation loss function to obtain a loss value, back-propagate the loss value, adjust the weight parameters of the instance segmentation network model with a gradient descent optimization algorithm, and train the model. After training reaches the set number of steps or the loss converges, fix the weights of the instance segmentation network model, thereby obtaining the trained instance segmentation network model.
Step S2: input a number of defect-free, background-free equipment exterior images into the generative adversarial network training framework for training, thereby obtaining a trained generative adversarial network model.
Referring to FIG. 6, the generative adversarial network model consists of a generator and a discriminator. The generator is composed of an encoder (a convolutional feature extraction network), a convolutional long short-term memory (ConvLSTM) network and a decoder; the discriminator is a simple classifier. A conventional generator has only a decoder, i.e. a deconvolution network; this scheme improves the adversarial network model by adding an encoder that acts as a convolutional feature extractor and a ConvLSTM network that stores and retrieves the extracted features.
Before training, referring to FIG. 5, collect a number of defect-free, background-free equipment exterior images, denoted set C. Input set C into the encoder of the generator to extract feature matrices; the ConvLSTM network reads the feature matrices and stores the related information in its cell state; the cell state is then passed to the decoder, which generates defect-free equipment exterior images from the features, denoted set D.
Because the generator has not yet been trained, the generated set D is of poor quality. Label set C as positive samples and set D as negative samples, and input sample pictures, i.e. positive or negative samples with their labels, into the discriminator. FIG. 6 shows the training process of the discriminator: a sample picture is input into the discriminator network model to obtain a classification prediction; the prediction and the label of the corresponding sample are input into a classification loss function to obtain a loss value; the loss value is back-propagated, the weight parameters of the discriminator are adjusted with a gradient descent optimization algorithm, and the discriminator network model is trained so as to improve its ability to tell whether an image in set D was generated by the generator. After training reaches the set number of steps or the loss converges, fix the weights of the discriminator network model, thereby obtaining the trained discriminator network model.
Note that "network model of the discriminator" emphasizes the structure of the discriminator, while "discriminator" emphasizes its function: during training it is the structure that is trained, and during application it is the function that is used.
Next, referring to FIG. 7, the generator is trained; this is also the adversarial part of training the network. Input set C into the generator network model to generate set D and label set D as positive samples; input set D into the trained discriminator network model with fixed weights to obtain classification predictions; input each prediction together with the positive-sample label into the classification loss function to obtain a loss value; back-propagate the loss value, adjust the weight parameters of the generator network model with a gradient descent optimization algorithm, and train the generator network model, so as to improve the ability of the encoder and the ConvLSTM network in the generator to extract and store the features of the input exterior images and the ability of the decoder in the generator to generate and restore set D. After training reaches the set number of steps or the loss converges, fix the weights of the generator network model, thereby obtaining the trained generator network model.
Similarly, "network model of the generator" emphasizes the structure of the generator, while "generator" emphasizes its function: the structure is what is trained and the function is what is applied.
The training of the discriminator and the generator is cycled in this way until the generator can produce samples that the discriminator cannot distinguish from positive samples, at which point the trained generative adversarial network model is obtained (a sketch of this alternating training is given below).
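A minimal sketch of this alternating training, assuming the encoder/ConvLSTM/decoder generator sketched after step S2-7 and a discriminator that returns one logit per image (see the sketch further below); Adam, the learning rate and the per-batch alternation are illustrative simplifications of the fixed-weight phases described above.

```python
# Sketch of the alternating adversarial training described above.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def train_gan(generator, discriminator, loader, epochs=100, lr=2e-4):
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(epochs):
        for real in loader:                    # set C: defect-free exteriors
            fake, _ = generator(real)          # set D: generated exteriors
            ones = torch.ones(real.size(0), 1)
            zeros = torch.zeros(real.size(0), 1)

            # Discriminator step: real images labeled 1, generated labeled 0.
            d_loss = bce(discriminator(real), ones) + \
                     bce(discriminator(fake.detach()), zeros)
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator step: discriminator fixed, generated images labeled 1.
            g_loss = bce(discriminator(fake), ones)
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return generator, discriminator
```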
Step S3: input the image of the equipment to be detected into the trained instance segmentation network model to obtain an equipment exterior image, and fuse it with an edge detection algorithm to obtain a complete equipment exterior image.
Steps S1 and S2 train the instance segmentation network model and the generative adversarial network model before the two application stages. After model training is complete, referring to FIG. 2, collect the images of the equipment to be detected, denoted set E, and input set E into the trained instance segmentation network model to obtain the equipment exterior images, denoted set F. At the same time, process the edges of the images to be detected with an edge detection algorithm to obtain the equipment edge maps, denoted set G.
Fuse set F with set G to obtain the complete equipment exterior images, denoted set H. Step S3 is the complete-exterior extraction stage.
Step S4: input the complete equipment exterior image into the trained generator of the generative adversarial network model to obtain a corresponding defect-free equipment exterior image; compute the distance between the complete equipment exterior image obtained in step S3 and the correspondingly generated defect-free exterior image with the Fréchet Markov Distance algorithm, thereby determining whether the image of the equipment to be detected contains a defect and locating it.
Referring to FIG. 3, input set H into the trained generator; the encoder and the ConvLSTM network in the generator extract the corresponding defect-free exterior feature matrices, and the decoder in the generator generates and restores the corresponding defect-free equipment exterior images from these feature matrices; the restored images are denoted set I.
Input set H and set I into the trained discriminator with its fully connected layer removed to obtain the feature vectors of set H and of set I respectively. The discriminator is designed to output a classification prediction when the fully connected layer is present and a feature vector (or feature matrix) when it is removed, as in the sketch below.
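A minimal sketch of this dual use of the discriminator: with its fully connected head it returns a classification logit, without it a feature vector. Layer widths are illustrative assumptions.

```python
# Sketch of the discriminator's two outputs: classification logit vs. feature vector.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # pooled feature vector
        )
        self.fc = nn.Linear(64, 1)                      # classification logit

    def forward(self, x, return_features=False):
        f = self.features(x)
        return f if return_features else self.fc(f)

# Feature vectors for the FMD comparison of set H and set I:
# feat_h = disc(batch_h, return_features=True)
# feat_i = disc(batch_i, return_features=True)
```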
Finally, compute the distance between the feature vectors with the Fréchet Markov Distance (FMD) algorithm and judge whether it exceeds a preset threshold. If it does, the image of the equipment to be detected is judged to contain a defect, and the defect position is determined from the image difference between set H (the complete equipment exterior images) and set I (the correspondingly generated defect-free exterior images); if not, the image of the equipment to be detected is judged defect-free. Step S4 is the defect detection stage.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. An equipment exterior defect detection method based on a positive sample, characterized in that the method comprises the following steps:
step S1, inputting a number of defect-free, background-containing equipment exterior images together with their corresponding equipment exterior instance segmentation labels into an instance segmentation network training framework for training, thereby obtaining a trained instance segmentation network model;
step S2, inputting a number of defect-free, background-free equipment exterior images into a generative adversarial network training framework for training, thereby obtaining a trained generative adversarial network model;
step S3, inputting the image of the equipment to be detected into the trained instance segmentation network model to obtain an equipment exterior image, and fusing it with an edge detection algorithm to obtain a complete equipment exterior image;
step S4, inputting the complete equipment exterior image into the trained generator of the generative adversarial network model to obtain a corresponding defect-free equipment exterior image, and computing the distance between the complete equipment exterior image obtained in step S3 and the correspondingly generated defect-free equipment exterior image with the Fréchet Markov Distance algorithm, thereby determining whether the image of the equipment to be detected contains a defect and locating it.
2. The equipment exterior defect detection method based on a positive sample according to claim 1, characterized in that step S1 specifically comprises the following steps:
step S1-1, collecting a number of defect-free, background-containing equipment exterior images and labeling the equipment exterior instance segmentation labels with labeling software, wherein an equipment exterior instance segmentation label comprises the pixels of the equipment exterior;
step S1-2, inputting the defect-free, background-containing equipment exterior images into the instance segmentation network model to obtain the instance segmentation output, inputting this output together with the corresponding labeled instance segmentation labels into an instance segmentation loss function to obtain a loss value, back-propagating the loss value, adjusting the weight parameters of the instance segmentation network model with a gradient descent optimization algorithm, and training the instance segmentation network model;
step S1-3, after training reaches the set number of steps or the loss converges, fixing the weights of the instance segmentation network model, thereby obtaining the trained instance segmentation network model.
3. The equipment exterior defect detection method based on a positive sample according to claim 1, characterized in that step S2 specifically comprises the following steps:
step S2-1, collecting a number of defect-free, background-free equipment exterior images;
step S2-2, the generative adversarial network model comprising a generator and a discriminator, the generator consisting of an encoder, a convolutional long short-term memory (ConvLSTM) network and a decoder; inputting the defect-free, background-free equipment exterior images into the encoder of the generator to extract a feature matrix, feeding the feature matrix into the ConvLSTM network of the generator for storage and retrieval, passing it to the decoder of the generator, and letting the decoder generate a defect-free equipment exterior image from the feature matrix;
step S2-3, labeling the defect-free, background-free equipment exterior images as positive samples and the defect-free equipment exterior images generated in step S2-2 as negative samples; inputting sample pictures, i.e. positive or negative samples together with their positive or negative labels, into the network model of the discriminator to obtain classification predictions; inputting each classification prediction and the label of the corresponding sample into a classification loss function to obtain a loss value, back-propagating the loss value, adjusting the weight parameters of the discriminator network model with a gradient descent optimization algorithm, and training the discriminator network model so as to improve its ability to distinguish whether a defect-free equipment exterior image was generated by the generator;
step S2-4, after training reaches the set number of steps or the loss converges, fixing the weights of the discriminator network model, thereby obtaining the trained discriminator network model;
step S2-5, inputting the defect-free, background-free equipment exterior images into the network model of the generator to generate defect-free equipment exterior images and labeling them as positive samples; inputting the generated defect-free equipment exterior images into the trained discriminator network model whose weights were fixed in step S2-4 to obtain classification predictions, inputting each classification prediction together with the positive-sample label into the classification loss function to obtain a loss value, back-propagating the loss value, adjusting the weight parameters of the generator network model with a gradient descent optimization algorithm, and training the generator network model, so as to improve the ability of the encoder and the ConvLSTM network in the generator to extract and store the features of defect-free equipment exterior images and the ability of the decoder in the generator to generate and restore defect-free equipment exterior images;
step S2-6, after training reaches the set number of steps or the loss converges, fixing the weights of the generator network model, thereby obtaining the trained generator network model;
step S2-7, repeating steps S2-2 to S2-6 until the generator can produce samples that the discriminator cannot distinguish from positive samples, thereby obtaining the trained generative adversarial network model.
4. The equipment exterior defect detection method based on a positive sample according to claim 1, characterized in that step S3 specifically comprises the following steps:
step S3-1, inputting the image of the equipment to be detected into the trained instance segmentation network model to obtain an equipment exterior image, and at the same time processing the edges of the image of the equipment to be detected with an edge detection algorithm to obtain an equipment edge map;
step S3-2, fusing the equipment exterior image with the equipment edge map to obtain a complete equipment exterior image.
5. The equipment exterior defect detection method based on a positive sample according to claim 3, characterized in that step S4 specifically comprises the following steps:
step S4-1, inputting the complete equipment exterior image into the trained generator, the encoder and the ConvLSTM network in the generator extracting the corresponding defect-free exterior feature matrix, and the decoder in the generator generating and restoring the corresponding defect-free equipment exterior image from the extracted feature matrix;
step S4-2, inputting the complete equipment exterior image and the correspondingly generated defect-free equipment exterior image into the trained discriminator with its fully connected layer removed, to obtain their respective feature vectors;
step S4-3, computing the distance between the feature vectors with the Fréchet Markov Distance algorithm and judging whether it exceeds a preset threshold; if it does, judging that the image of the equipment to be detected has a defect and determining the defect position from the image difference between the complete equipment exterior image and the correspondingly generated defect-free equipment exterior image; if not, judging that the image of the equipment to be detected has no defect.
CN202210496827.0A 2022-05-09 2022-05-09 Equipment external defect detection method based on positive sample Active CN114612468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210496827.0A CN114612468B (en) 2022-05-09 2022-05-09 Equipment external defect detection method based on positive sample

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210496827.0A CN114612468B (en) 2022-05-09 2022-05-09 Equipment external defect detection method based on positive sample

Publications (2)

Publication Number Publication Date
CN114612468A true CN114612468A (en) 2022-06-10
CN114612468B CN114612468B (en) 2022-07-15

Family

ID=81869354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210496827.0A Active CN114612468B (en) 2022-05-09 2022-05-09 Equipment external defect detection method based on positive sample

Country Status (1)

Country Link
CN (1) CN114612468B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0880023A1 (en) * 1997-05-23 1998-11-25 Siemag Transplan Gmbh Method and device for the automatic detection of surface faults during the continuous mechanical removal of material from casted products
US20020054293A1 (en) * 2000-04-18 2002-05-09 Pang Kwok-Hung Grantham Method of and device for inspecting images to detect defects
EP3392612A1 (en) * 2015-12-14 2018-10-24 Nikon-Trimble Co., Ltd. Defect detection apparatus and program
CN108734690A (en) * 2018-03-02 2018-11-02 苏州汉特士视觉科技有限公司 A kind of defects of vision detection device and its detection method
CN109242841A (en) * 2018-08-30 2019-01-18 广东工业大学 A kind of transmission tower defect inspection method based on generation confrontation network
CN109377483A (en) * 2018-09-30 2019-02-22 云南电网有限责任公司普洱供电局 Porcelain insulator crack detecting method and device
CN110866915A (en) * 2019-11-22 2020-03-06 郑州智利信信息技术有限公司 Circular inkstone quality detection method based on metric learning
CN111179253A (en) * 2019-12-30 2020-05-19 歌尔股份有限公司 Product defect detection method, device and system
CN112614125A (en) * 2020-12-30 2021-04-06 湖南科技大学 Mobile phone glass defect detection method and device, computer equipment and storage medium
CN113011480A (en) * 2021-03-09 2021-06-22 华南理工大学 Cambered surface defect image generation method based on cyclic generation countermeasure network
CN113436169A (en) * 2021-06-25 2021-09-24 东北大学 Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578359A (en) * 2022-10-21 2023-01-06 千巡科技(深圳)有限公司 Few-sample defect detection method, system and device based on generation of countermeasure network and defect-free image measurement and storage medium

Also Published As

Publication number Publication date
CN114612468B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN111444939B (en) Small-scale equipment component detection method based on weak supervision cooperative learning in open scene of power field
CN114022420A (en) Detection method for automatically identifying defects of photovoltaic cell EL assembly
CN117593304B (en) Semi-supervised industrial product surface defect detection method based on cross local global features
CN114612468B (en) Equipment external defect detection method based on positive sample
CN113780423A (en) Single-stage target detection neural network based on multi-scale fusion and industrial product surface defect detection model
CN114972316A (en) Battery case end surface defect real-time detection method based on improved YOLOv5
CN116071315A (en) Product visual defect detection method and system based on machine vision
CN116071294A (en) Optical fiber surface defect detection method and device
CN111507398A (en) Transformer substation metal instrument corrosion identification method based on target detection
Wei et al. Artificial intelligence for defect detection in infrared images of solid oxide fuel cells
CN116579616B (en) Risk identification method based on deep learning
CN113052103A (en) Electrical equipment defect detection method and device based on neural network
CN116030050A (en) On-line detection and segmentation method for surface defects of fan based on unmanned aerial vehicle and deep learning
CN116503354A (en) Method and device for detecting and evaluating hot spots of photovoltaic cells based on multi-mode fusion
Bhutta et al. Smart-inspect: micro scale localization and classification of smartphone glass defects for industrial automation
CN116228637A (en) Electronic component defect identification method and device based on multi-task multi-size network
CN115100546A (en) Mobile-based small target defect identification method and system for power equipment
CN114463686A (en) Moving target detection method and system based on complex background
Zhang et al. Defect detection of bottled liquor based on deep learning
CN112179846A (en) Prefabricated convex window defect detection system based on improved Faster R-CNN
CN104866825B (en) A kind of sign language video frame sequence classification method based on Hu square
Devereux et al. Automated object detection for visual inspection of nuclear reactor cores
Gao et al. Intelligent appearance quality detection of air conditioner external unit and dataset construction
CN113792630B (en) Method and system for identifying extraterrestrial detection image based on contrast learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant