CN115018790A - Workpiece surface defect detection method based on anomaly detection - Google Patents

Workpiece surface defect detection method based on anomaly detection

Info

Publication number
CN115018790A
CN115018790A
Authority
CN
China
Prior art keywords
workpiece
feature
image
detection
defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210634714.2A
Other languages
Chinese (zh)
Inventor
王素琴
任琪
石敏
朱登明
杜昊晨
程成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Electric Power University
Priority to CN202210634714.2A
Publication of CN115018790A
Legal status: Pending (Current)

Classifications

    • G06T7/0004 Industrial image inspection
    • G06N3/08 Neural networks; Learning methods
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T5/70 Denoising; Smoothing
    • G06T7/0008 Industrial image inspection checking presence/absence
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V10/40 Extraction of image or video features
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a workpiece surface defect detection method based on anomaly detection, which comprises the following steps. Step 1: study the surface characteristics of workpieces actually produced on a production line, determine the standard for a qualified workpiece, and judge whether a workpiece is qualified according to that standard. Step 2: acquire workpiece surface images with a specific light source and an industrial camera, and build a data set after cropping, filtering and similar preprocessing. Step 3: construct a workpiece surface defect detection model on the data set from step 2 based on anomaly detection technology; a convolutional neural network extracts features of the qualified-workpiece surface images at different sizes and scales, and the surface image of a qualified workpiece is reconstructed by combining a multi-scale region feature generator with a masked autoencoder. Because the model never learns the surface images of unqualified workpieces, the feature distribution of an unqualified surface image differs greatly from the distribution the model has learned; by comparing the images before and after reconstruction, the detection model can detect whether defects exist in a surface image and locate the defect position. The method solves the problems that manual visual inspection in industrial manufacturing is inefficient, unstable and costly, that traditional image-processing detection methods generalize poorly, and that deep-learning-based object detection or segmentation requires a large amount of labeled defect data.

Description

Workpiece surface defect detection method based on anomaly detection
Technical Field
The invention belongs to the technical field of surface defect detection, and particularly relates to a workpiece surface defect detection method based on anomaly detection.
Background
With the development of industrial production technology, the modern manufacturing industry places increasingly strict requirements on the quality of produced workpieces. If workpiece quality is not strictly controlled, product performance is affected, normal use becomes impossible, and safety hazards may remain that can lead to accidents; inspection of workpiece surface quality is therefore very important. Manual visual inspection is widely used for surface defect detection, and experienced inspectors can accurately judge whether a workpiece is normal or abnormal. However, when human inspectors examine every product on a production line, three key limitations arise: 1) different inspectors apply different standards to the workpieces, making it difficult to maintain consistent results and stable detection accuracy; 2) skilled inspectors require high labor costs; 3) manual inspection is physically demanding, and inspectors find it hard to stay focused for long periods. In view of these problems, machine-vision-based inspection offers high efficiency and has become a research focus in recent years.
In traditional machine vision inspection, a dedicated light source illuminates the workpiece, an industrial camera captures an image of the workpiece surface, and image processing algorithms such as filtering, gray-level transformation, threshold segmentation or feature matching are then applied to the captured image for defect detection. Traditional machine vision inspection runs fast and achieves high accuracy, so it is widely used in industrial production scenarios. However, images acquired in actual production are easily affected by objective factors such as uneven illumination and machine vibration, image quality is difficult to keep consistent, and the reliability of the detection results is therefore low. In addition, traditional machine vision defect detection methods are usually designed for one particular type of workpiece, are hard to reuse for other workpiece types, and generalize poorly. Deep-learning-based defect detection methods learn the characteristics of a large number of defect samples and use them for classification and localization; they offer high detection accuracy and strong applicability and are increasingly applied to surface defect detection tasks. Supervised detection methods, however, need a large number of defect samples during training, while in actual manufacturing defective products rarely appear and are difficult to collect in advance, so the training set contains few defect samples. The resulting imbalance between positive and negative samples in the training set may also degrade the performance of the trained model. With defect types being diverse and defect appearances hard to predict, supervised defect detection methods struggle to meet the detection requirements. Furthermore, acquiring and manually labeling data for the different defect types, as network training requires, is costly.
In recent years, anomaly detection has become an important direction in the development of surface defect detection and has broad prospects for industrial application. Anomaly detection is a technique for identifying abnormal data, or mining data that defies expected patterns, from a set of data; it matches the industrial reality of many normal samples and few abnormal samples, and it requires no labeling. In image inspection, anomaly detection first learns the feature information of normal target images and models its distribution; at test time it computes the difference between a test sample and the learned prior distribution to realize defect detection, and it is finally applied to actual measurement, inspection and control. Surface defect detection based on anomaly detection allows intelligent algorithms, such as deep learning algorithms like convolutional neural networks, to detect targets quickly and meet the requirements of surface defect detection; it can collect, analyze and transmit data and evaluate results, and it tends toward tight integration with automation technology. It is therefore highly meaningful to control workpiece surface quality with an intelligent, anomaly-detection-based defect detection technique.
Disclosure of Invention
In order to solve the above problems, the invention provides a workpiece surface defect detection method based on anomaly detection, which comprises the following steps:
1. A workpiece surface defect detection method based on anomaly detection, characterized by comprising the following steps:
Step 1: study the surface characteristics of workpieces actually produced on a production line, determine the standard for a qualified workpiece, and judge whether a workpiece is qualified according to that standard;
Step 2: acquire workpiece surface images with a specific light source and an industrial camera, and build a data set after cropping, filtering and similar preprocessing;
Step 3: construct a workpiece surface defect detection model on the data set produced in step 2 based on anomaly detection technology: a convolutional neural network extracts features of the qualified-workpiece surface images at different sizes and scales, and the surface image of a qualified workpiece is reconstructed by combining a multi-scale region feature generator with a masked autoencoder; because the model never learns the surface images of unqualified workpieces, the feature distribution of an unqualified surface image differs greatly from the distribution the model has learned, and by comparing the images before and after reconstruction the detection model can detect whether defects exist in the surface image and locate the defect position;
2. The method of claim 1, wherein step 1 comprises: manually screening the acquired workpiece surface data to build a workpiece surface data set, and dividing the workpiece surface images into normal samples and defect samples by comparing qualified workpieces with unqualified workpieces, wherein the normal samples are used to train the anomaly detection network and the defect samples are the samples on which defects are to be detected.
3. The method of claim 1, wherein a workpiece surface defect takes the following form: any region of the workpiece surface that is inconsistent with the surface of a qualified workpiece is regarded as a defect.
4. The method of claim 1, wherein step 2 comprises: capturing images of the workpiece surface with an industrial camera under illumination from a structured light source.
5. The method of claim 1, wherein the collected data are preprocessed by cropping, filtering and similar operations to construct a data set comprising a training set and a test set, wherein the training set contains only surface images of qualified workpieces, i.e. normal samples, and the test set contains both normal samples and abnormal samples from unqualified workpieces.
6. The method of claim 1, wherein the input size of the workpiece surface image is set to 500 x 500, the feature size generated by the region feature generator is 32 x 32, and the depth of the masked autoencoder is 6 layers.
7. The method of claim 1, wherein the feature extractor in step 3 is a VGG19 network composed of 5 blocks with 19 hidden layers, comprising 16 convolutional layers and 3 fully connected layers, with 3 x 3 convolution kernels and 2 x 2 max pooling used uniformly throughout the network.
The multi-scale region feature representation is a dense multi-scale region feature representation generated by a region feature generator from part of the convolutional layer outputs of the VGG19 network.
8. The method of claim 1, wherein the feature maps of different scales fed into the region feature generator are first resized to the input image size for alignment. The aligned feature maps are then spatially convolved with a mean filter at an appropriate stride; this convolution smooths the feature variation on the feature maps, improves the robustness of the generated features, and allows the feature map size to be controlled by adjusting the stride. Finally, the aggregated feature maps are concatenated into a multi-channel feature map.
9. The method of claim 1, wherein the features obtained in claim 8 are partitioned into blocks by a ViT-based masked autoencoder and sampled and masked with a random sampling strategy. The masked autoencoder performs deep feature reconstruction of the masked blocks, and defect regions on the workpiece surface are detected by comparing the features before and after reconstruction.
The invention has the beneficial effects that:
1. The problems of low speed, low precision and high labor cost of manual visual inspection are solved.
2. The problems that traditional image-processing detection methods generalize poorly and are easily limited by the production scene are solved.
3. The problem that supervised deep-learning detection methods require a large amount of labeled defect data is solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method for detecting surface defects of a workpiece based on anomaly detection according to the present invention;
FIG. 2 is an exemplary illustration of a workpiece surface image of the present invention;
FIG. 3 is a network structure diagram of the defect detection method of the present invention;
FIG. 4 is a diagram of a region feature generator for a defect detection method according to the present invention;
FIG. 5 is a diagram of the masked autoencoder of the defect detection method of the present invention;
FIG. 6 is a diagram illustrating a defect detection result according to the present invention.
Detailed Description
The invention provides a workpiece surface defect detection method based on anomaly detection, which comprises the following steps:
Step 1: study the surface characteristics of workpieces actually produced on a production line, determine the standard for a qualified workpiece, and judge whether a workpiece is qualified according to that standard;
Step 2: acquire workpiece surface images with a specific light source and an industrial camera, and build a data set after cropping, filtering and similar preprocessing;
Step 3: construct a workpiece surface defect detection model on the data set produced in step 2 based on anomaly detection technology. The model uses a convolutional neural network to extract features of the qualified-workpiece surface images at different sizes and scales and reconstructs the surface image of a qualified workpiece by combining a multi-scale region feature generator with a masked autoencoder. Because the model never learns the surface images of unqualified workpieces, the feature distribution of an unqualified surface image differs greatly from the distribution the model has learned; by comparing the images before and after reconstruction, the detection model can detect whether defects exist in the surface image and locate the defect position;
as shown in fig. 1, the method of the present invention specifically includes the following steps:
Step 1): The modern manufacturing industry places increasingly strict requirements on the surface quality of machined parts; surface quality affects not only the appearance and shape of a product but very possibly also its function, and poor quality causes serious losses to the enterprise. Therefore, to determine which regions of a workpiece surface image are abnormal, the surface characteristics of the workpiece must be analyzed. As shown in fig. 2, the surface of a qualified workpiece is entirely a uniform or textured background, whereas the surface of an unqualified workpiece contains regions inconsistent with that uniform or textured background; such regions may be darker or brighter than normal regions, and it is readily observed that they are not produced by the normal production process. Defect detection of a workpiece based on anomaly detection must therefore both determine whether the workpiece is qualified and locate the defect position.
Step 2-1): The defect regions in a workpiece surface image occupy a small proportion of the whole image and take many forms. To ensure that the quality of the acquired images meets the detection requirements, the workpiece surface images must be acquired with a specific data acquisition setup. The invention defines an acquisition setup based on an industrial camera: the workpiece is placed at the center of a stage with the surface to be inspected facing upward, a light source directly above the workpiece projects a plane of structured light, and the industrial camera, also mounted directly above the workpiece, captures the images.
Step 2-2): The original captured images have 3 channels; to reduce the amount of computation, they are converted from 3 channels to 1 channel by grayscale processing. In addition, factors such as background texture strongly affect the defect detection result, so mean filtering is applied to the converted grayscale image to remove the noise interference present in the background texture.
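A minimal sketch of this preprocessing with OpenCV is shown below; the function name and the 5 x 5 kernel size are illustrative assumptions, not values taken from the patent:

```python
import cv2

def preprocess(path, kernel_size=5):
    """Convert a captured 3-channel image to grayscale and apply mean
    filtering to suppress background-texture noise (step 2-2)."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)                # 3-channel BGR image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)            # reduce to 1 channel
    smoothed = cv2.blur(gray, (kernel_size, kernel_size))   # mean (box) filter
    return smoothed
```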
Step 3-1): The anomaly detection model used is a mask-DFR network, whose structure is shown in FIG. 3. The data set constructed in step 2-2 is fed into the network; a pre-trained VGG19 extracts image features of the input image at different scales, and the extracted multi-level feature maps are fed into a region feature generator and converted into a single, relatively large feature map, establishing a dense multi-scale region representation of the whole input image. The dense multi-scale region features are then randomly masked, and a deep autoencoder reconstructs the multi-scale feature representation. Finally, the reconstruction error between the images before and after reconstruction is computed to detect and segment abnormal regions: if the anomaly score of the reconstructed image exceeds a threshold set by the user from prior knowledge, the abnormal region is segmented. Before training, the network configuration file must be modified. Model parameters are updated and optimized with the Adam optimizer; the input image size is initially set to 256 x 256, learning_rate to 0.0001, Batch_size to 2, epoch to 200, the filtering stride to 4, the upsampling mode to bicubic sampling, Batch Normalization to True, the segmentation threshold to 0.5, and the fpr value used to estimate the segmentation threshold to 0.002. For the masked autoencoder performing the reconstruction task, the random feature mask ratio is set to 0.7, and the numbers of encoder and decoder layers are set to 6. Finally, the model is trained from the command line to obtain the workpiece surface defect detection model.
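For illustration, the hyper-parameters of this step could be gathered into a configuration object as sketched below; the class and field names are hypothetical, only the values come from the text:

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    # values taken from step 3-1; field names are illustrative
    image_size: int = 256
    learning_rate: float = 1e-4
    batch_size: int = 2
    epochs: int = 200
    filter_stride: int = 4          # stride of the mean-filter aggregation
    upsample_mode: str = "bicubic"
    batch_norm: bool = True
    seg_threshold: float = 0.5
    fpr_for_threshold: float = 0.002
    mask_ratio: float = 0.7         # random feature-mask ratio of the MAE
    mae_depth: int = 6              # encoder and decoder layers

cfg = TrainConfig()
# Optimizer setup (PyTorch), assuming `model` is the mask-DFR network:
# optimizer = torch.optim.Adam(model.parameters(), lr=cfg.learning_rate)
```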
Step 3-2): The feature extractor in the mask-DFR network is a pre-trained VGG19 network, which produces rich multi-level image features for an input image. Given a convolutional neural network with $L$ convolutional layers (each comprising convolution, batch normalization and an activation function) and an input image $x$ of height $h$, width $w$ and channel number $c$, the $L$ convolutional layers yield a set of output feature maps

$$\{\, f^{1}(x),\; f^{2}(x),\; \ldots,\; f^{L}(x) \,\},$$

where the $l$-th feature map $f^{l}(x)$ has size $h_{l} \times w_{l} \times c_{l}$. Since each feature map is produced by a network layer with a particular receptive field size, it encodes the input image at a certain level of abstraction. In particular, shallow convolutional layers have relatively small receptive fields, so they mainly capture low-level features such as texture. As the layers get deeper, the corresponding output feature maps see a larger receptive field and thus encode more global, higher-level information. The collection of feature maps therefore forms a rich hierarchical representation of the input image, from local details to global semantic information. The 16 convolutional layers of VGG19 and their receptive field sizes are listed in Table 1; as the hierarchy deepens, the receptive field grows from 3 to 252, producing 16 feature representations of the input image at different levels (a short code sketch of this multi-level extraction is given after Table 1).
TABLE 1: VGG19 convolutional layers and their receptive field sizes (the table is rendered as an image in the original publication; the receptive field grows from 3 for the first convolutional layer to 252 for the sixteenth).
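As an illustration of the multi-level feature extraction of step 3-2, the sketch below pulls intermediate activations from a pre-trained torchvision VGG19; the particular layer indices and the 256 x 256 input are assumptions for the example, not the exact configuration of the patent:

```python
import torch
import torchvision
from torchvision.models.feature_extraction import create_feature_extractor

# Pre-trained VGG19 used as a frozen multi-level feature extractor.
vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").eval()

# Hypothetical choice of layers: the ReLU after some convolutions in
# `vgg.features`; the indices follow the torchvision layer layout.
return_nodes = {f"features.{i}": f"relu_{i}" for i in (1, 3, 6, 8, 11)}
extractor = create_feature_extractor(vgg, return_nodes=return_nodes)

x = torch.randn(1, 3, 256, 256)            # placeholder input image
with torch.no_grad():
    feats = extractor(x)                    # dict of multi-level feature maps
for name, f in feats.items():
    print(name, tuple(f.shape))             # each of shape (1, c_l, h_l, w_l)
```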
Step 3-3): The multi-level convolutional feature maps are fed into a region feature generator, which produces a discriminative multi-scale representation for the sub-regions of the image; the generator is shown in fig. 4. The generator first resizes the feature maps $f^{l}(x)$, which come from layers with different receptive field sizes, to the spatial size $(h \times w)$ of the input image, i.e. it aligns them as

$$\tilde{f}^{l}(x) = \operatorname{resize}\!\big(f^{l}(x),\, (h, w)\big),$$

where the aligned feature map $\tilde{f}^{l}(x)$ has size $h \times w \times c_{l}$. Each aligned feature map $\tilde{f}^{l}(x)$ is then convolved spatially with an average filter at an appropriate stride, giving the aggregation

$$\hat{f}^{l}(x) = \operatorname{AvgPool}\!\big(\tilde{f}^{l}(x)\big),$$

where the $l$-th aggregated feature map has size $h_{o} \times w_{o} \times c_{l}$. The aggregation operation serves two purposes: 1) it smooths the feature variation on the feature map, making the generated features more robust to noisy input; 2) the spatial size of the aggregated representation can be controlled by varying the convolution stride.

Finally, all aggregated feature maps are concatenated into a single feature map of size $h_{o} \times w_{o} \times c_{o}$:

$$f(x) = f^{\{1:L\}}(x) = \operatorname{concat}\!\big(\hat{f}^{1}(x), \ldots, \hat{f}^{L}(x)\big),$$

where $f^{\{1:L\}}(x)$ denotes the feature map obtained by combining the 1st to $L$-th aggregated feature maps, and its depth (number of channels) is $c_{o} = \sum_{l=1}^{L} c_{l}$.
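The alignment, average-filter aggregation and concatenation described above could be sketched as follows; the kernel size, stride and input resolution are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def region_features(feature_maps, out_stride=4, kernel=4, image_size=(256, 256)):
    """Dense multi-scale region feature generator (step 3-3), as a sketch.

    feature_maps: list of tensors of shape (1, c_l, h_l, w_l) from VGG19.
    Each map is resized (aligned) to the input resolution, smoothed and
    downsampled with average pooling, then all maps are concatenated
    along the channel dimension.
    """
    h, w = image_size
    aggregated = []
    for f in feature_maps:
        aligned = F.interpolate(f, size=(h, w), mode="bilinear",
                                align_corners=False)         # align to (h, w)
        pooled = F.avg_pool2d(aligned, kernel_size=kernel,
                              stride=out_stride)             # mean filter + stride
        aggregated.append(pooled)
    return torch.cat(aggregated, dim=1)   # (1, c_o, h_o, w_o), c_o = sum of c_l
```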
Step 3-4): Unlike a classical autoencoder, an asymmetric design is used here: the encoder processes only the partially observed features, i.e. the features not occluded by the mask, and a lightweight decoder reconstructs the complete features from the latent representation and the mask tokens, as shown in fig. 5. The masked autoencoder is trained only on the region feature representations of normal images, and the reconstruction loss is measured as the $\ell_{2}$ distance between the reconstructed dense region features $\hat{f}(x)$ and the corresponding ground truth $f(x)$, i.e.

$$\mathcal{L}(x) = \big\| \hat{f}(x) - f(x) \big\|_{2}^{2}.$$
The masked autoencoder performs deep feature reconstruction of the multi-scale region feature representation in the following 3 steps (a minimal sketch follows the list):
First step: feature masking. The input feature map is divided into regular, non-overlapping blocks; a random sampling strategy selects the blocks to keep, and the remaining blocks are removed.
Second step: feature encoding and decoding. The encoder is based on ViT; it embeds the visible blocks by linear projection with positional embeddings and processes only this subset of the full set of blocks, without using mask tokens. The input to the decoder consists of the encoded vectors from the encoder together with mask tokens. Each mask token is a shared, learnable vector representing a missing block to be predicted; positional embeddings are added to the decoder input so that the decoder can perform the reconstruction task.
Third step: feature reconstruction. The masked autoencoder reconstructs the input by predicting the values of each masked block, and each output vector of the decoder represents the values of one block. The last layer is a linear projection whose number of output channels equals the number of elements in a block, and the final output is rearranged to form the reconstructed features.
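A minimal sketch of the random feature masking and the reconstruction loss is given below, following the standard MAE recipe; restricting the loss to masked tokens and the specific tensor layout are assumptions for this example, not details confirmed by the patent:

```python
import torch

def random_mask(tokens, mask_ratio=0.7):
    """Randomly keep a subset of feature tokens (first step of the MAE).

    tokens: (B, N, D) sequence of region-feature blocks.
    Returns the visible tokens, the restore indices and a binary mask
    (1 = masked) used later to restore order and compute the loss.
    """
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=tokens.device)   # one random score per token
    ids_shuffle = torch.argsort(noise, dim=1)        # random permutation
    ids_restore = torch.argsort(ids_shuffle, dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(tokens, 1,
                           ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N, device=tokens.device)
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)        # 1 marks masked tokens
    return visible, ids_restore, mask

def reconstruction_loss(pred, target, mask):
    """MSE between reconstructed and original features, masked tokens only."""
    per_token = ((pred - target) ** 2).mean(dim=-1)  # (B, N)
    return (per_token * mask).sum() / mask.sum()
```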
Step 3-5): The network detects all possible abnormal regions in the input image from the reconstructed region feature map $\hat{f}(x)$ and the original input features $f(x)$; an example detection result is shown in fig. 6. An anomaly score map is inferred by comparing the original input features $f(x)$ with their deep reconstruction $\hat{f}(x)$, and the map is then binarized with a threshold to segment the anomalies. The anomaly score map is computed from the per-position reconstruction error between the input region features $f(x)$ and the reconstructed features $\hat{f}(x)$, i.e.

$$A_{i,j}(x) = \big\| f_{i,j}(x) - \hat{f}_{i,j}(x) \big\|_{2}^{2},$$

where $A_{i,j}(x)$ is the anomaly score of the region feature $f_{i,j}(x)$ and $(i, j)$ denotes the spatial position of $f_{i,j}(x)$ on the region feature map $f(x)$ of the input image $x$. Accordingly, $A(x)$ is an image feature anomaly map with the same spatial size as $f(x)$, namely $h_{o} \times w_{o}$. To obtain a pixel-wise anomaly map of the image, the feature anomaly map is further upsampled by trilinear interpolation. Because the masking-based masked autoencoder is trained only on the region features of normal images, the region features corresponding to abnormal image regions cannot be reproduced; the larger reconstruction errors of abnormal regions and their region features therefore correspond to high anomaly scores on an abnormal image.
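A sketch of turning the per-position reconstruction error into a pixel-level anomaly map is shown below; averaging over channels and the bilinear upsampling are illustrative choices (the text above mentions trilinear interpolation), while the 0.5 threshold comes from step 3-1:

```python
import torch
import torch.nn.functional as F

def anomaly_map(features, reconstructed, image_size=(256, 256), threshold=0.5):
    """Per-position reconstruction error turned into a pixel-level anomaly map.

    features, reconstructed: (1, c_o, h_o, w_o) region feature maps before
    and after MAE reconstruction (step 3-5).
    """
    # A_{i,j}(x): squared error over the channel dimension at each position
    score = ((features - reconstructed) ** 2).mean(dim=1, keepdim=True)
    # upsample the h_o x w_o score map to the image resolution
    score = F.interpolate(score, size=image_size, mode="bilinear",
                          align_corners=False)
    segmentation = (score > threshold).float()       # binarize with the threshold
    return score.squeeze(), segmentation.squeeze()
```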

Claims (9)

1. A workpiece surface defect detection method based on anomaly detection, characterized by comprising the following steps:
Step 1: study the surface characteristics of workpieces actually produced on a production line, determine the standard for a qualified workpiece, and judge whether a workpiece is qualified according to that standard;
Step 2: acquire workpiece surface images with a specific light source and an industrial camera, and build a data set after cropping, filtering and similar preprocessing;
Step 3: construct a workpiece surface defect detection model on the data set produced in step 2 based on anomaly detection technology: a convolutional neural network extracts features of the qualified-workpiece surface images at different sizes and scales, and the surface image of a qualified workpiece is reconstructed by combining a multi-scale region feature generator with a masked autoencoder; because the model never learns the surface images of unqualified workpieces, the feature distribution of an unqualified surface image differs greatly from the distribution the model has learned, and by comparing the images before and after reconstruction the detection model can detect whether defects exist in the surface image and locate the defect position.
2. The method of claim 1, wherein step 1 comprises: manually screening the acquired workpiece surface data to build a workpiece surface data set, and dividing the workpiece surface images into normal samples and defect samples by comparing qualified workpieces with unqualified workpieces, wherein the normal samples are used to train the anomaly detection network and the defect samples are the samples on which defects are to be detected.
3. The method of claim 1, wherein a workpiece surface defect takes the following form: any region of the workpiece surface that is inconsistent with the surface of a qualified workpiece is regarded as a defect.
4. The method of claim 1, wherein step 2 comprises: capturing images of the workpiece surface with an industrial camera under illumination from a structured light source.
5. The method of claim 1, wherein the collected data are preprocessed by cropping, filtering and similar operations to construct a data set comprising a training set and a test set, wherein the training set contains only surface images of qualified workpieces, i.e. normal samples, and the test set contains both normal samples and abnormal samples from unqualified workpieces.
6. The method of claim 1, wherein the input size of the workpiece surface image is set to 500 x 500, the feature size generated by the region feature generator is 32 x 32, and the depth of the masked autoencoder is 6 layers.
7. The method of claim 1, wherein the feature extractor in step 3 is a VGG19 network composed of 5 blocks with 19 hidden layers, comprising 16 convolutional layers and 3 fully connected layers, with 3 x 3 convolution kernels and 2 x 2 max pooling used uniformly throughout the network;
the multi-scale region feature representation is a dense multi-scale region feature representation generated by a region feature generator from part of the convolutional layer outputs of the VGG19 network.
8. The method of claim 1, wherein the feature maps of different scales fed into the region feature generator are resized to the input image size for alignment, the aligned feature maps are then spatially convolved with a mean filter at an appropriate stride, which smooths the feature variation on the feature maps, improves the robustness of the generated features and allows the feature map size to be controlled by adjusting the stride, and finally the aggregated feature maps are concatenated into a multi-channel feature map.
9. The method of claim 1, wherein the features obtained in claim 8 are partitioned into blocks by a ViT-based masked autoencoder and sampled and masked with a random sampling strategy, wherein the masked autoencoder performs deep feature reconstruction of the masked blocks, and wherein defect regions on the workpiece surface are detected by comparing the features before and after reconstruction.
CN202210634714.2A 2022-06-07 2022-06-07 Workpiece surface defect detection method based on anomaly detection Pending CN115018790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210634714.2A CN115018790A (en) 2022-06-07 2022-06-07 Workpiece surface defect detection method based on anomaly detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210634714.2A CN115018790A (en) 2022-06-07 2022-06-07 Workpiece surface defect detection method based on anomaly detection

Publications (1)

Publication Number Publication Date
CN115018790A true CN115018790A (en) 2022-09-06

Family

ID=83072667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210634714.2A Pending CN115018790A (en) 2022-06-07 2022-06-07 Workpiece surface defect detection method based on anomaly detection

Country Status (1)

Country Link
CN (1) CN115018790A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861315A (en) * 2023-02-27 2023-03-28 常州微亿智造科技有限公司 Defect detection method and device
CN115861315B (en) * 2023-02-27 2023-05-30 常州微亿智造科技有限公司 Defect detection method and device
CN117372720A (en) * 2023-10-12 2024-01-09 南京航空航天大学 Unsupervised anomaly detection method based on multi-feature cross mask repair
CN117372720B (en) * 2023-10-12 2024-04-26 南京航空航天大学 Unsupervised anomaly detection method based on multi-feature cross mask repair

Similar Documents

Publication Publication Date Title
CN108961217B (en) Surface defect detection method based on regular training
CN114549522B (en) Textile quality detection method based on target detection
CN111815601B (en) Texture image surface defect detection method based on depth convolution self-encoder
CN109829891B (en) Magnetic shoe surface defect detection method based on dense generation of antagonistic neural network
CN111402203B (en) Fabric surface defect detection method based on convolutional neural network
CN111383209B (en) Unsupervised flaw detection method based on full convolution self-encoder network
CN110263192B (en) Abrasive particle morphology database creation method for generating countermeasure network based on conditions
CN113989228A (en) Method for detecting defect area of color texture fabric based on self-attention
CN110930357B (en) In-service steel wire rope surface defect detection method and system based on deep learning
CN110992317A (en) PCB defect detection method based on semantic segmentation
CN115018790A (en) Workpiece surface defect detection method based on anomaly detection
CN112837295A (en) Rubber glove defect detection method based on generation of countermeasure network
WO2023050563A1 (en) Autoencoder-based detection method for defective area of colored textured fabric
CN109840483B (en) Landslide crack detection and identification method and device
CN107966444B (en) Textile flaw detection method based on template
CN113239930A (en) Method, system and device for identifying defects of cellophane and storage medium
CN115797354B (en) Method for detecting appearance defects of laser welding seam
CN113298757A (en) Metal surface defect detection method based on U-NET convolutional neural network
CN101140216A (en) Gas-liquid two-phase flow type recognition method based on digital graphic processing technique
CN113469951B (en) Hub defect detection method based on cascade region convolutional neural network
CN116152749B (en) Intelligent gear wear monitoring method based on digital twin
CN112381790A (en) Abnormal image detection method based on depth self-coding
CN114820625A (en) Automobile top block defect detection method
CN110660049A (en) Tire defect detection method based on deep learning
CN113674263A (en) Small sample defect detection method based on generation type countermeasure network

Legal Events

Date Code Title Description
PB01 Publication