SE1930421A1 - Method and means for detection of imperfections in products - Google Patents

Method and means for detection of imperfections in products

Info

Publication number
SE1930421A1
Authority
SE
Sweden
Prior art keywords
network
images
pixel
pixels
training
Prior art date
Application number
SE1930421A
Inventor
Filip Ärlemalm
Nils Bäckström
Oskar Flordal
Original Assignee
Unibap Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unibap Ab filed Critical Unibap Ab
Priority to SE1930421A priority Critical patent/SE1930421A1/en
Priority to PCT/SE2020/051260 priority patent/WO2021137745A1/en
Priority to SE2230231A priority patent/SE2230231A1/en
Publication of SE1930421A1 publication Critical patent/SE1930421A1/en

Classifications

    • G06V30/194 References adjustable by an adaptive method, e.g. learning
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/772 Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
    • G06N20/00 Machine learning
    • G06N3/02 Neural networks
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30136 Metal

Abstract

Method and means are disclosed using an autoencoder-type neural network that outputs probability density functions, improving the robustness of reconstruction-network anomaly detection and simplifying training. The network is preferably trained on images of imperfection-free products, and a probability density function is generated for pixels or regions in the reconstructed decoder representation. The probability density function expresses the likelihood of a specific pixel property for regions or individual pixels. The network is then run on production product images, comparing pixel by pixel the actual pixel properties with the expected pixel properties predicted by the probability density function.

Description

Method and means for detection of imperfections in products

TECHNICAL FIELD OF THE INVENTION
The present invention relates to a neural network model and method for detection of imperfections in products using image analysis on digital representations of the products, wherein images of the products are captured through digital camera means and imported to data processing means running a computer executable artificial neural network.
BACKGROUND AND PRIOR ART
Surface defect detection is the process of finding manufacturing defects that affect the quality of a produced item either visually or functionally. At a typical factory this is done at the end, or sometimes in the middle, of production to remove or rework defective goods before they reach the next production steps or the end customer. Considering that defects come in many shapes and forms, this is a domain which has traditionally involved large amounts of human labor. Recent developments in machine learning have improved the ability of computers to solve these problems. However, this has primarily been done by trained methods that learn to detect certain types of defects using manually collected training data, or in some cases by using more primitive versions of the algorithm below.

A typical running system comprises:
    • cameras and carefully placed lighting, which are used to collect data from surfaces that require inspection,
    • images analyzed by the algorithm and classified as either defective or not, usually with metadata such as type of defect and position,
    • based on the feedback from the algorithm, items are either manually or automatically removed from the production line.
An alternative method is to let an operator collect images of defective items, label the whole image or regions of the image as defective, and train an object detector or a classifier neural network to detect the errors. The drawbacks of this process are that:
    • it is time consuming, in that a lot of defects need to be found for training,
    • it does not always produce the required results,
    • it does not necessarily generalize to new types of defects that were not seen before, including catastrophic errors such as a portion of the object missing.
A recent approach is to rely on an autoencoder trained either through a GAN (Generative Adversarial Network) or through other measures to reconstruct the input image. An autoencoder is a network that typically takes a representation of, for example, an image, learns to encode it into a compact representation (for example 128 numbers), and then learns to decode these numbers into either the same form or a similar form as the input but without noise etc. Autoencoders are thereby forced to find a more efficient representation of the image. They are typically easy to train since, in the basic case where we simply try to make the output similar to the input, the training data does not need to be labeled.
The autoencoder is only trained on objects without defects, with the assumption that when the autoencoder is fed a defect image it will only be able to reconstruct a non-defective version of the input object. In other words, the autoencoder will find a way to describe an ideal object as precisely as possible, which does not leave any room in its internal representation to describe defects. By running the autoencoder on an image and comparing the reconstructed image with the input image, we reveal what parts were defective, for example by comparing the distance between individual pixels or using other similarity metrics like SSIM or PSNR (Structural Similarity Index Metric or Peak Signal to Noise Ratio). While flexible and only requiring good samples, the downside of this method is that it is less robust to areas of the image which are inherently chaotic, such as cut areas of metal objects, and these methods can also be complex to train to a sufficient degree.
There is a wealth of other handcrafted methods that try to find defects using classic vision, where particular features are compared between images. The art of detecting defects from images is very central to the art of machine vision. However, most of the methods tend to be very specific to a particular material and camera setup, and will not handle more complex and flexible defect setups.

In the prior art, generative neural networks of various shapes are used to generate and manipulate images of different classes. One such method is conditional image generation with PixelCNN decoders where, pixel by pixel, a convolutional neural network model is trained to understand, depending on the closest neighbors above and to the left, what pixel intensities are likely to come next (see appended Fig. 1). By randomizing in the distribution of the likely pixels one at a time, new plausible images can be generated. The purpose of these methods is to generate images rather than to detect defects, so these techniques stop at describing how to generate an image iteratively from the top to the bottom of the image.
The present invention is based on the observation that a conventional reconstructing neural network or classical autoencoder does not handle chaotic areas well and will always contain a bit of uncertainty. The problem is that an image of a manufactured item typically has a few different sources of what is essentially chaotic noise:
    • camera-introduced noise, either structural or random noise like photon shot noise, which will, depending on the camera, make each intensity on the actual object be represented within a random range after the camera,
    • texture on an object such as grinding patterns, micro facets, or natural random variations in the material.
In addition to this, when we have a statistical model, there is bound to be some uncertainty in the model for practical reasons. These sources are essentially impossible to describe in a compact form since they behave as noise.
Together with these essentially random factors there are factors concerning the object itself and how the image is taken, such as:
    • variations in how the object is placed in front of the camera, with large variation if the object is dangling from a conveyor and smaller differences if it is held by a robot or in a fixture,
    • tolerances for the size of the object, where different objects will have small size variations,
    • with respect to noise, there can be allowed variations in color due to the painting process of an object, etc.
A classical autoencoder will be better at handling the second set of differences by, e.g., learning a representation for the object pose or the acceptable variations as well as color shifts of the object. A classical autoencoder has, however, no way to encode random noise, and it also has difficulties in describing the pixel-perfect position of edges and specular highlights in the material, which may depend on allowed micro variations, since the model ultimately becomes very complex.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a neural network model and method for automatic detection of imperfections in products which avoids the shortcomings of previous models or methods.

Another object of the present invention is to provide a neural network model and method for automatic detection of imperfections in products which can handle chaotic areas in the image with a high degree of certainty.

Still another object of the present invention is to provide a neural network model and method for automatic detection of imperfections in products which is designed for simplified unsupervised learning.
One or several of these objects are met in a method for detection of imperfections in products using image analysis on digital representations of the products, wherein images of the products are captured through digital camera means and imported to data processing means running a computer executable artificial neural network of encoder-decoder type. In its most basic implementation, the method comprises:
    • training the network on images of imperfection-free products,
    • estimating the probability for a specific property pertaining to pixels or regions of pixels in a reconstructed decoder representation of the training images,
    • running the network on production product images, comparing properties of corresponding pixels or regions with those in the training images,
    • determining as an imperfection a pixel or region in production product images which displays an unexpected property of nil or low probability.
Alternatively explained, the cited step of estimating the probability for a specific pixel property, such as pixel intensity, is a training step at which the network is trained and thus taught to make an estimate based on actual properties in training images. In the disclosure, this feature is also generally named a probability distribution or probability density function.

It can further be noted that the step of running the network on production product images results in output of an estimated probability distribution (density function) for pixels or regions of pixels in production product images.

It shall also be noted that the step of determining or evaluating anomalies or imperfections/defects comprises, in other words, sampling of actual pixel properties or intensities in production product images in the calculated probability density function.
An advantage and technical effect provided by this solution is that learning does not require a large variety of possible defects in products or in manipulated training images. For the same reason, the conventionally required manual involvement in the training process is significantly reduced.

It shall be emphasized that the presented network model is not limited to training and inference focusing on individual pixels and pixel intensities only. On the contrary, in the training process we can model several types of properties besides intensity, such as edginess, intensity variance, and features such as pattern or texture, e.g., within a region of the image.

In more specific terms, embodiments of the invention include methods comprising the following steps:
    • estimating the probability for a specific intensity of pixels or regions of pixels in the reconstructed decoder representation of training images,
    • running the network on production product images, comparing pixel by pixel the intensity of pixels in production product images with the intensity of corresponding pixels in the training images,
    • determining as an imperfection a pixel or region in production product images which has an unexpected intensity of nil or low probability.
In a step-by-step instruction, the training run can be implemented as follows for the case of single-channel images:
    • prepare a set of images, grayscale or single color, without anomalies,
    • set up an autoencoder-type neural network that decodes an image to a probability density shape, such as a cube shape, or a batch of images to a batch of probability density shapes, by adding output filters to the network,
    • for each training iteration, calculate a desired probability density cube by taking the image and applying a Gaussian shape around the specific intensity for each pixel when writing the cube,
    • train the network by running a forward pass and calculating the MSE between the generated density cube and the output from the network,
    • update the network and repeat.
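The Gaussian target cube from the training steps above can be sketched in plain numpy. This is an illustrative sketch, not the patented implementation; the function names, the `sigma` value, and the toy image are our own assumptions:

```python
import numpy as np

def gaussian_target_cube(image, n_bins=256, sigma=3.0):
    """Build the desired probability density cube for one training image.

    For each pixel, a Gaussian centred on the true intensity is written
    along the bin axis, so nearby intensities are also treated as likely.
    """
    bins = np.arange(n_bins, dtype=np.float32)        # (n_bins,)
    diff = bins[None, None, :] - image[:, :, None]    # (h, w, n_bins)
    cube = np.exp(-0.5 * (diff / sigma) ** 2)
    cube /= cube.sum(axis=-1, keepdims=True)          # normalise per pixel
    return cube

def mse_loss(predicted_cube, target_cube):
    """MSE between the network's output cube and the Gaussian target cube."""
    return float(np.mean((predicted_cube - target_cube) ** 2))

# toy example: a 4x4 grayscale image of constant intensity 135
img = np.full((4, 4), 135, dtype=np.float32)
target = gaussian_target_cube(img)
assert target.shape == (4, 4, 256)
assert target[0, 0].argmax() == 135   # bin at the true intensity is most likely
```

A perfect network output would drive `mse_loss` to zero; in practice the Gaussian tolerance means nearby intensity bins also carry probability mass, which is what makes the training forgiving.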
The inference run can be implemented as follows in a corresponding step-by-step instruction:
    • run the trained network on an image that may contain defects; the network generates a probability density cube,
    • for each pixel in the input image, sample the probability of the intensity for that pixel in the probability density cube,
    • apply a preset threshold to determine if the probability is above or below the threshold: anything that is below is marked as a potentially anomalous pixel,
    • take the whole image of anomalous pixels, do an erosion, and then apply a dilation function; these steps will eliminate the smallest anomalous regions (outliers),
    • look at each area of connected marked pixels: if the number of pixels in any such region is larger than a second threshold, that area is marked as a defect or an imperfection.
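The inference steps above (probability sampling, thresholding, erosion/dilation, and region-size filtering) can be sketched in plain numpy. The helper names, the 3x3 structuring element, the thresholds, and the toy probability cube are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def sample_probabilities(image, cube):
    """For each pixel, look up the predicted likelihood of its actual intensity."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return cube[ys, xs, image]

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole neighborhood is set."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any pixel in its neighborhood is set."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

def region_sizes(mask):
    """Sizes of 4-connected regions of marked pixels (simple flood fill)."""
    h, w = mask.shape
    seen = np.zeros_like(mask)
    sizes = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, size = [(y, x)], 0
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    size += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

def find_defects(image, cube, prob_threshold=0.05, min_region=4):
    """Mark low-probability pixels, clean up outliers, keep large enough regions."""
    anomalous = sample_probabilities(image, cube) < prob_threshold
    anomalous = dilate(erode(anomalous))  # eliminates the smallest anomalous regions
    return [s for s in region_sizes(anomalous) if s >= min_region]

# toy cube: the seen intensity is likely everywhere except in a 3x3 patch
img = np.full((8, 8), 100, dtype=np.intp)
cube = np.zeros((8, 8, 256))
cube[:, :, 100] = 0.9
cube[2:5, 2:5, 100] = 0.0
assert find_defects(img, cube) == [9]   # one defect region of 9 connected pixels
```

A single stray anomalous pixel would be removed by the erosion step and never reach the region-size test, which is exactly the outlier suppression the instruction describes.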
Erosion and dilation are known concepts of morphological image processing which are familiar to persons skilled in the art, and thus they need no detailed explanation in this disclosure.

In more general terms, embodiments of the invention can be separated and defined through their individual characterizing features.
One embodiment of the method thus comprises the step of filtering the decoder output, determining if the actual pixel intensity falls inside or outside a pixel intensity range defined by the probability density algorithm.
Another embodiment of the method comprises the step of setting a threshold between zero (improbable) and one (highly probable) in the probability density function and rejecting as an imperfection a pixel intensity that falls below the probability threshold.
One embodiment of the method foresees that training the network comprises approximation of actual pixel intensities in training images into a normal distribution curve formed around a target pixel intensity towards which the network is being trained.
Another embodiment of the method comprises the step of determining the least number of clustered pixels required outside the predicted pixel intensity range, or below the probability threshold, to qualify as an imperfection.
The present invention also relates to a computer program product storable on a computer-usable medium containing instructions for a data processing means to execute the inventive method.
The computer program product may be provided at least in part over a data transfer network, such as Ethernet or the Internet.
The computer program product may be installed to run on a computer operated in a physical inspection cell for a production line.
The present invention further relates to a computer readable medium which contains the computer program product.
Advantages and technical effects provided by these and other embodiments are further explained in the accompanying detailed description of preferred embodiments.
SHORT DESCRIPTION OF THE DRAWINGS
The invention will be more closely described below with reference made to the accompanying, illustrating drawings, of which
Fig. 1 (Prior art) is a reproduced illustration taken from the PixelCNN paper (Aäron van den Oord et al., Conditional Image Generation with PixelCNN Decoders, in 30th Conference on Neural Information Processing Systems (NIPS), 2016),
Fig. 2 is an image showing image regions of exceptional pixel intensities indicating an anomaly,
Fig. 3 is a graph illustrating probability density functions in the form of Gauss curves around actual pixel intensities,
Fig. 4 is a graph illustrating probability density functions generated on pixels close to an edge of a sample,
Fig. 5 is an overview of the method and neural network model of the present invention installed in an inspection cell in a production line,
Fig. 6 is a flow chart illustrating a training process for the neural network model of the present invention,
Fig. 7 is a flowchart illustrating masking, and
Fig. 8 is a flowchart illustrating iterative training of a classifier in operation of the imperfections detecting method and neural network model of the present invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
With reference to Fig. 1, briefly speaking, PixelCNN is a prior art method for generating an image, pixel by pixel, by predictions based on the properties of previous pixels above and to the left of the next generated pixel. Fig. 1 illustrates how the next pixel (black) is generated based on the actual intensities of previous pixels (see the graph) within a rectangular window that is moved in rows over the image from upper left to lower right.
Fig. 2 is an anomaly intensity image showing typical behavior of the model/method of the present invention. The intensity of the white indicates how unlikely a particular value is; strong white indicates that it is less likely. In this image, the white dot in the middle is an anomaly. The noisy pattern indicates the normal noise variations, which are enhanced due to the outliers being slightly unlikely; even though they might be minor pixel shifts, they are clearly within the acceptable region.
Fig. 3 is a graph illustrating the assessment basis in inference mode of the neural network model for detecting imperfections or anomalies in products according to the present invention. In the graph, the vertical scale indicates likelihood values from 0 to 1, and the horizontal scale indicates pixel intensity values from 0 to 255. The continuous-line curve (right) illustrates a probability density function comprising an estimation of the probability for a specific pixel intensity based on input pixel intensities in imperfection-free training images. The algorithm generates a Gaussian curve approximating the distribution of intensity values in training images, the shape of the curve predicting the likelihood of a specific pixel intensity. Since in this case the training images show a narrow range of pixel intensity values, between about 120-150 on the horizontal scale, the prediction for an actual pixel intensity of exactly 135 in analyzed images is quite high, about 0.8 on the likelihood scale. From the graph it can be concluded that a pixel intensity value of 140, e.g., shows high conformity with the predicted intensity range and lands in the region of about 0.6 on the likelihood scale. A pixel intensity value of 125, on the other hand, shows low conformity with the predicted intensity range and lands below 0.2 on the likelihood scale. To separate anomalous samples from imperfection-free products in operation of the network model, the inference process can be designed to insert a threshold value on the likelihood scale as a discriminator, in Fig. 3 illustrated by a dash-dot line.

In Fig. 3, a broken-line curve is generated by the algorithm in a similar way based on a single pixel intensity in a production image, hence the high likelihood for that intensity. Obviously, there is no overlap or conformity between the two curves, and the single intensity which has produced the broken-line curve is clearly outside the predicted intensity range, indicating an anomaly in the production.
Fig. 4 is a graph similar to Fig. 3. In this case, the continuous-line curve describes the distribution of pixel intensities in the vicinity of a border or edge in training images. The probability density functions are here much wider (cf. Fig. 3) since there is a greater spread and distribution of intensity values in this region, which can be due to, e.g., how the light breaks on the surface of the product and where exactly the edge falls. The broken-line curve, based on a single intensity value in the analyzed image, however represents an intensity which is clearly inside the predicted intensity range, and this image will pass as accepted without anomalies.
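As a minimal numeric illustration of this discriminator, assume a Gaussian density centred at intensity 135; the spread `sigma=8` and the threshold 0.3 are hypothetical values of our own choosing, since the exact curve widths and threshold in Figs. 3-4 are not specified numerically:

```python
import math

def gauss_likelihood(x, mean, sigma):
    """Unnormalized Gaussian likelihood with peak value 1.0 at the mean."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

# hypothetical discriminator threshold on the likelihood scale (the dash-dot line)
THRESHOLD = 0.3
assert gauss_likelihood(140, mean=135, sigma=8) > THRESHOLD  # inside predicted range: accepted
assert gauss_likelihood(110, mean=135, sigma=8) < THRESHOLD  # outside: flagged as anomalous
```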
According to the invention, instead of calculating a specific value we are interested in calculating a range of possible values at a certain position. This way, normally occurring variance can be handled more easily without having to utilize a network with too much expressivity (which would also be able to reconstruct defects). Drawing inspiration from density functions as generated by non-GAN based generation networks like PixelCNN, the basic autoencoder algorithm can instead be changed to the following:

Gather a set of images that preferably have no defects. If they do have defects, they can be sorted out by highlighting the most unusual images in the training set for human inspection, for example by running the algorithm with low thresholds for error.

In a factory this is typically done by installing the cameras in normal production and capturing images without making any decisions. This can also be done off site in a lab, but the correct settings need to be replicated regarding light etc., or the light at the factory will be determined as anomalous.
Train an autoencoder-like network 9 that is forced to compress its understanding of the input and where the decoder 11 expands that understanding into a shape which contains one density function for each pixel (or element) that can contain one bin for each intensity value of the input.
To be more specific, for an 8-bit mono-colored 256x256 pixel image, a 256x256x256 cube is provided as the output. Each value of this density cube 12 indicates the likelihood of a specific pixel intensity for that particular pixel. I.e., for a specific pixel this could describe that the likelihood of the pixel having intensity 0, 1, 2 or 3 (i.e. dark) is relatively high, while the likelihood of it being between 32-100 is very low and unlikely.
The network is set up using convolutional layers (i.e. small kernels that move across the image) as well as fully connected layers in the middle that compress the information into one layer which may contain, for example, 128 entries or 256 entries. The network is a tailored version of an artificial neural network of encoder-decoder type.

The density function output per pixel can be implemented in many ways. In our preferred embodiment it consists of a series of 1x1 convolutions that are connected to the set of filters that would normally come before the final step of the autoencoder output. In other words, we have a set of, for example, 64 filters from the autoencoder, which is then analyzed through a new set of filters for each pixel to calculate the likelihoods of different intensities.
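The 1x1-convolution output head described above can be sketched as a per-pixel matrix product over the filter axis. The filter counts (64 in, 256 out) follow the example in the text, while the softmax normalization and the random weights below are our own illustrative choices, not details from the disclosure:

```python
import numpy as np

def density_head(features, weights, bias):
    """A 1x1-convolution output head: per pixel, map 64 autoencoder filter
    responses to 256 intensity-bin scores, then softmax them into a density
    function. A 1x1 convolution touches each pixel independently, so it is
    equivalent to this matrix product along the channel axis."""
    logits = features @ weights + bias            # (h, w, 64) @ (64, 256) -> (h, w, 256)
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 64))               # stand-in for the 64 filter maps
w = rng.normal(size=(64, 256)) * 0.1
b = np.zeros(256)
cube = density_head(feats, w, b)
assert cube.shape == (4, 4, 256)                  # one density function per pixel
assert np.allclose(cube.sum(axis=-1), 1.0)        # each sums to 1
```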
The network can be efficiently trained by using a Gauss curve around the correct pixel intensity and training the network to ideally output that Gauss curve, calculating a loss based on MSE (Mean Square Error) against the generated Gauss curve. By using a Gauss curve, the generated density function doesn't have to precisely match the output and is thus easier to train. I.e., we help the network to understand that values near the actual value at a certain position are also acceptable. Experimentation has shown this to be an efficient way to generate arbitrary density functions from the learned network.
Since only one value in each density function will prove correct for each pixel, an alternative way to train is to train only on a few failing samples per image together with all the correct samples. This way the network needs to learn the distribution itself without assuming it is Gaussian. This has also proven to work nearly as well as training using the Gaussian curve.
Given this output, which we can call the probability cube, we can sample each value in the input image on the individual density function for each pixel and, based on the probabilities for each value, determine whether a region of pixels is unexpected (an anomaly) or not. I.e., if the input image has a value of 23 for a given pixel, we check in the output density function for that pixel how likely 23 is as a value. The cube can contain normalized or non-normalized values, which gives a likelihood estimate that says whether this is likely or very unlikely. Typically, the density functions contain some noise, so a threshold is preferably set at some low level.
The output filtering can be done in a number of different ways. The most straightforward is to determine that all pixels with a probability below a threshold are deemed anomalies. A post filter can then be applied where sufficiently large sets of connected anomalous pixels are deemed an actual anomaly (since individual outliers can still occur and be insignificant). The required number of pixels can be calibrated towards reducing the number of false positives while still allowing small defects to be detected. In a properly trained network, the number of defects is expected to be small due to the flexibility of the density function.
That an individual anomaly is insignificant can be either due to the defect being very small, or to the model not being expressive enough to cover very rare but still acceptable occurrences such as dust.

In a more advanced implementation, a neural network can be trained on the anomaly map output from the process to determine whether an anomaly is significant or not. The advantage of such a network is that the density cube will behave the same for different types of images of objects being tested. One way to do this is to check each region where the total anomaly within the region exceeds a low threshold, or to simply run the post filter on top of all regions of the image.
An alternative way is to train an object detector to suggest interesting regions in the image. The advantage of training a detector on the output is that the probability cube domain behaves similarly for different types of images. In other words, a defect on a metal part can look very similar in the probability cube to a defect on a plastic part. This is not necessarily true when looking at the raw image from the camera. This way the effort of training a network can be reused across multiple different object and material types.

In typical implementations we still want to classify defects. This can now be done by running a classifier on top of the regions determined to be anomalous. In some cases, this stage can also be used to suppress features that, while anomalous, are acceptable in production. In other words, one way to do this is to train a post-operated classifier CNN (Convolutional Neural Network) to recognize false positives from the anomaly network, and pass all found anomalies through this network. This optional network needs to be trained per object type and material to meet whatever classifications are required for this process to get the correct labeling.
Based on the probability cube of an area, these anomalies can be clustered based on their likeness so that we can define separate classes of defects that the user of the system can give different names. This is a type of auto-labeling. The grouping can be done by a clustering algorithm such as K-Means. A neural network can be used to reduce the dimensionality of the faults; this network can be trained on a smaller set of defects and reused for new datasets. In other words, apply a network that takes, for example, a 32x32 pixel area around an error and train it to separate different labeled defect types into different bins by describing them with a 128-entry vector. The same network can then be used on a completely new dataset. This is a technique used when trying to separate, for example, different faces, where not all different faces will have been seen during training. Instead you teach the network to learn an efficient representation where different faces tend to get different 128-entry vectors as output and similar/same faces get similar output vectors.

In a typical implementation this may have to be done on multiple color channels, since Red, Green and Blue have separate output density functions. This can be calculated by extending the density functions for each pixel while keeping a similar network structure, but it could also be done through three separate networks, one for each color.
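The auto-labeling grouping mentioned above can be sketched with a minimal K-Means over defect descriptor vectors. This is an illustrative stand-in (with a deterministic farthest-point initialization of our own choosing), not the clustering used in any particular deployment; the 128-entry descriptors are synthetic:

```python
import numpy as np

def kmeans(vectors, k, iters=20):
    """Minimal K-Means: group defect descriptor vectors into k classes."""
    # farthest-point initialisation: deterministic, spreads the centers out
    centers = [vectors[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(vectors - c, axis=1) for c in centers], axis=0)
        centers.append(vectors[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assign every vector to its nearest center, then recompute the means
        d = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vectors[labels == j].mean(axis=0)
    return labels

# two well-separated synthetic groups of 128-entry defect descriptors
rng = np.random.default_rng(1)
group_a = rng.normal(0.0, 0.1, size=(10, 128))
group_b = rng.normal(5.0, 0.1, size=(10, 128))
labels = kmeans(np.vstack([group_a, group_b]), k=2)
assert len(set(labels[:10])) == 1 and len(set(labels[10:])) == 1
assert labels[0] != labels[10]
```

In practice the descriptors would come from the embedding network described in the text, and each resulting cluster index becomes a defect class the user can name.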
While the density function is a powerful concept, the human eye and brain will sometimes pick up on a defect not because of color variations but due to the structure, or lack thereof, in a single area. In other words, in the chaotic texture left by a cutting tool on metal, it is easy for a human to spot a longer scratch even if the scratch has a color within the same color band as the chaotic texture. To cover this aspect, the same concept can be applied, but instead of using three different color channels, a set of channels that describes the intensity of different texture variations is used. Consider, for example, one channel that describes a vertically striped texture; this effect can be either strong or weak in a region of the image. Many convolutional neural networks will describe textures a few layers into the network, since textures are central for determining what object an image depicts. Given this information, the algorithm thus becomes:

Take the output from a detection network (or a reconstructing autoencoder) a few layers into the network. This will typically generate a vector of lower resolution than the input, and will typically contain more channels (filters). In the example with the 256x256 image we could, for instance, have a 64x64x64 vector that contains texture intensities for 64 textures from 64x64 patches of size 5x5 in the image, using typical strides and convolution sizes, which gives some overlap between the patches (the stride is the distance between each invocation of the convolution kernel). This sampling can be done at several layers in order to convey probabilities over more advanced textures.

For each texture intensity we normalize the intensities and can then treat them the same way as a color intensity for a 64x64 image, according to the algorithm above. Similarly to the way we combine three color channels in the image, we can also calculate them all in the same network and expand to, for example, 64 different density functions for each region of the image.
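As an illustrative sketch only, the texture channels can be pictured as strided correlations of the image with a small filter bank. The hand-made stripe kernels below are our own stand-in for activations a few layers into a CNN; real deployments would take the intermediate layer output directly:

```python
import numpy as np

def texture_channels(image, kernels, stride=4):
    """Correlate the image with a bank of texture kernels at a given stride,
    yielding one low-resolution intensity map per texture."""
    kh, kw = kernels[0].shape
    rows = range(0, image.shape[0] - kh + 1, stride)
    cols = range(0, image.shape[1] - kw + 1, stride)
    out = np.stack([
        np.array([[np.sum(image[r:r + kh, c:c + kw] * k) for c in cols]
                  for r in rows])
        for k in kernels
    ], axis=-1)
    # normalise every channel so it can be treated like a colour intensity
    out = out - out.min(axis=(0, 1), keepdims=True)
    span = out.max(axis=(0, 1), keepdims=True)
    return out / np.where(span == 0, 1.0, span)

vertical = np.tile([[1.0, -1.0, 1.0, -1.0, 1.0]], (5, 1))  # vertical-stripe detector
horizontal = vertical.T                                     # horizontal-stripe detector
image = np.zeros((64, 64))
image[:, :32:2] = 1.0                   # vertical stripes on the left half only
feats = texture_channels(image, [vertical, horizontal])
```

Each normalized channel of `feats` can then be fed into the same per-region density-function machinery as a color channel.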
Similar to the color density functions, this network can also be used to find texture outliers. In other words, it would be unexpected to find a striped pattern in the middle of a dotted pattern, which probably indicates a scratch.
This can also be implemented by transforming the image with a set of classic filters: by applying a Laplacian filter, for example, in order to find which areas of the image contain many borders, by checking transform values from a discrete Fourier transform, by measuring the variance and mean of an area, by calculating directional edges, etc. The output of the filter is then used as the input channel to the anomaly detection network.
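Two of the classic filters above can be sketched in a few lines; these minimal implementations (our own, under the assumption of single-channel float images) produce extra input channels for the anomaly network:

```python
import numpy as np

def laplacian(img):
    """Discrete 4-neighbour Laplacian; large magnitudes mark borders/edges."""
    out = np.zeros_like(img, dtype=float)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return out

def local_stats(img, size=8):
    """Per-block mean and variance of an area, two more simple channels."""
    h, w = img.shape
    blocks = img[:h // size * size, :w // size * size].reshape(
        h // size, size, w // size, size)
    return blocks.mean(axis=(1, 3)), blocks.var(axis=(1, 3))

flat = np.full((32, 32), 0.5)
edge = flat.copy()
edge[:, 12:] = 1.0                      # a vertical border through the image
flat_response = np.abs(laplacian(flat)).max()   # zero on a featureless area
edge_response = np.abs(laplacian(edge)).max()   # non-zero along the border
mean_map, var_map = local_stats(edge)
```

The Laplacian responds only where intensity changes, and the variance map highlights blocks straddling the border, which is exactly the kind of structural signal the color density functions miss.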
One or several layers of texture probabilities can be combined with the color probabilities to make a decision on the anomalous nature of a region. Preferably this combination is made in a neural network, unless training is sufficiently powerful to accurately determine likely or unlikely regions.
Fig. 6 is a flowchart illustrating a process for ensuring the reliability of the network model. In step 13, images are processed through the network structure 9, which generates the probability density cube 12 built on probability density functions applied to pixels in the output decoder representations. In post-analysis step 14, probabilities of the input images are matched with the Gaussian probability curves generated in step 15 for pixels in training images imported from step 16. In step 17, the MSE loss is calculated with respect to the Gaussian probability curves. Step 18 is the evaluation step, wherein the calculated MSE loss is compared with a setpoint or threshold value. If the MSE loss is acceptable, the model is saved in step 19 as a reliable model for operative use. However, if the MSE loss is unacceptable, the process returns to step 14.
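The core of steps 15 and 17 can be sketched as follows. The bin count, sigma and array names are our own illustrative choices, not values from the disclosure; the point is that a perfect network output exactly matches the Gaussian target cube and yields zero MSE loss:

```python
import numpy as np

BINS = np.linspace(0.0, 1.0, 32)        # discretised intensity axis of the cube
SIGMA = 0.05                            # width of the Gaussian around each pixel

def gaussian_target(image):
    """Desired probability density cube: for every pixel, a Gaussian bump over
    the intensity bins, centred on that pixel's actual intensity (step 15)."""
    d = BINS[None, None, :] - image[..., None]
    cube = np.exp(-0.5 * (d / SIGMA) ** 2)
    return cube / cube.sum(axis=-1, keepdims=True)   # normalise per pixel

def mse_loss(predicted_cube, image):
    """MSE between the network's output cube and the Gaussian target (step 17)."""
    target = gaussian_target(image)
    return float(np.mean((predicted_cube - target) ** 2))

img = np.random.default_rng(2).random((8, 8))
perfect = gaussian_target(img)          # a hypothetical network that hits the target
loss = mse_loss(perfect, img)
```

In the evaluation step 18, `loss` would be compared against the setpoint before the model is saved or retrained.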
A masking process may be applied, as illustrated in Fig. 7, in order to ensure that only the relevant object is seen in the image. Masking is effective for removing variability in the surrounding environment. A masking neural network can be trained to find out which pixels belong to the object and to set all other pixels in the image to 0. From step 20, an image is imported to a positioning algorithm run in step 21 and trained to isolate the relevant pixels or regions of pixels in the image. In step 22, the irrelevant pixels or regions of pixels are masked by the positioning algorithm. In step 33, the masked image is imported for training of the network.
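The masking of step 22 reduces to zeroing every pixel outside the object mask. A minimal sketch, assuming a boolean mask as produced by the positioning network:

```python
import numpy as np

def apply_mask(image, mask):
    """Zero out every pixel the positioning network deems background."""
    return np.where(mask, image, 0)

image = np.arange(16).reshape(4, 4)      # toy 4x4 image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                    # the object occupies the centre
masked = apply_mask(image, mask)
```

Only the object pixels survive, so the anomaly network never sees variability from the surrounding environment.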
Fig. 8 is a flowchart and overview showing a network model as represented by the present disclosure. Notably, in Fig. 8, anomalous images in the decoder output 24 are fed to a post-operated classifier neural network 25, which is trained per product type and material to label and separate imperfections or clusters of imperfections into different classes. A feedback loop 26 from decision step 27 to a labeling process 28 is implemented for iterative training of the classifier 25 on images containing defects which were previously unknown to the classifier.
Briefly and conclusively, the invention as disclosed prescribes the use of a probability density function in order to relax matching for reconstructing neural networks. The invention further suggests reconstruction on features for recreating texture. Training of the network includes features such as computing the MSE against a Gaussian curve applied as an estimator. Another inventive feature is the application of a classifier on top of the anomaly detector, and the feedback loop made possible by it. The network model and method of the present invention can be run on an edge computer or server in a production line, whereas training can alternatively be done offline and tuned online as the customer adds customer-specific features to the setup of the network model and method. The present invention results in high precision in finding imperfections in products; it can detect imperfections not seen before, and it is robust to false positives in multiple ways.
A computer program product or a computer program implementing the method or a part thereof comprises software or a computer program run on a general-purpose or specially adapted computer, processor or microprocessor. The software includes computer program code elements or software code portions that make the computer perform the method. The program may be stored, in whole or in part, on or in one or more suitable computer readable media or data storage means such as a magnetic disk, CD-ROM or DVD disk, hard disk, magneto-optical memory storage means, RAM or volatile memory, ROM or flash memory, firmware, a data server, or a cloud server. Such a computer program product or computer program can also be supplied via a network, such as the Internet.

It is to be understood that the embodiments described above and illustrated in the drawings are to be regarded only as non-limiting examples of the present invention and may be modified within the scope of the appended claims.

Claims (1)

CLAIMS

1. A method for detection of imperfections in products using image analysis on digital representations of the products, wherein images of the products are captured through digital camera means and imported to data processing means running a computer executable artificial neural network of encoder-decoder type, the method comprising:
a) training the network on images of imperfection-free products,
b) estimating the probability for a specific property pertaining to pixels or regions of pixels in a reconstructed decoder representation of the training images,
c) running the network on production product images, comparing properties of pixels or regions in production product images with properties of corresponding pixels or regions in the training images,
d) determining as an imperfection a pixel or region in production product images which displays an unexpected property of nil or low probability.

2. The method of claim 1, comprising:
- estimating the probability for a specific intensity of pixels or regions of pixels in the reconstructed decoder representation of training images,
- running the network on production product images, comparing pixel by pixel the intensity of pixels in production product images with the intensity of corresponding pixels in the training images,
- determining as an imperfection a pixel or region in production product images which has an unexpected intensity of nil or low probability.

3. The method of claim 1 or 2, comprising the step of filtering the decoder output, determining if the pixel intensity falls inside or outside a pixel intensity range defined by the probability estimation.

4. The method of any previous claim, comprising the step of setting a threshold between zero (improbable) and one (highly probable) in the probability estimate and rejecting as an imperfection a pixel intensity that falls below the probability threshold.

5. The method of any previous claim, wherein training the network comprises approximation of actual pixel intensities in training images into a normal distribution curve formed around a target pixel intensity towards which the network is being trained.

6. The method of any of claims 2 to 5, comprising the step of determining the least number of clustered pixels required outside the predicted pixel intensity range, or below the probability threshold, to qualify as an imperfection.

7. The method of claim 6, comprising the step of filtering the decoder output intensity values through a static filter which denotes images having regions containing above a predetermined number of pixels with unexpected pixel intensities, and feeding these images or regions through a neural network trained to determine whether or not these regions are imperfections.

8. The method of any previous claim, comprising the step of feeding the decoder output to a post-operated neural network which is trained to determine the significance of an imperfection by comparison with a map of known imperfections.

9. The method of any previous claim, comprising the step of feeding the decoder output to a post-operated, object detecting neural network which is trained to select regions of interest in the production product images.

10. The method of any previous claim, comprising the step of feeding anomalous images in the decoder output to a post-operated classifier neural network which is trained per product type and material to label and separate imperfections or clusters of imperfections into different classes.

11. The method of claim 10, wherein training the classifier neural network includes feedback and labelling of previously unknown imperfections detected during running of the network on production product images.

12. The method of any previous claim, comprising repeating the method steps a), b), c), d) for each colour red, green and blue in a multicolour image.

13. The method of any of claims 1 to 10, wherein training the network comprises repeating the steps a), b), c), d) for each type of texture included in a set of textures previously recognized by a pre-operated neural network.

14. The method of claims 12 and 13, comprising the combination of the method steps in claims 12 and 13.

15. The method of any previous claim, comprising the step of isolating the product from the environment through masking by setting the intensities of pixels outside the product to 0 in the image.

16. The method of any previous claim, wherein training the network comprises:
- preparing a set of (grayscale or single color) images without anomalies,
- setting up an autoencoder-type neural network that decodes an image to a probability density shape, such as a cube shape, or a batch of images to a batch of probability density shapes, such as cube shapes, by adding output filters to the network,
- for each training iteration, calculating a desired density shape by taking the image and applying a Gaussian shape around the specific intensity for each pixel when writing the cube,
- training the network by running a forward pass and calculating the MSE of the generated probability density cube and the output from the network,
- updating the network and repeating the previous steps.

17. The method of any previous claim, comprising:
- running the trained network on an image that may contain defects, the network generating a probability density cube,
- for each pixel in the input image, sampling the probability of the intensity for that pixel in the output probability density cube,
- applying a preset threshold to determine if the probability is above or below the threshold: anything that is below is marked as a potentially anomalous pixel,
- taking the whole image and applying an erosion,
- taking the whole image of anomalous pixels and applying a dilation function,
- thereby eliminating the smallest anomalous regions (outliers),
- evaluating each area of connected marked pixels: if the number of pixels in any such region is larger than a preset threshold, that area is marked as a defect or imperfection.

18. A computer program product storable on a computer usable medium containing instructions for a data processing means to execute the method of any of claims 1-17.

19. The computer program product of claim 18, provided at least in part over a data transfer network, such as Ethernet or the Internet.

20. The computer program product of claim 18 or 19, installed to run on a computer operated in a physical inspection cell for a production line.

21. A computer readable medium, characterized in that it contains a computer program product according to claim 18.
SE1930421A 2019-12-30 2019-12-30 Method and means for detection of imperfections in products SE1930421A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SE1930421A SE1930421A1 (en) 2019-12-30 2019-12-30 Method and means for detection of imperfections in products
PCT/SE2020/051260 WO2021137745A1 (en) 2019-12-30 2020-12-23 A method for detection of imperfections in products
SE2230231A SE2230231A1 (en) 2019-12-30 2020-12-23 A method for detection of imperfections in products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
SE1930421A SE1930421A1 (en) 2019-12-30 2019-12-30 Method and means for detection of imperfections in products

Publications (1)

Publication Number Publication Date
SE1930421A1 true SE1930421A1 (en) 2021-07-01

Family

ID=74104158

Family Applications (2)

Application Number Title Priority Date Filing Date
SE1930421A SE1930421A1 (en) 2019-12-30 2019-12-30 Method and means for detection of imperfections in products
SE2230231A SE2230231A1 (en) 2019-12-30 2020-12-23 A method for detection of imperfections in products

Family Applications After (1)

Application Number Title Priority Date Filing Date
SE2230231A SE2230231A1 (en) 2019-12-30 2020-12-23 A method for detection of imperfections in products

Country Status (2)

Country Link
SE (2) SE1930421A1 (en)
WO (1) WO2021137745A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469977A (en) * 2021-07-06 2021-10-01 浙江霖研精密科技有限公司 Flaw detection device and method based on distillation learning mechanism and storage medium
CN115937109A (en) * 2022-11-17 2023-04-07 创新奇智(上海)科技有限公司 Silicon wafer defect detection method and device, electronic equipment and storage medium
CN116883446A (en) * 2023-09-08 2023-10-13 鲁冉光电(微山)有限公司 Real-time monitoring system for grinding degree of vehicle-mounted camera lens

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298190B (en) * 2021-07-05 2023-04-07 四川大学 Weld image recognition and classification algorithm based on large-size unbalanced samples
FR3125156B1 (en) * 2021-07-12 2023-11-10 Safran NON-DESTRUCTIVE CONTROL OF A PART
CN113706465B (en) * 2021-07-22 2022-11-15 杭州深想科技有限公司 Pen defect detection method based on deep learning, computing equipment and storage medium
CN113689390B (en) * 2021-08-06 2023-10-24 广东工业大学 Abnormality detection method for non-defective sample learning
CN113778719B (en) * 2021-09-16 2024-02-02 北京中科智眼科技有限公司 Anomaly detection algorithm based on copy and paste
CN113888663B (en) * 2021-10-15 2022-08-26 推想医疗科技股份有限公司 Reconstruction model training method, anomaly detection method, device, equipment and medium
EP4174794A1 (en) * 2021-11-02 2023-05-03 Sensyne Health Group Limited Anomaly detection in images
CN114386067B (en) * 2022-01-06 2022-08-23 承德石油高等专科学校 Equipment production data safe transmission method and system based on artificial intelligence
EP4213091A1 (en) * 2022-01-14 2023-07-19 Siemens Aktiengesellschaft Physics-informed anomaly detection in formed metal parts
CN114170227B (en) * 2022-02-11 2022-05-31 北京阿丘科技有限公司 Product surface defect detection method, device, equipment and storage medium
DE102022108979A1 (en) 2022-04-12 2023-10-12 Wipotec Gmbh Method and device for detecting anomalies in two-dimensional digital images of products
CN114494260B (en) * 2022-04-18 2022-07-19 深圳思谋信息科技有限公司 Object defect detection method and device, computer equipment and storage medium
CN114549997B (en) * 2022-04-27 2022-07-29 清华大学 X-ray image defect detection method and device based on regional feature extraction
CN114820541A (en) * 2022-05-07 2022-07-29 武汉象点科技有限公司 Defect detection method based on reconstructed network
CN114646563B (en) * 2022-05-23 2022-08-26 河南银金达新材料股份有限公司 Method for detecting surface abrasion resistance of polyester film with metal coating
CN114821195B (en) * 2022-06-01 2022-12-16 南阳师范学院 Intelligent recognition method for computer image
CN115423798A (en) * 2022-09-22 2022-12-02 中广核核电运营有限公司 Defect identification method, defect identification device, computer equipment, storage medium and computer program product
CN115423807B (en) * 2022-11-04 2023-03-24 山东益民服饰有限公司 Cloth defect detection method based on outlier detection
CN116596875B (en) * 2023-05-11 2023-12-22 哈尔滨工业大学重庆研究院 Wafer defect detection method and device, electronic equipment and storage medium
CN116777292B (en) * 2023-06-30 2024-04-16 北京京航计算通讯研究所 Defect rate index correction method based on multi-batch small sample space product
CN116805312B (en) * 2023-08-21 2024-01-05 青岛时佳汇服装有限公司 Knitted fabric quality detection method based on image processing
CN117272055B (en) * 2023-11-23 2024-02-06 国网山西省电力公司营销服务中心 Electric energy meter abnormality detection method and device based on filtering enhancement self-encoder

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090238432A1 (en) * 2008-03-21 2009-09-24 General Electric Company Method and system for identifying defects in radiographic image data corresponding to a scanned object
US20170148226A1 (en) * 2015-11-19 2017-05-25 Kla-Tencor Corporation Generating simulated images from design information
US20170200260A1 (en) * 2016-01-11 2017-07-13 Kla-Tencor Corporation Accelerating semiconductor-related computations using learning based models
US20170351952A1 (en) * 2016-06-01 2017-12-07 Kla-Tencor Corporation Systems and methods incorporating a neural network and a forward physical model for semiconductor applications
WO2018192672A1 (en) * 2017-04-19 2018-10-25 Siemens Healthcare Gmbh Target detection in latent space
US20190130279A1 (en) * 2017-10-27 2019-05-02 Robert Bosch Gmbh Method for detecting an anomalous image among a first dataset of images using an adversarial autoencoder
WO2019155467A1 (en) * 2018-02-07 2019-08-15 Applied Materials Israel Ltd. Method of generating a training set usable for examination of a semiconductor specimen and system thereof
US20190287230A1 (en) * 2018-03-19 2019-09-19 Kla-Tencor Corporation Semi-supervised anomaly detection in scanning electron microscope images

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6819790B2 (en) * 2002-04-12 2004-11-16 The University Of Chicago Massive training artificial neural network (MTANN) for detecting abnormalities in medical images
US10402700B2 (en) * 2016-01-25 2019-09-03 Deepmind Technologies Limited Generating images using neural networks
CN108009506A (en) * 2017-12-07 2018-05-08 平安科技(深圳)有限公司 Intrusion detection method, application server and computer-readable recording medium



Also Published As

Publication number Publication date
WO2021137745A1 (en) 2021-07-08
SE2230231A1 (en) 2022-07-08

Similar Documents

Publication Publication Date Title
SE1930421A1 (en) Method and means for detection of imperfections in products
US11694318B2 (en) Electronic substrate defect detection
US20180150696A1 (en) Detection of logos in a sequence of video frames
CN110956615B (en) Image quality evaluation model training method and device, electronic equipment and storage medium
US10726535B2 (en) Automatically generating image datasets for use in image recognition and detection
Kozamernik et al. Visual inspection system for anomaly detection on KTL coatings using variational autoencoders
CN117274245B (en) AOI optical detection method and system based on image processing technology
JP2017511674A (en) System for identifying a photo camera model associated with a JPEG compressed image, and associated methods, uses and applications
CN113313179A (en) Noise image classification method based on l2p norm robust least square method
CN115830351B (en) Image processing method, apparatus and storage medium
CN113129265A (en) Method and device for detecting surface defects of ceramic tiles and storage medium
US9589331B2 (en) Method and apparatus for determining a detection of a defective object in an image sequence as a misdetection
Kim et al. Camera module defect detection using gabor filter and convolutional neural network
CN114529515A (en) Method for automatically identifying internal defects of solar cell
Francis et al. Feature enhancement and denoising of a forensic shoeprint dataset for tracking wear-and-tear effects
CN115699110A (en) Segmentation mask generation in alpha channel based on automatic encoder
Gao et al. Image quality assessment using image description in information theory
Militsyn et al. Application of dynamic neural network to search for objects in images
Nagarajan et al. A simple technique for removing snow from images
US20230419636A1 (en) Identifying anomaly location
Dai et al. Anomaly detection and segmentation based on defect repaired image resynthesis
Liu et al. Real-Time Metal-Surface-Defect Detection and Classification Using Advanced Machine Learning Technique
Ch et al. A Reference Based Approach to Detect Short Faults in PCB
Guzaitis et al. An efficient technique to detect visual defects in particleboards
Badoiu et al. OCR quality improvement using image preprocessing

Legal Events

Date Code Title Description
NAV Patent application has lapsed