WO2022250253A1 - Apparatus and method with manufacturing anomaly detection - Google Patents

Apparatus and method with manufacturing anomaly detection

Info

Publication number
WO2022250253A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
image
anomaly detection
data
detection model
Prior art date
Application number
PCT/KR2022/002927
Other languages
French (fr)
Inventor
Dae-Hwan Kim
Han-Sang Cho
Original Assignee
Samsung Electro-Mechanics Co., Ltd.
Priority date
Filing date
Publication date
Priority claimed from KR1020210127263A (KR102609153B1)
Application filed by Samsung Electro-Mechanics Co., Ltd.
Priority to CN202280026413.XA (CN117121053A)
Publication of WO2022250253A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

A method and apparatus with anomaly detection are provided. The method includes generating second image data by applying a first anomaly detection model to first image data, generating third image data corresponding to an image difference between the first image data and the second image data by performing a first logic operation on the first image data and the second image data, generating image mask data having feature information between the first image data and the second image data by applying a second anomaly detection model to the first image data and the second image data, and generating fourth image data having anomaly indicative information by performing a second logic operation on the third image data and the generated image mask data. The method can include learning the first anomaly detection model and the second anomaly detection model using a plurality of pieces of image data of a predetermined good article.

Description

APPARATUS AND METHOD WITH MANUFACTURING ANOMALY DETECTION
The present disclosure relates to an apparatus and method with manufacturing anomaly detection.
An apparatus for anomaly inspection or detection may examine whether an image of a manufactured article (hereinafter, referred to as 'a manufacturing image') in a manufacturing industry includes an anomaly, e.g., for quality control or anomaly correction. A typical apparatus for anomaly inspection/detection may employ typical image processing, e.g., according to an image processing algorithm.
Alternatively, if anomaly inspection/detection were performed using a deep learning technology, such as by a convolutional neural network (CNN), a large amount of training data and labels could be required or desired for the training of the CNN, but the availability of the appropriate data and labeling could be limited.
(Prior art Document)
(Patent Document 1) JP Patent Publication No. 2020-139905 (2019.04.03)
(Patent Document 2) KR Patent Publication No. 10-2020-0135730 (2019.05.22)
(Patent Document 3) KR Patent Publication No. 10-2021-0050186 (2019.10.28)
An aspect of the present disclosure is provided to introduce a selection of concepts in simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In one general aspect, an apparatus includes an image generator configured to learn a first anomaly detection model using a plurality of pieces of image data of a predetermined good article, and apply the learned first anomaly detection model to first image data to generate second image data, a first logic operator configured to perform a first logic operation on the first image data and the second image data, and output third image data corresponding to an image difference between the first image data and the second image data, a feature extractor configured to learn a second anomaly detection model using the plurality of pieces of image data of the predetermined good article, and apply the learned second anomaly detection model to the first image data and the second image data, to generate image mask data having feature information between the first image data and the second image data, and a second logic operator configured to perform a second logic operation on the third image data and the image mask data, to generate fourth image data having anomaly indicative information.
The learning of the first anomaly detection model may include performing compression-restoration learning based on the predetermined good article.
The learned first anomaly detection model may be configured to, when the first image data corresponds to data of a defective article, perform compression-restoration with respect to the data of the defective article to generate the second image data corresponding to one of a plurality of predetermined good articles.
For the first logic operation, the first logic operator may include a subtraction logic operation unit configured to subtract the first image data and the second image data.
The first logic operator may be configured to perform a process of learning a structural similarity (SSIM) autoencoder algorithm.
The feature extractor may be configured to extract feature vector information between the first image data and the second image data, and generate the image mask data having the feature information based on the feature vector information.
The learning of the second anomaly detection model may include learning a contra-embedding algorithm comparing, for each of a plurality of respective patch units, image information input to the first anomaly detection model and image information output by the first anomaly detection model of the predetermined good article, and classify, for each of the plurality of respective patch units, a non-defective article and a defective article based on results of the comparing.
The feature extractor may be configured to use the learned second anomaly detection model to extract feature vector information from the first image data for respective patch units of the first image data, and generate the image mask data having a select one between non-defective article information and defective article information for each of the respective patch units based on the extracted feature vector information.
The second logic operator may include a multiplication logic operation unit configured to multiply the third image data and the image mask data.
In one general aspect, an apparatus includes a processor configured to reconstruct non-defective article image data for input image data using a learned first anomaly detection model, where the learned first anomaly detection model includes a structural similarity (SSIM)-autoencoder, generate image mask data having feature information between the input image data and the reconstructed non-defective article image data using a contra-embedding algorithm-based learned second anomaly detection model, and generate defect indicative information based on a result of the SSIM autoencoder and the generated image mask data.
In one general aspect, a processor-implemented method includes generating second image data by applying a learned first anomaly detection model to first image data, generating third image data corresponding to an image difference between the first image data and the second image data by performing a first logic operation on the first image data and the second image data, generating image mask data having feature information between the first image data and the second image data by applying a learned second anomaly detection model to the first image data and the second image data, and generating fourth image data having anomaly indicative information by performing a second logic operation on the third image data and the generated image mask data.
The method may further include learning the first anomaly detection model using a plurality of pieces of image data of a predetermined good article, and learning the second anomaly detection model using the plurality of pieces of image data of the predetermined good article.
The learning of the first anomaly detection model may include performing compression-restoration learning based on the predetermined good article.
The learning of the first anomaly detection model may include learning a structural similarity (SSIM) autoencoder algorithm, and the generating of the third image data may include implementing the learned SSIM autoencoder algorithm.
The learning of the second anomaly detection model may include learning a contra-embedding algorithm comparing, for each of a plurality of respective patch units, image information input to the first anomaly detection model and image information output by the first anomaly detection model of the predetermined good article, and classifying, for each of the plurality of respective patch units, a non-defective article and a defective article based on results of the comparing.
The generating of the image mask data may include extracting the feature vector information from the first image data for respective patch units of the first image data, and generating the image mask data having a select one between non-defective article information and defective article information for each of the respective patch units based on the extracted feature vector information.
When the first image data corresponds to data of a defective article, the learned first anomaly detection model may perform compression-restoration with respect to the data of the defective article to generate the second image data corresponding to one of a plurality of predetermined good articles.
The performing of the first logic operation on the first image data and the second image data may include subtracting the first image data and the second image data.
The generating of the image mask data may include extracting the feature vector information between the first image data and the second image data, and generating the image mask data based on the extracted feature vector information.
The performing of the second logic operation may include multiplying the third image data and the image mask data.
Other features and aspects will be apparent from the following detailed description, the drawings, and the claims.
According to one or more embodiments, for a manufacturing image in a manufacturing process, contrastive learning of feature vectors may be performed, by contra-embedding algorithm learning for each patch unit, with respect to an original image and an image restored by an image generator such as an autoencoder or the like, to reduce a difference between feature vectors at the same location and increase a difference between feature vectors at different locations, and to thereby quickly detect defects of the manufacturing image.
In addition, in one or more embodiments, learning may proceed with only good articles, and thus the learning may be carried out in a timely manner for production. In some cases, it may be desired to measure sizes of defective articles. In an example, in order to address this with deep learning, a segmentation method may be used. When an automatically generated segmentation mask is used as a segmentation learning label, for example, there may be an effect of significantly reducing the time period for constituting data for segmentation.
FIG. 1 is a view illustrating an apparatus with anomaly detection, according to one or more embodiments.
FIG. 2 is a view illustrating example image data of FIG. 1, according to one or more embodiments.
FIG. 3 is a view illustrating an example operation of an anomaly detection model, according to one or more embodiments.
FIG. 4 is a view illustrating an example learning operation of an anomaly detection model of an apparatus with anomaly detection, using a contra-embedding approach, according to one or more embodiments.
FIG. 5 is a view illustrating an example anomaly detection result, according to one or more embodiments.
FIG. 6 is a view illustrating example results of an anomaly detection with respect to an example capacitor component, according to one or more embodiments.
FIG. 7 is a view illustrating example results of an anomaly detection with respect to an example camera module component, according to one or more embodiments.
FIG. 8 is a view illustrating an example method with anomaly detection, according to one or more embodiments.
Throughout the drawings and the detailed description, unless otherwise described or provided, the same drawing reference numerals will be understood to refer to the same or like elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known after an understanding of the disclosure of this application may be omitted for increased clarity and conciseness.
The features described herein may be embodied in different forms and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application.
The terminology used herein is for describing various examples only and is not to be used to limit the disclosure. The articles "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "includes," and "has" specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof.
Throughout the specification, when a component is described as being "connected to," or "coupled to" another component, it may be directly "connected to," or "coupled to" the other component, or there may be one or more other components intervening therebetween. In contrast, when an element is described as being "directly connected to," or "directly coupled to" another element, there can be no other elements intervening therebetween. As used herein, the term "and/or" includes any one and any combination of any two or more of the associated listed items.
Although terms such as "first," "second," and "third" may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Rather, these terms are only used to distinguish one member, component, region, layer, or section from another member, component, region, layer, or section. Thus, a first member, component, region, layer, or section referred to in the examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples.
Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains and based on an understanding of the disclosure of the present application. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the disclosure of the present application and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein. The use of the term "may" herein with respect to an example or embodiment (e.g., as to what an example or embodiment may include or implement) means that at least one example or embodiment exists where such a feature is included or implemented, while all examples are not limited thereto.
As noted above, if anomaly inspection/detection (hereinafter referred to as 'anomaly detection') were performed using a deep learning technology, such as through use of a typical convolutional neural network (CNN), it is found herein that the availability of the appropriate data and labeling could be limited. For example, the appropriate defective data and labeling may be very limited in a manufacturing site where anomaly detection may be desirable, such as where rapid changes may be required or desired.
For example, typically, a relatively large amount of data may be collected for training of such a CNN. For example, if anomaly detection were performed using such a CNN, such training data may be collected from states in which data of a good article and data of a defective article are balanced. However, considering such an example CNN-based anomaly detection approach, there may be a disadvantage in that it could take a long time (e.g., two (2) weeks to one (1) month, or more) to collect such data, especially in cases where defects have a very low frequency of occurrence.
Since typical deep learning approaches applied to anomaly detection could be very dependent on data, even if an algorithm based on social networking service (SNS) data or general images were alternatively applied and unsupervised learning were considered, it is found herein that performance thereof may also be degraded, especially considering the difficulty in collecting, and learning based on, defective data. Some approaches may be implemented with typical unsupervised learning algorithms to cope with such imperfectly collected data. However, within the context of applying such unsupervised learning to anomaly detection, even if unsupervised learning were applied to the collected data, learning may still be primarily based on good articles, e.g., based on there being sufficiently many good articles for proceeding with learning.
Accordingly, although unsupervised learning may typically have advantages in that a data collection period may be greatly shortened and an operation for data labeling may not be required, there may be a problem that such a typical unsupervised learning approach may not work properly for data with a large number of types and quantities of good articles, such as in the context of anomaly detection, since anomaly detection learning may also desirably be based on defective articles.
Typically, in the context of general unsupervised learning, an autoencoder may be a deep learning structure that can effectively compress data, and may be a structure widely used in unsupervised learning. For example, in the context of applying such unsupervised learning to anomaly detection, with an autoencoder having a CNN structure, the data distribution of an image could be learned. For example, when data of good articles has been learned, data of defective articles may be passed through the autoencoder structure, and the data of defective articles may be restored as data of good articles. Accordingly, an image difference from the original image may be used in detecting anomalies.
However, in this example applying of such unsupervised learning to anomaly detection, the autoencoder may have a bottleneck structure, and thus, there may be a problem of blurring in a high-frequency region, when data is restored, which may make it difficult to distinguish between a fine defective region and a high-frequency normal region, and thus, performance of the desired anomaly detection may be deteriorated.
As another example, an anomaly detection approach using a generative adversarial network (GAN) or using a variational autoencoder (VAE), rather than the autoencoder structure, may be used. However, in the GAN, there may be a disadvantage in that random vectors may desirably be fine-tuned to obtain an expected restoration, and in the VAE, there may be a problem in that the quality of a restored image may be deteriorated, as in the operation of the autoencoder example.
Thus, when applying anomaly detection in manufacturing sites, it may be desirable that a potential problem of data loss due to a bottleneck of the example autoencoder may be solved. It may also be beneficial for anomaly detection to be applicable to a wide variety of good articles.
FIG. 1 is a view illustrating an apparatus with anomaly detection, according to one or more embodiments, and FIG. 2 is a view illustrating example image data of FIG. 1, according to one or more embodiments.
Referring to FIGS. 1 and 2, an apparatus 10 with anomaly detection may include an image generator 100, a first logic operator 200, a feature extractor 300, and a second logic operator 400, for example. The apparatus 10, as well as the apparatuses described in FIGS. 2-4 are each representative of one or more processors, and one or more non-transitory memories, with learning and/or inference operations being respectively implemented by hardware or implemented by a combination of hardware and software, such as through instructions stored in the one or more memories, which when executed by at least one of the one or more processors configures the at least one or any combination of the one or more processors to implement any one, any combination, or all operations or methods described herein. As another non-limiting example, any of the image generator 100, first logic operator 200, feature extractor 300, and second logic operator 400 are respectively representative of such one or more processors, and may be representative of such one or more non-transitory memories which may further store such respective instructions for such respective learning and/or inference operations.
The image generator 100 may learn (i.e., train) a first anomaly detection model using a plurality of pieces of image data of a good article(s), as learning target(s). The image generator 100 may apply the learned first anomaly detection model to first image data VD1, as an inspection object, to generate second image data VD2.
The first logic operator 200 may be configured to perform a first logic operation on the first image data VD1 corresponding to an original image and the second image data VD2, as an image restored by the image generator 100, and may output third image data corresponding to an image difference between the first image data VD1 and the second image data VD2. As a non-limiting example, the first logic operator 200 may be a first logic operation unit.
The feature extractor 300 may learn (i.e., train) a second anomaly detection model using a plurality of pieces of image data of good article(s), as learning target(s). These plurality of pieces of image data of good article(s) may be the same plurality of pieces of image data of good article(s), as learning target(s), used in the learning of the first anomaly detection model or may be different. The feature extractor 300 may apply the learned second anomaly detection model to the first image data VD1 and the second image data VD2, which are inspection objects, to output image mask data VMD having feature information between the first image data VD1 and the second image data VD2.
For example, a contra-embedding approach may be used in the second anomaly detection model, in which comparing the first image data and the second image data of the good article, as the learning target, for each patch unit, may be learned, and thus a good article and a defective article may be classified on the first image data, for each patch unit.
For example, the second anomaly detection model may implement the contra-embedding learning approach, which will be described in greater detail below with reference to FIG. 4 and Equation 2, for example.
In an example, a combined model may be derived by combining the learned second anomaly detection model of the feature extractor 300 and the learned first anomaly detection model of the image generator 100. In another example, the image generator 100 with the first anomaly detection model may be predetermined, and the feature extractor 300 with the learned second anomaly detection model may be combined with the image generator 100, results of the image generator 100 compared to results of the feature extractor 300, and the final anomaly detection result generated based on results of this comparison.
Thus, as a non-limiting example, one or more embodiments may use two such models/networks (or the example combined model/network) to perform anomaly detection, where the final result of the anomaly detection is based on the respective results of the two such models/networks.
The image generator 100 may be configured to compress and then restore the first image data VD1 of the good article. In this case, when a defective article passes through the image generator 100 and the restoration is performed, second image data VD2 similar to data of the good article may be generated. When a difference between the two image data VD1 and VD2 is obtained by an implemented structural similarity (SSIM) algorithm, third image data VD3 having a first anomaly detection map may be generated.
The feature extractor 300 may perform a contra-embedding learning approach on the image data of the good article, as the learning target, and may then compare patch unit feature vectors between the first image data VD1 of the original image, as the inspection object, and the second image data VD2 restored by the image generator 100, to generate an image mask data VMD having a second anomaly detection map.
As a non-limiting example, the feature extractor 300 may compare the first image data VD1 for each patch N*32*32 and the second image data VD2 for each patch N*32*32. In this case, N denotes the number of patches, and '32*32' denotes a size of an image having thirty two (32) pixels * thirty two (32) pixels, as non-limiting examples.
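For illustration only (not part of the patent disclosure), a minimal Python sketch of splitting two images into N non-overlapping 32*32-pixel patches for such a per-patch comparison is given below; the 128*128 image size, the non-overlapping cropping, and the NumPy-based implementation are assumptions made purely for illustration.

import numpy as np

def to_patches(image: np.ndarray, patch: int = 32) -> np.ndarray:
    # Crop an (H, W) image into an (N, patch, patch) stack of non-overlapping patches.
    h, w = image.shape
    rows, cols = h // patch, w // patch
    return (image[: rows * patch, : cols * patch]
            .reshape(rows, patch, cols, patch)
            .swapaxes(1, 2)
            .reshape(-1, patch, patch))

vd1 = np.random.rand(128, 128)   # stand-in for the original first image data VD1
vd2 = np.random.rand(128, 128)   # stand-in for the restored second image data VD2
p1, p2 = to_patches(vd1), to_patches(vd2)
print(p1.shape, p2.shape)        # (16, 32, 32) (16, 32, 32): N patches from each image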
A second logic operation (e.g., multiplication) may be performed on the third image data VD3 and the image mask data VMD, to generate fourth image data VD4 having a final anomaly detection map. The fourth image data VD4 may be used to detect anomaly data, and may also be used as a segmentation mask, for example. Thus, the fourth image data VD4 may be, or include information, indicative of an anomaly or anomalies.
The second logic operator 400 may be configured to perform the second logic operation (e.g., multiplication) on the third image data VD3 and the image mask data VMD, to generate the fourth image data VD4 having anomaly information. As a non-limiting example, the second logic operator 400 may be a second logic operation unit.
Thus, with the good article, as the learning target, having been learned by the first anomaly detection model in the image generator 100, first image data corresponding to data of a defective article in the inspection process may be input, the restoration operation may be performed, and a result of the first anomaly detection model in the image generator 100 may be used to generate the second image data corresponding to data of the good article.
The first logic operation of the first logic operator 200 may include a subtraction logic operation of subtracting the first image data VD1 and the second image data VD2.
As a non-limiting example, the first logic operation of the first logic operator 200 may correspond to a process of learning a structural similarity (SSIM)-autoencoder algorithm.
For example, learning of the image generator 100 may use a process of learning an SSIM-autoencoder algorithm, and SSIM, using a loss function presented below in Equation 1, as a non-limiting example.
[Equation 1]
SSIM(p, q) = ((2μpμq + c1)(2σpq + c2)) / ((μp² + μq² + c1)(σp² + σq² + c2))
In Equation 1, p and q are image data obtained by cropping, as k x k patches, the first image data VD1 (an original image) and the second image data VD2 restored by the autoencoder, respectively; μp and μq are their average intensities; σp² and σq² are their variances; σpq is their covariance; and c1 and c2 are constants, generally 0.01 and 0.03, respectively, as non-limiting examples.
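For illustration only, a minimal Python sketch of the per-crop SSIM term of Equation 1 follows, assuming the example constants c1 = 0.01 and c2 = 0.03 noted above; the crop extraction and the exact loss aggregation used by the SSIM-autoencoder are not specified by this sketch.

import numpy as np

def ssim(p: np.ndarray, q: np.ndarray, c1: float = 0.01, c2: float = 0.03) -> float:
    # p and q are k x k crops of the original image VD1 and the restored image VD2.
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()            # variances (sigma_p^2, sigma_q^2)
    cov_pq = ((p - mu_p) * (q - mu_q)).mean()  # covariance (sigma_pq)
    return ((2 * mu_p * mu_q + c1) * (2 * cov_pq + c2)) / \
           ((mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2))

# A reconstruction loss could then be taken as, e.g., 1 - ssim(p, q), averaged over crops.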
As described above, when image data of the good article is learned by an autoencoder, data distribution of the good article may be learned. Therefore, when image data of the defective article is passed through the autoencoder, the image data of the defective article may be restored to image data corresponding to data of the good article.
FIG. 3 is a view illustrating an example operation of an anomaly detection model, according to one or more embodiments.
Referring to FIG. 3, an operation process of a first anomaly detection model will be described. The first anomaly detection model may have a learned data distribution of first image data VD1, and may include a process of restoring an input image to a good article, for example.
For example, when an NxN-sized image is input, the wxh-sized input image may be compressed into a one-dimensional M-sized vector, and the compressed vector may then be restored to an image having the wxh size, i.e., the original size of the input image prior to compression.
In this process, important elements in the data of VD1 may be compressed into the M-sized vector. When unlearned data (e.g., a defective image) is input as VD1 to the first anomaly detection model, one of the previously learned images may be restored, dependent on the input VD1.
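For illustration only, a minimal PyTorch sketch of this compress-restore idea follows; the layer types and sizes, the 128 x 128 input size, and the M = 64 bottleneck dimension are assumptions for illustration and do not reflect the actual network of the first anomaly detection model.

import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, side: int = 128, m: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # side -> side/2
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # side/2 -> side/4
            nn.Flatten(),
            nn.Linear(32 * (side // 4) ** 2, m),                   # bottleneck: M-sized vector
        )
        self.decoder = nn.Sequential(
            nn.Linear(m, 32 * (side // 4) ** 2), nn.ReLU(),
            nn.Unflatten(1, (32, side // 4, side // 4)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.rand(1, 1, 128, 128)   # stand-in for first image data VD1
vd2 = TinyAutoencoder()(x)       # restored image, stand-in for second image data VD2
print(vd2.shape)                 # torch.Size([1, 1, 128, 128])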
FIG. 4 is a view illustrating an example learning operation of an anomaly detection model of an apparatus with anomaly detection, using a contra-embedding approach, according to one or more embodiments.
Referring to FIG. 4, a feature extractor 300 may extract feature vector information FVI between first image data VD1 and second image data VD2, and may generate image mask data VMD having feature information based on the feature vector information FVI.
For example, a feature extractor 300 may learn a contra-embedding algorithm comparing first image data VD1 and second image data VD2 of a good article, a learning target, for each patch unit, and may classify a good article and a defective article on the first image data VD1 with respect to each patch unit.
Accordingly, the feature extractor 300 may apply the learned contra-embedding algorithm, to extract feature vector information from the first image data, as an inspection object, for each patch unit, and may output image mask data VMD having good article information and defective article information for each patch unit based on the feature vector information FVI.
For example, a loss function used in the learning by the contra-embedding algorithm is presented below in Equation 2, as a non-limiting example.
[Equation 2]
ℓ(i, j) = −log [ exp(sim(zi, zj)/τ) / Σ_{k≠i} exp(sim(zi, zk)/τ) ]
In Equation 2, zi is the feature vector obtained by the feature extractor 300 from a patch of the first image data (the original image), zj is the feature vector obtained by the feature extractor 300 from the corresponding patch of the second image data restored by the image generator 100, sim denotes cosine similarity, and τ is a temperature hyperparameter.
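For illustration only, a minimal PyTorch sketch of a contrastive (NT-Xent-style) loss consistent with Equation 2 follows, assuming tau is a temperature hyperparameter and that the positive pair for a patch of the original image is the patch at the same position in the restored image; the exact pairing and normalization used by the contra-embedding algorithm may differ.

import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # z1: (N, D) patch feature vectors from the original image VD1
    # z2: (N, D) feature vectors of the corresponding patches of the restored image VD2
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N unit-length vectors
    sim = (z @ z.t()) / tau                              # cosine similarities, scaled by tau
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])    # positives: same patch position
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)      # e.g., 16 patches, 128-dim features
print(contrastive_loss(z1, z2).item())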
Referring to FIG. 4, the feature extractor 300 may include a crop operator 310, a contrast learner 330, and an anomaly score calculator 350, for example.
First, the crop operator 310 may crop the original image and an image restored by the image generator 100 for each patch unit.
The contrast learner 330 may learn to increase the cosine similarity between feature vectors at the same position in the image, and may learn to decrease the cosine similarity between feature vectors at different positions in the image. In this case, the contrast learner 330 may further include a projector. For example, the projector may perform a role of compressing only the correspondingly important elements in the vectors extracted by the feature extractor.
The image generator 100 may render feature vectors at the same position identical to each other, and may render feature vectors at different positions different from each other, to clearly generate a difference depending on whether or not an anomaly is present. Therefore, as a non-limiting example, the anomaly score calculator 350 may use the above to calculate an anomaly score.
For example, a final anomaly detection map M may be acquired by multiplying a first anomaly detection map Mssim, included in the third image data obtained by the SSIM algorithm of the image generator 100, and a second anomaly detection map Mcontra, included in the image mask data VMD obtained by the contra-embedding algorithm of the feature extractor 300. As a non-limiting example, the anomaly score may be selected as the largest value in the anomaly detection map M, as presented below in Equation 3, for example.
[Equation 3]
M = Mssim ⊙ Mcontra (element-wise multiplication)
anomaly score = max over positions (x, y) of M(x, y)
For example, a second logic operation of a second logic operator 400 may be performed by a multiplication logic operation unit configured to multiply the third image data VD3 and the image mask data VMD.
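For illustration only, a minimal NumPy sketch of this second logic operation and of the anomaly score of Equation 3 follows; the 128 x 128 map size and the random values are placeholders.

import numpy as np

m_ssim = np.random.rand(128, 128)    # first anomaly detection map (from third image data VD3)
m_contra = np.random.rand(128, 128)  # second anomaly detection map (image mask data VMD)

m_final = m_ssim * m_contra          # element-wise multiplication: final anomaly detection map (VD4)
anomaly_score = m_final.max()        # anomaly score: largest value in the final map
print(anomaly_score)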
In addition, as noted above, referring to FIG. 4 and Equation 2, for example, the second anomaly detection model may correspond to a contra-embedding learning algorithm. For example, the second anomaly detection model may be trained by performing the contra-embedding learning on the second image data VD2 corresponding to an image restored by the image generator 100 and the first image data VD1 corresponding to an original image, for each patch unit, to decrease a difference between feature vectors in the same position and increase a difference between feature vectors in different positions.
FIG. 5 is a view illustrating an example anomaly detection result, according to one or more embodiments.
Referring to FIG. 5, the table illustrated in FIG. 5 represents example learning results as inspection accuracy, according to one or more embodiments.
First, example capacitor (e.g., MLCC) data is applied in an example implementation of apparatus 10 of FIG. 1, for example, resulting in capacitor-component 1 being learned using images of 6,880 good articles, with testing using images of 587 defective articles and images of 830 good articles. In this case, as shown in FIG. 5, inspection accuracy for the capacitor-component 1 implementation was 97.1%.
Next, another example capacitor (e.g., MLCC) data is applied in an example implementation of apparatus 10 of FIG. 1, for example, resulting in capacitor-component 2 being learned using images of 2,201 good articles, with testing using images of 639 good articles and images of 1,013 defective articles. In this case, as shown in FIG. 5, inspection accuracy for the capacitor-component 2 implementation was 94.5%.
FIG. 6 is a view illustrating example results of an anomaly detection with respect to a capacitor component, according to one or more embodiments, and FIG. 7 is a view illustrating example results of an anomaly detection with respect to a camera module component, according to one or more embodiments.
Referring to FIG. 6, for the capacitor component, original images (corresponding to first image data VD1) for cases C1 to C6 having six (6) different defects were inspected. As a result, it can be seen in FIG. 6 that the defects in the original images were accurately displayed on inspection images (corresponding to fourth image data VD4).
Referring to FIG. 7, for the camera module component, original images (VD1) for cases C1 to C4 having four (4) different defects were inspected. As a result, it can be seen in FIG. 7 that the defects in the original images were accurately displayed on inspection images (VD4).
Briefly, the above example descriptions made with reference to FIGS. 1 to 7 also demonstrate corresponding operations of example methods with anomaly detection. Likewise, the above and below descriptions of an example method with anomaly detection may be implemented by any, any combination, of apparatuses and components described above with respect to FIGS. 1 to 7. Therefore, in the below example method description with respect to FIG. 8, duplicated descriptions may be omitted.
FIG. 8 is a view illustrating an example method with anomaly detection, according to one or more embodiments.
In operation S100, a first anomaly detection model may be learned using a plurality of pieces of image data of a good article, as a learning target, and the learned first anomaly detection model may be applied to first image data VD1, as an inspection object, to generate second image data VD2. As a non-limiting example, this may be performed in an image generator 100 discussed above.
In operation S200, a first logic operation may be performed on the first image data VD1 and the second image data VD2, and third image data VD3 corresponding to an image difference between the first image data VD1 and the second image data VD2 may be output. As a non-limiting example, operation S200 may be performed by a first logic operator 200 discussed above.
In operation S300, a second anomaly detection model may be learned using the plurality of pieces of image data of the good article, as the learning target, and the learned second anomaly detection model may be applied to the first image data VD1, as the inspection object, and the second image data VD2, to output image mask data VMD having feature information between the first image data VD1 and the second image data VD2. As a non-limiting example, operation S300 may be performed by a feature extractor 300 discussed above.
In operation S400, a second logic operation may be performed on the third image data VD3 and the image mask data VMD, to generate fourth image data VD4 having anomaly information. As a non-limiting example, operation S400 may be performed by a second logic operator 400 discussed above.
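For illustration only, a minimal end-to-end inference sketch of operations S100 through S400 follows; the functions autoencoder and feature_extractor are hypothetical stand-ins for the learned first and second anomaly detection models, and the toy lambda functions are placeholders so that the sketch runs.

import numpy as np

def detect_anomaly(vd1, autoencoder, feature_extractor):
    vd2 = autoencoder(vd1)             # S100: restore a good-article-like image (VD2)
    vd3 = np.abs(vd1 - vd2)            # S200: subtraction -> difference map (VD3)
    vmd = feature_extractor(vd1, vd2)  # S300: per-patch mask (VMD), 1 = defective patch
    vd4 = vd3 * vmd                    # S400: multiplication -> final anomaly map (VD4)
    return vd4, vd4.max()              # anomaly map and anomaly score

vd1 = np.random.rand(128, 128)         # toy stand-in for an inspection image
vd4, score = detect_anomaly(
    vd1,
    autoencoder=lambda x: np.clip(x + 0.01, 0.0, 1.0),
    feature_extractor=lambda a, b: (np.abs(a - b) > 0.005).astype(float),
)
print(vd4.shape, score)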
The learned first anomaly detection model may perform compression-restoration learning on the good article, and, when the first image data corresponding to a defective article is input and a restoration operation is performed, may generate the second image data corresponding to the good article.
The first logic operation of operation S200 may include a subtraction logic operation of subtracting the first image data VD1 and the second image data VD2, for example.
The first logic operation of operation S200 may correspond to a process of learning a structural similarity (SSIM)-autoencoder algorithm, for example.
In operation S300, feature vector information FVI between the first image data VD1 and the second image data VD2 may be extracted, and image mask data VMD having feature information based on the feature vector information FVI may be generated.
In operation S300, based on or when learning the first anomaly detection model, a contra-embedding algorithm comparing the first image data VD1 and the second image data VD2 of the good article, as the learning target, for each patch unit, may be learned, and a good article and a defective article on the first image data VD1, with respect to each patch unit, may be classified.
In operation S300, the learned contra-embedding algorithm may be applied, to extract feature vector information FVI from the first image data VD1, as an inspection object, for each patch unit, and output image mask data VMD having good article information and defective article information for each patch unit based on the feature vector information FVI.
The second logic operation of operation S400 may include a multiplication logic operation of multiplying the third image data VD3 and the image mask data VMD.
The apparatuses with anomaly detection, including electronic apparatuses, image generators, logic operators, feature extractors, crop operators, contrast learners, anomaly score calculators, projectors, and other apparatuses, devices, units, modules, and components described herein with respect to FIGS. 1 through 8 are implemented by hardware components. Examples of hardware components that may be used to perform the operations described in this application where appropriate include controllers, sensors, generators, drivers, memories, comparators, arithmetic logic units, adders, subtractors, multipliers, dividers, integrators, and any other electronic components configured to perform the operations described in this application. In other examples, one or more of the hardware components that perform the operations described in this application are implemented by computing hardware, for example, by one or more processors or computers. A processor or computer may be implemented by one or more processing elements, such as an array of logic gates, a controller and an arithmetic logic unit, a digital signal processor, a microcomputer, a programmable logic controller, a field-programmable gate array, a programmable logic array, a microprocessor, or any other device or combination of devices that is configured to respond to and execute instructions in a defined manner to achieve a desired result. In one example, a processor or computer includes, or is connected to, one or more memories storing instructions or software that are executed by the processor or computer. Hardware components implemented by a processor or computer may execute instructions or software, such as an operating system (OS) and one or more software applications that run on the OS, to perform the operations described in this application. The hardware components may also access, manipulate, process, create, and store data in response to execution of the instructions or software. For simplicity, the singular term "processor" or "computer" may be used in the description of the examples described in this application, but in other examples multiple processors or computers may be used, or a processor or computer may include multiple processing elements, or multiple types of processing elements, or both. For example, a single hardware component or two or more hardware components may be implemented by a single processor, or two or more processors, or a processor and a controller. One or more hardware components may be implemented by one or more processors, or a processor and a controller, and one or more other hardware components may be implemented by one or more other processors, or another processor and another controller. One or more processors, or a processor and a controller, may implement a single hardware component, or two or more hardware components. A hardware component may have any one or more of different processing configurations, examples of which include a single processor, independent processors, parallel processors, single-instruction single-data (SISD) multiprocessing, single-instruction multiple-data (SIMD) multiprocessing, multiple-instruction single-data (MISD) multiprocessing, and multiple-instruction multiple-data (MIMD) multiprocessing.
The methods illustrated in FIGS. 1-8 that perform the operations described in this application are performed by computing hardware, for example, by one or more processors or computers, implemented as described above executing instructions or software to perform the operations described in this application that are performed by the methods . For example, a single operation or two or more operations may be performed by a single processor, or two or more processors, or a processor and a controller. One or more operations may be performed by one or more processors, or a processor and a controller, and one or more other operations may be performed by one or more other processors, or another processor and another controller. One or more processors, or a processor and a controller, may perform a single operation, or two or more operations.
Instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above may be written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the one or more processors or computers to operate as a machine or special-purpose computer to perform the operations that are performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the one or more processors or computers, such as machine code produced by a compiler. In another example, the instructions or software includes higher-level code that is executed by the one or more processors or computer using an interpreter. The instructions or software may be written using any programming language based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions herein, which disclose algorithms for performing the operations that are performed by the hardware components and the methods as described above.
The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD- Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, blue-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), flash memory, a card type memory such as multimedia card micro or a card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to one or more processors or computers so that the one or more processors or computers can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers.
While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, in addition to the above disclosure, the scope of the disclosure may also be defined by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
[Explanation of Reference numerals]
100: image generator
200: first operation unit
300: feature extractor
400: second operation unit
VD1: first image data
VD2: second image data
VD3: third image data
VMD: image mask data
VD4: fourth image data
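
The reference numerals above label a single data flow, which claims 1 and 11 below also recite: the image generator (100) reconstructs a good-article version (VD2) of the input (VD1), the first operation unit (200) subtracts the two to obtain a difference image (VD3), the feature extractor (300) produces a patch-wise good/defective mask (VMD), and the second operation unit (400) multiplies VD3 by VMD to obtain the anomaly-indicative output (VD4). The following is a minimal sketch of that flow only; the functions reconstruct_good_article and extract_patch_mask, the patch size, and the threshold are hypothetical placeholders standing in for the learned first and second anomaly detection models, not the patented implementation.

```python
# Minimal sketch of the data flow labelled by the reference numerals:
# 100 -> VD2, 200 -> VD3, 300 -> VMD, 400 -> VD4 (names and values are illustrative assumptions).
import numpy as np

def reconstruct_good_article(vd1: np.ndarray) -> np.ndarray:
    """Stand-in for the image generator (100): returns a 'good article' reconstruction VD2."""
    return np.clip(vd1, 0.0, 1.0)  # placeholder; a learned autoencoder would go here

def extract_patch_mask(vd1: np.ndarray, vd2: np.ndarray, patch: int = 16) -> np.ndarray:
    """Stand-in for the feature extractor (300): per-patch non-defective/defective mask VMD."""
    h, w = vd1.shape
    mask = np.zeros_like(vd1)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            a = vd1[y:y + patch, x:x + patch]
            b = vd2[y:y + patch, x:x + patch]
            # mark the patch as defective (1) if its content differs strongly from the reconstruction
            mask[y:y + patch, x:x + patch] = float(np.abs(a - b).mean() > 0.1)
    return mask

def detect_anomaly(vd1: np.ndarray) -> np.ndarray:
    vd2 = reconstruct_good_article(vd1)   # image generator (100): second image data
    vd3 = np.abs(vd1 - vd2)               # first operation unit (200): subtraction -> third image data
    vmd = extract_patch_mask(vd1, vd2)    # feature extractor (300): image mask data
    vd4 = vd3 * vmd                       # second operation unit (400): multiplication -> fourth image data
    return vd4                            # anomaly-indicative information

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))          # first image data VD1 (grayscale, normalized to [0, 1])
    print(detect_anomaly(image).shape)    # (64, 64)
```

The multiplication in the final step suppresses reconstruction noise in patches the mask classifies as non-defective, so only differences inside defective patches remain in VD4.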

Claims (20)

  1. An apparatus, comprising:
    an image generator configured to learn a first anomaly detection model using a plurality of pieces of image data of a predetermined good article, and apply the learned first anomaly detection model to first image data to generate second image data;
    a first logic operator configured to perform a first logic operation on the first image data and the second image data, and output third image data corresponding to an image difference between the first image data and the second image data;
    a feature extractor configured to learn a second anomaly detection model using the plurality of pieces of image data of the predetermined good article, and apply the learned second anomaly detection model to the first image data and the second image data, to generate image mask data having feature information between the first image data and the second image data; and
    a second logic operator configured to perform a second logic operation on the third image data and the image mask data, to generate fourth image data having anomaly indicative information.
  2. The apparatus of claim 1, wherein the learning of the first anomaly detection model includes performing compression-restoration learning based on the predetermined good article.
  3. The apparatus of claim 1, wherein the learned first anomaly detection model is configured to, when the first image data corresponds to data of a defective article, perform compression-restoration with respect to the data of the defective article to generate the second image data corresponding to one of a plurality of predetermined good articles.
  4. The apparatus of claim 1, wherein, for the first logic operation, the first logic operator comprises a subtraction logic operation unit configured to subtract the first image data and the second image data.
  5. The apparatus of claim 1, wherein the first logic operator is further configured to perform a process of learning a structural similarity (SSIM) autoencoder algorithm.
  6. The apparatus of claim 1, wherein the feature extractor is configured to extract feature vector information between the first image data and the second image data, and generate the image mask data having the feature information based on the feature vector information.
  7. The apparatus of claim 1, wherein the learning of the second anomaly detection model includes learning a contra-embedding algorithm comparing, for each of a plurality of respective patch units, image information input to the first anomaly detection model and image information output by the first anomaly detection model of the predetermined good article, and classifying, for each of the plurality of respective patch units, a non-defective article and a defective article based on results of the comparing.
  8. The apparatus of claim 7, wherein the feature extractor is configured to use the learned second anomaly detection model to extract feature vector information from the first image data for respective patch units of the first image data, and generate the image mask data having a select one between non-defective article information and defective article information for each of the respective patch units based on the extracted feature vector information.
  9. The apparatus of claim 1, wherein the second logic operator comprises a multiplication logic operation unit configured to multiply the third image data and the image mask data.
  10. An apparatus, comprising:
    a processor configured to:
    reconstruct non-defective article image data for input image data using a learned first anomaly detection model, wherein the learned first anomaly detection model includes a structural similarity (SSIM)-autoencoder;
    generate image mask data having feature information between the input image data and the reconstructed non-defective article image data using a contra-embedding algorithm-based learned second anomaly detection model; and
    generate defect indicative information based on a result of the SSIM autoencoder and the generated image mask data.
  11. A processor-implemented method, comprising:
    generating second image data by applying a learned first anomaly detection model to first image data;
    generating third image data corresponding to an image difference between the first image data and the second image data by performing a first logic operation on the first image data and the second image data;
    generating image mask data having feature information between the first image data and the second image data by applying a learned second anomaly detection model to the first image data and the second image data; and
    generating fourth image data having anomaly indicative information by performing a second logic operation on the third image data and the generated image mask data.
  12. The method of claim 11, further comprising:
    learning the first anomaly detection model using a plurality of pieces of image data of a predetermined good article; and
    learning the second anomaly detection model using the plurality of pieces of image data of the predetermined good article.
  13. The method of claim 12, wherein the learning of the first anomaly detection model comprises performing compression-restoration learning based on the predetermined good article.
  14. The method of claim 12,
    wherein the learning of the first anomaly detection model comprises learning a structural similarity (SSIM) autoencoder algorithm, and
    wherein the generating of the third image data includes implementing the learned SSIM autoencoder algorithm.
  15. The method of claim 12,
    wherein the learning of the second anomaly detection model includes learning a contra-embedding algorithm comparing, for each of a plurality of respective patch units, image information input to the first anomaly detection model and image information output by the first anomaly detection model of the predetermined good article, and classifying, for each of the plurality of respective patch units, a non-defective article and a defective article based on results of the comparing.
  16. The method of claim 15, wherein the generating of the image mask data includes extracting feature vector information from the first image data for respective patch units of the first image data, and generating the image mask data having a select one between non-defective article information and defective article information for each of the respective patch units based on the extracted feature vector information.
  17. The method of claim 11, wherein, when the first image data corresponds to data of a defective article, the learned first anomaly detection model performs compression-restoration with respect to the data of the defective article to generate the second image data corresponding to one of a plurality of predetermined good articles.
  18. The method of claim 11, wherein the performing of the first logic operation on the first image data and the second image data comprises subtracting the first image data and the second image data.
  19. The method of claim 11, wherein the generating of the image mask data comprises extracting feature vector information between the first image data and the second image data, and generating the image mask data based on the extracted feature vector information.
  20. The method of claim 11, wherein the performing of the second logic operation comprises multiplying the third image data and the image mask data.
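
Claims 5, 10, and 14 refer to a structural similarity (SSIM) autoencoder learned on good-article images through compression-restoration learning (claims 2 and 13). The sketch below shows one common way such a model can be trained and then used to obtain a per-pixel dissimilarity map; the network layout, window size, constants, and training loop are illustrative assumptions and are not taken from the disclosure.

```python
# Hedged sketch of an SSIM-autoencoder: train on good-article images only, then use
# 1 - SSIM(input, reconstruction) as a per-pixel dissimilarity map.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ssim_map(x: torch.Tensor, y: torch.Tensor, window: int = 11) -> torch.Tensor:
    """Per-pixel SSIM between images in [0, 1], using a uniform local window."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

class TinyAutoencoder(nn.Module):
    """Illustrative compression-restoration network (stand-in for the first anomaly detection model)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Unsupervised training on good-article images only.
model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
good_batch = torch.rand(8, 1, 64, 64)               # stand-in for good-article image data
for _ in range(2):                                  # a few steps, for illustration only
    recon = model(good_batch)
    loss = 1.0 - ssim_map(recon, good_batch).mean() # SSIM reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: the (1 - SSIM) map plays the role of the difference image that is subsequently
# gated by the patch-wise image mask data (multiplication, as in claims 9 and 20).
test_image = torch.rand(1, 1, 64, 64)
with torch.no_grad():
    difference = 1.0 - ssim_map(model(test_image), test_image)
```

Because the autoencoder only ever sees good articles during training, it tends to reconstruct defective inputs as their nearest good-article counterpart, which is what makes the input-versus-reconstruction difference informative about anomalies.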
PCT/KR2022/002927 2021-05-25 2022-03-02 Apparatus and method with manufacturing anomaly detection WO2022250253A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280026413.XA CN117121053A (en) 2021-05-25 2022-03-02 Apparatus and method for manufacturing anomaly detection

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20210067188 2021-05-25
KR10-2021-0067188 2021-05-25
KR1020210127263A KR102609153B1 (en) 2021-05-25 2021-09-27 Apparatus and method for detecting anomalies in manufacturing images based on deep learning
KR10-2021-0127263 2021-09-27

Publications (1)

Publication Number Publication Date
WO2022250253A1 true WO2022250253A1 (en) 2022-12-01

Family

ID=84229985

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/002927 WO2022250253A1 (en) 2021-05-25 2022-03-02 Apparatus and method with manufacturing anomaly detection

Country Status (1)

Country Link
WO (1) WO2022250253A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020171841A (en) * 2018-09-12 2020-10-22 株式会社Splink Diagnosis assistance system and method
KR20200132665A (en) * 2019-05-17 2020-11-25 삼성전자주식회사 Attention layer included generator based prediction image generating apparatus and controlling method thereof
US20210074036A1 (en) * 2018-03-23 2021-03-11 Memorial Sloan Kettering Cancer Center Deep encoder-decoder models for reconstructing biomedical images
EP3798916A1 (en) * 2019-09-24 2021-03-31 Another Brain Transformation of data samples to normal data
US20210120255A1 (en) * 2020-01-10 2021-04-22 Intel Corporation Image compression using autoencoder information

Similar Documents

Publication Publication Date Title
Xing et al. A convolutional neural network-based method for workpiece surface defect detection
WO2012115332A1 (en) Device and method for analyzing the correlation between an image and another image or between an image and a video
WO2020153623A1 (en) Defect inspection device
WO2020124701A1 (en) Temperature measurement method and apparatus
WO2021107610A1 (en) Method and system for generating a tri-map for image matting
CN111369550A (en) Image registration and defect detection method, model, training method, device and equipment
WO2020073201A1 (en) Method and system for circuit breaker condition monitoring
WO2021187683A1 (en) Method and system for testing quality of new product by using deep learning
WO2020004815A1 (en) Method of detecting anomaly in data
WO2022250253A1 (en) Apparatus and method with manufacturing anomaly detection
CN113128555B (en) Method for detecting abnormality of train brake pad part
Ibrahim et al. A printed circuit board inspection system with defect classification capability
CN117333440A (en) Power transmission and distribution line defect detection method, device, equipment, medium and program product
Prabha et al. Defect detection of industrial products using image segmentation and saliency
Lu et al. A light CNN model for defect detection of LCD
CN115330688A (en) Image anomaly detection method considering tag uncertainty
Revathy et al. Fabric defect detection and classification via deep learning-based improved Mask RCNN
KR102609153B1 (en) Apparatus and method for detecting anomalies in manufacturing images based on deep learning
KR20210057887A (en) Deep learning and image processing based bolt loosening detection method
CN116758040B (en) Copper-plated plate surface fold defect detection method, device, equipment and storage medium
WO2022245046A1 (en) Image processing device and operation method thereof
WO2020231006A1 (en) Image processing device and operation method therefor
Kim et al. Film line scratch detection using texture and shape information
WO2020251172A1 (en) Data generation method
WO2023182796A1 (en) Artificial intelligence device for sensing defective products on basis of product images and method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22811448

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22811448

Country of ref document: EP

Kind code of ref document: A1