WO2021181749A1 - Learning device, image inspection device, learned parameter, learning method, and image inspection method - Google Patents

Learning device, image inspection device, learned parameter, learning method, and image inspection method

Info

Publication number
WO2021181749A1
WO2021181749A1 (PCT/JP2020/041655, JP2020041655W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
inspection
learning
unit
variance
Prior art date
Application number
PCT/JP2020/041655
Other languages
English (en)
Japanese (ja)
Inventor
悟史 岡本
Original Assignee
株式会社Screenホールディングス
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Screenホールディングス
Publication of WO2021181749A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • The present invention relates to a technique for detecting an abnormality in an object.
  • An inspection process may be provided on the manufacturing line to perform inspections such as the detection of defective products.
  • Conventionally, product inspection has been performed visually by humans (visual inspection), which has the problem of high labor cost. Therefore, in order to automate part or all of the inspection process, systems that automatically inspect products by machine are being developed.
  • A defect inspection method using a machine learning technique called a convolutional neural network has been proposed.
  • A typical method using a convolutional neural network consists of convolutional layers and fully connected layers, and is trained to classify non-defective and defective products.
  • The convolutional layers extract feature values from the image, and the fully connected layers in the final stage are trained to perform the classification using those feature values.
  • The trained network outputs a judgment result indicating whether a product is non-defective or defective.
  • Machine learning in which a correct label indicating whether a product is non-defective or defective is given during training is called supervised learning.
  • Patent Document 1 points out that a sufficient number of non-defective and defective samples are required for such learning, and proposes inspection by unsupervised learning. Specifically, learning is performed so as to reconstruct the input image using a convolutional neural network called an autoencoder, which is composed of convolutional layers in the first half and deconvolutional layers in the second half. Only non-defective images are used for learning.
  • The convolutional layers compress the input image into smaller data, and the deconvolutional layers restore the original input image from the compressed data.
  • After training, the convolutional layers in the first half can output feature values of the image.
  • The feature values extracted using the trained convolutional layers are input to a classifier such as an isolation forest, and the quality of the product is judged.
  • Patent Document 2 acquires a difference image between the image input to an autoencoder and the image output by the autoencoder in order to discriminate defects in industrial parts in which only a small part of the image differs. Because the autoencoder is trained using only non-defective images, the image it reconstructs contains no defects, so the defective portion can be detected by taking the difference.
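As a simple illustration of this difference-based approach (not a formula quoted from Patent Document 2), the defect map can be written as the per-pixel absolute difference:

$$
d_k = \left| x_k - x'_k \right|
$$

where x is the input image, x' is the image reconstructed by the autoencoder, and k is the pixel index; defects appear as regions where d_k is large.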
  • Non-Patent Document 1 describes an inspection technique using a Variational Auto-Encoder (VAE).
  • A variational autoencoder is a kind of autoencoder that enables more advanced learning by introducing a probabilistic model.
  • In the variational autoencoder of Non-Patent Document 1, instead of simply reconstructing each pixel, the output is modeled with a multivariate normal distribution, and learning is performed so as to estimate the mean and variance of that normal distribution while taking the restoration error into account. By considering not only the difference between input and output but also how much error occurs during restoration, more accurate defect detection is possible.
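To make the role of the estimated variance concrete, a standard formulation for Gaussian-output variational autoencoders (an illustration, not a formula quoted from Non-Patent Document 1) scores each pixel by its negative log-likelihood:

$$
-\log p\!\left(x_k \mid \mu_k, \sigma_k^{2}\right)
= \frac{\left(x_k - \mu_k\right)^{2}}{2\sigma_k^{2}}
+ \frac{1}{2}\log\!\left(2\pi\sigma_k^{2}\right)
$$

A given reconstruction error is therefore penalized less at pixels where a large variance, that is, a large restoration error, is expected.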
  • However, the method of Non-Patent Document 1 has the problem that accuracy drops greatly when the luminance distribution of each pixel across the images used for learning does not follow a normal distribution. For example, if the luminance distribution is strongly skewed to one side, the difference between the actual luminance and the mean value becomes large, so over-detection may increase.
  • An object of the present invention is to provide a technique for suppressing over-detection when detecting an abnormality in an image.
  • The first aspect is a learning device for constructing an image inspection device. It is provided with a learning unit that, using a plurality of object images as learning data, trains a probability model so that the error between input and output is reduced and so that, for each unit pixel, the mean, variance, and a higher-order statistic of a distribution approximated by a specific distribution are output.
  • The second aspect is the learning device of the first aspect, in which the object image is a non-defective product image obtained by imaging a non-defective product.
  • The third aspect is the learning device of the first or second aspect, in which the specific distribution is a normal distribution.
  • The fourth aspect is the learning device of any one of the first to third aspects, in which the probability model is a variational autoencoder.
  • The fifth aspect is the learning device of any one of the first to fourth aspects, in which the higher-order statistic is skewness or kurtosis.
  • The sixth aspect is an image inspection device using a learning model trained by the learning device of any one of the first to fifth aspects. It includes a statistic acquisition unit that inputs an inspection image to be inspected into the probability model having the parameters learned by the learning unit and acquires the mean, variance, and higher-order statistic for each unit pixel of the inspection image, and an abnormality detection unit that detects an abnormality in the inspection image based on the mean, variance, and higher-order statistic acquired by the statistic acquisition unit.
  • The seventh aspect is the image inspection device of the sixth aspect, in which the abnormality detection unit detects abnormalities while excluding unit pixels of the inspection image for which the higher-order statistic acquired by the statistic acquisition unit exceeds a predetermined threshold value.
  • The eighth aspect is a learned parameter of the probability model acquired by the learning device of any one of the first to fifth aspects.
  • The ninth aspect is a learning method for constructing an image inspection method. It involves training a probability model, using a plurality of object images as training data, so that the error between input and output is reduced and so that, for each unit pixel, the mean, variance, and a higher-order statistic of a distribution approximated by a specific distribution are output.
  • The tenth aspect is an image inspection device. It includes a statistic acquisition unit that acquires the mean, variance, and higher-order statistic for each unit pixel of an inspection image using a probability model having learned parameters, the model having been trained, using a plurality of object images as training data, so that the error between input and output is reduced and so that the mean, variance, and higher-order statistic of a distribution approximated by a specific distribution are output for each unit pixel; and an abnormality detection unit that detects an abnormality in the inspection image based on the mean, variance, and higher-order statistic for each unit pixel of the inspection image acquired by the statistic acquisition unit.
  • The eleventh aspect is the image inspection device of the tenth aspect, in which the abnormality detection unit detects abnormalities while excluding unit pixels of the inspection image for which the higher-order statistic acquired by the statistic acquisition unit exceeds a predetermined threshold value.
  • The twelfth aspect is an image inspection method. It includes a statistic acquisition step of acquiring the mean, variance, and higher-order statistic for each unit pixel of an inspection image using a probability model having learned parameters, the model having been trained, using a plurality of object images as training data, so that the error between input and output is reduced and so that the mean, variance, and higher-order statistic of a distribution approximated by a specific distribution are output for each unit pixel; and an abnormality detection step of detecting an abnormality in the inspection image based on the mean, variance, and higher-order statistic for each unit pixel of the inspection image acquired in the statistic acquisition step.
  • Since unit pixels that do not follow the specific distribution can be identified based on the higher-order statistic estimated using the probability model, over-detection can be suppressed.
  • FIG. 1 is a diagram showing an image inspection device 10 of an embodiment.
  • the image inspection device 10 detects defects (abnormalities) in the object 90 by analyzing the image of the object 90.
  • the object 90 is specifically a tablet, but is not limited to a tablet.
  • the image inspection device 10 includes a camera 110 and an information processing device 120.
  • the camera 110 is electrically connected to the information processing device 120.
  • the camera 110 includes an image sensor.
  • the camera 110 outputs an image signal obtained by imaging the object 90 using the image sensor to the information processing device 120.
  • the object 90 imaged by the camera 110 may be stopped at a predetermined position, or may be moved in a predetermined direction by a transport mechanism such as a belt conveyor.
  • FIG. 2 is a diagram showing a hardware configuration of the information processing device 120 of the embodiment.
  • the information processing device 120 has a configuration as a computer.
  • The information processing device 120 includes a processor 121, a RAM 123, a storage unit 125, an input unit 127, a display unit 129, a device I/F 131, and a communication I/F 133.
  • The processor 121, the RAM 123, the storage unit 125, the input unit 127, the display unit 129, the device I/F 131, and the communication I/F 133 are electrically connected to each other via a bus 135.
  • the processor 121 includes a CPU or a GPU.
  • the RAM 123 is a storage medium capable of reading and writing information, and specifically, an SDRAM.
  • the storage unit 125 is a recording medium capable of reading and writing information, and specifically includes an HDD (hard disk drive) or an SSD (solid state drive).
  • the storage unit 125 may include a ROM, a portable optical disk, a magnetic disk, a semiconductor memory, or the like.
  • the storage unit 125 stores the program P.
  • the processor 121 realizes various functions by executing the program P with the RAM 123 as a work area.
  • the program P may be provided or distributed to the information processing apparatus 120 via the network.
  • the input unit 127 is an input device that accepts user's operation input, specifically, a mouse, a keyboard, or the like.
  • the display unit 129 is a display device that displays images representing various types of information, and is specifically a liquid crystal display.
  • The device I/F 131 is an interface for electrically connecting the camera 110 to the information processing device 120.
  • The communication I/F 133 is an interface for connecting the information processing device 120 to a network such as the Internet.
  • The camera 110 may instead be connected to the information processing device 120 via the communication I/F 133. That is, the camera 110 is not an essential component of the image inspection device 10, which may include only the information processing device 120.
  • FIG. 3 is a diagram showing a functional configuration included in the information processing device 120 of the embodiment.
  • the information processing device 120 includes an acquisition unit 141, a learning unit 143, and an inspection unit 145.
  • the acquisition unit 141, the learning unit 143, and the inspection unit 145 are functions realized by operating the processor 121 according to the program P.
  • The learning unit 143 does not have to be provided in the information processing device 120 and may be provided in another computer.
  • the acquisition unit 141 acquires the object image 91 obtained by capturing the object 90 with the camera 110.
  • the image to be inspected is referred to as an inspection image.
  • the learning unit 143 performs learning using the variational autoencoder 20, which is a probability model described later.
  • the inspection unit 145 inputs the inspection image to the variational autoencoder 20 and detects an abnormality in the inspection image based on the output result.
  • FIG. 4 is a diagram conceptually showing the variational autoencoder 20.
  • An autoencoder is a neural network technology, also called a self-encoder.
  • VAE: Variational Auto-Encoder. GAN: Generative Adversarial Network.
  • the variational autoencoder 20 is a function composed of a neural network.
  • The data x (object image 91) is input to the convolution layer 21 and converted into a dimensionally reduced latent variable z.
  • The latent variable z is input to the first deconvolution layer 231, and the reconstructed data x' is output.
  • The convolution layer 21 is also referred to as an encoder, and the first deconvolution layer 231 is also referred to as a decoder.
  • The encoder and the decoder are trained so that the reconstructed data x' is close to the data x.
  • In the variational autoencoder 20, the data x and the latent variable z are treated as random variables. That is, the encoder (convolution layer 21) and the decoder (first deconvolution layer 231) are not deterministic transformations; they are stochastic transformations that include sampling from the probability distributions p(z|x) and p(x|z), respectively.
  • Projection functions applied to the inputs x and z output the respective parameters μ and σ of these probability distributions.
  • In the present embodiment, p(z|x) and p(x|z) are approximated by normal distributions.
  • The output of the decoder then consists of the parameters of the probability distribution approximated by the normal distribution (mean μx and variance σx²). That is, when the object image 91 is input to the variational autoencoder 20 as the data x, the first deconvolution layer 231 outputs the mean μx and the variance σx² for each pixel of the object image 91.
  • The mean μx output by the first deconvolution layer 231 represents an image obtained by reconstructing the object image 91 input to the variational autoencoder 20. The variance σx² output by the first deconvolution layer 231 represents the variation at the time of reconstruction.
  • It is not essential that p(z|x) and p(x|z) are approximated by a normal distribution; they may be approximated by a distribution other than the normal distribution, such as the Bernoulli distribution or the multinomial distribution.
  • The variational autoencoder 20 further has a second deconvolution layer 233.
  • The second deconvolution layer 233 is connected to the output side of the convolution layer 21.
  • From the latent variable z output by the convolution layer 21, the second deconvolution layer 233 outputs a higher-order statistic for each pixel of the object image 91 input to the convolution layer 21.
  • The higher-order statistic output by the second deconvolution layer 233 is the skewness.
  • Skewness is a statistic that indicates the degree of asymmetry of a distribution: it is 0 if the distribution is not skewed, and it takes a positive or negative value if the distribution is skewed to one side or the other.
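For reference, the skewness of a random variable X with mean μ and standard deviation σ is defined as:

$$
\gamma = \mathbb{E}\!\left[\left(\frac{X-\mu}{\sigma}\right)^{3}\right]
$$

so that a symmetric distribution has γ = 0, a right-tailed distribution has γ > 0, and a left-tailed distribution has γ < 0.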
  • The skewness output by the second deconvolution layer 233 is a value indicating how much the distribution described by the mean μx and the variance σx² output by the first deconvolution layer 231 is distorted with respect to a normal distribution.
  • In the present embodiment, the mean and the variance are output from the first deconvolution layer 231 and the skewness is output from the second deconvolution layer 233.
  • However, the mean, variance, and skewness may instead be output from a common deconvolution layer.
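As a concrete illustration of this encoder followed by two deconvolution branches, here is a minimal PyTorch-style sketch; the layer sizes, channel counts, and the use of a log-variance output are assumptions for illustration and are not specified by the patent.

```python
import torch
import torch.nn as nn

class SkewVAE(nn.Module):
    """Minimal sketch: encoder + two deconvolution branches.

    Branch 1 outputs per-pixel mean and log-variance,
    branch 2 outputs per-pixel skewness. Layer sizes are
    illustrative assumptions, not taken from the patent.
    """

    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder (convolution layer 21): 64x64 grayscale image -> latent statistics
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * 16 * 16, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(32 * 16 * 16, latent_dim)  # log-variance of q(z|x)

        def deconv_branch(out_channels: int) -> nn.Sequential:
            # One decoder branch: latent vector -> per-pixel map(s)
            return nn.Sequential(
                nn.Linear(latent_dim, 32 * 16 * 16),
                nn.Unflatten(1, (32, 16, 16)),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),           # 16 -> 32
                nn.ReLU(),
                nn.ConvTranspose2d(16, out_channels, 4, stride=2, padding=1),  # 32 -> 64
            )

        # First deconvolution layer 231: per-pixel mean and log-variance of p(x|z)
        self.decoder_stats = deconv_branch(2)
        # Second deconvolution layer 233: per-pixel skewness
        self.decoder_skew = deconv_branch(1)

    def forward(self, x):
        h = self.encoder(x)
        mu_z, logvar_z = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z from q(z|x)
        z = mu_z + torch.randn_like(mu_z) * torch.exp(0.5 * logvar_z)
        stats = self.decoder_stats(z)
        mu_x, logvar_x = stats[:, :1], stats[:, 1:]
        skew_x = self.decoder_skew(z)
        return mu_x, logvar_x, skew_x, mu_z, logvar_z
```

Both branches read the same latent variable z, matching the description that the second deconvolution layer 233 is connected to the output side of the convolution layer 21.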
  • The learning unit 143 performs learning using the variational autoencoder 20.
  • As the learning data, that is, as the plurality of object images 91, non-defective product images obtained by imaging non-defective objects 90 are used.
  • The learning unit 143 learns by updating the internal parameters so as to minimize the error (reconstruction error) between the input and the output of the variational autoencoder 20.
  • An error function L(x) is defined for this learning.
  • The learning unit 143 inputs each non-defective image to the variational autoencoder 20 and, by learning to reconstruct each input image using stochastic gradient descent, updates the internal parameters of the convolution layer 21, the first deconvolution layer 231, and the second deconvolution layer 233.
  • The error function L(x) used during learning is constructed as follows.
  • In the error function L(x), i and j indicate element numbers of the non-defective images of the training data.
  • The mean μxi and variance σxi² of the normal distribution are learned by using the log-likelihood of the normal distribution as the error function.
  • SVAE is a term designed to approximately optimize the skewness with a squared error, using the mean and variance of the normal distribution. As a result, the output of the second deconvolution layer 233 is brought closer to the skewness.
  • Specifically, the skewness (xi − μxi)³ / σxi³ is computed from the i-th element xi of the input image and the estimated parameters of the normal distribution (mean μxi and standard deviation σxi), and the error between this value and the network output is minimized. This means that only the error of a single non-defective image with respect to the distribution is calculated at a time, but the approximation is achieved stochastically by repeatedly updating the internal parameters using a huge number of non-defective images.
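The equation itself is given as a formula in the original publication and is not reproduced above; as one hedged reading consistent with the description (a Gaussian log-likelihood term for the mean and variance, the usual VAE regularization term, and a squared-error term SVAE for the skewness), L(x) could take a form such as:

$$
L(x) = \sum_{i} \left[ \frac{\left(x_i - \mu_{xi}\right)^{2}}{2\sigma_{xi}^{2}} + \frac{1}{2}\log\!\left(2\pi\sigma_{xi}^{2}\right) \right]
+ D_{\mathrm{KL}}\!\left(q(z \mid x)\,\|\,p(z)\right)
+ S_{\mathrm{VAE}},
\qquad
S_{\mathrm{VAE}} = \sum_{i} \left( \frac{\left(x_i - \mu_{xi}\right)^{3}}{\sigma_{xi}^{3}} - s_i \right)^{\!2}
$$

where i runs over the pixels (elements) of the image, s_i is the skewness output by the second deconvolution layer 233 for pixel i, and the exact weighting of the terms in the patent may differ.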
  • When the learning unit 143 completes the learning using the variational autoencoder 20, it stores the learned internal parameters (learned parameters) in the storage unit 125.
  • the inspection unit 145 includes a statistic acquisition unit 31, an abnormality degree acquisition unit 33, a correction unit 35, and an abnormality detection unit 37.
  • the contents of the process executed by the inspection unit 145 will be described in detail below.
  • FIG. 5 is a diagram conceptually showing the flow of inspection by the inspection unit 145.
  • FIG. 5 shows a case where the inspection image 93 to be inspected has a defective portion NG1.
  • The statistic acquisition unit 31 of the inspection unit 145 inputs the inspection image 93 to the variational autoencoder 20 having the learned internal parameters. The variational autoencoder 20 then outputs, for each pixel of the inspection image 93, an average image 931 representing the mean of the normal distribution, a variance image 933 representing the variance of the normal distribution, and a skewness image 935 representing the skewness.
  • The abnormality degree acquisition unit 33 of the inspection unit 145 calculates the degree of abnormality for each pixel of the inspection image 93 based on the inspection image 93 and on the average image 931 and the variance image 933 acquired by the statistic acquisition unit 31.
  • the degree of anomaly may be, for example, the Mahalanobis distance.
  • The Mahalanobis distance is determined by, for example, (xk − μk)² / σk² (where k represents the element number of each pixel).
  • the defect portion NG1 included in the inspection image 93 is detected in the abnormality degree image 937 as a high-luminance portion indicating that the abnormality degree is large.
  • However, in the abnormality degree image 937, portions having a large degree of abnormality are detected in addition to the defect portion NG1.
  • For example, a shiny portion of the object 90 in the inspection image 93 is detected with a large degree of abnormality. Therefore, if the abnormality determination were made based on the abnormality degree image 937 as it is, over-detection, in which portions other than the defect portion NG1 are determined to be abnormal, could occur.
  • The abnormality degree image 937 is obtained on the assumption that the luminance distribution of each pixel follows a normal distribution. For pixels whose estimated luminance distribution does not follow a normal distribution, the degree of abnormality therefore tends to be high, which can cause over-detection. To suppress this over-detection, the correction unit 35 of the inspection unit 145 corrects the abnormality degree image 937. Specifically, the correction unit 35 removes from the abnormality degree image 937 those pixels whose skewness in the skewness image 935 exceeds a predetermined threshold value, treating them as pixels that do not follow the normal distribution. That is, the correction unit 35 generates the corrected image 939 based on the abnormality degree image 937 and the skewness image 935. As shown in FIG. 5, the shiny portion in the inspection image 93 has a relatively large skewness in the skewness image 935; therefore, in the corrected image 939, the degree of abnormality of the shiny portion has been removed.
  • The abnormality detection unit 37 of the inspection unit 145 determines whether or not each pixel of the inspection image 93 is abnormal based on the degree of abnormality in the corrected image 939. Specifically, the abnormality detection unit 37 determines that a pixel whose degree of abnormality in the corrected image 939 exceeds a predetermined threshold value is abnormal. The abnormality detection unit 37 may display the determination result on the display unit 129, and may display information indicating the coordinates of the pixels determined to be abnormal or their degrees of abnormality on the display unit 129.
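Putting the inspection flow of FIG. 5 together, here is a minimal NumPy sketch; the array names, the use of the absolute skewness, and the threshold values are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def inspect(x, mu, var, skew, skew_thresh=1.0, anomaly_thresh=9.0):
    """Per-pixel anomaly decision from the three maps output by the model.

    x, mu, var, skew: 2-D arrays of the same shape (the inspection image and
    the mean, variance, and skewness images estimated for it).
    Returns the corrected anomaly map and a boolean defect mask.
    """
    # Degree of abnormality per pixel, e.g. (x_k - mu_k)^2 / sigma_k^2
    anomaly = (x - mu) ** 2 / np.maximum(var, 1e-12)

    # Correction: exclude pixels whose skewness exceeds the threshold,
    # i.e. pixels judged not to follow a normal distribution
    corrected = np.where(np.abs(skew) > skew_thresh, 0.0, anomaly)

    # A pixel is judged abnormal if the corrected degree of abnormality
    # exceeds a predetermined threshold
    defect_mask = corrected > anomaly_thresh
    return corrected, defect_mask
```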
  • As described above, by training the variational autoencoder 20 so as to output the skewness, which is a higher-order statistic, pixels whose luminance distribution does not follow a normal distribution can be identified based on the skewness. Therefore, by correcting the abnormality degree image based on the skewness, over-detection of abnormalities in the inspection image 93 can be suppressed.
  • In the above embodiment, skewness is adopted as the higher-order statistic, but kurtosis or an even higher-order statistic may be adopted instead.
  • To train the variational autoencoder 20 so as to output the kurtosis instead, the SVAE term of the error function L(x) is replaced accordingly.
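As with the skewness case, the corresponding equation is given as a formula in the original publication; one hedged reading (whether raw or excess kurtosis is used is an assumption) replaces the skewness target with the fourth standardized moment:

$$
S_{\mathrm{VAE}} = \sum_{i} \left( \frac{\left(x_i - \mu_{xi}\right)^{4}}{\sigma_{xi}^{4}} - k_i \right)^{\!2}
$$

where k_i is the kurtosis output by the network for pixel i.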
  • Reference signs: 10 image inspection device; 100 image inspection device; 120 information processing device (learning device); 125 storage unit; 143 learning unit; 145 inspection unit; 20 variational autoencoder; 31 statistic acquisition unit; 33 abnormality degree acquisition unit; 35 correction unit; 37 abnormality detection unit; 90 object; 91 object image; 93 inspection image; 931 average image; 933 variance image; 935 skewness image; 937 abnormality degree image; 939 corrected image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Analytical Chemistry (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

An image inspection device (100) is provided with a learning unit (143), a statistic acquisition unit (31), and an abnormality detection unit (37). The learning unit (143), using a plurality of target object images as learning data, trains a variational autoencoder so as to reduce the input/output error and so as to output, for each unit pixel, the mean, variance, and a higher-order statistic of a distribution approximated by a specific distribution. The statistic acquisition unit (31) inputs an inspection image to be inspected into the variational autoencoder and acquires the mean, variance, and skewness of each pixel of the inspection image. The abnormality detection unit (37) detects an abnormality in the inspection image based on the mean, variance, and higher-order statistic for each unit pixel of the inspection image acquired by the statistic acquisition unit (31).
PCT/JP2020/041655 2020-03-10 2020-11-09 Learning device, image inspection device, learned parameter, learning method, and image inspection method WO2021181749A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020041013A JP7426261B2 (ja) 2020-03-10 2020-03-10 学習装置、画像検査装置、学習済みパラメータ、学習方法、および画像検査方法
JP2020-041013 2020-03-10

Publications (1)

Publication Number Publication Date
WO2021181749A1 true WO2021181749A1 (fr) 2021-09-16

Family

ID=77671506

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/041655 WO2021181749A1 (fr) 2020-03-10 2020-11-09 Dispositif d'apprentissage, dispositif d'inspection d'image, paramètre appris, procédé d'apprentissage et procédé d'inspection d'image

Country Status (2)

Country Link
JP (1) JP7426261B2 (fr)
WO (1) WO2021181749A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024068203A1 (fr) 2022-09-28 2024-04-04 Carl Zeiss Smt Gmbh Procédé mis en œuvre par ordinateur pour la détection de défauts dans un ensemble de données d'imagerie d'une tranche, support lisible par ordinateur correspondant, produit programme d'ordinateur et systèmes utilisant de tels procédés

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7439157B2 (ja) * 2022-03-30 2024-02-27 本田技研工業株式会社 検査装置
WO2024034451A1 (fr) * 2022-08-08 2024-02-15 株式会社神戸製鋼所 Procédé de génération de modèles entraînés, dispositif d'évaluation, procédé d'évaluation et programme

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018195119A (ja) * 2017-05-18 2018-12-06 住友電装株式会社 異変検出装置及び異変検出方法
JP2019016292A (ja) * 2017-07-10 2019-01-31 凸版印刷株式会社 コンテンツ生成装置、コンテンツ生成方法及びプログラム
JP2019506739A (ja) * 2016-01-06 2019-03-07 ケーエルエー−テンカー コーポレイション 外れ値検出を通じた特徴選択及び自動処理窓監視
US20190287230A1 (en) * 2018-03-19 2019-09-19 Kla-Tencor Corporation Semi-supervised anomaly detection in scanning electron microscope images
US20190332900A1 (en) * 2018-04-30 2019-10-31 Elekta Ab Modality-agnostic method for medical image representation
JP6621117B1 (ja) * 2018-10-25 2019-12-18 株式会社アルム 画像処理装置、画像処理システム、および画像処理プログラム
JP2019537135A (ja) * 2016-11-04 2019-12-19 ディープマインド テクノロジーズ リミテッド ニューラルネットワークを使用したシーンの理解および生成
US20200034654A1 (en) * 2018-07-30 2020-01-30 Siemens Healthcare Gmbh Deep Variational Method for Deformable Image Registration
JP2020035097A (ja) * 2018-08-28 2020-03-05 株式会社モルフォ 画像識別装置、画像識別方法及び画像識別プログラム

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019506739A (ja) * 2016-01-06 2019-03-07 ケーエルエー−テンカー コーポレイション 外れ値検出を通じた特徴選択及び自動処理窓監視
JP2019537135A (ja) * 2016-11-04 2019-12-19 ディープマインド テクノロジーズ リミテッド ニューラルネットワークを使用したシーンの理解および生成
JP2018195119A (ja) * 2017-05-18 2018-12-06 住友電装株式会社 異変検出装置及び異変検出方法
JP2019016292A (ja) * 2017-07-10 2019-01-31 凸版印刷株式会社 コンテンツ生成装置、コンテンツ生成方法及びプログラム
US20190287230A1 (en) * 2018-03-19 2019-09-19 Kla-Tencor Corporation Semi-supervised anomaly detection in scanning electron microscope images
US20190332900A1 (en) * 2018-04-30 2019-10-31 Elekta Ab Modality-agnostic method for medical image representation
US20200034654A1 (en) * 2018-07-30 2020-01-30 Siemens Healthcare Gmbh Deep Variational Method for Deformable Image Registration
JP2020035097A (ja) * 2018-08-28 2020-03-05 株式会社モルフォ 画像識別装置、画像識別方法及び画像識別プログラム
JP6621117B1 (ja) * 2018-10-25 2019-12-18 株式会社アルム 画像処理装置、画像処理システム、および画像処理プログラム


Also Published As

Publication number Publication date
JP7426261B2 (ja) 2024-02-01
JP2021144314A (ja) 2021-09-24

Similar Documents

Publication Publication Date Title
WO2021181749A1 (fr) Dispositif d'apprentissage, dispositif d'inspection d'image, paramètre appris, procédé d'apprentissage et procédé d'inspection d'image
JPWO2020031984A1 (ja) 部品の検査方法及び検査システム
JP7004145B2 (ja) 欠陥検査装置、欠陥検査方法、及びそのプログラム
US20210295485A1 (en) Inspection device and inspection method
US20140348415A1 (en) System and method for identifying defects in welds by processing x-ray images
WO2020055555A1 (fr) Codeur automatique profond pour surveillance d'état d'équipement et détection de défaut dans des outils d'équipement de traitement d'affichage et de semi-conducteurs
JP6844563B2 (ja) 検査装置、画像識別装置、識別装置、検査方法、及び検査プログラム
JP7435303B2 (ja) 検査装置、ユニット選択装置、検査方法、及び検査プログラム
JP6347589B2 (ja) 情報処理装置、情報処理方法及びプログラム
JP2024504735A (ja) 自動化ビジュアル検査を使用する製造品質コントロールのためのシステムおよび方法
JP7453813B2 (ja) 検査装置、検査方法、プログラム、学習装置、学習方法、および学習済みデータセット
JP7459697B2 (ja) 異常検知システム、学習装置、異常検知プログラム、学習プログラム、異常検知方法、および学習方法
JP7414629B2 (ja) 学習用データ処理装置、学習装置、学習用データ処理方法、およびプログラム
KR20230036650A (ko) 영상 패치 기반의 불량 검출 시스템 및 방법
JP2022029262A (ja) 画像処理装置、画像処理方法、画像処理プログラム、および学習装置
Kim et al. CECvT: Initial Diagnosis of Anomalies in Thermal Images
CN113837173A (zh) 目标对象检测方法、装置、计算机设备和存储介质
JP2021135630A (ja) 学習装置、画像検査装置、学習済みデータセット、および学習方法
JP3652589B2 (ja) 欠陥検査装置
US20240005477A1 (en) Index selection device, information processing device, information processing system, inspection device, inspection system, index selection method, and index selection program
US20230274409A1 (en) Method for automatic quality inspection of an aeronautical part
US20240112325A1 (en) Automatic Optical Inspection Using Hybrid Imaging System
JP2023008416A (ja) 異常検知システムおよび異常検知方法
WO2022181304A1 (fr) Système d'inspection et programme d'inspection
WO2021229905A1 (fr) Dispositif d'inspection d'image, procédé d'inspection d'image et dispositif de génération de modèle préappris

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20924063

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20924063

Country of ref document: EP

Kind code of ref document: A1