WO2022172468A1 - Image inspection device, image inspection method, and trained model generation device

Image inspection device, image inspection method, and trained model generation device

Info

Publication number
WO2022172468A1
Authority
WO
WIPO (PCT)
Prior art keywords
inspection
image
divided image
defective product
feature amount
Prior art date
Application number
PCT/JP2021/009412
Other languages
English (en)
Japanese (ja)
Inventor
Yasuyuki Ikeda (池田 泰之)
Original Assignee
OMRON Corporation (オムロン株式会社)
Priority date
Filing date
Publication date
Application filed by OMRON Corporation (オムロン株式会社)
Publication of WO2022172468A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis

Definitions

  • The present invention relates to an image inspection device, an image inspection method, and a trained model generation device.
  • Japanese Patent Laid-Open No. 2002-200000 (Patent Document 1) discloses an abnormality determination apparatus that performs abnormality determination on image data to be determined. The apparatus has process executing means that generates reconstructed image data from the feature amount of the determination target image data using reconstruction parameters, and that executes abnormality determination processing based on difference information between the generated reconstructed image data and the determination target image data.
  • When the determination target image data includes image data of a plurality of channels, the abnormality determination apparatus of Patent Document 1 generates reconstructed image data for each channel from the feature amount of that channel's image data using the reconstruction parameters, and performs abnormality determination based on the difference information between each piece of generated reconstructed image data and the image data of the corresponding channel of the determination target image data.
  • One known method divides a non-defective product image of a non-defective object into small pieces, trains a model to output the feature amount of each input non-defective divided image, and then detects defects in an inspection image based on the feature amount extracted from the inspection image and the feature amounts of the non-defective product images in the feature space.
  • However, if a non-defective product image contains a special pattern, the feature amount of the non-defective image including this special pattern is plotted at a position distant from the feature amount group of the other non-defective images in the feature space.
  • As a result, a non-defective object including such a pattern may be erroneously detected as a defective product.
  • Conversely, when an inspection image contains, at one position, a pattern that is acceptable at another position but defective at its own position, the feature amount of that inspection image may be plotted near the feature amount group of the non-defective product images, and the defective product may be overlooked.
  • The present invention has been made in view of such circumstances. One of its objects is to provide an image inspection device, an image inspection method, and a trained model generation device that can plot the feature amount of an inspection image containing a special pattern close to the feature amounts of non-defective product images, while plotting the feature amount of an inspection image of a defective product at a distance from them.
  • An image inspection apparatus according to one aspect includes: an extraction unit that inputs a synthesized inspection divided image to a trained model and extracts the feature amount of the synthesized inspection divided image, the trained model having been trained to receive as input a synthesized non-defective divided image, generated by synthesizing a non-defective divided image, which is a divided image of a non-defective inspection object, with label information corresponding to that non-defective divided image, and to output the feature amount of the synthesized non-defective divided image; an acquisition unit that acquires a defect degree indicating the degree of defect of the inspection divided image corresponding to the synthesized inspection divided image, based on the extracted feature amount of the synthesized inspection divided image and the feature space formed by the feature amounts of the synthesized non-defective divided images output during learning; and an inspection unit that inspects the inspection object based on the acquired defect degree.
  • According to this aspect, a synthesized inspection divided image, obtained by synthesizing the inspection divided image and the label information, is input to the trained model, which was trained to receive as input a synthesized non-defective divided image obtained by synthesizing the non-defective divided image and the label information and to output the feature amount of the synthesized non-defective divided image. Since the defect degree of the inspection divided image can be obtained based on the feature space formed by these feature amounts and the inspection object can be inspected based on the defect degree, the inspection accuracy for the inspection object 30 can be improved.
  • Furthermore, the trained model can learn the pattern specific to each non-defective divided image based on the label information, and in the feature space the point indicated by the feature amount of each synthesized non-defective divided image is plotted near the set formed by the feature amounts of the other synthesized non-defective divided images at the same position in the non-defective product image. Therefore, the point indicated by the feature amount of a synthesized inspection divided image containing a special pattern can be plotted near the set formed by the feature amounts of the corresponding synthesized non-defective divided images, so a small defect degree is acquired and the product can be determined to be non-defective.
  • In addition, the trained model can learn different discrimination criteria for different positions in the non-defective product image based on the label information, so a different feature range can be formed for each position of the non-defective divided image. Therefore, when a synthesized inspection divided image at one position contains a pattern that is acceptable at another position but defective at its own position, the point indicated by its feature amount can be plotted far from the set formed by the feature amounts of the synthesized non-defective divided images at that position, so a large defect degree is acquired and the product can be determined to be defective.
  • In the above aspect, the acquisition unit may acquire the defect degree of the inspection divided image based on the distance between the feature amount of the synthesized inspection divided image and the feature amounts of the synthesized non-defective divided images forming the feature space.
  • According to this aspect, the defect degree of the inspection divided image corresponding to a feature amount is obtained based on the distance between the point in the feature space indicated by that feature amount and the set formed by the feature amounts corresponding to the plurality of non-defective divided images. This makes it possible to express the degree of defect of the inspection divided image simply.
  • In the above aspect, the acquisition unit may obtain the defect degree of the inspection divided image based on the feature amount of the synthesized non-defective divided image that is closest, among the feature amounts of the synthesized non-defective divided images forming the feature space, to the feature amount of the synthesized inspection divided image.
  • According to this aspect, the defect degree of the inspection divided image is acquired based on the distance between the point indicated by its feature amount in the feature space and the closest of the feature amounts corresponding to the plurality of non-defective divided images. This makes it possible to express the degree of defect of the inspection divided image simply.
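  • To make the nearest-neighbor variant concrete, the defect degree can be sketched in plain Python as below. This is a minimal illustration, not the claimed implementation: the toy feature vectors, their dimensionality, and the choice of Euclidean distance are all assumptions, since the text does not fix a particular metric.

```python
import math

def defect_degree(inspection_feature, good_features):
    # Defect degree of one inspection divided image: the distance from
    # its feature vector to the closest feature vector among those of
    # the synthesized non-defective divided images forming the feature space.
    return min(math.dist(inspection_feature, g) for g in good_features)
```

  • A feature amount close to the non-defective set yields a small defect degree (likely non-defective), while a distant one yields a large defect degree (likely defective).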
  • In the above aspect, the inspection unit may generate a defect degree image based on a plurality of defect degrees, and inspect the inspection object based on the defect degree image.
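  • As a sketch of how such a defect degree image might be assembled and used, assuming the per-tile defect degrees arrive in the same raster order as the divided images and that a simple threshold judgment is applied (both are assumptions; the text does not specify the judgment rule):

```python
def defect_degree_image(degrees, rows, cols):
    # Arrange the per-tile defect degrees into a rows x cols map,
    # in the same raster order as the divided images.
    assert len(degrees) == rows * cols
    return [degrees[r * cols:(r + 1) * cols] for r in range(rows)]

def inspect(degree_image, threshold):
    # Hypothetical judgment: defective if any tile exceeds the threshold.
    if any(d > threshold for row in degree_image for d in row):
        return "defective"
    return "non-defective"
```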
  • The above aspect may further include a synthesizing unit that synthesizes the inspection divided image and the label information to generate the synthesized inspection divided image. The synthesizing unit may generate the synthesized inspection divided image by multiplying the number identifying the label included in the label information by the number of colors that can be used in the inspection divided image, and adding or multiplying the value obtained by the multiplication to the density values of the inspection divided image.
  • The above aspect may further include a learning unit that trains a learning model using a plurality of non-defective divided images of the non-defective inspection object and generates the trained model.
  • An image inspection method according to one aspect inputs, to a trained model trained to receive as input a synthesized non-defective divided image, generated by synthesizing a non-defective divided image, which is a divided image of a non-defective inspection object, with label information corresponding to that divided image, and to output the feature amount of the synthesized non-defective divided image, a synthesized inspection divided image generated by synthesizing an inspection divided image, which is a divided image of the inspection object, with label information corresponding to the inspection divided image, and extracts the feature amount of the synthesized inspection divided image.
  • According to this aspect, as with the image inspection apparatus described above, the synthesized inspection divided image is input to the trained model, the defect degree of the inspection divided image is acquired based on the feature space formed by the feature amounts of the synthesized non-defective divided images, and the inspection object is inspected based on the defect degree, so the inspection accuracy for the inspection object can be improved. Furthermore, as described above, the trained model can learn the pattern specific to each non-defective divided image and different discrimination criteria for different positions in the non-defective product image, so a special but acceptable pattern yields a small defect degree and is determined to be non-defective, while a pattern that is defective at its position yields a large defect degree and is determined to be defective.
  • A trained model generation apparatus according to one aspect includes a model generation unit that generates a trained model trained to receive as input a synthesized non-defective divided image, generated by synthesizing a non-defective divided image, which is a divided image of a non-defective inspection object, with label information corresponding to that divided image, and to output the feature amount of the synthesized non-defective divided image.
  • According to this aspect, a trained model is generated that receives as input a synthesized non-defective divided image obtained by synthesizing the non-defective divided image and the label information, and that outputs the feature amount of the synthesized non-defective divided image.
  • Thus, the trained model can learn the pattern specific to each non-defective divided image based on the label information.
  • The trained model can also learn different discrimination criteria for different positions in the non-defective product image based on the label information, so a different feature range can be formed for each position of the non-defective divided image.
  • Therefore, in the feature space, the point indicated by the feature amount of a synthesized inspection divided image that contains a pattern which is acceptable at one position but defective at its own position can be plotted far from the set formed by the feature amounts of the synthesized non-defective divided images at that position.
  • As described above, it is possible to provide an image inspection device, an image inspection method, and a trained model generation device capable of plotting the feature amount of an inspection image containing a special pattern close to the feature amounts of non-defective product images, and plotting the feature amount of an inspection image of a defective product at a distance from them.
  • FIG. 1 is a schematic configuration diagram of an image inspection system according to an embodiment of the present invention.
  • FIG. 2 is a functional block diagram showing the configuration of a trained model generation device according to the embodiment.
  • FIG. 3 is a diagram for explaining non-defective divided images and label information.
  • FIG. 4 is a diagram showing an example of a non-defective product image.
  • FIG. 5 is a diagram showing an example of the label information given to each of the plurality of non-defective divided images included in the non-defective product image of FIG. 4.
  • FIG. 6 is a diagram for explaining a specific example of generating a synthesized non-defective divided image.
  • A further figure explains the model learned by the model generation unit according to the embodiment.
  • Conceptual diagrams explain another example of a method of acquiring a defect degree, and the processing performed by the inspection unit.
  • Flowcharts explain examples of the image inspection processing executed by the image inspection apparatus according to the embodiment.
  • A block diagram shows the physical configurations of the image inspection apparatus and the trained model generation device according to the embodiment.
  • FIG. 1 is a schematic configuration diagram of an image inspection system 1 according to one embodiment of the present invention.
  • The image inspection system 1 includes an image inspection device 20 and an illumination 25.
  • The illumination 25 irradiates the inspection object 30 with light L.
  • The image inspection device 20 captures the reflected light R and inspects the inspection object 30 based on an image of the inspection object 30 (hereinafter also referred to as an "inspection image").
  • The image inspection device 20 is connected to the trained model generation device 10 via the communication network 15.
  • The trained model generation device 10 generates the trained model used by the image inspection device 20 to inspect the inspection object 30.
  • FIG. 2 is a functional block diagram showing the configuration of the trained model generation device 10 according to this embodiment.
  • the trained model generation device 10 includes, for example, a storage unit 100, a learning unit 110, and a communication unit 120.
  • the storage unit 100 stores various types of information.
  • the storage unit 100 stores, for example, the non-defective product image 40, the learning image 50, the trained model 65, and the feature amount 70.
  • the non-defective product image 40 is an image of a non-defective inspection object.
  • the learning image 50 is an image input for model learning.
  • The trained model 65 is a model generated by the learning unit 110, which will be described later.
  • The feature amount 70 is data output from the trained model 65.
  • the learning unit 110 includes, for example, a learning image generation unit 111 and a model generation unit 112.
  • the learning image generation unit 111 generates a learning image 50 used by the model generation unit 112 to perform learning processing.
  • The learning image 50 is an image generated by synthesizing a non-defective divided image, obtained by dividing the non-defective product image 40, with label information corresponding to that non-defective divided image. The non-defective divided images and label information will be described with reference to FIG. 3.
  • The learning image generation unit 111 acquires the non-defective product image 40 from the storage unit 100 and divides it to generate a plurality of non-defective divided images 400, 402, 404, and so on. In this embodiment, the learning image generation unit 111 divides the non-defective product image 40 into four parts vertically and four parts horizontally, generating a total of 16 non-defective divided images.
  • Note that the non-defective product image 40 may instead be divided into 2 to 15 non-defective divided images, or into 17 or more non-defective divided images.
  • the learning image generation unit 111 assigns label information to each of the plurality of non-defective divided images, and generates a plurality of data sets configured by the non-defective divided images and the label information.
  • the learning image generation unit 111 may assign label information to the non-defective product divided image based on a predetermined algorithm, or may assign label information to the non-defective product divided image based on the user's operation.
  • The learning image generation unit 111 may cause the storage unit 100 to store information indicating which label information was assigned to which position of the non-defective divided images.
  • For example, label information (A) is attached to the non-defective divided image 400 in FIG. 3, and the non-defective divided image 400 and the label information (A) are stored as one data set.
  • Similarly, the learning image generation unit 111 generates and stores data sets based on the 16 non-defective divided images, such as the data set of the non-defective divided image 402 and label information (B), and the data set of the non-defective divided image 404 and label information (C).
  • FIG. 4 is a diagram showing an example of a non-defective product image 40.
  • the non-defective product image 40 shown in FIG. 4 includes four patterns (first pattern 402, second pattern 404, third pattern 406 and fourth pattern 408).
  • The non-defective product image 40 shown in FIG. 4 is divided into four parts vertically and four parts horizontally, yielding a total of 16 non-defective divided images.
  • FIG. 5 is a diagram showing an example of the label information given to each of the 16 non-defective divided images. In FIG. 5, the four patterns shown in FIG. 4 are omitted. The numbers shown in the non-defective divided images in FIG. 5 are numbers for identifying labels (hereinafter referred to as "label identification numbers"), and are data included in the label information.
  • In this embodiment, label information is given to each of the 16 non-defective divided images in raster-scan order. That is, in the non-defective product image 40 shown in FIG. 5, label identification numbers from 0 to 15 are assigned to the 16 non-defective divided images in the order indicated by the arrows. For example, label identification numbers 0 to 3 are assigned to the four non-defective divided images in the top row, in order from the leftmost non-defective divided image 420. Label identification numbers 4 to 7 are assigned to the four non-defective divided images in the second row from the top, in order from the leftmost non-defective divided image 422. Further, label identification numbers 8 to 15 are assigned, in the order of the arrows, to the eight non-defective divided images in the third and fourth rows from the top.
  • Although the top-left non-defective divided image 420 has been described as the starting image with label identification number 0, any non-defective divided image may be used as the starting point.
  • Moreover, the label identification numbers need not follow raster-scan order and may be given to the non-defective divided images in any order.
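  • The raster-scan numbering described above can be sketched as follows (a plain Python illustration; the 4×4 grid matches the 16 divided images of this embodiment):

```python
def raster_scan_labels(rows, cols):
    # Label identification numbers 0 .. rows*cols - 1, assigned row by
    # row starting from the top-left divided image (raster-scan order).
    return [[r * cols + c for c in range(cols)] for r in range(rows)]
```

  • For a 4×4 division this yields 0 to 3 in the top row, 4 to 7 in the second row, and 8 to 15 across the third and fourth rows, as in FIG. 5.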
  • Next, the learning image generation unit 111 generates a synthesized non-defective divided image by synthesizing the non-defective divided image and the label information that constitute one data set. Specifically, the learning image generation unit 111 multiplies the label identification number included in the label information by the number of colors that can be used in the non-defective divided image, and adds or multiplies the value obtained by the multiplication to the density values of the non-defective divided image to generate the synthesized non-defective divided image. For example, 256 colors can be used as the number of colors in the non-defective divided image. Note that the number of colors is not limited to 256; a number of colors corresponding to the number of quantization bits can be used as appropriate.
  • The non-defective divided images and label information shown in FIG. 6 correspond to a part of the 16 non-defective divided images and label information shown in FIGS. 4 and 5. It is assumed that the number of colors that can be used in this non-defective product image 40 is 256.
  • The label identification number included in the label information corresponding to the non-defective divided image 420 in FIG. 6 is 0. In this case, multiplying the label identification number 0 by the number of colors 256 gives a value of 0.
  • The learning image generation unit 111 therefore generates the synthesized non-defective divided image 420' by adding or multiplying 0, the product of the label identification number and the number of colors, to the density values of the non-defective divided image 420.
  • The label identification number included in the label information corresponding to the non-defective divided image 422 is 4. Therefore, the learning image generation unit 111 adds or multiplies 1024, the product of the label identification number 4 and the number of colors 256, to the density values of the non-defective divided image 422 to generate the synthesized non-defective divided image 422'.
  • The label identification number included in the label information corresponding to the non-defective divided image 424 is 8. Therefore, the learning image generation unit 111 adds or multiplies 2048, the product of the label identification number 8 and the number of colors 256, to the density values of the non-defective divided image 424 to generate the synthesized non-defective divided image 424'.
  • The learning image generation unit 111 generates a total of 16 synthesized non-defective divided images by performing the same processing on the remaining 13 non-defective divided images.
  • By generating synthesized non-defective divided images in this way, 16 synthesized non-defective divided images having different density values can be generated.
  • That is, the density distribution of the non-defective product image 40 composed of the 16 synthesized non-defective divided images can be represented by 16 different density ranges corresponding to the positions of the synthesized non-defective divided images.
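  • The offset arithmetic of this walkthrough can be sketched as follows, using the addition variant (a sketch only: the nested-list pixel representation and the function name are assumptions for illustration):

```python
NUM_COLORS = 256  # number of usable colors in this embodiment

def synthesize(divided_image, label_id, num_colors=NUM_COLORS):
    # Add label_id * num_colors to every density value, so each of the
    # 16 positions occupies its own non-overlapping density range.
    offset = label_id * num_colors
    return [[density + offset for density in row] for row in divided_image]
```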
  • The learning image generation unit 111 stores the generated synthesized non-defective divided images as the learning images 50.
  • the model generation unit 112 shown in FIG. 2 executes learning processing using the learning image 50 to generate a trained model 65.
  • the trained model 65 is, for example, a model that receives as input each of a plurality of synthesized non-defective product divided images that constitute the non-defective product image 40 and outputs respective feature amounts.
  • The learning model 60 used by the model generation unit 112 to generate the trained model 65 is, for example, an autoencoder, which is a type of neural network and one of the unsupervised machine learning methods.
  • The autoencoder includes an input layer 601, an output layer 605, and an intermediate layer 603 positioned between the input layer 601 and the output layer 605.
  • the learning model 60 is not limited to an autoencoder.
  • The learning model 60 may be, for example, a model using PCA (Principal Component Analysis).
  • the intermediate layer 603 is not limited to one layer, and may be two or more layers.
  • As described above, a synthesized non-defective divided image is generated by synthesizing the non-defective divided image and the label information.
  • When the synthesized non-defective divided image is input to the input layer 601, its dimensionality is reduced in the intermediate layer 603. Then, in the output layer 605, the dimensionality is restored to output an output synthesized non-defective divided image corresponding to the input synthesized non-defective divided image.
  • The autoencoder learns its weights so that the difference between the synthesized non-defective divided image and the output synthesized non-defective divided image is minimized. More specifically, each node, represented by a circle, applies a unique weight to each edge, represented by a line, and the weighted values are input to the nodes of the next layer. By learning these weights, the difference between the synthesized non-defective divided image and the output synthesized non-defective divided image is minimized. Through this learning, it becomes possible to output the feature amount from the intermediate layer 603.
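  • A minimal sketch of this kind of weight learning is given below, using a tied-weight linear autoencoder with a one-dimensional code trained by gradient descent on the squared reconstruction error. This is an illustrative toy, not the embodiment's network: the real model takes synthesized divided images as input, while the 2-D points, learning rate, and epoch count here are assumptions chosen so the example runs quickly.

```python
import math
import random

def train_autoencoder(data, lr=0.005, epochs=2000):
    # Tied weights: encode(x) = w . x, decode(c) = c * w. Gradient
    # descent drives w toward the principal direction of the data, so
    # training-like inputs reconstruct well and anomalies reconstruct poorly.
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in data[0]]
    for _ in range(epochs):
        for x in data:
            code = sum(wi * xi for wi, xi in zip(w, x))      # encode
            err = [code * wi - xi for wi, xi in zip(w, x)]   # decode minus input
            g = sum(e * wi for e, wi in zip(err, w))
            # Gradient of sum(err_i^2) with respect to each w_i
            w = [wi - lr * 2 * (e * code + g * xi)
                 for wi, e, xi in zip(w, err, x)]
    return w

def reconstruction_error(w, x):
    # Distance between the input and its reconstruction; usable as an
    # anomaly score once the weights are trained.
    code = sum(wi * xi for wi, xi in zip(w, x))
    return math.sqrt(sum((code * wi - xi) ** 2 for wi, xi in zip(w, x)))
```

  • After training on points lying along one direction, on-direction inputs reconstruct almost exactly while an off-direction input yields a large reconstruction error, mirroring how the intermediate-layer code captures what the non-defective inputs have in common.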
  • Here, a feature amount is a variable that quantitatively expresses a feature of the object of interest.
  • The feature amount output from the intermediate layer 603 characterizes, for example, the synthesized non-defective divided image.
  • As feature amounts, for example, color histograms of red alone, green alone, blue alone, or of red/green/blue (RGB) together can be used.
  • In general, a synthesized non-defective divided image is recognized using a plurality of types of feature amounts rather than a single type of feature amount.
  • A feature amount can be represented, for example, by a vector having a plurality of feature amounts as components (hereinafter also referred to as a "feature vector").
  • Note that a feature amount may also be represented by a scalar, a matrix, or a tensor.
  • The number of feature amounts represents the number of dimensions of the feature amount.
  • A space formed by feature amounts is called a feature space, and one feature amount is represented as one point in the feature space.
  • In this embodiment, the input to the learning model 60 is, for example, a synthesized non-defective divided image, and the output from the intermediate layer 603 of the learning model 60 is the feature amount of the synthesized non-defective divided image.
  • To generate the trained model 65 for the non-defective product image 40 of FIG. 4, each of the synthesized non-defective divided images is input. The learning model 60 then learns its weights so that, for each synthesized non-defective divided image, the group of points indicating the feature amounts of the plurality of synthesized non-defective divided images located at the same position as that image is plotted at short distances in the feature space. As a result, the trained model 65 that outputs the feature amount of a synthesized non-defective divided image is generated.
  • In this way, the model generation unit 112 inputs the synthesized non-defective divided images, each obtained by synthesizing a non-defective divided image with its label information, into the learning model 60 and trains it to generate the trained model 65.
  • Since the trained model 65 learns from synthesized non-defective divided images in which label information specifying the position within the non-defective product image is combined with each non-defective divided image, it can discriminate at which position in the non-defective product image the pattern of a non-defective divided image appears.
  • the communication unit 120 of the trained model generation device 10 can transmit and receive various types of information.
  • the communication unit 120 transmits the learned model 65 to the image inspection device 20 via the communication network 15 .
  • the image inspection apparatus 20 also receives information indicating which label information has been assigned to which position of the non-defective product divided image in the non-defective product image.
  • FIG. 8 is a functional block diagram showing the configuration of the image inspection apparatus 20 according to this embodiment.
  • the image inspection apparatus 20 includes, for example, a communication unit 200, a storage unit 210, an imaging unit 220, a processing unit 230, and a learning unit 240.
  • the communication unit 200 receives the trained model 65 and the feature quantity 70 from the trained model generation device 10 via the communication network 15, for example.
  • the communication unit 200 can also receive the learning images 50 from the trained model generating device 10 or other devices via the communication network 15, for example.
  • the received learning image 50, trained model 65, and feature amount 70 are written and stored in the storage unit 210.
  • note that the communication unit 200 may receive only one of (a) the trained model 65 and the feature amount 70 and (b) the learning image 50.
  • in that case, the learning unit 240, which will be described later, may use the learning image 50 to generate the trained model 65 and extract the feature amount 70 in the process of generating the trained model 65.
  • the storage unit 210 is configured to store various types of information.
  • the storage unit 210 stores, for example, the learning image 50, the trained model 65, and the feature amount 70.
  • the trained model 65 is also associated with information indicating which label information is assigned to which position of the non-defective product divided image in the non-defective product image. Storing the trained model 65 in the storage unit 210 allows the trained model to be read easily.
  • the learning unit 240 is configured to learn a learning model using the learning image 50 and generate a trained model 65 .
  • the trained model 65 is a model that receives a synthesized non-defective product divided image as an input and outputs the feature amount of the synthesized non-defective product divided image.
  • the generated trained model 65 is written and stored in the storage unit 210 . Note that the method of learning the learning model is the same as the method by the model generation unit 112 of the trained model generation device 10 described above, and therefore the description thereof will be omitted.
  • the trained model 65 can be obtained without the trained model generation device 10.
  • the imaging unit 220 includes an imaging device such as a camera, and captures an image of the inspection object 30 .
  • the imaging unit 220 receives reflected light R from the inspection object 30 and captures an image of the inspection object 30 .
  • the imaging unit 220 outputs the captured image to the processing unit 230 .
  • the processing unit 230 performs various processes on the image of the inspection object, and inspects the inspection object.
  • FIG. 9 is a functional block diagram showing the configuration of the processing section 230 according to this embodiment.
  • the processing unit 230 includes, for example, a dividing unit 231, a labeling unit 232, a synthesizing unit 233, an extracting unit 234, an acquiring unit 235, and an inspecting unit 236.
  • the dividing unit 231 acquires the image of the inspection object from the imaging unit 220 and divides the image of the inspection object to generate a plurality of divided inspection images. Note that the method for dividing the image of the inspection object is the same as the method for dividing the non-defective product image by the learning image generation unit 111 of the trained model generation device 10 described above, so the description thereof will be omitted.
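  • the division into inspection divided images can be sketched as a simple non-overlapping tiling (a hypothetical sketch; the actual tile size and division method used by the dividing unit 231 are not specified in this passage):

```python
import numpy as np

def divide_image(image, tile_h, tile_w):
    """Split a 2-D image array into non-overlapping tiles (divided images),
    scanning top-to-bottom, left-to-right. Assumes the image dimensions
    are exact multiples of the tile size."""
    h, w = image.shape
    tiles = []
    for y in range(0, h, tile_h):
        for x in range(0, w, tile_w):
            tiles.append(image[y:y + tile_h, x:x + tile_w])
    return tiles

# A toy 4x4 "inspection image" split into four 2x2 divided images.
image = np.arange(16, dtype=np.float32).reshape(4, 4)
tiles = divide_image(image, 2, 2)
```

the tile index then identifies the position of each divided image within the original image, which is what the label information encodes.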
  • the label assigning unit 232 assigns label information to each of the plurality of divided inspection images, and generates a plurality of data sets composed of the divided inspection images and the label information.
  • specifically, the label assigning unit 232 refers to the information, stored in association with the trained model 65, indicating which label information is assigned to which position of the non-defective product divided image in the non-defective product image, and assigns to each inspection divided image the label information of the non-defective product divided image located at the same position.
  • the synthesizing unit 233 generates a synthetic inspection divided image by synthesizing the inspection divided image and the label information that constitute one data set. Note that the method of generating the composite inspection divided image is the same as the method of generating the composite non-defective product divided image by the learning image generation unit 111 of the trained model generation device 10 described above, and therefore the description thereof will be omitted.
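  • one plausible way to "synthesize" a divided image with its label information is to append the label as extra one-hot channels; this encoding is an assumption for illustration, as the passage does not fix a particular synthesis method:

```python
import numpy as np

def synthesize(tile, label, num_positions):
    """Attach position label information to a divided image by appending
    one one-hot plane per possible position as extra channels.
    `label` is the position index of the tile in the original image."""
    h, w = tile.shape
    planes = np.zeros((num_positions, h, w), dtype=tile.dtype)
    planes[label] = 1.0
    # Channel 0 is the image itself; channels 1..num_positions encode the label.
    return np.concatenate([tile[np.newaxis], planes], axis=0)

tile = np.ones((2, 2), dtype=np.float32)
combined = synthesize(tile, label=1, num_positions=4)
# combined has 1 image channel + 4 label channels -> shape (5, 2, 2)
```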
  • the extraction unit 234 inputs the synthetic inspection divided image to the trained model 65 and extracts the feature amount of the synthetic inspection divided image.
  • the acquisition unit 235 acquires a defect degree indicating the degree of defect in the inspection divided image corresponding to the synthesized inspection divided image, based on the feature space formed by the extracted feature amount of the synthesized inspection divided image and the feature amounts 70 of the synthesized non-defective product divided images output during learning. The procedure by which the acquisition unit 235 acquires the defect degree is described below.
  • the acquisition unit 235 reads the feature values 70 stored in the storage unit 210 and plots the points indicated by the feature values 70 in the feature space.
  • a set S1 indicated by a dashed line is formed in the two-dimensional feature space by black dots indicated by the respective feature amounts 70 .
  • This set S1 is formed for each position of the non-defective product divided image in the non-defective product image 40, for example.
  • the acquisition unit 235 may form a set S1 in the feature space in advance based on the plurality of feature amounts 70, and write and store information about the set S1 in the storage unit 210.
  • the acquisition unit 235 plots the points indicated by the feature amounts of the composite inspection divided images at the positions corresponding to the set S1 in the feature space.
  • the feature amount of the synthetic inspection divided image at the corresponding position is indicated by a white circle point P1.
  • the acquiring unit 235 acquires the defect degree of the inspection divided image corresponding to the synthesized inspection divided image based on the point P1 and the set S1 indicated by the feature amount of this synthesized inspection divided image.
  • that is, for each extracted feature amount, the acquisition unit 235 obtains the defect degree of the corresponding inspection divided image based on the distance in the feature space between the point indicated by the feature amount and the set formed by the feature amounts of the plurality of synthesized non-defective product divided images.
  • for example, the acquisition unit 235 acquires the defect degree of the inspection divided image corresponding to a certain synthesized inspection divided image based on the distance between the point P1 indicated by the feature amount of that synthesized inspection divided image and the set S1 of the feature amounts of the plurality of synthesized non-defective product divided images corresponding to it. This defect degree is, for example, a value corresponding to the distance.
  • because the defect degree of the inspection divided image corresponding to the feature amount is acquired based on the distance between the point P1 indicated by the feature amount and the set S1 formed by the feature amounts of the plurality of synthesized non-defective product divided images in the feature space, the degree of defects in the inspection divided image can be indicated easily.
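  • the distance from a point to the set of good-product features could, for instance, be summarized by the distance to the set's centroid; the centroid and the Euclidean metric are assumed choices for illustration, not the only ones the embodiment allows:

```python
import numpy as np

def defect_degree_centroid(feature, good_features):
    """Defect degree as the distance between the point indicated by the
    extracted feature and the set of good-product features, where the
    set is summarized by its centroid (one simple choice)."""
    centroid = good_features.mean(axis=0)
    return float(np.linalg.norm(feature - centroid))

# Hypothetical good-product features for one tile position (set S1).
good = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2], [0.2, 0.2]])

near = defect_degree_centroid(np.array([0.1, 0.1]), good)  # small degree
far = defect_degree_centroid(np.array([2.0, 2.0]), good)   # large degree
```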
  • alternatively, the defect degree of the inspection divided image may be obtained based on the distance between the point indicated by the feature amount of a certain synthesized inspection divided image and the point indicated by one of the feature amounts of the plurality of synthesized non-defective product divided images.
  • for example, the acquisition unit 235 obtains the defect degree of the inspection divided image based on the distance between the point P2 indicated by the feature amount of a certain synthesized inspection divided image and the point P3 included in the set S2 of the feature amounts of the plurality of synthesized non-defective product divided images corresponding to that synthesized inspection divided image. This defect degree is, for example, a value corresponding to the distance.
  • the point P3 included in the set S2 is the point closest to the point P2 among the plurality of points included in the set S2. The acquisition unit 235 can calculate the distance between the point P2 and each of the plurality of points included in the set S2, and determine the point closest to the point P2.
  • because the defect degree of the inspection divided image corresponding to the feature amount is acquired based on the distance between the point P2 indicated by the feature amount and the point indicated by one of the feature amounts of the plurality of synthesized non-defective product divided images in the feature space, the degree of defects in the inspection divided image can be indicated easily.
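  • this variant, measuring the distance to the closest point P3 of the set, can be sketched as a nearest-neighbor search over the stored good-product features (function name assumed for illustration):

```python
import numpy as np

def defect_degree_nearest(feature, good_features):
    """Defect degree as the distance to the closest member of the
    good-feature set, found by comparing against every member
    (a brute-force nearest-neighbor search)."""
    dists = np.linalg.norm(good_features - feature, axis=1)
    return float(dists.min())

# Hypothetical good-product features for one tile position (set S2).
good = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
score = defect_degree_nearest(np.array([0.1, 0.0]), good)  # ≈ 0.1
```

for large feature sets, a spatial index (e.g. a k-d tree) could replace the brute-force scan, but the pairwise comparison above matches the procedure described for the acquisition unit 235.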
  • the acquisition unit 235 repeats the above procedure for each feature amount extracted by the extraction unit 234, and acquires the degree of defect of each of the plurality of divided inspection images.
  • the acquisition unit 235 then outputs a plurality of defect degrees to the inspection unit 236 .
  • the inspection unit 236 inspects the inspection object 30 based on the plurality of defect degrees acquired by the acquisition unit 235 .
  • the inspection unit 236 generates defect degree images based on a plurality of defect degrees, and inspects the inspection object 30 based on the generated defect degree images.
  • the degree of defect is preferably a value indicating the degree of defect in the inspection divided image. This makes it possible to quantitatively indicate the degree of defects in the inspection divided image.
  • the inspection unit 236 converts the defect degrees 460, 462, 464, ... into partial images 480, 482, 484, ....
  • the inspection unit 236 then generates the defect degree image 48 by integrating the generated partial images 480, 482, 484, ....
  • the vertical and horizontal sizes (number of pixels) of the defect degree image 48 may be the same as or different from those of the inspection image. Then, the inspection unit 236 determines whether or not the inspection object 30 is non-defective based on the defect degree image 48 .
  • the defect degree image 48 includes, for example, two defect partial images 482 and 484.
  • the defect partial image 484 is a partial image with a relatively low defect degree, and the defect partial image 482 is a partial image with a relatively high defect degree.
  • the inspection unit 236 determines that the inspection object 30 is non-defective when the ratio of the defect partial images 482 and 484 in the defect degree image 48 is equal to or less than a predetermined threshold, and determines that the inspection object 30 is not non-defective, that is, is defective, when the ratio exceeds the predetermined threshold.
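  • the ratio-based decision of the inspection unit 236 can be sketched as follows, assuming per-tile defect degrees are simply reshaped into a defect degree image and thresholded (the threshold values are arbitrary for illustration):

```python
import numpy as np

def inspect(defect_degrees, grid_shape, defect_thresh, ratio_thresh):
    """Build a defect-degree image from per-tile defect degrees and
    judge the object non-defective when the ratio of defective tiles
    does not exceed a threshold."""
    degree_image = np.array(defect_degrees, dtype=np.float64).reshape(grid_shape)
    defective = degree_image > defect_thresh   # tiles counted as defect partial images
    ratio = defective.mean()                   # ratio of defective tiles in the image
    return degree_image, bool(ratio <= ratio_thresh)

degrees = [0.1, 0.2, 0.1, 3.5]  # one tile stands out
img, is_good = inspect(degrees, (2, 2), defect_thresh=1.0, ratio_thresh=0.1)
# 1 of 4 tiles exceeds the defect threshold (ratio 0.25 > 0.1) -> judged defective
```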
  • the inspection unit 236 may detect defects in the inspection object 30 based on the presence or absence of the defect partial images 482 and 484 included in the defect degree image 48 .
  • the degree of defect of each divided inspection image is obtained based on the point indicated by the feature amount of the synthesized divided inspection image and the set formed by the feature amount of a plurality of synthesized non-defective divided images in the feature space.
  • the communication unit 120 acquires a plurality of non-defective product images 40 via the communication network 15 (step S101).
  • the acquired non-defective product images 40 are stored in the storage unit 100 .
  • the learning image generation unit 111 reads out the multiple non-defective product images 40 from the storage unit 100, and generates multiple learning images 50 based on the multiple non-defective product images 40 (step S102).
  • as the learning image 50, as described above, a synthesized non-defective product divided image generated based on a non-defective product divided image and label information is used.
  • the model generation unit 112 receives the plurality of learning images 50 generated in step S102, and generates a trained model 65 trained to output the feature amounts of the learning images (step S103).
  • the generated trained model 65 is stored in the storage unit 100 .
  • the communication unit 120 transmits the trained model 65 generated in step S103 and the feature amount 70 extracted in the process of generating the trained model 65 to the image inspection device 20 via the communication network 15 (step S104). This enables the image inspection device 20 to use the trained model generated by the trained model generation device 10.
  • after step S104, the trained model generation device 10 ends the trained model generation process.
  • the communication unit 200 receives the trained model 65 and the feature amount 70 from the trained model generation device 10, and the storage unit 210 stores the trained model 65 and the feature amount 70.
  • the imaging unit 220 acquires an inspection image of the inspection object 30 (step S201).
  • the acquired inspection image is output to the processing unit 230 .
  • the dividing unit 231 of the processing unit 230 divides the inspection image acquired in step S201 to generate a plurality of divided inspection images (step S202).
  • a plurality of generated inspection divided images are output to the labeling section 232 of the processing section 230 .
  • the label assigning unit 232 assigns the label information of the non-defective product split image at the same position as the inspection split image generated in step S202 to generate a data set (step S203).
  • the generated data set is output to the synthesizing section 233 of the processing section 230 .
  • the synthesizing unit 233 generates a synthesized inspection divided image by synthesizing the inspection divided image and the label information forming the data set generated in step S203 (step S204).
  • the generated composite inspection divided image is output to the extraction section 234 of the processing section 230 .
  • the extraction unit 234 reads out the trained model 65 stored in advance in the storage unit 210, inputs the synthesized inspection divided image generated in step S204 to the trained model 65, and extracts the feature amount of the synthesized inspection divided image (step S205).
  • the extracted feature amount is output to the acquisition section 235 of the processing section 230 .
  • the acquisition unit 235 reads out the feature amounts 70 of the synthesized non-defective product divided images stored in advance in the storage unit 210, and obtains the defect degree of the inspection divided image corresponding to each feature amount extracted in step S205 based on the distance from the set formed by the feature amounts 70 (step S206). The obtained defect degree is output to the inspection unit 236 of the processing unit 230.
  • the inspection unit 236 generates a partial image from the defect degree values obtained in step S206, and integrates the generated partial images to generate a defect degree image (step S207).
  • the inspection unit 236 inspects the inspection object 30 based on the defect degree image generated in step S207 (step S208).
  • after step S208, the image inspection apparatus 20 ends the image inspection process.
  • FIG. 16 is a diagram showing the physical configuration of the trained model generation device 10 according to this embodiment.
  • the trained model generation device 10 includes a CPU (Central Processing Unit) 10a equivalent to a calculation unit, a RAM (Random Access Memory) 10b equivalent to a storage unit, a ROM (Read only Memory) 10c equivalent to a storage unit, It has a communication unit 10d, an input unit 10e, and a display unit 10f. These components are connected to each other via a bus so that data can be sent and received.
  • the physical configuration of the image inspection device 20 is the same as the physical configuration of the trained model generation device 10, so description thereof will be omitted.
  • in this embodiment, the trained model generation device 10 and the image inspection device 20 are each configured by one computer, but each of them may be realized by combining a plurality of computers. The trained model generation device 10 and the image inspection device 20 may also be configured by one computer. The configuration shown in FIG. 16 is an example; the trained model generation device 10 and the image inspection device 20 may have configurations other than these, or may lack some of these configurations.
  • the CPU 10a is a computing unit that controls the execution of programs stored in the RAM 10b or ROM 10c and computes and processes data.
  • the CPU 10a included in the trained model generation device 10 is a computing unit that executes a program (learning program) that performs learning processing using learning data and generates a trained model.
  • the CPU 10a included in the image inspection apparatus 20 is an arithmetic unit that executes a program (image inspection program) for inspecting an inspection object using an image of the inspection object.
  • the CPU 10a receives various data from the input section 10e and the communication section 10d, and displays the calculation results of the data on the display section 10f and stores them in the RAM 10b.
  • the RAM 10b is a rewritable part of the storage unit, and may be composed of, for example, a semiconductor memory element.
  • the RAM 10b may store data such as programs executed by the CPU 10a, learning data, and learned models. Note that these are examples, and the RAM 10b may store data other than these, or may not store some of them.
  • the ROM 10c is one of the storage units from which data can be read, and may be composed of, for example, a semiconductor memory element.
  • the ROM 10c may store, for example, an image inspection program, a learning program, and data that is not rewritten.
  • the communication unit 10d is an interface that connects the trained model generation device 10 or the image inspection device 20 to other devices.
  • the communication unit 10d may be connected to a communication network such as the Internet.
  • the input unit 10e receives data input from the user, and may include, for example, a keyboard and a touch panel.
  • the input unit 10e may receive an input such as label information of a non-defective product divided image or an inspection divided image, for example.
  • the display unit 10f visually displays the calculation result by the CPU 10a, and may be configured by, for example, an LCD (Liquid Crystal Display).
  • the display unit 10f may display, for example, the inspection results of the inspection object.
  • the image inspection program may be stored in a computer-readable storage medium such as the RAM 10b or ROM 10c and provided, or may be provided via a communication network connected by the communication unit 10d.
  • the CPU 10a executes the learning program to implement various functions described with reference to FIG. 2 and the like.
  • the CPU 10a executes the image inspection program to realize various functions described with reference to FIGS. 8 and 9 and the like. It should be noted that these physical configurations are examples, and do not necessarily have to be independent configurations.
  • each of the trained model generation device 10 and the image inspection device 20 may include an LSI (Large-Scale Integration) in which the CPU 10a, the RAM 10b, and the ROM 10c are integrated.
  • as described above, a synthesized inspection divided image obtained by synthesizing an inspection divided image and label information is input to the trained model 65, which has been trained to receive as input a synthesized non-defective product divided image obtained by synthesizing a non-defective product divided image and label information and to output the feature amount of the synthesized non-defective product divided image, and the feature amount of the synthesized inspection divided image is extracted.
  • the defect degree of the inspection divided image is then obtained based on the feature space formed by the extracted feature amount and the feature amounts of the plurality of synthesized non-defective product divided images output during learning, and the inspection object is inspected based on the defect degree. The inspection accuracy of the inspection object 30 can therefore be improved.
  • the trained model 65 in this embodiment can learn a pattern specific to each non-defective product divided image based on the label information, and in the feature space, the point indicated by the feature amount of each synthesized non-defective product divided image is plotted near the set formed by the feature amounts of the other synthesized non-defective product divided images at the same position in the non-defective product image.
  • the trained model 65 in this embodiment can learn discrimination criteria that differ depending on the position in the non-defective product image based on the label information, so the feature amounts can be collected in different ranges and regions for each position of the non-defective product divided image in the non-defective product image.
  • consequently, for a pattern that is non-defective at one position but defective at another position, the point indicated by the feature amount of the synthesized inspection divided image at the other position can be plotted far from the set formed by the feature amounts of the plurality of synthesized non-defective product divided images at that position, so a large defect degree can be acquired and the defective product can be determined.
  • a trained model (65) is trained to receive as input a synthesized non-defective product divided image generated by synthesizing a non-defective product divided image, which is a divided image of a non-defective inspection object, and label information corresponding to the non-defective product divided image, and to output a feature amount of the synthesized non-defective product divided image.
  • a synthesized inspection divided image generated by synthesizing an inspection divided image, which is a divided image of the inspection object, and label information corresponding to the inspection divided image is input to the trained model (65) trained as described above.
  • An image inspection device (20) comprising:
  • a trained model (65) is trained to receive as input a synthesized non-defective product divided image generated by synthesizing a non-defective product divided image, which is a divided image of a non-defective inspection object, and label information corresponding to the non-defective product divided image, and to output a feature amount of the synthesized non-defective product divided image.
  • a synthesized inspection divided image generated by synthesizing an inspection divided image, which is a divided image of the inspection object, and label information corresponding to the inspection divided image is input to the trained model (65) trained as described above.
  • an extraction step of extracting a feature amount of the synthesized inspection divided image;
  • an acquisition step of acquiring a defect degree indicating the degree of defect in the inspection divided image corresponding to the synthesized inspection divided image, based on the feature space formed by the extracted feature amount of the synthesized inspection divided image and the feature amount of the synthesized non-defective product divided image output during learning;
  • an inspection step of inspecting the inspection object based on the acquired defect degree;
  • An image inspection method comprising:
  • a composite non-defective product divided image generated by synthesizing a non-defective product divided image that is a divided image of a non-defective product inspection object and label information corresponding to the non-defective product divided image is input, and a feature amount of the synthesized non-defective product divided image is output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Chemical & Material Sciences (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

This invention makes it possible to plot the feature amount of a non-defective image including a special pattern near the feature amounts of non-defective images, and to plot the feature amount of an inspection image of a defective product at a distance from the feature amounts of those non-defective images. The invention comprises: an extraction unit 234 that extracts a feature amount of a synthesized inspection divided image by inputting a synthesized inspection divided image, generated by synthesizing an inspection divided image and label information corresponding to the inspection divided image, into a trained model that has been trained to accept as input a synthesized non-defective divided image generated by synthesizing a non-defective divided image and label information corresponding to the non-defective divided image and to output a feature amount of the synthesized non-defective divided image; an acquisition unit 235 that acquires a defect degree of the inspection divided image corresponding to the synthesized inspection divided image on the basis of a feature space formed by the feature amount extracted from the synthesized inspection divided image and the feature amounts of the synthesized non-defective divided images output during training; and an inspection unit 236 that inspects an inspection object on the basis of the acquired defect degree.
PCT/JP2021/009412 2021-02-12 2021-03-10 Dispositif d'inspection d'image, procédé d'inspection d'image et dispositif de génération de modèle entraîné WO2022172468A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021020406A JP2022123234A (ja) 2021-02-12 2021-02-12 画像検査装置、画像検査方法及び学習済みモデル生成装置
JP2021-020406 2021-02-12

Publications (1)

Publication Number Publication Date
WO2022172468A1 true WO2022172468A1 (fr) 2022-08-18

Family

ID=82837624

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/009412 WO2022172468A1 (fr) 2021-02-12 2021-03-10 Dispositif d'inspection d'image, procédé d'inspection d'image et dispositif de génération de modèle entraîné

Country Status (2)

Country Link
JP (1) JP2022123234A (fr)
WO (1) WO2022172468A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006220648A (ja) * 2005-01-11 2006-08-24 Omron Corp 基板検査装置並びにその検査ロジック設定方法および検査ロジック設定装置
JP2018005773A (ja) * 2016-07-07 2018-01-11 株式会社リコー 異常判定装置及び異常判定方法
CN110570393A (zh) * 2019-07-31 2019-12-13 华南理工大学 一种基于机器视觉的手机玻璃盖板视窗区缺陷检测方法
JP2020136368A (ja) * 2019-02-15 2020-08-31 東京エレクトロン株式会社 画像生成装置、検査装置及び画像生成方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006220648A (ja) * 2005-01-11 2006-08-24 Omron Corp 基板検査装置並びにその検査ロジック設定方法および検査ロジック設定装置
JP2018005773A (ja) * 2016-07-07 2018-01-11 株式会社リコー 異常判定装置及び異常判定方法
JP2020136368A (ja) * 2019-02-15 2020-08-31 東京エレクトロン株式会社 画像生成装置、検査装置及び画像生成方法
CN110570393A (zh) * 2019-07-31 2019-12-13 华南理工大学 一种基于机器视觉的手机玻璃盖板视窗区缺陷检测方法

Also Published As

Publication number Publication date
JP2022123234A (ja) 2022-08-24

Similar Documents

Publication Publication Date Title
JP6573226B2 (ja) データ生成装置、データ生成方法及びデータ生成プログラム
JP6924413B2 (ja) データ生成装置、データ生成方法及びデータ生成プログラム
JP2019095217A (ja) 外観検査装置
JP2017049974A (ja) 識別器生成装置、良否判定方法、およびプログラム
JP7131617B2 (ja) 照明条件を設定する方法、装置、システム及びプログラム並びに記憶媒体
JP2016115331A (ja) 識別器生成装置、識別器生成方法、良否判定装置、良否判定方法、プログラム
JP6818961B1 (ja) 学習装置、学習方法、および推論装置
JP2021190716A (ja) 弱いラベル付けを使用した半導体試料内の欠陥の検出
US11210774B2 (en) Automated pixel error detection using an inpainting neural network
CN105701493A (zh) 基于阶层图形的图像提取以及前景估测的方法和系统
JP2022045688A (ja) 欠陥管理装置、方法およびプログラム
KR102437115B1 (ko) 제품 구조 예측 기술을 이용한 딥러닝 기반 결함 검사 장치 및 방법
WO2022172468A1 (fr) Dispositif d'inspection d'image, procédé d'inspection d'image et dispositif de génération de modèle entraîné
WO2022172470A1 (fr) Dispositif d'inspection d'image, procédé d'inspection d'image et dispositif de génération de modèle entraîné
JP2023145412A (ja) 欠陥検出方法及びシステム
CN115222649A (zh) 用于对热图的图案进行检测和分类的系统、设备和方法
WO2022172475A1 (fr) Dispositif et procédé d'inspection d'image, et dispositif de génération de modèle entraîné
JP7206892B2 (ja) 画像検査装置、画像検査のための学習方法および画像検査プログラム
WO2022172469A1 (fr) Dispositif d'inspection d'image, procédé d'inspection d'image et dispositif de génération de modèle entraîné
JPWO2020158630A1 (ja) 検出装置、学習器、コンピュータプログラム、検出方法及び学習器の生成方法
WO2021229901A1 (fr) Dispositif d'inspection d'images, procédé d'inspection d'images et dispositif de génération de modèle pré-appris
WO2023089846A1 (fr) Dispositif d'inspection et procédé d'inspection, et programme destiné à être utilisé dans celui-ci
WO2023166776A1 (fr) Système d'analyse d'apparence, procédé d'analyse d'apparence et programme
JP7270314B1 (ja) 検査方法、検査システム、ニューラルネットワーク
JP7366325B1 (ja) 情報処理装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21925717

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21925717

Country of ref document: EP

Kind code of ref document: A1