CN110619343B - Automatic defect classification method based on machine learning - Google Patents

Info

Publication number
CN110619343B
Authority
CN
China
Prior art keywords
image
nth
defect
defect classification
images
Prior art date
Legal status
Active
Application number
CN201910537298.2A
Other languages
Chinese (zh)
Other versions
CN110619343A (en)
Inventor
安敏晶
李弘哲
李旻英
金明昭
边圣埈
廉镇燮
朴男材
朴曹荣
Current Assignee
Amo Information Technology Shanghai Co ltd
Korea University Research and Business Foundation
Original Assignee
Amo Information Technology Shanghai Co ltd
Korea University Research and Business Foundation
Priority date
Filing date
Publication date
Application filed by Amo Information Technology Shanghai Co ltd, Korea University Research and Business Foundation filed Critical Amo Information Technology Shanghai Co ltd
Publication of CN110619343A
Application granted
Publication of CN110619343B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications
    • G01N21/88: Investigating the presence of flaws or contamination
    • G01N21/8851: Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications
    • G01N21/88: Investigating the presence of flaws or contamination
    • G01N21/8851: Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8854: Grading and classifying of flaws
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

Disclosed is an automatic defect classification method based on machine learning. The method comprises: preparing sample images whose defect types are known; for each sample image, obtaining first to Nth images (where N is a natural number of 2 or more) that differ from one another through a certain preprocessing process, and obtaining first to Nth image sets from the first to Nth images through data augmentation; for each of the first to Nth image sets obtained from the sample images, generating an nth defect classification model (n = 1, ..., N) by machine learning, with the nth image set as the learning-data input and the defect type of the corresponding sample image as the learning-data output, thereby generating first to Nth defect classification models; and receiving a target image and classifying the defect type of the target image using the first to Nth defect classification models.

Description

Automatic defect classification method based on machine learning
Cross Reference to Related Applications
The present application claims priority to and the benefit of Korean Patent Application No. 10-2018-007045, filed on June 20, 2018, the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to an automatic defect classification method based on machine learning, and more particularly, to an automatic defect classification method in which defect classification models are generated by machine learning from sample images whose defect types are known, and the defect type of a target image is classified using those models.
Background
Techniques for classifying defects in images of products such as displays, panels, and printed circuit boards (PCBs) using machine learning are essential to smart factories. Because product defects can be analyzed in real time and their causes eliminated by feeding the analyzed defect types back to the production system, defect rates can be reduced significantly in a continuous process. Furthermore, the process can be optimized by analyzing defect patterns according to working conditions.
Recently, methods of classifying product defects using deep learning techniques such as convolutional neural networks have been studied. In deep learning, when a sufficient amount of learning data is available, learning efficiency and classification accuracy increase. When the amount of learning data is small, however, learning efficiency and classification performance degrade.
Disclosure of Invention
The present invention aims to provide a machine learning based automatic defect classification method that achieves high learning efficiency and classification performance using only a small amount of learning data.
According to one aspect of the present invention, there is provided an automatic defect classification method based on machine learning, comprising: preparing sample images whose defect types are known; for each sample image, obtaining first to Nth images (where N is a natural number of 2 or more) that differ from one another through a certain preprocessing process, and obtaining first to Nth image sets from the first to Nth images through data augmentation; for each of the first to Nth image sets obtained from the sample images, generating an nth defect classification model (n = 1, ..., N) by machine learning, with the nth image set as the learning-data input and the defect type of the corresponding sample image as the learning-data output, thereby generating first to Nth defect classification models; and receiving a target image and classifying the defect type of the target image using the first to Nth defect classification models.
The first to Nth images may include two or more of the following: the sample image; a grayscale image obtained by converting the sample image to gray levels; a spot image, in which a spot region and a spot-free region are separated by a process of removing a repeated pattern from the grayscale image; an original cropped image obtained by cropping from the sample image a certain region that includes at least part of the spot region; a grayscale cropped image obtained by cropping such a region from the grayscale image; and a spot cropped image obtained by cropping such a region from the spot image.
The spot image may include at least one of the following: a binary image in which the spot region and the spot-free region are divided into two values; a MAP image in which the outline of the repeated pattern remains in the spot-free region; and a MAP2 image in which the spot region is represented by two or more pixel values.
The data augmentation may include two or more of rotation, scaling, left-right flipping, up-down flipping, horizontal shifting, and vertical shifting of the image, and combinations thereof.
The method may further comprise: for each sample image, obtaining a probability value for each defect type from each of the first to Nth defect classification models by inputting the first to Nth images obtained from the sample image into the respective models; and generating an ensemble model by machine learning, with the probability values for each defect type from the first to Nth defect classification models as the learning-data input and the defect type of the corresponding sample image as the learning-data output. In this case, classifying the defect type of the target image may include further using the ensemble model.
Classifying the defect type of the target image may include: obtaining first to Nth images from the target image through the same preprocessing process; obtaining a probability value for each defect type from each of the first to Nth defect classification models by inputting the first to Nth images into the respective models; and determining the defect type of the target image with the ensemble model by inputting the obtained probability values into the ensemble model.
The method may further comprise: obtaining, for each sample image, a feature value set comprising two or more feature values from the sample image; and generating an (N+1)th defect classification model by machine learning, with the feature value sets of the sample images as the learning-data input and the defect types of the corresponding sample images as the learning-data output. In this case, classifying the defect type of the target image may include further using the (N+1)th defect classification model.
The feature values may include two or more of values related to the number, shape, size, and pixel values of the spots extracted from the sample image, and combinations thereof.
The method may further comprise: for each sample image, obtaining a probability value for each defect type from each of the first to Nth defect classification models and the (N+1)th defect classification model by inputting the first to Nth images obtained from the sample image and the feature value set into the first to (N+1)th defect classification models; and generating an ensemble model by machine learning, with the probability values for each defect type from the first to (N+1)th defect classification models as the learning-data input and the defect type of the corresponding sample image as the learning-data output. In this case, classifying the defect type of the target image may include further using the ensemble model.
Classifying the defect type of the target image may include: obtaining first to Nth images and a feature value set from the target image through the same preprocessing process; obtaining a probability value for each defect type from each of the first to (N+1)th defect classification models by inputting the first to Nth images and the feature value set obtained from the target image into the respective models; and determining the defect type of the target image with the ensemble model by inputting the obtained probability values into the ensemble model.
According to another aspect of the present invention, there is provided a computer-readable recording medium on which a program for executing the above machine learning based automatic defect classification method is recorded.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent to those of ordinary skill in the art by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
FIG. 1 is a flowchart illustrating a process of generating individual defect classification models by machine learning using sample images whose defect types are known, in a machine learning based automatic defect classification method according to one embodiment of the present invention;
FIG. 2A shows an example of a sample image including a defect;
fig. 2B shows an example of a binary image obtained from the sample image of fig. 2A;
fig. 2C illustrates an example of a MAP image obtained from the sample image of fig. 2A;
fig. 2D illustrates an example of a MAP2 image obtained from the sample image of fig. 2A;
FIG. 2E shows an example of a grayscale clipping image obtained from the sample image of FIG. 2A;
FIG. 3 is a flowchart illustrating a process of generating an ensemble model by machine learning using the probability values for each defect type obtained from the individual defect classification models generated by the process of FIG. 1, in a machine learning based automatic defect classification method according to one embodiment of the present invention;
FIG. 4 shows an example of the structure of each of the individual defect classification models generated by the process of FIG. 1 and the ensemble model generated by the process of FIG. 3; and
fig. 5 is a flowchart illustrating a process of classifying defect types of a target image using an individual defect classification model and an ensemble model in an automatic defect classification method based on machine learning according to an embodiment of the present invention.
Detailed Description
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description and drawings, substantially identical elements are denoted by the same reference numerals, and repetitive descriptions are omitted. Furthermore, detailed descriptions of well-known functions or components of the prior art are omitted where they would obscure the essence of the present invention.
Fig. 1 is a flowchart illustrating a process of generating individual defect classification models by machine learning using sample images whose defect types are known, in a machine learning based automatic defect classification method according to an embodiment of the present invention. In an embodiment of the present invention, a convolutional neural network (CNN), for example, may be used as the machine learning algorithm for the individual defect classification models.
In operation 110, sample images whose defect types are known are prepared. A sample image is a photographed image of a product such as a display panel or a printed circuit board (PCB) that contains a defective portion and whose defect type has been classified. Fig. 2A shows an example of an original image including a defect. In general, deep learning methods achieve high learning efficiency and classification accuracy only when a sufficient number of such sample images exist. Embodiments of the present invention, however, aim to improve learning efficiency and classification performance using only a small number of sample images.
When the sample images are prepared, operations 120 to 140 below are performed for each sample image. In some embodiments, sample images may be added after the individual defect classification models have been generated; in that case, operations 120 to 140 are performed for the added sample images.
In operation 120, first to Nth images (where N is a natural number greater than or equal to 2) that differ from one another are obtained from a sample image through a certain preprocessing process. Hereinafter, for convenience, the sample image is referred to as the original image. The first to Nth images may include the original image and a plurality of preprocessed images obtained by preprocessing the original image.
The preprocessed images may include, for example: a grayscale image obtained by converting the original image to gray levels; a spot image, in which a spot region corresponding to the defective portion and the spot-free region are separated by a process of removing the repeated pattern from the grayscale image; an original cropped image obtained by cropping from the original image a certain region that includes at least part of the spot region; a grayscale cropped image obtained by cropping such a region from the grayscale image; a spot cropped image obtained by cropping such a region from the spot image; and the like.
Further, the spot image may include, for example: a binary image in which the spot region and the spot-free region are divided into two values; a MAP image in which the outline of the repeated pattern remains in the spot-free region; a MAP2 image in which the spot region is represented by two or more pixel values based on certain thresholds; and the like.
Images obtained by cropping from the binary image, the MAP image, and the MAP2 image a certain region that includes at least part of the spot region will be referred to as a binary cropped image, a MAP cropped image, and a MAP2 cropped image, respectively.
Detailed examples of methods of generating the above grayscale image, binary image, MAP image, MAP2 image, original cropped image, grayscale cropped image, binary cropped image, MAP cropped image, and MAP2 cropped image are described below.
Gray scale image
The pixel information of the original image consists of R, G, and B channel pixel values. A pixel value of the grayscale image may be represented as a weighted sum of the R, G, and B channel pixel values of the original image. When the pixel values are unevenly distributed, contrast can be improved as needed by spreading them uniformly over the range 0 to 255 through histogram equalization.
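The weighted-sum conversion above can be sketched as follows. The patent does not give the weights, so the ITU-R BT.601 coefficients used here are an assumption:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to grayscale as a weighted sum of
    the R, G, and B channels. The BT.601 weights (0.299, 0.587, 0.114)
    are an assumption; the text only says 'weighted sum'."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)
```

A pure-white pixel maps to 255 because the weights sum to 1; histogram equalization, if needed, would be applied to this result.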
To generate the binary image, MAP image, and MAP2 image, in which the spot region is separated from the spot-free region, the repeated pattern is removed from the grayscale image as follows. This embodiment of the present invention assumes that a repeated pattern appears everywhere in the image except at the spots. For example, referring to FIG. 2A, a repeating pattern can be seen everywhere except for the spot portion near the center.
First, the period of the repeated pattern is detected from the (grayscale) image. For this, a region large enough to completely contain one full pattern, for example 1/10 of the image width and 1/4 of its height, is cut from the upper-left corner of the image. The cut region is moved along the width and height directions, and when the portion of the image most similar to the cut region is found, the corresponding width and height offsets are taken as the period of the repeated pattern.
Next, for each pixel of the image, the pixel shifted from it by one period is defined as its M pixel, and a gray image of the same size as the image with every pixel value equal to 128 is generated. For each pixel, the value abs(pixel - M(pixel)) is computed and subtracted from 128, the pixel value of the corresponding pixel of the gray image; the resulting image is used as the reference image. For a repeated pattern, abs(pixel - M(pixel)) is zero or relatively small (because the pixel and its M pixel have equal or similar values), whereas for a non-repeating portion, i.e., the spot portion, it is relatively large. Thus, in the reference image, pixel values in the spot portion are relatively small, while pixel values in spot-free portions are relatively large, approaching 128.
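The reference-image construction above can be sketched as follows. The wrap-around border handling (via `np.roll`) is an assumption; the text does not say how pixels near the border, which have no M pixel inside the image, are treated:

```python
import numpy as np

def reference_image(gray, period):
    """Build the reference image: for each pixel p,
    reference[p] = 128 - abs(gray[p] - gray[p shifted by one period]).
    `period` is the detected (dy, dx) of the repeating pattern.
    Borders wrap around (an assumption)."""
    dy, dx = period
    g = gray.astype(np.int16)
    shifted = np.roll(g, shift=(dy, dx), axis=(0, 1))
    diff = np.clip(np.abs(g - shifted), 0, 128)
    return (128 - diff).astype(np.uint8)
```

On a perfectly periodic image the reference image is uniformly 128; a defect pixel drives the value toward 0.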
The reference image may be used to generate a binary image, a MAP image, and a MAP2 image.
Binary image
Based on a certain threshold (e.g., 70), each pixel of the reference image whose value is smaller than the threshold, i.e., a spot pixel, is converted to white (pixel value 0), and each remaining pixel is converted to black (pixel value 255). By this process, a binary image is obtained in which the spot region is represented as white and the spot-free region as black. Fig. 2B shows an example of a binary image obtained from the sample image of fig. 2A.
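The thresholding step can be sketched as follows, keeping the document's convention that the spot region carries pixel value 0:

```python
import numpy as np

def binarize(ref, threshold=70):
    """Threshold the reference image: pixels below `threshold` form the
    spot region (value 0, 'white' in the patent's wording); all other
    pixels become 255. The example threshold 70 follows the text."""
    return np.where(ref < threshold, 0, 255).astype(np.uint8)
```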
MAP image
Based on a certain threshold (e.g., 70), each pixel of the reference image whose value is smaller than the threshold, i.e., a spot pixel, is converted to black (pixel value 255), and each remaining pixel is converted to (128 - 'pixel value of the reference image'), which preserves the outline of the repeated pattern. By this process, a MAP image is obtained in which the spot region is black and the outline of the repeated pattern remains in the spot-free region. Fig. 2C shows an example of a MAP image obtained from the sample image of fig. 2A.
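The MAP-image rule can be sketched as follows (same thresholding as the binary image, but spot-free pixels keep the pattern outline as 128 minus the reference value):

```python
import numpy as np

def map_image(ref, threshold=70):
    """Spot pixels (ref < threshold) become 255; other pixels become
    128 - ref, leaving a faint outline of the repeating pattern."""
    outline = np.clip(128 - ref.astype(np.int16), 0, 255).astype(np.uint8)
    return np.where(ref < threshold, 255, outline).astype(np.uint8)
```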
MAP2 image
Each pixel of the reference image is converted into a different pixel value per interval, based on several thresholds (e.g., 10, 30, 50, and 70). For example, a pixel value of 10 or less is converted to 255 (black); a value from 10 to 30, to 230; from 30 to 50, to 210; from 50 to 70, to 190; and greater than 70, to 128 (gray). By this process, a MAP2 image is obtained in which the spot-free region is represented as gray and the spot region as black or dark gray. Fig. 2D shows an example of a MAP2 image obtained from the sample image of fig. 2A.
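The banded quantization above can be sketched as follows, using the example thresholds from the text:

```python
import numpy as np

def map2_image(ref):
    """Quantize the reference image into the bands given in the text:
    <=10 -> 255, 10-30 -> 230, 30-50 -> 210, 50-70 -> 190, >70 -> 128."""
    bands = [(10, 255), (30, 230), (50, 210), (70, 190)]
    out = np.full(ref.shape, 128, dtype=np.uint8)   # default: spot-free gray
    remaining = np.ones(ref.shape, dtype=bool)
    for upper, value in bands:
        mask = remaining & (ref <= upper)
        out[mask] = value
        remaining &= ~mask
    return out
```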
The binary cropped image, original cropped image, grayscale cropped image, MAP cropped image, and MAP2 cropped image are obtained by cropping from the binary image, original image, grayscale image, MAP image, and MAP2 image, respectively, a certain region that includes at least part of the spot region. In an embodiment of the present invention, a single cropping region is determined and applied identically to the original image, grayscale image, binary image, MAP image, and MAP2 image.
The cropping region may be determined by extracting the spots from the binary image, taking the center coordinates of the largest spot, and selecting a certain region above, below, left, and right of the center coordinates. For example, when 256 pixels on each side of the center coordinates are taken, a cropped image of size 512 x 512 is generated. Fig. 2E shows an example of a grayscale cropped image obtained by cropping from the grayscale image a region that includes at least part of the spot region.
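The window selection can be sketched as follows. The text only gives the 256-pixel example; clamping the window when the center lies near the border is an assumption:

```python
import numpy as np

def crop_around(image, center, half=256):
    """Crop a (2*half) x (2*half) window around `center` = (cy, cx),
    clamping the window so it stays inside the image (border handling
    is an assumption; the patent only gives the 256-pixel example)."""
    cy, cx = center
    h, w = image.shape[:2]
    y0 = min(max(cy - half, 0), max(h - 2 * half, 0))
    x0 = min(max(cx - half, 0), max(w - 2 * half, 0))
    return image[y0:y0 + 2 * half, x0:x0 + 2 * half]
```

The same `(y0, x0)` window would be applied to the original, grayscale, binary, MAP, and MAP2 images so that all cropped images align.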
The spots (blobs) may be extracted from the binary image as follows. (1) Specify one pixel of the spot region in the binary image, i.e., any pixel whose value is 0. (2) Designate that pixel as a blob seed; each surrounding pixel whose value is 0 is assigned as a new blob seed, and each whose value is 255 is assigned to the blob contour. (3) Repeat step (2) until every blob seed lies within the blob contour or at the image boundary. (4) Extract the assigned blob seeds as one blob, and repeat steps (1) to (3) for pixels of the binary image not yet assigned as blob seeds. (5) Blobs of relatively small size (e.g., fewer than 5 pixels) may be regarded as noise and ignored.
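Steps (1) to (5) describe a connected-component flood fill, which can be sketched as follows. 4-connectivity is an assumption; the text does not specify the neighborhood:

```python
import numpy as np

def extract_blobs(binary, min_size=5):
    """Flood-fill connected components of 0-valued pixels (the spot
    region in the patent's convention), discarding components smaller
    than `min_size` pixels as noise, per step (5)."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] != 0 or seen[sy, sx]:
                continue
            stack, blob = [(sy, sx)], []       # step (1): pick a seed
            seen[sy, sx] = True
            while stack:                       # steps (2)-(3): grow seeds
                y, x = stack.pop()
                blob.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
                            and binary[ny, nx] == 0:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            if len(blob) >= min_size:          # step (4): emit; (5): filter
                blobs.append(blob)
    return blobs
```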
Meanwhile, feature values of the sample image may be obtained from the spots extracted from the binary image as described above and used as learning data. Thus, referring back to FIG. 1, in operation 130, a feature value set including two or more feature values is obtained using the spots extracted from the binary image.
The feature values may include values related to the number, shape, size, and pixel values of the spots, and combinations thereof. Detailed examples of the feature values are as follows.
Blob_area: area of the extracted blobs
blob_Brightness: average pixel value of the spot region (taken from the grayscale image)
Blobs_count: number of extracted blobs
Convex_hull: minimum convex polygon enclosing a blob
Maxblob_angle: inclination of the minimum rectangle enclosing the largest blob
maxblob_Perimeter: perimeter of the largest blob
Maxblob_diameter: diameter of the largest blob
Maxblob_length_1st: length of the long side of the minimum rectangle enclosing the largest blob
Maxblob_length_2nd: length of the short side of the minimum rectangle enclosing the largest blob
Maxblob_roughness: roughness of the largest blob's perimeter (blob perimeter / hull perimeter)
Maxblob_stability: convexity of the largest blob (blob area / hull area)
In addition, the average, standard deviation, minimum, and maximum of these feature values, hull features, and the like may be used.
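A few of the simpler feature values above can be sketched as follows. The axis-aligned bounding box stands in for the minimum rotated rectangle, which the patent lists but which takes more geometry; the function and its return keys are illustrative:

```python
import numpy as np

def blob_features(blobs, gray):
    """Compute a subset of the listed features from extracted blobs
    (lists of (y, x) pixels): count, total area, mean brightness of the
    spot region, and the side lengths of the largest blob's bounding
    box (an axis-aligned stand-in for the minimum rectangle)."""
    if not blobs:
        return {"Blobs_count": 0, "Blob_area": 0, "blob_Brightness": 0.0,
                "Maxblob_length_1st": 0, "Maxblob_length_2nd": 0}
    pixels = [gray[y, x] for b in blobs for (y, x) in b]
    largest = max(blobs, key=len)
    ys = [y for y, _ in largest]
    xs = [x for _, x in largest]
    side_a = max(ys) - min(ys) + 1
    side_b = max(xs) - min(xs) + 1
    return {"Blobs_count": len(blobs),
            "Blob_area": sum(len(b) for b in blobs),
            "blob_Brightness": float(np.mean(pixels)),
            "Maxblob_length_1st": max(side_a, side_b),
            "Maxblob_length_2nd": min(side_a, side_b)}
```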
Next, in operation 140, first to Nth image sets are obtained by data augmentation from the first to Nth images obtained in operation 120. The data augmentation may include rotation, scaling, left-right flipping, up-down flipping, horizontal shifting, vertical shifting, combinations thereof, and the like. For the nth image (n = 1, ..., N) obtained from a sample image, an nth image set containing several to several tens of images, including the nth image itself, can be generated by data augmentation.
For example, from the original image, original cropped image, binary image, binary cropped image, grayscale image, grayscale cropped image, MAP image, MAP cropped image, MAP2 image, and MAP2 cropped image, an original image set, original cropped image set, binary image set, binary cropped image set, grayscale image set, grayscale cropped image set, MAP image set, MAP cropped image set, MAP2 image set, and MAP2 cropped image set may be generated, respectively.
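One augmentation pass using the listed operations can be sketched as follows. The 8-pixel shift amount is an assumption; `np.roll` wraps pixels around, which is one of several possible shift conventions:

```python
import numpy as np

def augment(image):
    """Generate an image set from one preprocessed image using the
    listed operations: 90-degree rotations, left-right and up-down
    flips, and horizontal/vertical shifts (8 pixels, an assumption)."""
    out = [image]
    for k in (1, 2, 3):
        out.append(np.rot90(image, k))      # rotations
    out.append(np.fliplr(image))            # left-right flip
    out.append(np.flipud(image))            # up-down flip
    out.append(np.roll(image, 8, axis=1))   # horizontal shift
    out.append(np.roll(image, 8, axis=0))   # vertical shift
    return out
```

Combining these operations (e.g., a rotation followed by a shift) expands one image into the "several to several tens" mentioned above.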
When operations 120 to 140 have been performed for all sample images, in operation 150, for each of the first to Nth image sets obtained from the sample images, the nth defect classification model (n = 1, ..., N) is generated by machine learning, with the nth image set as the learning-data input and the defect type of the corresponding sample image as the learning-data output, thereby generating the first to Nth defect classification models.
For example, a first defect classification model may be generated by machine learning using the original image set, a second using the original cropped image set, a third using the binary image set, a fourth using the binary cropped image set, a fifth using the grayscale image set, a sixth using the grayscale cropped image set, a seventh using the MAP image set, an eighth using the MAP cropped image set, a ninth using the MAP2 image set, and a tenth using the MAP2 cropped image set.
In operation 160, the (N+1)th defect classification model is generated by machine learning, with the feature value sets obtained from the sample images in operation 130 as the learning-data input and the defect types of the corresponding sample images as the learning-data output.
For example, an eleventh defect classification model may be generated by machine learning using the feature value sets.
When a target image whose defect type is to be classified is given, the defect type of the target image may be classified using the first to (N+1)th defect classification models generated through the process of fig. 1.
Fig. 3 is a flowchart illustrating a process of generating an ensemble model by machine learning using the probability values for each defect type obtained from the individual defect classification models generated by the process of fig. 1, in a machine learning based automatic defect classification method according to an embodiment of the present invention. In an embodiment of the present invention, a multilayer perceptron (MLP), one of the artificial neural network classifiers, may be used as the machine learning algorithm for the ensemble model.
Referring to fig. 3, the following operations 310 and 320 are performed with respect to each sample image.
In operation 310, a probability value for each defect type is obtained from each of the first to Nth defect classification models by inputting each of the first to Nth images, obtained from the sample image via the preprocessing process, into the corresponding model. When there are four defect types, four probability values are obtained from each of the first to Nth defect classification models.
Further, in operation 320, a probability value for each defect type is obtained from the (N+1)th defect classification model by inputting the feature value set obtained from the sample image into the (N+1)th defect classification model. Likewise, when there are four defect types, four probability values are obtained from the (N+1)th defect classification model.
When operations 310 and 320 have been performed for all sample images, in operation 330, the ensemble model is generated by machine learning, with the probability values for each defect type obtained from the first to (N+1)th defect classification models for the sample images as the learning-data input and the defect types of the corresponding sample images as the learning-data output.
Fig. 4 shows an example of the structure of each of the respective defect classification models generated by the process of fig. 1 and the ensemble model generated by the process of fig. 3.
Referring to fig. 4, four probability values are obtained for each of the eleven defect classification models with respect to a sample image, giving forty-four probability values in total. The ensemble model is generated by performing machine learning with the probability values for each defect type obtained from the eleven defect classification models for all of the sample images.
When a target image whose defect type is to be classified is given, the defect type of the target image may be classified using the first to (N+1)-th defect classification models generated through the process of fig. 1 and the ensemble model generated through the process of fig. 3.
Fig. 5 is a flowchart illustrating a process of classifying defect types of a target image using an individual defect classification model and an ensemble model in an automatic defect classification method based on machine learning according to an embodiment of the present invention.
In operation 510, first to Nth images different from each other are obtained from the target image through the same preprocessing process as that performed with respect to the sample images.
Specifically, the first to Nth images obtained from the target image may include an original image, an original cropped image, a binary cropped image, a gray-scale cropped image, a MAP cropped image, a MAP2 image, and a MAP2 cropped image.
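The preprocessing variants above can be sketched with a few numpy transforms: gray-scaling, binarizing the spot region, and cropping a fixed window around the spot. The luma weights, threshold, and crop size below are illustrative assumptions; a production pipeline would use the patent's actual pattern-removal step rather than a plain threshold:

```python
import numpy as np

def to_gray(rgb):
    """Gray-scale image from an RGB image (Rec.601 luma weights, an
    assumption here; the patent only specifies gray-scaling)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binary_map(gray, thresh=0.5):
    """Binary image dividing the spot region (1) from the spot-free region (0)."""
    return (gray > thresh).astype(np.uint8)

def crop_around_spot(img, size=8):
    """Crop a fixed-size region so that at least part of the spot region
    (the nonzero pixels) is included."""
    ys, xs = np.nonzero(img if img.ndim == 2 else img.any(axis=-1))
    cy, cx = (int(ys.mean()), int(xs.mean())) if len(ys) else (size, size)
    y0 = max(0, min(cy - size // 2, img.shape[0] - size))
    x0 = max(0, min(cx - size // 2, img.shape[1] - size))
    return img[y0:y0 + size, x0:x0 + size]

rgb = np.zeros((32, 32, 3)); rgb[10:14, 20:24] = 1.0   # toy image, one bright spot
gray = to_gray(rgb)
binary = binary_map(gray)
print(int(binary.sum()), crop_around_spot(binary).shape)   # 16 (8, 8)
```

Each such variant (original, gray-scale, binary, and their crops) would feed its own individual defect classification model.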
In operation 520, a feature value set is obtained from the target image through the same feature value extraction process as that performed with respect to the sample images.
In operation 530, a probability value for each defect type is obtained with respect to each of the first through (N+1)-th defect classification models by inputting the first through Nth images and the feature value set obtained from the target image into the first through Nth defect classification models and the (N+1)-th defect classification model, respectively.
In operation 540, the defect type of the target image is determined by inputting the probability values for each defect type, obtained with respect to each of the first through (N+1)-th defect classification models, into the ensemble model.
To describe the process of classifying the defect type of the target image with reference to fig. 4: four probability values are obtained with respect to each of the eleven defect classification models, giving forty-four probability values in total for the target image. When these probability values are input into the ensemble model, the defect type of the target image is determined.
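The inference path just described can be sketched end to end: concatenate the per-type probabilities of the eleven models for the target image, then let the ensemble map them to one defect type. Both the individual models and the ensemble are stubbed here (the trained MLP is replaced by a per-type average purely for illustration):

```python
import numpy as np

NUM_MODELS, NUM_TYPES = 11, 4
rng = np.random.default_rng(2)

def model_probs(model_id, target_input):
    """Stand-in for one model's four per-type probabilities (a real system
    would run the trained CNN or feature-based classifier here)."""
    z = rng.random(NUM_TYPES)
    return z / z.sum()

def ensemble_predict(prob_vector):
    """Stand-in ensemble: the trained MLP would map the 44 probabilities to a
    final per-type distribution; averaging per type is an assumption here."""
    per_type = prob_vector.reshape(NUM_MODELS, NUM_TYPES).mean(axis=0)
    return int(np.argmax(per_type))       # index of the determined defect type

target_input = np.zeros((32, 32))         # placeholder preprocessed target image
probs = np.concatenate([model_probs(i, target_input) for i in range(NUM_MODELS)])
defect_type = ensemble_predict(probs)
print(len(probs), defect_type)            # 44 probabilities -> one defect type
```

Swapping the stubs for the trained individual models and the trained MLP yields the full classification procedure of operations 510-540.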
According to the embodiments of the present invention, since various images are obtained from a sample image through certain preprocessing processes and an individual defect classification model is generated for each image, learning efficiency and classification performance can be improved even when only a small number of sample images is available.
Further, the ensemble model is generated using the probability values for each defect type obtained from each individual defect classification model as learning data, so that classification performance at least as good as that of the best-performing individual defect classification model can be ensured.
Meanwhile, the above-described embodiments of the present invention may be written as programs that can be executed by computers and may be implemented in general-purpose digital computers that execute the programs using a computer-readable recording medium. The computer-readable recording medium includes storage media such as magnetic storage media (e.g., read-only memory (ROM), floppy disks, hard disks, etc.) and optically readable media (e.g., compact disc read-only memory (CD-ROM), digital versatile discs (DVD), etc.).
Embodiments of the invention may include functional block elements and various processing operations. The functional blocks may be implemented using various numbers of hardware and/or software components that perform the specified functions. For example, embodiments may employ integrated circuit components, such as memory elements, processing elements, logic elements, look-up tables, or the like, which are capable of performing a variety of functions under the control of one or more microprocessors or other control devices. Similarly to how the components of the present invention may be implemented by software programming or software elements, embodiments may be implemented by a programming or scripting language, such as C, C++, Java, or assembler, including various algorithms implemented with data structures, processes, routines, or combinations of other programming components. The functional aspects may be implemented by algorithms executed by one or more processors. Further, the embodiments may employ conventional techniques for electronic environment settings, signal processing, data processing, and the like. Terms such as "mechanism," "element," "device," and "component" may be used broadly and are not limited to mechanical and physical components. These terms may include the meaning of a series of software routines associated with a processor or the like.
The particular implementations described as examples in the embodiments do not limit the scope of the embodiments in any way. Descriptions of conventional electronic components, control systems, software, and other functional aspects of the systems may be omitted to simplify the description. Furthermore, the connections or connecting means of the lines between the components shown in the figures are examples of functional and/or physical or circuit connections, and may be presented as various alternative or additional functional, physical, and circuit connections in an actual apparatus. Furthermore, unless specifically described with terms such as "necessary" or "significant," a component may not be essential for the practice of the present invention.
Exemplary embodiments of the present invention have been described above. It will be appreciated by those skilled in the art that modifications may be made without departing from the essential characteristics of the invention. Accordingly, the described embodiments should be considered in a descriptive sense only and not for purposes of limitation. The scope of the invention is defined by the claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (9)

1. An automatic defect classification method based on machine learning, comprising:
preparing a sample image having a known defect type;
with respect to each of the sample images, obtaining first to Nth images different from each other from the sample image through a certain preprocessing process, where N is a natural number of 2 or more, and obtaining first to Nth image sets from the first to Nth images through data enhancement;
generating first to Nth defect classification models by, for each of the first to Nth image sets obtained with respect to the sample images, performing machine learning with an nth image set, where n = 1, ..., N, as learning data input and a defect type of the corresponding sample image as learning data output to generate an nth defect classification model; and
receiving a target image and classifying a defect type of the target image using the first through Nth defect classification models,
wherein the first through Nth images include two or more of the following images: the sample image; a gray-scale image obtained by converting the sample image to gray scale; a spot image in which a spot region and a spot-free region are divided through a process of removing a repeated pattern from the gray-scale image; an original cropped image obtained by cropping a certain region from the sample image so as to include at least a part of the spot region therein; a gray-scale cropped image obtained by cropping a certain region from the gray-scale image so as to include at least a part of the spot region therein; and a spot cropped image obtained by cropping a certain region from the spot image so as to include at least a part of the spot region therein.
2. The method of claim 1, wherein the spot image comprises at least one of the following images: a binary image in which the spot region and the spot-free region are binary-divided; a MAP image in which the outline of the repeated pattern remains in the spot-free region; and a MAP2 image in which the spot region is represented by two or more pixel values.
3. An automatic defect classification method based on machine learning, comprising:
preparing a sample image having a known defect type;
with respect to each of the sample images, obtaining first to Nth images different from each other from the sample image through a certain preprocessing process, where N is a natural number of 2 or more, and obtaining first to Nth image sets from the first to Nth images through data enhancement;
generating first to Nth defect classification models by, for each of the first to Nth image sets obtained with respect to the sample images, performing machine learning with an nth image set, where n = 1, ..., N, as learning data input and a defect type of the corresponding sample image as learning data output to generate an nth defect classification model;
receiving a target image and classifying a defect type of the target image using the first through Nth defect classification models;
with respect to each of the sample images, obtaining a probability value for each defect type for each of the first to Nth defect classification models by inputting each of the first to Nth images obtained from the sample image into the first to Nth defect classification models; and
generating an ensemble model by performing machine learning with the probability values for each defect type of each of the first to Nth defect classification models obtained with respect to the sample images as learning data input and the defect types of the corresponding sample images as learning data output,
wherein classifying the defect type of the target image includes further classifying the defect type of the target image using the ensemble model.
4. A method according to claim 3, wherein classifying the defect type of the target image comprises:
obtaining first to Nth images from the target image through a certain preprocessing process;
obtaining a probability value for each defect type of each of the first to Nth defect classification models by inputting each of the first to Nth images obtained from the target image into the first to Nth defect classification models; and
determining a defect type of the target image by inputting the probability value of each defect type obtained for each of the first to Nth defect classification models into the ensemble model.
5. An automatic defect classification method based on machine learning, comprising:
preparing a sample image having a known defect type;
with respect to each of the sample images, obtaining first to Nth images different from each other from the sample image through a certain preprocessing process, where N is a natural number of 2 or more, and obtaining first to Nth image sets from the first to Nth images through data enhancement;
generating first to Nth defect classification models by, for each of the first to Nth image sets obtained with respect to the sample images, performing machine learning with an nth image set, where n = 1, ..., N, as learning data input and a defect type of the corresponding sample image as learning data output to generate an nth defect classification model;
receiving a target image and classifying a defect type of the target image using the first through Nth defect classification models;
obtaining, for each of the sample images, a set of feature values comprising two or more feature values from the sample image; and
generating an (N+1)-th defect classification model by performing machine learning with the sets of feature values with respect to the sample images as learning data input and the defect type of the corresponding sample image as learning data output,
wherein classifying the defect type of the target image includes further classifying the defect type of the target image using the (N+1)-th defect classification model.
6. The method of claim 5, wherein the feature values comprise two or more of values related to the number, shape, size, and pixel values of blobs extracted from the sample image, and combinations thereof.
7. The method of claim 5, further comprising:
with respect to each of the sample images, obtaining a probability value for each defect type of each of the first to Nth defect classification models and the (N+1)-th defect classification model by inputting each of the first to Nth images obtained from the sample image and the feature value set into the first to (N+1)-th defect classification models; and
generating an ensemble model by performing machine learning with the probability value for each defect type of each of the first to (N+1)-th defect classification models obtained with respect to the sample images as learning data input and the defect type of the corresponding sample image as learning data output,
wherein classifying the defect type of the target image includes further classifying the defect type of the target image using the ensemble model.
8. The method of claim 7, wherein classifying a defect type of the target image comprises:
obtaining first to Nth images and a set of feature values from the target image through a certain preprocessing process;
obtaining a probability value for each defect type of each of the first to Nth defect classification models and the (N+1)-th defect classification model by inputting each of the first to Nth images obtained from the target image and the feature value set into the first to (N+1)-th defect classification models; and
determining a defect type of the target image by inputting the probability value of each defect type obtained for each of the first to (N+1)-th defect classification models into the ensemble model.
9. A computer-readable recording medium in which a program for executing the machine learning-based automatic defect classification method according to any one of claims 1 to 8 is recorded.
CN201910537298.2A 2018-06-20 2019-06-20 Automatic defect classification method based on machine learning Active CN110619343B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0070845 2018-06-20
KR1020180070845A KR102154393B1 (en) 2018-06-20 2018-06-20 Automated defect classification method based on machine learning

Publications (2)

Publication Number Publication Date
CN110619343A CN110619343A (en) 2019-12-27
CN110619343B true CN110619343B (en) 2023-06-06

Family

ID=68921528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910537298.2A Active CN110619343B (en) 2018-06-20 2019-06-20 Automatic defect classification method based on machine learning

Country Status (2)

Country Link
KR (1) KR102154393B1 (en)
CN (1) CN110619343B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102284155B1 (en) * 2020-01-22 2021-07-29 에스케이 주식회사 Method and system using deep learning of multi-view image
TWI748344B (en) * 2020-02-14 2021-12-01 聚積科技股份有限公司 Establishing Method of Standard Judgment Model for LED Screen Adjustment
KR20210133090A (en) * 2020-04-28 2021-11-05 삼성전자주식회사 Electronic device for providing information realted to failure of product and method for operating thereof
CN111784670B (en) * 2020-06-30 2022-05-20 深圳赛安特技术服务有限公司 Hot rolled steel plate surface defect identification method and device based on computer vision
CN112085722B (en) * 2020-09-07 2024-04-09 凌云光技术股份有限公司 Training sample image acquisition method and device
KR102296511B1 (en) * 2020-11-17 2021-09-02 주식회사 트윔 Training data generating apparatus, method for product inspection and product inspection appratus using the training data
KR102335013B1 (en) * 2020-12-21 2021-12-03 (주)위세아이텍 Apparatus and method for failure mode classification of rotating equipment based on deep learning denoising model
KR20220105689A (en) * 2021-01-20 2022-07-28 주식회사 솔루션에이 System for determining defect of display panel based on machine learning model
KR20220152924A (en) * 2021-05-10 2022-11-17 현대자동차주식회사 Method and apparatus for augmenting data in machine to machine system
KR20220161601A (en) * 2021-05-27 2022-12-07 주식회사 솔루션에이 System for determining defect of image inspection target using deep learning model
KR102520759B1 (en) * 2021-07-12 2023-04-13 울산과학기술원 Apparatus and method for predicting and providing surface roughness of product to be moulded, and apparatus and method for predicting and providing process condition, using artificial intelligence
KR102469219B1 (en) * 2022-05-27 2022-11-23 국방과학연구소 Abnormal data detection method and electronic device therefor

Citations (9)

Publication number Priority date Publication date Assignee Title
TW200411324A (en) * 2002-12-20 2004-07-01 Taiwan Semiconductor Mfg Method and system of defect image classification
CN104778684A (en) * 2015-03-06 2015-07-15 江苏大学 Method and system thereof for automatically measuring, representing and classifying heterogeneous defects on surface of steel
CN105283884A (en) * 2013-03-13 2016-01-27 柯法克斯公司 Classifying objects in digital images captured using mobile devices
CN106384074A (en) * 2015-07-31 2017-02-08 富士通株式会社 Detection apparatus of pavement defects and method thereof, and image processing equipment
CN106650823A (en) * 2016-12-30 2017-05-10 湖南文理学院 Probability extreme learning machine integration-based foam nickel surface defect classification method
CN107561738A (en) * 2017-08-30 2018-01-09 湖南理工学院 TFT LCD surface defect quick determination methods based on FCN
CN107831173A (en) * 2017-10-17 2018-03-23 哈尔滨工业大学(威海) Photovoltaic component defect detection method and system
CN108038843A (en) * 2017-11-29 2018-05-15 英特尔产品(成都)有限公司 A kind of method, apparatus and equipment for defects detection
CN108154498A (en) * 2017-12-06 2018-06-12 深圳市智能机器人研究院 A kind of rift defect detecting system and its implementation

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US10650508B2 (en) * 2014-12-03 2020-05-12 Kla-Tencor Corporation Automatic defect classification without sampling and feature selection
US9898811B2 (en) * 2015-05-08 2018-02-20 Kla-Tencor Corporation Method and system for defect classification
US9922269B2 (en) * 2015-06-05 2018-03-20 Kla-Tencor Corporation Method and system for iterative defect classification
KR101759496B1 (en) * 2015-08-20 2017-07-19 충북대학교 산학협력단 System and Method for Classification of PCB fault and Type of Fault


Non-Patent Citations (3)

Title
Solder joint defect classification based on ensemble learning; Hao Wu et al.; Soldering and Surface Mount Technology; Jun. 2017; Vol. 29, No. 3; pp. 1-8 *
Support vector machine recognition of steel plate surface defects based on image processing; Tang Bo et al.; China Mechanical Engineering; Jun. 2011; Vol. 22, No. 12; pp. 1402-1405 *
Strip steel edge defect recognition based on Gaussian pyramid and MLP; Wang Aifang et al.; Software Guide; Mar. 2016; Vol. 15, No. 2; pp. 105-108 *

Also Published As

Publication number Publication date
CN110619343A (en) 2019-12-27
KR20190143192A (en) 2019-12-30
KR102154393B1 (en) 2020-09-09

Similar Documents

Publication Publication Date Title
CN110619343B (en) Automatic defect classification method based on machine learning
US10007865B1 (en) Learning method and learning device for adjusting parameters of CNN by using multi-scale feature maps and testing method and testing device using the same
CN110678901B (en) Information processing apparatus, information processing method, and computer-readable storage medium
EP1884892B1 (en) Method, medium, and system compensating shadow areas
US7974471B2 (en) Method of generating a labeled image and image processing system with pixel blocks
CN108921152B (en) English character segmentation method and device based on object detection network
Hirata Jr et al. Segmentation of microarray images by mathematical morphology
CN107563476B (en) Two-dimensional code beautifying and anti-counterfeiting method
US9792507B2 (en) Method and system for ground truth determination in lane departure warning
CN113327252B (en) Method and system for object detection
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN117152165B (en) Photosensitive chip defect detection method and device, storage medium and electronic equipment
US6963664B2 (en) Segmentation of digital images
GB2385187A (en) Dynamic thresholding of binary images
KR102386930B1 (en) Apparatus for detecting edge of road image and method thereof
CN117372415A (en) Laryngoscope image recognition method, device, computer equipment and storage medium
US11954865B2 (en) Image processing apparatus, image processing method, and storage medium for foreground extraction
JP2008282280A (en) Two dimensional code reader and its method
CN111666811B (en) Method and system for extracting traffic sign board area in traffic scene image
CN114529570A (en) Image segmentation method, image identification method, user certificate subsidizing method and system
JP2019003534A (en) Image processing program, image processing apparatus, and image processing method
JP2011170882A (en) Two dimensional code reader and its method
US11900643B2 (en) Object detection method and object detection system
CN113870297B (en) Image edge detection method and device and storage medium
KR100683360B1 (en) Method of generating binary image data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20200113

Address after: Room 368, Part 302, No.211, North Fute Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: AMO information technology (Shanghai) Co.,Ltd.

Applicant after: KOREA University RESEARCH AND BUSINESS FOUNDATION

Address before: Gyeonggi-do, Korea

Applicant before: AMO Information Technology Co.,Ltd.

Applicant before: KOREA University RESEARCH AND BUSINESS FOUNDATION

GR01 Patent grant
GR01 Patent grant