CN110619343A - Automatic defect classification method based on machine learning - Google Patents


Info

Publication number
CN110619343A
CN110619343A
Authority
CN
China
Prior art keywords
image
nth
defect
defect classification
defect type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910537298.2A
Other languages
Chinese (zh)
Other versions
CN110619343B (en)
Inventor
安敏晶
李弘哲
李旻英
金明昭
边圣埈
廉镇燮
朴男材
朴曹荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amo Information Technology Shanghai Co ltd
Korea University Research and Business Foundation
Original Assignee
Amo Information Technology Co Ltd
Industry Academy Collaboration Foundation of Korea University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amo Information Technology Co Ltd, Industry Academy Collaboration Foundation of Korea University filed Critical Amo Information Technology Co Ltd
Publication of CN110619343A publication Critical patent/CN110619343A/en
Application granted granted Critical
Publication of CN110619343B publication Critical patent/CN110619343B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8854Grading and classifying of flaws
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

Disclosed is an automatic defect classification method based on machine learning, which comprises the following steps: preparing sample images having known defect types; for each sample image, obtaining first to Nth images (where N is a natural number of 2 or more) different from each other from the sample image through a certain preprocessing process, and obtaining first to Nth image sets from the first to Nth images by data augmentation; for each of the first to Nth image sets obtained from the sample images, generating the nth (n = 1, ..., N) defect classification model by machine learning with the nth image set as the learning data input and the defect type of the corresponding sample image as the learning data output, thereby generating first to Nth defect classification models; and receiving a target image and classifying the defect type of the target image using the first to Nth defect classification models.

Description

Automatic defect classification method based on machine learning
Cross Reference to Related Applications
The present application claims priority and benefit from Korean patent application No. 10-2018-0070845, filed on June 20, 2018, the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to an automatic defect classification method based on machine learning, and more particularly, to an automatic defect classification method in which a defect classification model is generated by machine learning using sample images of known defect types, and defect type classification is performed on a target image using the defect classification model.
Background
Techniques for classifying defects in images of products such as displays, panels, printed circuit boards (PCBs), and the like using machine learning are essential to intelligent factories. Since product defects can be analyzed in real time and their causes then eliminated by feeding the analyzed defect types back to the production system, the number of defects in a continuous process can be significantly reduced. Furthermore, the process can be optimized by analyzing the pattern of defects according to the operating conditions.
Recently, methods of classifying product defects have been studied using deep learning methods such as convolutional neural networks. In this deep learning method, when there is a sufficient amount of learning data, learning efficiency and classification accuracy increase. However, when the amount of learning data is small, learning efficiency and classification performance are degraded.
Disclosure of Invention
The present invention aims to provide an automatic defect classification method based on machine learning, which can improve learning efficiency and classification performance using only a small amount of learning data.
According to one aspect of the invention, an automatic defect classification method based on machine learning is provided, which comprises the following steps: preparing sample images having known defect types; for each sample image, obtaining first to Nth images (where N is a natural number of 2 or more) different from each other from the sample image through a certain preprocessing process, and obtaining first to Nth image sets from the first to Nth images by data augmentation; for each of the first to Nth image sets obtained from the sample images, generating the nth (n = 1, ..., N) defect classification model by machine learning with the nth image set as the learning data input and the defect type of the corresponding sample image as the learning data output, thereby generating first to Nth defect classification models; and receiving a target image and classifying the defect type of the target image using the first to Nth defect classification models.
The first to Nth images may include two or more of the following: the sample image; a grayscale image obtained by grayscaling the sample image; a speckle image in which a speckle region and a speckle-free region are separated, obtained by removing the repetitive pattern from the grayscale image; an original cropped image obtained by cropping a certain region from the sample image so that at least part of the speckle region is included therein; a grayscale cropped image obtained by cropping a certain region from the grayscale image so that at least part of the speckle region is included therein; and a speckle cropped image obtained by cropping a certain region from the speckle image so that at least part of the speckle region is included therein.
The speckle image may comprise at least one of the following: a binary image, in which the speckle region and the speckle-free region are represented by two values; a MAP image, in which the contours of the repetitive pattern are preserved in the speckle-free region; and a MAP2 image, in which the speckle region is represented by two or more pixel values.
The data augmentation may include two or more of rotation, enlargement, left-right flipping, up-down flipping, horizontal translation, and vertical translation of the image, and combinations thereof.
The method may further comprise: obtaining, for each of the sample images, a probability value of each defect type from each of the first to Nth defect classification models by inputting each of the first to Nth images obtained from the sample image into the first to Nth defect classification models; and generating an ensemble model by machine learning with the probability values of each defect type obtained from the first to Nth defect classification models for the sample image as the learning data input and the defect type of the corresponding sample image as the learning data output. Here, classifying the defect type of the target image may include further classifying the defect type of the target image using the ensemble model.
Classifying the defect type of the target image may include: obtaining first to Nth images from the target image through a certain preprocessing process; obtaining a probability value of each defect type from each of the first to Nth defect classification models by inputting each of the first to Nth images obtained from the target image into the first to Nth defect classification models; and determining the defect type of the target image through the ensemble model by inputting the probability values of each defect type obtained from the first to Nth defect classification models into the ensemble model.
The method may further comprise: obtaining, for each of the sample images, a feature value set including two or more feature values from the sample image; and generating an (N+1)th defect classification model by machine learning with the feature value set of the sample image as the learning data input and the defect type of the corresponding sample image as the learning data output. Here, classifying the defect type of the target image may include further classifying the defect type of the target image using the (N+1)th defect classification model.
The feature values may include two or more of: values related to the number, shape, size, and pixel values of blobs extracted from the sample image, and combinations thereof.
The method may further comprise: obtaining, with respect to each of the sample images, a probability value of each defect type from each of the first to Nth defect classification models and the (N+1)th defect classification model by inputting each of the first to Nth images and the feature value set obtained from the sample image into the first to (N+1)th defect classification models; and generating an ensemble model by machine learning with the probability values of each defect type obtained from the first to (N+1)th defect classification models for the sample image as the learning data input and the defect type of the corresponding sample image as the learning data output. Here, classifying the defect type of the target image may include further classifying the defect type of the target image using the ensemble model.
Classifying the defect type of the target image may include: obtaining first to Nth images and a feature value set from the target image through a certain preprocessing process; obtaining a probability value of each defect type from each of the first to Nth defect classification models and the (N+1)th defect classification model by inputting each of the first to Nth images obtained from the target image and the feature value set into the first to (N+1)th defect classification models; and determining the defect type of the target image through the ensemble model by inputting the probability values of each defect type obtained from the first to (N+1)th defect classification models into the ensemble model.
According to another aspect of the present invention, there is provided a computer-readable recording medium in which a program for executing the above-described machine learning-based automatic defect classification method is recorded.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent to those of ordinary skill in the art by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
FIG. 1 is a flow diagram illustrating a process for generating an individual defect classification model by machine learning using sample images of known types in an automatic defect classification method based on machine learning according to one embodiment of the present invention;
FIG. 2A shows an example of a sample image including a defect;
FIG. 2B illustrates an example of a binary image obtained from the sample image of FIG. 2A;
FIG. 2C shows an example of a MAP image obtained from the sample image of FIG. 2A;
FIG. 2D shows an example of a MAP2 image obtained from the sample image of FIG. 2A;
FIG. 2E illustrates an example of a grayscale cropped image obtained from the sample image of FIG. 2A;
FIG. 3 is a flow diagram illustrating a process of generating an ensemble model by machine learning using the probability values of each defect type obtained from the individual defect classification models generated by the process of FIG. 1, in an automatic defect classification method based on machine learning according to one embodiment of the present invention;
FIG. 4 shows an example of the structure of each of the individual defect classification models generated by the process of FIG. 1 and the ensemble model generated by the process of FIG. 3; and
FIG. 5 is a flowchart illustrating a process of classifying the defect type of a target image using the individual defect classification models and the ensemble model, in an automatic defect classification method based on machine learning according to an embodiment of the present invention.
Detailed Description
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description and drawings, substantially identical elements are denoted by the same reference numerals, and repeated descriptions thereof are omitted. In addition, detailed descriptions of well-known functions or elements of the related art are omitted when they would obscure the essence of the present invention.
Fig. 1 is a flow diagram illustrating a process of generating an individual defect classification model through machine learning using sample images of a known type in an automatic defect classification method based on machine learning according to an embodiment of the present invention. In an embodiment of the present invention, as a machine learning algorithm for the individual defect classification model, for example, a Convolutional Neural Network (CNN) may be used.
In operation 110, sample images whose defect types are known are prepared. A sample image is an image of a product such as a display panel or printed circuit board (PCB) that contains a defective part whose defect type has already been classified. Fig. 2A illustrates an example of an original image including a defect. In general, a deep learning method achieves high learning efficiency and classification accuracy only when a sufficient number of such sample images are available. The embodiments of the present invention, however, aim to improve learning efficiency and classification performance using only a small number of sample images.
When the sample images are prepared, the following operations 120 to 140 are performed with respect to each sample image. In some embodiments, individual defect classification models may be generated, and then sample images may be added. In this case, the following operations 120 to 140 are performed with respect to the added sample image.
In operation 120, first to Nth images (where N is a natural number greater than or equal to 2) different from each other are obtained from the sample image through a certain preprocessing process. Hereinafter, the sample image is referred to as the original image for convenience. The first to Nth images may include the original image and a plurality of preprocessed images obtained by preprocessing it.
The preprocessed images may include, for example: a grayscale image obtained by grayscaling the original image; a speckle image in which a speckle region corresponding to the defective portion is separated from the speckle-free region in the process of removing the repetitive pattern from the grayscale image; an original cropped image obtained by cropping a certain region from the original image so that at least part of the speckle region is included therein; a grayscale cropped image obtained by cropping such a region from the grayscale image; a speckle cropped image obtained by cropping such a region from the speckle image; and the like.
Further, the speckle image may include, for example: a binary image, in which the speckle region and the speckle-free region are represented by two values; a MAP image, in which the contours of the repetitive pattern are preserved in the speckle-free region; a MAP2 image, in which the speckle region is represented by two or more pixel values based on certain thresholds; and so on.
Images obtained by cropping certain regions from the binary image, the MAP image, and the MAP2 image so that at least part of the speckle region is included in each will be referred to as the binary cropped image, MAP cropped image, and MAP2 cropped image, respectively.
A detailed example of a method of generating the above-described grayscale image, binary image, MAP image, and MAP2 image, and the original, grayscale, binary, MAP, and MAP2 cropped images, is described below.
Grayscale image
The pixel information of the original image includes pixel values for the R, G, and B channels. The pixel value of the grayscale image may be computed as a weighted sum of the R, G, and B channel values of the original image. When the pixel values of the image are unevenly distributed, the contrast can be improved, as needed, by spreading the pixel values uniformly over the range 0 to 255 through histogram equalization.
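The weighted-sum step above can be sketched as follows. The patent does not specify the channel weights, so the common ITU-R BT.601 luma coefficients are assumed here:

```python
# Minimal sketch of the grayscale conversion described above.
# The 0.299/0.587/0.114 weights are an assumption (BT.601), not from the patent.
def to_grayscale(rgb_image):
    """rgb_image: list of rows, each pixel an (R, G, B) tuple in 0-255."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

img = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
gray = to_grayscale(img)
```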
In order to generate the binary, MAP, and MAP2 images, in which the speckle region is separated from the speckle-free region, the repetitive pattern is removed from the grayscale image as follows. In this embodiment of the present invention, it is assumed that the repetitive pattern appears everywhere in the image except at the speckles. For example, referring to Fig. 2A, a repetitive pattern can be seen everywhere except at the speckle portion near the center.
First, the period of the repetitive pattern is detected from the (grayscale) image. To this end, a region large enough to completely contain one whole repeat of the pattern, for example 1/10 of the width and 1/4 of the length, is cut from the upper-left corner of the image. The cut region is then slid across the image in the width and length directions, and the offset at which the portion most similar to the cut region appears determines the period of the repetitive pattern in each direction.
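The sliding-match search can be illustrated in one dimension (a hypothetical sketch; the patent searches the width and length directions of a 2-D image, and the similarity measure is assumed here to be the sum of absolute differences):

```python
# Find the period of a repetitive 1-D signal: slide the leading patch across
# the signal and return the offset with the smallest sum of absolute differences.
def detect_period_1d(row, patch_len):
    patch = row[:patch_len]
    best_offset, best_cost = None, None
    for off in range(1, len(row) - patch_len + 1):
        cost = sum(abs(row[off + i] - patch[i]) for i in range(patch_len))
        if best_cost is None or cost < best_cost:
            best_offset, best_cost = off, cost
    return best_offset

row = [10, 200, 50, 10, 200, 50, 10, 200, 50]
period = detect_period_1d(row, 3)
```

Running the same search along the width and length of the grayscale image would yield the 2-D period used below.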
Next, for each pixel of the image, the pixel shifted from it by one period is defined as its M pixel, and a gray image of the same size as the image, with all pixels set to 128, is generated. The value abs(pixel − M(pixel)) is then calculated for each pixel of the image, and subtracting it from the value 128 of each corresponding pixel of the gray image yields an image that is used as the reference image. The value abs(pixel − M(pixel)) is 0 or relatively small for a repetitive-pattern pixel (because the pixel and its M pixel have equal or similar values) and relatively large for a non-pattern pixel, i.e., a speckle portion. Therefore, in the reference image, the pixel values of speckle portions are relatively small, while the pixel values of speckle-free portions are relatively large, close to 128.
A binary image, a MAP image, and a MAP2 image may be generated using the reference image.
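A minimal sketch of the reference-image computation, i.e., 128 − |pixel − M(pixel)| per pixel. Wrapping around at the image border is an assumption for brevity; the patent does not say how border pixels are handled:

```python
def reference_image(gray, period_x):
    """Reference image: 128 - |pixel - M(pixel)|, where M(pixel) is the pixel
    one horizontal period to the right (wrapping at the border, an assumption)."""
    h, w = len(gray), len(gray[0])
    return [
        [128 - abs(gray[y][x] - gray[y][(x + period_x) % w]) for x in range(w)]
        for y in range(h)
    ]

# A perfectly periodic row yields 128 everywhere; a defect pixel (90 instead
# of 10) produces small reference values at the mismatching positions.
ref = reference_image([[10, 200, 90, 200]], 2)
```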
Binary image
Based on a certain threshold (e.g., 70), each pixel of the reference image whose value is smaller than the threshold, i.e., a speckle portion, is converted to white (pixel value 0), and each pixel whose value is greater than or equal to the threshold is converted to black (pixel value 255). This processing yields a binary image in which the speckle region is represented as white and the speckle-free region as black. Fig. 2B illustrates an example of a binary image obtained from the sample image of Fig. 2A.
MAP image
Based on a certain threshold (e.g., 70), each pixel of the reference image whose value is smaller than the threshold, i.e., a speckle portion, is converted to black (pixel value 255), and each pixel whose value is greater than or equal to the threshold is converted to (128 − 'pixel value of the reference image'), which represents the contour of the repetitive pattern. This processing yields a MAP image in which the speckle region is black and the contours of the repetitive pattern are retained in the speckle-free region. Fig. 2C illustrates an example of a MAP image obtained from the sample image of Fig. 2A.
MAP2 image
Each pixel of the reference image is converted to a different pixel value per interval based on several thresholds (e.g., 10, 30, 50, and 70). For example, a pixel is converted to 255 (black) when its value is less than or equal to 10, to 230 when it is between 10 and 30, to 210 when between 30 and 50, to 190 when between 50 and 70, and to 128 (gray) when greater than 70. This processing yields a MAP2 image in which the speckle-free region is represented as gray and the speckle region as black or dark gray. Fig. 2D shows an example of a MAP2 image obtained from the sample image of Fig. 2A.
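The three conversions of the reference image described above can be sketched together (pixel-value conventions, including white = 0 for the binary speckle region, follow the text as given; the example thresholds are those quoted):

```python
def binarize(ref, threshold=70):
    # speckle pixels (below threshold) -> 0 ("white" in the patent's convention),
    # everything else -> 255
    return [[0 if p < threshold else 255 for p in row] for row in ref]

def map_image(ref, threshold=70):
    # speckle pixels -> 255 (black); elsewhere 128 - ref, which equals
    # |pixel - M(pixel)| and so traces the contour of the repetitive pattern
    return [[255 if p < threshold else 128 - p for p in row] for row in ref]

def map2_image(ref):
    # multi-level quantization with the example thresholds 10, 30, 50, 70
    def level(p):
        if p <= 10:
            return 255
        if p <= 30:
            return 230
        if p <= 50:
            return 210
        if p <= 70:
            return 190
        return 128
    return [[level(p) for p in row] for row in ref]
```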
The binary, original, grayscale, MAP, and MAP2 cropped images are obtained by cropping certain regions from the binary, original, grayscale, MAP, and MAP2 images, respectively, so that at least part of the speckle region is included in each. In this embodiment of the present invention, one cropping region is determined and then applied identically to the original, grayscale, MAP, and MAP2 images to obtain the cropped images.
The cropping region may be determined by extracting blobs from the binary image, taking the center coordinates of the largest extracted blob, and choosing certain extents above, below, left, and right of those center coordinates as the cropping region. For example, when 256 pixels on each side of the center coordinates are taken, a cropped image of size 512 × 512 is generated. Fig. 2E shows an example of a grayscale cropped image obtained by cropping such a region from the grayscale image so that at least part of the speckle region is included in it.
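The centered crop can be sketched as follows. Clamping the window to the image border is an assumption; the patent does not describe how a center near the edge is handled:

```python
def crop_around(image, cx, cy, half=256):
    """Crop a (2*half) x (2*half) window centred on (cx, cy).
    The window is clamped to the image border (an assumption, not from the patent)."""
    h, w = len(image), len(image[0])
    x0 = max(0, min(cx - half, w - 2 * half))
    y0 = max(0, min(cy - half, h - 2 * half))
    return [row[x0:x0 + 2 * half] for row in image[y0:y0 + 2 * half]]

# Tiny 4x4 example with half=1 (a 2x2 crop) instead of the 256-pixel extent.
img = [[y * 10 + x for x in range(4)] for y in range(4)]
crop = crop_around(img, 2, 2, half=1)
```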
The blobs may be extracted from the binary image as follows. (1) A random pixel of the speckle region in the binary image, i.e., a pixel with value 0, is designated. (2) The designated pixel is taken as a blob seed; each neighboring pixel with value 0 is assigned as a new blob seed, and each with value 255 as blob contour. (3) Step (2) is repeated until every blob seed lies within the blob contour or at the image boundary. (4) The assigned blob seeds are extracted as one blob, and steps (1) to (3) are repeated for pixels of the binary image not yet assigned as blob seeds. (5) Extracted blobs of relatively small size (e.g., fewer than 5 pixels) may be regarded as noise and disregarded.
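Steps (1) to (5) amount to a flood fill over the 0-valued pixels. A minimal sketch, assuming 4-connectivity (the patent does not state the connectivity used):

```python
from collections import deque

def extract_blobs(binary, min_size=5):
    """Flood-fill connected components of 0-valued (speckle) pixels.
    Blobs smaller than min_size are dropped as noise, as in step (5)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 0 and not seen[y][x]:
                blob, queue = [], deque([(x, y)])
                seen[y][x] = True
                while queue:
                    px, py = queue.popleft()
                    blob.append((px, py))
                    # 4-connected neighbours (an assumption)
                    for nx, ny in ((px + 1, py), (px - 1, py), (px, py + 1), (px, py - 1)):
                        if 0 <= nx < w and 0 <= ny < h \
                                and binary[ny][nx] == 0 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                if len(blob) >= min_size:
                    blobs.append(blob)
    return blobs
```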
Meanwhile, the feature values of the sample image may be obtained using the blobs extracted from the binary image as described above, and may be used as the learning data. Thus, referring back to fig. 1, in operation 130, a feature value set including two or more feature values is obtained using the blobs extracted from the binary image.
The feature values may include values related to the number, shape, size, and pixel values of the blobs, combinations thereof, and the like. Detailed examples of the feature values are as follows.
Blob_Area: area of the extracted blob
Blob_Brightness: average pixel value of the blob region (pixel values from the grayscale image)
Blobs_Count: number of extracted blobs
Convex_Hull: smallest convex polygon surrounding a blob
Maxblob_Angle: inclination of the smallest rectangle surrounding the largest blob
Maxblob_Perimeter: perimeter of the largest blob
Maxblob_Diameter: diameter of the largest blob
Maxblob_Length_1st: length of the long side of the smallest rectangle surrounding the largest blob
Maxblob_Length_2nd: length of the short side of the smallest rectangle surrounding the largest blob
Maxblob_Roughness: roughness of the largest blob's perimeter (blob perimeter / hull perimeter)
Maxblob_Solidity: solidity of the largest blob (blob area / hull area)
Mean, standard deviation, minimum, and maximum of the above feature values, hull features, and the like
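A few of the listed feature values can be sketched from the extracted blobs. This is a hypothetical helper: the axis-aligned bounding box stands in for the minimum (rotated) rectangle, so the exact values may differ from the patent's definitions:

```python
def feature_set(blobs):
    """Compute a small subset of the feature values above from extracted blobs
    (each blob a list of (x, y) pixel coordinates). Names mirror the list."""
    if not blobs:
        return {"Blobs_Count": 0}
    largest = max(blobs, key=len)
    xs = [p[0] for p in largest]
    ys = [p[1] for p in largest]
    # axis-aligned box as a stand-in for the minimum rotated rectangle (assumption)
    side1 = max(xs) - min(xs) + 1
    side2 = max(ys) - min(ys) + 1
    return {
        "Blobs_Count": len(blobs),
        "Maxblob_Area": len(largest),
        "Maxblob_Length_1st": max(side1, side2),
        "Maxblob_Length_2nd": min(side1, side2),
    }

feats = feature_set([[(0, 0), (1, 0), (2, 0)], [(5, 5)]])
```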
Next, in operation 140, the first to Nth image sets are obtained by data augmentation from the first to Nth images obtained in operation 120. Here, data augmentation may include rotation, enlargement, left-right flipping, up-down flipping, horizontal translation, and vertical translation of the image, combinations thereof, and the like. From the nth (n = 1, ..., N) image obtained from a sample image, an nth image set containing several to several tens of images, including the nth image itself, can be generated by data augmentation.
For example, from each of the original image, original cropped image, binary image, binary cropped image, grayscale image, grayscale cropped image, MAP image, MAP cropped image, MAP2 image, and MAP2 cropped image, a corresponding image set (the original image set, original cropped image set, binary image set, binary cropped image set, grayscale image set, grayscale cropped image set, MAP image set, MAP cropped image set, MAP2 image set, and MAP2 cropped image set) may be generated.
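Three of the listed augmentation operations can be sketched in a few lines (a minimal illustration; a real image set would combine rotations, flips, translations, and enlargements to produce several to several tens of images per input):

```python
def augment(image):
    """Generate a small image set from one image: the image itself,
    a 90-degree clockwise rotation, a left-right flip, and an up-down flip."""
    rot90 = [list(row) for row in zip(*image[::-1])]  # 90 deg clockwise
    lr = [row[::-1] for row in image]                 # left-right flip
    ud = image[::-1]                                  # up-down flip
    return [image, rot90, lr, ud]
```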
When operations 120 to 140 have been performed for all sample images, in operation 150 an nth defect classification model is generated for each of the first to Nth image sets obtained from the sample images, by machine learning with the nth (n = 1, ..., N) image set as the learning data input and the defect type of the corresponding sample image as the learning data output, thereby producing the first to Nth defect classification models.
For example: the first defect classification model may be generated by machine learning using the original image set; the second using the original cropped image set; the third using the binary image set; the fourth using the binary cropped image set; the fifth using the grayscale image set; the sixth using the grayscale cropped image set; the seventh using the MAP image set; the eighth using the MAP cropped image set; the ninth using the MAP2 image set; and the tenth using the MAP2 cropped image set.
In operation 160, the (N+1)th defect classification model is generated by machine learning with the feature value set obtained for the sample image in operation 130 as the learning data input and the defect type of the corresponding sample image as the learning data output.
For example, the eleventh defect classification model may be generated by machine learning using the feature value set.
When a target image whose defect type is to be classified is given, the defect type of the target image may be classified using the first to (N+1)th defect classification models generated by the process of Fig. 1.
Fig. 3 is a flowchart illustrating a process of generating an ensemble model through machine learning using the probability values of each defect type obtained from the individual defect classification models generated through the process of Fig. 1, in the automatic defect classification method based on machine learning according to an embodiment of the present invention. In this embodiment of the present invention, a multilayer perceptron (MLP), an artificial neural network classifier, may be used as the machine learning algorithm for the ensemble model.
Referring to fig. 3, the following operations 310 and 320 are performed with respect to each sample image.
In operation 310, a probability value of each defect type is obtained from each of the first to Nth defect classification models by inputting each of the first to Nth images, obtained from the sample image via the preprocessing process, into the first to Nth defect classification models. When there are four defect types, four probability values are obtained from each of the first to Nth defect classification models.
Further, in operation 320, a probability value of each defect type for the (N+1)th defect classification model is obtained by inputting the feature value set obtained from the sample image into the (N+1)th defect classification model. Similarly, when there are four defect types, four probability values are obtained from the (N+1)th defect classification model.
When operations 310 and 320 have been performed for all sample images, in operation 330 an ensemble model is generated by machine learning, with the probability values of each defect type obtained from the first to (N+1)th defect classification models for the sample images as the learning data input and the defect type of the corresponding sample image as the learning data output.
FIG. 4 shows an example of the structure of each of the individual defect classification models generated by the process of FIG. 1 and the ensemble model generated by the process of FIG. 3.
Referring to fig. 4, four probability values are obtained from each of eleven defect classification models with respect to a sample image, so that forty-four probability values are obtained in total. The ensemble model is generated using the probability values of each defect type obtained from each of the eleven defect classification models for all sample images.
When a target image whose defect type is to be classified is given, the defect type of the target image may be classified using the first to (N+1)th defect classification models generated by the process of fig. 1 and the ensemble model generated by the process of fig. 3.
Fig. 5 is a flowchart illustrating a process of classifying defect types of a target image using an individual defect classification model and an ensemble model in an automatic defect classification method based on machine learning according to an embodiment of the present invention.
In operation 510, first to Nth images different from each other are obtained from the target image through the same preprocessing process as that performed with respect to the sample images.
Specifically, the first to nth images obtained from the target image may include an original image, an original cropped image, a binary cropped image, a grayscale cropped image, a MAP cropped image, a MAP2 image, and a MAP2 cropped image.
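As a simplified illustration of deriving such image variants, the sketch below computes a grayscale image and a crop around a blob region from a synthetic image. The thresholded blob mask and padded bounding-box crop are stand-ins for the patent's actual preprocessing, and the MAP/MAP2 variants are not reproduced:

```python
import numpy as np

def to_grayscale(img):
    """Luminance grayscale of an H x W x 3 RGB image (the weights are the
    common Rec. 601 coefficients; the patent does not specify them)."""
    return img @ np.array([0.299, 0.587, 0.114])

def crop_around_blob(img, mask, pad=2):
    """Crop a region containing at least part of the blob region,
    here taken as the padded bounding box of the blob mask."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, img.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, img.shape[1])
    return img[y0:y1, x0:x1]

img = np.zeros((32, 32, 3))
img[10:14, 20:25] = 1.0            # synthetic bright blob
gray = to_grayscale(img)           # shape (32, 32)
mask = gray > 0.5                  # toy blob segmentation
crop = crop_around_blob(gray, mask)  # shape (8, 9)
```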
In operation 520, a feature value set is obtained from the target image through the same feature value extraction process as that performed with respect to the sample image.
In operation 530, a probability value of each defect type is obtained from each of the first to (N+1)th defect classification models by inputting the first to Nth images obtained from the target image into the first to Nth defect classification models and the feature value set into the (N+1)th defect classification model.
In operation 540, the defect type of the target image is determined through the ensemble model by inputting the probability value of each defect type obtained for each of the first to (N+1)th defect classification models into the ensemble model.
To describe the process of classifying the defect type of the target image with reference to fig. 4: four probability values are obtained from each of the eleven defect classification models, so that forty-four probability values are obtained in total for the target image. When these probability values are input into the ensemble model, the defect type of the target image is determined.
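Operations 530 and 540 together can be sketched as follows, again with stub functions in place of the trained classifiers and ensemble; the sizes follow the fig. 4 example, and everything else is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
n_models, n_types = 11, 4                 # sizes from the fig. 4 example

def individual_model(m, target_image):
    """Stand-in for one trained defect classification model."""
    p = rng.random(n_types)
    return p / p.sum()

def ensemble_model(prob_vector):
    """Stand-in for the trained ensemble; any classifier mapping the
    44 probabilities to per-type scores would fit here."""
    W = rng.normal(size=(prob_vector.size, n_types))
    return prob_vector @ W

target_image = "target"                   # placeholder for a real image
probs = np.concatenate([individual_model(m, target_image)
                        for m in range(n_models)])   # 44 values (op. 530)
defect_type = int(np.argmax(ensemble_model(probs)))  # operation 540
```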
According to the embodiments of the present invention, since various images are obtained from sample images through a certain preprocessing process and an individual defect classification model is generated with respect to each image, it is possible to improve learning efficiency and classification performance using only a small number of sample images.
Further, the ensemble model is generated using the probability value of each defect type obtained with respect to each individual defect classification model as learning data, so that the classification performance can be ensured using the performance of the model having the optimal performance among the respective defect classification models.
Meanwhile, the above-described embodiments of the present invention may be written as programs that can be executed by a computer and can be implemented in a general-purpose digital computer that runs the programs using a computer readable recording medium. The computer-readable recording medium includes storage media such as magnetic storage media (e.g., Read Only Memory (ROM), floppy disks, hard disks, etc.) and optically readable media (e.g., compact disk read only memory (CD-ROM), Digital Versatile Disks (DVDs), etc.).
Embodiments of the invention may include functional block components and various processing operations. The functional blocks may be implemented using any number of hardware and/or software components that perform the specified functions. For example, embodiments may employ integrated circuit components, such as memory, processing, logic, and look-up-table components, which can perform various functions under the control of one or more microprocessors or other control devices. Similarly to how the components of the invention may be realized by software programming or software elements, embodiments may be implemented in a programming or scripting language such as C, C++, Java, or assembler, with the various algorithms implemented by data structures, processes, routines, or combinations of other programming components. Functional aspects may be implemented by algorithms executed by one or more processors. Further, embodiments may employ conventional techniques for electronic environment setup, signal processing, data processing, and the like. Terms such as "mechanism," "element," "device," and "component" may be used broadly and are not limited to mechanical and physical components. These terms may include the meaning of a series of software routines associated with a processor or the like.
The particular implementations described as examples in the embodiments do not limit the scope of the embodiments in any way. Descriptions of conventional electronic components, control systems, software, and other functional aspects of the systems may be omitted to simplify the description. Further, the connections or connection means of the lines between the components shown in the drawings are examples of functional connections and/or physical or circuit connections, and may be presented in an actual device as various alternative or additional functional, physical, or circuit connections. Further, unless specifically described with terms such as "necessary" or "significant," a component may not be essential to the practice of the invention.
Exemplary embodiments of the present invention have been described above. It will be understood by those skilled in the art that modifications may be made without departing from the essential characteristics of the invention. The described embodiments are, therefore, to be considered in all respects as illustrative and not restrictive. The scope of the invention is defined by the claims, rather than the description above, and all differences within the equivalent scope will be understood to be included in the present invention.

Claims (11)

1. A method for automatic defect classification based on machine learning, comprising:
preparing a sample image having a known defect type;
with respect to each of the sample images, obtaining first to Nth images different from each other from the sample image through a certain preprocessing process, where N is a natural number equal to or greater than 2, and obtaining first to Nth image sets from the first to Nth images by data enhancement;
generating first to Nth defect classification models by, for each of the first to Nth image sets obtained with respect to the sample images, performing machine learning with the nth image set as learning data input, where n = 1, ..., N, and with the defect type of the corresponding sample image as learning data output to generate the nth defect classification model; and
receiving a target image and classifying a defect type of the target image using the first to Nth defect classification models.
2. The method of claim 1, wherein the first to Nth images comprise two or more of: the sample image; a gray-scale image obtained by gray-scaling the sample image; a blob image in which a blob region and a blob-free region are divided by a process of removing a repetitive pattern from the gray-scale image; an original cropped image obtained by cropping a certain region from the sample image so that at least a part of the blob region is included therein; a gray-scale cropped image obtained by cropping a certain region from the gray-scale image so that at least a part of the blob region is included therein; and a blob cropped image obtained by cropping a certain region from the blob image so that at least a part of the blob region is included therein.
3. The method of claim 2, wherein the blob image comprises at least one of: a binary image in which the blob region and the blob-free region are divided in binary; a MAP image in which the outline of the repetitive pattern is preserved in the blob-free region; and a MAP2 image in which the blob region is represented by two or more pixel values.
4. The method of claim 1, wherein the data enhancement comprises two or more of rotation, expansion, left-right inversion, up-down inversion, horizontal movement, and vertical movement of an image, and combinations thereof.
5. The method of claim 1, further comprising:
obtaining, with respect to each of the sample images, a probability value of each defect type from each of the first to Nth defect classification models by inputting each of the first to Nth images obtained from the sample image into the first to Nth defect classification models; and
generating an ensemble model by performing machine learning with the probability value of each defect type for each of the first to Nth defect classification models, obtained with respect to the sample images, as learning data input and with the defect type of the corresponding sample image as learning data output,
wherein classifying the defect type of the target image comprises further classifying the defect type of the target image using the ensemble model.
6. The method of claim 5, wherein classifying the defect type of the target image comprises:
obtaining first to nth images from the target image through a certain preprocessing process;
obtaining a probability value of each defect type from each of the first to Nth defect classification models by inputting each of the first to Nth images obtained from the target image into the first to Nth defect classification models; and
determining the defect type of the target image through the ensemble model by inputting the probability value of each defect type obtained for each of the first to Nth defect classification models into the ensemble model.
7. The method of claim 1, further comprising:
obtaining, for each of the sample images, a feature value set including two or more feature values from the sample image; and
generating an (N+1)th defect classification model by performing machine learning with the feature value set obtained from the sample image as learning data input and with the defect type of the corresponding sample image as learning data output,
wherein classifying the defect type of the target image comprises further classifying the defect type of the target image using the (N+1)th defect classification model.
8. The method of claim 7, wherein the feature values comprise two or more of values related to the number, shape, size, and pixel values of blobs extracted from the sample image, and combinations thereof.
9. The method of claim 7, further comprising:
obtaining, with respect to each of the sample images, a probability value of each defect type from each of the first to Nth defect classification models and the (N+1)th defect classification model by inputting each of the first to Nth images obtained from the sample image and the feature value set into the first to (N+1)th defect classification models; and
generating an ensemble model by performing machine learning with the probability value of each defect type for each of the first to (N+1)th defect classification models, obtained with respect to the sample images, as learning data input and with the defect type of the corresponding sample image as learning data output,
wherein classifying the defect type of the target image comprises further classifying the defect type of the target image using the ensemble model.
10. The method of claim 9, wherein classifying the defect type of the target image comprises:
obtaining first to Nth images and a feature value set from the target image through a certain preprocessing process;
obtaining a probability value of each defect type from each of the first to Nth defect classification models and the (N+1)th defect classification model by inputting each of the first to Nth images obtained from the target image and the feature value set into the first to (N+1)th defect classification models; and
determining the defect type of the target image through the ensemble model by inputting the probability value of each defect type obtained for each of the first to (N+1)th defect classification models into the ensemble model.
11. A computer-readable recording medium in which a program for executing the machine learning-based automatic defect classification method according to any one of claims 1 to 10 is recorded.
CN201910537298.2A 2018-06-20 2019-06-20 Automatic defect classification method based on machine learning Active CN110619343B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0070845 2018-06-20
KR1020180070845A KR102154393B1 (en) 2018-06-20 2018-06-20 Automated defect classification method based on machine learning

Publications (2)

Publication Number Publication Date
CN110619343A true CN110619343A (en) 2019-12-27
CN110619343B CN110619343B (en) 2023-06-06

Family

ID=68921528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910537298.2A Active CN110619343B (en) 2018-06-20 2019-06-20 Automatic defect classification method based on machine learning

Country Status (2)

Country Link
KR (1) KR102154393B1 (en)
CN (1) CN110619343B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784670A (en) * 2020-06-30 2020-10-16 平安国际智慧城市科技股份有限公司 Hot rolled steel plate surface defect identification method and device based on computer vision

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102284155B1 (en) * 2020-01-22 2021-07-29 에스케이 주식회사 Method and system using deep learning of multi-view image
TWI748344B (en) * 2020-02-14 2021-12-01 聚積科技股份有限公司 Establishing Method of Standard Judgment Model for LED Screen Adjustment
KR20210133090A (en) * 2020-04-28 2021-11-05 삼성전자주식회사 Electronic device for providing information realted to failure of product and method for operating thereof
CN112085722B (en) * 2020-09-07 2024-04-09 凌云光技术股份有限公司 Training sample image acquisition method and device
KR102296511B1 (en) * 2020-11-17 2021-09-02 주식회사 트윔 Training data generating apparatus, method for product inspection and product inspection appratus using the training data
KR102335013B1 (en) * 2020-12-21 2021-12-03 (주)위세아이텍 Apparatus and method for failure mode classification of rotating equipment based on deep learning denoising model
KR20220105689A (en) * 2021-01-20 2022-07-28 주식회사 솔루션에이 System for determining defect of display panel based on machine learning model
KR20220152924A (en) * 2021-05-10 2022-11-17 현대자동차주식회사 Method and apparatus for augmenting data in machine to machine system
KR20220161601A (en) * 2021-05-27 2022-12-07 주식회사 솔루션에이 System for determining defect of image inspection target using deep learning model
KR102520759B1 (en) * 2021-07-12 2023-04-13 울산과학기술원 Apparatus and method for predicting and providing surface roughness of product to be moulded, and apparatus and method for predicting and providing process condition, using artificial intelligence
KR102469219B1 (en) * 2022-05-27 2022-11-23 국방과학연구소 Abnormal data detection method and electronic device therefor

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200411324A (en) * 2002-12-20 2004-07-01 Taiwan Semiconductor Mfg Method and system of defect image classification
CN104778684A (en) * 2015-03-06 2015-07-15 江苏大学 Method and system thereof for automatically measuring, representing and classifying heterogeneous defects on surface of steel
CN105283884A (en) * 2013-03-13 2016-01-27 柯法克斯公司 Classifying objects in digital images captured using mobile devices
US20160328837A1 (en) * 2015-05-08 2016-11-10 Kla-Tencor Corporation Method and System for Defect Classification
CN106384074A (en) * 2015-07-31 2017-02-08 富士通株式会社 Detection apparatus of pavement defects and method thereof, and image processing equipment
CN106650823A (en) * 2016-12-30 2017-05-10 湖南文理学院 Probability extreme learning machine integration-based foam nickel surface defect classification method
CN107561738A (en) * 2017-08-30 2018-01-09 湖南理工学院 TFT LCD surface defect quick determination methods based on FCN
CN107831173A (en) * 2017-10-17 2018-03-23 哈尔滨工业大学(威海) Photovoltaic component defect detection method and system
CN108038843A (en) * 2017-11-29 2018-05-15 英特尔产品(成都)有限公司 A kind of method, apparatus and equipment for defects detection
CN108154498A (en) * 2017-12-06 2018-06-12 深圳市智能机器人研究院 A kind of rift defect detecting system and its implementation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10650508B2 (en) * 2014-12-03 2020-05-12 Kla-Tencor Corporation Automatic defect classification without sampling and feature selection
US9922269B2 (en) * 2015-06-05 2018-03-20 Kla-Tencor Corporation Method and system for iterative defect classification
KR101759496B1 (en) * 2015-08-20 2017-07-19 충북대학교 산학협력단 System and Method for Classification of PCB fault and Type of Fault


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAO WU等: "Solder joint defect classification based on ensemble learning", 《SOLDERING AND SURFACE MOUNT TECHNOLOGY》 *
汤勃等: "基于图像处理的钢板表面缺陷支持向量机识别", 《中国机械工程》 *
王爱芳等: "基于高斯金字塔和MLP的带钢边部缺陷识别", 《软件导刊》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784670A (en) * 2020-06-30 2020-10-16 平安国际智慧城市科技股份有限公司 Hot rolled steel plate surface defect identification method and device based on computer vision
CN111784670B (en) * 2020-06-30 2022-05-20 深圳赛安特技术服务有限公司 Hot rolled steel plate surface defect identification method and device based on computer vision

Also Published As

Publication number Publication date
CN110619343B (en) 2023-06-06
KR20190143192A (en) 2019-12-30
KR102154393B1 (en) 2020-09-09

Similar Documents

Publication Publication Date Title
CN110619343B (en) Automatic defect classification method based on machine learning
US9547800B2 (en) System and a method for the detection of multiple number-plates of moving cars in a series of 2-D images
WO2019057067A1 (en) Image quality evaluation method and apparatus
JP6129987B2 (en) Text quality based feedback to improve OCR
US9792507B2 (en) Method and system for ground truth determination in lane departure warning
CN111860027A (en) Two-dimensional code identification method and device
CN113807378A (en) Training data increment method, electronic device and computer readable recording medium
CN113327252A (en) Method and system for object detection
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
RU2608239C1 (en) Method and system for determining suitability of document image for optical character recognition and other image processing operations
CN114494174A (en) Chip welding line defect detection method and device
KR102386930B1 (en) Apparatus for detecting edge of road image and method thereof
CN113269790A (en) Video clipping method and device, electronic equipment, server and storage medium
US11954865B2 (en) Image processing apparatus, image processing method, and storage medium for foreground extraction
JP6377214B2 (en) Text detection method and apparatus
JP2008282280A (en) Two dimensional code reader and its method
JP5427828B2 (en) Two-dimensional code reading apparatus and method
CN115330637A (en) Image sharpening method and device, computing device and storage medium
CN111666811B (en) Method and system for extracting traffic sign board area in traffic scene image
CN111696064B (en) Image processing method, device, electronic equipment and computer readable medium
CN114529570A (en) Image segmentation method, image identification method, user certificate subsidizing method and system
CN112101323A (en) Method, system, electronic device and storage medium for identifying title list
US11900643B2 (en) Object detection method and object detection system
CN117727041A (en) Edge equipment nameplate character segmentation method and system based on horizontal projection and connected domain
CN117011596A (en) Circle identification and circle center positioning method and device for visual measurement of structural movement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200113

Address after: Room 368, Part 302, No.211, North Fute Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: AMO information technology (Shanghai) Co.,Ltd.

Applicant after: KOREA University RESEARCH AND BUSINESS FOUNDATION

Address before: Han Guojingjidao

Applicant before: AMO Information Technology Co.,Ltd.

Applicant before: KOREA University RESEARCH AND BUSINESS FOUNDATION

GR01 Patent grant