CA3166581A1 - Method and system for training inspection equipment for automatic defect classification - Google Patents
Method and system for training inspection equipment for automatic defect classification
- Publication number
- CA3166581A1
- Authority
- CA
- Canada
- Prior art keywords
- training
- classifier
- inspection images
- inspection
- binary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G01R31/26—Testing of individual semiconductor devices
- G01R31/265—Contactless testing
- G01R31/2656—Contactless testing using non-ionising electromagnetic radiation, e.g. optical radiation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
- G06N5/025—Extracting rules from data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/87—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using selection of the recognition techniques, e.g. of a classifier in a multiple classifier system
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G01R31/28—Testing of electronic circuits, e.g. by signal tracer
- G01R31/2832—Specific tests of electronic circuits not provided for elsewhere
- G01R31/2834—Automated test systems [ATE]; using microprocessors or computers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G01R31/28—Testing of electronic circuits, e.g. by signal tracer
- G01R31/2851—Testing of integrated circuits [IC]
- G01R31/2894—Aspects of quality control [QC]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30141—Printed circuit board [PCB]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/06—Recognition of objects for industrial automation
Abstract
A computer-implemented method, an inspection system and a non-transitory storage medium are provided, for automatically generating defect classification models, using machine learning, for the inspection of semiconductor and/or Printed Circuit Board (PCB) parts. The defect classification models are automatically built from a first combination of a trained neural network binary classifier and optimizer and from a second combination of a trained neural network multi-class classifier and optimizer.
Description
METHOD AND SYSTEM FOR TRAINING INSPECTION EQUIPMENT FOR
AUTOMATIC DEFECT CLASSIFICATION
TECHNICAL FIELD
[1]
The technical field generally relates to inspection systems and methods for automatic defect inspection, and more specifically relates to methods and systems for automatically classifying defects of products being inspected. The methods and systems presented hereinbelow are especially adapted for the inspection of semiconductor products.
BACKGROUND
[2]
Manufacturing processes generally include the automated inspection of the manufactured parts, at different milestones during the process, and typically at least at the end of the manufacturing process. Inspection may be conducted with inspection systems that optically analyze the manufactured parts and detect defective parts.
Different technologies can be used, such as cameras combined with laser-triangulation and/or interferometry. Automated inspection systems ensure that the parts manufactured meet the quality standards expected and provide useful information on adjustments that may be needed to the manufacturing tools, equipment and/or compositions, depending on the type of defects identified.
[3]
In the semiconductor industry, it is common for the same manufacturing line to be used for different types of parts, for the same or for different customers. Inspection systems must therefore be able to distinguish non-defective from defective parts and to identify the type of defects present in the parts identified as defective. The classification of defects is often laborious and requires the involvement of experts in both the inspection system and the manufacturing process to tune and configure the system to properly identify the defects. Configuring the inspection system, whether to tune it to existing defect types or to detect new defect types, requires the system to be offline in most cases.
A well-known method for defect detection in the semiconductor industry involves comparing the captured images with a "mask" or "ideal part layout", but this method leaves many defects undetected.
[4] There is a need for inspection systems and methods that can help improve or facilitate the process of classifying defects when automatically inspecting products.
SUMMARY
[5] According to an aspect, a computer-implemented method is provided for automatically generating a defect classification model, using machine learning, for use in an automated inspection system, for the inspection of manufactured parts. The method comprises a step of acquiring inspection images of parts captured by the inspection system. The inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts, such as semiconductor and/or Printed Circuit Board (PCB) parts.
[6] The method also comprises a step of training a binary classifier, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts. The binary classifier uses a first combination of a neural network architecture and of an optimizer. The binary classifier is trained by iteratively updating weights of the nodes of the different layers of the neural network architecture used in the first combination.
[7] The method also comprises a step of training a multi-class classifier, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts. The multi-class classifier uses a second combination of a neural network architecture and of an optimizer. The multi-class classifier is trained by iteratively updating weights of the nodes of the different layers of the neural network architecture of the second combination.
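The iterative weight updates described for both classifiers can be illustrated with a deliberately minimal sketch. The patent trains deep neural networks via backpropagation; as a stand-in, the snippet below trains a single-neuron logistic classifier on a hypothetical one-dimensional feature (all data and names here are illustrative, not from the patent):

```python
import math

def train_binary_classifier(samples, labels, lr=0.1, epochs=200):
    """Train a single-neuron logistic classifier by gradient descent.

    A toy stand-in for the patent's deep-network training: weights are
    updated iteratively from labelled examples (0 = non-defective,
    1 = defective).
    """
    n_features = len(samples[0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(w * xi for w, xi in zip(weights, x)) + bias
            pred = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = pred - y                      # gradient of the loss w.r.t. z
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

def predict(weights, bias, x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if z >= 0 else 0

# Toy data: a single hypothetical feature per image (e.g. a defect score).
samples = [[0.1], [0.2], [0.8], [0.9]]
labels = [0, 0, 1, 1]
w, b = train_binary_classifier(samples, labels)
```

In the actual system the same update loop runs over all layers of a deep convolutional network, with the optimizer, loss function and learning rate scheduler selected as described in the following paragraphs.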
[8] Once the binary and the multi-class classifiers are trained, a defect classification model is built or generated, where a configuration file defines the first and second combinations of neural network architectures and optimizers, and parameters thereof. The configuration file also comprises the final updated weights of the nodes of each of the neural network architectures from the binary and from the multi-class classifiers. The automatic defect classification model is thereby usable by the automated inspection system for detecting defective parts and for identifying defect types on the manufactured parts being inspected.
[9] In a possible implementation of the method, the step of training the binary classifier further comprises an initial step of automatically exploring different combinations of neural network architecture and optimizer on an exploring subset of the inspection images. The first combination selected for the binary classifier corresponds to the combination that provided, during the exploration step, the highest accuracy in identifying non-defective from defective parts for a given number of epochs.
[10] In a possible implementation of the method, the step of training the multi-class classifier further comprises an initial step of automatically exploring the different combinations of neural network architectures and optimizer, using another exploring subset of inspection images. The second combination of neural network architecture and optimizer selected for the multi-class classifier corresponds to the combination that provided, during the exploration step, the highest accuracy in identifying the different defect types for a given number of epochs.
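The exploration step in paragraphs [9] and [10] amounts to a grid search: each architecture/optimizer combination is trained for a small, fixed number of epochs on an exploring subset, and the most accurate combination is retained. A rough sketch, with the evaluation function stubbed out (a real implementation would run a short training pass per combination):

```python
import itertools

# Candidate search space, using architecture/optimizer names listed in
# the patent; the evaluate callable is a stand-in for a short training run.
ARCHITECTURES = ["ResNet34", "ResNet50", "InceptionV3"]
OPTIMIZERS = ["Adam", "SGD"]

def explore(evaluate, epochs=5):
    """Return the (architecture, optimizer) pair with the highest
    accuracy on the exploring subset after a fixed number of epochs."""
    best, best_acc = None, -1.0
    for arch, opt in itertools.product(ARCHITECTURES, OPTIMIZERS):
        acc = evaluate(arch, opt, epochs)
        if acc > best_acc:
            best, best_acc = (arch, opt), acc
    return best, best_acc

# Stubbed accuracies for illustration only.
fake_scores = {("ResNet50", "Adam"): 0.95, ("ResNet34", "SGD"): 0.90}
best, acc = explore(lambda a, o, e: fake_scores.get((a, o), 0.5))
```

The same loop, extended with loss functions and learning rate schedulers as extra grid dimensions, covers the exploration described in paragraphs [11] and [12].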
[11] In a possible implementation of the method, the step of training the binary classifier further comprises a step of automatically exploring different loss functions and different learning rate schedulers. The first combination is further defined by the loss function and the learning rate scheduler that, together with the neural network architecture and the optimizer, provided the highest accuracy in distinguishing non-defective from defective parts for the given number of epochs during the exploration phase. The selection of the loss function and of the learning rate scheduler is made automatically. The configuration file of the defect classification model further comprises the parameters of the selected loss function and learning rate scheduler for the binary classifier.
[12]
In a possible implementation of the method, the step of training the multi-class classifier further comprises a step of automatically exploring the different loss functions and the learning rate schedulers. The second combination is further defined by the loss function and the learning rate scheduler that provided, during the exploration phase, together with the neural network architecture and the optimizer, the highest accuracy in identifying the defect types for the given number of epochs. The configuration file of the defect classification model further comprises the parameters from the selected loss function and from the learning rate scheduler of the multi-class classifier.
[13]
In a possible implementation of the method, the updated weights and the parameters of the selected neural network architectures, optimizers, loss functions and learning rate schedulers are packaged in the configuration file, which is loadable by the automated inspection system.
[14] In a possible implementation of the method, the different neural network architectures comprise at least one of the following neural network architectures:
ResNet34, ResNet50, ResNet101, ResNet152, WideResNet50, WideResNet101, InceptionV3 and InceptionResNet.
[15] In a possible implementation of the method, the different optimizers comprise at least one of: Adam and SGD optimizers.
[16] In a possible implementation of the method, the different loss functions comprise at least one of: cross entropy and NLL loss functions.
[17] In a possible implementation of the method, the different learning rate schedulers comprise at least one of: decay and cyclical learning rate schedulers.
[18] In a possible implementation of the method, the automated inspection system is trained to detect different defect types on at least one of the following products:
semiconductor packages, wafers, single-side PCBs, double-side PCBs, multilayer PCBs and substrates.
[19] In a possible implementation of the method, the defect types comprise one or more of: under plating, foreign material, incomplete parts, cracks, smudges, abnormal circuits, resist residue, deformation, scratches, clusters and metal film residue.
[20] In a possible implementation of the method, acquiring the inspection images comprises capturing, through a graphical user interface, a selection of one or more image folders wherein the inspection images are stored.
[21] In a possible implementation of the method, training of the binary and multi-class classifiers is initiated in response to an input made through a graphical user interface.
[22]
In a possible implementation of the method, training of the binary and multi-class classifiers is controlled, via an input captured through the graphical user interface, to pause, abort or resume the training.
[23] In a possible implementation, the method comprises a step of validating whether the overall number of inspection images is sufficient to initiate the training of the binary classifier, and if so, whether the number of inspection images associated with each defect type is sufficient to initiate the training of the multi-class classifier, whereby the training of the multi-class classifier is initiated only for defect types for which there is a sufficient number of inspection images per defect type.
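This validation step can be sketched as a simple count check per label. The thresholds below are hypothetical (the patent does not state minimum counts), as is the use of an `"ok"` label for non-defective images:

```python
from collections import Counter

MIN_TOTAL = 1000       # hypothetical minimum for binary training
MIN_PER_DEFECT = 50    # hypothetical minimum per defect type

def validate_dataset(labels):
    """labels: one string per inspection image ('ok' = non-defective,
    otherwise the defect type).

    Returns (binary_ok, trainable_defect_types): whether the binary
    classifier can be trained at all, and which defect types have
    enough images for multi-class training."""
    counts = Counter(labels)
    binary_ok = sum(counts.values()) >= MIN_TOTAL
    trainable = sorted(
        d for d, n in counts.items() if d != "ok" and n >= MIN_PER_DEFECT
    )
    return binary_ok, trainable

labels = ["ok"] * 900 + ["crack"] * 60 + ["smudge"] * 40
binary_ok, trainable = validate_dataset(labels)
```

With these toy counts, the binary classifier can be trained, but only the "crack" class clears the per-defect threshold; under-represented classes such as "smudge" would be candidates for the data augmentation described next.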
[24] In a possible implementation, the method comprises increasing the number of inspection images of a given defect type, when the number of inspection images associated with the given defect type is insufficient, using data augmentation algorithms.
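The patent does not enumerate its augmentation algorithms; a minimal sketch using simple geometric transforms (images represented as nested lists of pixel values for illustration) could look like this:

```python
def hflip(img):
    """Mirror an image left-to-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror an image top-to-bottom."""
    return img[::-1]

def rot180(img):
    """Rotate an image by 180 degrees."""
    return [row[::-1] for row in img[::-1]]

def augment_to_count(images, target):
    """Grow an under-represented defect class by cycling simple
    geometric transforms over the originals until `target` images
    are available."""
    transforms = [hflip, vflip, rot180]
    out = list(images)
    i = 0
    while len(out) < target:
        src = images[i % len(images)]
        out.append(transforms[i % len(transforms)](src))
        i += 1
    return out[:target]

originals = [[[1, 2], [3, 4]]]          # one toy 2x2 "image"
augmented = augment_to_count(originals, 3)
```

A production pipeline would typically add rotations, crops and intensity jitter as well, and take care not to produce transforms that change a defect's class.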
[25] In a possible implementation, the method comprises automatically splitting, for each of the first and the second subsets, the inspection images into at least a training dataset and a validation dataset, prior to training the binary and multi-class classifier. The training dataset is used during training to set initial parameters of the first and the second combinations of the neural network architecture and optimizer. The validation dataset is used to validate and further adjust the weights of the nodes during the training of the binary and multi-class classifiers.
[26] In a possible implementation, the method comprises automatically splitting the inspection images into a test dataset to confirm the parameters and weights of the first and second combinations, once the binary and multi-class classifiers have been trained.
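The three-way split described in paragraphs [25] and [26] can be sketched as a shuffle followed by fractional slicing. The 70/15/15 split below is a common convention, not a ratio stated in the patent:

```python
import random

def split_dataset(images, train=0.7, val=0.15, seed=42):
    """Shuffle and split items into training / validation / test
    subsets; whatever fraction remains after train and val goes to
    the test set."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = list(images)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_dataset(list(range(100)))
```

The training set drives the weight updates, the validation set steers adjustments during training, and the held-out test set confirms the final parameters and weights once both classifiers are trained.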
[27] In a possible implementation of the method, the number of inspection images used to train the binary and multi-class classifiers at each training iteration is dynamically adapted as a function of the available physical resources of the processor performing the training.
[28] In a possible implementation of the method, the inspection images passed at each iteration through the binary and multi-class classifiers are bundled in predetermined batch sizes, which are tested until a batch size that can be handled by the processor is found.
[29]
In a possible implementation of the method, the training of the binary and multi-class classifiers is performed by feeding the inspection images to the classifiers in subsequent batches, and wherein the number of inspection images in each batch is dynamically adjusted as a function of an availability of processing resources.
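The batch-size probing of paragraphs [27] to [29] can be sketched as a back-off loop: try the largest candidate batch, and fall back to smaller ones whenever memory runs out. The candidate sizes and the stubbed memory check below are hypothetical:

```python
def find_workable_batch_size(try_batch, candidates=(256, 128, 64, 32, 16)):
    """Probe batch sizes from largest to smallest until one fits the
    available processing resources; `try_batch` is expected to raise
    MemoryError when a batch does not fit."""
    for size in candidates:
        try:
            try_batch(size)
            return size
        except MemoryError:
            continue
    raise RuntimeError("no candidate batch size fits available memory")

# Stub standing in for loading and running a batch on real hardware:
# pretend that anything above 64 images exhausts memory.
def fake_try(size):
    if size > 64:
        raise MemoryError

chosen = find_workable_batch_size(fake_try)
```

In a real training loop the probe would allocate an actual batch on the device; repeating the probe between iterations lets the batch size adapt dynamically as other workloads free or consume resources.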
[30] In a possible implementation of the method, acquiring the inspection images comprises scanning an image server and displaying on a graphical user interface a representation of a folder architecture comprising a machine identifier, a customer identifier, a recipe identifier and a lot or device identifier, for selection by a user.
[31] In a possible implementation, the method comprises verifying whether the inspection images have already been stored on a training server prior to copying the inspection images to the training server.
[32] According to another aspect, an automated inspection system is provided for automatically generating, via machine learning, defect classification models, each model being adapted for the inspection of a specific part type. The different defect classification models can be used for the inspection of different types of manufactured parts, such as semiconductor and/or Printed Circuit Board (PCB) parts. The system comprises one or more dedicated servers, including processor(s) and data storage, the data storage having instructions stored thereon. The system also comprises an acquisition module for acquiring inspection images of parts captured by the inspection system, wherein the inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts.
[33]
The system also comprises a training application comprising a binary classifier that is trainable, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts, by iteratively updating weights of the nodes of the neural network architecture used for the binary classifier. The binary classifier uses a first combination of neural network architecture and optimizer. The training application also comprises a multi-class classifier that is trainable, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts. The multi-class classifier uses a second combination of neural network architecture and optimizer. The multi-class classifier is trained by iteratively updating weights of the nodes of the neural network architecture used for the multi-class classifier.
The method also comprises a step of training a multi-class classifier, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts. The multi-class classifier uses a second combination of a neural network architecture and of an optimizer. The multi-class classifier is trained by iteratively updating weights of the nodes of the different layer of the neural architecture of the second combination.
[8] Once the binary and the multi-class classifiers are trained, a defect classification model is built or generated, where a configuration file defines the first and second combinations of neural network architectures and optimizers, and parameters thereof. The configuration file also comprises the final updated weights of the nodes of each of the neural network architectures from the binary and from the multi-class classifiers. The automatic defect classification model is thereby usable by the automated inspection system for detecting defective parts and for identifying defect types on the manufactured parts being inspected.
[9] In a possible implementation of the method, the step of training the binary classifier further comprises an initial step of automatically exploring different combinations of neural network architecture and optimizer on an exploring subset of the inspection images. The first combination selected for the binary classifier corresponds to the combination that provided, during the exploration step, the highest accuracy in identifying non-defective from defective parts for a given number of epochs.
[10] In a possible implementation of the method, the step of training the multi-class classifier further comprises an initial step of automatically exploring the different combinations of neural network architectures and optimizer, using another exploring subset of inspection images. The second combination of neural network architecture and optimizer selected for the multi-class classifier corresponds to the combination that provided, during the exploration step, the highest accuracy in identifying the different defect types for a given number of epochs.
[11] In a possible implementation of the method, the step of training the binary classifier further comprises a step of automatically exploring different loss functions and different learning rate schedulers. The first combination is further defined by a loss function and by a learning rate scheduler that provided, during the exploration phase, together with the neural network architecture and the optimizer, the highest accuracy in detecting non-defective from defective parts for the given number of epochs. The selection of the loss function and of the learning rate is made automatically. The configuration file of the defect classification model further comprises the parameters from the selected loss function and from the learning rate scheduler for the binary classifier.
[12]
In a possible implementation of the method, the step of training the multi-class classifier further comprises a step of automatically exploring the different loss functions and the learning rate schedulers. The second combination is further defined by the loss function and the learning rate scheduler that provided, during the exploration phase, together with the neural network architecture and the optimizer, the highest accuracy in identifying the defect types for the given number of epochs. The configuration file of the defect classification model further comprises the parameters from the selected loss function and from the learning rate scheduler of the multi-class classifier.
[13] In a possible implementation of the method, the updated weights and the parameters of the selected neural network architectures, optimizers, loss functions and learning rate schedulers are packaged in the configuration file that is loadable by the automated inspection system.
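A minimal sketch of such packaging, assuming a JSON configuration file; the key names and weight values below are illustrative placeholders, not the patent's actual schema:

```python
import json
import os
import tempfile

def package_model(binary_cfg, multi_cfg, binary_weights, multi_weights, path):
    """Bundle the selected hyperparameters and trained weights for both
    classifiers into a single configuration file loadable by the
    automated inspection system."""
    config = {
        "binary_classifier": {**binary_cfg, "weights": binary_weights},
        "multi_class_classifier": {**multi_cfg, "weights": multi_weights},
    }
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config

path = os.path.join(tempfile.gettempdir(), "defect_model_config.json")
cfg = package_model(
    {"architecture": "ResNet50", "optimizer": "SGD",
     "loss": "cross_entropy", "lr_scheduler": "decay"},
    {"architecture": "InceptionV3", "optimizer": "Adam",
     "loss": "nll", "lr_scheduler": "cyclical"},
    binary_weights=[0.1, -0.4],      # placeholder weight values
    multi_weights=[0.3, 0.2, -0.1],
    path=path,
)
```

The inspection system would then reload this file to reconstruct both classifiers without retraining.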
[14] In a possible implementation of the method, the different neural network architectures comprise at least one of the following neural network architectures:
ResNet34, ResNet50, ResNet101, ResNet152, WideResNet50, WideResNet101, InceptionV3 and InceptionResNet.
[15] In a possible implementation of the method, the different optimizers comprise at least one of: Adam and SGD optimizers.
[16] In a possible implementation of the method, the different loss functions comprise at least one of: cross entropy and NLL loss functions.
[17] In a possible implementation of the method, the different learning rate schedulers comprise at least one of: decay and cyclical rate schedulers.
[18] In a possible implementation of the method, the automated inspection system is trained to detect different defect types on at least one of the following products:
semiconductor packages, wafers, single-side PCBs, double-side PCBs, multilayer PCBs and substrates.
[19] In a possible implementation of the method, the defect types comprise one or more of: under plating, foreign material, incomplete parts, cracks, smudges, abnormal circuits, resist residue, deformation, scratches, clusters and metal film residue.
[20] In a possible implementation of the method, acquiring the inspection images comprises capturing, through a graphical user interface, a selection of one or more image folders wherein the inspection images are stored.
[21] In a possible implementation of the method, training of the binary and multi-class classifiers is initiated in response to an input made through a graphical user interface.
[22] In a possible implementation of the method, training of the binary and multi-class classifiers is controlled, via an input captured through the graphical user interface, to pause, abort or resume the training.
[23] In a possible implementation, the method comprises a step of validating whether the overall number of inspection images is sufficient to initiate the training of the binary classifier, and if so, whether the number of inspection images associated with each defect type is sufficient to initiate the training of the multi-class classifier, whereby the training of the multi-class classifier is initiated only for defect types for which there is a sufficient number of inspection images per defect type.
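The validation step above can be sketched as a simple count check; the minimum thresholds below are illustrative, not values from the patent:

```python
# Illustrative thresholds for starting training (not values from the patent).
MIN_TOTAL = 100       # minimum images to train the binary classifier
MIN_PER_CLASS = 20    # minimum images per defect type for multi-class training

def validate_counts(counts_per_class):
    """Decide whether binary training can start, and which defect types
    have enough images to be included in multi-class training."""
    total = sum(counts_per_class.values())
    binary_ok = total >= MIN_TOTAL
    trainable_classes = [c for c, n in counts_per_class.items()
                         if n >= MIN_PER_CLASS]
    return binary_ok, trainable_classes

ok, classes = validate_counts({"scratch": 50, "crack": 40, "smudge": 12})
```

In this example the overall count (102) suffices for binary training, but the "smudge" class would be excluded from multi-class training until more images are collected or augmented.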
[24] In a possible implementation, the method comprises increasing the number of inspection images of a given defect type, when the number of inspection images associated with the given defect type is insufficient, using data augmentation algorithms.
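A minimal sketch of such data augmentation, using flips and rotations on images represented as nested lists of pixels; real augmentation pipelines would operate on actual image arrays:

```python
def flip_h(img):
    """Horizontal flip of an image stored as a list of pixel rows."""
    return [row[::-1] for row in img]

def flip_v(img):
    """Vertical flip: reverse the order of the rows."""
    return img[::-1]

def rotate_180(img):
    """180-degree rotation = horizontal flip followed by vertical flip."""
    return flip_v(flip_h(img))

def augment_to(images, target):
    """Grow a depleted defect class by appending flipped/rotated copies
    until the target count is reached."""
    ops = [flip_h, flip_v, rotate_180]
    out = list(images)
    i = 0
    while len(out) < target:
        out.append(ops[i % len(ops)](images[i % len(images)]))
        i += 1
    return out

imgs = [[[1, 2], [3, 4]]]          # one tiny 2x2 "inspection image"
augmented = augment_to(imgs, 4)
```

Each augmented copy keeps the original label, so a defect type with too few images can still reach the minimum count needed for training.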
[25] In a possible implementation, the method comprises automatically splitting, for each of the first and the second subsets, the inspection images into at least a training dataset and a validation dataset, prior to training the binary and multi-class classifier. The training dataset is used during training to set initial parameters of the first and the second combinations of the neural network architecture and optimizer. The validation dataset is used to validate and further adjust the weights of the nodes during the training of the binary and multi-class classifiers.
[26] In a possible implementation, the method comprises automatically splitting the inspection images into a test dataset to confirm the parameters and weights of the first and second combinations, once the binary and multi-class classifiers have been trained.
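The automatic splitting into training, validation and test datasets can be sketched as a seeded shuffle-and-slice; the 70/20/10 ratios are an illustrative default, not values from the patent:

```python
import random

def split_dataset(images, train=0.7, val=0.2, seed=42):
    """Shuffle labelled images and split them into training, validation
    and test datasets (the remainder after train and val goes to test)."""
    rng = random.Random(seed)      # fixed seed makes the split reproducible
    shuffled = images[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_dataset(list(range(100)))
```

The training set fits the weights, the validation set adjusts them during training, and the held-out test set confirms the final parameters, as described in paragraphs [25] and [26].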
[27] In a possible implementation of the method, the number of inspection images used to train the binary and multi-class classifiers at each training iteration is dynamically adapted as a function of the available physical resources of the processor performing the training.
[28] In a possible implementation of the method, the number of inspection images passed at each iteration through the binary and multi-class classifiers are bundled in predetermined batch sizes which are tested until an acceptable batch size can be handled by the processor.
[29] In a possible implementation of the method, the training of the binary and multi-class classifiers is performed by feeding the inspection images to the classifiers in subsequent batches, wherein the number of inspection images in each batch is dynamically adjusted as a function of an availability of processing resources.
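One common way to realize this dynamic adjustment is to test predetermined batch sizes from largest to smallest until one fits the processor's memory. The sketch below assumes a `run_batch` callable that raises `MemoryError` when a batch is too large; the simulated 64-image limit is invented for illustration:

```python
def find_batch_size(run_batch, candidates=(256, 128, 64, 32, 16)):
    """Try predetermined batch sizes until one can be handled by the
    processor; run_batch raises MemoryError when the batch does not fit."""
    for size in candidates:
        try:
            run_batch(size)
            return size
        except MemoryError:
            continue
    raise RuntimeError("no batch size fits the available resources")

# Simulated resource limit: pretend anything above 64 images exhausts memory.
def fake_run(size):
    if size > 64:
        raise MemoryError

chosen = find_batch_size(fake_run)
```

Once an acceptable batch size is found, all subsequent training iterations use it, so the training adapts to the physical resources available on the training server.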
[30] In a possible implementation of the method, acquiring the inspection images comprises scanning an image server and displaying on a graphical user interface a representation of a folder architecture comprising a machine identifier, a customer identifier, a recipe identifier and a lot or device identifier, for selection by a user.
[31] In a possible implementation, the method comprises verifying whether the inspection images have already been stored on a training server prior to copying the inspection images to the training server.
[32] According to another aspect, an automated inspection system is provided for automatically generating, via machine learning, defect classification models, each model being adapted for the inspection of a specific part type. The different defect classification models can be used for the inspection of different types of manufactured parts, such as semiconductor and/or Printed Circuit Board (PCB) parts. The system comprises one or more dedicated servers, including processor(s) and data storage, the data storage having stored thereon the modules and applications described below. The system also comprises an acquisition module for acquiring inspection images of parts captured by the inspection system, wherein the inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts.
[33] The system also comprises a training application comprising a binary classifier that is trainable, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts, by iteratively updating weights of the nodes of the neural network architecture used for the binary classifier. The binary classifier uses a first combination of neural network architecture and optimizer. The training application also comprises a multi-class classifier that is trainable, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts. The multi-class classifier uses a second combination of neural network architecture and optimizer. The multi-class classifier is trained by iteratively updating weights of the nodes of the neural network architecture used for the multi-class classifier.
[34] The training application comprises algorithms to generate, from the trained binary classifier and from the trained multi-class classifier, a defect classification model defined by a configuration file. The configuration file comprises the parameters of the first and second combinations of neural network architecture and optimizer and the updated weights of the nodes of each neural network architecture. The automatic defect classification model is thereby usable by the automated inspection system for detecting defects on additional parts being inspected.
[35] In a possible implementation of the system, the data storage further stores an exploration module, a first set of different neural network architectures and a second set of optimizers. The exploration module is configured to explore different combinations of neural network architectures and optimizers on an exploring subset of the inspection images for training the binary classifier. The exploration module is further configured to automatically select the first combination of neural network architecture and optimizer for the binary classifier that provides the highest accuracy in detecting non-defective from defective parts for a given number of epochs.
[36] In a possible implementation of the system, the exploration module is further configured to explore different combinations of neural network architectures and optimizers on the exploring subset of the inspection images for training the multi-class classifier. The exploration module is further configured to automatically select the second combination of neural network architecture and optimizer for the multi-class classifier that provides the highest accuracy in identifying defect types for a given number of epochs.
[37] In a possible implementation, the system comprises a graphical user interface, allowing a user to select one or more image folders wherein the inspection images are stored and to initiate, in response to an input made through the graphical user interface, the generation of the automatic defect classification model.
[38] In a possible implementation, the system comprises a database for storing the inspection images of parts captured by the inspection system, and for storing the label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts.
[39] In a possible implementation of the system, the data storage of the one or more dedicated servers further stores a pre-processing module, for validating whether the overall number of inspection images is sufficient to initiate the training of the binary and multi-class classifiers, and for copying the images to the database and processing the images, such as by using data augmentation algorithms.
[40] According to yet another aspect, a non-transitory storage medium is provided. The non-transitory storage medium has stored thereon computer-readable instructions for causing a processor to:
acquire inspection images of parts captured by the inspection system, wherein the inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts,
train a binary classifier, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts, the binary classifier using a first combination of neural network architecture and an optimizer,
train a multi-class classifier, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts, the multi-class classifier using a second combination of neural network architecture and an optimizer, and
generate, from the trained binary classifier and from the multi-class classifier, a defect classification model comprising configuration settings of the first and second combinations of neural network architecture and an optimizer, the automatic defect classification model being thereby usable by the automated inspection system for detecting defects on additional parts being inspected.
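At inference time the two trained classifiers operate in sequence. A minimal sketch of that composition follows; the stand-in models and thresholds are invented for illustration and are not part of the claimed method:

```python
def make_classifier(binary_model, multi_class_model):
    """Compose the two trained classifiers into the two-stage defect
    classification model: the binary stage screens parts, and only the
    images it flags as defective reach the multi-class stage."""
    def classify(image):
        if not binary_model(image):          # False means "non-defective"
            return "non-defective"
        return multi_class_model(image)      # e.g. "scratch", "crack", ...
    return classify

# Stand-in trained models operating on a flat list of pixel intensities:
# "defective" when the mean intensity is high (thresholds are invented).
binary = lambda img: sum(img) / len(img) > 0.5
multi = lambda img: "scratch" if max(img) > 0.9 else "smudge"

classify = make_classifier(binary, multi)
```

This structure is why the multi-class classifier only needs training images of defective parts: non-defective images never reach it.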
[41] Other features and advantages of the embodiments of the present invention will be better understood upon reading of preferred embodiments thereof with reference to the appended drawings.
BRIEF DESCRIPTION OF THE FIGURES
[42] FIG. 1 is a flowchart of steps performed by a pre-processing module, according to a possible embodiment of a method and system for automatically generating a defect classification model for use by an automated inspection system.
[43] FIG. 2 is a flowchart of steps performed by a training application, according to a possible embodiment of the method and system.
[44] FIG. 3 is a flowchart of steps performed by a post-processing module, according to a possible embodiment of the method and system.
[45] FIG. 4 is a graphical user interface (GUI) for capturing a selection of image folders containing training images for use by the training application, according to a possible embodiment.
[46] FIG. 5 is a graphical user interface (GUI) for monitoring and controlling the training process, for example to pause, abort or resume the training.
[47] FIG. 6 is a schematic illustration of a system for automatically generating a defect classification model for use in an automated inspection system of manufactured parts, according to a possible embodiment.
[48] FIG. 7 is a schematic illustration of a computer network including computers or servers, and data storage, and being part of, or linked to, an automated part inspection system, according to a possible embodiment.
[49] It should be noted that the appended drawings illustrate only exemplary embodiments of the invention and are therefore not to be construed as limiting of its scope, for the invention may admit to other equally effective embodiments.
DETAILED DESCRIPTION
[50] In the following description, similar features in the drawings have been given similar reference numerals and, to not unduly encumber the figures, some elements may not be indicated on some figures if they were already identified in a preceding figure. It should be understood herein that the elements of the drawings are not necessarily depicted to scale, since emphasis is placed upon clearly illustrating the elements and interactions between elements.
[51] The automatic defect classification system, method and software application described in the present application relate to automated 2D and/or 3D
inspection and
metrology equipment. The Applicant already commercializes different inspection systems, such as semiconductor package inspection systems (GATS-2128, GATS-6163, etc.), printed circuit board inspection systems (STAR REC, NRFEID), optical vision inspection systems (wafer or substrate bump inspection system), etc., with which the proposed system for automatically generating defect-classification model(s) can be used. The exemplary system and process described with reference to FIGs. 1 to 6 are especially adapted for the inspection of semiconductor and PCB products, but the proposed system and method can be used in other applications and for other industries which require the automated inspection of parts, such as the automobile industry, as an example only. The proposed defect classification system can also be adapted to different automated visual inspection systems, other than laser triangulation.
[52] With regard to semiconductor inspection, existing optical inspection systems often include an offline defect-detection stage, where classification of the detected defects into product-specific or client-specific classes is done manually, by human operators. There also exist systems provided with automatic AI/machine learning (ML) classifiers that analyze images produced by inspection cameras and that assign the defects to predefined classes, in real time. However, those systems are difficult to configure, and often require a data expert and/or AI specialist to be able to tune the classifiers properly. In addition, predefined ML models are typically used, and they are not always optimal depending on the types of defects that need to be detected.
[53] According to an aspect of the present invention, an automated Artificial Intelligence (AI)-based defect classification system is provided. When combined with an automatic defect classification model, as will be described in more detail below, the inspection
system can provide higher accuracy in measurements, and may bring down inspection cost and reduce human errors throughout the inspection process.
[54] The proposed system and method make it possible to automatically generate one or more defect classification model(s) for use in an automated part inspection system. Using the proposed system and method, users, such as machine operators having no or limited knowledge of AI, can build new defect-classifier models or update existing ones, whether the inspection system is inline or offline. The proposed system and method can build or update classification models of different product types, such as wafers, individual dies, substrates or IC packages. The proposed system and method thus greatly simplify training of the inspection system in detecting defect types, for different products. In some implementations, the proposed classification-training system can detect changes in the types of parts and/or defects that are presented thereto, and can adjust its defect classification models, with no or limited user intervention. A human operator (e.g. a process engineer) may still need to validate the models before they are pushed to the inline inspection systems, but the training process is greatly simplified. Conditions that trigger the creation of new classification models, or adjustments to existing classification models, are multiple and include:
i. new images captured, such as for defects from depleted defect classes (i.e. classes for which there are not enough images to properly tune or configure the classification model),
ii. changes in class labels (where a label can correspond to a defective or non-defective part, or to a type of defect),
iii. new product to be inspected (which requires a new classification model to be built),
iv. scheduled retraining, and
v. classification model drift, detected by quality assurance mechanisms.
[55] In possible embodiments, the proposed system and method can automatically select, from a list of existing machine learning (ML) models, the most appropriate model to be used, and tuning of the hyperparameters associated with said model can be realized using a simple grid search technique.
[56] The proposed system and method also have the advantage of being implemented, in preferred embodiments, on dedicated servers. The proposed system and method can thus be implemented in a closed environment, without having to access any AI cloud-based platforms. The proposed automated classification training can thus be performed in isolated environments, such as in plants where there is no or restricted internet access.
[57] The term "processing device" encompasses computers, nodes, servers and/or specialized electronic devices configured and adapted to receive, store, process and/or transmit data, such as labelled images and machine learning models.
"Processing devices" include processors, such as microcontrollers and/or microprocessors, CPUs and GPUs, as examples only. The processors are used in combination with data storage, also referred to as "memory" or "storage medium". Data storage can store instructions, algorithms, rules and/or image data to be processed. Storage medium encompasses volatile or non-volatile/persistent memory, such as registers, cache, RAM, flash memory, ROM, as examples only. The type of memory is, of course, chosen according to the desired use, whether it should retain instructions, or temporarily store, retain or update data. A schematic representation of an architecture being part of, or linked to, an automated inspection system, wherein the architecture includes such processing devices and data storage, is presented in FIG. 7.
[58] By "classifier," we refer to machine learning algorithms whose function is to classify or predict the classes or labels to which the data, such as a digital image, belongs. A "classifier" is a special type of machine learning model. In some instances, a classifier is a discrete-value function that assigns class labels to data points. In the present application, the data points are derived from digital inspection images. A "binary classifier" predicts, with a given degree of accuracy and certainty, to which of two "classes" a given set of data belongs. For manufactured part inspection, classes can be "pass" or "fail." A "multi-class" classifier predicts, with a given degree of accuracy and certainty, to which one of a plurality of classes a given set of data belongs.
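The final step of either classifier can be illustrated as picking the label with the highest score; the scores and labels below are invented for illustration:

```python
def predict(scores, classes):
    """Assign the label whose score is highest. With two classes this is
    a binary classifier; with more than two, a multi-class classifier."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return classes[best]

label = predict([0.2, 0.8], ["pass", "fail"])                     # binary
defect = predict([0.1, 0.6, 0.3], ["scratch", "crack", "smudge"]) # multi-class
```

The score magnitudes also convey the "degree of certainty" mentioned above: a prediction of [0.1, 0.6, 0.3] is less certain than one of [0.01, 0.98, 0.01].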
[59] By "defect classification model" or "model" we also refer to machine learning models. In the present description, the defect classification model is a combination of trained classifiers, used in combination with optimizers, loss functions and learning rate schedulers, whose parameters have also been adjusted during training of the classifiers.
13 [60] By "neural network architectures", also simply referred to as "neural networks," we refer to specific types of machine-learning models (or algorithms) that are based on a collection of connected nodes (also referred to "artificial neurons" or "perceptrons") which are structured in layers. Nodes of a given layer are interconnected to nodes of neighbouring layers, and weights are assigned to the connections between the nodes.
The bias represents how far a prediction is from the intended value. Biases can be seen as the difference between the node's input and its output. There exist different neural network architectures, including convolutional neural networks, recurrent neural networks, etc. More specific examples of neural network architectures include the ResNet and Inception architectures.
[61] By "loss functions," we refer to algorithmic functions that measure how far a prediction made by a model or classifier is from the actual value. The smaller the number returned by the loss function is, the more accurate the classifier's prediction is.
[62] By "optimizers," we refer to algorithms that tie the loss function to the classifier parameters to update weights of the nodes of the classifier in response to the output of the loss function. In other words, optimizers update the weights of the nodes of the neural network architecture to minimize the loss function.
[63] By "learning rate schedulers," we refer to algorithms that adjusts the learning rate during training the machine learning classifier by reducing the learning rate according to a predefined schedule. The learning rate is an hyperparameter controlling how much the classifier needs to be changed (by adjusting the weights) in response to the estimated error.
[64] By "epochs," we refer to the number of passes or cycles for the entire dataset to go through the machine learning model or architecture. An "epoch" is one complete presentation of the dataset to the machine-learning algorithm.
[65] With reference to FIGs. 1 to 7, the proposed system 600 (identified in FIG. 6) will be described. The system generally includes preprocessing modules, to prepare the inspection images that will be used to build or adjust the defect classification model (shown in FIG. 1); a training application, accessible via a training Application Programming Interface (API), that creates or builds the defect classification models based on the labelled and processed training images (FIG. 2), by training a binary classifier and a multi-class classifier;
and post-processing modules (FIG. 3), which manage the classification models created and update the inspection system 606 with the newly created or adjusted classification models.
[66] A possible implementation of the system 600 is illustrated in FIG. 6. The system 600 comprises an acquisition module 610 to acquire the inspection images captured by the inspection system 606, with either 2D or 3D cameras. The inspection system operates via server 604, which runs the inspection system application and comprises a database or data storage to store the inspection images. The inspection images are thus first stored in the inspection system database 608, and the defect classification application 618 is used to classify or label the inspection images, with label information indicative of whether the part is defective or not, and if defective, with label information indicative of the defect type. Another computer or server 602 runs the training application 614 and provides the training-API that can be accessed by the inspection system 606. The server includes one or more processors to run the training application 614. The server 602 comprises non-transitory data storage to store the computer-readable instructions of the application. An exploration module 612, which allows exploring different combinations of classifiers, is provided as part of the training application 614. The system 600 preferably includes its own training database 616, to store the different classifiers, optimizers, loss functions and learning rate schedulers that can be used when building or updating a defect classification model, as well as the configuration settings and parameters of these machine learning algorithms.
PRE-PROCESSING
[67] FIG. 1 schematically illustrates possible pre-processing modules 10, part of the proposed system. The pre-processing modules generally prepare the training dataset that will be used by the training application. The training dataset generally comprises labelled inspection images, i.e. images that have been tagged or labelled with inspection information, such as "non-defective" or "defective," or with a specific "defect type". At step 104, the proposed system can be triggered or activated to acquire inspection images captured by cameras of the inspection system, by scanning one or more servers storing inspection images and their associated label or class information. A class or label can be, for example, 0 for non-defective parts, and numbers 1 to n, for n different types of defects, such as 1 for under-plating defects, 2 for foreign material defects, 3 for incomplete parts, 4 for cracks, etc. There can be any number of defect types, such as between 5 and 100 different types of defects. Labels can thus be any alphanumerical indicators used to tag or provide an indication of the content of the image, such as whether the image corresponds to a defective or non-defective part, and for defective parts, the type of defect.
Typically, most inspection images captured by optical inspection systems correspond to non-defective parts, unless there is an issue with the manufacturing process.
Thus, most of the inspection images generated by optical inspection systems are labelled or associated with a non-defective label (or class). A small portion of the inspection images, typically between 0.01% and 10%, as an example only, correspond to defective parts. In this case, the inspection images need to be specifically labelled or classified according to the defect type. As illustrated in FIG. 6, the one or more servers (ref. numeral 604) storing the inspection images is part of, or linked to, the inspection system (numeral 606 in FIG. 6). A schematic representation of a possible architecture of the one or more computers or servers 604, is further detailed in FIG. 7, wherein the architecture includes processing devices (such as a 2D PC providing the graphical user
interface via the Equipment Front End Module (EFEM), a 3D processing PC and a GPU PC) and data storage 608. 2D and 3D cameras capture the inspection images, with the 2D or 3D frame grabbers, and the images are processed by the CPUs and/or GPUs of the computers or servers 604 and stored in the inspection system database 608. It will be noted that the architectures illustrated at FIGs. 6 and 7 are exemplary only, and that other arrangements are possible. For example, server 604, which manages and stores the images, and the training server 602 can be combined as a single server, which can also correspond to the server of the inspection system. The different functions and applications, including image storage and management, training and part inspection, can be run from one or from multiple servers/computers.
[68] In the exemplary embodiment presented in FIGs. 1 to 7, the inspection images are images of semiconductor or PCB parts, such as semiconductor packages, silicon or other material wafers, single-sided PCBs, double-sided PCBs, multilayer PCBs, substrates and the like. The defect types can include, as examples only: under plating, foreign material, incomplete parts, cracks, smudges, abnormal circuits, resist residue, deformations, scratches, abnormal passivation, clusters, metal film residue, etc. This list of defects is of course non-exhaustive, as the number and types of defects can differ depending on the types of parts being inspected.
[69] In the exemplary embodiment, the one or more servers 604 store the inspection
images in folders, organized according to a given folder structure, with different folder levels, such as Machine Name, Customer Name, Recipe or Parts and Lots. An exemplary embodiment of a folder structure is shown with reference to FIG. 4, where the folder structure 408 of the server is presented through a graphical user interface (GUI) 400. The GUI shows a folder structure which is consistent with the folder structure of the one or more servers, allowing a selection of the training images to be used by the training application for retraining or creating/building new defect classification models. The image folder structure, or tree, is thus preferably scanned periodically, as per step 102.
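A periodic scan of such a folder tree could be sketched as follows; the root layout (Machine/Customer/Recipe/Lot) follows the description above, while the function name and file extensions are assumptions for illustration:

```python
# Hypothetical sketch of the periodic folder scan (step 102): walk a root
# directory organized as Machine/Customer/Recipe/Lot and collect the image
# files found under each lot folder.
from pathlib import Path

def scan_image_tree(root, extensions=(".png", ".bmp", ".tif")):
    """Return a mapping of lot folder (relative to root) -> list of image names."""
    root = Path(root)
    tree = {}
    for path in sorted(root.rglob("*")):
        if path.is_file() and path.suffix.lower() in extensions:
            lot = path.parent.relative_to(root).as_posix()
            tree.setdefault(lot, []).append(path.name)
    return tree
```

The resulting mapping could then back the GUI's folder view, so that the selection shown to the user reflects the current server content.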
At step 106 (FIG. 1), the folder structure 408 presented to the user via the GUI may be dynamically updated by the system, so as to correspond to the most up-to-date folder structure and content of server 604. Since the proposed system and method allow building new classification models and/or adjusting existing ones while the inspection system is in operation (in line), the folder structure presented through the GUI preferably reflects the current state of the image storage servers, since new inspection images may be continuously captured while the inspection images used for training the classification model(s) are selected through the GUI.
[70] Still referring to FIG. 1, at step 108, folders containing inspection images can be selected for retraining and/or creating new classification models, and the selection captured through the GUI is used by the system to fetch and load the images to be used for the training steps. The system thus receives, via the GUI, a selection of folder(s) containing the inspection images to be used for training, as illustrated in FIG. 4. In possible embodiments, the selection can include higher-level folder(s), such as the "Parts" folder, as a way of selecting all lower-level folders, i.e. all "Lots" folders.
[71] Training of the classification model can be initiated with an input made through the GUI, such as via button 404 in FIG. 4, corresponding to step 112 of FIG. 1. The GUI also allows controlling the training process, by stopping or resuming the training, if needed (button 406 in FIG. 4). At the starting/initiation step 112, the total number of selected inspection images is preferably calculated, or counted, and displayed on the GUI (see FIG. 4, pane 402). In order for the classification model to be retrained or created, a minimal number of training inspection images is required. The system also preferably calculates the number of selected inspection images for each defect type, for the same reason, i.e.
to ensure that a minimal number of images has been gathered for properly training and/or creating defect classifiers for the different defect types. A minimum number of images per defect
type is required to prevent bias in the classification model, as will be explained in more detail below. If the minimum number of images is not reached for a given defect, the inspection images corresponding to said defect are preferably discarded from the selection, before starting to build the classification model. Inspection images associated with the discarded defect type may eventually be used for training when the minimal number of images is reached (as in paragraph i) of page 4 above). The preprocessing module thus validates whether the overall number of inspection images is sufficient to initiate the training of the binary classifier, which will be used to detect pass or fail (i.e.
defective vs non-defective parts), and if so, whether the number of inspection images associated with each defect type is sufficient to initiate the training of the multi-class classifier, which will be used to detect the different types of defects, whereby the training of the multi-class classifier is initiated only for defect types for which there are a sufficient number of inspection images.
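The validation performed at steps 110-112 can be sketched as follows; the threshold values and function name are assumptions for illustration, as the actual minimum counts are not specified here:

```python
# Illustrative check: the binary classifier needs a minimum total number of
# images, and the multi-class classifier is trained only on defect types
# that have enough samples; other defect types are discarded.

MIN_TOTAL_IMAGES = 1000     # assumed threshold for the binary classifier
MIN_IMAGES_PER_DEFECT = 50  # assumed threshold per defect type

def validate_selection(counts_per_label):
    """`counts_per_label` maps labels (0 = non-defective, 1..n = defect types)
    to image counts. Returns (binary_ok, trainable_defect_types)."""
    total = sum(counts_per_label.values())
    binary_ok = total >= MIN_TOTAL_IMAGES
    trainable = [label for label, count in counts_per_label.items()
                 if label != 0 and count >= MIN_IMAGES_PER_DEFECT]
    return binary_ok, trainable

ok, defects = validate_selection({0: 1800, 1: 120, 2: 35, 3: 64})
# Defect type 2 (35 images) is discarded; types 1 and 3 can be trained.
```

A message prompting the user for a new selection, as described at step 112, could then be shown whenever `binary_ok` is false.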
[72] Still referring to FIG. 1, step 110 is thus preferably performed prior to step 112. The system may calculate the total number of inspection images selected and additionally provide the total number of selected inspection images for each defect type, displaying the results to the user through the GUI. After confirming that the number of inspection images selected meets the minimal training requirements, step 112 may be triggered by the user via an input made through the GUI, such as using button 404, as shown in FIG. 4.
In the case where the number of inspection images is insufficient for training, a message may be displayed to the user, requesting a new selection of inspection images.
[73] At step 114, the selected inspection images are pre-processed, preferably before being transferred and stored on the training server 602, identified in FIG.6.
Image pre-processing may include extracting relevant information from the inspection images and transforming the images according to techniques well known in the art, such as image cropping, contrast adjustment, histogram equalization, binarization, image normalization and/or standardization. It will be noted that in the exemplary embodiment, different servers are used, such as server(s) 604 associated with the inspection system, and training server 602, associated with the training application. The inspection images selected for training are thus copied and transferred from server 604 to server 602. However, in other embodiments, the same server can be used, with its memory partitioned for storing inline inspection images and for storing the selected training images.
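One of the pre-processing steps listed above, image normalization, can be sketched as follows; real pipelines would typically use an image-processing library, so this stdlib-only version, operating on a grayscale image represented as rows of pixel intensities, is for illustration only:

```python
# Min-max normalization: rescale pixel intensities to the [0.0, 1.0] range,
# which helps standardize the inputs fed to the classifiers.

def normalize_image(pixels):
    flat = [p for row in pixels for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                       # flat image: avoid division by zero
        return [[0.0] * len(row) for row in pixels]
    return [[(p - lo) / (hi - lo) for p in row] for row in pixels]

normalized = normalize_image([[0, 64], [128, 255]])
```

Cropping, histogram equalization and binarization would follow the same pattern of per-image transformations applied before the images are stored on the training server.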
[74] Still referring to FIG. 1, at step 116, the system stores inspection image information and performs checksum verification in the database associated with, or part of, the training server 602. This verification step avoids duplicating images on the training server, whereby the uniqueness of each image is verified before a new image is copied into the database. Inspection images which are not already stored on the training server, as identified at step 120, are updated or copied to the training server, as per step 118.
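The checksum verification at step 116 can be sketched as below; the choice of SHA-256 and the in-memory set standing in for the training server's database are assumptions for illustration:

```python
# Compute a digest of each image file and copy it only if the digest is not
# already known, so each unique image is stored once on the training server.
import hashlib

def file_checksum(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def transfer_if_new(path, known_checksums):
    """Return True if the image should be copied to the training server."""
    digest = file_checksum(path)
    if digest in known_checksums:
        return False          # duplicate content: skip the copy
    known_checksums.add(digest)
    return True
```

Note that a content checksum catches duplicates even when the same image was saved under two different file names.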
[75] Preferably, once the inspection images have been transferred to the training server 602, the inspection images are divided, or split, into at least a training dataset and a validation dataset. The system is thus configured to automatically split, for each of the first and the second subsets of images, the inspection images into at least a training dataset and a validation dataset, prior to training the binary and multi-class classifier. The first subset will include images for training the binary classifier (i.e. the first subset includes images labelled as defective and non-defective), and the second subset will include the images of the first subset labelled as defective, and further labelled with a defect type.
Images in the training dataset will be used during training to adjust or change the weights of the nodes of the different layers of the neural network architecture, using the optimizer, to reduce or minimize the output of the loss function. The validation dataset will then be used to measure the accuracy of the model using the adjusted weights determined during the training of the binary and multi-class classifiers.
[76] The training and validation datasets will be used alternately to train the classifiers and to adjust the weights of their nodes. More preferably, the inspection images are split into three datasets: the training and validation datasets, as mentioned previously, and a third "test" or "final validation" dataset, which is used to validate the final state of the classification model, once trained. In other words, the test dataset is used by the system to confirm the final weights of the neural network architecture for the binary and multi-class classifiers, once trained.
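The three-way split described above can be sketched as follows; the 70/15/15 proportions are an assumption for illustration, as the actual ratios used by the system may differ:

```python
# Shuffle the labelled images, then slice into training, validation and
# test ("final validation") datasets.
import random

def split_dataset(images, train=0.7, val=0.15, seed=42):
    images = list(images)
    random.Random(seed).shuffle(images)   # fixed seed for reproducibility
    n_train = int(len(images) * train)
    n_val = int(len(images) * val)
    return (images[:n_train],                    # training dataset
            images[n_train:n_train + n_val],     # validation dataset
            images[n_train + n_val:])            # test / final validation

train_set, val_set, test_set = split_dataset(range(100))
```

The same split would be applied to each of the first subset (binary classifier) and second subset (multi-class classifier) of inspection images.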
TRAINING
[77] FIG. 2 schematically illustrates steps of the training process performed by the training application (or training module), for automatically building defect classification models, each model being adapted to specific manufacturing processes, part models, or client requirements. The training application is a software program, stored on server 602 (identified in FIG. 6), comprising different submodules. The training module is governed
by a state machine which verifies whether or not caller actions are allowed at a given moment, including actions such as abort, initialize, train, pause, succeeded or failed/exception. The training module includes a training-API, including programming functions to manage a training session. The functions of the training-API can comprise an initialization function, a resume function, a start function, a pause function, an abort function, an evaluation function, and getStatus, getPerformance and getTrainingPerformance functions, as examples only. The initialization function prepares each training cycle by verifying the content of the first and second datasets, including for example confirming that all classes have enough sample images, i.e. that the number of images in each class is above a given threshold and that the exploration, training, validation and test image subsets, for the training of each classifier, have a predetermined size. The initialization function also initializes the defect classification model to be built with parameters of the previous model built, or with predefined or random weights for each classifier. A configuration file comprising initial parameters of the first and second combinations of neural network architecture and optimizer is thus loaded when starting the training. The configuration file can take different formats, such as for example a .JSON
format. The initial configuration file can include fields such as the classifier model to be loaded during training, the optimizer to be loaded during training, including the learning rate decay factor to be used, the data augmentation algorithms to be used in case of imbalanced class samples, and the number of epochs for which a stable accuracy must be maintained, as examples only. The start function will start the training process, and the training operations will be started using the parameters of the different fields of the initial configuration file. The evaluate function will evaluate a trained defect classification model against an evaluation dataset of inspection images, and will return an average accuracy, expressed as a percentage, i.e. the percentage of the predictions which are correct.
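The state machine governing the training module could be sketched as below; the state names and transition table are hypothetical, chosen to match the caller actions listed above (abort, initialize, train, pause, resume, succeeded, failed):

```python
# Each state admits only certain caller actions; any other action is
# rejected, mirroring the "allowed at a given moment" check described above.

ALLOWED_ACTIONS = {
    "idle":     {"initialize"},
    "ready":    {"train", "abort"},
    "training": {"pause", "abort", "succeeded", "failed"},
    "paused":   {"resume", "abort"},
}

TRANSITIONS = {
    "initialize": "ready", "train": "training", "pause": "paused",
    "resume": "training", "abort": "idle",
    "succeeded": "idle", "failed": "idle",
}

class TrainingSession:
    def __init__(self):
        self.state = "idle"

    def apply(self, action):
        """Apply a caller action if the state machine allows it."""
        if action not in ALLOWED_ACTIONS.get(self.state, set()):
            raise ValueError(f"action {action!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[action]
        return self.state

session = TrainingSession()
session.apply("initialize")   # idle -> ready
session.apply("train")        # ready -> training
```

A getStatus-style function would then simply report `session.state` back to the caller.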
[78] The training application, which can be called by the inspection system via a training-API, will thus initially load, or comprise, a binary classifier that can be trained to determine whether the inspection images correspond to non-defective or defective parts (represented by steps 208 and 214 on the left side of FIG. 2), and a multi-class classifier (which may also be referred to as a "defect-type classifier," and represented by steps 210 and 216 on the right side of FIG. 2), which can be trained to determine the defect types in the inspection images that have been determined as defective by the binary classifier.
[79] By "training" the classifiers, it is meant that the weights of the nodes of the different layers forming the classifier (binary or multi-class) are iteratively adjusted, to maximize the accuracy of the classifier's prediction, for a given number of trials (or epochs). The optimizer selected in combination with the neural network architecture is used during training for iteratively adjusting the weights of the nodes. Once trained, the weights associated with the plurality of nodes of the classifiers are set and define the classification model that can be used for automated part inspection.
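The iterative weight adjustment described in this paragraph can be illustrated with a deliberately minimal sketch: a single logistic unit trained by plain gradient descent over a number of epochs. A real binary classifier would be a deep neural network trained with an optimizer such as Adam or SGD; all names and values here are illustrative:

```python
import math, random

def train_binary(samples, labels, epochs=200, lr=0.5):
    """Iteratively adjust weights so predictions match labels (1 = defective)."""
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in samples[0]]  # initial random weights
    b = 0.0
    for _ in range(epochs):                  # one full pass over the data per epoch
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of "defective"
            g = p - y                        # gradient of the logistic loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b                              # trained weights define the model

def predict(w, b, x):
    """Once trained, the fixed weights are used for automated inspection."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

After training, the weights are set and define the classification model, exactly as the paragraph describes for the full-scale classifiers.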
[80] The inspection images selected for creating new classification models and/or adjusting existing models therefore form a first subset (split into training, validation and test subsets, and optionally an exploration subset) used to train the binary classifier, and the inspection images that have been determined as defective form a second subset of inspection images, used for training the multi-class classifier.
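A minimal sketch of the subset construction described above, assuming each inspection image carries a class label; the split ratios and field names are illustrative assumptions:

```python
import random

def split_datasets(images, ratios=(0.7, 0.15, 0.15), seed=42):
    """Form the first subset (train/validation/test splits for the binary
    classifier) and the second subset (defective images only, for the
    multi-class classifier)."""
    pool = list(images)
    random.Random(seed).shuffle(pool)        # shuffle before splitting
    n = len(pool)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    first_subset = {
        "train": pool[:n_train],
        "validation": pool[n_train:n_train + n_val],
        "test": pool[n_train + n_val:],
    }
    # Second subset: only the images labelled as defective.
    second_subset = [img for img in pool if img["label"] != "non-defective"]
    return first_subset, second_subset
```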
[81] The proposed system and method are especially advantageous in that different combinations of neural network algorithms and of optimizer algorithms can be used for the binary classifier and for the multi-class classifier. Moreover, determination of the best combination of neural network architecture and optimizer for the binary classifier and for the multi-class classifier can be made through an exploration phase, as will be explained in more detail below.
[82] The binary classifier may thus use a first combination of neural network (NN)
architecture and an optimizer, while the multi-class classifier may use a second combination of neural network architecture and optimizer. It will be noted that the binary classifier can be another type of classifier, such as a decision tree, a support vector machine or a naïve Bayes classifier. Preferably, the first and second combinations may further include a selection of loss function algorithms and associated learning rate factors.
The first and second combinations may or may not be the same, but experiments have shown that, in general, better results are obtained when the first and second combinations of neural network architecture and optimizer differ for the binary and multi-class classifiers.
As an example, the first combination of neural network architecture and optimizer for the binary classifier could be the ResNet34 architecture and the Adam optimizer, while the neural network and optimizer for the multi-class classifier could be the ResNet152 architecture and the SGD optimizer.
[83] Still referring to FIG. 2, at step 202, data augmentation may be performed on the inspection images associated with the same defect type, for which the number of associated inspection images is either insufficient for training the defect classification model, or too small compared to other defect classes. This step serves to balance the number of images of each of the defect types, to improve training accuracy and avoid the bias that would otherwise be created for defect types having a much greater number of inspection images compared to other types of defects. Data augmentation algorithms apply random transformations to a given training inspection image, thus creating new images to increase the number of images in a given class.
Transformations can be spatial transformations (such as rotating or flipping the images), but other types of transformation are possible, including changing the Red Green Blue (RGB) values of pixels, as an example only.
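The augmentation step can be sketched as follows, with an image represented as a nested list of (R, G, B) tuples; the specific transformations and parameters are illustrative assumptions, not the actual algorithms used:

```python
import random

def augment(image, rng):
    """Apply one random transformation: flip, 90-degree rotation, or RGB shift."""
    choice = rng.randrange(3)
    if choice == 0:                                   # horizontal flip
        return [row[::-1] for row in image]
    if choice == 1:                                   # 90-degree rotation
        return [list(row) for row in zip(*image[::-1])]
    shift = rng.randint(-10, 10)                      # shift RGB values, clamped to 0..255
    return [[tuple(max(0, min(255, c + shift)) for c in px) for px in row]
            for row in image]

def balance_class(images, target, rng=None):
    """Grow an under-represented defect class to `target` samples by
    augmenting randomly chosen members."""
    rng = rng or random.Random(0)
    out = list(images)
    while len(out) < target:
        out.append(augment(rng.choice(images), rng))
    return out
```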
[84] At step 204, the training API dynamically loads an initial configuration file (or initial training settings), which may contain various training parameters, such as for example the first combination of neural network architecture and optimizer to be used for training the binary classifier (step 214) and the second combination of neural network architecture and optimizer to be used for training the multi-class classifier (step 216). The configuration file and/or training settings may further contain an indication of the loss function algorithms to be used for the binary classifier and multi-class classifier training (which may or may not be different for the two classifiers), and an indication of the learning rate scheduler algorithms (and factors) to be used for the training of the binary classifier and of the multi-class classifier (which may also be different or not for the two classifiers). As examples only, the different neural network architectures that can be used by the binary and/or multi-class classifiers can include: ResNet50, ResNet101, ResNet152, WideResNet50, WideResNet101, InceptionV3 and InceptionResNet. Examples of optimizer algorithms that can be used for training the binary and/or multi-class classifiers can include the Adam optimizer and the Stochastic Gradient Descent (SGD) optimizer.
Examples of loss function algorithms can include the cross-entropy loss function and the NLL (negative log-likelihood) loss function, and examples of learning rate scheduler algorithms can include the decay scheduler and the cyclical rate scheduler. The initial configuration file can also optionally include weights for each node of the classifiers.
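For illustration, the cross-entropy loss and the two learning rate scheduler types mentioned above can be sketched as follows; the parameter names and the triangular form of the cyclical scheduler are assumptions:

```python
import math

def cross_entropy(pred_probs, true_idx):
    """Cross-entropy loss for one sample: -log of the probability
    assigned to the true class."""
    return -math.log(pred_probs[true_idx])

def decay_lr(base_lr, epoch, decay_factor=0.95):
    """Decay scheduler: the learning rate shrinks by a constant
    factor every epoch."""
    return base_lr * (decay_factor ** epoch)

def cyclical_lr(base_lr, max_lr, epoch, cycle_len=10):
    """Cyclical scheduler (triangular): the learning rate sweeps
    linearly up to max_lr then back down within each cycle."""
    pos = epoch % cycle_len
    half = cycle_len / 2
    frac = pos / half if pos <= half else (cycle_len - pos) / half
    return base_lr + (max_lr - base_lr) * frac
```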
[85] The above examples of neural network architectures, optimizers, loss functions and learning rate schedulers are non-exhaustive, and the present invention may be used
with different types of architectures, optimizers, loss functions and learning rate schedulers. The first and second combinations of training parameters and settings may also include other types of parameters, such as the number of epochs, in addition to those mentioned above. Preferably, the configuration file (or training settings) can be updated to add or remove any number of neural network architectures, optimizers, loss functions, and learning rate schedulers.
[86] The proposed method and system are also advantageous in that, in possible implementations, different types of neural network architectures and optimizers can be tried or explored, before fully training the binary and multi-class classifiers, so as to select the best or most accurate combination of architecture and optimizer for a given product or manufactured part type. In other words, the proposed method and system may include a step where different combinations of neural networks and optimizers (and also possibly loss functions and learning rate schedulers) are tried and explored, for the training of the binary classifier, and also for the training of the multi-class classifier, in order to select the "best" or "optimal" combinations to fully train the binary and multi-class classifiers, i.e. the combinations which provide the highest accuracy. Still referring to FIG. 2, the exploration or trial step 206 is performed prior to the training steps 212. The system, at step 208, tests (or explores/tries) different combinations of the neural networks and optimizers, and also preferably of loss functions and learning rate schedulers, on a reduced subset of the training inspection images, for a given number of epochs, for both the binary and multi-class classifiers. Following the exploration stage, a first combination with the best classification accuracy is selected for the binary classifier.
[87] The objective of the initial steps of exploring different combinations of neural networks and optimizers is to identify and select the pair of neural network and optimizer that provides the highest accuracy for a given subset of inspection images.
More specifically, exploring different combinations of neural networks and optimizers can comprise launching several shorter training sessions, using different pairs of neural networks and optimizers, and recording the performance (i.e. accuracy) for each pair tried on the reduced dataset (i.e. exploration dataset). For example, before building the defect classification model with a given combination of binary classifier and optimizer, n different pairs of neural networks and optimizers are tried, such as ResNet34 as the neural network and Adam as the optimizer, InceptionResNet as the neural network and Gradient Descent as the optimizer, or ResNet34 as the neural network and SGD as the optimizer. The
accuracy for each pair is determined using a reduced validation image data set, and the pair having the highest accuracy is selected and used for the training step.
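The exploration procedure described in this paragraph can be sketched as a simple search over candidate pairs, where `short_train_and_score` is a hypothetical stand-in for the shortened training session, returning the accuracy obtained on the reduced validation set:

```python
def explore(candidates, short_train_and_score):
    """Try each (architecture, optimizer) pair with a short training run
    and keep the pair with the highest validation accuracy."""
    best_pair, best_acc = None, -1.0
    for arch, opt in candidates:
        acc = short_train_and_score(arch, opt)   # accuracy on the exploration dataset
        if acc > best_acc:
            best_pair, best_acc = (arch, opt), acc
    return best_pair, best_acc
```

The same routine would be run once for the binary classifier and once for the multi-class classifier, each over its own candidate list.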
[88] Similarly, different loss functions and learning rate scheduler factors can be tried or explored when exploring the pairs of neural networks and optimizers. The loss function and learning rate scheduler which provide, together with the NN-architecture and the optimizer, the best accuracy (expressed as a percentage of correct predictions over the total number of predictions made) in distinguishing non-defective from defective parts for the given number of epochs, on an exploration subset of images, are thus identified and retained for the training of the binary classifier.
[89] Similarly, at step 210, different combinations of neural networks and optimizers (and also possibly of loss functions and learning rate schedulers) are tried (or explored) on the reduced subset of training inspection images, for determining the best combination to be used to fully train the multi-class classifier. The exploration is also performed for a given number of epochs. The combination with the best classification accuracy is selected for the multi-class classifier.
[90] As mentioned previously, the epoch number may be a parameter of the training system. As such, in possible embodiments, the exploration training phase can automatically stop once the binary and the multi-class classifiers reach a predetermined number of epochs for each combination. Preferably, the exploration training may be controlled, for instance stopped, resumed, and aborted, by the user, via the GUI, at any time during the exploration phase.
[91] In possible embodiments of the system, the training exploration can be bypassed by loading a configuration file comprising the initial parameters associated with the binary classifier, and those associated with the multi-class classifier. In that case, steps 206, 208 and 210 are bypassed.
[92] Still referring to FIG. 2, once the exploration phase is either completed or bypassed, training of the binary and of the multi-class classifiers can begin.
The binary classifier training starts at step 214, using, in one possible embodiment, the combination of neural network and optimizer algorithms determined during the exploration phase at step 208. The inspection images forming the first image subset are used for training the binary classifier. All of the selected images can form the first subset, or only a portion of
the images selected.
[93] The multi-class classifier training starts at step 216, preferably using, similarly to the binary classifier, the combination of neural network and optimizer determined as most efficient and/or accurate during the exploration phase, at step 210. In this case, the subset of the inspection images used for training the multi-class classifier consists of a subset of the first subset, i.e. this second subset comprises the inspection images classified as "defective" by the binary classifier.
[94] During training of the binary and multi-class classifiers, the training and validation image datasets are used to iteratively adjust the weights of the nodes of the binary classifier and of the multi-class classifier, based on parameters of the optimizers, after each epoch, for example. Adjusting the neural network parameters can include the automatic adjustment of the weights applied to the nodes of the different layers of the neural network, until the differences between the actual and predicted outcomes of each training pass are satisfactorily reduced, using the selected optimizer, loss function and learning rate factor. The validation subset is thus used as representative of the actual outcome of the prediction on the part state (non-defective or defective, and type of defect). Adjusting the optimizer can include iteratively adjusting its hyperparameters, which are used to control the learning process. It will be noted that the training process is completely automated, and is performed autonomously, by providing an initial configuration file to the training-API.
[95] In possible embodiments of the system and process, it may be possible for the user to pause the training. When the system receives a pause or stop instruction, the system saves the current configuration settings and all information relevant to the training, such as the number of epochs run, in a database on the training server. If the training is resumed, the system fetches all the information from the database as a restarting point. FIG.
5 shows a possible GUI where the status of the building process of the defect classification model can be monitored (window 500), by indicating the current iteration and progress of the training (see lower section 506 of window 500). The GUI also provides the possibility to pause (502) or abort (504) the training process, if needed.
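The pause/resume behaviour can be sketched as follows; the real system persists this state in a database on the training server, and a JSON file (with assumed field names) stands in for it here:

```python
import json, os, tempfile

def save_checkpoint(path, config, epochs_run):
    """On pause: persist the current settings and training progress."""
    with open(path, "w") as f:
        json.dump({"config": config, "epochs_run": epochs_run}, f)

def load_checkpoint(path):
    """On resume: fetch the saved state as the restarting point."""
    with open(path) as f:
        state = json.load(f)
    return state["config"], state["epochs_run"]
```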
[96] In one possible embodiment of the system, during the training process, the number of inspection images used to train the binary and multi-class classifiers at each training iteration is dynamically adapted as a function of the available physical resources of the processor performing the training. More specifically, the batch size, defined as the number of images passing through the binary and multi-class classifiers at each iteration, can be dynamically modified, as indicated by step 218. This step advantageously enables adjusting, in real time, the training process as a function of the physical/processing resources available for running the training. In possible embodiments, the batch sizes may have pre-defined values, and different batch sizes can be tried, until a warning or an indication (such as a memory error) that the processing resources (such as the GPU) are fully utilized is detected by the training system. In this case, the next lower batch size is tried until an acceptable batch size that can be handled by the processor (typically the GPU) is reached. In other words, the subset of inspection images submitted to the classifiers is fed in subsequent batches, and the number of inspection images in each batch is dynamically adjusted as a function of the availability of processing resources (i.e.
available processing capacity or processor availability). This feature or option of the training system eliminates the need to have prior knowledge of the hardware specifications, or training model requirements or parameter size. This feature also enables the training system to be highly portable, as different manufacturing plants may have different server/processing device requirements and/or specifications.
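The dynamic batch-size adaptation described above can be sketched as a fallback loop over pre-defined batch sizes, where `try_batch` is a hypothetical stand-in for running one training iteration at a given batch size:

```python
def pick_batch_size(candidates, try_batch):
    """Try pre-defined batch sizes from largest to smallest and keep
    the first one the processing resources can handle."""
    for size in sorted(candidates, reverse=True):
        try:
            try_batch(size)          # raises MemoryError if resources are exhausted
            return size              # first size the hardware can handle
        except MemoryError:
            continue                 # fall back to the next lower batch size
    raise RuntimeError("no workable batch size found")
```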
[97] Training of the binary classifier and of the multi-class classifier is completed when the accuracy of the classifiers reaches a given accuracy threshold (such as above 95%) within a given number of epochs. The defect classification model is thus built from the trained binary and multi-class classifiers and is defined by a configuration file which comprises the last updated parameters of the first and second combinations of neural networks and optimizers at the end of the training session. The selected neural networks, optimizers,
loss functions and learning rate schedulers are packaged in the configuration file that is loadable by the automated inspection system. The configuration file can comprise parameters such as the neural network architecture used for the binary classifier (for example: ResNet34), the source model (including weight settings) for the binary classifier, the neural network architecture used for the multi-class classifier (for example, InceptionResNet), the source model for the multi-class classifier (including weight settings), the optimizer for the binary and for the multi-class classifier (e.g. Adam), the learning rate (e.g.
0.03) and the learning rate decay factor (e.g. 1.0). The automatic defect classification model is thereby usable by the automated inspection system for detecting defective parts
and for identifying defect types on the manufactured parts being inspected.
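For illustration, a packaged model configuration of the kind described above might look as follows; the field names and weight file names are assumptions, while the example values (ResNet34, InceptionResNet, Adam, 0.03, 1.0) are those given in the text:

```python
import json

# Illustrative shape of the packaged model configuration file.
model_config = {
    "binary_classifier": {
        "architecture": "ResNet34",
        "source_model": "binary_weights.pt",      # hypothetical weights file
        "optimizer": "Adam",
    },
    "multiclass_classifier": {
        "architecture": "InceptionResNet",
        "source_model": "multiclass_weights.pt",  # hypothetical weights file
        "optimizer": "Adam",
    },
    "learning_rate": 0.03,
    "learning_rate_decay_factor": 1.0,
}

# What would be written to disk for the automated inspection system to load.
serialized = json.dumps(model_config, indent=2)
```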
POST-PROCESSING
[98] The post-processing modules illustrated in FIG. 3 comprise the different modules involved in storing the defect classification models, once built (step 304), in updating the inspection system's GUI (step 306) with the training results, and/or in updating the training and/or inspection system's databases (308) with the new models created or the existing models updated. Preferably, before transferring the defect classification model built to the inspection system, the test dataset has been used to demonstrate the accuracy of the first and second combinations of optimizers and binary / multi-class classifiers.
[99] Thus, at step 310, the resulting defect classification model is generated and comprises, in a configuration file, the type and parameters of the binary classifier and of the multi-class classifier. For example, a defect classification model for a new semiconductor part may include, in the form of a configuration file, the first combination of neural network architecture, optimizer, loss function and learning rate scheduler to be used for the binary classifier, as well as the relevant parameter settings for each of those algorithms, and the second combination of neural network architecture, optimizer, loss function and learning rate scheduler to be used for the multi-class classifier, as well as the relevant parameter settings for each of those algorithms.
[100] The defect classification model may be stored in a database located on the training server and/or on the inspection system server. The results of the training process, such as the first and second combinations selected and the corresponding first and second accuracies, may be displayed on the GUI. In one embodiment, the results may be exported into performance reports (step 312).
[101] In use, the automatic defect classification application loads the appropriate defect classification model according to the part type selected by an operator, through the GUI.
Thus, each part type can be associated with its own defect classification model, each model having been tuned and trained to optimize its accuracy for a given part type or client requirements. The automatic defect classification can advantageously detect new defects that are captured by the optical system, such as by classifying new/unknown defects into an "unknown" category or label. If the number of "unknown" defects for a given lot is above a given threshold, the application can be configured to generate a warning that the
classification model needs to be updated, and, in some possible implementations, the proposed system and method can update (or retrain) the classification model automatically.
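A minimal sketch of this "unknown"-rate check, assuming a hypothetical check_unknown_rate helper and an illustrative 5% threshold (the actual threshold is configurable in the described system):

```python
def check_unknown_rate(predicted_labels, threshold=0.05):
    """Return True when the fraction of 'unknown' classifications in a lot
    exceeds `threshold`, signalling that the model may need retraining.
    The 5% default is an assumed value, not one from the specification."""
    if not predicted_labels:
        return False
    return predicted_labels.count("unknown") / len(predicted_labels) > threshold

# Usage: a lot with 8 'unknown' defects out of 100 classified images
lot = ["scratch"] * 92 + ["unknown"] * 8
if check_unknown_rate(lot):
    print("WARNING: classification model may need to be updated or retrained")
```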
[102] The proposed method and system for generating automatic defect classification models, via machine learning, for use in automated inspection systems, can advantageously be deployed on a customer site's server(s) (where a "customer" is typically a manufacturing company), without needing to upload sensitive data to cloud-based servers. In addition, the proposed method and system give users control over the process and allow them to create defect-classification models with no prior AI knowledge. The proposed method and system can also work directly with inspection images, without having to rely on complex relational datasets.
The training application can be extended by adding new neural network architectures, new optimizers, loss functions and learning rate schedulers. The training application includes a resize layer function (resizeLayer) which ensures that the number of outputs of the newly added neural network architecture matches the number of outputs passed as arguments.
The training application also includes a forward function which pushes tensors passed as arguments to the input layer of the model and collects the output. A similar process can be performed to add new optimizers, loss functions and learning rate schedulers.
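This extension mechanism might be sketched in PyTorch as follows. Backbone, resize_layer and the forward helper are stand-ins for the actual training application's classes; only the head-swapping idea is taken from the text.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Stand-in for a pretrained architecture such as ResNet34, whose
    classification head is exposed as a `.fc` layer."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU()
        )
        self.fc = nn.Linear(128, 1000)  # original 1000-class head

    def forward(self, x):
        return self.fc(self.body(x))

def resize_layer(model: nn.Module, num_outputs: int) -> nn.Module:
    """Assumed behaviour of resizeLayer: swap the final fully-connected
    layer so the network emits `num_outputs` logits (e.g. 2 for the binary
    classifier, or one logit per defect type for the multi-class one)."""
    model.fc = nn.Linear(model.fc.in_features, num_outputs)
    return model

def forward(model: nn.Module, batch: torch.Tensor) -> torch.Tensor:
    """Assumed behaviour of the forward helper: push the input tensors
    through the model's input layer and collect the output."""
    return model(batch)

# Usage: adapt the stand-in backbone to a 20-class defect classifier
net = resize_layer(Backbone(), num_outputs=20)
out = forward(net, torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 20])
```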
Experimental Results
[103] One of the advantages of the present application is the capacity to test different combinations for the binary and the multi-class classifiers and the optimizers used with the classifiers. This exploration, as detailed hereinabove and defined in steps 206, 208 and 210 of FIG. 2, was implemented and tested in an experiment. Results of this experiment are presented in Table 1, which is an excerpt of an original table containing all the combinations and associated results.
[104] Table 1 contains the different training parameters used in each combination, the number of epochs for which the tests were run, and the accuracy results in classifying the inspection images, wherein the combinations in bold are the ones selected by the system as the best combinations with respect to classification accuracy.
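The exploration that produced Table 1 can be sketched as a simple grid search. train_and_score below is a stub returning illustrative accuracies rather than actually training the networks; two of its scores are taken from Table 1.

```python
from itertools import product

# Candidate building blocks, as listed in the claims.
architectures = ["ResNet34", "ResNet101", "ResNet152", "InceptionResNet"]
optimizers = ["Adam", "SGD"]
loss_functions = ["CrossEntropy", "NLLLoss"]
schedulers = ["Decay", "Cyclical"]

def train_and_score(arch, opt, loss, sched, epochs=5):
    # Stub: the real system trains each combination for `epochs` epochs and
    # measures validation accuracy. Two scores are taken from Table 1.
    known = {("ResNet34", "SGD"): 83.49, ("ResNet101", "SGD"): 82.32}
    return known.get((arch, opt), 55.0)

# Keep the combination with the highest accuracy.
best = max(
    product(architectures, optimizers, loss_functions, schedulers),
    key=lambda combo: train_and_score(*combo),
)
print(best[0], best[1])  # ResNet34 SGD, the binary-classifier winner in Table 1
```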
[105] The original inspection image dataset used contained 159,087 images that were split into a first dataset of 144,255 images, of which 80% were further split into a Training dataset and 20% into a Validation dataset, and a second dataset of 14,832 images constituting the Test dataset. The inspection images were associated with a total of 23 classes comprising defect types and the Accept type.
[106] Three classes and the 295 associated images were dropped by the method before starting the training. Each of the three classes did not reach the minimum number of inspection images having label information corresponding to the class, the minimum number being set to 120. Consequently, the training was performed on a Training dataset of 115,160 images and a Validation dataset of 28,800 images.
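The class filtering and 80/20 split described in paragraphs [105] and [106] might be sketched as follows, with toy data in place of the real 159,087-image dataset; the helper name is hypothetical.

```python
import random
from collections import Counter

MIN_PER_CLASS = 120  # minimum images per class, as in paragraph [106]

def filter_and_split(images, labels, val_ratio=0.20, seed=0):
    """Drop classes with fewer than MIN_PER_CLASS labelled images, then
    shuffle and split the remainder 80/20 into training and validation."""
    counts = Counter(labels)
    kept = [(img, lab) for img, lab in zip(images, labels)
            if counts[lab] >= MIN_PER_CLASS]
    random.Random(seed).shuffle(kept)
    n_val = int(len(kept) * val_ratio)
    return kept[n_val:], kept[:n_val]  # (training, validation)

# Toy usage: class "c" has only 10 images and is dropped before the split
images = [f"img{i}.png" for i in range(300)]
labels = ["a"] * 150 + ["b"] * 140 + ["c"] * 10
train, val = filter_and_split(images, labels)
print(len(train), len(val))  # 232 58
```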
[107] It can be seen from Table 1 that the testing of multiple combinations of classifiers is advantageous when selecting combinations for both the binary and the multi-class classifiers, as the variations between the different combinations are significant for both the binary classifier and the multi-class classifier. The selection of a loss function associated with a particular optimizer also has a significant impact on the accuracy of the combination.
[108] Different runs were performed on the same inspection images, with the training system always selecting the ResNet34 model and SGD optimizer for the binary classifier, and selecting deeper models such as ResNet152 and InceptionResNet combined with the SGD optimizer for the multi-class classifier, thereby confirming the consistency of the system in selecting the best combinations with respect to classification accuracy for both the binary and the multi-class classifiers.
[109] With the number of epochs increased from 5 to 10, the accuracy of the combinations for the binary classifier ranged approximately from 77% to 92%, and the accuracy of the combinations for the multi-class classifier ranged approximately from 64% to 82%, further confirming the impact of different combinations on the classification accuracy.
Table 1. Exploration Results

Classifier Type | Neural Network Architecture | Optimizer | Loss Function | LR Scheduler | Number of Epochs | Accuracy
Binary classifier | ResNet34 | Adam | Cross Entropy | Decay | 5 | 57.36
Binary classifier | ResNet34 | SGD | Cross Entropy | Decay | 5 | 83.49
Binary classifier | ResNet101 | Adam | Cross Entropy | Decay | 5 | 57.02
Binary classifier | ResNet101 | SGD | Cross Entropy | Decay | 5 | 82.32
Binary classifier | ResNet152 | Adam | Cross Entropy | Decay | 5 | 55.99
Multi classifier | ResNet152 | Adam | NLL Loss | Cyclical | 5 | 48.29
Multi classifier | ResNet152 | SGD | Cross Entropy | Cyclical | 5 | 52.86
Multi classifier | InceptionResNet | Adam | NLL Loss | Cyclical | 5 | 50.33
Multi classifier | InceptionResNet | SGD | Cross Entropy | Cyclical | 5 | 57.43
Multi classifier | ResNet152 | Adam | Cross Entropy | Decay | 5 | 42.91
Multi classifier | ResNet152 | SGD | Cross Entropy | Decay | 5 | 51.89
Multi classifier | ResNet34 | Adam | NLL Loss | Decay | 5 | 44.29
Multi classifier | ResNet34 | SGD | NLL Loss | Decay | 5 | 46.61
Multi classifier | ResNet152 | Adam | NLL Loss | Cyclical | 5 | 48.29
Multi classifier | ResNet152 | SGD | Cross Entropy | Cyclical | 5 | 52.86
Multi classifier | InceptionResNet | Adam | NLL Loss | Cyclical | 5 | 50.33
Multi classifier | InceptionResNet | SGD | Cross Entropy | Cyclical | 5 | 57.53

[110] Of course, numerous modifications could be made to the embodiments described above without departing from the scope of the present disclosure.
Claims (31)
1. A computer-implemented method for automatically generating a defect classification model, using machine learning, for use in an automated inspection system for the inspection of semiconductor and/or Printed Circuit Board (PCB) parts, the method comprising the steps of:
acquiring inspection images of parts captured by the inspection system, wherein the inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts,

training a binary classifier, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts, the binary classifier using a first combination of neural network architecture and optimizer, the binary classifier being trained by iteratively updating weights associated with nodes thereof,

training a multi-class classifier, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts, the multi-class classifier using a second combination of neural network architecture and optimizer, the multi-class classifier being trained by iteratively updating weights associated with nodes thereof, and

building, from the trained binary classifier and from the multi-class classifier, a defect classification model defined by a configuration file comprising the parameters of the first and second combinations of neural network architectures and optimizers and the updated weights of the nodes of each neural network architecture, the automatic defect classification model being thereby usable by the automated inspection system for detecting defective parts and for identifying defect types on the parts being inspected.
2. The computer-implemented method according to claim 1, wherein training the binary classifier further comprises an initial step of automatically exploring different combinations of neural network architecture and optimizer on an exploring subset of the inspection images, and wherein the first combination selected for the binary classifier corresponds to the combination that provides the highest accuracy in identifying non-defective from defective parts for a given number of epochs.
3. The computer-implemented method according to claim 1 or 2, wherein training the multi-class classifier further comprises an initial step of automatically exploring the different combinations of neural networks and optimizer on another exploring subset of the second subset of inspection images, and wherein the second combination of neural network architecture and optimizer corresponds to the combination that provides the highest accuracy in identifying the different defect types for a given number of epochs.
4. The computer-implemented method according to claim 3, wherein training the binary classifier further comprises automatically exploring different loss functions and different learning rate schedulers, and wherein the first combination is further defined by automatically selecting a loss function and a learning rate scheduler that provides, together with the neural network architecture and optimizer, the highest accuracy in detecting non-defective from defective parts for the given number of epochs, the configuration file of the defect classification model further comprising parameters from the selected loss function and learning rate scheduler of the binary classifier.
5. The computer-implemented method according to claim 4, wherein training the multi-class classifier further comprises automatically exploring the different loss functions and the learning rate schedulers, and wherein the second combination is further defined by automatically selecting a loss function and a learning rate scheduler that provides, together with the neural network architecture and the optimizer, the highest accuracy in identifying the defect types for the given number of epochs, the configuration file of the defect classification model further comprising parameters from the selected loss function and learning rate scheduler of the multi-class classifier.
6. The computer-implemented method according to claim 4 or 5, wherein the updated weights and the parameters of the selected neural network architectures, optimizers, loss functions and learning rate schedulers are packaged in the configuration file that is loadable by the automated inspection system.
7. The computer-implemented method according to any one of claims 1 to 6, wherein the different neural network architectures comprise at least one of: ResNet34, ResNet50, ResNet101, ResNet152, WideResNet50, WideResNet101, InceptionV3 and InceptionResNet.
8. The computer-implemented method according to any one of claims 1 to 7, wherein the different optimizers comprise at least one of: Adam and SGD optimizers.
9. The computer-implemented method according to any one of claims 1 to 8, wherein the different loss functions comprise at least one of: cross entropy and NLL loss functions.
10. The computer-implemented method according to any one of claims 1 to 9, wherein the different learning rate schedulers comprise at least one of: decay and cyclical learning rate schedulers.
11. The computer-implemented method according to any one of claims 1 to 10, wherein the parts comprise at least one of: semiconductor packages, wafers, single-side PCBs, double-side PCBs, multilayer PCBs and substrates.
12. The computer-implemented method according to any one of claims 1 to 11, wherein the multi-class classifier is trained to detect the defect types comprising one or more of: under plating, foreign material, incomplete parts, cracks, smudges, abnormal circuits, resist residue, deformation, scratches, clusters and metal film residue.
13. The computer-implemented method according to any one of claims 1 to 12, wherein acquiring the inspection images comprises capturing, through a graphical user interface, a selection of one or more image folders wherein the inspection images are stored.
14. The computer-implemented method according to claim 13, wherein training of the binary and multi-class classifiers is initiated in response to an input made through a graphical user interface.
15. The computer-implemented method according to claim 14, wherein the training of the binary and multi-class classifiers is controlled, via an input captured through the graphical user interface, to pause, abort or resume the training.
16. The computer-implemented method according to any one of claims 1 to 15, comprising validating whether the overall number of inspection images is sufficient to initiate the training of the binary classifier, and, if so, whether the number of inspection images associated with each defect type is sufficient to initiate the training of the multi-class classifier, whereby the training of the multi-class classifier is initiated only for defect types for which there is a sufficient number of inspection images.
17. The computer-implemented method according to claim 16, comprising increasing the number of inspection images of a given defect type, when the number of inspection images associated with the given defect type is insufficient, using data augmentation algorithms.
18. The computer-implemented method according to any one of claims 1 to 17, comprising automatically splitting, for each of the first and the second subsets, the inspection images into at least a training dataset and a validation dataset, prior to training the binary and multi-class classifiers, the training dataset being used during training to set initial parameters of the first and the second combinations of neural network architecture and optimizer, the validation dataset being used to adjust the weights of the nodes during the training of the binary and multi-class classifiers.
19. The computer-implemented method according to claim 18, comprising further automatically splitting the inspection images into a test dataset to confirm the parameters and adjusted weights of the first and second combinations, once the binary and multi-class classifiers have been trained.
20. The computer-implemented method according to any one of claims 1 to 19, wherein the number of inspection images used to train the binary and multi-class classifiers at each training iteration is dynamically adapted as a function of the available physical resources of the processor performing the training.
21. The computer-implemented method according to any one of claims 1 to 20, wherein the inspection images passed at each iteration through the binary and multi-class classifiers are bundled in predetermined batch sizes which are tested until an acceptable batch size can be handled by the processor.
22. The computer-implemented method according to claim 21, wherein the training of the binary and multi-class classifiers is performed by feeding the inspection images to the classifiers in subsequent batches, and wherein the number of inspection images in each batch is dynamically adjusted as a function of an availability of processing resources.
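The batch-size probing recited in claims 21 and 22 can be sketched as follows; pick_batch_size and the capacity check are hypothetical stand-ins for the actual resource test (which might, for example, catch an out-of-memory error during a trial forward pass).

```python
def pick_batch_size(candidates, fits):
    """Try predetermined batch sizes from largest to smallest and return
    the first one the processing resources can handle."""
    for size in sorted(candidates, reverse=True):
        if fits(size):
            return size
    raise RuntimeError("no candidate batch size fits the available resources")

# Usage with a stand-in capacity check (say, at most 4096 image slots)
batch_size = pick_batch_size([16, 32, 64, 128, 256], lambda s: s * 32 <= 4096)
print(batch_size)  # 128
```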
23. The computer-implemented method according to any one of claims 1 to 22, wherein acquiring the inspection images comprises scanning an image server and displaying on a graphical user interface a representation of a folder architecture comprising a machine identifier, a customer identifier, a recipe identifier and a lot or device identifier, for selection by a user.
24. The computer-implemented method according to any one of the previous claims, comprising verifying whether the inspection images have already been stored on a training server prior to copying the inspection images to the training server.
25. An automated inspection system for generating, via machine learning, an automatic defect classification model for the inspection of semiconductor and/or Printed Circuit Board (PCB) parts, the system comprising:
one or more dedicated servers, including processor(s) and data storage, the data storage having stored thereon:
an acquisition module for acquiring inspection images of the semiconductor and/or Printed Circuit Board (PCB) parts, wherein the inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts;
a training application comprising:
o a binary classifier that is trainable, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts, by iteratively updating weights of nodes of the binary classifier, the binary classifier using a first combination of neural network architecture and optimizer,

o a multi-class classifier that is trainable, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts, by iteratively updating weights of nodes of the multi-class classifier, the multi-class classifier using a second combination of neural network architecture and optimizer;
the training application comprising algorithms to generate, from the trained binary classifier and from the trained multi-class classifier, a defect classification model defined by a configuration file comprising the parameters of the first and second combinations of neural network architecture and optimizer and the updated weights of the nodes of each neural network architecture, the automatic defect classification model being thereby usable by the automated inspection system for detecting defects on additional parts being inspected.
26. The automated inspection system according to claim 25, wherein the data storage further stores an exploration module, a first set of different neural network architectures and a second set of optimizers, the exploration module being configured to explore different combinations of neural network architectures and optimizers on an exploring subset of the inspection images for training the binary classifier, the exploration module being further configured to select the first combination of neural network architecture and optimizer for the binary classifier that provides the highest accuracy in detecting non-defective from defective parts for a given number of epochs.
27. The automated inspection system according to claim 26, wherein the exploration module is further configured to explore different combinations of neural network architectures and optimizers on the exploring subset of the inspection images for training the multi-class classifier, the exploration module being further configured to select the second combination of neural network architecture and optimizer for the multi-class classifier that provides the highest accuracy in identifying defect types for a given number of epochs.
28. The automated inspection system according to any one of claims 25 to 27, comprising a graphical user interface, allowing a user to select one or more image folders wherein the inspection images are stored and to initiate, in response to an input made through the graphical user interface, the generation of the automatic defect classification model.
29. The automated inspection system according to any one of claims 25 to 28, comprising a database for storing the inspection images of parts captured by the inspection system, and for storing the label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts.
30. The automated inspection system according to any one of claims 25 to 29, wherein the data storage of the one or more dedicated servers further stores a pre-processing module, for validating whether the overall number of inspection images is sufficient to initiate the training of the binary and multi-class classifiers, and for copying the images to the database and processing the images, such as by using data augmentation algorithms.
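The pre-processing module of this claim performs two checks before training: a sufficiency test on the labelled image count, and optional data augmentation. The sketch below is hypothetical: the threshold, helper names, and the horizontal-flip augmentation are invented illustrations, not limitations of the claim.

```python
# Invented minimum-count threshold for the example; the claim does not
# specify a concrete number.
MIN_IMAGES_PER_CLASS = 4

def has_enough_images(images_by_label, minimum=MIN_IMAGES_PER_CLASS):
    # Validate that every label class has enough inspection images to
    # initiate training of the binary and multi-class classifiers.
    return all(len(imgs) >= minimum for imgs in images_by_label.values())

def horizontal_flip(image):
    # One simple augmentation: reverse each pixel row of the image
    # (represented here as a list of rows).
    return [list(reversed(row)) for row in image]

def augment(images):
    # Expand the training set with flipped copies of every image.
    return images + [horizontal_flip(img) for img in images]

images_by_label = {
    "non_defective": [[[0, 1], [1, 0]]] * 4,
    "defective": [[[1, 0], [0, 0]]] * 4,
}
ready = has_enough_images(images_by_label)
augmented = augment(images_by_label["defective"])
```

Real systems typically apply richer augmentations (rotations, crops, brightness shifts); the flip here only illustrates the mechanism.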
31. A non-transitory storage medium, having stored thereon computer-readable instructions for causing a processor to:
acquire inspection images of parts captured by the inspection system, wherein the inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts,

train a binary classifier, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts, the binary classifier using a first combination of neural network architecture and an optimizer,

train a multi-class classifier, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts, the multi-class classifier using a second combination of neural network architecture and an optimizer, and

generate, from the trained binary classifier and from the multi-class classifier, a defect classification model comprising configuration settings of the first and second combinations of neural network architecture and an optimizer, the automatic defect classification model being thereby usable by the automated inspection system for detecting defects on additional parts being inspected.
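At inference time, the two trained classifiers described throughout the claims operate as a cascade: the binary classifier screens every image, and only images it flags as defective reach the multi-class classifier. The sketch below illustrates that dispatch logic only; both classifiers are invented stand-in rules, not the trained neural networks the claims describe.

```python
# Minimal sketch of the claimed two-stage classification. Images are
# represented as lists of pixel rows; both classifier bodies are
# illustrative placeholders for trained networks.
def binary_classifier(image):
    # Stand-in rule: an image is "defective" if any pixel is nonzero.
    return "defective" if any(any(row) for row in image) else "non_defective"

def multiclass_classifier(image):
    # Stand-in rule: pick a defect type from the count of nonzero pixels.
    count = sum(1 for row in image for px in row if px)
    return "scratch" if count == 1 else "stain"

def classify(image):
    # The cascade: only images the binary stage flags as defective are
    # passed to the multi-class stage for a defect type.
    verdict = binary_classifier(image)
    if verdict == "non_defective":
        return ("non_defective", None)
    return ("defective", multiclass_classifier(image))

results = [classify(img) for img in (
    [[0, 0], [0, 0]],   # clean part
    [[1, 0], [0, 0]],   # single-pixel defect
    [[1, 1], [0, 1]],   # larger defect
)]
```

The cascade structure is why the second training subset in the claims contains only images of defective parts: the multi-class stage never sees clean images in operation.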
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063028800P | 2020-05-22 | 2020-05-22 | |
US63/028,800 | 2020-05-22 | ||
PCT/CA2021/050672 WO2021232149A1 (en) | 2020-05-22 | 2021-05-17 | Method and system for training inspection equipment for automatic defect classification |
Publications (1)
Publication Number | Publication Date |
---|---|
CA3166581A1 true CA3166581A1 (en) | 2021-11-25 |
Family
ID=78708867
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA3166581A Pending CA3166581A1 (en) | 2020-05-22 | 2021-05-17 | Method and system for training inspection equipment for automatic defect classification |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP2023528688A (en) |
CN (1) | CN115668286A (en) |
CA (1) | CA3166581A1 (en) |
TW (1) | TW202203152A (en) |
WO (1) | WO2021232149A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230237636A1 (en) * | 2022-01-27 | 2023-07-27 | TE Connectivity Services Gmbh | Vision inspection system for defect detection |
WO2023146946A1 (en) * | 2022-01-27 | 2023-08-03 | Te Connectivity Solutions Gmbh | Vision inspection system for defect detection |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114511503B (en) * | 2021-12-30 | 2024-05-17 | 广西慧云信息技术有限公司 | Particle board surface defect detection method capable of adapting to thickness of board |
CN114594106B (en) * | 2022-03-08 | 2024-08-20 | 苏州菲利达铜业有限公司 | Real-time monitoring method and system for copper pipe electroplating process |
TWI806500B (en) * | 2022-03-18 | 2023-06-21 | 廣達電腦股份有限公司 | Image classifying device and method |
CN114627093A (en) * | 2022-03-23 | 2022-06-14 | 中国联合网络通信集团有限公司 | Quality inspection method and device, quality inspection system, electronic device and readable medium |
CN115018099A (en) * | 2022-06-01 | 2022-09-06 | 成都智谷耘行信息技术有限公司 | Rail transit defect automatic distribution method and system based on support vector machine |
TWI842292B (en) * | 2022-12-23 | 2024-05-11 | 偲倢科技股份有限公司 | Appearance defect inspection method and system for automated production line products |
CN115830403B (en) * | 2023-02-22 | 2023-05-30 | 厦门微亚智能科技有限公司 | Automatic defect classification system and method based on deep learning |
TWI844284B (en) * | 2023-02-24 | 2024-06-01 | 國立中山大學 | Method and electrical device for training cross-domain classifier |
CN116245846B (en) * | 2023-03-08 | 2023-11-21 | 华院计算技术(上海)股份有限公司 | Defect detection method and device for strip steel, storage medium and computing equipment |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160132787A1 (en) * | 2014-11-11 | 2016-05-12 | Massachusetts Institute Of Technology | Distributed, multi-model, self-learning platform for machine learning |
US10234848B2 (en) * | 2017-05-24 | 2019-03-19 | Relativity Space, Inc. | Real-time adaptive control of additive manufacturing processes using machine learning |
US10496902B2 (en) * | 2017-09-21 | 2019-12-03 | International Business Machines Corporation | Data augmentation for image classification tasks |
KR20190073756A (en) * | 2017-12-19 | 2019-06-27 | 삼성전자주식회사 | Semiconductor defect classification device, method for classifying defect of semiconductor, and semiconductor defect classification system |
US11429894B2 (en) * | 2018-02-28 | 2022-08-30 | Google Llc | Constrained classification and ranking via quantiles |
US10713769B2 (en) * | 2018-06-05 | 2020-07-14 | Kla-Tencor Corp. | Active learning for defect classifier training |
CN109961142B (en) * | 2019-03-07 | 2023-05-12 | 腾讯科技(深圳)有限公司 | Neural network optimization method and device based on meta learning |
-
2021
- 2021-05-17 CN CN202180036832.7A patent/CN115668286A/en active Pending
- 2021-05-17 CA CA3166581A patent/CA3166581A1/en active Pending
- 2021-05-17 JP JP2023515224A patent/JP2023528688A/en active Pending
- 2021-05-17 WO PCT/CA2021/050672 patent/WO2021232149A1/en active Application Filing
- 2021-05-20 TW TW110118315A patent/TW202203152A/en unknown
Also Published As
Publication number | Publication date |
---|---|
TW202203152A (en) | 2022-01-16 |
CN115668286A (en) | 2023-01-31 |
WO2021232149A1 (en) | 2021-11-25 |
JP2023528688A (en) | 2023-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA3166581A1 (en) | Method and system for training inspection equipment for automatic defect classification | |
EP3499418B1 (en) | Information processing apparatus, identification system, setting method, and program | |
US10964004B2 (en) | Automated optical inspection method using deep learning and apparatus, computer program for performing the method, computer-readable storage medium storing the computer program, and deep learning system thereof | |
US10679333B2 (en) | Defect detection, classification, and process window control using scanning electron microscope metrology | |
US20220254005A1 (en) | Yarn quality control | |
CN109598698B (en) | System, method, and non-transitory computer readable medium for classifying a plurality of items | |
US20220374720A1 (en) | Systems and methods for sample generation for identifying manufacturing defects | |
WO2016090044A1 (en) | Automatic defect classification without sampling and feature selection | |
KR20180091952A (en) | Characteristic selection through outlier detection and automatic process window monitoring | |
US11967060B2 (en) | Wafer level spatial signature grouping using transfer learning | |
US20190187555A1 (en) | Automatic inline detection and wafer disposition system and method for automatic inline detection and wafer disposition | |
US20200005084A1 (en) | Training method of, and inspection system based on, iterative deep learning system | |
JP7150918B2 (en) | Automatic selection of algorithm modules for specimen inspection | |
US11035666B2 (en) | Inspection-guided critical site selection for critical dimension measurement | |
US11639906B2 (en) | Method and system for virtually executing an operation of an energy dispersive X-ray spectrometry (EDS) system in real-time production line | |
JP2019113914A (en) | Data identification device and data identification method | |
CN113743447B (en) | Semiconductor flaw identification method, device, computer equipment and storage medium | |
US20240193758A1 (en) | Apparatus and method with image generation | |
US20240281953A1 (en) | Adaptive spatial pattern recognition for defect detection | |
US20210306547A1 (en) | System and edge device | |
CN118676014A (en) | Wafer inspection method, system, electronic device, storage medium and computer program product | |
Deshmukh et al. | Automatic Inspection System for Segregation of Defective Parts of Heavy Vehicles | |
JP2024533867A (en) | Method for determining whether a given delivery item is located within a monitored area - Patents.com | |
CN117522271A (en) | Industrial collaborative warehouse management method and system | |
Tseng et al. | Author's Accepted Manuscript |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request |
Effective date: 20220729 |