WO2021232149A1 - Method and system for training inspection equipment for automatic defect classification - Google Patents

Method and system for training inspection equipment for automatic defect classification

Info

Publication number
WO2021232149A1
WO2021232149A1 (PCT/CA2021/050672)
Authority
WO
WIPO (PCT)
Prior art keywords
training
inspection images
classifier
inspection
neural network
Prior art date
Application number
PCT/CA2021/050672
Other languages
English (en)
Inventor
Parisa Darvish Zadeh Varcheie
Louis-philippe MASSE
Original Assignee
Nidec-Read Corporation
Nidec-Read Inspection Canada Corporation
Priority date
Filing date
Publication date
Application filed by Nidec-Read Corporation and Nidec-Read Inspection Canada Corporation
Priority to CN202180036832.7A (published as CN115668286A)
Priority to JP2023515224A (published as JP2023528688A)
Priority to CA3166581A (published as CA3166581A1)
Publication of WO2021232149A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/26 - Testing of individual semiconductor devices
    • G01R31/265 - Contactless testing
    • G01R31/2656 - Contactless testing using non-ionising electromagnetic radiation, e.g. optical radiation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/01 - Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/02 - Knowledge representation; Symbolic representation
    • G06N5/022 - Knowledge engineering; Knowledge acquisition
    • G06N5/025 - Extracting rules from data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/87 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using selection of the recognition techniques, e.g. of a classifier in a multiple classifier system
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/28 - Testing of electronic circuits, e.g. by signal tracer
    • G01R31/2832 - Specific tests of electronic circuits not provided for elsewhere
    • G01R31/2834 - Automated test systems [ATE]; using microprocessors or computers
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/28 - Testing of electronic circuits, e.g. by signal tracer
    • G01R31/2851 - Testing of integrated circuits [IC]
    • G01R31/2894 - Aspects of quality control [QC]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/24 - Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30141 - Printed circuit board [PCB]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30148 - Semiconductor; IC; Wafer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06 - Recognition of objects for industrial automation

Definitions

  • The technical field generally relates to inspection systems and methods for automatic defect inspection, and more specifically to methods and systems for automatically classifying defects of products being inspected.
  • the methods and systems presented hereinbelow are especially adapted for the inspection of semiconductor products.
  • Manufacturing processes generally include the automated inspection of the manufactured parts, at different milestones during the process, and typically at least at the end of the manufacturing process. Inspection may be conducted with inspection systems that optically analyze the manufactured parts and detect defective parts. Different technologies can be used, such as cameras combined with laser-triangulation and/or interferometry. Automated inspection systems ensure that the parts manufactured meet the quality standards expected and provide useful information on adjustments that may be needed to the manufacturing tools, equipment and/or compositions, depending on the type of defects identified.
  • There is provided a computer-implemented method for automatically generating a defect classification model, using machine learning, for use in an automated inspection system for the inspection of manufactured parts.
  • the method comprises a step of acquiring inspection images of parts captured by the inspection system.
  • the inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts, such as semiconductor and/or Printed Circuit Board (PCB) parts.
  • the method also comprises a step of training a binary classifier, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts.
  • the binary classifier uses a first combination of a neural network architecture and of an optimizer.
  • the binary classifier is trained by iteratively updating weights of the nodes of the different layers of the neural network architecture used in the first combination.
  • the method also comprises a step of training a multi-class classifier, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts.
  • the multi-class classifier uses a second combination of a neural network architecture and of an optimizer.
  • the multi-class classifier is trained by iteratively updating weights of the nodes of the different layers of the neural network architecture of the second combination.
  • a defect classification model is built or generated, where a configuration file defines the first and second combinations of neural network architectures and optimizers, and parameters thereof.
  • the configuration file also comprises the final updated weights of the nodes of each of the neural network architectures from the binary and from the multi-class classifiers.
  • the automatic defect classification model is thereby usable by the automated inspection system for detecting defective parts and for identifying defect types on the manufactured parts being inspected.
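  • As an illustration only (not part of the patent text), the two-stage model described above could be applied at inference time roughly as in the following Python/PyTorch sketch, where `binary_net`, `multiclass_net` and `defect_labels` are hypothetical names for the trained binary classifier, the trained multi-class classifier and the defect-type label map:
```python
import torch

def classify_part(image_tensor, binary_net, multiclass_net, defect_labels):
    """Two-stage classification: pass/fail first, then defect type only if the part fails."""
    binary_net.eval()
    multiclass_net.eval()
    with torch.no_grad():
        x = image_tensor.unsqueeze(0)                        # add a batch dimension
        defective = binary_net(x).argmax(dim=1).item() == 1  # class 1 assumed to mean "defective"
        if not defective:
            return "non-defective", None
        defect_idx = multiclass_net(x).argmax(dim=1).item()
        return "defective", defect_labels[defect_idx]

# Example usage (hypothetical 3x224x224 image tensor):
# status, defect = classify_part(img, binary_net, multiclass_net,
#                                ["under plating", "foreign material", "crack"])
```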
  • the step of training the binary classifier further comprises an initial step of automatically exploring different combinations of neural network architecture and optimizer on an exploring subset of the inspection images.
  • the first combination selected for the binary classifier corresponds to the combination that provided, during the exploration step, the highest accuracy in distinguishing non-defective from defective parts for a given number of epochs.
  • the step of training the multi-class classifier further comprises an initial step of automatically exploring the different combinations of neural network architectures and optimizer, using another exploring subset of inspection images.
  • the second combination of neural network architecture and optimizer selected for the multi-class classifier corresponds to the combination that provided, during the exploration step, the highest accuracy in identifying the different defect types for a given number of epochs.
  • the step of training the binary classifier further comprises a step of automatically exploring different loss functions and different learning rate schedulers.
  • the first combination is further defined by a loss function and by a learning rate scheduler that provided, during the exploration phase, together with the neural network architecture and the optimizer, the highest accuracy in distinguishing non-defective from defective parts for the given number of epochs.
  • the selection of the loss function and of the learning rate scheduler is made automatically.
  • the configuration file of the defect classification model further comprises the parameters from the selected loss function and from the learning rate scheduler for the binary classifier.
  • the step of training the multi-class classifier further comprises a step of automatically exploring the different loss functions and the learning rate schedulers.
  • the second combination is further defined by the loss function and the learning rate scheduler that provided, during the exploration phase, together with the neural network architecture and the optimizer, the highest accuracy in identifying the defect types for the given number of epochs.
  • the configuration file of the defect classification model further comprises the parameters from the selected loss function and from the learning rate scheduler of the multi-class classifier.
  • the updated weights and the parameters of the selected neural network architectures, optimizers, loss functions and learning rate schedulers are packaged in the configuration file that is loadable by the automated inspection system.
  • the different neural network architectures comprise at least one of the following neural network architectures: ResNet34, ResNet50, ResNet101, ResNet152, WideResNet50, WideResNet101, InceptionV3 and InceptionResNet.
  • the different optimizers comprise at least one of: Adam and SGD optimizers.
  • the different loss functions comprise at least one of: cross-entropy and NLL (negative log-likelihood) loss functions.
  • the different learning rate schedulers comprise at least one of: decay and cyclical rate schedulers.
  • the automated inspection system is trained to detect different defect types on at least one of the following products: semiconductor packages, wafers, single-sided PCBs, double-sided PCBs, multilayer PCBs and substrates.
  • the defect types comprise one or more of: under plating, foreign material, incomplete parts, cracks, smudges, abnormal circuits, resist residue, deformation, scratches, clusters and metal film residue.
  • acquiring the inspection images comprises capturing, through a graphical user interface, a selection of one or more image folders wherein the inspection images are stored.
  • training of the binary and multi-class classifiers is initiated in response to an input made through a graphical user interface.
  • training of the binary and multi-class classifiers is controlled, via an input captured through the graphical user interface, to pause, abort or resume the training.
  • the method comprises a step of validating whether the overall number of inspection images is sufficient to initiate the training of the binary classifier, and if so, whether the number of inspection images associated with each defect type is sufficient to initiate the training of the multi-class classifier, whereby the training of the multi-class classifier is initiated only for defect types for which there is a sufficient number of inspection images.
  • the method comprises increasing the number of inspection images of a given defect type, when the number of inspection images associated with the given defect type is insufficient, using data augmentation algorithms.
  • the method comprises automatically splitting, for each of the first and the second subsets, the inspection images into at least a training dataset and a validation dataset, prior to training the binary and multi-class classifier.
  • the training dataset is used during training to set initial parameters of the first and the second combinations of the neural network architecture and optimizer.
  • the validation dataset is used to validate and further adjust the weights of the nodes during the training of the binary and multi-class classifiers.
  • the method comprises automatically splitting the inspection images into a test dataset to confirm the parameters and weights of the first and second combinations, once the binary and multi-class classifiers have been trained.
  • the number of inspection images used to train the binary and multi-class classifiers at each training iteration is dynamically adapted as a function of the available physical resources of the processor performing the training.
  • the number of inspection images passed at each iteration through the binary and multi-class classifiers is bundled in predetermined batch sizes, which are tested until an acceptable batch size can be handled by the processor.
  • the training of the binary and multi class classifiers is performed by feeding the inspection images to the classifiers in subsequent batches, and wherein the number of inspection images in each batch is dynamically adjusted as a function of an availability of processing resources.
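  • As a sketch only (the patent does not provide an implementation), such batch-size adaptation is commonly done by attempting a trial pass and falling back to a smaller batch when the GPU reports an out-of-memory error; the helper below assumes PyTorch and a user-supplied `sample_batch_fn` loader:
```python
import torch

def find_workable_batch_size(model, loss_fn, sample_batch_fn,
                             candidate_sizes=(256, 128, 64, 32, 16, 8)):
    """Try predetermined batch sizes, largest first, until one fits in the available GPU memory."""
    for size in candidate_sizes:
        try:
            images, labels = sample_batch_fn(size)     # load a trial batch of `size` images
            loss_fn(model(images), labels).backward()  # trial forward/backward pass
            model.zero_grad(set_to_none=True)
            return size
        except RuntimeError as err:
            if "out of memory" not in str(err).lower():
                raise                                  # unrelated error: re-raise
            torch.cuda.empty_cache()                   # free memory and try a smaller batch
    raise RuntimeError("No candidate batch size fits in the available processing resources")
```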
  • acquiring the inspection images comprises scanning an image server and displaying on a graphical user interface a representation of a folder architecture comprising a machine identifier, a customer identifier, a recipe identifier and a lot or device identifier, for selection by a user.
  • the method comprises verifying whether the inspection images have already been stored on a training server prior to copying the inspection images to the training server.
  • an automated inspection system for automatically generating, via machine learning, defect classification models, each model being adapted for the inspection of a specific part type.
  • the different defect classification models can be used for the inspection of different types of manufactured parts, such as semiconductor and/or Printed Circuit Board (PCB) parts.
  • the system comprises one or more dedicated servers, including processor(s) and data storage, the data storage having the modules and applications described below stored thereon.
  • the system also comprises an acquisition module for acquiring inspection images of parts captured by the inspection system, wherein the inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts.
  • the system also comprises a training application comprising a binary classifier that is trainable, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts, by iteratively updating weights of the nodes of the neural network architecture used for the binary classifier.
  • the binary classifier uses a first combination of neural network architecture and optimizer.
  • the training application also comprises a multi-class classifier that is trainable, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts.
  • the multi-class classifier uses a second combination of neural network architecture and optimizer.
  • the multi-class classifier is trained by iteratively updating weights of the nodes of the neural network architecture used for the multi-class classifier.
  • the training application comprises algorithms to generate, from the trained binary classifier and from the trained multi-class classifier, a defect classification model defined by a configuration file.
  • the configuration file comprises the parameters of the first and second combinations of neural network architecture and optimizer and the updated weights of the nodes of each neural network architecture.
  • the automatic defect classification model is thereby usable by the automated inspection system for detecting defects on additional parts being inspected.
  • the data storage further stores an exploration module, a first set of different neural network architectures and a second set of optimizers.
  • the exploration module is configured to explore different combinations of neural network architectures and optimizers on an exploring subset of the inspection images for training the binary classifier.
  • the exploration module is further configured to automatically select the first combination of neural network architecture and optimizer for the binary classifier that provides the highest accuracy in detecting non-defective from defective parts for a given number of epochs.
  • the exploration module is further configured to explore different combinations of neural network architectures and optimizers on the exploring subset of the inspection images for training the multi-class classifier.
  • the exploration module is further configured to automatically select the second combination of neural network architecture and an optimizer for the multi-class classifier that provides the highest accuracy in identifying defect types for a given number of epochs.
  • the system comprises a graphical user interface, allowing a user to select one or more image folders wherein the inspection images are stored and to initiate, in response to an input made through the graphical user interface, the generation of the automatic defect classification model.
  • the system comprises a database for storing the inspection images of parts captured by the inspection system, and for storing the label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts.
  • the data storage of the one or more dedicated servers further stores a pre-processing module, for validating whether the overall number of inspection images is sufficient to initiate the training of the binary and multi-class classifiers, and for copying the images to the database and processing the images, such as by using data augmentation algorithms.
  • a non-transitory storage medium has stored thereon computer-readable instructions for causing a processor to: acquire inspection images of parts captured by the inspection system, wherein the inspection images are associated with label information indicative of whether a given image corresponds to a non-defective or defective part, and further indicative of a defect type, for the inspection images corresponding to defective parts; train a binary classifier, using a first subset of the inspection images, to determine whether the inspection images correspond to non-defective or defective parts, the binary classifier using a first combination of neural network architecture and an optimizer; train a multi-class classifier, using a second subset of the inspection images corresponding to defective parts, to determine the defect type in the inspection images previously determined by the binary classifier as corresponding to defective parts, the multi-class classifier using a second combination of neural network architecture and an optimizer; and generate, from the trained binary classifier and from the trained multi-class classifier, a defect classification model comprising configuration settings of the trained classifiers, including the updated weights of the nodes of each neural network architecture.
  • FIG. 1 is a flowchart of steps performed by a pre-processing module, according to a possible embodiment of a method and system for automatically generating a defect classification model for use by an automated inspection system.
  • FIG. 2 is a flowchart of steps performed by a training application, according to a possible embodiment of the method and system.
  • FIG. 3 is a flowchart of steps performed by a post-processing module, according to a possible embodiment of the method and system.
  • FIG. 4 is a graphical user interface (GUI) for capturing a selection of image folders containing training images for use by the training application, according to a possible embodiment.
  • FIG. 5 is a graphical user interface (GUI) for monitoring and controlling the training process, for example to pause, abort or resume the training.
  • FIG. 6 is a schematic illustration a system for automatically generating a defect classification model for use in an automated inspection system of manufactured parts, according to a possible embodiment.
  • FIG. 7 is a schematic illustration of a computer network including computers or servers, and data storage, and being part of, or linked to, an automated part inspection system, according to a possible embodiment.
  • the automatic defect classification system, method and software application described in the present application relate to automated 2D and/or 3D inspection and metrology equipment.
  • the Applicant already commercializes different inspection systems, such as semiconductor package inspection systems (GATS-2128, GATS-6163, etc.), printed circuit board inspection systems (STAR REC, NRFEID), optical vision inspection systems (wafer or substrate bump inspection system), etc., with which the proposed system for automatically generating defect-classification model(s) can be used.
  • the exemplary system and process described with reference to FIGs. 1 to 6 are especially adapted for the inspection of semiconductor and PCB products, but the proposed system and method can be used in other applications and for other industries which require the automated inspection of parts, such as the automobile industry, as an example only.
  • the proposed defect classification system can also be adapted to different automated visual inspection systems, other than laser triangulation.
  • existing optical inspection systems often include an offline defect-detection stage, where classification of the detected defects into product-specific or client-specific classes is done manually, by human operators.
  • There also exist systems provided with automatic AI/machine learning (ML) classifiers that analyze images produced by inspection cameras and assign the defects to predefined classes, in real time.
  • those systems are difficult to configure, and often require a data expert and/or AI specialist to be able to tune the classifiers properly.
  • predefined ML-models are typically used, and they are not always optimal depending on the types of defects that need to be detected.
  • an automated Artificial Intelligence (AI)-based defect classification system is provided.
  • the inspection system can provide higher accuracy in measurements, and may bring down inspection cost and reduce human errors throughout the inspection process.
  • the proposed system and method make it possible to automatically generate one or more defect classification model(s) for use in an automated part inspection system.
  • users such as machine operators having no or limited knowledge of AI can build new defect-classifier models or update existing ones, whether the inspection system is inline or offline.
  • the proposed system and method can build or update classification models for different product types, such as wafers, individual dies, substrates or IC packages.
  • the proposed system and method thus greatly simplify training of the inspection system in detecting defect types, for different products.
  • the proposed classification-training system can detect changes in the types of parts and/or defects that are presented thereto, and can adjust its defect classification models with no or limited intervention by a user (e.g. a human operator).
  • Conditions that trigger the creation of new classification models, or adjustments to existing classification models, are multiple and include: i. new images captured, such as for defects from depleted defect classes (i.e. classes for which there are not enough images to properly tune or configure the classification model), ii. changes in class labels (where a label can correspond to a defective or non-defective part, or to a type of defect), iii. a new product to be inspected (which requires a new classification model to be built), iv. scheduled retraining, and v. classification model drift, detected by quality assurance mechanisms.
  • the proposed system and method can automatically select, from a list of existing machine learning (ML) models, the most appropriate model to be used, and tuning of the hyperparameters associated with said model can be realized using a simple grid search technique.
  • the proposed system and method also have the advantage of being implemented, in preferred embodiments, on dedicated servers. The proposed system and method can thus be implemented in a closed environment, without having to access any AI cloud-based platforms. The proposed automated classification training can thus be performed in isolated environments, such as in plants where there is no or restricted internet access.
  • The term “processing device” encompasses computers, nodes, servers and/or specialized electronic devices configured and adapted to receive, store, process and/or transmit data, such as labelled images and machine learning models.
  • Processing devices include processors, such as microcontrollers and/or microprocessors, CPUs and GPUs, as examples only.
  • the processors are used in combination with data storage, also referred to as “memory” or “storage medium”.
  • Data storage can store instructions, algorithms, rules and/or image data to be processed.
  • Storage medium encompasses volatile or non-volatile/persistent memory, such as registers, cache, RAM, flash memory, ROM, as examples only.
  • A schematic representation of an architecture being part of, or linked to, an automated inspection system, wherein the architecture includes such processing devices and data storage, is presented in FIG. 7.
  • By “classifier” we refer to machine learning algorithms whose function is to classify or predict the classes or labels to which data, such as a digital image, belong.
  • A “classifier” is a special type of machine learning model. In some instances, a classifier is a discrete-value function that assigns class labels to data points. In the present application, the data points are derived from digital inspection images.
  • A “binary classifier” predicts, with a given degree of accuracy and certainty, to which of two classes a given set of data belongs. For manufactured part inspection, the classes can be “pass” or “fail.”
  • A “multi-class” classifier predicts, with a given degree of accuracy and certainty, to which one of a plurality of classes a given set of data belongs.
  • By “defect classification model” or “model” we also refer to machine learning models.
  • the defect classification model is a combination of trained classifiers, used in combination with optimizers, loss functions and learning rate schedulers, the parameters of which have also been adjusted during training of the classifiers.
  • By “neural network architectures,” also simply referred to as “neural networks,” we refer to specific types of machine-learning models (or algorithms) that are based on a collection of connected nodes (also referred to as “artificial neurons” or “perceptrons”) which are structured in layers. Nodes of a given layer are interconnected to nodes of neighbouring layers, and weights are assigned to the connections between the nodes. The bias represents how far a prediction is from the intended value; biases can be seen as the difference between a node’s input and its output.
  • Examples of neural network architectures include convolutional neural networks, recurrent neural networks, etc. More specific examples of neural network architectures include the ResNet and Inception architectures.
  • By “loss functions” we refer to algorithmic functions that measure how far a prediction made by a model or classifier is from the actual value. The smaller the number returned by the loss function, the more accurate the classifier’s prediction.
  • By “optimizers” we refer to algorithms that tie the loss function to the classifier parameters and update the weights of the nodes of the classifier in response to the output of the loss function. In other words, optimizers update the weights of the nodes of the neural network architecture to minimize the loss function.
  • By “learning rate schedulers” we refer to algorithms that adjust the learning rate during training of the machine learning classifier, by reducing the learning rate according to a predefined schedule.
  • the learning rate is a hyperparameter controlling how much the classifier needs to be changed (by adjusting the weights) in response to the estimated error.
  • By “epochs” we refer to the number of passes or cycles of the entire dataset through the machine learning model or architecture.
  • An “epoch” is one complete presentation of the dataset to the machine-learning algorithm.
  • The system generally includes pre-processing modules, to prepare the inspection images that will be used to build or adjust the defect classification model (shown in FIG. 1); a training application, accessible via a training Application Programming Interface (API), that creates or builds the defect classification models based on the labelled and processed training images (FIG. 2), by training a binary and a multi-class classifier; and post-processing modules (FIG. 3), which manage the classification models created and update the inspection system 606 with the newly created or adjusted classification models.
  • the system 600 comprises an acquisition module 610 to acquire the inspection images captured by the inspection system 606, with either 2D or 3D cameras.
  • the inspection system 606 operates via server 604, which runs the inspection system application and comprises database or data storage to store the inspection images.
  • the inspection images are thus first stored in the inspection system database 608, and the defect classification application 618 is used to classify or label the inspection images, with label information indicative of whether the part is defective or not, and if defective, with label information indicative of the defect type.
  • Another computer or server 602 runs the training application 614 and provides the training-API that can be accessed by the inspection system 606.
  • the server 602 includes one or more processors to run the training application 614.
  • the server 602 comprises non-transitory data storage to store the computer-readable instructions of the application.
  • An exploration module 612, which makes it possible to explore different combinations of classifiers, is provided as part of the training application 614.
  • the system 600 preferably includes its own training database 616, to store the different classifiers, optimizers, loss functions and learning rate schedulers that can be used when building or updating a defect classification model, as well as the configuration settings and parameters of these machine learning algorithms.
  • FIG.1 schematically illustrates possible pre-processing modules 10, part of the proposed system.
  • the pre-processing modules generally prepare the training dataset that will be used by the training application.
  • the training dataset generally comprises labelled inspection images, i.e. images that have been tagged or labelled with inspection information, such as “non-defective” or “defective,” or with a specific “defect type”.
  • the proposed system can be triggered or activated to acquire inspection images captured by cameras of the inspection system, to scan one or more servers storing inspection images and their associated label or class information.
  • A class or label can be, for example, 0 for non-defective parts, and numbers 1 to n for n different types of defects, such as 1 for under-plating defects, 2 for foreign material defects, 3 for incomplete parts, 4 for cracks, etc. There can be any number of defect types, such as between 5 and 100 different types of defects. Labels can thus be any alphanumerical indicators used to tag or provide an indication of the content of the image, such as whether the image corresponds to a defective or non-defective part, and for defective parts, the type of defect. Typically, most inspection images captured by optical inspection correspond to non-defective parts, unless there is an issue with the manufacturing process.
  • the inspection images generated by optical inspection systems are labelled or associated with a defective or non-defective label (or class).
  • A small portion of the inspection images, typically between 0.01% and 10% as an example only, corresponds to defective parts.
  • the inspection images need to be specifically labelled or classified according to the defect type.
  • the one or more servers (ref. numeral 604) storing the inspection images are part of, or linked to, the inspection system (numeral 606 in FIG. 6).
  • A schematic representation of a possible architecture of the one or more computers or servers 604 is further detailed in FIG. 7.
  • the architecture includes processing devices (such as a 2D PC providing the graphical user interface via the Equipment Front End Module (EFEM), a 3D processing PC and a 3D GPU PC) and data storage 608.
  • 2D and 3D cameras capture the inspection images, with the 2D or 3D frame grabbers, and the images are processed by the CPUs and/or GPUs of the computers or servers 604 and stored in the inspection system database 608.
  • The server 604, which manages and stores the images, and the training server 602 can be combined into a single server, which can also correspond to the server of the inspection system.
  • the different functions and applications, including image storage and management, training and part inspection can be run from one or from multiple servers / computers.
  • the inspection images are images of semiconductor or PCB parts, such as semiconductor packages, Silicon or other material wafers, single-sided PCBs, double-sided PCBs, multilayer PCBs, substrates and the like.
  • the defect types can include, as examples only: under plating, foreign material, incomplete parts, cracks, smudges, abnormal circuits, resist residue, deformations, scratches, abnormal passivation, clusters, metal film residue, etc. This list of defects is of course non-exhaustive, as the number and types of defects can differ depending on the types of parts being inspected.
  • the one or more servers 604 store the inspection images in folders, organized according to a given folder structure, with different folder levels, such as Machine Name, Customer Name, Recipe or Parts and Lots.
  • An exemplary embodiment of a folder structure is shown with reference to FIG.4, where the folder structure 408 of the server is presented through a graphical user interface (GUI) 400.
  • the GUI shows a folder structure which is consistent with the folder structure of the one or more servers, allowing a selection of the training images to be used by the training application for retraining or creating/building new defect classification models.
  • the image folder structure, or tree, is thus preferably scanned periodically, as per step 102.
  • the folder structure 408 presented to the user via the GUI may be dynamically updated by the system, so as to correspond to the most up-to-date folder structure and content of server 604. Since the proposed system and method make it possible to build new classification models and/or adjust existing ones while the inspection system is in operation (inline), the folder structure presented through the GUI preferably reflects the current state of the image storage servers, since new inspection images may be continuously captured while the inspection images are selected through the GUI for the training of the classification model(s).
  • folders containing inspection images can be selected for retraining and/or creating new classification models, and the selection captured through the GUI is used by the system to fetch and load the images to be used for the training steps.
  • the system thus receives, via the GUI, a selection of folder(s) containing the inspection images to be used for training, as illustrated in FIG. 4.
  • the selection can include higher-level folder(s), such as the “Parts” folder, as a way of selecting all lower-level folders, i.e. all “Lots” folders.
  • Training of the classification model can be initiated with an input made through the GUI, such as via button 404 in FIG.4, corresponding to step 112 of FIG. 1.
  • the GUI also allows controlling the training process, by stopping or resuming the training if needed (button 406 in FIG. 4).
  • the total number of selected inspection images is preferably calculated, or counted, and displayed on the GUI (see Fig. 4, pane 402).
  • a minimal number of training inspection images is required.
  • the system also preferably calculates the number of selected inspection images for each defect type, for the same reason, i.e. to ensure that a sufficient number of images per defect type is available.
  • the pre-processing module thus validates whether the overall number of inspection images is sufficient to initiate the training of the binary classifier, which will be used to detect pass or fail (i.e. non-defective or defective) parts.
  • step 110 is thus preferably performed prior to step 112.
  • the system may calculate the total number of inspection images selected and additionally provide the total number of selected inspection images for each defect type, displaying the results to the user through the GUI.
  • step 112 may be triggered by the user via an input made through the GUI, such as using button 404, as shown in FIG. 4.
  • a message may be displayed to the user, requesting a new selection of inspection images.
  • the selected inspection images are pre-processed, preferably before being transferred and stored on the training server 602, identified in FIG.6.
  • Image pre-processing may include extracting relevant information from the inspection images and transforming the images according to techniques well known in the art, such as image cropping, contrast adjustment, histogram equalization, binarization, image normalization and/or standardization.
  • different servers are used, such as server(s) 604 associated with the inspection system, and training server 602, associated with the training application. The inspection images selected for training are thus copied and transferred from server 604 to server 602.
  • the system stores inspection image information and performs checksum verification in the database associated with or part of the training server 602. This verification step avoids duplicating images on the training server, whereby the uniqueness of each image is verified before copying a new image into the database. Inspection images which are not already stored on the training server, as identified at step 120, are copied to the training server, as per step 118.
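  • A minimal sketch of such a checksum-based uniqueness check is given below (the use of SHA-256, the file pattern and the helper names are assumptions; the patent only specifies that a checksum verification is performed):
```python
import hashlib
from pathlib import Path

def copy_new_images(source_dir, known_checksums, copy_fn):
    """Copy only inspection images whose checksum is not already stored on the training server."""
    for image_path in sorted(Path(source_dir).rglob("*.png")):
        digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
        if digest in known_checksums:
            continue                      # image already present in the training database
        copy_fn(image_path)               # e.g. copy the file to the training server
        known_checksums.add(digest)       # record the new checksum
```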
  • the inspection images are divided, or split, into at least a training dataset and a validation dataset.
  • the system is thus configured to automatically split, for each of the first and the second subsets of images, the inspection images into at least a training dataset and a validation dataset, prior to training the binary and multi-class classifier.
  • the first subset will include images for training the binary classifier (i.e. the first subset includes images labelled as defective and non-defective), and the second subset will include the images of the first subset labelled as defective, and further labelled with a defect type.
  • Images in the training dataset will be used during training to adjust or change the weights of the nodes of the different layers of the neural network architecture, using the optimizer, to reduce or minimize the output of the loss function.
  • the validation dataset will then be used to measure the accuracy of the model using the adjusted weights determined during the training of the binary and multi-class classifiers.
  • the training and validation datasets will be used in alternation to train and adjust the weights of the nodes of the classifiers. More preferably, the inspection images are split into three datasets: the training and validation datasets, as mentioned previously, and a third “test” or “final validation” dataset, which is used to validate the final state of the classification model once trained. In other words, the test dataset is used by the system to confirm the final weights of the neural network architecture for the binary and multi-class classifiers, once trained.
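  • A minimal sketch of such a three-way split, assuming the labelled images are wrapped in a PyTorch dataset (the split ratios shown are illustrative only):
```python
import torch
from torch.utils.data import random_split

def split_inspection_images(dataset, train_frac=0.7, val_frac=0.2, seed=42):
    """Split a labelled image dataset into training, validation and final test subsets."""
    n_total = len(dataset)
    n_train = int(n_total * train_frac)
    n_val = int(n_total * val_frac)
    n_test = n_total - n_train - n_val               # remainder forms the "test" dataset
    generator = torch.Generator().manual_seed(seed)  # reproducible split
    return random_split(dataset, [n_train, n_val, n_test], generator=generator)

# train_set, val_set, test_set = split_inspection_images(labelled_images)
```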
  • FIG.2 schematically illustrates steps of the training process performed by the training application (or training module), for automatically building defect classification- models, each model being adapted to specific manufacturing processes, part models, or client requirements.
  • the training application is a software program, stored on server 602 (identified in Fig. 6), comprising different sub modules.
  • the training module is governed by a state machine which verifies whether or not caller actions are allowed at a given moment, including actions such as abort, initialize, train, pause, succeeded or failed/exception.
  • the training module includes a training-API, including programming functions to manage a training session.
  • the functions of the training-API can comprise an initialization function, a resume function, a start function, a pause function, an abort function, an evaluation function, and getStatus, getPerformance and getTrainingPerformance functions, as examples only.
  • the initialization function prepares each training cycle by verifying the content of the first and second datasets, including for example confirming that all classes have enough sample images, i.e. that the number of images in each class is above a given threshold and that the exploration, training, validation and test image subsets, for the training of each classifier, have a predetermined size.
  • the initialization module also initializes the defect classification model to be built with parameters of the previous model built or with predefined or random weights for each classifier.
  • a configuration file comprising initial parameters of the first and second combinations of neural network architecture and optimizer is thus loaded when starting the training.
  • the configuration file can take different formats, such as, for example, a .JSON format.
  • the initial configuration file can include fields such as the classifier model to be loaded during training, the optimizer to be loaded during training, including the learning rate decay factor to be used, the data augmentation algorithms to be used in case of imbalanced class samples, and the number of epochs for which a stable accuracy must be maintained, as examples only.
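  • Purely as an illustration, an initial configuration file of this kind might be written as follows; the field names and values are hypothetical and do not reflect an actual schema from the patent:
```python
import json

# Hypothetical initial training configuration (illustrative field names and values only).
initial_config = {
    "binary_classifier": {"architecture": "ResNet34", "optimizer": "Adam"},
    "multiclass_classifier": {"architecture": "ResNet152", "optimizer": "SGD"},
    "loss_function": "cross_entropy",
    "lr_scheduler": {"type": "decay", "decay_factor": 0.1},
    "augmentation": ["rotate", "flip"],   # applied when defect classes are imbalanced
    "stable_accuracy_epochs": 5,          # epochs over which accuracy must remain stable
}

with open("training_config.json", "w") as f:
    json.dump(initial_config, f, indent=2)
```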
  • the start function will start the training process, and the training operations will be started using the parameters of the different fields part of the initial configuration file.
  • the evaluate function will evaluate a trained defect classification model against an evaluation dataset of inspection images, and will return an average accuracy, expressed as a percentage, i.e. the percentage of the predictions which are correct.
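  • A sketch of such an evaluation routine, assuming PyTorch and a DataLoader over the evaluation images (names are illustrative):
```python
import torch

def evaluate(model, eval_loader):
    """Return the average accuracy of a trained model on an evaluation dataset, as a percentage."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in eval_loader:
            predictions = model(images).argmax(dim=1)
            correct += (predictions == labels).sum().item()
            total += labels.numel()
    return 100.0 * correct / total   # percentage of correct predictions
```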
  • the training application, which can be called by the inspection system via a training-API, will thus initially load, or comprise, a binary classifier that can be trained to determine whether the inspection images correspond to non-defective or defective parts (represented by steps 208 and 214 on the left side of FIG. 2) and a multi-class classifier (which may also be referred to as a “defect type classifier”, and represented by steps 210 and 216 on the right side of FIG. 2), which can be trained to determine the defect types in the inspection images that have been determined as defective by the binary classifier.
  • By “training” it is meant that the weights of the nodes of the different layers forming the classifier (binary or multi-class) are iteratively adjusted, to maximize the accuracy of the classifier’s prediction, for a given number of trials (or epochs).
  • the optimizer selected in combination with the neural network architecture is used during training for iteratively adjusting the weights of the nodes.
  • the weights associated with the plurality of nodes of the classifiers are set and define the classification model that can be used for automated part inspection.
  • the inspection images selected for creating new classification models and/or adjusting existing models therefore form a first subset (split into training, validation and test subsets, and optionally - exploration) to train the binary classifier, and the inspection images that have been determined as defective form a second subset of inspection images, used for training the multi-class classifier.
  • the proposed system and method are especially advantageous in that different combinations of neural network algorithms and of optimizer algorithms can be used for the binary classifier and for the multi-class classifier. What’s more, determination of the best combination of neural network architecture and optimizer for the binary classifier and for the multi-class classifier can be made through an exploration phase, as will be explained in more detail below.
  • the binary classifier may thus use a first combination of neural network (NN) architecture and an optimizer, while the multi-class classifier may use a second combination of neural network architecture and optimizer.
  • the binary classifier can be another type of classifier, such as a decision tree, a support vector machine or a naïve Bayes classifier.
  • the first and second combinations may further include a selection of loss function algorithms and associated learning rate factors. The first and second combinations may or may not be the same, but experiments have shown that, in general, better results are obtained when the first and second combinations of neural network architecture and optimizer differ for the binary and multi-class classifiers.
  • the first combination of neural network architecture and optimizer for the binary classifier could be the ResNet34 architecture and the Adam optimizer, while the neural network and optimizer for the multi-class classifier could be the ResNet152 architecture and the SGD optimizer.
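  • A minimal sketch of that example combination using torchvision models (the two-node output layer, the number of defect types and the learning rates are assumptions made for illustration):
```python
import torch
import torchvision.models as models

# Binary classifier: ResNet34 backbone with a 2-class head, trained with the Adam optimizer.
binary_net = models.resnet34(weights=None)
binary_net.fc = torch.nn.Linear(binary_net.fc.in_features, 2)
binary_optimizer = torch.optim.Adam(binary_net.parameters(), lr=1e-3)

# Multi-class classifier: ResNet152 backbone with one output per defect type, trained with SGD.
num_defect_types = 10   # illustrative value
multiclass_net = models.resnet152(weights=None)
multiclass_net.fc = torch.nn.Linear(multiclass_net.fc.in_features, num_defect_types)
multiclass_optimizer = torch.optim.SGD(multiclass_net.parameters(), lr=0.01, momentum=0.9)
```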
  • data augmentation may be performed on the inspection images associated with a same defect type, for which the number of associated inspection images is either insufficient for training the defect classification model, or too small compared to other defect classes. This step serves to balance the number of images of each of the defect types, to improve training accuracy and avoid the bias that would otherwise be created for defect types having a much greater number of inspection images compared to other types of defects.
  • Transformations can be spatial transformations (such as rotating or flipping the images), but other types of transformation are possible, including changing the Red Green Blue (RGB) values of pixels, as an example only.
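  • A sketch of spatial and colour augmentations of the kind mentioned above, using torchvision transforms (the specific transforms and parameters are illustrative only):
```python
from torchvision import transforms

# Illustrative augmentation pipeline for under-represented defect classes.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # perturb RGB values
])

# augmented_image = augment(original_image)   # applied repeatedly to grow a depleted class
```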
  • the training API dynamically loads an initial configuration file (or training initial settings), which may contain various training parameters, such as for example the first combination of neural network architecture and optimizer to be used for training the binary classifier (step 214) and the second combination of neural network architecture and optimizer to be used for training the multi-class classifier (step 216).
  • the configuration file and/or training settings may further contain an indication of the loss function algorithms to be used for the binary classifier and multi-class classifier training (which may be different or not for the two classifiers), and an indication of the learning rate scheduler algorithms (and factors) to be used for the training of the binary classifier and of the multi-class classifier (which may also be different or not for the two classifiers).
  • the different neural network architectures that can be used by the binary and/or multi-class classifiers can include: ResNet50, ResNet101, ResNet152, WideResNet50, WideResNet101, InceptionV3 and InceptionResNet.
  • optimizer algorithms that can be used for training the binary and/or multi-class classifiers can include the Adam optimizer and the Stochastic Gradient Descent (SGD) optimizer.
  • loss function algorithms can include the cross-entropy loss function and the NLL (negative log-likelihood) loss function.
  • examples of learning rate scheduler algorithms can include the decay scheduler and the cyclical rate scheduler.
  • the initial configuration file can also optionally include weights for each node of the classifiers.
  • the above examples of neural network architectures, optimizers, loss functions and learning rate schedulers are non-exhaustive, and the present invention may be used with different types of architectures, optimizers, loss functions and learning rate schedulers.
  • the first and second combination of training parameters and settings may also include other types of parameters, such as the number of epochs, in addition to those mentioned above.
  • the configuration file (or training settings) can be updated to add or remove any number of neural network architectures, optimizers, loss functions, and learning rate schedulers.
  • the proposed method and system are also advantageous in that, in possible implementations, different types of neural network architectures and optimizers can be tried or explored before fully training the binary and multi-class classifiers, so as to select the best or most accurate combination of architecture and optimizer for a given product or manufactured part type.
  • the proposed method and system may include a step where different combinations of neural networks and optimizers (and also possibly loss functions and learning rate schedulers) are tried and explored, for training of the binary classifier and also for training of the multi-class classifier, in order to select the “best” or “optimal” combinations to fully train the binary and multi-class classifiers, i.e. the combinations which provide the highest accuracy.
  • Still referring to FIG. 2, the exploration or trial step 206 is performed prior to the training steps 212.
  • the system tests (or explores/tries) different combinations of the neural networks and optimizers, and also preferably of loss functions and learning rate schedulers, on a reduced subset of the training inspection images, for a given number of epochs, for both the binary and multi-class classifiers.
  • a first combination with the best classification accuracy is selected for the binary classifier.
  • the objective of the initial steps of exploring different combinations of neural networks and optimizers is to identify and select the pair of neural network and optimizer that provides the highest accuracy for a given subset of inspection images. More specifically, exploring different combinations of neural networks and optimizers can comprise launching several shorter training sessions, using different pairs of neural networks and optimizers, and recording the performance (i.e. accuracy) for each pair tried on the reduced dataset (i.e. exploration dataset). For example, before building the defect classification model with a given combination of binary classifier and optimizer, n different pairs of neural networks and optimizers are tried, such as ResNet34 as the neural network and Adam as the optimizer, InceptionResNet as the neural network and Gradient Descent as the optimizer, or ResNet34 as the neural network and SGD as the optimizer. The accuracy for each pair is determined using a reduced validation image dataset, and the pair having the highest accuracy is selected and used for the training step.
  • loss function and learning rate scheduler factors can be tried or explored, when exploring the pairs of neural network and optimizers.
  • The loss function and learning rate scheduler which provide, together with the NN architecture and the optimizer, the best accuracy (expressed as a percentage of correct predictions over the total number of predictions made) in distinguishing non-defective from defective parts for the given number of epochs, on an exploration subset of images, are thus identified and retained for the training of the binary classifier.
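  • A compressed sketch of such an exploration loop is given below, assuming PyTorch, a small set of candidate architectures and optimizers, and a hypothetical `train_and_score` helper that trains a candidate pair for a few epochs on the exploration subset and returns its validation accuracy:
```python
import itertools
import torch
import torchvision.models as models

# Candidate components; lists are illustrative and can be extended.
architectures = {"ResNet34": models.resnet34, "ResNet152": models.resnet152}
optimizers = {"Adam": torch.optim.Adam, "SGD": torch.optim.SGD}

def explore(train_and_score, num_classes, exploration_epochs=3):
    """Try every architecture/optimizer pair for a few epochs and keep the most accurate pair."""
    best_pair, best_accuracy = None, -1.0
    for (arch_name, arch_fn), (opt_name, opt_cls) in itertools.product(
            architectures.items(), optimizers.items()):
        net = arch_fn(weights=None)
        net.fc = torch.nn.Linear(net.fc.in_features, num_classes)   # classifier head
        optimizer = opt_cls(net.parameters(), lr=1e-3)
        accuracy = train_and_score(net, optimizer, epochs=exploration_epochs)
        if accuracy > best_accuracy:
            best_pair, best_accuracy = (arch_name, opt_name), accuracy
    return best_pair, best_accuracy     # e.g. (("ResNet34", "Adam"), 97.2)
```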
  • at step 210, different combinations of neural networks and optimizers (and possibly also of loss functions and learning rate schedulers) are tried (or explored) on the reduced subset of training inspection images, in order to determine the best combination to be used to fully train the multi-class classifier.
  • the exploration is also performed for a given number of epochs.
  • the combination with the best classification accuracy is selected for the multi-class classifier.
  • the epoch number may be a parameter of the training system.
  • the exploration training phase can automatically stop once the binary and the multi-class classifiers reach a predetermined number of epochs for each combination.
  • the exploration training may be controlled by the user via the GUI, for instance stopped, resumed, or aborted, at any time during the exploration phase.
  • the training exploration can be bypassed by loading a configuration file comprising the initial parameters associated with the binary classifier, and those associated with the multi-class classifier. In that case, steps 206, 208 and 210 are bypassed.
  • the binary classifier training starts at step 214, using, in one possible embodiment, the combination of neural network and optimizer algorithms determined during the exploration phase at step 208.
  • the inspection images forming the first image subset are used for training the binary classifier. All of the selected images, or only a portion of them, can form the first subset.
  • the multi-class classifier training starts at step 216, preferably using, as for the binary classifier, the combination of neural network and optimizer determined to be the most efficient and/or accurate during the exploration phase, at step 210.
  • the subset of the inspection images used for training the multi-class classifier consists of a subset of the first subset, i.e. this second subset comprises the inspection images classified as “defective” by the binary classifier.
  • the training and validation image datasets are used to iteratively adjust the weights of the nodes of the binary classifier and of the multi-class classifier, based on parameters of the optimizers, for example after each epoch.
  • Adjusting the neural network parameters can include the automatic adjustment of the weights applied to the nodes of the different layers of the neural network, until the differences between the actual and predicted outcomes of each training pass are satisfactorily reduced, using the selected optimizer, loss function and learning rate factor (a minimal sketch of such an epoch loop is given after this list).
  • the validation subset is thus used as representative of the actual outcome of the prediction on the part state (non-defective or defective, and type of defect).
  • Adjusting the optimizer can include iteratively adjusting its hyperparameters, which are used to control the learning process. It will be noted that the training process is completely automated, and is performed autonomously, by providing an initial configuration file to the training-API.
  • FIG. 5 shows a possible GUI where the status of the building process of the defect classification model can be monitored (window 500), by indicating the current iteration and progress of the training (see lower section 506 of window 500).
  • the GUI also provides the possibility to pause (502) or abort (504) the training process, if needed.
  • the number of inspection images used to train the binary and multi-class classifiers at each training iteration is dynamically adapted as a function of the available physical resources of the processor performing the training. More specifically, the batch size, defined as the number of images passing through the binary and multi-class classifiers at each iteration, can be dynamically modified, as indicated by step 218. This step advantageously makes it possible to adjust the training process, in real time, as a function of the physical/processing resources available for running the training.
  • the batch sizes may have pre-defined values, and different batch sizes can be tried until the training system detects a warning or an indication (such as a memory error) that the processing resources (such as the GPU) are fully utilized. In this case, the next lower batch size is tried, until an acceptable batch size that can be handled by the processor (typically the GPU) is reached (a minimal sketch of this batch-size fallback is given after this list).
  • the subset of inspection images submitted to the classifiers is fed in subsequent batches, and the number of inspection images in each batch is dynamically adjusted as a function of the availability of processing resources (i.e. available processing capacity or processor availability).
  • This feature or option of the training system eliminates the need for prior knowledge of the hardware specifications, or of the training model's requirements or parameter size. It also makes the training system highly portable, as different manufacturing plants may have servers/processing devices with different requirements and/or specifications.
  • the configuration file can comprise parameters such as the neural network architecture used for the binary classifier (for example: ResNet34), the source model (including weight settings) for the binary classifier, the neural network architecture used for the multi-class classifier (for example: InceptionResNet), the source model (including weight settings) for the multi-class classifier, the optimizer for the binary and for the multi-class classifier (e.g. Adam), the learning rate (e.g. 0.03) and the learning rate decay factor (e.g. 1.0) (an illustrative example of such a configuration file is given after this list).
  • the automatic defect classification model is thereby usable by the automated inspection system for detecting defective parts and for identifying defect types on the manufactured parts being inspected.
  • the post-processing modules illustrated in FIG. 3 comprise the different modules involved in storing the defect classification models once built (step 304), in updating the inspection system's GUI (step 306) with the training results, and/or in updating the training and/or inspection system's databases (step 308) with the newly created models or the updated existing models.
  • the test dataset has been used to demonstrate the accuracy of the first and second combinations of optimizer and binary/multi-class classifier.
  • the resulting defect classification model is generated and comprises, in a configuration file, the type and parameters of the binary classifier and of the multi-class classifier.
  • a defect classification model for a new semiconductor part may include, in the form of a configuration file, the first combination of neural network architecture, optimizer, loss function and learning rate scheduler to be used for the binary classifier, as well as the relevant parameter settings for each of those algorithms, and the second combination of neural network architecture, optimizer, loss function and learning rate scheduler to be used for the multi-class classifier, as well as the relevant parameter settings for each of those algorithms.
  • the defect classification model may be stored in a database located on the training server and/or on the inspection system server.
  • the results of the training process, such as the first and second combinations selected and the corresponding first and second accuracies, may be displayed on the GUI.
  • the results may be exported into performance reports (step 312).
  • the automatic defect classification application loads the appropriate defect classification model according to the part type selected by an operator, through the GUI.
  • each part type can be associated with its own defect classification model, each model having been tuned and trained to optimize its accuracy for a given part type or client requirements.
  • the automatic defect classification can advantageously detect new defects that are captured by the optical system, such as by classifying new/unknown defects into an “unknown” category or label. If the number of “unknown” defects for a given lot is above a given threshold, the application can be configured to generate a warning that the classification model needs to be updated, and, in some possible implementations, the proposed system and method can update (or retrain) the classification model automatically (a minimal sketch of this check is given after this list).
  • the proposed method and system for generating automatic defect classification models, via machine learning, for use in automated inspection systems can advantageously be deployed on a customer site's server(s) (where a “customer” is typically a manufacturing company), without needing to upload sensitive data to cloud-based servers.
  • the proposed method and system provide users with control and allow them to create defect classification models with no prior AI knowledge.
  • the proposed method and system can also work directly with inspection images, without having to rely on complex relational datasets.
  • the training application can be extended by adding new neural network architectures, new optimizers, new loss functions and new learning rate schedulers.
  • the training application includes a resize layer function (resizeLayer) which ensures that the number of outputs of the newly added neural network architecture matches the number of outputs passed as arguments.
  • the training application also includes a forward function which pushes the tensors passed as arguments to the input layer of the model and collects the output (a minimal sketch of these two hooks is given after this list).
  • Table 1 contains the different training parameters used in each combination, the number of epochs for which the tests were run, and the accuracy results in classifying the inspection images, wherein the bold combinations are the ones selected by the system as the best combinations with respect to classification accuracy.
  • the original inspection image dataset contained 159,087 images, split into a first dataset of 144,255 images (itself further divided into a Training dataset with 80% of the images and a Validation dataset with 20%) and a second dataset of 14,832 images constituting the Test dataset.
  • the inspection images were associated with a total of 23 classes, comprising the defect types and the Accept type.
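
As referenced above, a minimal sketch of the exploration phase (steps 206 and 210) follows. It is an assumption-based illustration written with PyTorch/torchvision, not the patent's actual training API: the helper names `build_model`, `run_epoch` and `explore` are hypothetical, and ResNet34/ResNet50 are used as candidate backbones because InceptionResNet is not part of torchvision.

```python
# Hedged sketch of the exploration phase: each candidate (architecture,
# optimizer) pair is trained for a few epochs on a reduced "exploration"
# subset, its accuracy is measured on a reduced validation subset, and the
# most accurate pair is retained for full training.
import itertools

import torch
import torchvision.models as models


def build_model(arch_name: str, num_outputs: int) -> torch.nn.Module:
    """Instantiate a backbone and resize its last layer to `num_outputs` classes."""
    backbones = {"resnet34": models.resnet34, "resnet50": models.resnet50}
    model = backbones[arch_name](weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, num_outputs)
    return model


def run_epoch(model, loader, optimizer=None, device="cpu"):
    """One pass over `loader`; updates the weights if an optimizer is given. Returns accuracy."""
    loss_fn = torch.nn.CrossEntropyLoss()
    model.to(device).train(optimizer is not None)
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        with torch.set_grad_enabled(optimizer is not None):
            logits = model(images)
            if optimizer is not None:
                optimizer.zero_grad()
                loss_fn(logits, labels).backward()
                optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / max(total, 1)


def explore(train_loader, val_loader, num_outputs, epochs=3):
    """Try every (architecture, optimizer) pair briefly and return the most accurate one."""
    architectures = ["resnet34", "resnet50"]
    optimizers = [("adam", torch.optim.Adam), ("sgd", torch.optim.SGD)]
    best_pair, best_acc = None, -1.0
    for arch, (opt_name, opt_cls) in itertools.product(architectures, optimizers):
        model = build_model(arch, num_outputs)
        optimizer = opt_cls(model.parameters(), lr=0.03)
        for _ in range(epochs):                      # short training session only
            run_epoch(model, train_loader, optimizer)
        acc = run_epoch(model, val_loader)           # validation accuracy, no weight updates
        if acc > best_acc:
            best_pair, best_acc = (arch, opt_name), acc
    return best_pair, best_acc
```

For the binary classifier the same loop would run with num_outputs=2; for the multi-class classifier, with one output per defect class.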
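The next sketch outlines the kind of fully automated epoch loop implied above: the optimizer adjusts the node weights from the training subset, the validation subset is used to compare predicted and actual part states, and the learning rate scheduler applies the configured decay factor once per epoch. The optimizer, loss function, scheduler type and hyperparameter values shown are only illustrative assumptions, reusing the example values (learning rate 0.03, decay factor 1.0) given earlier.

```python
# Hedged sketch of the per-epoch training/validation loop.
import torch


def train_classifier(model, train_loader, val_loader, epochs=20,
                     lr=0.03, lr_decay=1.0, device="cpu"):
    """Train `model` and return the validation accuracy recorded after each epoch."""
    loss_fn = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=lr_decay)
    model.to(device)
    history = []
    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)   # gap between predicted and actual outcome
            loss.backward()                         # backpropagate the prediction error
            optimizer.step()                        # adjust the weights of the network nodes
        scheduler.step()                            # apply the learning rate decay factor

        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                images, labels = images.to(device), labels.to(device)
                correct += (model(images).argmax(dim=1) == labels).sum().item()
                total += labels.numel()
        history.append(correct / max(total, 1))     # validation accuracy for this epoch
    return history
```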
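A minimal sketch of the dynamic batch-size adjustment of step 218 follows, assuming a PyTorch GPU setup. The list of candidate sizes and the helper `run_training_iteration` (standing in for one pass of the real training code) are assumptions; the text only specifies that the next lower batch size is tried when a memory error is detected.

```python
# Hedged sketch of the batch-size fallback: pre-defined batch sizes are probed
# from largest to smallest, and the first one that does not exhaust GPU memory
# is kept for the rest of the training.
import torch
from torch.utils.data import DataLoader


def pick_batch_size(dataset, run_training_iteration, candidates=(256, 128, 64, 32, 16)):
    """Return the largest candidate batch size the available processor can handle."""
    for batch_size in candidates:                        # largest first
        loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
        try:
            run_training_iteration(next(iter(loader)))   # probe a single batch
            return batch_size                            # it fits: keep this size
        except RuntimeError as err:
            if "out of memory" not in str(err).lower():  # only swallow memory errors
                raise
            torch.cuda.empty_cache()                     # free memory, try the next lower size
    raise RuntimeError("no candidate batch size fits the available processing resources")
```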
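The configuration file itself is not reproduced in the text, so the following is only an illustrative rendering of the parameters it is said to carry; the JSON layout, field names and file names are assumptions.

```python
# Hedged, illustrative example of a training configuration file carrying the
# parameters listed above (architectures, source models, optimizer, learning
# rate and learning rate decay factor).
import json

config = {
    "binary_classifier": {
        "architecture": "ResNet34",
        "source_model": "binary_source_weights.pth",      # initial weight settings
        "optimizer": "Adam",
    },
    "multiclass_classifier": {
        "architecture": "InceptionResNet",
        "source_model": "multiclass_source_weights.pth",  # initial weight settings
        "optimizer": "Adam",
    },
    "learning_rate": 0.03,
    "learning_rate_decay_factor": 1.0,
}

with open("training_config.json", "w") as fh:
    json.dump(config, fh, indent=2)
```

Loading such a file at the start of a run, when it already contains the selected combinations, is what allows the exploration steps 206, 208 and 210 to be bypassed.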
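The “unknown defect” monitoring described above amounts to a threshold check on the predictions for a lot; the sketch below assumes a ratio-based threshold and a plain warning message, neither of which is specified in the text.

```python
# Hedged sketch of the "unknown" defect monitoring: if too many predictions for
# a lot fall into the "unknown" category, flag the model for retraining.
def unknown_rate_exceeded(predicted_labels, threshold=0.05):
    """Return True when the defect classification model likely needs updating."""
    if not predicted_labels:
        return False
    unknown = sum(1 for label in predicted_labels if label == "unknown")
    rate = unknown / len(predicted_labels)
    if rate > threshold:
        print(f"warning: {rate:.1%} of defects in this lot are unknown; "
              "consider updating or retraining the classification model")
        return True
    return False
```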
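Finally, the two extension hooks named above (resizeLayer and forward) can be pictured as follows; their actual signatures are not given in the text, so the ones below are assumptions built around torchvision-style backbones.

```python
# Hedged sketch of the two extension hooks of the training application:
# resizeLayer makes a newly added backbone produce the requested number of
# outputs, and forward pushes input tensors through the model and collects the
# output. Signatures and the usage example are assumptions.
import torch
import torchvision.models as models


def resizeLayer(model: torch.nn.Module, num_outputs: int) -> torch.nn.Module:
    """Resize the model's final layer so it produces `num_outputs` outputs."""
    if hasattr(model, "fc"):                                   # ResNet-style backbones
        model.fc = torch.nn.Linear(model.fc.in_features, num_outputs)
    elif hasattr(model, "classifier"):                         # VGG-style backbones
        last = model.classifier[-1]
        model.classifier[-1] = torch.nn.Linear(last.in_features, num_outputs)
    else:
        raise TypeError("unsupported architecture: no final layer to resize")
    return model


def forward(model: torch.nn.Module, batch: torch.Tensor) -> torch.Tensor:
    """Push the tensors passed as arguments through the model and collect the output."""
    model.eval()
    with torch.no_grad():
        return model(batch)


# usage: a ResNet34 backbone resized to the 23 classes of the example dataset
model = resizeLayer(models.resnet34(weights=None), num_outputs=23)
logits = forward(model, torch.randn(4, 3, 224, 224))           # batch of 4 dummy images
```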

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Electromagnetism (AREA)
  • Toxicology (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a computer-implemented method, an inspection system and a non-transitory storage medium for automatically generating defect classification models, by means of machine learning, for the inspection of semiconductor components and/or printed circuit board (PCB) parts. The defect classification models are automatically built from a first combination of a trained neural network binary classifier and an optimizer, and from a second combination of a trained neural network multi-class classifier and an optimizer.
PCT/CA2021/050672 2020-05-22 2021-05-17 Procédé et système d'entraînement d'équipement d'inspection pour la classification automatique de défauts WO2021232149A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202180036832.7A CN115668286A (zh) 2020-05-22 2021-05-17 训练自动缺陷分类的检测仪器的方法与系统
JP2023515224A JP2023528688A (ja) 2020-05-22 2021-05-17 自動欠陥分類検査装置を訓練する方法及びシステム
CA3166581A CA3166581A1 (fr) 2020-05-22 2021-05-17 Procede et systeme d'entrainement d'equipement d'inspection pour la classification automatique de defauts

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063028800P 2020-05-22 2020-05-22
US63/028,800 2020-05-22

Publications (1)

Publication Number Publication Date
WO2021232149A1 true WO2021232149A1 (fr) 2021-11-25

Family

ID=78708867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2021/050672 WO2021232149A1 (fr) 2020-05-22 2021-05-17 Procédé et système d'entraînement d'équipement d'inspection pour la classification automatique de défauts

Country Status (5)

Country Link
JP (1) JP2023528688A (fr)
CN (1) CN115668286A (fr)
CA (1) CA3166581A1 (fr)
TW (1) TW202203152A (fr)
WO (1) WO2021232149A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023146946A1 (fr) * 2022-01-27 2023-08-03 Te Connectivity Solutions Gmbh Système d'inspection visuelel pour détection de défaut
TWI806500B (zh) * 2022-03-18 2023-06-21 廣達電腦股份有限公司 影像分類裝置和方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160132787A1 (en) * 2014-11-11 2016-05-12 Massachusetts Institute Of Technology Distributed, multi-model, self-learning platform for machine learning
US20180341248A1 (en) * 2017-05-24 2018-11-29 Relativity Space, Inc. Real-time adaptive control of additive manufacturing processes using machine learning
WO2019058300A1 (fr) * 2017-09-21 2019-03-28 International Business Machines Corporation Augmentation de données destinée à des tâches de classification d'images
US20190188840A1 (en) * 2017-12-19 2019-06-20 Samsung Electronics Co., Ltd. Semiconductor defect classification device, method for classifying defect of semiconductor, and semiconductor defect classification system
US20190266513A1 (en) * 2018-02-28 2019-08-29 Google Llc Constrained Classification and Ranking via Quantiles
US20190370955A1 (en) * 2018-06-05 2019-12-05 Kla-Tencor Corporation Active learning for defect classifier training
CN109961142A (zh) * 2019-03-07 2019-07-02 腾讯科技(深圳)有限公司 一种基于元学习的神经网络优化方法及装置

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511503A (zh) * 2021-12-30 2022-05-17 广西慧云信息技术有限公司 一种自适应板材厚度的刨花板表面缺陷检测方法
CN114511503B (zh) * 2021-12-30 2024-05-17 广西慧云信息技术有限公司 一种自适应板材厚度的刨花板表面缺陷检测方法
CN115830403A (zh) * 2023-02-22 2023-03-21 厦门微亚智能科技有限公司 一种基于深度学习的自动缺陷分类系统及方法
CN116245846A (zh) * 2023-03-08 2023-06-09 华院计算技术(上海)股份有限公司 带钢的缺陷检测方法及装置、存储介质、计算设备
CN116245846B (zh) * 2023-03-08 2023-11-21 华院计算技术(上海)股份有限公司 带钢的缺陷检测方法及装置、存储介质、计算设备

Also Published As

Publication number Publication date
TW202203152A (zh) 2022-01-16
JP2023528688A (ja) 2023-07-05
CN115668286A (zh) 2023-01-31
CA3166581A1 (fr) 2021-11-25

Similar Documents

Publication Publication Date Title
WO2021232149A1 (fr) Procédé et système d'entraînement d'équipement d'inspection pour la classification automatique de défauts
EP3499418B1 (fr) Appareil de traitement d'informations, système d'identification, procédé de pose et programme
US10964004B2 (en) Automated optical inspection method using deep learning and apparatus, computer program for performing the method, computer-readable storage medium storing the computer program, and deep learning system thereof
US10650508B2 (en) Automatic defect classification without sampling and feature selection
US10679333B2 (en) Defect detection, classification, and process window control using scanning electron microscope metrology
TWI708301B (zh) 透過異常值偵測之特徵選擇及自動化製程窗監控
CN109598698B (zh) 用于对多个项进行分类的系统、方法和非暂时性计算机可读取介质
US20220254005A1 (en) Yarn quality control
US20220374720A1 (en) Systems and methods for sample generation for identifying manufacturing defects
US20200005084A1 (en) Training method of, and inspection system based on, iterative deep learning system
JP7150918B2 (ja) 試料の検査のためのアルゴリズムモジュールの自動選択
US10656518B2 (en) Automatic inline detection and wafer disposition system and method for automatic inline detection and wafer disposition
EP4285337A1 (fr) Système et procédé de contrôle qualité de fabrication utilisant une inspection visuelle automatisée
TWI791930B (zh) 用於分類半導體樣本中的缺陷的系統、方法及電腦可讀取媒體
Thielen et al. A machine learning based approach to detect false calls in SMT manufacturing
US11967060B2 (en) Wafer level spatial signature grouping using transfer learning
US11639906B2 (en) Method and system for virtually executing an operation of an energy dispersive X-ray spectrometry (EDS) system in real-time production line
JP2019113914A (ja) データ識別装置およびデータ識別方法
CN112840352A (zh) 配置图像评估装置的方法和图像评估方法及图像评估装置
US20240193758A1 (en) Apparatus and method with image generation
US20210306547A1 (en) System and edge device
Deshmukh et al. Automatic Inspection System for Segregation of Defective Parts of Heavy Vehicles
Tseng et al. Author's Accepted Manuscript

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21808810

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3166581

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2023515224

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21808810

Country of ref document: EP

Kind code of ref document: A1