CN118266014A - Machine learning system and method for object-specific recognition

Machine learning system and method for object-specific recognition

Info

Publication number
CN118266014A
Authority
CN
China
Prior art keywords
image
image analysis
reusable
digital
type
Prior art date
Legal status
Pending
Application number
CN202280075868.0A
Other languages
Chinese (zh)
Inventor
C. Pawlowicz
M. Green
B. Machado Trindade
S. Perera
Z. Yu
F. Ren
Current Assignee
TechInsights Inc
Original Assignee
TechInsights Inc
Priority date
Filing date
Publication date
Application filed by TechInsights Inc
Priority claimed from PCT/CA2022/051676 (published as WO2023082018A1)
Publication of CN118266014A


Abstract

Various embodiments of machine learning systems and methods for object-specific recognition are described.

Description

Machine learning system and method for object-specific recognition
RELATED APPLICATIONS
The present application claims priority from U.S. provisional patent application No. 63/279,311, filed on November 15, 2021, U.S. provisional patent application No. 63/282,102, filed on November 22, 2021, and U.S. provisional patent application No. 63/308,869, filed in 2022, each entitled "Machine learning system and method for object-specific recognition," the entire disclosures of which are incorporated herein by reference.
Technical Field
The present disclosure relates to machine learning, and in particular to machine learning systems and methods for object-specific recognition.
Background
Semiconductor analysis is important for developing deep knowledge of technical competitiveness and Intellectual Property (IP) infringement. An important aspect of semiconductor analysis is the extraction of Integrated Circuit (IC) features (e.g., the segmentation of wires, the detection of vias, the identification of diffusion or polysilicon features, etc.) from electron microscope images. However, automatic extraction of these features faces challenges of low segmentation accuracy due to noisy images, contamination, and intensity variations between circuit images. While some academic papers report a degree of success with image segmentation and related tasks, such disclosures generally relate to quasi-ideal images. The image acquisition speeds required for industrial applications, however, can lead to increased image noise, resulting in processing errors that can be very time consuming to correct and/or require extensive manual intervention.
The existing circuit segmentation process is highly dependent on manually adjusted parameters to achieve reasonable results. For example, Wilson et al. (Ronald Wilson, Navid Asadizanjani, Domenic Forte, and Damon L. Woodard, "Histogram-based auto segmentation: a novel approach to segmenting integrated circuit structures from SEM images," arXiv:2004.13874, 2020) propose an intensity-histogram-based method for automatic segmentation of integrated circuits. However, that report offers no quantitative analysis of performance across different integrated circuit images with significant intensity variations. Furthermore, while emphasis is placed on wire segmentation, extraction of via-related information, such as accurate via location data, an important aspect of many semiconductor analysis applications, is not adequately addressed. Likewise, Trindade et al. (Bruno Machado Trindade, Eranga Ukwatta, Mike Spence, and Chris Pawlowicz, "Segmentation of integrated circuit layouts from scanning electron microscopy images," IEEE Canadian Conference on Electrical & Computer Engineering (CCECE), 2018, pp. 1-4, DOI: 10.1109/CCECE.2018.8447878, 2018) discuss the effect of different preprocessing filters on Scanning Electron Microscope (SEM) images and propose a learning-free method for integrated circuit segmentation. However, the effectiveness of the proposed method again relies on separation thresholds, which can be challenging, if not impossible, to establish universally across images of widely varying intensities or circuit configurations.
Machine learning platforms offer a potential solution for improving the automation of image recognition. For example, Lin et al. (Lin et al., "Deep learning-based image analysis framework for hardware assurance of digital integrated circuits," IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA), 2020, pp. 1-6, DOI: 10.1109/IPFA49335.2020.9261081, 2020) disclose a deep-learning-based method for identifying electrical components in an image, wherein a fully convolutional network is used to segment both the vias and the metal lines of an SEM image of an IC.
This background information is provided to reveal information believed by the applicant to be of possible relevance. It is not necessarily intended, nor should it be construed, that any of the preceding information form part of the prior art or form part of the common general knowledge in the relevant art.
Disclosure of Invention
The following presents a simplified summary of the general inventive concepts described herein in order to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is not intended to identify key or critical elements of the embodiments of the disclosure or to delineate their scope beyond that which is explicitly or implicitly described by the following description and claims.
There is a need for a machine learning system and method for object-specific recognition that overcomes some of the disadvantages of the known techniques, or at least provides a useful alternative thereto. Some aspects of the present disclosure provide examples of such systems and methods.
According to one aspect, there is provided an image analysis method for identifying each of a plurality of object types in an image, the method being performed by at least one digital data processor in communication with a digital data storage medium storing the image, the method comprising: accessing a digital representation of at least a portion of the image; identifying, in the digital representation, objects of a first object type of the plurality of object types by a first reusable identification model associated with a first machine learning architecture; identifying, in the digital representation, objects of a second object type of the plurality of object types by a second reusable identification model associated with a second machine learning architecture; and outputting first and second object data sets representing objects of the first and second object types, respectively, in the digital representation of the image.
In one embodiment, one or more of the first reusable recognition model or the second reusable recognition model includes a segmentation model or an object detection model.
In one embodiment, the first reusable recognition model comprises a segmentation model and the second reusable recognition model comprises an object detection model.
In one embodiment, one or more of the first reusable recognition model or the second reusable recognition model includes a recognition model free of user-adjustable parameters.
In one embodiment, one or more of the first reusable recognition model or the second reusable recognition model comprises a generic recognition model.
In one embodiment, one or more of the first reusable recognition model or the second reusable recognition model comprises a convolutional neural network recognition model.
In one embodiment, the first object type and the second object type correspond to different object types.
In one embodiment, the method further comprises training one or more of the first reusable recognition model or the second reusable recognition model with the context-specific training image or digital representation thereof.
In one embodiment, the digital representation includes each of a plurality of image blocks corresponding to respective regions of the image.
In one embodiment, the method further comprises defining a plurality of image blocks.
In one embodiment, image blocks are defined to include partially overlapping block regions.
In one embodiment, the method further comprises refining the output of the objects identified in the overlap region.
In one embodiment, refining includes performing an object merging process.
In one embodiment, a plurality of image blocks are defined differently for identifying objects of a first object type and identifying objects of a second object type.
In one embodiment, for at least some image blocks, one or more of the identification of the object of the first object type or the identification of the object of the second object type is performed in parallel.
In one embodiment, the method further comprises post-processing at least some of the objects according to a refinement process.
In one embodiment, the refinement process includes a convolution refinement process.
In one embodiment, the refinement process includes a k-nearest neighbor (k-NN) refinement process.
In one embodiment, one or more of the first object data set or the second object data set includes one or more of an image segmentation output or an object location output.
In one embodiment, the method is performed automatically by at least one digital data processor.
In one embodiment, the image represents an Integrated Circuit (IC).
In one embodiment, one or more of the first object type or the second object type includes a wire, a via, a polysilicon region, a contact, or a diffusion region.
In one embodiment, the image comprises an electron microscope image.
In one embodiment, the image represents a corresponding region of the substrate, and the method further comprises repeatedly performing the method for each of a plurality of images representing the corresponding region of the substrate.
In one embodiment, the method includes combining the first object data set and the second object data set into a combined data set representing an image.
In one embodiment, the method includes digitally rendering an object recognition image from one or more of the first object data set and the second object data set.
In one embodiment, the method includes independently training a first reusable recognition model and a second reusable recognition model.
In one embodiment, the method includes training the first reusable recognition model and the second reusable recognition model with training images enhanced by application-specific transformations.
In one embodiment, the application-specific transforms include one or more of image reflection, rotation, shifting, tilting, pixel intensity adjustment, or noise addition.
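By way of non-limiting illustration only, the following Python sketch shows one possible composition of such application-specific augmentations; the function name, probabilities, and parameter ranges are hypothetical assumptions rather than disclosed values.

```python
import numpy as np

def augment_patch(patch, rng=None):
    """Randomly apply application-specific transforms to a 2D image patch.

    A minimal sketch of the augmentations noted above (reflection,
    90-degree rotation, pixel intensity adjustment, additive noise);
    the probabilities and parameter ranges are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:                             # random reflection
        patch = np.flip(patch, axis=int(rng.integers(2)))
    patch = np.rot90(patch, k=int(rng.integers(4)))    # random rotation
    gain = rng.uniform(0.8, 1.2)                       # intensity adjustment
    noise = rng.normal(0.0, 5.0, size=patch.shape)     # additive Gaussian noise
    return np.clip(patch.astype(np.float32) * gain + noise, 0.0, 255.0)
```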
According to another aspect, there is provided an image analysis method for identifying each of a plurality of object types of interest in an image, the method being performed by at least one digital data processor in communication with a digital data storage medium storing the image, the method comprising: accessing a digital representation of the image; for each object type of interest, identifying each object of interest in the digital representation by a respective reusable object identification model associated with a respective corresponding machine learning architecture; and outputting a respective object data set representing the respective objects of interest corresponding to each object type of interest in the digital representation of the image.
According to another aspect, there is provided a method for digitally refining a digital representation of a segmented image defined by a plurality of pixels each having a corresponding pixel value, the method being performed by at least one digital data processor in communication with a digital data storage medium storing the digital representation, the method comprising: for each refinement pixel to be refined, calculating a feature pixel value corresponding to the pixel values of a specified number of neighboring pixels; digitally comparing the feature pixel value to a specified threshold; and assigning a refined pixel value to the refinement pixel when the feature pixel value satisfies a comparison condition relative to the specified threshold.
In one embodiment, calculating the feature pixel value includes performing a digital convolution process.
In one embodiment, the segmented image represents an integrated circuit.
In one embodiment, the digital representation corresponds to an output of a machine learning based image segmentation process.
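For illustration only, one possible realization of such a convolution-based refinement, assuming a binary segmentation mask and a 3 x 3 neighborhood, is sketched below; the kernel size and thresholds are illustrative assumptions, not disclosed values.

```python
import numpy as np
from scipy.ndimage import convolve

def refine_mask(mask, fill_thresh=6, clear_thresh=2):
    """Convolution-based refinement of a binary segmentation mask.

    For each pixel, a feature value is computed from the pixel values of
    its 8 neighbors (here, a simple count obtained by digital convolution)
    and compared against specified thresholds: background pixels mostly
    surrounded by foreground are filled, and isolated foreground pixels
    are cleared.
    """
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=np.uint8)
    neighbors = convolve(mask.astype(np.uint8), kernel, mode="constant")
    refined = mask.astype(np.uint8).copy()
    refined[(refined == 0) & (neighbors >= fill_thresh)] = 1   # fill small holes
    refined[(refined == 1) & (neighbors <= clear_thresh)] = 0  # remove specks
    return refined
```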
According to another aspect, there is provided an image analysis method for identifying each of a plurality of circuit feature types in an image of an Integrated Circuit (IC), the method being performed by at least one digital data processor in communication with a digital data storage medium storing the image, the method comprising, for each specified feature type of the plurality of circuit feature types: digitally defining a feature-type-specific digital representation of the image; identifying objects of the specified feature type in the type-specific digital representation by a reusable feature-type-specific object identification model associated with a respective machine learning architecture; and digitally refining output from the feature-type-specific object identification process according to a feature-type-specific refinement process.
According to another aspect, there is provided an image analysis system for identifying each of a plurality of object types in an image, the system comprising: at least one digital data processor in network communication with a digital data storage medium storing the image, the at least one digital data processor configured to execute machine-executable instructions to access a digital representation of at least a portion of the image, identify objects of a first object type of the plurality of object types in the digital representation by a first reusable identification model associated with a first machine learning architecture, identify objects of a second object type of the plurality of object types in the digital representation by a second reusable identification model associated with a second machine learning architecture, and output first and second object data sets representing objects of the first and second object types, respectively, in the digital representation of the image.
In one embodiment, one or more of the first reusable recognition model or the second reusable recognition model includes a segmentation model or an object detection model.
In one embodiment, the first reusable recognition model comprises a segmentation model and the second reusable recognition model comprises an object detection model.
In one embodiment, one or more of the first reusable recognition model or the second reusable recognition model includes a recognition model free of user-adjustable parameters.
In one embodiment, one or more of the first reusable recognition model or the second reusable recognition model comprises a convolutional neural network recognition model.
In one embodiment, the system further comprises a non-transitory machine-readable storage medium having stored thereon the first reusable identification model and the second reusable identification model.
In one embodiment, the machine-executable instructions further comprise instructions for defining each of a plurality of image blocks corresponding to respective regions of the image.
In one embodiment, the image block includes partially overlapping block regions.
In one embodiment, the machine-executable instructions further comprise instructions for refining the output of the objects identified in the overlap region.
In one embodiment, the machine-executable instructions for refining the output correspond to performing an object merging process.
In one embodiment, the plurality of image blocks are differently defined for identifying objects of a first object type and for identifying objects of a second object type.
In one embodiment, the machine-executable instructions further comprise instructions for post-processing at least some of the objects according to a refinement process.
In one embodiment, the refinement process includes a convolution refinement process.
In one embodiment, the refinement process includes a k-nearest neighbor (k-NN) refinement process.
In one embodiment, one or more of the first object data set or the second object data set includes one or more of an image segmentation output or an object location output.
In one embodiment, the image represents an Integrated Circuit (IC).
In one embodiment, one or more of the first object type or the second object type includes a wire, a via, a polysilicon region, a contact, or a diffusion region.
In one embodiment, the image comprises an electron microscope image.
In one embodiment, the image represents a corresponding region of the substrate, and the machine-executable instructions further comprise instructions for repeatedly executing the machine-executable instructions on each of the plurality of images representing the corresponding region of the substrate.
In one embodiment, the machine-executable instructions further comprise instructions for combining the first object data set and the second object data set into a combined data set representing the image.
In one embodiment, the machine-executable instructions further comprise instructions to digitally render the object identification image from one or more of the first object data set and the second object data set.
In one embodiment, the first reusable recognition model and the second reusable recognition model are trained with training images enhanced by application-specific transformations.
In one embodiment, the application-specific transforms include one or more of image reflection, rotation, shifting, tilting, pixel intensity adjustment, or noise addition.
According to another aspect, there is provided an image analysis system for identifying each of a plurality of object types of interest in an image, the system comprising: a digital data processor operable to execute object recognition instructions; at least one digital image database comprising images to be analyzed for a plurality of object types, the at least one digital image database being accessible by the digital data processor; a digital storage medium having stored thereon, for each of the plurality of object types, a different corresponding reusable recognition model deployable by the digital data processor and associated with a respective different machine learning architecture; and a non-transitory computer-readable medium comprising object recognition instructions that are operable, when executed by the digital data processor, to: access at least a portion of a digital representation of an image from the at least one digital image database; for each specified type of the plurality of object types of interest, identify at least one object of the specified type in the digital representation by deploying the different corresponding reusable identification model; and output a corresponding object data set representing objects of the specified type in the digital representation of the image.
In one embodiment, the system includes a digital output storage medium accessible by the digital data processor for storing each corresponding object data set corresponding to each specified type of the plurality of object types of interest.
In one embodiment, the digital data processor is operable to repeatedly execute the object recognition instructions for a plurality of images.
In one embodiment, each different respective reusable recognition model is configured to be repeatedly applied to a plurality of images.
According to another aspect, there is provided an image analysis system for digitally refining a digital representation of a segmented image defined by a plurality of pixels each having a respective pixel value, the system comprising: at least one digital data processor in communication with a data storage medium storing the digital representation, the at least one digital data processor further in communication with a non-transitory computer-readable storage medium storing digital instructions that, when executed, cause the at least one digital data processor to, for each refinement pixel to be refined, calculate a feature pixel value corresponding to the pixel values of a specified number of neighboring pixels, digitally compare the feature pixel value to a specified threshold, and assign a refined pixel value to the refinement pixel when the feature pixel value satisfies a comparison condition relative to the specified threshold.
In one embodiment, the feature pixel values are calculated according to a digital convolution process.
In one embodiment, the segmented image represents an integrated circuit.
In one embodiment, the digital representation corresponds to an output of a machine learning based image segmentation process.
According to another aspect, there is provided an image analysis system for identifying each of a plurality of circuit feature types in an image of an Integrated Circuit (IC), the system comprising: at least one digital data processor in communication with a storage medium storing the image, the at least one digital data processor further in communication with a non-transitory computer-readable storage medium storing digital instructions that, when executed, cause the at least one digital data processor to, for each specified feature type of the plurality of circuit feature types: digitally define a feature-type-specific digital representation of the image; identify objects of the specified feature type in the type-specific digital representation by a reusable feature-type-specific object identification model associated with a respective machine learning architecture; and digitally refine output from the feature-type-specific object identification process according to a feature-type-specific refinement process.
In one embodiment, each reusable feature type-specific object recognition model is stored on a non-transitory computer-readable storage medium.
According to another aspect, there is provided a non-transitory computer-readable storage medium having stored thereon digital instructions that, when executed by at least one digital data processor, cause the at least one digital data processor to, for each specified feature type of a plurality of circuit feature types: digitally define a feature-type-specific digital representation of an image; identify objects of the specified feature type in the type-specific digital representation by a reusable feature-type-specific object identification model associated with a respective machine learning architecture; and digitally refine the output from the feature-type-specific object identification process according to a feature-type-specific refinement process.
In one embodiment, each reusable feature type-specific object recognition model is also stored on a non-transitory computer-readable storage medium.
According to another aspect, there is provided a non-transitory computer-readable storage medium having stored thereon digital instructions that, when executed by at least one digital data processor, cause the at least one digital data processor to: access a digital representation of at least a portion of an image; identify, in the digital representation, objects of a first object type of a plurality of object types by a first reusable identification model associated with a first machine learning architecture; identify, in the digital representation, objects of a second object type of the plurality of object types by a second reusable identification model associated with a second machine learning architecture; and output first and second object data sets representing objects of the first and second object types, respectively, in the digital representation of the image.
In one embodiment, each reusable feature type-specific object recognition model is also stored on a non-transitory computer-readable storage medium.
According to another aspect, there is provided a non-transitory computer-readable storage medium having stored thereon digital instructions for digitally refining a digital representation of a segmented image defined by a plurality of pixels each having a respective pixel value, the digital instructions, when executed by at least one digital data processor, causing the at least one digital data processor to: for each refinement pixel to be refined, calculate a feature pixel value corresponding to the pixel values of a specified number of neighboring pixels; digitally compare the feature pixel value to a specified threshold; and assign a refined pixel value to the refinement pixel when the feature pixel value satisfies the comparison condition relative to the specified threshold.
Other aspects, features and/or advantages will become more apparent upon reading the following non-limiting description of specific embodiments, given by way of example only with reference to the accompanying drawings.
Drawings
Several embodiments of the invention are hereinafter presented, by way of example only, with reference to the accompanying drawings, wherein:
FIGS. 1A-1F are SEM images of an exemplary integrated circuit highlighting some challenges associated with automated image recognition processing, according to various embodiments;
FIGS. 2A and 2B are diagrams of exemplary machine-learning-based image recognition processes employing a corresponding machine learning architecture to recognize corresponding object types in an image, in accordance with various embodiments;
FIG. 3 is a schematic diagram of two exemplary image preprocessing steps, according to various embodiments;
FIGS. 4A and 4B are images of exemplary segmented outputs from a machine learning identification process, according to various embodiments;
FIGS. 5A and 5B are images of an exemplary image input and corresponding machine learning based segmentation output in accordance with various embodiments;
FIGS. 6A and 6B are graphs illustrating exemplary spectral deviations in wire segmentation and via detection, respectively, according to various embodiments; and
Fig. 7A and 7B are images of an exemplary image input and corresponding machine-learning based detection output, and fig. 7C is a set of input images and corresponding machine-learning based detection output overlaid thereon, in accordance with various embodiments.
Elements in the several figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present disclosure. Moreover, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present disclosure.
Detailed Description
Various implementations and aspects of the specification will be described with reference to details discussed below. The following description and drawings are illustrative of the specification and are not to be construed as limiting the specification. Numerous specific details are described to provide a thorough understanding of various embodiments of the present description. However, in some instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present description.
Various devices and processes will be described below to provide examples of embodiments of the systems disclosed herein. The embodiments described below are not limited to any claimed embodiments, and any claimed embodiments may cover different methods or apparatuses than those described below. The claimed embodiments are not limited to devices or methods having all of the features of any one device or method described below nor to features common to multiple or all devices or methods described below. The apparatus or method described below may not be an implementation of any of the claimed subject matter.
Furthermore, numerous specific details are set forth in order to provide a thorough understanding of the implementations described herein. However, it will be understood by those skilled in the relevant art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods and components have not been described in detail so as not to obscure the implementations described herein.
In this specification, an element may be described as being "configured to" perform one or more functions or as being "configured for" such functions. In general, an element that is configured to perform a function is capable of performing, adapted to perform, operable to perform, or otherwise capable of performing that function.
It should be understood that, for purposes of this specification, the language "at least one of X, Y, and Z" and "one or more of X, Y, and Z" may be interpreted as X only, Y only, Z only, or any combination of two or more of X, Y, and Z (e.g., XYZ, XY, YZ, XZ, etc.). Similar logic may be applied to two or more items in any occurrence of the "at least one of ..." and "one or more of ..." language.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrase "in one of the embodiments" or "in at least one of the embodiments" as used herein does not necessarily refer to the same embodiment, although it may. Furthermore, the phrase "in another embodiment" or "in some embodiments" as used herein does not necessarily refer to different embodiments, although it may. Thus, as described below, different embodiments may be readily combined without departing from the scope or spirit of the innovations disclosed herein.
In addition, as used herein, the term "or" is an inclusive "or" operator and is equivalent to the term "and/or," unless the context clearly dictates otherwise. The term "based on" is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of "a," "an," and "the" includes plural references. The meaning of "in" includes "in" and "on."
The term "comprising" as used herein will be understood to mean that the list that follows is non-exhaustive and may or may not include any other additional suitable items, such as one or more additional suitable features, components, and/or elements.
Reverse Engineering (RE) is now a common practice in the electronics industry with a wide range of applications including quality control, propagation of concepts and technologies used in semiconductor chip fabrication, and evaluation of infringement and intellectual property considerations in support of patent licensing activity.
However, as the integration density of semiconductor circuits continues to increase, RE becomes increasingly specialized. For example, many RE applications require advanced microscope systems operable to acquire thousands of Integrated Circuit (IC) images with sufficient resolution to visualize billions of micron- and sub-micron-scale features. The number of components that must be handled demands a high level of automation, which is challenging, especially given the often-required determination of connectivity between circuit elements that are not necessarily logically placed within a circuit layer, but are instead arranged to optimize space usage.
Various methods of automatically analyzing ICs have been proposed. One method is described in U.S. Patent No. 5,694,481, entitled "Automated design analysis system for generating circuit schematics from high magnification images of an integrated circuit" and issued to Lam et al. on December 2, 1997. This example generally illustrates an overview of the IC RE method, and discloses a method for generating an IC schematic using electron microscopy images. Because of the high resolution required for circuit feature imaging, each IC layer is imaged by independently scanning many (potentially millions of) sub-regions, and such "tile" images are then stitched together to generate a more complete 2D representation of the IC. These 2D mosaics are then aligned in three dimensions to build a database from which a schematic diagram of the IC layout is generated.
However, in terms of the actual extraction of circuit features, such an automated process may be challenged by a number of factors, the most important of which relate to the nature of the imaging technology required to visualize such small elements. For example, widely used methods such as Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM), Scanning Capacitance Microscopy (SCM), Scanning Transmission Electron Microscopy (STEM), etc., may produce images with undesirable amounts of noise and/or distortion. While these challenges can be addressed for some applications (e.g., evaluating whether an IC layout meets design rules) when the circuit layout is known, extracting circuit features from imperfect data in an automated fashion is far more challenging when no information is available about the intended circuit design.
Various extraction methods have been proposed. For example, automatic extraction of IC information has been explored in U.S. Patent No. 5,086,477, entitled "Automated system for extracting design and layout information from an integrated circuit" and issued to Yu and Berglund on February 4, 1992, which discloses identifying circuit elements based on a comparison of circuit features to a feature template or library of feature templates. However, a library of such reference structures must be built for each unique component and/or configuration. Given that even the components of a single transistor (i.e., source, gate, and drain) or logic gate (e.g., OR, NAND, XNOR, etc.) may have a wide range of configurations and/or shapes for performing the same function, such an approach is very challenging in practice, often resulting in a template matching system that requires a significant amount of operator intervention, is computationally very expensive, and is limited to particular component configurations (i.e., lacks robustness).
For example, a NAND gate may include a specified number of serially and parallel connected transistors. However, the specific configuration and layout of the transistor features of a NAND gate (e.g., the size, shape, and/or relative orientation of the source, gate, and drain of the transistors), as well as the configuration of different transistors, may vary even within a single IC layer, and even between adjacent gates. Thus, an operator needs to identify each transistor geometry present in each gate for inclusion in the template library, and automatic extraction of subsequent transistor instances may only succeed when the previously catalogued geometries are repeated.
Despite these drawbacks, this approach remains common in IC RE practice. For example, U.S. Patent No. 10,386,409, entitled "Non-destructive testing of integrated circuit components" and issued to Gignac et al. on August 20, 2019, and U.S. Patent No. 10,515,183, entitled "Integrated circuit identification" and issued to Shehata et al. on December 24, 2019, disclose circuit element identification based on pattern matching processes.
More generally, extracting particular types of features from an IC image may be important for various applications. For example, many RE or development applications may rely on identifying wires, vias, diffusion regions, polysilicon features, etc., from SEM images. While a common method of achieving this is image segmentation, automatic extraction of features faces challenges of low segmentation accuracy due to noisy images, contamination, and intensity variations between circuit images. Correcting the resulting errors can be very time consuming for an operator.
The existing circuit segmentation process is also highly dependent on user-adjusted parameters to achieve reasonable results. For example, Wilson et al. (Ronald Wilson, Navid Asadizanjani, Domenic Forte, and Damon L. Woodard, "Histogram-based auto segmentation: a novel approach to segmenting integrated circuit structures from SEM images," arXiv:2004.13874, 2020) disclose an intensity-histogram-based method for automatic segmentation of integrated circuits. However, that report offers no quantitative analysis of performance across different integrated circuit images with significant intensity variations. Furthermore, while emphasis is placed on wire segmentation, extraction of via-related information, such as accurate via location data, an important aspect of many semiconductor analysis applications, is not adequately addressed. Likewise, Trindade et al. (Bruno Machado Trindade, Eranga Ukwatta, Mike Spence, and Chris Pawlowicz, "Segmentation of integrated circuit layouts from scanning electron microscopy images," IEEE Canadian Conference on Electrical & Computer Engineering (CCECE), 2018, pp. 1-4, DOI: 10.1109/CCECE.2018.8447878, 2018) discuss the effect of different preprocessing filters on Scanning Electron Microscope (SEM) images and propose a learning-free method for integrated circuit segmentation. However, the effectiveness of the proposed method again relies on separation thresholds, which can be challenging, if not impossible, to establish universally across images of widely varying intensities or circuit configurations. Furthermore, such thresholds may not even exist, depending on aspects of the image (e.g., quality, noise, contrast, etc.).
One possible method of automatically identifying IC features is to employ a Machine Learning (ML) architecture to identify a particular feature or feature type. However, such platforms still face challenges, such as problems associated with image noise, intensity variations between images, or contamination. Furthermore, unlike image recognition processes applied to conventional photographs, IC images may generally be discontinuous, their histograms may generally be multi-modal, and the relative positions of the modes in the histogram may vary between image captures. The modal distributions of different features (e.g., wires, vias, diffusion regions, etc.) may overlap. For some applications, the size and distribution of features may present further challenges to analysis. For example, vias tend to be numerous, small, and sparsely distributed, similar to contamination-based noise. Furthermore, image edges can be problematic: for example, when wires are "cut" at the boundary between adjacent images (i.e., edge-cut), they can be difficult to distinguish from vias. This problem may be exacerbated by the fact that machine learning processes may require cutting the image into smaller sub-images due to memory and/or processing limitations, as described further below.
Generally, ML processes known in the art still require a user to adjust parameters or hyperparameters. With respect to IC component identification, this may require a user to manually adjust parameters for, for example, each grid or image set, and/or for those having different intensities and/or component distributions. Thus, such platforms or models are not generic, requiring user intervention to obtain acceptable results on different images or image sets. Furthermore, machine learning systems are not universal in the outputs they provide; different object types may call for different outputs. For example, many applications may require accurate information about via locations within an IC, while for wires, continuity and/or connectivity may be the primary concern. This differs from traditional machine learning methods, which generally have a particular output goal (e.g., pixel-by-pixel segmentation) and/or may be evaluated using consistent metrics (e.g., recall, precision, or confidence scores). For example, Lin et al. (Lin et al., "Deep learning-based image analysis framework for hardware assurance of digital integrated circuits," IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA), 2020, pp. 1-6, DOI: 10.1109/IPFA49335.2020.9261081, 2020) propose a deep-learning-based method to identify electrical components in images. However, the proposed method involves a fully convolutional network for segmenting target features within IC SEM images. That is, both via and metal line features are identified by the same segmentation process performed using the same machine learning architecture. However, and in accordance with different embodiments described herein, different methods and/or architectures may be used to more appropriately identify different image features. Furthermore, the machine learning model of Lin et al., although applied to images with less characteristic noise than those acquired in industrial applications, cannot be reused between images of different ICs, or even of different IC layers. That is, the system and method of Lin et al. require retraining for each new image to be processed, which is impractical for industrial applications.
For example, in IC reverse engineering applications, it may be desirable for an output (e.g., a segmentation output of wires) both to have the correct electrical connections between wires and to maintain a desired aesthetic quality level. That is, it may be preferable to output a segmentation result with the correct electrical connectivity while approximating how a human would segment the image. However, certain aspects of conventional segmentation may be less important. For example, for some applications, small holes in a wire or its rough edges may not be as important as continuity (i.e., conductivity). On the other hand, the placement of vias within an IC relative to wires, etc., may be more important than via shape. Thus, the evaluation of ML output quality for these different objects may depend on different aspects. Furthermore, it may be preferable to identify different objects or object types using distinct identification processes. For example, for circuit feature identification from SEM images, segmentation may provide an effective means of identifying wires and/or diffusion regions. However, depending on the application at hand, segmentation may not be as effective as a detection process for via identification. Thus, and in accordance with various embodiments, different processes may preferably be applied for different image recognition aspects or object types.
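By way of illustration, connectivity-oriented evaluation of a wire segmentation output could be approximated with a connected-component analysis, as in the following non-limiting sketch (the use of scipy labeling here is an illustrative assumption, not a disclosed method):

```python
from scipy.ndimage import label

def wire_connectivity(wire_mask):
    """Evaluate a wire segmentation by its electrical connectivity.

    Connected-component labeling groups touching wire pixels into nets;
    for connectivity-oriented evaluation, agreement in the number and
    membership of these components matters more than small holes or
    rough edges in the segmented wires.
    """
    labeled, num_nets = label(wire_mask)
    return labeled, num_nets
```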
To this end, at least in part, the systems and methods described herein according to different embodiments provide different examples of image analysis methods and systems for identifying each of a plurality of object types in an image. While the various exemplary embodiments described relate to identifying circuit features (e.g., wires, vias, diffusion regions, etc.) from an integrated circuit image, it should be appreciated that such embodiments may additionally or alternatively be deployed in the context of different applications to identify objects from an image or digital representation thereof. For example, while some embodiments relate to identifying wires and vias from a digital representation of an IC (e.g., a digital SEM image or a portion thereof), other embodiments relate to identifying different object types (people, structures, vehicles, etc.) from other forms of media (e.g., photographs, videos, topography, radar images, etc.).
In general, embodiments described herein relate to identifying a corresponding object type from an image using a corresponding machine learning identification model, architecture, system, or process. It should be appreciated that each machine learning process or model may be employed from a common computing architecture (sequentially or in parallel) or from a plurality of different architectures or networks. For example, according to some embodiments, networked computing systems may access different remote ML architectures via a network to perform corresponding ML recognition processing in accordance with a variety of ML frameworks or combinations thereof. Further, it should be understood that the systems and methods described herein may be extended to any number of object types. For example, any suitable combination of ML architectures may be used to identify multiple object types (e.g., 2, 3, 5, 10, or N object types). For example, one embodiment involves identifying five object types from an image using three different machine learning architectures. One or more of these machine learning architectures may be used in parallel for independent and simultaneous processing, but other embodiments involve independent sequential processing of images or digital representations thereof.
Thus, it should be appreciated that aspects of the machine learning architecture may be employed in the context of different embodiments. For example, the systems and methods described herein may include and/or have access to a variety of digital data processors, digital storage media, interfaces (e.g., programming interfaces, network interfaces, etc.), computing resources, servers, networks, machine executable code, etc., to access and/or communicate with one or more machine learning networks and/or models thereof or digital code/instructions. According to some aspects, embodiments of the system or method may themselves include a machine learning architecture or portion thereof.
Further, it should be appreciated that a machine learning architecture or network as described herein may relate to an architecture or network, or portion thereof, known in the art, non-limiting examples of which may include ResNet, HRNet (e.g., HRNet-3, HRNet-4, HRNet-5, etc.), pix2pix, or YOLO, although a variety of other networks known in the art or yet to be developed (e.g., neural networks, convolutional neural networks, etc.) may be employed and/or accessed. Furthermore, different embodiments relate to combinations of multiple partial or complete ML networks. For example, one embodiment involves a combination of ResNet, Faster R-CNN, and/or HRNet aspects to identify an object type from an image. According to still other embodiments, and depending on, for example, the type of object to be identified and/or the needs of the particular application, an image may be processed with various layers and/or depths of an ML network to identify objects therein. It should also be appreciated that, as referred to herein, a machine learning architecture may refer to any one or more ML models, processes, code, hardware, firmware, etc., as required by the particular embodiment or application at hand (e.g., object detection, segmentation, etc.).
For example, non-limiting examples of machine learning architectures may include HRNet-based machine learning frameworks (e.g., HRNet-3, HRNet-4, etc.). An HRNet-based framework and/or architecture may be used to train and/or develop a first machine learning model for a particular application (e.g., wire segmentation), where the model may be reused across multiple images (i.e., is robust enough to segment wires from multiple images, IC layers, images representing different ICs, etc.). According to some embodiments, the machine learning architecture may include instances configured according to a corresponding machine learning framework (e.g., HRNet) to identify object types in multiple images, according to context and as described herein.
According to some embodiments, the machine learning architecture may additionally or alternatively include a combination of machine learning frameworks (e.g., HRNet and ResNet). That is, the term "machine learning architecture" referred to herein may refer not only to a single machine learning framework dedicated to a specified task, but may additionally or alternatively refer to multiple frameworks employed in combination to identify instances of a specified object type. Furthermore, the machine learning architecture or a combination of machine learning frameworks thereof may produce different forms of output (e.g., data sets related to object detection and data sets related to object segmentation) depending on the current application.
Various embodiments relate to selecting a specified machine learning architecture and/or associated model well suited to a particular task (e.g., analyzing an image to identify each specified object type), where an appropriate machine learning architecture and/or associated model is specified for identifying objects of each object type of interest to be identified. Furthermore, according to some embodiments, selecting an appropriate machine learning architecture (e.g., one of a specified and/or appropriate complexity) and appropriately training the specified associated model (e.g., training according to a specified training image breadth, including, for example, selected image transformations, a number of training images, etc.) for each object type to be identified enables generation of a generic model that can be reused across multiple images (i.e., without requiring retraining between image sets) and that performs reliably and accurately even on noisy or otherwise challenging image sets (e.g., electron microscope images of integrated circuits acquired for industrial and/or reverse engineering applications). For example, various embodiments improve computing systems and methods by providing a machine learning framework that does not require user intervention, model retraining, and/or parameter adjustment between image analyses, in particular by selecting an appropriate machine learning framework for object-specific detection using an appropriately trained model. Models trained and applied in accordance with such embodiments are less sensitive to noise and provide improved versatility compared to existing frameworks.
For example, and without limitation, while a first machine learning architecture including a first machine learning framework (e.g., HRNet) may employ a first machine learning model to output segmentation results identifying wires in an IC image, a second machine learning architecture may include a combination of machine learning frameworks (e.g., HRNet and ResNet, or another combination of two, three, or more frameworks) to execute a second machine learning model (or combination of models) to output detection results corresponding to vias detected (i.e., not segmented) from the same IC image used as input to the first machine learning architecture. According to some embodiments, using such corresponding machine learning architectures to perform corresponding image recognition tasks for corresponding object types may increase the robustness of the machine learning models and/or tasks as used across the multiple (or indeed many or all) images to be processed for a particular application. Such embodiments may thus represent improvements over traditional methods that employ the same machine learning architecture, framework, process, or model to identify each of a plurality of object types, which, among other drawbacks, results in poor model robustness (i.e., a lack of reusability across images).
According to various embodiments, the systems and methods described herein relate to a pipeline for identifying multiple objects from an image or digital representation thereof by using object-specific machine learning architectures, frameworks, or models that may be free of user-adjusted parameters (i.e., generic), automated (i.e., not requiring human intervention), and robustly re-applicable to multiple images (i.e., reusable across multiple images). Furthermore, different embodiments relate to ML models and/or architectures that can generate results for different images without requiring image- or image-type-specific retraining. Some embodiments employ image preprocessing to prepare or define a digital representation of an image (e.g., a binary representation of a surface, such as an IC layer, or tiles or blocks thereof) and/or employ a refinement step to post-process the output from the machine learning architecture. For exemplary purposes, the various embodiments described herein include such preprocessing and/or refinement steps. It should be understood, however, that other embodiments contemplated herein may omit such processes, or exchange variations therewith, and that objects may be identified from images according to the object-specific machine learning processes, models, and/or systems described herein in accordance with different embodiments.
Referring to the exemplary application of IC feature recognition, machine learning recognition processing presents a number of challenges. Figs. 1A-1F highlight some of these challenges, showing exemplary SEM image blocks (i.e., defined portions) of IC images. In this example, Figs. 1A through 1C include image tiles that show variations in feature intensity between different ICs. For example, the via 102a in Fig. 1A is significantly brighter than the via 102b in Fig. 1B or the via 102c in Fig. 1C, which is barely visible. Given the low contrast between wires and vias, such as in Fig. 1C, conventional intensity-based threshold techniques for segmentation-based via identification would be very challenging. Fig. 1D, on the other hand, shows an exemplary IC image block featuring a high degree of noise. In this case, the wires 104d are horizontally aligned, but vertical linear noise causes regions between the wires to have high intensity, which may complicate the feature extraction process. Figs. 1E and 1F illustrate exemplary images with contaminants 106e and 106f on the surface of the IC layer, which may further challenge feature identification. For example, using conventional identification processes, the contaminant 106e, having relatively high intensity, may be prone to mischaracterization as a via. The contaminant 106f, on the other hand, partially obscures a via, which may result in that via being missed during a detection process.
Furthermore, compared to natural image perception tasks, IC SEM image segmentation places less emphasis on high-level semantic information, because vias and wires in SEM images tend to have relatively regular shapes and sizes. For some applications, the pipeline may comprise binary segmentation tasks and/or single-class object detection tasks. Texture information in the high-resolution feature map is therefore relatively more important for IC segmentation than for natural image processing. Accordingly, for such applications, and in accordance with some embodiments, the ML architecture may include a Convolutional Neural Network (CNN) configured to maintain a high-resolution feature map. For example, a low-resolution-path network (e.g., ResNet) that extracts visual image features by downsampling feature maps from high resolution to low resolution may not be preferred for various segmentation tasks. In contrast, according to some embodiments, a segmentation task (e.g., wire segmentation from an IC SEM image) may employ a CNN framework or process, such as HRNet, which extracts features in parallel from multi-resolution feature maps. Such processing may thus maintain a high-resolution feature map during most or all of the feature extraction process. The output therefrom may, according to some embodiments, then serve as an input to various other ML processes (e.g., those employed by ResNet) to perform various other tasks (e.g., via detection).
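As a non-limiting illustration, the following toy PyTorch module sketches the parallel multi-resolution idea described above; it is not the actual HRNet implementation, and the layer sizes and channel counts are arbitrary assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoResolutionBlock(nn.Module):
    """Toy two-branch block in the spirit of HRNet (not its actual
    implementation): a full-resolution branch is maintained alongside a
    downsampled branch, and the branches are fused so that the
    high-resolution feature map, which carries the texture information
    important for IC imagery, is never discarded."""

    def __init__(self, channels=32):
        super().__init__()
        self.high = nn.Conv2d(channels, channels, 3, padding=1)
        self.low = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        h = F.relu(self.high(x))               # full-resolution branch
        l = F.relu(self.low(x))                # half-resolution branch
        l = F.interpolate(l, size=h.shape[-2:], mode="bilinear",
                          align_corners=False)
        return self.fuse(torch.cat([h, l], dim=1))  # multi-resolution fusion
```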
For example, Fig. 2A illustrates an exemplary process 200 for identifying different object types using corresponding machine learning architectures and/or models, in accordance with some embodiments. In this example, the image to be processed comprises an 8192 x 8192-pixel SEM image 202 of an IC layer that includes a plurality of wires and vias to be identified, although it should be understood that different image types and/or resolutions may be processed to extract any number of objects and/or object types, in accordance with different embodiments. The image 202 is preprocessed 204 to define image blocks (e.g., image portions corresponding to different spatial regions of the image 202), whereby corresponding first and second machine learning architectures 206a and 206b (or machine learning processes or trained models 206a and 206b) independently process the image blocks based at least in part on the types of objects (e.g., wires and vias) to be identified. Respective processes 208a and 208b of a refinement step 208 act on the outputs from the machine learning architectures 206a and 206b, thereby producing 210 respective outputs 210a and 210b. In one embodiment, the image blocks are recombined and/or combined to produce respective output images 210a and 210b having a resolution comparable to that of the input image 202. These exemplary processing steps are further described below with respect to different embodiments.
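For illustration only, a flow such as process 200 might be orchestrated roughly as in the following sketch, in which tile, segment_wires, detect_vias, refine_wires, merge_vias, and stitch are hypothetical stand-ins for the preprocessing, trained models, and post-processing steps described above.

```python
def run_recognition(image, tile, segment_wires, detect_vias,
                    refine_wires, merge_vias, stitch):
    """Hypothetical orchestration of a process such as 200: tile the
    input image, run each object-type-specific model independently,
    post-process each output, and reassemble full-resolution results."""
    wire_tiles = tile(image, overlap=0.0)   # e.g., non-overlapping blocks
    via_tiles = tile(image, overlap=0.2)    # e.g., overlapping blocks
    wire_masks = [refine_wires(segment_wires(p)) for p, _ in wire_tiles]
    via_dets = [detect_vias(p) for p, _ in via_tiles]
    wires_out = stitch(wire_masks, [o for _, o in wire_tiles], image.shape)
    vias_out = merge_vias(via_dets, [o for _, o in via_tiles])
    return wires_out, vias_out              # e.g., outputs 210a and 210b
```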
According to another embodiment, fig. 2B schematically illustrates another image recognition process 201. In this example, a digital representation 203 of an SEM image of an IC is used as input to the recognition process 201, which is used to recognize respective object types, in this case corresponding to segmenting wires and detecting vias from the input image 203. The process 201 includes preprocessing the image to define image blocks (e.g., sub-images of smaller size than the input image 203) according to corresponding preprocessing steps for each object type (e.g., different preprocessing procedures for wires and vias). For example, fig. 2B schematically shows a first preprocessing step 205a, which corresponds to defining non-overlapping image blocks for eventual segmentation of the wires from the image blocks. The preprocessing step 205b, however, corresponds to defining overlapping image blocks from the same input image 203 based on, for example, a downstream via detection processing step. According to another embodiment, the respective preprocessing steps 205a and 205b may both include the definition of overlapping blocks, but with different amounts of overlap. For example, the preprocessing step 205a may define blocks with an overlap that is, for example, 10%, 50%, or 80% of the overlap used in the preprocessing step 205b.
In the exemplary embodiment of fig. 2B, the preprocessed image blocks 205a and 205b may be used as inputs for further processing using corresponding machine learning architectures 207 and 209, respectively. For example, image blocks for wire identification may be processed by a machine learning architecture 207 that includes a segmentation network. It should be appreciated that the first machine learning architecture 207 may include a trained machine learning model 207, according to some embodiments. According to another embodiment, however, the first machine learning architecture may include an untrained network, wherein the image blocks 205a may be used, for example, as training images. Concurrently with, before, or after the execution of the first machine learning process 207, the image blocks 205b for via identification may serve as input to the second machine learning architecture 209. In this example, the second machine learning architecture 209 acts as a framework for detecting vias in the image blocks 205b, which does not necessarily include a segmentation process, although one may be included in some embodiments. In this example, the second machine learning architecture 209 includes a combination of machine learning networks or frameworks 209a and 209b. That is, according to some embodiments, object recognition (e.g., via detection) may include the combined use of machine learning frameworks, wherein a first machine learning framework 209a provides as output an input to be processed by a second machine learning framework 209b. As noted above, it should be appreciated that according to some embodiments, one or more of the machine learning frameworks 209a or 209b, or the entire second machine learning architecture 209, may comprise, for example, a trained machine learning model.
The process 201 may then include post-processing of the respective outputs from the respective machine learning architectures 207 and 209. For example, the wire segmentation output from the first ML architecture 207 may undergo a refinement process 211a, in which segmented pixels are refined according to a convolution-based refinement process. A different refinement process 211b may operate on the via detection outputs from the second ML architecture 209, for example, to merge outputs corresponding to different image blocks so as to remove duplicate and/or incomplete vias in the overlap regions defined during the preprocessing 205b. Thus, according to various embodiments, respective outputs 213a and 213b may be generated for consumption and/or further processing by a user.
It should be appreciated that processes such as those presented in figs. 2A and 2B may provide a number of advantages over conventional machine learning based recognition processes. For example, Lin et al disclose using the same ML-based segmentation architecture to identify metal lines and vias in an image set. However, one of the drawbacks of such systems or processes is a lack of robustness, in that a new ML model must be trained for each image set representing an IC layer (i.e., the model is not reusable). This is impractical for industrial applications, where tens of thousands of images or image sets corresponding to different IC layer regions, different IC layers, and different ICs may require rapid processing. The different embodiments described herein, however, provide a robust (e.g., reusable) machine learning model that can be applied to different image sets, at least in part by using respective machine learning architectures for different object types (e.g., wires, vias, contacts, diffusion regions, etc.). To this end, at least in part, the following description further describes various elements of systems and processes for identifying multiple objects of interest from an image in a robust manner, non-limiting examples of which may relate to those of figs. 2A and 2B, among others.
According to various embodiments, image recognition processing, systems, architectures, and/or models may benefit from pre-processing prior to machine learning processing. For example, various machine learning architectures (e.g., CNN networks) may perform optimally when processing images of a specified size and/or resolution and/or images below a threshold size and/or resolution. Thus, and in accordance with some embodiments, the preprocessing step (e.g., preprocessing 204) may include defining an image of a specified size and/or resolution from a larger image (i.e., at least a portion thereof). It should be appreciated that such images and/or image blocks may be accessed from a local machine, or may be accessed from a remote storage medium (e.g., via the internet), according to various embodiments.
Various preprocessing methods are contemplated herein, depending on, for example, the type of image to be processed, the type of object to be identified, the size and/or resolution of the initial image, the type of machine learning process employed, and so forth. Fig. 3 schematically shows two proposed image preprocessing routines that may be used for IC feature recognition. In this example, the image 302 to be processed comprises a large, high resolution SEM image of a relatively large area of an IC containing many wires and vias. For example, such an image may be too large for the processing resources or time allocated to the user's machine learning architecture to adequately extract features or train models with sufficient quality or accuracy for a particular application (e.g., wire and via identification). Moreover, such an image may simply include too many features to adequately train the recognition process in a reasonable time.
Accordingly, image preprocessing steps may be employed to define sub-images 304 and 306 (also referred to herein as image blocks) of a specified resolution and/or size that are easier and/or more accurate to process through subsequent machine learning or machine recognition processes. In this example, two different image sizes corresponding to blocks 304 and 306 are schematically shown. It should be appreciated that such different image sizes may be defined from the input image 302, depending on the current application. Different embodiments, however, involve defining image blocks of uniform size, wherein most or all of the input image 302 is represented by corresponding image blocks corresponding to respective regions of the input image 302. For example, the input image 302 may thereby define an array of blocks having a uniform size/resolution such that, when stitched or combined, the input image 302 is reconstructed. It should be appreciated that such a uniform size may be specified based on, for example, the particular machine learning process to be employed, the amount of dedicated resources and/or time allocated to the various machine learning processes, the density of features in the image 302 and/or the image blocks, the type of object to be identified, and so forth.
For example, and in accordance with some embodiments, the high resolution SEM image 302 may be digitally "cut" into SEM image blocks whose size is based at least in part on known or automatically digitally inferred intensity differences between the background and the particular feature type. In one embodiment, such image blocks may be defined for eventual segmentation of the wires in the IC SEM image, wherein the intensity difference between the background and the wires may be relatively pronounced, and wherein the shape of the wires may not vary significantly between image blocks. Thus, according to some embodiments, a block size may be defined to provide a desired balance of "local" features and textures for classifying images for wire segmentation, taking into account the computational resources required to do so. According to other embodiments, the block size may be defined based on constraints on the memory and processing speed of the available computing resources (e.g., a GPU). It should be understood, however, that different embodiments relate to selection of block sizes based on the particular application at hand.
As described above, the edges of an image can present challenges to the image recognition process. For example, vias located at the edges of an image may be "cut" so as to appear incomplete in the image, or wire ends cut between images may appear as vias in one or more images and be incorrectly identified. Such "edge cuts" may be exacerbated by the definition of image blocks, whereby a greater proportion of the image area has edges associated with it, which may give rise to such challenging recognition scenarios. Fig. 3 therefore also schematically illustrates one method of handling such edge effects, according to various embodiments. In this example, adjacent image blocks 308 and 310 are defined from the input image 302. In this case, however, each image block has a designated boundary region 312 defined in association with it, from which the subsequent identification process may effectively discard identified features. For example, according to one embodiment, a 50 pixel boundary 312 may be defined such that any incomplete edge vias or detected via-like objects may be discarded from further consideration. It should be appreciated, however, that any suitable boundary 312 may be specified based on, for example, the expected feature size (e.g., previously automatically determined or estimated based on average feature size, median size, etc.) and/or density. According to a further embodiment, the boundary or overlap may also be related to a particular object type. For example, but not limited to, a wire segmentation application may not employ overlapping image blocks (although other embodiments may do so), while overlapping blocks may be used in a via detection application to, for example, mitigate edge-cutting effects.
Further, and in accordance with some embodiments, the image blocks 308 and 310 may be defined according to specified overlap regions 314a and 314b between adjacent blocks. That is, the preprocessing step may define the image blocks 308 and 310 according to a uniform size, but with a specified overlap region corresponding to a common region of the input image 302 that is present in each of at least two adjacent blocks 308 and 310. According to some embodiments, such overlap regions 314a and 314b may be specified based on, for example, an expected feature size, or another suitable metric. For example, in an embodiment associated with the discarded boundary region 312, the overlap region may be defined based on one or more of the boundary region 312 size and the expected via size. Such a definition may facilitate subsequent processing, for example with respect to via identification, thereby helping to accurately distinguish vias from wire ends cut across adjacent image blocks. According to an exemplary embodiment, the overlap regions 314a and 314b may be defined as twice the size of the boundary region 312. In some embodiments, the overlap regions 314a and 314b may be defined as 100 pixels along each edge of an image and/or image block. According to some embodiments, the overlap regions 314a and 314b and the boundary region 312 (used, for example, to discard features identified as being present only therein) may be defined such that features discarded from the boundary region 312 (e.g., vias, sheared wires, etc.) may still be detected in the overlap regions 314a and 314b, thereby reducing the number of false positives while not ignoring features disposed near the edges of an image or image block. According to further embodiments, the overlap region size may be a function of, or related to, downstream processing. For example, various refinement processes applied to the machine learning process output may rely on various convolution and/or threshold comparison processes, as will be described further below. For such embodiments, it may be desirable to define the overlap and boundary regions such that the overlap between images is large enough that a via near an edge is fully presented in at least one adjacent block.
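As a hedged illustration of this bookkeeping, the sketch below uses the example values quoted above (a 50 pixel boundary and a 100 pixel overlap, i.e., twice the boundary); the helper names and box format are assumptions, not details of any reference implementation.

```python
def outside_boundary(box, tile_size=256, border=50):
    """True if a detected box lies entirely inside the kept (non-border) area."""
    x0, y0, x1, y1 = box
    return (x0 >= border and y0 >= border
            and x1 <= tile_size - border and y1 <= tile_size - border)

def keep_detections(boxes, tile_size=256, border=50):
    # Detections touching the 50 px boundary region are discarded; because the
    # overlap (100 px) is twice the border, the same feature reappears well
    # inside a neighbouring tile and is not lost.
    return [b for b in boxes if outside_boundary(b, tile_size, border)]

tile_boxes = [(10, 120, 40, 150), (100, 100, 130, 130)]  # (x0, y0, x1, y1)
print(keep_detections(tile_boxes))  # only the second box survives
```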
According to various embodiments, the size of the overlap regions 314a and 314b or the boundary region 312 defined for an image block may be determined based on downstream processing and the nature of the image being processed. For example, various embodiments relate to processing image blocks using a machine learning model such as a CNN. Because of the convolution filters involved, the response of a CNN process is often less accurate around the edges of an image. Thus, if the network includes, for example, four layers of 2x convolutional downsampling, then according to one embodiment, 16 (i.e., 2⁴) pixels may be trimmed from the boundaries of any CNN results, leaving only the middle portion of the image. However, such boundary or overlap regions may depend on, for example, the nature of the object being identified, the ability to process that type of object near the edges of the image, how much information may be discarded due to downsampling, or how important this data is in the first place (i.e., how much missing information can be inferred from a subset of the remaining information based on the strength of correlations within the image, etc.).
According to some embodiments, image blocks may be defined differently depending on, for example, the type of object to be identified therein and the particular process used to identify the specified object. For example, to identify (e.g., segment) the wires from the SEM image 302, the input image 302 may thereby define a stitched map of non-overlapping, uniformly sized image blocks 304, thereby minimizing the number of image blocks to be processed. The via identification process (e.g., via detection), on the other hand, may involve the definition of image blocks that include the overlap regions 314a and 314b, thereby leveraging the various benefits thereof (e.g., reducing false positives, improving detection, the merging processes described below, etc.) based on the nature and/or size of the vias in the image. Furthermore, such embodiments may be complementary to the accurate identification of different object types. For example, while sheared wires taken alone may be troublesome for conventional recognition systems or processes associated with non-overlapping images, according to various embodiments, automatic (e.g., digitally performed) cross-referencing between the individual outputs of the individual processes may, when performed in conjunction with a via recognition process employing overlap regions, reduce or eliminate errors caused by erroneously identified vias and/or wires.
It should be appreciated that the same image (e.g., the SEM image 302) may undergo different preprocessing steps for different object types, according to different embodiments. For example, and as described above, different wire segmentation and via detection processes may employ different image blocks defined from the same input image 302. For example, the input image may have defined therefrom a 20 × 20 array of non-overlapping image blocks for subsequent independent processing. The same input image 302 may also define a 25 × 25 array of overlapping image blocks for via detection, corresponding to the same total area as the 20 × 20 block array for wire segmentation, where each via detection block has the same size as the wire segmentation blocks but, because of the block overlap defined for via detection, the via detection array contains more blocks. Of course, such a difference in array size may correspond to, for example, the ratio between the block size and the overlap region defined thereby, which may be defined automatically and/or based on the current application.
Such image blocks may be used as inputs to various machine learning processes, architectures, or models, according to different embodiments. As will be appreciated by those skilled in the art, a machine learning model may require training to adequately detect one or more object types. That is, the machine learning model may receive training images as input prior to deployment on unknown samples to perform a recognition or inference process. For example, user-labeled images (e.g., SEM image blocks with previously identified IC features, non-limiting examples of which may include segmented wires or diffusion regions, detected wires or vias, etc.) may be used as a training set on which respective machine learning models are developed. The effectiveness of training generally depends on the number, quality, and general representativeness of the images on which the model is trained. However, depending on, for example, the nature, sensitivity (e.g., privacy-related issues) and/or richness of such images, or the ease or cost of their acquisition, the number of images available for training may be limited.
To this end, various means of generating multiple training images from a single input are known. For example, it is not uncommon to generate multiple images with different brightness, color, and orientation adjustments (collectively referred to herein as image transformations) from the same input image in order to increase the robustness of a machine learning model with limited training data. Such methods may be used, for example, in natural image perception applications. However, the various embodiments described herein contemplate selecting specified image transformations to apply to training images based on the particular application at hand. That is, while some embodiments described herein relate to selectively applying a specified machine learning process, architecture, or model for a selected specified object type, some embodiments also relate to selecting specified image transformations to apply to training images to achieve an efficient learning process for a machine learning model. While conventional practice may dictate, for example, applying any and all available transformations to augment an input image so as to generate a large number of training images, the different embodiments described herein involve performing a subset of the available image transformations on the input image, for example, to save the computation time and costs associated with training a machine learning model, while also improving the resulting model by reducing unrealistic "noise" and the like during training. As a non-limiting example, conventional practice may involve applying numerous rotational transformations to an input image (e.g., rotating the same image in 1°, 2°, or 5° increments up to 360°) to generate many rotationally transformed training images. While this may be beneficial for natural image recognition processing, where a model may attempt to recognize faces or other common objects at, for example, any number of angles in an image, it is not necessarily beneficial for other applications. For example, for the identification of IC features that are typically horizontally and/or vertically aligned, there may be little benefit to training a machine learning model on images with features rotated, for example, 25° from horizontal. Similarly, there may be little benefit in training a model for an autonomous vehicle to identify an upside-down pedestrian.
Thus, and in accordance with various embodiments, the training of the machine learning process may be application dependent. For example, rather than applying any and all transformations to the input image blocks for training, the model may be trained on a plurality of labeled image blocks subjected to rotations in 90° increments, such that the features remain oriented horizontally or vertically. According to some embodiments, similarly selective transformations may be applied to a limited training set of images to effectively train a machine learning model in an application-specific manner. For example, an image block of an IC as described above may be subjected to horizontal and vertical reflections to simulate different but realistic circuit feature distribution scenarios. By contrast, for processes related to autonomous driving and pedestrian recognition, the training image transformations may selectively omit vertical reflections or 180° rotations. SEM image blocks, on the other hand, may be subjected to various intensity and/or color distortions or enhancements to simulate realistic SEM imaging results across an IC. In one embodiment, this is accomplished by adding image noise, wherein the brightness of pixels (e.g., each pixel) is increased or decreased according to a specified noise profile (e.g., pixel intensity offsets between -5 and +5). Thus, according to various embodiments, a limited dataset of training images may be expanded to improve machine learning training efficiency and/or quality and final model performance in an application-specific manner.
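For illustration, an application-specific augmentation of this kind might be sketched as follows, assuming 8-bit grayscale blocks; the ±5 intensity range follows the example above, while the function name and flip probabilities are assumptions.

```python
import numpy as np

def augment(block: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Axis-preserving augmentation: 90-degree rotations, flips, mild noise."""
    out = np.rot90(block, k=int(rng.integers(0, 4)))  # 0/90/180/270 deg only
    if rng.random() < 0.5:
        out = np.fliplr(out)                          # horizontal reflection
    if rng.random() < 0.5:
        out = np.flipud(out)                          # vertical reflection
    noise = rng.integers(-5, 6, size=out.shape)       # per-pixel offset in [-5, +5]
    return np.clip(out.astype(np.int16) + noise, 0, 255).astype(np.uint8)

rng = np.random.default_rng(seed=0)
block = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
augmented = [augment(block, rng) for _ in range(8)]  # expanded training set
```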
For exemplary purposes, the following description refers to the use of a corresponding machine learning model to identify wires and vias from SEM images. However, it should be appreciated that similar or analogous training methods and/or models may be employed to identify different types of IC features (e.g., diffusion regions, etc.) or indeed general or natural image object types (e.g., vehicles, signs, faces, objects, etc.), according to different embodiments. While the various aspects described below relate to training of machine learning models or processes, which do fall within the scope of some of the different embodiments contemplated herein, it should be understood that various other embodiments relate to the use of corresponding machine learning models or processes that have been trained to identify various objects and/or object types from images. For example, different embodiments relate to identifying (e.g., segmenting) a wire from an SEM image using a first trained machine learning model, and identifying (e.g., detecting) a via or a portion thereof from the same SEM image using a second different trained machine learning model to output respective data sets corresponding thereto. In some embodiments, such outputs may be further combined or otherwise combined (e.g., in a netlist) for generating a polygonal representation of an object in an image, etc.
According to some embodiments described below, HRNet is used as an exemplary machine learning framework, in which a machine learning model is trained on 21 high resolution SEM images of seven (7) different types of ICs for 100 epochs. If the validation loss stops decreasing for more than 2 epochs, the learning rate is decayed by a factor of 0.1. An Adam optimization process is adopted, with an initial learning rate of 0.001 and a weight decay of 10⁻⁸. With respect to wire segmentation, the reported results relate to an evaluation of the segmentation results on a dataset containing 21 SEM images from the 7 IC types represented in the training SEM images. For embodiments related to via detection, an additional network is employed as the feature extraction process. For example, the different embodiments described herein relate to employing HRNet or ResNet to extract features, with a Faster R-CNN network applied as an object detection network using the features provided by HRNet or ResNet. The network was trained for 150 epochs using 100 high resolution SEM images from eleven (11) different ICs. For such processing, stochastic gradient descent (SGD) optimization was used, with an initial learning rate of 0.001, a 10-fold decay every 30 epochs, a momentum of 0.9, and a weight decay of 5×10⁻⁴. The evaluation of such processing reported herein is for a dataset containing 20 high resolution SEM images from the 11 ICs used in training. It should be understood, however, that these embodiments are presented for exemplary purposes only, and that various other machine learning architectures, learning parameters, and evaluation metrics may be employed, and such embodiments are expressly contemplated herein in accordance with different embodiments. For example, different machine learning and/or CNN architectures may be employed depending on the particular needs of the object recognition application. That is, depending on, for example, the complexity of the image, the type of object to be identified, etc., machine learning processes or frameworks including different layers, depths, abstraction procedures, etc., or epochs, momenta, weights, etc., may be employed without departing from the general scope and nature of the present disclosure.
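As a hedged illustration, the hyperparameters quoted above might be expressed in PyTorch as follows; the stand-in models wire_net and via_net are assumptions, not the HRNet or Faster R-CNN implementations themselves.

```python
import torch
import torch.nn as nn

wire_net = nn.Conv2d(1, 1, 3, padding=1)  # stand-in for the segmentation model
via_net = nn.Conv2d(1, 1, 3, padding=1)   # stand-in for the detection model

# Wire segmentation: Adam, lr 0.001, weight decay 1e-8; decay lr by 0.1
# when the validation loss stops decreasing for more than 2 epochs.
wire_opt = torch.optim.Adam(wire_net.parameters(), lr=1e-3, weight_decay=1e-8)
wire_sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    wire_opt, mode="min", factor=0.1, patience=2)

# Via detection: SGD, lr 0.001, momentum 0.9, weight decay 5e-4,
# with a 10-fold learning-rate decay every 30 epochs.
via_opt = torch.optim.SGD(via_net.parameters(), lr=1e-3,
                          momentum=0.9, weight_decay=5e-4)
via_sched = torch.optim.lr_scheduler.StepLR(via_opt, step_size=30, gamma=0.1)

# Per epoch: call wire_sched.step(val_loss) and via_sched.step() after
# the respective optimizer steps.
```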
According to some embodiments, and as described above, the various systems and processes described herein relate to identifying IC features from SEM images. In some embodiments, this involves segmenting wires and detecting vias (and/or via locations) from image blocks defined from SEM images of an IC layer using corresponding machine learning processes, models, and/or machine learning architectures. That is, a first machine learning process, architecture, and/or model may be used to identify objects of a first type (e.g., segmenting wires), and a second machine learning process, architecture, and/or model may be used to identify objects of a second type (e.g., detecting vias). It should be appreciated, however, that the terms "first" and "second" should not be interpreted as implying any required sequential order of any particular form (e.g., one requiring execution before the other), but rather as distinguishing between structures, processes, or models. The first and second architectures (and indeed any additional machine learning architectures) may be employed in any order and/or in parallel. For example, two or more processes may be performed in parallel, or the second process may be performed before the first, depending on the machine learning architecture employed, the network configuration, and/or the associated computing resources.
In some embodiments, the wires may be segmented according to a first machine learning architecture (e.g., an HRNet CNN architecture). In some such embodiments, the SEM image may first be preprocessed to define image blocks, as described above. For example, an SEM image of an IC may be divided into non-overlapping image blocks of 256×256 pixels. For training, the first ML process may downsample each input image block through two CNN layers to a feature map of 1/4 the original input size, e.g., with a stride of 2. According to some embodiments, since high-level semantic features (i.e., the information carried by low-resolution feature maps) may not be important for SEM image segmentation, the second CNN layer may have a modified stride (e.g., stride = 1) so that the network extracts texture information from a higher-resolution feature map. For example, for an SEM block size of 256×256 pixels, the first two CNN layers of the network may generate feature maps of size 128×128 pixels. These feature maps may be used to generate new feature maps by interpolation (e.g., at the beginning of each stage), at 1/2 the size of the smallest feature map of the previous stage. According to some embodiments, the blocks of a particular stage of the machine learning process may extract features from different resolution representations simultaneously, where a processing block may contain, for example, three layers, each layer followed by a batch normalization layer and, in some embodiments, a ReLU activation layer. In still other embodiments, residual connections may be added within each processing block for efficient training.
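A hedged sketch of the modified stem described above follows: the first convolution downsamples with stride 2 while the second uses stride 1 (rather than the usual stride 2), so a 256×256 block yields 128×128 feature maps. The channel counts are assumptions.

```python
import torch
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1),   # 256 -> 128
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),  # stays at 128
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 1, 256, 256)  # one grayscale 256x256 SEM block
print(stem(x).shape)             # torch.Size([1, 64, 128, 128])
```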
According to some embodiments, different processing stages may include different numbers of processing blocks. For example, a third stage of a CNN network may include 12 CNN blocks, while a second stage may include 9 CNN blocks. However, according to different embodiments, different block counts may be employed depending on various application-specific parameters.
According to some embodiments, the different-resolution feature maps output from the blocks may be combined at the end of each machine learning stage by, for example, interpolation-based upsampling and downsampling. The largest output feature map from the previous stage may be upsampled to the same size as the original input image and fed as input to the subsequent recognition layer. According to some embodiments, the final recognition layer may include a convolution with kernel = 1 and stride = 1. This layer may output, for example, a binary segmentation result for the input SEM image block. While various loss functions may be evaluated during training, one embodiment involves the evaluation of a loss function for the wire segmentation model corresponding to a pixel-level binary cross entropy function associated with the following expression, where y_gt corresponds to the ground truth label and y_pred is the predicted label:
L_wire(y_gt, y_pred) = -(y_gt log(y_pred) + (1 - y_gt) log(1 - y_pred))
As described above, various embodiments relate to post-processing or refinement of the output data from machine learning processes, architectures, and/or models. For example, for segmenting wires from SEM images, certain applications may require subjecting the output of the model to a refinement process or refiner, e.g., to reduce or eliminate Electrically Significant Differences (ESD), to improve the aesthetic quality of the segmentation output, and so on. As referred to herein, an ESD may include a short or open circuit that can alter the electrical function or connectivity of the extracted circuit, for example through an incorrectly segmented wire. It should be appreciated that other evaluation metrics may be employed, such as pixel-level classification accuracy and intersection over union (IoU). However, misclassified or mis-segmented pixels do not necessarily result in an IC short or open circuit, and therefore do not necessarily affect ESD metrics.
Figs. 4A and 4B show illustrative outputs from a first image recognition process for recognizing wires from an SEM image block, wherein ESD arises from isolated wire pixels, and wherein the open circuits A and B shown in the boxes are produced by isolated pixels, according to some embodiments. In these examples, False Positive (FP) lines correspond to pixels incorrectly labeled as wires in the prediction, while False Negative (FN) lines correspond to pixels incorrectly labeled as background pixels in the prediction. True Positive (TP) lines correspond to correctly labeled pixels. According to some embodiments, ESD may be eliminated or reduced using a refiner, by incorporating isolated pixels into nearby wires or by removing isolated pixels from consideration as wires.
According to some embodiments, a refiner or refinement process described herein may include reclassifying pixels (e.g., each pixel) of a machine learning model output (e.g., a segmentation output) based on the recognition results of neighboring pixels (e.g., the segmentation values of neighboring pixels) and/or their feature values. Such processing may be performed using, for example, a GPU or other processing resource, and convolution operations may be employed according to some embodiments. For example, while some embodiments relate to refining pixels based on various non-convolution processes, some embodiments relate to refiners that include aspects representable in pseudocode, in which convolution principles are employed to refine pixel values based on the characteristic pixel values of the pixels adjacent to the pixel to be refined. In one non-limiting example, for a pixel p, a kernel K selects k²−1 neighbors around p (e.g., the k²−1 nearest neighbors of p). The elements of K are initialized to a value of 1, except for the center element. Since, for example, the pixel values in the segmentation result are binary, the characteristic value of the neighborhood (a non-limiting example of which is its convolution output) may be equal to, for example, the number of wire pixels around p. Thus, and in accordance with some embodiments, a threshold may be set, wherein p may be reclassified based on whether the characteristic pixel value and/or the convolution output is greater or less than the threshold. For example, if the output is greater than a threshold value (k²−1) × t, then the pixel p may be reclassified as a wire pixel. Conversely, if it is below the threshold, p may be reclassified as background, for example.
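A hedged sketch of such a convolution-based neighbor refiner follows, using the example values k = 7 and t = 0.5 discussed further below; the binary mask convention (1 = wire, 0 = background) and the function name are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def refine(mask: np.ndarray, k: int = 7, t: float = 0.5) -> np.ndarray:
    """Reclassify each pixel from the count of wire pixels among its neighbors."""
    kernel = np.ones((k, k), dtype=np.float32)
    kernel[k // 2, k // 2] = 0.0  # exclude the center pixel itself
    # For a binary mask, the convolution equals the number of wire neighbors.
    neighbors = convolve2d(mask.astype(np.float32), kernel, mode="same")
    threshold = (k * k - 1) * t   # (k^2 - 1) * t
    refined = mask.copy()
    refined[neighbors > threshold] = 1   # reclassified as wire
    refined[neighbors < threshold] = 0   # reclassified as background
    return refined
```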
In some embodiments, the refiner may be an independent refiner operable, for example, on a segmented image to refine the segmentation values of its pixels. In other embodiments, the refiner may be a component or element used in connection with other aspects of a system or apparatus related to the generation of segmentation results. For example, a refiner of a system or apparatus may receive as input a segmentation output from a first machine learning model or process executed via a first machine learning architecture of the system or apparatus. Similarly, the refinement process may constitute a standalone process, or may be defined as one or more steps of a larger process. For example, one embodiment relates to a refinement process performed in conjunction with image analysis steps that produce segmentation outputs from machine learning models and/or processes as described above.
According to some embodiments, a second machine learning process, model, network, and/or architecture may be employed in parallel with, before, or after the first machine learning process to identify a second object type from the image. For some exemplary embodiments, this may involve identifying vias from SEM IC images (e.g., the same images from which the first machine learning process identifies wires) to ultimately establish their connectivity or relative placement. According to different embodiments, the second machine learning architecture differs from the first machine learning process (e.g., it uses a different CNN procedure, a different architecture or network, a different layer configuration or parameter weights, a different network or model, and/or a different training regimen than the first network). This may be beneficial, for example, where different object types are more usefully identified according to different procedures (e.g., detection, segmentation, classification, etc.), or where different objects are preferably reported in different formats, perform differently under a common metric, and/or relate to application-specific value indicators. For example, by processing images according to a specified machine learning architecture to identify a given object type, the recognition of the corresponding machine learning model for that object type may be made robust, thereby increasing the reusability of the model for identifying the object type across images, and thus reducing the time and cost associated with applications that require the processing of many images (e.g., industrial reverse engineering applications).
For example, but not limited to, while the outputs from the wire segmentation process exemplified above may be valuable where accurate connectivity or continuity is indicated, these aspects may be less important for via detection, where accurate reporting of via locations may be relatively more valuable than, for example, the size or shape of the vias. Thus, a distinct, carefully tailored machine learning architecture or model may be employed to accurately extract the most relevant or valuable information based on the object type or the current application. Further, the second process may employ different preprocessing aspects than those employed by the first machine learning process, and/or employ different images or image blocks. For example, while the first segmentation process may define non-overlapping image blocks, a machine learning process for detecting vias may, for example, preprocess the SEM image to define overlapping image blocks, e.g., to minimize false positives, or to merge or otherwise combine results from image blocks with specified refinement and/or post-processing steps without excessive duplicates, false negatives, or false positives, in accordance with various embodiments.
For one embodiment related to detecting vias from SEM images, the second machine learning process or architecture may include a framework similar to the first architecture described above. For example, a particular CNN network (e.g., HRNet) may be particularly suited for certain tasks and/or well developed and/or suited for certain types of images (e.g., extracting features from SEM images), and thus may be shared between different machine learning architectures. With respect to via detection in IC SEM images, and according to one embodiment, the second machine learning architecture may thus include an HRNet framework similar to that described above with respect to wire segmentation. However, such a second architecture may include unique elements or models, be trained differently, and/or include different outputs, layers, and/or modules, as well as additional or alternative sub-processes.
For example, compared to the embodiments described above with respect to wire segmentation using HRNet, embodiments for via detection may include outputting the smallest feature map together with one or more downsampled feature maps from previous stages as inputs to a subsequent network (e.g., a region proposal network, etc.) to detect vias of different sizes. Furthermore, and in accordance with some embodiments, additional processes may be applied along the way. In one embodiment, this may involve using Faster R-CNN as the region proposal and object detection head. However, contrary to conventional approaches, application-specific layers may be applied. For example, according to one embodiment, this may include replacing the ROI pooling layer with an ROI alignment layer (e.g., as proposed by Mask R-CNN), because the ROI alignment layer can sample the proposed regions from the feature map more accurately using interpolation techniques. In still other embodiments, such second ML processing may include using various object detection pipelines, such as one using ResNet as the feature extraction framework.
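By way of a hedged illustration using the torchvision detection API, the wiring of a feature-extraction backbone into a Faster R-CNN head with RoIAlign might look as follows. The trivial convolutional backbone is a stand-in assumption for the HRNet-style extractor, and the anchor sizes and normalization values are illustrative only.

```python
import torch
import torch.nn as nn
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# Stand-in backbone; FasterRCNN only requires an `out_channels` attribute.
backbone = nn.Sequential(nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU())
backbone.out_channels = 64

model = FasterRCNN(
    backbone,
    num_classes=2,  # background + via (single-class detection)
    rpn_anchor_generator=AnchorGenerator(sizes=((8, 16, 32),),
                                         aspect_ratios=((1.0,),)),
    # RoIAlign (interpolation-based sampling) in place of RoI pooling.
    box_roi_pool=MultiScaleRoIAlign(featmap_names=["0"],
                                    output_size=7, sampling_ratio=2),
    image_mean=[0.5], image_std=[0.5],  # single-channel normalization
)

model.eval()
with torch.no_grad():
    preds = model([torch.rand(1, 256, 256)])  # list of {boxes, labels, scores}
```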
As noted above with respect to the first ML process, the second ML model generation process may also involve the evaluation of various loss functions. One embodiment, however, includes the evaluation of a loss function of the following form, where L_rpn is the loss of the region proposal network in Faster R-CNN, and L_box is the bounding box regression loss:
L_via = L_rpn + L_box
According to various embodiments, the output from the second machine learning architecture or model may undergo refinement processing. Depending on, for example, the nature of the identified objects, the refiner may be similar to those described above with respect to the first refinement process (e.g., for segmentation output), or may include different elements or processes. For example, according to some embodiments, the second machine learning model may output a list of prediction boxes and associated confidence scores corresponding to the objects (e.g., vias) detected in the image. Objects with a low associated confidence score may first be discarded (e.g., vias associated with a confidence score < 0.6), while those with sufficient confidence scores may form the final output of the recognition process.
As described above, features detected within a specified boundary region (e.g., the boundary 312) of an image or image block may also be discarded during the refinement process. For example, a via "box" detected within 50 pixels of an image edge (or other suitable boundary 312) may be discarded to remove incomplete edge vias or detected "via-like" objects. The refinement process may also include various additional steps. For example, if a predicted via "box" lies entirely within the boundary region, it may be treated as equivalent to a feature detected with a low confidence score (e.g., < 0.6 or another suitable threshold), and may therefore be discarded.
The refinement process may additionally or alternatively include a merging step. For example, and in accordance with some embodiments, the via detection task may involve the definition of overlapping image blocks from the SEM image. In this case, object predictions in the overlap regions may be further considered. In one embodiment, the refiner may detect overlapping predictions in neighboring blocks (e.g., overlapping "boxes" corresponding to via predictions), where the degree of overlap is considered in order to estimate whether the predictions correspond to different vias, or indeed to the same via detected in two blocks sharing a common region. For example, according to one embodiment, the refiner may compare the intersection over union (IoU) of two predictions with a threshold (e.g., 30% overlap). If the intersection is greater than the threshold, the predictions may be considered the same object, and the prediction with the highest confidence score may be retained while the other is discarded. According to some embodiments, this may, for example, reduce false positives. It should be appreciated that other logic or similar steps may be employed automatically for refinement in accordance with various embodiments.
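A hedged sketch of this merge step follows, combining the confidence filter (score ≥ 0.6) and the IoU-based deduplication (threshold 0.3) described above; the global-coordinate box format (x0, y0, x1, y1, score) is an assumption.

```python
def iou(a, b):
    """Intersection over union of two (x0, y0, x1, y1, ...) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def merge_predictions(boxes, iou_thresh=0.3, min_score=0.6):
    confident = [b for b in boxes if b[4] >= min_score]  # confidence filter
    kept = []
    # Visit boxes from highest to lowest score; a box overlapping an already
    # kept box beyond the IoU threshold is treated as the same via and dropped.
    for box in sorted(confident, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) <= iou_thresh for k in kept):
            kept.append(box)
    return kept
```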
The following description relates to an evaluation of the performance of one embodiment of the described systems and methods, with reference to the first and second machine learning processes or architectures described above for performing the respective recognition processes for the first and second object types (i.e., wire segmentation and via detection, respectively). It should be understood, however, that such processes and systems are provided for exemplary purposes only, and that various other processes or systems may be employed for similar or different object types and/or applications, according to different embodiments. For example, but not limited to, while HRNet-3 is used as the machine learning backbone for both machine learning architectures described above, HRNet-4 or HRNet-5 (having different numbers of stages than HRNet-3) may be used for, for example, different IC SEM image complexities or recognition challenges, feature distributions or types, and so on. Similarly, according to other embodiments, different machine learning architectures or processes may be employed and/or trained depending on, for example, the object to be detected (such as a natural object).
As described above, the first and second machine learning models may be trained using selected and/or augmented training data. Different embodiments, however, relate to methods and systems for identifying different objects in an image using previously trained machine learning identification models. Thus, while the following embodiments relate to the use of a machine learning platform trained in accordance with the above-described exemplary aspects, it should be appreciated that corresponding machine learning models trained similarly or differently may equally be applied to identify each of a plurality of object types.
A visualization of wire segmentation from SEM image blocks (i.e., a visualization of a dataset output from a first machine learning identification model identifying a first object type) is presented in fig. 5A. In this example, the segmentation output is displayed in the right panel of each pair of corresponding images (i.e., the second and fourth columns from the left), with the background displayed in black and the wires marked in white. In this case, and according to various embodiments, the first machine learning identification model does not distinguish between wires and vias. That is, the vias and wires in the SEM images (the left image in each image pair, i.e., the first and third columns from the left) are marked with the same segmentation value (e.g., 1) in the segmentation output. According to some embodiments, this aspect (i.e., wires and vias having the same segmentation value) may allow vias to be subsequently separated from wires while minimizing the reliance on intensity thresholds that challenges conventional approaches, thereby improving wire predictions.
Fig. 5B illustrates further examples of wire segmentation results according to one embodiment. In this example, the reported ESD statistics correspond to the entire 1920×1080 SEM image from which the illustrated exemplary image blocks are defined. The leftmost column of image blocks corresponds to SEM image blocks, the middle column corresponds to image blocks processed according to the segmentation process adapted from Lin et al, and the rightmost column corresponds to output obtained using the first machine learning model trained as described herein.
According to some embodiments, such output results may be quantitatively evaluated. For example, table 1 summarizes the results of two machine learning identification models for identifying a first object type corresponding to segmentation of a wire from an SEM image, in accordance with some embodiments. In this case, two different machine learning frameworks (HRNet-3 and HRNet-4) are compared to the reference process adapted from Lin et al, each of which has been tested as an exemplary first machine learning identification framework.
Table 1: wire segmentation results
In this example, one difference between the two HRNet models of the first machine learning architecture is the number of stages used in the platform. Although the pixel-level classification accuracy and IoU results are similar for both trained models, the performance gap in average ESD is large, as differently segmented pixels result in different numbers of shorts or opens in the circuit extracted from the segmentation. Thus, depending on the needs of a particular application, performance criteria, computational requirements or access, or the type of object to be identified, a user may employ a preferred architecture for the first machine learning identification process. For example, any of the models described in Table 1 may be used as the first machine learning architecture for identifying the first object type, but a user not subject to computational constraints may choose to perform the wire segmentation process based on the HRNet-3 model because it produces the least ESD.
Table 2 shows exemplary results of the refinement process described above (i.e., the convolutional k-NN refiner) applied to wire segmentation from SEM IC images, according to some embodiments. That is, a convolutional refiner based on neighboring pixels is applied to the rough segmentation results generated by the CNN network. In this example, with k = 7 and t = 0.5, ESD in the rough segmentation results is reduced by 15.6%. According to another embodiment, the selection of k = 7 and t = 0.75 effectively reduces ESD in the segmentation results generated using HRNet-3. The latter example demonstrates a per-circuit reduction in ESD, highlighting that the refiners described herein are able, according to various embodiments, to automatically identify object types with high reliability without manual adjustment of the parameters of the identification process (e.g., manual tuning of the kernel size or threshold). This, moreover, contributes to a robust model that is reusable across images. In the non-limiting example of Table 2, RR refers to the reduction rate, i.e., the percentage by which ESD is reduced by the refinement process described herein.
Table 2: Neighbor refiner results for wire segmentation
An input image comprising an SEM IC image may comprise a large number of relatively constant texture components that are relatively sparse in the frequency domain. Thus, different embodiments may additionally or alternatively relate to the application of frequency domain machine learning processes or models. That is, different machine learning models (e.g., the first, second, or third machine learning models employed according to different embodiments) may incorporate one or more frequency domain processes, for example, to output a dataset representative of the identified objects or their types. For example, some embodiments involve combining such processing with HRNet to ultimately identify objects. According to one embodiment, HRNet-based processing as described above may be combined with frequency domain processing such as that disclosed in Xu et al (Kai Xu, Minghai Qin, Fei Sun, Yuhao Wang, Yen-Kuang Chen and Fengbo Ren, "Learning in the frequency domain", IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1737-1746, DOI:10.1109/CVPR42600.2020.00181, 2020).
According to this non-limiting embodiment, frequency domain learning may reveal spectral bias in, for example, SEM image wire segmentation, where the frequency domain processing performs, for example, a 2D Discrete Cosine Transform (DCT) over, e.g., 8×8 blocks. Thus, in the frequency domain, the transformed image may have dimensions corresponding to 64 × h/8 × w/8, where h is the height of the image in the spatial domain and w is its width. According to one embodiment, the spectral bias of a machine learning process (e.g., HRNet) for an identification task (e.g., wire segmentation) can be extracted with the aid of a dynamic channel selection module, such as that proposed by Xu et al. Exemplary results are shown in fig. 6A, where only the DC frequency channel is shown to be activated with a probability of over 99% for a given type of image (e.g., an IC SEM image for wires). According to some embodiments, such results may indicate that only a particular channel or certain channels contain information relevant to a particular identification task (e.g., wire identification). Furthermore, according to some embodiments, the process may employ only the channels deemed important for recognition as input for model training, testing, and/or recognition, thereby improving process flow and/or efficiency. Exemplary results associated with this approach are set forth in Table 3. In this example, a model trained using only the DC frequency channel achieves higher pixel-level accuracy and IoU than a machine learning process that takes all frequencies into account (i.e., HRNet-3), indicating that removing the "noise" information in the other frequency channels may improve the performance of pixel classification, segmentation, or identification of other modalities, depending on the current application.
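A hedged sketch of the 8×8 block-wise 2D DCT rearrangement described above follows: an (h, w) image becomes a (64, h/8, w/8) tensor of frequency channels, with channel 0 the DC component. The use of the orthonormal DCT-II and the function name are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct(image: np.ndarray, b: int = 8) -> np.ndarray:
    """Rearrange an (h, w) image into (b*b, h/b, w/b) DCT frequency channels."""
    h, w = image.shape
    blocks = image.reshape(h // b, b, w // b, b).transpose(0, 2, 1, 3)
    coeffs = dct(dct(blocks, axis=-1, norm="ortho"), axis=-2, norm="ortho")
    # Flatten each b x b coefficient block into b*b frequency channels.
    return coeffs.reshape(h // b, w // b, b * b).transpose(2, 0, 1)

img = np.random.rand(256, 256)
freq = block_dct(img)
dc_only = freq[0]  # keeping only the DC channel ~ block-wise average filtering
print(freq.shape, dc_only.shape)  # (64, 32, 32) (32, 32)
```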
Table 3: frequency domain learning results using HRNet-3
Various metrics may be employed to evaluate the recognition performance of a machine learning process or model, according to different embodiments. According to one exemplary embodiment, precision and recall may be evaluated. In this case, matches between prediction boxes and the ground truth boxes associated with vias may be found by calculating the IoU of each pair of prediction and ground truth boxes. In such embodiments, if the IoU of a prediction box and any ground truth box is greater than a specified threshold (e.g., > 0.3), then the box may be considered a properly detected via, referred to herein as a True Positive (TP) case. According to one embodiment, a ground truth box may have only one matching prediction box (e.g., the prediction box with the maximum IoU). Conversely, a prediction box that does not match any ground truth box may be considered a False Positive (FP) case, while a ground truth box without a matching prediction box may be considered a False Negative (FN) case.
According to some embodiments, precision and recall, as referred to herein, may be defined as follows, wherein precision evaluates the error rate in the predictions of the various proposed methods and/or systems, and recall evaluates the detection rate for various objects (e.g., vias):

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)
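A hedged sketch of this matching and scoring procedure follows, reusing the `iou` helper from the merge sketch above; the greedy highest-score-first matching order and the box format are assumptions.

```python
def precision_recall(preds, gts, iou_thresh=0.3):
    """Match predictions to ground truth (one match per gt box) and score."""
    matched = set()
    tp = 0
    for p in sorted(preds, key=lambda b: b[4], reverse=True):
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) > best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:   # IoU above threshold: a True Positive
            matched.add(best)
            tp += 1
    fp = len(preds) - tp       # unmatched predictions
    fn = len(gts) - tp         # unmatched ground truth boxes
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall
```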
With respect to the second machine learning identification process or model, various embodiments relate to detecting vias from SEM images, exemplary results of which are presented in Table 4. In this example, HRNet-4 is employed, wherein the smallest feature map of the final stage is further downsampled by interpolation to generate feature maps with five different resolutions as input features to the Faster R-CNN process. For the HRNet-5 results, all outputs of the final stage are used as input features for the subsequent Faster R-CNN. In these embodiments, 99.77% precision is achieved using the HRNet-5 via detection model, and 98.56% recall is achieved using the ResNet model. The proposed frameworks, in their various alternatives, improve precision, recall, and F1 metrics compared to the framework adapted from Lin et al, according to different embodiments. While the models and frameworks described in Table 4 relate to various frameworks for identifying, for example, a second object type, it should be understood that various other models, frameworks, or architectures may be employed according to different embodiments. However, it should be appreciated that some embodiments involve selecting a model, or an architecture associated therewith, that is well suited to the task at hand. For example, if the second object type relates to the identification of vias, then according to some embodiments a user may select an ML model or architecture that performs ML-based detection, rather than ML-based segmentation, as such a model may have increased robustness (i.e., reusability) across different image applications.
Table 4: Via detection results
To further evaluate aspects of the proposed systems and methods, and in accordance with some embodiments, the impact of generating overlapping blocks for object recognition (e.g., by detection) may be assessed. For example, Table 5 shows the impact of generating overlapping blocks for via detection inference. In this example, a 5.47% precision improvement and a 3.72% recall improvement are achieved with model inference on overlapping blocks, which, together with the deletion of predictions in boundary regions (e.g., the boundary 312) according to some embodiments, reduces the number of erroneously detected "via-like" objects while maintaining the robustness of the model inference.
Table 5: model inferred via inspection results with and without overlapping SEM blocks
As described above with respect to the first machine learning process or model, the second machine learning process or model may be similarly analyzed with respect to frequency domain learning for detecting the second object type. As one non-limiting example, the extracted spectral bias for via detection is shown in fig. 6B. In this case, compared to the spectral bias of wire segmentation under similar conditions, more channels are activated by the samples with a probability of more than 50%. The DC channel nonetheless retains the highest probability of being selected. In this case, preserving only the DC frequency channel corresponds to applying block-wise average filtering to the image in the spectral domain. According to some embodiments, such processing of, for example, the SEM images themselves may result in the performance reported in Table 6. In this case, the detection precision of the model trained with the block-wise average filter input is improved, but the detection recall is reduced, compared to models trained with other inputs. In general, however, performance may be similar to that of models trained with conventional inputs according to different embodiments, again highlighting the object-type-specific selection of machine learning processes, in view of the spectral and spatial-domain aspects of the first segmentation process described above.
Table 6: via detection results using block-wise averaging filter input
Output from a second recognition process for detecting vias from SEM images of ICs, according to some embodiments, is shown in Fig. 7A. In this case, boxes indicating the predicted vias are marked in the image. Fig. 7B illustrates a further example of via detection results according to another embodiment. In this example, the leftmost column corresponds to SEM image blocks, and the middle column corresponds to image blocks processed according to the method adapted from Lin et al. (i.e., a method that is neither object-type-specific nor reusable), in which vias are segmented as objects of irregular shape and size. The rightmost column corresponds to an output dataset of regularly sized and shaped vias generated using a second machine-learning-based model trained as described herein, according to various embodiments. Fig. 7C shows a similar example, in which six SEM image blocks are shown with via detection results overlaid thereon. In these examples, the dark rectangles (e.g., rectangle 702) correspond to irregularly shaped vias detected using the process described by Lin et al., while the light squares (e.g., square 704) correspond to via predictions made by a reusable ML model as described herein. Although some vias (e.g., via 706, indicated by both a dark rectangle and a light square) are detected by both methods, the reusable object-specific model outperforms the model of Lin et al., according to different embodiments.
It should be appreciated that various forms of output may be produced according to different embodiments. For example, the predicted vias may be output as a list of via locations. Further, such an output may be combined with, for example, the output from the first recognition process. In one embodiment, the image blocks, or the datasets identified therefrom (e.g., segmented wires and detected vias), are recombined to reconstitute the original input image, including the predicted labels (e.g., those labels that remain after post-processing or refinement). In another embodiment, the datasets indicative of circuit features may be combined, formatted, and/or interpreted to generate a circuit representation for future reference. In yet another embodiment, the data output from the various recognition processes may be used to automatically generate a netlist of circuit features.
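As one non-limiting sketch of such output combination, the following routine stitches per-block wire masks back into full-image coordinates, labels connected wire segments, and associates each predicted via with the wire segment beneath its center, yielding a simple connectivity list from which a netlist-style representation could be derived. The data layouts and names here (mask_blocks, via_boxes, combine_outputs) are editorial assumptions, not structures defined by the embodiments above.

# A minimal sketch: `mask_blocks` maps block origins to binary wire masks;
# `via_boxes` holds via detections in global (x0, y0, x1, y1) coordinates.
import numpy as np
from scipy import ndimage

def combine_outputs(mask_blocks, via_boxes, image_shape, block=1024):
    full = np.zeros(image_shape, dtype=np.uint8)
    for (y, x), m in mask_blocks.items():  # stitch segmentation blocks
        full[y:y + block, x:x + block] = np.maximum(
            full[y:y + block, x:x + block], m)
    labels, _ = ndimage.label(full)        # one label per wire segment
    connections = []
    for (x0, y0, x1, y1) in via_boxes:
        cy, cx = int((y0 + y1) // 2), int((x0 + x1) // 2)
        wire = int(labels[cy, cx])
        if wire:                           # via center lands on a wire
            connections.append({"via": (cx, cy), "wire": wire})
    return labels, connections

The combined (labels, connections) pair captures both recognition outputs in one structure; downstream tooling could translate the connection list into netlist entries.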
While this disclosure describes various embodiments for illustrative purposes, the description is not intended to be limited to such embodiments. On the contrary, the applicant's teachings described and illustrated herein encompass various alternatives, modifications, and equivalents without departing from the embodiments, the general scope of which is defined in the appended claims. No particular order is intended or implied for the steps or stages of any method or process described in this disclosure, except to the extent necessary or inherent to the process itself. In many cases, the order of process steps may be varied without changing the purpose, effect, or importance of the described methods.
The information shown and described in detail herein is fully capable of attaining the above-described objects of the present disclosure and its presently preferred embodiments, and thus represents the subject matter broadly contemplated by the present disclosure. The scope of the present disclosure fully encompasses other embodiments that may become obvious to those skilled in the art, and is accordingly to be limited by nothing other than the appended claims, in which any reference to an element in the singular is not intended to mean "one and only one" (unless explicitly so stated), but rather "one or more". All structural and functional equivalents to the elements of the above-described preferred and additional embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Furthermore, a system or method encompassed by the present claims need not address each and every problem the present disclosure seeks to solve. Moreover, no element, component, or method step in the present disclosure is intended to be dedicated to the public, regardless of whether the element, component, or method step is explicitly recited in the claims. It is further contemplated that various changes and modifications apparent to those of ordinary skill in the art may be made in form, material, workpiece, and details of construction without departing from the spirit and scope of the disclosure as set forth in the following claims.

Claims (73)

1. An image analysis method for identifying each of a plurality of object types in an image, the image analysis method performed by at least one digital data processor in communication with a digital data storage medium storing the image, the image analysis method comprising:
accessing a digital representation of at least a portion of the image;
identifying, in the digital representation, an object of a first object type of the plurality of object types by a first reusable identification model associated with a first machine learning architecture;
identifying, in the digital representation, an object of a second object type of the plurality of object types by a second reusable identification model associated with a second machine learning architecture; and
outputting a first object data set and a second object data set representing objects of the first object type and the second object type, respectively, in the digital representation of the image.
2. The image analysis method of claim 1, wherein one or more of the first reusable recognition model or the second reusable recognition model comprises a segmentation model or an object detection model.
3. The image analysis method of claim 2, wherein the first reusable recognition model comprises a segmentation model and the second reusable recognition model comprises an object detection model.
4. The image analysis method of any one of claims 1 to 3, wherein one or more of the first or second reusable identification models comprises an identification model having no user-adjusted parameters.
5. The image analysis method of any of claims 1-4, wherein one or more of the first reusable recognition model or the second reusable recognition model comprises a generic recognition model.
6. The image analysis method of any of claims 1 to 5, wherein one or more of the first reusable recognition model or the second reusable recognition model comprises a convolutional neural network recognition model.
7. The image analysis method according to any one of claims 1 to 6, wherein the first object type and the second object type correspond to different object types.
8. The image analysis method of any of claims 1 to 7, further comprising training one or more of the first or second reusable recognition models with a context-specific training image or digital representation thereof.
9. The image analysis method of any of claims 1 to 8, wherein the digital representation comprises a plurality of image blocks, each corresponding to a region of the image.
10. The image analysis method of claim 9, further comprising defining the plurality of image blocks.
11. The image analysis method of claim 10, wherein the image blocks are defined to include partially overlapping block regions.
12. The image analysis method of claim 11, further comprising refining the output of objects identified in the overlapping block regions.
13. The image analysis method of claim 12, wherein the refining includes performing an object merging process.
14. The image analysis method of any of claims 9 to 13, wherein the plurality of image blocks are defined differently for identifying objects of the first object type and identifying objects of the second object type.
15. The image analysis method of any of claims 9 to 14, wherein for at least a portion of the image blocks, one or more of the identifying of the object of the first object type or the identifying of the object of the second object type is performed in parallel.
16. The image analysis method of any of claims 1 to 15, further comprising post-processing at least a portion of the objects according to a refinement process.
17. The image analysis method of claim 16, wherein the refinement process comprises a convolution refinement process.
18. The image analysis method of claim 16 or claim 17, wherein the refinement process comprises a k-nearest neighbor (k-NN) refinement process.
19. The image analysis method of any of claims 1 to 18, wherein one or more of the first object data set or the second object data set comprises one or more of an image segmentation output or an object location output.
20. The image analysis method of any one of claims 1 to 19, wherein the image analysis method is automatically implemented by the at least one digital data processor.
21. The image analysis method of any one of claims 1 to 20, wherein the image represents an Integrated Circuit (IC).
22. The image analysis method of claim 21, wherein one or more of the first object type or the second object type comprises a wire, a via, a polysilicon region, a contact, or a diffusion region.
23. The image analysis method of any one of claims 1 to 22, wherein the image comprises an electron microscope image.
24. The image analysis method of any one of claims 1 to 23, wherein the image represents a corresponding region of a substrate, and the image analysis method further comprises repeatedly performing the image analysis method for each of a plurality of images, each representing a corresponding region of the substrate.
25. The image analysis method of any of claims 1 to 24, further comprising combining the first object data set and the second object data set into a combined data set representative of the image.
26. The image analysis method of any of claims 1 to 25, further comprising digitally rendering an object-identifying image from one or more of the first object data set and the second object data set.
27. The image analysis method of any of claims 1 to 26, further comprising independently training the first reusable recognition model and the second reusable recognition model.
28. The image analysis method of any of claims 1 to 27, further comprising training the first and second reusable recognition models with training images augmented by application-specific transformations.
29. The image analysis method of claim 28, wherein the application-specific transformations include one or more of image reflection, rotation, shifting, tilting, pixel intensity adjustment, or noise addition.
30. An image analysis method for identifying each of a plurality of object types of interest in an image, the image analysis method performed by at least one digital data processor in communication with a digital data storage medium storing the image, the image analysis method comprising:
accessing a digital representation of the image;
for each object type of interest, identifying each object of interest in the digital representation by a respective reusable object identification model associated with a corresponding machine learning architecture; and
outputting, for each object type of interest, a respective object data set representing the respective objects of interest in the digital representation of the image.
31. A method for digitally refining a digital representation of a segmented image defined by a plurality of pixels each having a respective pixel value, the method being performed digitally by at least one digital data processor in communication with a digital data storage medium storing the digital representation, the method comprising:
for each refinement pixel to be refined, calculating a feature pixel value corresponding to the pixel values of a specified number of neighboring pixels;
digitally comparing the feature pixel value to a specified threshold; and
assigning a refined pixel value to the refinement pixel when the feature pixel value satisfies a comparison condition with respect to the specified threshold.
32. The method of claim 31, wherein calculating the feature pixel value comprises performing a digital convolution process.
33. The method of claim 31 or claim 32, wherein the segmented image represents an integrated circuit.
34. The method of any of claims 31 to 33, wherein the digital representation corresponds to an output of a machine learning based image segmentation process.
35. An image analysis method for identifying each of a plurality of circuit feature types in an image of an Integrated Circuit (IC), the image analysis method performed by at least one digital data processor in communication with a digital data storage medium storing the image, the image analysis method comprising:
for each specified feature type of the plurality of circuit feature types:
digitally defining a feature type-specific digital representation of the image;
identifying objects of the specified feature type in the feature type-specific digital representation by a reusable feature type-specific object identification model associated with a corresponding machine learning architecture; and
digitally refining the output from the feature type-specific object recognition process according to a feature type-specific refinement process.
36. An image analysis system for identifying each of a plurality of object types in an image, the image analysis system comprising:
At least one digital data processor in network communication with a digital data storage medium storing the image, the at least one digital data processor configured to execute machine executable instructions to:
access a digital representation of at least a portion of the image;
identify, in the digital representation, an object of a first object type of the plurality of object types by a first reusable identification model associated with a first machine learning architecture;
identify, in the digital representation, an object of a second object type of the plurality of object types by a second reusable identification model associated with a second machine learning architecture; and
output a first object data set and a second object data set representing objects of the first object type and the second object type, respectively, in the digital representation of the image.
37. The image analysis system of claim 36, wherein one or more of the first reusable recognition model or the second reusable recognition model comprises a segmentation model or an object detection model.
38. The image analysis system of claim 37, wherein the first reusable recognition model comprises a segmentation model and the second reusable recognition model comprises an object detection model.
39. The image analysis system of any of claims 36 to 38, wherein one or more of the first reusable recognition model or the second reusable recognition model comprises a recognition model having no user-adjusted parameters.
40. The image analysis system of any of claims 36 to 39, wherein one or more of the first or second reusable identification models comprises a convolutional neural network identification model.
41. The image analysis system of any of claims 36 to 40, further comprising a non-transitory machine readable storage medium storing the first reusable recognition model and the second reusable recognition model.
42. The image analysis system of any of claims 36 to 41, wherein the machine executable instructions further comprise instructions for defining each of a plurality of image blocks corresponding to regions of the image.
43. The image analysis system of claim 42, wherein the image tiles include partially overlapping tile regions.
44. The image analysis system of claim 43, wherein the machine-executable instructions further comprise instructions for refining the output of objects identified in the overlapping region.
45. The image analysis system of claim 44, wherein the machine executable instructions for refining output correspond to performing an object merging process.
46. The image analysis system of any of claims 42 to 45, wherein the plurality of image blocks are defined differently for identifying objects of the first object type and identifying objects of the second object type.
47. The image analysis system of any one of claims 36 to 46, wherein the machine executable instructions further comprise instructions for post-processing at least a portion of the objects according to a refinement process.
48. The image analysis system of claim 47, wherein the refinement process comprises a convolution refinement process.
49. The image analysis system of claim 47 or claim 48, wherein the refinement process comprises a k-nearest neighbor (k-NN) refinement process.
50. The image analysis system of any of claims 36 to 49, wherein one or more of the first object data set or the second object data set comprises one or more of an image segmentation output or an object location output.
51. The image analysis system of any of claims 36 to 50, wherein the image represents an Integrated Circuit (IC).
52. The image analysis system of claim 51, wherein one or more of the first object type or the second object type comprises a wire, a via, a polysilicon region, a contact, or a diffusion region.
53. The image analysis system of any of claims 36 to 52, wherein the image comprises an electron microscope image.
54. The image analysis system of any one of claims 36 to 53, wherein the image represents a region of a substrate, and the machine executable instructions further comprise instructions for repeatedly executing the machine executable instructions for each of a plurality of images, each representing a corresponding region of the substrate.
55. An image analysis system according to any of claims 36 to 54, wherein the machine executable instructions further comprise instructions for combining the first object data set and the second object data set into a combined data set representing the image.
56. The image analysis system of any of claims 36 to 55, wherein the machine executable instructions further comprise instructions for digitally rendering an object identification image from one or more of the first object data set and the second object data set.
57. The image analysis system of any of claims 36 to 56, wherein the first and second reusable recognition models are trained with training images augmented by application-specific transformations.
58. The image analysis system of claim 57, wherein the application-specific transformations include one or more of image reflection, rotation, shifting, tilting, pixel intensity adjustment, or noise addition.
59. An image analysis system for identifying each of a plurality of object types of interest in an image, the image analysis system comprising:
a digital data processor operable to execute object recognition instructions;
at least one digital image database comprising the image to be analyzed for the plurality of object types of interest, the at least one digital image database being accessible by the digital data processor;
a digital storage medium storing, for each of the plurality of object types of interest, a different respective reusable recognition model deployable by the digital data processor and associated with a respective different machine learning architecture; and
a non-transitory computer readable medium comprising object recognition instructions that are operable, when executed by the digital data processor, to, for each specified type of the plurality of object types of interest:
access a digital representation of at least a portion of the image from the at least one digital image database;
identify at least one object of the specified type in the digital representation by deploying the respective reusable identification model for the specified type; and
output a corresponding object data set representing the at least one object of the specified type in the digital representation of the image.
60. The image analysis system of claim 59, further comprising a digital output storage medium accessible to the digital data processor for storing each of the corresponding object data sets corresponding to each of the specified types of the plurality of object types of interest.
61. The image analysis system of claim 59 or claim 60, wherein the digital data processor is operable to repeatedly execute the object recognition instructions for a plurality of images.
62. The image analysis system of claim 61, wherein each different respective reusable recognition model is configured to be repeatedly applied to the plurality of images.
63. An image analysis system for digitally refining a digital representation of a segmented image defined by a plurality of pixels each having a respective pixel value, the image analysis system comprising:
At least one digital data processor in communication with a digital data storage medium storing the digital representation, the at least one digital data processor also in communication with a non-transitory computer readable storage medium storing digital instructions that, when executed, cause the at least one digital data processor to:
for each refinement pixel to be refined, calculate a feature pixel value corresponding to the pixel values of a specified number of neighboring pixels;
digitally compare the feature pixel value to a specified threshold; and
assign a refined pixel value to the refinement pixel when the feature pixel value satisfies a comparison condition with respect to the specified threshold.
64. The image analysis system of claim 63, wherein the feature pixel values are calculated according to a digital convolution process.
65. The image analysis system of claim 63 or claim 64, wherein the segmented image represents an integrated circuit.
66. The image analysis system of any of claims 63 to 65, wherein the digital representation corresponds to an output of a machine learning based image segmentation process.
67. An image analysis system for identifying each of a plurality of circuit feature types in an image of an Integrated Circuit (IC), the image analysis system comprising:
At least one digital data processor in communication with a digital data storage medium storing the image, the at least one digital data processor also in communication with a non-transitory computer readable storage medium storing digital instructions that, when executed, cause the at least one digital data processor to, for each specified feature type of the plurality of circuit feature types:
digitally define a feature type-specific digital representation of the image;
identify objects of the specified feature type in the feature type-specific digital representation by a reusable feature type-specific object identification model associated with a respective machine learning architecture; and
digitally refine the output from the feature type-specific object recognition process according to a feature type-specific refinement process.
68. The image analysis system of claim 67, wherein the non-transitory computer readable storage medium has stored thereon the reusable feature type-specific object recognition model.
69. A non-transitory computer-readable storage medium having stored thereon digital instructions that, when executed by at least one digital data processor, cause the at least one digital data processor to, for each specified feature type of a plurality of circuit feature types:
digitally define a feature type-specific digital representation of an image;
identify objects of the specified feature type in the feature type-specific digital representation by a reusable feature type-specific object identification model associated with a respective machine learning architecture; and
digitally refine the output from the feature type-specific object recognition process according to a feature type-specific refinement process.
70. The non-transitory computer-readable storage medium of claim 69, further having stored thereon each of the reusable feature type-specific object recognition models.
71. A non-transitory computer-readable storage medium having stored thereon digital instructions that, when executed by at least one digital data processor, cause the at least one digital data processor to:
access a digital representation of at least a portion of an image;
identify, in the digital representation, an object of a first object type of a plurality of object types by a first reusable identification model associated with a first machine learning architecture;
identify, in the digital representation, an object of a second object type of the plurality of object types by a second reusable identification model associated with a second machine learning architecture; and
output a first object data set and a second object data set representing objects of the first object type and the second object type, respectively, in the digital representation of the image.
72. The non-transitory computer-readable storage medium of claim 71, further having stored thereon the first reusable identification model and the second reusable identification model.
73. A non-transitory computer-readable storage medium having stored thereon digital instructions for digitally refining a digital representation of a segmented image defined by a plurality of pixels each having a corresponding pixel value, the digital instructions when executed by at least one digital data processor cause the at least one digital data processor to:
for each refinement pixel to be refined, calculate a feature pixel value corresponding to the pixel values of a specified number of neighboring pixels;
digitally compare the feature pixel value to a specified threshold; and
assign a refined pixel value to the refinement pixel when the feature pixel value satisfies a comparison condition with respect to the specified threshold.
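Purely as an illustrative, non-authoritative sketch of the pixel refinement recited in claims 31 to 34, 63 to 66, and 73 above: a convolution over each refinement pixel's neighborhood yields a feature pixel value (here, a count of set neighbors), which is compared against specified thresholds to decide whether a refined pixel value is assigned. The 3x3 kernel, the threshold values, and the refine_mask name are editorial assumptions, not claimed parameters.

# A minimal sketch of convolution-based refinement of a binary
# segmentation mask; thresholds are illustrative only.
import numpy as np
from scipy.ndimage import convolve

def refine_mask(mask, fill_at=6, clear_at=2):
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0  # exclude the refinement pixel itself
    # Feature pixel value: number of set pixels among the 8 neighbors.
    count = convolve(mask.astype(int), kernel, mode="constant", cval=0)
    refined = mask.copy()
    refined[count >= fill_at] = 1   # comparison condition met: fill hole
    refined[count <= clear_at] = 0  # comparison condition met: clear noise
    return refined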