CN110785709B - Generating high resolution images from low resolution images for semiconductor applications


Info

Publication number
CN110785709B
CN110785709B
Authority
CN
China
Prior art keywords
resolution image
sample
low resolution
high resolution
layers
Prior art date
Legal status
Active
Application number
CN201880040444.4A
Other languages
Chinese (zh)
Other versions
CN110785709A
Inventor
S·夏尔马
A·S·达恩狄安娜
M·马哈德凡
房超
A·阿索德甘
B·达菲
Current Assignee
KLA Corp
Original Assignee
KLA Tencor Corp
Priority claimed from US 16/019,422 (US10769761B2)
Application filed by KLA Tencor Corp
Publication of CN110785709A
Application granted
Publication of CN110785709B
Legal status: Active

Classifications

    • G03F7/70675 — Latent image, i.e. measuring the image of the exposed resist prior to development
    • G03F7/70425 — Imaging strategies, e.g. for increasing throughput or resolution, printing product fields larger than the image field or compensating lithography- or non-lithography errors, e.g. proximity correction, mix-and-match, stitching or double patterning
    • G03F7/705 — Modelling or simulating from physical phenomena up to complete wafer processes or whole workflow in wafer productions
    • G03F7/70625 — Dimensions, e.g. line width, critical dimension [CD], profile, sidewall angle or edge roughness
    • H01L22/20 — Sequence of activities consisting of a plurality of measurements, corrections, marking or sorting steps
    • H01L22/30 — Structural arrangements specially adapted for testing or measuring during manufacture or treatment, or specially adapted for reliability measurements

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manufacturing & Machinery (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Power Engineering (AREA)
  • Testing Or Measuring Of Semiconductors Or The Like (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

Methods and systems are provided for generating a high resolution image of a sample from a low resolution image of the sample. One system includes one or more computer subsystems configured for acquiring a low resolution image of a sample. The system also includes one or more components executed by the one or more computer subsystems. The one or more components include a deep convolutional neural network that includes one or more first layers configured for generating a representation of the low resolution image. The deep convolutional neural network also includes one or more second layers configured for generating a high resolution image of the sample from the representation of the low resolution image. The one or more second layers include a final layer configured to output the high resolution image and configured as a subpixel convolution layer.

Description

Generating high resolution images from low resolution images for semiconductor applications
Technical Field
The present invention relates generally to methods and systems for generating high resolution images from low resolution images for semiconductor applications.
Background
The following description and examples are not admitted to be prior art by virtue of their inclusion in this section.
The fabrication of semiconductor devices, such as logic and memory devices, typically involves the processing of a substrate, such as a semiconductor wafer, using a number of semiconductor fabrication processes to form various features and multiple levels of semiconductor devices. For example, photolithography is a semiconductor manufacturing process that involves transferring a pattern from a reticle to a resist disposed on a semiconductor wafer. Additional examples of semiconductor manufacturing processes include, but are not limited to, Chemical Mechanical Polishing (CMP), etching, deposition, and ion implantation. Multiple semiconductor devices may be fabricated in an arrangement on a single semiconductor wafer and subsequently separated into individual semiconductor devices.
Inspection processes are used at various steps during the semiconductor manufacturing process to detect defects on the sample to drive higher yields in the manufacturing process and thus bring higher profits. Inspection has been an important part of manufacturing semiconductor devices. However, as the size of semiconductor devices decreases, inspection becomes even more important to the successful manufacture of acceptable semiconductor devices, as smaller defects can cause device failure.
Defect review typically involves re-detecting defects detected as such by an inspection process and generating additional information about the defects at a higher resolution using high magnification optics or scanning electron microscopy (SEM). Defect review is therefore performed at discrete locations on the specimen where defects have been detected by inspection. The higher resolution data for the defects generated by defect review is more suitable for determining attributes of the defects, such as profile, roughness, more accurate size information, etc.
Metrology processes are also used at various steps during the semiconductor manufacturing process to monitor and control the process. Metrology processes differ from inspection processes in that, unlike inspection processes where defects are detected on a sample, metrology processes are used to measure one or more characteristics of a sample that cannot be determined using currently used inspection tools. For example, metrology processes are used to measure one or more characteristics of a sample, such as dimensions (e.g., line widths, thicknesses, etc.) of features formed on the sample during the process, such that performance of the process may be determined from the one or more characteristics. Further, if one or more characteristics of the sample are unacceptable (e.g., outside of a predetermined range of characteristics), the measurement of the one or more characteristics of the sample may be used to change one or more parameters of the process such that additional samples made by the process have acceptable characteristics.
Metrology processes also differ from defect review processes in that, unlike defect review processes in which defects detected by inspection are revisited, metrology processes may be performed at locations at which no defect has been detected. In other words, unlike defect review, the locations at which a metrology process is performed on a specimen may be independent of the results of an inspection process performed on the specimen. In particular, the locations at which the metrology process is performed may be selected independently of the inspection results.
Thus, as described above, due to the limited resolution with which inspection (optical inspection and sometimes electron beam inspection) is performed, additional higher resolution images of the specimen are typically required for defect review of defects detected on the specimen, which may include verification of the detected defects, classification of the detected defects, and determination of characteristics of the defects. Furthermore, higher resolution images are typically required in metrology to determine information for patterned features as formed on the sample, regardless of whether defects have been detected in the patterned features. Defect review and metrology can therefore be time consuming processes that require use of the physical sample itself and additional tools (in addition to the inspector) to produce the higher resolution images.
However, defect review and metrology are not processes that can simply be eliminated to save time and money. For example, due to the resolution at which an inspection process is performed, the inspection process generally does not produce image signals or data that can be used to determine information about detected defects sufficient to classify the defects and/or determine their root cause. Furthermore, due to the resolution at which the inspection process is performed, the inspection process generally does not produce image signals or data that can be used to determine information about patterned features formed on the sample with sufficient accuracy.
Accordingly, it would be advantageous to form a system and method for generating a high resolution image for a sample that does not have one or more of the above-described drawbacks.
Disclosure of Invention
The following description of various embodiments is not to be construed in any way as limiting the subject matter of the appended claims.
One embodiment relates to a system configured to generate a high resolution image of a sample from a low resolution image of the sample. The system includes one or more computer subsystems configured for acquiring a low resolution image of the sample. The system also includes one or more components executed by one or more computer subsystems. The one or more components include a deep convolutional neural network that includes one or more first layers configured for generating a representation of a low resolution image. The deep convolutional neural network also includes one or more second layers configured for generating a high resolution image of the sample from a representation of the low resolution image. The one or more second layers include a final layer configured to output a high resolution image. The final layer is configured as a subpixel convolution layer. The system may be further configured as described herein.
Additional embodiments relate to another system configured to generate a high resolution image of a sample from a low resolution image of the sample. This system is configured as described above. This system also includes an imaging subsystem configured for generating a low resolution image of the sample. In this embodiment, the computer subsystem is configured for acquiring a low resolution image from the imaging subsystem. This embodiment of the system may be further configured as described herein.
Another embodiment relates to a computer-implemented method for generating a high resolution image of a sample from a low resolution image of the sample. The method includes acquiring a low resolution image of the sample. The method also includes generating a representation of the low resolution image by inputting the low resolution image into one or more first layers of the deep convolutional neural network. Further, the method includes generating a high resolution image of the sample based on the representation. Generating the high resolution image is performed by one or more second layers of the deep convolutional neural network. The one or more second layers include a final layer configured to output a high resolution image. The final layer is configured as a subpixel convolution layer. The steps of acquiring, generating a representation, and generating a high resolution image are performed by one or more computer systems. One or more components are executed by one or more computer systems, and one or more components include a deep convolutional neural network.
Each of the steps of the methods described above may be further performed as further described herein. Furthermore, embodiments of the methods described above may include any other step of any other method described herein. Additionally, the methods described above may be performed by any of the systems described herein.
Another embodiment relates to a non-transitory computer-readable medium storing program instructions executable on one or more computer systems for performing a computer-implemented method for generating a high resolution image of a sample from a low resolution image of the sample. The computer-implemented method comprises the steps of the method described above. The computer readable medium may be further configured as described herein. The steps of the computer-implemented method may be performed as further described herein. Further, a computer-implemented method in which program instructions may be executed may include any other step of any other method described herein.
Drawings
Further advantages of the present invention will become apparent to those skilled in the art by the benefit of the following detailed description of preferred embodiments and by reference to the accompanying drawings in which:
FIGS. 1 and 1a are schematic diagrams illustrating side views of embodiments of a system configured as described herein;
FIG. 2 is a block diagram illustrating one embodiment of a deep convolutional neural network that may be included in the embodiments described herein;
FIG. 3 is a schematic diagram illustrating one embodiment of a deep convolutional neural network that may be included in the embodiments described herein;
FIGS. 4 and 5 are block diagrams illustrating embodiments of one or more components that may be included in the embodiments described herein;
FIG. 6 is a block diagram illustrating one embodiment of a pre-trained VGG network that may be included in an embodiment of a context-aware loss module;
FIG. 7 includes examples of a corresponding high resolution image and low resolution image generated by an imaging system and a high resolution image generated from the low resolution image by embodiments described herein, as well as line profiles generated for each of the images;
FIG. 8 includes examples of correlations in the results of overlay measurements along the x-axis and y-axis between high resolution images generated by an imaging system and high resolution images generated by embodiments described herein, and between low resolution images generated by the imaging system and the high resolution images generated by embodiments described herein; and
Fig. 9 is a block diagram illustrating one embodiment of a non-transitory computer-readable medium storing program instructions to cause one or more computer systems to perform a computer-implemented method described herein.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. The drawings may not be to scale. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
Detailed Description
Turning now to the drawings, it should be noted that the drawings are not drawn to scale. In particular, the proportions of some of the elements of the drawings are greatly exaggerated to emphasize characteristics of the elements. It should also be noted that the drawings are not drawn to the same scale. The same reference numbers have been used to indicate elements shown in more than one figure that may be similarly configured. Unless otherwise indicated herein, any of the elements described and illustrated may comprise any suitable commercially available element.
One embodiment relates to a system configured to generate a high resolution image of a sample from a low resolution image of the sample. As further described herein, embodiments provide platform agnostic, data-driven methods and systems for producing stable and robust metrology quality images. Embodiments may also be used to produce relatively high quality denoised and super-resolved images. Embodiments may further be used to increase imaging throughput. Further, embodiments may be used to generate review images from relatively low frame count, relatively low electron-per-pixel (e/p) inspection scans. "e/p" is essentially electrons per pixel; beam conditions that achieve higher e/p yield higher image quality but lower throughput.
Embodiments described herein are applicable to electron beam (ebeam), broadband plasma (BBP), laser scattering, resolution-limited, and metrology platforms, for producing relatively high quality images from images generated by any of these platforms at much higher throughput. In other words, the images may be produced by the imaging system at a relatively high throughput, and therefore with a relatively low resolution, and subsequently transformed into relatively high resolution images by the embodiments described herein, which means that high resolution images can effectively be produced at relatively high throughput. Embodiments described herein advantageously provide learned transformations between relatively low resolution and relatively high resolution imaging manifolds, noise reduction, and transfer of quality from higher quality scans to lower quality scans. The imaging "manifold" may be generally defined as the theoretical probability space of all possible images.
As used herein, the term "low resolution image" of a sample is generally defined as an image in which not all of the patterned features formed in the area of the sample for which the image was generated are resolved. For example, some of the patterned features in the area of the sample for which the low resolution image was generated may be resolved in the low resolution image if their size is large enough to make them resolvable. However, the low resolution image is not generated at a resolution that renders all patterned features in the image resolvable. In this way, a "low resolution image" as the term is used herein does not contain information about patterned features on the sample sufficient for the low resolution image to be used for applications such as defect review, which may include defect classification and/or verification, as well as metrology. Furthermore, a "low resolution image" as the term is used herein generally refers to an image produced by an inspection system, which typically has a relatively low resolution (e.g., lower than that of a defect review and/or metrology system) in order to have relatively fast throughput. In this way, a "low resolution image" may also be commonly referred to as a high throughput or HT image. For example, to produce an image with higher throughput, the e/p and number of frames may be reduced, thereby producing a lower quality scanning electron microscope (SEM) image.
The "low resolution images" may also be "low resolution" in that they have a lower resolution than the "high resolution images" described herein. The term "high resolution image" as used herein may be generally defined as an image that resolves all patterned features of a sample with relatively high accuracy. In this way, all patterned features in the region of the sample that generated the high resolution image are resolved in the high resolution image regardless of their size. Likewise, the term "high resolution image" as used herein contains information about patterned features of a sample sufficient to make the high resolution image useful for applications such as defect review, which may include defect classification and/or verification, as well as metrology. Furthermore, the term "high resolution image" as used herein generally refers to an image that cannot be generated by an inspection system during routine operation, which is configured to sacrifice resolution capability in exchange for increased throughput. In this way, the "high resolution image" may also be referred to herein and in the art as a "high sensitivity image," which is another term for "high quality image. For example, to produce a high quality image, the e/p, frame, etc. may be increased, which produces a good quality SEM image but significantly reduces the throughput. These images are also "high sensitivity" images because they can be used for high sensitivity defect detection.
In contrast to the embodiments described further herein, earlier methods used heuristics and cherry-picked parameters to produce relatively noise-free images. Such methods are typically designed around the statistical nature of the images on which they will run, and therefore cannot be transferred to other platforms without incorporating heuristics for those platforms. Some of the well-known methods for noise reduction in images are anisotropic diffusion, bilateral filters, Wiener filters, non-local means, etc. The bilateral and Wiener filters remove noise at the pixel level using a filter designed from neighboring pixels. Anisotropic diffusion applies a diffusion law to the image, smoothing texture/intensity according to a diffusion equation; a threshold function is used to prevent diffusion across edges, so that edges in the image are largely preserved.
Earlier approaches such as Wiener filtering and bilateral filtering have the disadvantage that they are parametric approaches that require fine-tuning at the image level to obtain optimal results. These methods are not data driven, which limits the performance they can achieve on challenging imaging types. Another limitation is that most of their processing is done inline, which, due to throughput limitations, restricts the use cases in which they can be applied.
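For concreteness, the following is a minimal sketch (in Python, with NumPy/SciPy/scikit-image) of the kind of classical, parametric baselines described above: Perona-Malik anisotropic diffusion alongside Wiener and bilateral filtering. The array, function name, and parameter values are illustrative assumptions, not anything specified by the patent.

```python
import numpy as np
from scipy.signal import wiener                    # classical Wiener filter
from skimage.restoration import denoise_bilateral  # edge-preserving bilateral filter

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.2):
    """Perona-Malik style diffusion: intensity is smoothed according to a
    diffusion equation while a conductance (threshold-like) function suppresses
    diffusion across strong gradients, largely preserving edges."""
    u = img.astype(np.float64).copy()
    conductance = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # finite differences toward the four nearest neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += gamma * (conductance(dn) * dn + conductance(ds) * ds +
                      conductance(de) * de + conductance(dw) * dw)
    return u

# "noisy_lr" stands in for a noisy, low resolution grayscale image in [0, 1].
noisy_lr = np.random.rand(128, 128)
diffused = anisotropic_diffusion(noisy_lr)
wiener_out = wiener(noisy_lr, mysize=5)                       # pixel-level neighborhood filter
bilateral_out = denoise_bilateral(noisy_lr, sigma_spatial=2.0)
```

Each of these baselines exposes parameters (kappa, mysize, sigma_spatial, etc.) that must be tuned per image or per imaging type, which is precisely the fine-tuning burden noted above.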
One embodiment of a system configured to generate a high resolution image of a sample from a low resolution image of the sample is shown in FIG. 1. The system includes one or more computer subsystems (e.g., computer subsystem 36 and computer subsystem 102) and one or more components 100 executed by the one or more computer subsystems. In some embodiments, the system includes an imaging system (or subsystem) 10 configured to generate a low resolution image of the sample. In the embodiment of fig. 1, the imaging system is configured for scanning light over or directing light to a physical version of the sample while light from the sample is detected, to thereby generate images of the sample. The imaging system may also be configured to perform the scanning (or directing) and detecting with multiple modes.
In one embodiment, the sample is a wafer. The wafer may comprise any wafer known in the art. In another embodiment, the sample is a reticle. The reticle may comprise any reticle known in the art.
In one embodiment, the imaging system is an optical-based imaging system. In one such example, in the embodiment of the system shown in fig. 1, the optics-based imaging system 10 includes an illumination subsystem configured to direct light to the sample 14. The illumination subsystem includes at least one light source. For example, as shown in fig. 1, the illumination subsystem includes a light source 16. In one embodiment, the illumination subsystem is configured to direct light to the sample at one or more angles of incidence, which may include one or more oblique angles and/or one or more normal angles. For example, as shown in fig. 1, light from the light source 16 is directed through the optical element 18 and then through the lens 20 to the sample 14 at an oblique angle of incidence. The oblique angle of incidence may include any suitable oblique angle of incidence, which may differ depending on, for example, the characteristics of the sample.
The imaging system may be configured to direct light at different angles of incidence to the sample at different times. For example, the imaging system may be configured to change one or more characteristics of one or more elements of the illumination subsystem such that light may be directed to the sample at an angle of incidence different than that shown in fig. 1. In one such example, the imaging system may be configured to move the light source 16, optical element 18, and lens 20 so that light is directed to the sample at different oblique angles of incidence or normal (or near normal) angles of incidence.
In some examples, the imaging system may be configured to direct light to the sample at more than one angle of incidence at the same time. For example, the illumination subsystem may include more than one illumination channel, one of the illumination channels may include the light source 16, optical element 18, and lens 20 as shown in fig. 1, and another of the illumination channels (not shown) may include similar elements, which may be configured differently or identically, or may include at least one light source and possibly one or more other components, such as those further described herein. If such light is directed to the sample at the same time as other light, one or more characteristics (e.g., wavelength, polarization, etc.) of the light directed to the sample at different angles of incidence may be different, such that the light resulting from illumination of the sample at different angles of incidence may be distinguished from one another at the detector.
In another example, the illumination subsystem may include only one light source (e.g., source 16 shown in fig. 1) and light from the light source may be split into different optical paths (e.g., based on wavelength, polarization, etc.) by one or more optical elements (not shown) of the illumination subsystem. The light in each of the different optical paths may then be directed to the sample. Multiple illumination channels may be configured to direct light to the sample at the same time or at different times (e.g., when different illumination channels are used sequentially to illuminate the sample). In another example, the same illumination channel may be configured to direct light having different characteristics to the sample at different times. For example, in some examples, optical element 18 may be configured as a spectral filter, and the properties of the spectral filter may be changed in a number of different ways (e.g., by swapping out the spectral filter) so that different wavelengths of light may be directed to the sample at different times. The illumination subsystem may have any other suitable configuration known in the art for sequentially or simultaneously directing light having different or the same characteristics at different or the same angles of incidence to the sample.
In one embodiment, light source 16 may comprise a broadband plasma (BBP) light source. In this way, the light generated by the light source and directed to the sample may comprise broadband light. However, the light source may comprise any other suitable light source, for example, a laser. The laser may comprise any suitable laser known in the art and may be configured to generate light at any suitable wavelength or wavelengths known in the art. Further, the laser may be configured to produce monochromatic or nearly monochromatic light. In this way, the laser may be a narrow band laser. The light source may also include a polychromatic light source that generates light at a plurality of discrete wavelengths or bands of wavelengths.
Light from the optical element 18 may be focused onto the sample 14 by the lens 20. Although lens 20 is shown in fig. 1 as a single refractive optical element, it should be understood that in practice lens 20 may comprise multiple refractive and/or reflective optical elements that combine to focus light from the optical element onto the sample. The illumination subsystem shown in fig. 1 and described herein may include any other suitable optical elements (not shown). Examples of such optical elements include, but are not limited to, polarizing components, spectral filters, spatial filters, reflective optical elements, apodizers, beam splitters, apertures, and the like, which may include any such suitable optical elements known in the art. Further, the imaging system may be configured to vary one or more of the elements of the illumination subsystem based on the type of illumination to be used for imaging.
The imaging system may also include a scanning subsystem configured to cause light to be scanned over the sample. For example, the imaging system may include a stage 22 on which the sample 14 is disposed during imaging. The scanning subsystem may include any suitable mechanical and/or robotic assembly (including stage 22) that may be configured to move the sample so that the light may be scanned over the sample. In addition, or alternatively, the imaging system may be configured such that one or more optical elements of the imaging system perform some scanning of the light over the sample. The light may be scanned over the sample in any suitable manner, for example, in a serpentine-like path or in a spiral path.
The imaging system further includes one or more detection channels. At least one of the one or more detection channels includes a detector configured to detect light generated from the sample as a result of the sample being illuminated by the system and configured to generate an output in response to the detected light. For example, the imaging system shown in fig. 1 includes two detection channels, one formed by collector 24, element 26 and detector 28, and the other formed by collector 30, element 32 and detector 34. As shown in fig. 1, the two detection channels are configured to collect and detect light at different collected angles. In some examples, two detection channels are configured to detect scattered light, and the detection channels are configured to detect light scattered from the sample at different angles. However, one or more of the detection channels may be configured to detect another type of light (e.g., reflected light) from the sample.
As further shown in FIG. 1, the two detection channels are shown positioned in the plane of the paper and the illumination subsystem is also shown positioned in the plane of the paper. Thus, in this embodiment, the two detection channels are positioned in the plane of incidence (e.g., centered in the plane of incidence). However, one or more of the detection channels may be positioned out of the plane of incidence. For example, the detection channel formed by collector 30, element 32, and detector 34 may be configured to collect and detect light scattered out of the plane of incidence. Accordingly, such detection channels may generally be referred to as "side" channels, and such side channels may be centered in a plane substantially perpendicular to the plane of incidence.
Although fig. 1 shows an embodiment of an imaging system comprising two detection channels, the imaging system may comprise a different number of detection channels (e.g., only one detection channel or two or more detection channels). In one such example, the detection channel formed by collector 30, element 32, and detector 34 may form one side channel as described above, and the imaging system may include an additional detection channel (not shown) formed as another side channel positioned on the opposite side of the plane of incidence. Thus, the imaging system may include a detection channel that includes collector 24, element 26, and detector 28 and is centered in the plane of incidence and configured to collect and detect light at scattering angles that are perpendicular or near perpendicular to the sample surface. This detection channel may thus be generally referred to as a "top" channel, and the imaging system may also include two or more side channels configured as described above. As such, the imaging system may include at least three channels (i.e., one top channel and two side channels), and each of the at least three channels has its own collector, each of which is configured to collect light at a different scattering angle than each of the other collectors.
As further described above, each of the detection channels included in the imaging system may be configured to detect scattered light. Accordingly, the imaging system shown in fig. 1 may be configured for dark-field (DF) imaging of a specimen. However, the imaging system may also or alternatively include a detection channel configured for Bright Field (BF) imaging of the sample. In other words, the imaging system may include at least one detection channel configured to detect light specularly reflected from the sample. Accordingly, the imaging systems described herein may be configured for DF only, BF only, or both DF and BF imaging. Although each of the collectors is shown in fig. 1 as a single refractive optical element, it should be understood that each of the collectors may include one or more refractive optical elements and/or one or more reflective optical elements.
The one or more detection channels may include any suitable detector known in the art. For example, the detector may include a photomultiplier tube (PMT), a charge coupled device (CCD), a time delay integration (TDI) camera, and any other suitable detector known in the art. The detector may also comprise a non-imaging detector or an imaging detector. In this way, if the detectors are non-imaging detectors, each of the detectors may be configured to detect certain characteristics of the scattered light, such as intensity, but may not be configured to detect such characteristics as a function of position within the imaging plane. Thus, the output produced by each of the detectors included in each of the detection channels of the imaging system may be a signal or data, but not an image signal or image data. In such examples, a computer subsystem, such as computer subsystem 36, may be configured to generate an image of the sample from the non-imaging output of the detector. However, in other examples, the detector may be configured as an imaging detector configured to generate an image signal or image data. Thus, the imaging system may be configured to generate the images described herein in a plurality of ways.
It should be noted that fig. 1 is provided herein to generally illustrate the configuration of an imaging system or subsystem that may be included in or generate images for use with the system embodiments described herein. It is apparent that the imaging system configurations described herein can be varied to optimize the performance of the imaging system as typically performed when designing commercially available imaging systems. Further, the systems described herein can be implemented using existing systems (e.g., by adding the functionality described herein to existing systems), such as the 29xx/39xx and Puma 9xxx series of tools available from KLA-Tencor, Milpitas, California. For some such systems, embodiments described herein may be provided as optional functions of the system (e.g., in addition to other functions of the system). Alternatively, the imaging systems described herein may be of a "start-from-scratch" design to provide entirely new imaging systems.
The computer subsystem 36 of the imaging system may be coupled to the detector of the imaging system in any suitable manner (e.g., via one or more transmission media, which may include "wired" and/or "wireless" transmission media) such that the computer subsystem may receive the output generated by the detector during scanning of the sample. The computer subsystem 36 may be configured to perform a number of functions as further described herein using the output of the detector.
The computer subsystem shown in fig. 1 (as well as the other computer subsystems described herein) may also be referred to herein as a computer system. Each of the computer subsystems or systems described herein may take various forms, including a personal computer system, image computer, mainframe computer system, workstation, network appliance, internet appliance, or other device. In general, the term "computer system" may be broadly defined to encompass any device having one or more processors that execute instructions from a storage medium. The computer subsystem or system may also include any suitable processor known in the art, such as a parallel processor. In addition, a computer subsystem or system may include a computer platform with high-speed processing and software as a standalone or networked tool.
If the system includes more than one computer subsystem, the different computer subsystems may be coupled to each other such that images, data, information, instructions, etc., may be sent between the computer subsystems as further described herein. For example, computer subsystem 36 may be coupled to computer subsystem 102 by any suitable transmission medium, which may include any suitable wired and/or wireless transmission medium known in the art, as shown by the dashed lines in FIG. 1. Two or more of such computer subsystems may also be operatively coupled through a shared computer-readable storage medium (not shown).
Although the imaging system is described above as an optical or light-based imaging system, the imaging system may be an electron beam-based imaging system. In one such embodiment shown in fig. 1a, the imaging system includes an electron column 122 coupled to a computer subsystem 124. As also shown in fig. 1a, the electron column includes an electron beam source 126 configured to generate electrons focused to a sample 128 by one or more elements 130. The electron beam source may include, for example, a cathode source or an emitter tip, and the one or more elements 130 may include, for example, a gun lens, an anode, a beam limiting aperture, a gate valve, a beam current selection aperture, an objective lens, and a scanning subsystem, all of which may include any such suitable elements known in the art.
Electrons (e.g., secondary electrons) returning from the sample may be focused through one or more elements 132 to a detector 134. One or more of the elements 132 may include, for example, a scanning subsystem, which may be the same scanning subsystem included in element 130.
The electron column may comprise any other suitable element known in the art. In addition, the electron column may be further configured as described in U.S. Patent No. 8,664,594 issued to Jiang et al. in April 2014, U.S. Patent No. 8,692,204 issued to Kojima et al. on April 8, 2014, U.S. Patent No. 8,698,093 issued to Gubbens et al. on April 15, 2014, and U.S. Patent No. 8,716,662 issued to MacDonald et al. on May 6, 2014, which are incorporated by reference as if fully set forth herein.
Although the electron column is shown in fig. 1a as being configured such that the electrons are directed to the sample at an oblique angle of incidence and are scattered from the sample at another oblique angle, it is understood that the electron beam may be directed to and scattered from the sample at any suitable angle. Moreover, the electron beam-based imaging system can be configured to use multiple modes to generate images of the sample (e.g., with different illumination angles, collection angles, etc.) as further described herein. The multiple modes of the electron beam-based imaging system may be different in any image-producing parameter of the imaging system.
The computer subsystem 124 may be coupled to the detector 134 as described above. The detector may detect electrons returning from the surface of the sample thereby forming an electron beam image of the sample. The electron beam image may comprise any suitable electron beam image. The computer subsystem 124 may be configured to perform one or more functions described further herein for the sample using the output generated by the detector 134. The computer subsystem 124 may be configured to perform any additional steps described herein. The system including the imaging system shown in fig. 1a may be further configured as described herein.
It should be noted that fig. 1a is provided herein to generally illustrate the configuration of an electron beam-based imaging system that may be included in embodiments described herein. As with the optical-based imaging systems described above, the electron beam-based imaging system configurations described herein can be varied to optimize the performance of the imaging system as typically performed when designing commercially available imaging systems. Further, the systems described herein may be implemented using existing systems (e.g., by adding functionality described herein to existing systems), such as the eSxxx and eDR-xxxx series of tools available from KLA-Tencor. For some such systems, embodiments described herein may be provided as optional functions of the system (e.g., in addition to other functions of the system). Alternatively, the system described herein may be of a "start-from-scratch" design to provide an entirely new system.
Although the imaging system is described above as an optical-based or electron beam-based imaging system, the imaging system may be an ion beam-based imaging system. Such an imaging system may be configured as shown in fig. 1a, except that the electron beam source may be replaced by any suitable ion beam source known in the art. Further, the imaging system may be any other suitable ion beam-based imaging system, such as those included in commercially available Focused Ion Beam (FIB) systems, Helium Ion Microscope (HIM) systems, and Secondary Ion Mass Spectrometry (SIMS) systems.
As noted above, the imaging system is configured for scanning energy (e.g., light or electrons) over the physical version of the specimen, thereby generating an actual image for the physical version of the specimen. In this way, the imaging system may be configured as a "real" system, rather than a "virtual" system. For example, the storage media (not shown) and the computer subsystem 102 shown in FIG. 1 may be configured as a "virtual" system. In particular, the storage medium and computer subsystem are not part of the imaging system 10 and do not have any capability for processing a physical version of the specimen. In other words, in a system configured as a virtual system, the output of its one or more "detectors" may be output previously generated by the one or more detectors of an actual system and stored in the virtual system, and during "scanning", the virtual system may replay the stored output as if the sample were being scanned. In this way, scanning a sample with a virtual system may appear the same as scanning a physical sample with an actual system, although in practice the "scanning" simply involves replaying the stored output for the sample in the same manner as the sample would be scanned. Systems and methods configured as "virtual" inspection systems are described in commonly assigned U.S. Patent No. 8,126,255 issued to Bhaskar et al. on February 28, 2012 and U.S. Patent No. 9,222,895 issued to Duffy et al. on December 29, 2015, both of which are incorporated by reference as if fully set forth herein. The embodiments described herein may be further configured as described in these patents. For example, one or more of the computer subsystems described herein may be further configured as described in these patents. Further, configuring one or more virtual systems as a Central Computing and Storage (CCS) system may be performed as described in the above-referenced patent to Duffy. The persistent storage mechanism described herein may have distributed computing and storage, such as a CCS architecture, although the embodiments described herein are not limited to this architecture.
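As a purely illustrative sketch of the "virtual" system concept described above (the class and method names below are hypothetical; the referenced patents to Bhaskar et al. and Duffy et al. define the actual systems):

```python
import numpy as np

class VirtualInspectionSystem:
    """Stores detector output previously generated by a real imaging system and
    replays it on request, so that a 'scan' handles no physical specimen at all."""

    def __init__(self, stored_outputs: dict):
        # e.g., {location_or_swath_id: ndarray of previously recorded detector output}
        self._stored_outputs = stored_outputs

    def scan(self, location_id):
        # "Scanning" simply replays the stored output in the same manner in
        # which the specimen would be scanned by the actual system.
        return self._stored_outputs[location_id]

# Hypothetical usage: output recorded earlier by a real tool is replayed later.
recorded = {("die0", "swath3"): np.random.rand(512, 512)}
virtual_tool = VirtualInspectionSystem(recorded)
replayed_image = virtual_tool.scan(("die0", "swath3"))
```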
As further mentioned above, the imaging system may be configured to generate images of the sample through multiple modes. In general, a "mode" may be defined by a value of a parameter of an imaging system used to generate an image of a specimen or an output used to generate an image of a specimen. Thus, the different modes may be different in the value of at least one of the imaging parameters for the imaging system. For example, in one embodiment of the optics-based imaging system, at least one of the plurality of modes uses at least one wavelength of light for illumination that is different from at least one wavelength of light used for illumination of at least one other of the plurality of modes. The modes may be different in illumination wavelength (e.g., by using different light sources, different spectral filters, etc.) as further described herein for the different modes. In another embodiment, at least one of the plurality of modes uses an illumination channel of the imaging system that is different from an illumination channel of the imaging system used for at least one other of the plurality of modes. For example, as noted above, the imaging system may include more than one illumination channel. Thus, different illumination channels may be used for different modes.
In one embodiment, the imaging system is an inspection system. For example, the optical and electron beam imaging systems described herein may be configured as inspection systems. In another embodiment, the imaging system is a defect review system. For example, the optical and electron beam imaging systems described herein may be configured as defect review systems. In another embodiment, the imaging system is a metrology system. For example, the optical and electron beam imaging systems described herein may be configured as metrology systems. In particular, the embodiments of the imaging systems described herein and shown in fig. 1 and 1a may be modified in one or more parameters to provide different imaging capabilities depending on the application in which they are to be used. In one such example, the imaging system shown in fig. 1 may be configured to have a higher resolution if the imaging system shown in fig. 1 is to be used for defect review or metrology rather than for inspection. In other words, the embodiment of the imaging system shown in fig. 1 and 1a describes some general and various configurations for an imaging system that can be customized in a number of ways that will be apparent to those skilled in the art to produce an imaging system with different imaging capabilities that are more or less suited for different applications.
One or more computer subsystems are configured for acquiring a low resolution image of a sample. Acquiring the low resolution image may be performed using one of the imaging systems described herein (e.g., by directing a beam of light or electrons to the sample and detecting the beam of light or electrons from the sample accordingly). In this way, acquiring a low resolution image may be performed using the physical specimen itself and some sort of imaging hardware. However, acquiring the low resolution image does not necessarily involve imaging the sample using imaging hardware. For example, another system and/or method may generate a low resolution image and may store the generated low resolution image in one or more storage media, such as a virtual inspection system as described herein or another storage medium described herein. Thus, acquiring the low resolution image may include acquiring the low resolution image from a storage medium that already stores the low resolution image.
In some embodiments, the low resolution image is generated by an inspection system. For example, as described herein, a low resolution image may be generated by an inspection system configured to have a lower resolution to thereby increase its throughput. The inspection system may be an optical inspection system or an electron beam inspection system. The inspection system may have any configuration described further herein.
In one embodiment, the low resolution image is generated by an electron beam based imaging system. In another embodiment, the low resolution image is produced by an optical-based imaging system. For example, the low resolution image may be produced by any of the electron beam-based or optical-based imaging systems described herein.
In one embodiment, the low resolution image is generated by a single mode of the imaging system. In another embodiment, one or more low resolution images are generated for the sample by multiple modes of the imaging system. For example, the low resolution image input to the deep convolutional neural network (deep CNN) as further described herein may include a single low resolution image generated by only a single mode of the imaging system. Alternatively, the low resolution image input to the deep CNN may include a plurality of low resolution images generated by a plurality of modes of the imaging system (e.g., a first image generated by a first mode, a second image generated by a second mode, etc.), as further described herein. The single mode and the plurality of modes may include any of the modes further described herein.
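As a concrete (and purely illustrative) sketch of the two input options just described, single-mode and multi-mode low resolution images can be packed into a network input tensor along the channel axis; the array names below are hypothetical and the patent does not prescribe this particular packing.

```python
import numpy as np

# Hypothetical low resolution images of the same sample area, each generated by
# a different mode of the imaging system (e.g., different illumination
# wavelengths or illumination channels).
lr_mode_1 = np.random.rand(256, 256).astype(np.float32)
lr_mode_2 = np.random.rand(256, 256).astype(np.float32)

# Single-mode input: one image, one channel -> shape (batch, channels, H, W).
single_mode_input = lr_mode_1[np.newaxis, np.newaxis, :, :]

# Multi-mode input: one image per mode, stacked along the channel axis.
multi_mode_input = np.stack([lr_mode_1, lr_mode_2])[np.newaxis, :, :, :]
print(single_mode_input.shape, multi_mode_input.shape)  # (1, 1, 256, 256) (1, 2, 256, 256)
```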
A component (e.g., component 100 shown in fig. 1) executed by a computer subsystem (e.g., computer subsystem 36 and/or computer subsystem 102) includes deep CNN 104. The deep CNN includes one or more first layers configured for generating a representation of a low resolution image and one or more second layers configured for generating a high resolution image for the sample from the representation of the low resolution image. In this way, embodiments described herein may use a deep CNN as described herein (i.e., one or more machine learning techniques) to transform a low resolution image of a sample into a high resolution image of the sample. For example, as shown in fig. 2, the deep CNN is shown as image transformation network 200. During production and/or runtime (i.e., after the image transformation network has been built and/or trained, which may be performed as described further herein), the input to the image transformation network may be an input low resolution (high throughput) image 202, and the output of the image transformation network may be an output high resolution (high sensitivity) image 204.
The one or more second layers include a final layer configured to output the high resolution image, and the final layer is configured as a subpixel convolution layer. Fig. 3 illustrates one embodiment of an image transformation network architecture that may be suitable for use in embodiments described herein. In this embodiment, the image transformation network is a deep CNN with a sub-pixel layer as the final layer. In this architecture, the input may be a low resolution image 300, which is shown simply as a grid of pixels in fig. 3 and does not represent any particular low resolution image that may be produced by embodiments described herein. The low resolution image may be input to one or more first layers 302 and 304, which may be configured as convolutional layers configured for feature map extraction. These first layers may form the hidden layers of the image transformation network architecture.
The representation of the low resolution image generated by the one or more first layers may thus be one or more features and/or a feature map. The features may be of any suitable type of features known in the art that can be inferred from the inputs and used to generate the outputs as described further herein. For example, the features may include a vector of intensity values per pixel. The features may also include any other type of feature described herein, such as vectors of scalar values, vectors of independent distributions, vectors of joint distributions, or any other suitable type of feature known in the art. As described further herein, features are learned through the network during training and may or may not be correlated with any actual features known in the art.
The one or more second layers include final layer 306, which is configured as a subpixel convolution layer that aggregates the feature maps from the low resolution space and builds the high resolution image 308 in a single step. The subpixel convolution layer learns an array of upscaling filters to upscale the final low resolution feature maps into the high resolution output image. In this way, the image transformation network may take a noisy, under-resolved, high-throughput input image, compute feature maps across many convolution layers, and then use the sub-pixel layer to transform the feature maps into a relatively noise-free, super-resolved image. The subpixel convolution layer advantageously provides relatively complex upscaling filters specifically trained for each feature map while also reducing the computational complexity of the overall operation. The deep CNN used in the embodiments described herein may be further configured as described by Shi et al. in "Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network," September 2016, arXiv:1609.05158v2, which is incorporated by reference as if fully set forth herein.
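To make the architecture concrete, below is a minimal PyTorch sketch of a network in the style described above and in Shi et al.: convolutional "first layers" extract feature maps in the low resolution space, and a final sub-pixel convolution layer (implemented here with torch.nn.PixelShuffle) rearranges an r^2-channel feature map into the upscaled output image. The layer count, kernel sizes, and channel widths are illustrative assumptions and do not reflect the patent's actual configuration.

```python
import torch
import torch.nn as nn

class SubPixelSuperResolver(nn.Module):
    """Sketch of an ESPCN-style image transformation network: feature extraction
    at low resolution followed by a sub-pixel convolution (PixelShuffle) layer."""

    def __init__(self, upscale_factor: int = 2, in_channels: int = 1):
        super().__init__()
        # "First layers": feature map extraction in the low resolution space.
        self.first_layers = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # "Second layers": a convolution producing r^2 channels per output pixel,
        # then PixelShuffle rearranges them into the upscaled high resolution image.
        self.second_layers = nn.Sequential(
            nn.Conv2d(32, in_channels * upscale_factor ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale_factor),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        representation = self.first_layers(low_res)   # the "representation" (feature maps)
        return self.second_layers(representation)     # the high resolution output image

# Usage on a hypothetical 1-channel, 256 x 256 low resolution image:
model = SubPixelSuperResolver(upscale_factor=2)
high_res = model(torch.rand(1, 1, 256, 256))  # -> shape (1, 1, 512, 512)
```

In practice a network of this kind would be trained on pairs of corresponding low resolution and high resolution images so that the learned upscaling filters reproduce the higher quality imaging conditions, as is typical for super-resolution networks of this style.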
The deep CNNs described herein may be generally classified as deep learning models. In general, "deep learning" (also referred to as deep structured learning, hierarchical learning, or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data. In a simple case, there may be two sets of neurons: ones that receive an input signal and ones that send an output signal. When the input layer receives an input, it passes a modified version of the input to the next layer. In a deep network, there are many layers between the input and output (and the layers are not made of neurons, but it may be helpful to think of them that way), allowing the algorithm to use multiple processing layers composed of multiple linear and non-linear transformations.
Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, and so forth. Some representations are better than others at simplifying the learning task (e.g., face recognition or facial expression recognition). One of the promises of deep learning is to replace handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.
Studies in this area have attempted to make better representations and create models to learn these representations from large scale unlabeled data. Some of the representations are inspired by advances in neuroscience and are loosely based on interpretation of information processing and communication patterns in the nervous system, e.g., neural coding, which attempts to define relationships between various stimuli and associated neuronal responses in the brain.
The deep CNNs described herein may also be classified as machine learning models. Machine learning may be generally defined as a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. In other words, machine learning may be defined as the sub-field of computer science that "gives computers the ability to learn without being explicitly programmed." Machine learning explores the study and construction of algorithms that can learn from and make predictions on data; such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs.
The machine learning described herein may be further performed as described in "Introduction to Statistical Machine Learning" by Sugiyama, Morgan Kaufmann, 2016, 534 pages; "Discriminative, Generative, and Imitative Learning" by Jebara, MIT Thesis, 2002, 212 pages; and "Principles of Data Mining (Adaptive Computation and Machine Learning)" by Hand et al., MIT Press, 2001, 578 pages, which are incorporated by reference as if fully set forth herein. The embodiments described herein may be further configured as described in these references.
The deep CNN is also a generative model. A "generative" model may be generally defined as a model that is probabilistic in nature. In other words, a "generative" model is not one that performs forward simulation or a rule-based approach, and as such, a model of the physics of the processes involved in generating an actual image (for which a simulated image is being generated) is not necessary. Instead, as described further herein, the "generative" model can be learned (in that its parameters can be learned) based on a suitable training set of data.
In one embodiment, the deep CNN is a deep generative model. For example, the deep CNN may be configured to have a deep learning architecture in that the model may include multiple layers that perform a number of algorithms or transformations. The number of layers on one or both sides of the deep CNN may differ from those shown in the figures described herein. For practical purposes, a suitable range of layers on both sides is from 2 layers to a few tens of layers.
The deep CNN may also be a deep neural network with a set of weights that model the world according to the data it has been fed to train it. Neural networks can be generally defined as a computational approach based on a relatively large collection of neural units, loosely modeling the way a biological brain solves problems with relatively large clusters of biological neurons connected by axons. Each neural unit is connected with many others, and links can be enforcing or inhibitory in their effect on the activation state of connected neural units. These systems are self-learning and trained rather than explicitly programmed, and they excel in areas where the solution or feature detection is difficult to express in a traditional computer program.
Neural networks typically consist of multiple layers, and the signal path traverses from front to back. The goal of a neural network is to solve problems in the same way that the human brain would, although several neural networks are much more abstract. Modern neural network projects typically work with a few thousand to a few million neural units and millions of connections. The neural network may have any suitable architecture and/or configuration known in the art.
The embodiments described herein may or may not be configured for training the deep CNN used for generating a high resolution image from a low resolution image. For example, another method and/or system may be configured to generate a trained deep CNN, which can then be accessed and used by the embodiments described herein. In general, training the deep CNN may include acquiring data (e.g., both low resolution images and high resolution images, which may include any of the low resolution and high resolution images described herein). Training, testing, and validation data sets may then be constructed using lists of input tuples and expected output tuples. An input tuple may be in the form of a low resolution image, and the corresponding output tuple may be the high resolution image corresponding to that low resolution image. The deep CNN may then be trained using the training data set.
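A minimal sketch of how such input/output tuples might be assembled into training, testing, and validation sets follows, again assuming PyTorch; the image sizes, the number of pairs, and the split ratios are placeholders rather than values from this description.

```python
import torch
from torch.utils.data import Dataset, random_split

class LowHighResPairs(Dataset):
    """Pairs each low resolution (high throughput) image with the corresponding
    high resolution (high sensitivity) image of the same sample location."""
    def __init__(self, low_res_images, high_res_images):
        assert len(low_res_images) == len(high_res_images)
        self.low = low_res_images
        self.high = high_res_images

    def __len__(self):
        return len(self.low)

    def __getitem__(self, idx):
        # (input tuple, expected output tuple)
        return self.low[idx], self.high[idx]

# In practice these would be images acquired by the imaging system(s);
# random tensors stand in here so the sketch runs.
low_res_images = [torch.randn(1, 128, 128) for _ in range(100)]
high_res_images = [torch.randn(1, 256, 256) for _ in range(100)]

pairs = LowHighResPairs(low_res_images, high_res_images)
train_set, test_set, val_set = random_split(pairs, [80, 10, 10])
```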
In one embodiment, the one or more components include a context-aware loss module configured to train the deep CNN, and during the training of the deep CNN, the one or more computer subsystems input the high resolution image generated by the one or more second layers and a corresponding known high resolution image for the sample into the context-aware loss module, and the context-aware loss module determines a context-aware loss in the high resolution image generated by the one or more second layers as compared to the corresponding known high resolution image. For example, as shown in fig. 4, the deep CNN is shown as image transformation network 400. This figure shows the deep CNN during training, i.e., at setup time. The input to the image transformation network is a low resolution (high throughput) image 402, which may be generated as described further herein. The image transformation network may then output a high resolution (high sensitivity) image 404 as described further herein. The output high resolution image and a corresponding known high resolution image (e.g., a "ground truth" high sensitivity image) 406 may be input to context-aware loss module 408. In this way, the complete network architecture of the embodiments described herein may include two blocks, the image transformation network and the context-aware loss. The context-aware loss module 408 may compare the two images it receives as input (i.e., the high resolution image generated by the image transformation network and the ground truth high resolution image, e.g., generated by the imaging system) to determine one or more differences between the two input images. The context-aware loss module may be further configured as described herein.
In this way, at setup time, the embodiments obtain pairs of noisy, insufficiently resolved images and quiet, super resolved images and then learn the transformation between them through a neural network using the context-aware loss. The term "noisy" as used herein may be generally defined as an image having a relatively low signal-to-noise ratio (SNR), while the term "quiet" as used herein may be generally defined as an image having a relatively high SNR. The terms "noisy" and "low SNR," and the terms "quiet" and "high SNR," are therefore used interchangeably herein. These image pairs may come from any of the imaging platforms available from KLA-Tencor (and others), such as e-beam tools, BBP tools, limited resolution imaging tools, and the like. Once training is complete, the network has learned the transformation from noisy, insufficiently resolved images to quiet, super resolved images while maintaining spatial fidelity. In this way, the embodiments described herein use a data-driven approach to exploit the data redundancy observed in semiconductor images by learning the transformation between noisy, insufficiently resolved images and quiet, super resolved images. The trained network can then be deployed in production, where the imaging system produces noisy high-throughput data that is then transformed into corresponding low-noise, super resolved data using the trained image transformation network. Once in production, the network executes like, for example, typical post-processing algorithms.
In one such embodiment, the context-aware loss includes content loss, style loss, and total variation (TV) regularization. One such embodiment is shown in fig. 5. In particular, context-aware loss module 408 shown in fig. 4 may include content loss module 500, style loss module 502, and TV regularization module 504, as shown in fig. 5. The context-aware loss is a generic architecture and is represented by the style and content losses. Deep neural networks tend to learn image features progressively, starting from edges and contours in the lower layers and moving to more complex features such as surfaces or entire objects in the later layers. This correlates well with biological vision. We hypothesize that the features learned by the lower layers of the convolutional network are perceptually important. We therefore design our context-aware loss over the learned activations of a trained network. The context-aware loss is mainly composed of style, content, and regularization losses.
In one such embodiment, the content loss includes loss in lower-level features of the corresponding known high resolution image. For example, the content of an image is defined as its lower-level features, such as edges, contours, and the like. Minimizing the content loss helps preserve these low-level features, which are important for producing a metrology-quality, super resolved image. More clearly, the content loss is included in the loss function to preserve edges and contours in the image, since these are important for measurements and the like performed on the high resolution images. Conventional techniques such as bicubic interpolation or training with an L2 loss do not necessarily ensure such preservation of edges and contours.
The next major part of the loss is called the style transfer loss. In one such embodiment, the style loss includes loss in one or more abstract entities that qualitatively define the corresponding known high resolution image. For example, we define style as the abstract entities that qualitatively define an image, including properties such as sharpness, texture, color, and so on. One reason for using deep learning as described herein is that the differences between the low resolution and high resolution images described herein are not just resolution; the images may also have different noise characteristics, loading artifacts, textures, etc. Therefore, merely super-resolving the low resolution image is not sufficient, and the mapping from the low resolution to the high resolution image is instead learned using deep learning. The style of an image is characterized by the activations of the upper layers of the trained network. The combination of style and content losses makes it possible for the image transformation network to learn the transformation between noisy, insufficiently resolved images and quiet, super resolved images. Once the image transformation network is trained using the context-aware loss, it can be deployed in production to produce quiet, super resolved images from noisy, insufficiently resolved, high-throughput images while maintaining spatial fidelity. In some embodiments, the style transfer loss is defined as the loss between the final layer features of the super resolved high resolution image (i.e., the image produced by the one or more second layers) and those of the ground truth high resolution image, especially when classification is to be performed on the super resolved high resolution image.
In another such embodiment, the context-aware loss module includes a pre-trained VGG network. Fig. 6 shows how activations from the pre-trained network are used to calculate the style and content losses. For example, as shown in fig. 6, pre-trained VGG network 600 may be coupled to content loss module 500 and style loss module 502. VGG16 (also known as OxfordNet) is a convolutional neural network architecture named after the Visual Geometry Group at Oxford that developed it. The VGG network may be further configured as described by Simonyan et al. in "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv:1409.1556v6, April 2015, 14 pages, which is incorporated by reference as if fully set forth herein. As shown in fig. 6, the pre-trained VGG network may take an image input into a number of layers, including convolutional layers (e.g., conv-64, conv-128, conv-256, and conv-512), maxpool layers, fully connected layers (e.g., FC-4096), and a softmax layer, all of which may have any suitable configuration known in the art.
Activations from the VGG network may be obtained by content loss module 500 and style loss module 502 to thereby calculate the style and content losses. The embodiments described herein therefore define a novel loss architecture for training a neural network using a pre-trained network. This helps optimize the neural network while preserving, in the generated image, the features that are key to the use case.
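The following sketch illustrates, under assumptions, how content and style losses could be computed from the activations of a pre-trained VGG16 as described above; the particular VGG layers used for content versus style, and the Gram-matrix form of the style loss, are common choices in the literature and are not specified by this description.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Pre-trained VGG16 feature extractor; newer torchvision versions use the
# `weights=` argument instead of `pretrained=True`. ImageNet normalization
# is omitted here for brevity.
vgg_features = vgg16(pretrained=True).features.eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def activations(image, layer_indices):
    """Collect feature maps from selected layers of the VGG feature stack."""
    acts, x = {}, image
    for i, layer in enumerate(vgg_features):
        x = layer(x)
        if i in layer_indices:
            acts[i] = x
    return acts

def gram(feat):
    """Gram matrix of a feature map, used to characterize style/texture."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def content_and_style_loss(generated, ground_truth,
                           content_layers=(8,), style_layers=(3, 8, 15, 22)):
    # Single-channel inspection images are repeated to 3 channels for VGG.
    g = generated.repeat(1, 3, 1, 1) if generated.shape[1] == 1 else generated
    t = ground_truth.repeat(1, 3, 1, 1) if ground_truth.shape[1] == 1 else ground_truth
    wanted = set(content_layers) | set(style_layers)
    ga, ta = activations(g, wanted), activations(t, wanted)
    # Content loss: preserve lower-level features such as edges and contours.
    content = sum(F.mse_loss(ga[i], ta[i]) for i in content_layers)
    # Style loss: match the Gram matrices of higher-layer activations
    # (texture, sharpness, and similar qualitative properties).
    style = sum(F.mse_loss(gram(ga[i]), gram(ta[i])) for i in style_layers)
    return content, style
```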
The embodiments described herein therefore introduce a use-case-dependent loss function using a pre-trained deep learning network. Conventional techniques include methods such as bicubic interpolation and L2 losses when training deep networks, but we introduce different losses during the training of our network. For example, bicubic interpolation reduces the contrast on sharp edges, while an L2 loss on the full image focuses on preserving all aspects of the image; preserving most of them is not necessarily a requirement for the use cases of the embodiments described herein, and we can create a loss function depending on which features in the image we want to preserve. In some such instances, content loss may be used to ensure that edges and contours are preserved, and style loss may be used to ensure that textures, colors, etc. are preserved.
Embodiments described herein may use outputs from the layers of a pre-trained network to define a use-case-dependent loss function for training the network. If the use case is critical dimension uniformity or metrology measurements, embodiments may give more weight to the content loss, and if the image is to be "beautified," the style loss may be used to preserve texture, color, etc. Furthermore, for cases where classification is important, the final layer features may be matched between the generated high resolution image and the ground truth image, and a loss may be defined on the final layer features of the pre-trained network, since these are the features used for classification.
In signal processing, total variation denoising (also referred to as total variation regularization) is a process most commonly used in digital image processing with applications in noise removal. It is based on the principle that signals with excessive and possibly spurious detail have high total variation, that is, the integral of the absolute gradient of the signal is high. According to this principle, reducing the total variation of the signal, subject to it being a close match to the original signal, removes unwanted detail while preserving important details such as edges. The concept was pioneered by Rudin, Osher, and Fatemi in 1992 and is therefore today known as the ROF model.
This noise removal technique has advantages over simple techniques such as linear smoothing or median filtering, which reduce noise but at the same time smooth away edges to a greater or lesser extent. By contrast, total variation denoising is remarkably effective at simultaneously preserving edges while smoothing away noise in flat regions, even at relatively low signal-to-noise ratios.
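A minimal sketch of a total variation term that could serve as the TV regularization component of the context-aware loss is shown below; the anisotropic (absolute-difference) form is one common choice and is an assumption here.

```python
import torch

def total_variation(image: torch.Tensor) -> torch.Tensor:
    """Sum of absolute differences between neighboring pixels of a
    (batch, channels, height, width) tensor; penalizes spurious
    high-frequency detail while leaving strong edges largely intact."""
    dh = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean()
    dw = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean()
    return dh + dw
```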
In some such embodiments, the one or more components include a tuning module configured to determine one or more parameters of the deep CNN based on the context-aware loss. For example, as shown in fig. 4, the one or more components may include tuning module 410 configured for back-propagating the error and/or changing network parameters based on the context-aware loss determined by the context-aware loss module. Each of the layers of the deep CNN described above may have one or more parameters, e.g., weights W and biases B, whose values may be determined by training the model, which may be performed as described further herein. For example, the weights and biases of the various layers included in the deep CNN may be determined during training by minimizing the context-aware loss.
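Reusing the names defined in the earlier sketches (ImageTransformationNet, content_and_style_loss, total_variation, and train_set), the tuning step might look like the following; the loss weights and optimizer settings are illustrative assumptions, not values from this description.

```python
import torch
from torch.utils.data import DataLoader

net = ImageTransformationNet(upscale_factor=2)   # from the architecture sketch
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
w_content, w_style, w_tv = 1.0, 1.0, 1e-4        # use-case dependent weights

for low_res, high_res_truth in DataLoader(train_set, batch_size=8):
    optimizer.zero_grad()
    generated = net(low_res)                                   # forward pass
    content, style = content_and_style_loss(generated, high_res_truth)
    loss = (w_content * content + w_style * style
            + w_tv * total_variation(generated))               # context-aware loss
    loss.backward()                                            # back-propagate the error
    optimizer.step()                                           # change network parameters
```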
In one embodiment, the deep CNN is configured such that the high resolution image generated by the one or more second layers has less noise than the low resolution image. For example, embodiments described herein provide a generalized architecture for transforming noisy and insufficiently resolved images into low-noise, super resolved images using learned representations.
In another embodiment, the deep CNN is configured such that the high resolution image generated by the one or more second layers preserves the structural and spatial features of the low resolution image. For example, embodiments described herein provide a generalized architecture for transforming noisy and insufficiently resolved images into low-noise, super resolved images using learned representations while maintaining structural and spatial fidelity.
In some embodiments, the deep convolutional neural network outputs the high resolution image with a throughput higher than the throughput used to generate a high resolution image with a high resolution imaging system. For example, the embodiments described herein may be used for deep learning based super resolution for higher throughput on e-beam tools. The embodiments described herein may thus be particularly useful when it may be advantageous to use relatively low doses (of e-beam, light, etc.) for image acquisition to prevent changes (e.g., damage, contamination, etc.) from being made to the sample. However, using a relatively low dose to avoid changes to the sample typically results in a low resolution image. The challenge is therefore to produce high resolution images without causing changes to the sample. The embodiments described herein provide this capability. In particular, sample images may be acquired at higher throughput and lower resolution (or lower quality), and the embodiments described herein may convert those higher throughput, lower quality images into super resolved or higher quality images without causing changes to the sample (since the sample itself is not needed to produce the super resolved or higher quality images).
The embodiments described herein are therefore particularly useful for review use cases in which a wafer may undergo an inspection (e.g., BBP inspection) and electron beam review sequence. Further, in some instances, the user wants to put the wafer back on the inspection tool after inspection to try another inspection recipe condition (e.g., to optimize the inspection recipe conditions for defects detected in the inspection and possibly classified in the review). However, if the electron beam (or other) review damages or changes the locations being reviewed, those locations are no longer valid for sensitivity analysis (i.e., inspection recipe changes and/or optimization). Preventing damage or alteration to the sample by using low frame average electron beam image acquisition is thus one of the advantages of the deep learning based handling of electron beam review images described herein (e.g., the original high frame average image is not required). Deep learning classification and deep learning image improvement can therefore arguably be used in combination. Deep learning based defect classification may be performed by the embodiments described herein as described in commonly assigned U.S. patent application No. 15/697,426 filed September 6, 2017 by He et al., which is incorporated by reference as if fully set forth herein. The embodiments described herein may be further configured as described in this patent application.
FIG. 7 illustrates an example of results that may be produced using embodiments described herein. The results show a comparison between the horizontal profile 700 of a noisy, insufficiently resolved high-throughput image 702, the horizontal profile 704 of a higher quality, better resolved low-throughput image 706, and the horizontal profile 708 of a quiet super resolved image 710 obtained by processing of low resolution images using embodiments described herein. High-throughput image 702 and low-throughput image 706 are generated by a low-resolution imaging system and a high-resolution imaging system, respectively, as described herein. In this way, the results shown in FIG. 7 illustrate the horizontal variation between different images along the same profile line through the images. The results shown in fig. 7 demonstrate the ability of the embodiments described herein to produce substantially noise-free high-resolution images from lower quality images while maintaining structural and spatial fidelity in the images, as confirmed by the correlation in the profiles (708 and 704) of the super resolved images produced by the embodiments described herein and the high resolution images produced by the imaging system.
In one embodiment, the one or more computer subsystems are configured to perform one or more metrology measurements for the sample based on the high resolution image generated by the one or more second layers. Fig. 8 demonstrates that the embodiments described herein work by closing the loop with ground truth data. To further test the embodiments described herein in a real world metrology use case, overlay measurements were performed on the three image sets shown in fig. 7, and the results are compiled in fig. 8. Curves 800 and 802 in fig. 8 depict the correlation in the overlay measurements along the x and y axes, respectively, between the images produced by the high resolution imaging system and the high resolution images produced by the deep CNN embodiments described herein, and curves 804 and 806 in fig. 8 depict the correlation in the overlay measurements along the x and y axes, respectively, between the images produced by the high resolution imaging system and the lower resolution images. The metric used to calculate the correlation is R², i.e., r squared. An r squared value of 1 indicates a perfect fit. The near perfect R² value (>0.99) between the high resolution images produced by the imaging system and by the deep CNN shows that the images produced by the deep CNN can replace the images produced by the higher resolution imaging system in metrology measurements without affecting performance. Considering the relatively high accuracy required in metrology use cases, the R² value of about 0.8 in the case of the images produced by the low resolution and high resolution imaging systems proves too low to obtain accurate measurements, and measurements would therefore need to be taken from the higher resolution images, which significantly reduces use case throughput (e.g., from about 18K defects per hour to about 8K defects per hour in the experiments described herein).
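For illustration, the R² check described above can be reproduced with a few lines; the overlay values below are placeholders, not the experimental data of fig. 8.

```python
import numpy as np

def r_squared(reference, candidate):
    """Coefficient of determination between two measurement series."""
    reference, candidate = np.asarray(reference), np.asarray(candidate)
    ss_res = np.sum((reference - candidate) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

overlay_from_hr_system = [1.20, 0.80, -0.40, 0.50, 1.90]   # placeholder values
overlay_from_deep_cnn = [1.18, 0.83, -0.41, 0.52, 1.87]    # placeholder values
print(r_squared(overlay_from_hr_system, overlay_from_deep_cnn))  # near 1 means a good fit
```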
In another embodiment, the deep CNN functions independently of the imaging system that produces the low resolution image. In some embodiments, the low resolution image is generated by one imaging system having a first imaging stage, the one or more computer subsystems are configured for acquiring another low resolution image generated for another sample by another imaging system having a second imaging stage different from the first imaging stage, the one or more first layers are configured for generating a representation of the other low resolution image, and the one or more second layers are configured for generating a high resolution image for the other sample from the representation of the other low resolution image. For example, an important benefit of the embodiments described herein is that the same network architecture can be used to enhance images from different platforms, such as BBP tools, tools configured specifically for low resolution imaging, and the like. Furthermore, the overall burden of optimizing and learning representations is shifted offline, since training only occurs during recipe setup time. Once training is complete, runtime computations are drastically reduced. The learning process also helps to adaptively enhance the image without the need to change parameters every time, as is required in the case of older methods.
In one such embodiment, the first imaging stage is an electron beam imaging stage and the second imaging stage is an optical imaging stage. For example, embodiments described herein may transform low resolution images produced using an electron beam imaging system and an optical imaging system. Embodiments described herein are also capable of performing transformations for other different types of imaging platforms (e.g., other charged particle type imaging systems).
In another such embodiment, the first imaging stage and the second imaging stage are different optical imaging stages. In yet another such embodiment, the first imaging stage and the second imaging stage are different electron beam imaging stages. For example, the first imaging stage and the second imaging stage may be the same type of imaging stage, but their imaging capabilities may differ significantly. In one such example, the first and second optical imaging stages can be a laser scattering imaging stage and a BBP imaging stage. These imaging stages obviously have substantially different capabilities and will produce substantially different low resolution images. Nonetheless, embodiments described herein may generate high resolution images for all such low resolution images using the learned representations generated by training the deep CNN.
Another embodiment of a system configured to generate a high resolution image of a sample from a low resolution image of the sample includes an imaging subsystem configured for generating a low resolution image of the sample. The imaging subsystem may have any of the configurations described herein. The system also includes one or more computer subsystems, e.g., computer subsystem 102 shown in fig. 1, which may be configured as described further herein, and one or more components, e.g., component 100, executed by the one or more computer subsystems, which may include any of the components described herein. The components include a deep CNN, e.g., deep CNN 104, which may be configured as described herein. For example, the deep CNN includes one or more first layers configured for generating a representation of the low resolution image and one or more second layers configured for generating a high resolution image for the sample from the representation of the low resolution image. The one or more second layers include a final layer configured to output the high resolution image. The final layer is also configured as a sub-pixel convolution layer. The one or more first layers and the one or more second layers may be further configured as described further herein. This system embodiment may be further configured as described herein.
The embodiments described herein have a number of advantages, as can be seen from the description provided above. For example, embodiments described herein provide a generic, platform-agnostic, data-driven architecture. During setup time, the embodiments use training data to learn the transformation between high quality images and low quality images. Learning this transformation enables the embodiments, at runtime, to transform a noisy, insufficiently resolved input into a relatively quiet, super resolved output of metrology quality using the learned transformation. Earlier methods were parametric methods that relied only on the current input image and did not utilize any other training data. The embodiments described herein are also generic and platform agnostic. Because the embodiments are generic and platform agnostic, the same architecture can be used to produce metrology-quality images on different platforms, such as electron beam, BBP, laser scattering, low resolution imaging, and metrology platforms. The embodiments can also achieve higher throughput by using only low quality (high throughput) images in production to produce the required quality images. The embodiments also achieve noise reduction in the output image compared to the input image without affecting important features such as edges and contours in the image.
Each of the embodiments of each of the systems described above may be combined together into a single embodiment.
Another embodiment relates to a computer-implemented method for generating a high resolution image of a sample from a low resolution image of the sample. The method includes acquiring a low resolution image of the sample. The method also includes generating a representation of the low resolution image by inputting the low resolution image into one or more first layers of a deep CNN. Further, the method includes generating a high resolution image of the sample based on the representation. The generation of the high resolution image is performed by one or more second layers of the deep CNN. The one or more second layers include a final layer configured to output the high resolution image, and the final layer is configured as a sub-pixel convolution layer. The steps of acquiring, generating the representation, and generating the high resolution image are performed by one or more computer systems. One or more components are executed by the one or more computer systems, and the one or more components include the deep CNN.
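A minimal usage sketch of these method steps, reusing the ImageTransformationNet class from the earlier sketch, might look like the following; the image size and the untrained network are placeholders (in practice, trained weights would be loaded).

```python
import torch

# In practice the trained weights would be loaded, e.g., via load_state_dict.
net = ImageTransformationNet(upscale_factor=2)

def generate_high_resolution(model, low_res):
    """Acquire -> first layers (representation) -> second layers (HR image)."""
    model.eval()
    with torch.no_grad():
        return model(low_res.unsqueeze(0)).squeeze(0)

low_res = torch.randn(1, 128, 128)                    # stand-in for an acquired LR image
high_res = generate_high_resolution(net, low_res)     # 1 x 256 x 256 output
```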
Each of the steps of the method may be performed as further described herein. The method may also include any other step(s) that can be performed by the systems, computer systems or subsystems, and/or imaging systems or subsystems described herein. The one or more computer systems, the one or more components, and the deep CNN may be configured according to any of the embodiments described herein, e.g., computer subsystem 102, component 100, and deep CNN 104. Further, the method described above may be performed by any of the system embodiments described herein.
All of the methods described herein may include storing results of one or more steps of the method embodiments in a computer readable storage medium. The results may include any of the results described herein and may be stored in any manner known in the art. The storage medium may include any of the storage media described herein or any other suitable storage medium known in the art. After the results have been stored, the results can be accessed in the storage medium and used by any of the method or system embodiments described herein, formatted for display to a user, used by another software module, method, or system, and the like. For example, the generated high resolution image may be used to perform metrology measurements on the sample, to classify one or more defects detected on the sample, to verify one or more defects detected on the sample, and/or to determine whether a process for forming patterned features on the sample should be altered in some way based on one or more of the above, thereby altering the patterned features formed on other samples in the same process.
Additional embodiments relate to a non-transitory computer-readable medium storing program instructions executable on one or more computer systems for performing a computer-implemented method for generating a high resolution image of a sample from a low resolution image of the sample. One such embodiment is shown in fig. 9. In particular, as shown in fig. 9, a non-transitory computer-readable medium 900 includes program instructions 902 that are executable on a computer system 904. The computer-implemented method may include any step of any method described herein.
Program instructions 902 implementing methods such as those described herein may be stored on the computer-readable medium 900. The computer readable medium may be a storage medium such as a magnetic or optical disk, a magnetic tape, or any other suitable non-transitory computer readable medium known in the art.
The program instructions may be implemented in any of a variety of ways, including procedure-based techniques, component-based techniques, and/or object-oriented techniques, among others. For example, the program instructions may be implemented using ActiveX controls, C++ objects, JavaBeans, Microsoft Foundation Classes (MFC), SSE (Streaming SIMD Extensions), or other technologies or methodologies, as desired.
Computer system 904 can be configured according to any of the embodiments described herein.
Additional modifications and alternative embodiments of various aspects of the invention will be apparent to those skilled in the art in view of this description. For example, methods and systems are provided for generating a high resolution image of a sample from a low resolution image of the sample. Accordingly, this description is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the general manner of carrying out the invention. It is to be understood that the forms of the invention shown and described herein are to be taken as the presently preferred embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed, and certain features of the invention may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the invention. Changes may be made in the elements described herein without departing from the spirit and scope of the invention as described in the following claims.

Claims (24)

1. A system configured to generate a high resolution image of a sample from a low resolution image of the sample, comprising:
one or more computer subsystems configured for acquiring a low resolution image of a sample; and
one or more components executed by the one or more computer subsystems, wherein the one or more components comprise:
a deep convolutional neural network, wherein the deep convolutional neural network comprises:
one or more first layers configured for generating a representation of the low resolution image; and
one or more second layers configured for generating a high resolution image of the sample from the representation of the low resolution image, wherein the one or more second layers comprise a final layer configured to output the high resolution image, and wherein the final layer is further configured as a sub-pixel convolution layer.
2. The system of claim 1, wherein the deep convolutional neural network is configured such that the high resolution image generated by the one or more second layers is less noisy than the low resolution image.
3. The system of claim 1, wherein the deep convolutional neural network is configured such that the high resolution image generated by the one or more second layers preserves structural and spatial features of the low resolution image.
4. The system of claim 1, wherein the one or more components further comprise a context-aware loss module configured to train the deep convolutional neural network, wherein during training of the deep convolutional neural network, the one or more computer subsystems input the high-resolution image generated by the one or more second layers and a corresponding known high-resolution image for the sample into the context-aware loss module, and the context-aware loss module determines a context-aware loss in the high-resolution image generated by the one or more second layers as compared to the corresponding known high-resolution image.
5. The system of claim 4, wherein the context-aware loss comprises content loss, style loss, and total variation regularization.
6. The system of claim 5, wherein the content loss comprises a loss in a low-level feature of the corresponding known high-resolution image.
7. The system of claim 5, wherein the style loss comprises a loss in one or more abstract entities that qualitatively define the corresponding known high resolution image.
8. The system of claim 4, wherein the context aware loss module comprises a pre-trained VGG network.
9. The system of claim 4, wherein the one or more components further comprise a tuning module configured to determine one or more parameters of the deep convolutional neural network based on the context-aware loss.
10. The system of claim 1, wherein the one or more computer subsystems are further configured to perform one or more metrology measurements of the sample based on the high resolution images generated by the one or more second layers.
11. The system of claim 1, wherein the deep convolutional neural network functions independently of an imaging system that generates the low resolution image.
12. The system of claim 1, wherein the low resolution image is generated by one imaging system having a first imaging stage, wherein the one or more computer subsystems are further configured for acquiring another low resolution image generated for another sample by another imaging system having a second imaging stage different from the first imaging stage, wherein the one or more first layers are configured for generating a representation of another low resolution image, and wherein the one or more second layers are further configured for generating a high resolution image for the other sample from the representation of the other low resolution image.
13. The system of claim 12, wherein the first imaging stage is an electron beam imaging stage, and wherein the second imaging stage is an optical imaging stage.
14. The system of claim 12, wherein the first and second imaging stages are different optical imaging stages.
15. The system of claim 12, wherein the first and second imaging stages are different electron beam imaging stages.
16. The system of claim 1, wherein the low resolution image is produced by an electron beam based imaging system.
17. The system of claim 1, wherein the low resolution image is produced by an optical-based imaging system.
18. The system of claim 1, wherein the low resolution image is generated by an inspection system.
19. The system of claim 1, wherein the sample is a wafer.
20. The system of claim 1, wherein the sample is a reticle.
21. The system of claim 1, wherein the deep convolutional neural network outputs the high resolution image with a throughput higher than a throughput used to generate the high resolution image by a high resolution imaging system.
22. A system configured to generate a high resolution image of a sample from a low resolution image of the sample, comprising:
an imaging subsystem configured for generating a low resolution image of a sample;
one or more computer subsystems configured for acquiring the low resolution image of the sample; and
one or more components executed by the one or more computer subsystems, wherein the one or more components comprise:
a deep convolutional neural network, wherein the deep convolutional neural network comprises:
one or more first layers configured for generating a representation of the low resolution image; and
one or more second layers configured for generating a high resolution image of the sample from the representation of the low resolution image, wherein the one or more second layers comprise a final layer configured to output the high resolution image, and wherein the final layer is further configured as a sub-pixel convolution layer.
23. A non-transitory computer-readable medium storing program instructions executable on one or more computer systems for performing a computer-implemented method for generating a high resolution image of a sample from a low resolution image of the sample, wherein the computer-implemented method comprises:
acquiring a low-resolution image of a sample;
generating a representation of the low resolution image by inputting the low resolution image into one or more first layers of a deep convolutional neural network; and
generating a high resolution image of the sample based on the representation, wherein generating the high resolution image is performed by one or more second layers of the deep convolutional neural network, wherein the one or more second layers comprise a final layer configured to output the high resolution image, wherein the final layer is further configured as a subpixel convolutional layer, wherein the acquiring, the generating the representation, and the generating the high resolution image are performed by the one or more computer systems, wherein one or more components are performed by the one or more computer systems, and wherein the one or more components comprise the deep convolutional neural network.
24. A computer-implemented method for generating a high resolution image of a sample from a low resolution image of the sample, comprising:
acquiring a low-resolution image of a sample;
generating a representation of the low resolution image by inputting the low resolution image into one or more first layers of a deep convolutional neural network; and
generating a high resolution image of the sample based on the representation, wherein generating the high resolution image is performed by one or more second layers of the deep convolutional neural network, wherein the one or more second layers comprise a final layer configured to output the high resolution image, wherein the final layer is further configured as a subpixel convolutional layer, wherein the acquiring, the generating the representation, and the generating the high resolution image are performed by one or more computer systems, wherein one or more components are performed by the one or more computer systems, and wherein the one or more components comprise the deep convolutional neural network.
CN201880040444.4A 2017-06-30 2018-06-29 Generating high resolution images from low resolution images for semiconductor applications Active CN110785709B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
IN201741023063 2017-06-30
IN201741023063 2017-06-30
US201762545906P 2017-08-15 2017-08-15
US62/545,906 2017-08-15
US16/019,422 2018-06-26
US16/019,422 US10769761B2 (en) 2017-06-30 2018-06-26 Generating high resolution images from low resolution images for semiconductor applications
PCT/US2018/040160 WO2019006221A1 (en) 2017-06-30 2018-06-29 Generating high resolution images from low resolution images for semiconductor applications

Publications (2)

Publication Number Publication Date
CN110785709A CN110785709A (en) 2020-02-11
CN110785709B true CN110785709B (en) 2022-07-15

Family

ID=66590463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880040444.4A Active CN110785709B (en) 2017-06-30 2018-06-29 Generating high resolution images from low resolution images for semiconductor applications

Country Status (3)

Country Link
KR (1) KR102351349B1 (en)
CN (1) CN110785709B (en)
TW (1) TWI754764B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365556B (en) * 2020-11-10 2021-09-28 成都信息工程大学 Image extension method based on perception loss and style loss
TWI775586B (en) * 2021-08-31 2022-08-21 世界先進積體電路股份有限公司 Multi-branch detection system and multi-branch detection method
KR102616400B1 (en) * 2022-04-12 2023-12-27 한국항공우주연구원 Deep learning based image resolution improving system and method by reflecting characteristics of optical system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976318A (en) * 2016-04-28 2016-09-28 北京工业大学 Image super-resolution reconstruction method
CN106228512A (en) * 2016-07-19 2016-12-14 北京工业大学 Based on learning rate adaptive convolutional neural networks image super-resolution rebuilding method
CN106339984A (en) * 2016-08-27 2017-01-18 中国石油大学(华东) Distributed image super-resolution method based on K-means driven convolutional neural network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2431889C1 (en) * 2010-08-06 2011-10-20 Дмитрий Валерьевич Шмунк Image super-resolution method and nonlinear digital filter for realising said method
US9286662B2 (en) * 2013-09-26 2016-03-15 Siemens Aktiengesellschaft Single image super resolution and denoising using multiple wavelet domain sparsity
CN106104406B (en) * 2014-03-06 2018-05-08 前进公司 The method of neutral net and neural metwork training
US9401016B2 (en) * 2014-05-12 2016-07-26 Kla-Tencor Corp. Using high resolution full die image data for inspection
WO2016019484A1 (en) * 2014-08-08 2016-02-11 Xiaoou Tang An apparatus and a method for providing super-resolution of a low-resolution image
US10417525B2 (en) * 2014-09-22 2019-09-17 Samsung Electronics Co., Ltd. Object recognition with reduced neural network weight precision

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976318A (en) * 2016-04-28 2016-09-28 北京工业大学 Image super-resolution reconstruction method
CN106228512A (en) * 2016-07-19 2016-12-14 北京工业大学 Based on learning rate adaptive convolutional neural networks image super-resolution rebuilding method
CN106339984A (en) * 2016-08-27 2017-01-18 中国石油大学(华东) Distributed image super-resolution method based on K-means driven convolutional neural network

Also Published As

Publication number Publication date
TWI754764B (en) 2022-02-11
KR102351349B1 (en) 2022-01-13
KR20200015804A (en) 2020-02-12
TW201910929A (en) 2019-03-16
CN110785709A (en) 2020-02-11

Similar Documents

Publication Publication Date Title
US10769761B2 (en) Generating high resolution images from low resolution images for semiconductor applications
KR102637409B1 (en) Generation of high-resolution images from low-resolution images for semiconductor applications
KR102321953B1 (en) A learning-based approach for the alignment of images acquired with various modalities
JP6853273B2 (en) Systems and methods incorporating neural networks and forward physical models for semiconductor applications
CN109074650B (en) Generating simulated images from input images for semiconductor applications
CN108475350B (en) Method and system for accelerating semiconductor defect detection using learning-based model
US11170475B2 (en) Image noise reduction using stacked denoising auto-encoder
US11694327B2 (en) Cross layer common-unique analysis for nuisance filtering
CN110785709B (en) Generating high resolution images from low resolution images for semiconductor applications
KR20230048110A (en) Deep learning-based defect detection
CN115552431A (en) Training a machine learning model to produce higher resolution images from inspection images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant