TWI754764B - Generating high resolution images from low resolution images for semiconductor applications - Google Patents


Info

Publication number
TWI754764B
Authority
TW
Taiwan
Prior art keywords
resolution image
sample
low
resolution
layers
Prior art date
Application number
TW107122445A
Other languages
Chinese (zh)
Other versions
TW201910929A (en)
Inventor
韶拉柏 夏瑪
亞米多斯 辛 丹迪亞那
摩漢 馬哈迪文
房超
艾米爾 亞瑟迪更
布萊恩 杜菲
Original Assignee
美商克萊譚克公司 (KLA-Tencor Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 16/019,422 (external priority; granted as US10769761B2)
Application filed by 美商克萊譚克公司 (KLA-Tencor Corporation)
Publication of TW201910929A
Application granted
Publication of TWI754764B

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F7/00 Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F7/70 Microphotolithographic exposure; Apparatus therefor
    • G03F7/70425 Imaging strategies, e.g. for increasing throughput or resolution, printing product fields larger than the image field or compensating lithography- or non-lithography errors, e.g. proximity correction, mix-and-match, stitching or double patterning
    • G03F7/70483 Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
    • G03F7/70491 Information management, e.g. software; Active and passive control, e.g. details of controlling exposure processes or exposure tool monitoring processes
    • G03F7/705 Modelling or simulating from physical phenomena up to complete wafer processes or whole workflow in wafer productions
    • G03F7/70605 Workpiece metrology
    • G03F7/70616 Monitoring the printed patterns
    • G03F7/70625 Dimensions, e.g. line width, critical dimension [CD], profile, sidewall angle or edge roughness
    • G03F7/70653 Metrology techniques
    • G03F7/70675 Latent image, i.e. measuring the image of the exposed resist prior to development
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L22/00 Testing or measuring during manufacture or treatment; Reliability measurements, i.e. testing of parts without further processing to modify the parts as such; Structural arrangements therefor
    • H01L22/20 Sequence of activities consisting of a plurality of measurements, corrections, marking or sorting steps
    • H01L22/30 Structural arrangements specially adapted for testing or measuring during manufacture or treatment, or specially adapted for reliability measurements

Abstract

Methods and systems for generating a high resolution image for a specimen from a low resolution image of the specimen are provided. One system includes one or more computer subsystems configured for acquiring a low resolution image of a specimen. The system also includes one or more components executed by the one or more computer subsystems. The one or more components include a deep convolutional neural network that includes one or more first layers configured for generating a representation of the low resolution image. The deep convolutional neural network also includes one or more second layers configured for generating a high resolution image of the specimen from the representation of the low resolution image. The second layer(s) include a final layer configured to output the high resolution image and configured as a sub-pixel convolutional layer.

Description

Generating high-resolution images from low-resolution images for semiconductor applications

The present invention generally relates to methods and systems for generating high-resolution images from low-resolution images for semiconductor applications.

The following description and examples are not to be considered prior art by virtue of their inclusion in this section.

Fabricating semiconductor devices such as logic and memory devices typically involves processing a substrate, such as a semiconductor wafer, using a large number of semiconductor fabrication processes to form various features and multiple levels of the semiconductor devices. For example, lithography is a semiconductor fabrication process that involves transferring a pattern from a reticle to a photoresist disposed on a semiconductor wafer. Additional examples of semiconductor fabrication processes include, but are not limited to, chemical-mechanical polishing (CMP), etching, deposition, and ion implantation. Multiple semiconductor devices may be fabricated in an arrangement on a single semiconductor wafer and then separated into individual semiconductor devices.

Inspection processes are used at various steps during a semiconductor manufacturing process to detect defects on samples to promote higher yield in the manufacturing process and thus higher profits. Inspection has always been an important part of fabricating semiconductor devices. However, as the dimensions of semiconductor devices decrease, inspection becomes even more important to the successful manufacture of acceptable semiconductor devices because smaller defects can cause the devices to fail.

Defect review typically involves re-detecting defects that were detected by an inspection process and generating additional information about the defects at a higher resolution using either a high-magnification optical system or a scanning electron microscope (SEM). Defect review is therefore performed at discrete locations on the sample where defects have been detected by inspection. The higher-resolution data for the defects generated by defect review are better suited for determining attributes of the defects, such as profile, roughness, and more accurate size information.

Metrology processes are also used at various steps during a semiconductor manufacturing process to monitor and control the process. Metrology processes differ from inspection processes in that, unlike inspection processes in which defects on a sample are detected, metrology processes are used to measure one or more characteristics of the sample that cannot be determined using currently used inspection tools. For example, metrology processes are used to measure one or more characteristics of a sample, such as a dimension (e.g., line width, thickness, etc.) of features formed on the sample during a process, such that the performance of the process can be determined from the one or more characteristics. In addition, if the one or more characteristics of the sample are unacceptable (e.g., outside a predetermined range for the characteristic(s)), the measurements of the one or more characteristics of the sample may be used to alter one or more parameters of the process such that additional samples manufactured by the process have acceptable characteristic(s).
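The feedback loop described above, i.e., measure a feature dimension, compare it with a predetermined range, and adjust the process when the measurement falls outside that range, can be sketched in a few lines. The nominal value, tolerance, and function name below are hypothetical and purely illustrative; they do not come from the patent.

```python
# Hypothetical process-control check: a measured line width outside a
# predetermined range flags the process for parameter adjustment.
NOMINAL_LINE_WIDTH_NM = 45.0   # assumed target dimension
TOLERANCE_NM = 2.0             # assumed acceptable deviation

def needs_adjustment(measured_nm: float) -> bool:
    """Return True when the measurement falls outside the predetermined range."""
    return abs(measured_nm - NOMINAL_LINE_WIDTH_NM) > TOLERANCE_NM

print(needs_adjustment(45.8))  # within range  -> False
print(needs_adjustment(48.5))  # out of range -> True
```

In a real flow the out-of-range signal would feed back into exposure or etch parameters for subsequent wafers, as the paragraph above describes.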

Metrology processes also differ from defect review processes in that, unlike defect review processes in which defects detected by inspection are revisited, metrology processes may be performed at locations at which no defect has been detected. In other words, unlike defect review, the locations at which a metrology process is performed on a sample may be independent of the results of an inspection process performed on the sample. In particular, the locations at which a metrology process is performed may be selected independently of inspection results.

Thus, as described above, due to the limited resolution with which inspection (optical and sometimes electron beam) is performed, the sample is generally needed to generate additional, higher-resolution images for defect review of the defects detected on the sample, which may include verifying the detected defects, classifying the detected defects, and determining characteristics of the defects. In addition, higher-resolution images are generally needed to determine information for patterned features formed on the sample, as in metrology, regardless of whether defects have been detected in those patterned features. Defect review and metrology can therefore be time-consuming processes that require use of the physical sample itself as well as additional tools (other than the inspector) needed to generate the higher-resolution images.

However, defect review and metrology are not processes that can simply be eliminated to save time and money. For example, due to the resolution with which inspection processes are performed, they do not generally generate image signals or data that can be used to determine information for the detected defects sufficient for classifying the defects and/or determining a root cause of the defects. In addition, due to the resolution with which inspection processes are performed, they do not generally generate image signals or data that can be used to determine, with sufficient accuracy, information for the patterned features formed on the sample.

Accordingly, it would be advantageous to develop systems and methods for generating a high-resolution image of a sample that do not have one or more of the disadvantages described above.

The following description of various embodiments is not to be construed in any way as limiting the subject matter of the appended claims.

One embodiment relates to a system configured to generate a high-resolution image of a sample from a low-resolution image of the sample. The system includes one or more computer subsystems configured for acquiring a low-resolution image of a sample. The system also includes one or more components executed by the one or more computer subsystems. The one or more components include a deep convolutional neural network that includes one or more first layers configured for generating a representation of the low-resolution image. The deep convolutional neural network also includes one or more second layers configured for generating a high-resolution image of the sample from the representation of the low-resolution image. The one or more second layers include a final layer configured to output the high-resolution image, and the final layer is configured as a sub-pixel convolutional layer. The system may be further configured as described herein.

An additional embodiment relates to another system configured to generate a high-resolution image of a sample from a low-resolution image of the sample. This system is configured as described above. This system also includes an imaging subsystem configured for generating the low-resolution image of the sample. In this embodiment, the computer subsystem(s) are configured for acquiring the low-resolution image from the imaging subsystem. This embodiment of the system may be further configured as described herein.

Another embodiment relates to a computer-implemented method for generating a high-resolution image of a sample from a low-resolution image of the sample. The method includes acquiring a low-resolution image of a sample. The method also includes generating a representation of the low-resolution image by inputting the low-resolution image into one or more first layers of a deep convolutional neural network. In addition, the method includes generating a high-resolution image of the sample based on the representation. Generating the high-resolution image is performed by one or more second layers of the deep convolutional neural network. The one or more second layers include a final layer configured to output the high-resolution image, and the final layer is configured as a sub-pixel convolutional layer. The acquiring, generating the representation, and generating the high-resolution image steps are performed by one or more computer systems. One or more components are executed by the one or more computer systems, and the one or more components include the deep convolutional neural network.
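As a concrete illustration of the final-layer operation named above: sub-pixel convolutional layers, as commonly implemented in the super-resolution literature, end with a periodic rearrangement in which a feature map with r²·C channels is interleaved into a C-channel image upscaled by a factor r. Below is a minimal NumPy sketch of that rearrangement only; the surrounding convolutions and all training details are omitted, and this is not the patent's implementation.

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) feature map into a (C, H*r, W*r) image.

    A convolution earlier in the network produces r*r output channels per
    final image channel; this periodic shuffle interleaves those channels
    into an r-times-larger pixel grid, which is what lets the final layer
    output a higher-resolution image directly.
    """
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)      # (C, r_i, r_j, H, W)
    x = x.transpose(0, 3, 1, 4, 2)    # (C, H, r_i, W, r_j)
    return x.reshape(c, h * r, w * r)

# Four 1x1 "channels" become one 2x2 image: upscaling factor r = 2.
features = np.arange(4.0).reshape(4, 1, 1)
print(pixel_shuffle(features, 2)[0])  # [[0. 1.] [2. 3.]]
```

Each output pixel (h·r + i, w·r + j) is read from channel i·r + j at input location (h, w), so resolution is gained by trading channel depth for spatial density rather than by interpolation.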

Each of the steps of the method described above may be performed as described further herein. In addition, the embodiment of the method described above may include any other step(s) of any other method(s) described herein. Furthermore, the method described above may be performed by any of the systems described herein.

Another embodiment relates to a non-transitory computer-readable medium storing program instructions executable on one or more computer systems for performing a computer-implemented method for generating a high-resolution image of a sample from a low-resolution image of the sample. The computer-implemented method includes the steps of the method described above. The computer-readable medium may be further configured as described herein. The steps of the computer-implemented method may be performed as described further herein. In addition, the computer-implemented method for which the program instructions are executable may include any other step(s) of any other method(s) described herein.

Turning now to the drawings, it is noted that the figures are not drawn to scale. In particular, the scale of some of the elements of the figures is greatly exaggerated to emphasize characteristics of those elements. It is also noted that the figures are not drawn to the same scale. Elements shown in more than one figure that may be similarly configured have been indicated using the same reference numerals. Unless otherwise noted herein, any of the elements described and shown may include any suitable commercially available elements.

One embodiment relates to a system configured to generate a high-resolution image of a sample from a low-resolution image of the sample. As described further herein, the embodiments provide platform-independent, data-driven methods and systems for generating stable and robust metrology-quality images. The embodiments can also be used to generate relatively high-quality, de-noised, and super-resolved images. The embodiments can further be used to increase imaging throughput. In addition, the embodiments can be used to generate review images from relatively low-frame, relatively low electrons-per-pixel (e/p) inspection scans. ("Low frame" means a smaller number of image acquisitions at the same location; for better imaging and an increased signal-to-noise ratio, multiple frames are acquired and then combined to improve image quality. "e/p" is essentially electrons per pixel, where a higher e/p means higher quality but lower throughput; a higher e/p is achieved using the beam conditions.)
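The frame/throughput trade-off in the parenthetical above can be illustrated numerically: averaging N independently noisy frames of the same location reduces the noise standard deviation by roughly √N, which is why acquiring fewer frames (higher throughput) yields noisier images. A small simulation under an assumed Gaussian noise model (the noise level and patch size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
flat_region = np.zeros((64, 64))   # idealized featureless patch of a SEM image
sigma = 10.0                       # assumed per-frame noise level

def capture_frame() -> np.ndarray:
    """Simulate one noisy acquisition of the same location."""
    return flat_region + rng.normal(0.0, sigma, flat_region.shape)

single = capture_frame()
averaged = np.mean([capture_frame() for _ in range(16)], axis=0)

print(round(single.std(), 1))    # about 10 (one frame)
print(round(averaged.std(), 1))  # about 2.5 (16 frames: ~sqrt(16)x less noise)
```

The embodiments described herein aim to recover the quality of the many-frame image while acquiring only the fast, low-frame scan.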

The embodiments described herein are applicable to electron beam (e-beam), broadband plasma (BBP), laser scattering, limited-resolution, and metrology platforms, and are used to generate relatively high-quality images from images produced by any of those platforms at a much higher throughput. In other words, images can be generated by an imaging system at relatively high throughput, and therefore relatively low resolution, and then transformed into relatively high-resolution images by the embodiments described herein, which means that high-resolution images can effectively be generated at relatively high throughput. The embodiments described herein advantageously provide a learned transformation between a relatively low-resolution imaging manifold and a relatively high-resolution imaging manifold, noise reduction, and a transfer of quality from higher-quality scans to lower-quality scans. An imaging "manifold" can generally be defined as a theoretical probability space of all possible images.

The term "low-resolution image" of a sample, as used herein, is generally defined as an image in which not all of the patterned features formed in the area of the sample at which the image was generated are resolved. For example, if the size of some of the patterned features in the area of the sample for which a low-resolution image was generated is large enough to make them resolvable, those patterned features may be resolved in the low-resolution image. However, the low-resolution image is not generated at a resolution that renders all patterned features in the image resolvable. In this manner, a "low-resolution image" as used herein does not contain information about the patterned features on the sample that is sufficient for the low-resolution image to be used for applications such as defect review (which may include defect classification and/or verification) and metrology. In addition, a "low-resolution image" as used herein generally refers to images generated by inspection systems, which typically have relatively lower resolution (e.g., lower than defect review and/or metrology systems) in order to have relatively fast throughput. In this manner, a "low-resolution image" may also be commonly referred to as a high-throughput or HT image. For example, to generate images at higher throughput, the e/p and the number of frames may be reduced, thereby resulting in lower-quality scanning electron microscope (SEM) images.

A "low-resolution image" may also be "low resolution" in that it has a lower resolution than a "high-resolution image" as described herein. The term "high-resolution image" as used herein can generally be defined as an image in which all patterned features of the sample are resolved with relatively high accuracy. In this manner, all of the patterned features in the area of the sample for which a high-resolution image is generated are resolved in the high-resolution image, regardless of their size. As such, a "high-resolution image" as used herein contains information about the patterned features of the sample that is sufficient for the high-resolution image to be used for applications such as defect review (which may include defect classification and/or verification) and metrology. In addition, a "high-resolution image" as used herein generally refers to images that cannot be generated by inspection systems during routine operation, since such systems are configured to sacrifice resolution capability for increased throughput. In this manner, a "high-resolution image" may also be referred to herein and in the art as a "high-sensitivity image," which is another term for a "high-quality image." For example, to generate high-quality images, the e/p, the number of frames, etc. may be increased, which produces good-quality SEM images but significantly reduces throughput. Such images are then "high-sensitivity" images in that they can be used for high-sensitivity defect detection.

In contrast to the embodiments described further herein, most older methods use heuristics and hand-picked parameters to generate relatively noise-free images. Such methods are usually designed with the statistical properties of the images they will run on in mind and, as such, cannot be ported to other platforms without incorporating the heuristics of those platforms. Some of the well-known methods for noise reduction in images are anisotropic diffusion, bilateral filters, Wiener filters, non-local means, and the like. Bilateral filters and Wiener filters remove pixel-level noise by using a filter designed from neighboring pixels. Anisotropic diffusion applies the law of diffusion to an image, whereby it smooths the texture/intensity in the image according to the diffusion equation. A limiting function is used to prevent diffusion from occurring across edges, and the limiting function therefore largely preserves the edges in the image.
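The edge-preserving behavior described above can be sketched in a few lines with a minimal Perona-Malik-style scheme (one common form of anisotropic diffusion). The parameter values below are illustrative only, not tuned for any particular imaging platform, and the periodic boundary handling via `np.roll` is a simplification.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    """Minimal Perona-Malik diffusion: smooth texture, preserve edges.

    The conduction (limiting) function g = exp(-(grad/kappa)^2) approaches
    zero at strong gradients, so diffusion is suppressed across edges while
    weak, noise-level gradients are smoothed away.
    """
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Nearest-neighbour differences (periodic boundaries via roll).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: diffusion flattens the noise but keeps the 0 -> 100 jump,
# because g is tiny where the gradient (100) far exceeds kappa (30).
rng = np.random.default_rng(seed=1)
step = np.where(np.arange(32) < 16, 0.0, 100.0) * np.ones((32, 1))
noisy = step + rng.normal(0.0, 5.0, (32, 32))
smooth = anisotropic_diffusion(noisy)
```

Note the contrast with the data-driven approach of the embodiments: here `kappa`, `lam`, and `n_iter` must be hand-tuned per image type, which is exactly the portability limitation the following paragraph discusses.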

A disadvantage of the older methods such as Wiener and bilateral filtering is that they are parametric methods that need to be fine-tuned at the image level for best results. These methods are not data-driven, which limits the performance they can achieve on challenging imaging types. Another limitation is that most of their processing is done in-line, which, due to throughput limitations, restricts the use cases in which they can be employed.

One embodiment of a system configured to generate a high-resolution image of a sample from a low-resolution image of the sample is shown in FIG. 1. The system includes one or more computer subsystems (e.g., computer subsystem 36 and computer subsystem(s) 102) and one or more components 100 executed by the one or more computer subsystems. In some embodiments, the system includes imaging system (or subsystem) 10 configured to generate low-resolution images of a sample. In the embodiment of FIG. 1, the imaging system is configured for scanning light over, or directing light to, a physical version of the sample while detecting light from the sample to thereby generate the images of the sample. The imaging system may also be configured to perform the scanning (or directing) and detecting with multiple modes.

In one embodiment, the sample is a wafer. The wafer may include any wafer known in the art. In another embodiment, the sample is a reticle. The reticle may include any reticle known in the art.

In one embodiment, the imaging system is an optical-based imaging system. In one such example, in the embodiment of the system shown in FIG. 1, optical-based imaging system 10 includes an illumination subsystem configured to direct light to sample 14. The illumination subsystem includes at least one light source. For example, as shown in FIG. 1, the illumination subsystem includes light source 16. In one embodiment, the illumination subsystem is configured to direct the light to the sample at one or more angles of incidence, which may include one or more oblique angles and/or one or more normal angles. For example, as shown in FIG. 1, light from light source 16 is directed through optical element 18 and then through lens 20 to sample 14 at an oblique angle of incidence. The oblique angle of incidence may include any suitable oblique angle of incidence, which may vary depending on, for instance, the characteristics of the sample.

The imaging system may be configured to direct the light to the sample at different angles of incidence at different times. For example, the imaging system may be configured to alter one or more characteristics of one or more elements of the illumination subsystem such that the light can be directed to the sample at an angle of incidence different from that shown in FIG. 1. In one such example, the imaging system may be configured to move light source 16, optical element 18, and lens 20 such that the light is directed to the sample at a different oblique angle of incidence or at a normal (or near-normal) angle of incidence.

In some instances, the imaging system may be configured to direct light to the sample at more than one angle of incidence at the same time. For example, the illumination subsystem may include more than one illumination channel; one of the illumination channels may include light source 16, optical element 18, and lens 20 as shown in FIG. 1, and another of the illumination channels (not shown) may include similar elements, which may be configured differently or the same, or may include at least a light source and possibly one or more other components such as those described further herein. If such light is directed to the sample at the same time as the other light, one or more characteristics (e.g., wavelength, polarization, etc.) of the light directed to the sample at the different angles of incidence may be different, such that light resulting from illumination of the sample at the different angles of incidence can be distinguished from each other at the detector(s).

In another instance, the illumination subsystem may include only one light source (e.g., source 16 shown in FIG. 1), and light from the light source may be separated into different optical paths (e.g., based on wavelength, polarization, etc.) by one or more optical elements (not shown) of the illumination subsystem. Light in each of the different optical paths may then be directed to the sample. Multiple illumination channels may be configured to direct light to the sample at the same time or at different times (e.g., when different illumination channels are used to sequentially illuminate the sample). In another instance, the same illumination channel may be configured to direct light to the sample with different characteristics at different times. For example, in some instances, optical element 18 may be configured as a spectral filter, and the properties of the spectral filter can be changed in a variety of different ways (e.g., by swapping out the spectral filter) such that different wavelengths of light can be directed to the sample at different times. The illumination subsystem may have any other suitable configuration known in the art for directing light having different or the same characteristics to the sample at different or the same angles of incidence sequentially or simultaneously.

In one embodiment, light source 16 may include a broadband plasma (BBP) light source. In this manner, the light generated by the light source and directed to the sample may include broadband light. However, the light source may include any other suitable light source such as a laser. The laser may include any suitable laser known in the art and may be configured to generate light at any suitable wavelength or wavelengths known in the art. In addition, the laser may be configured to generate light that is monochromatic or nearly monochromatic. In this manner, the laser may be a narrowband laser. The light source may also include a polychromatic light source that generates light at multiple discrete wavelengths or wavebands.

Light from optical element 18 may be focused onto sample 14 by lens 20. Although lens 20 is shown in FIG. 1 as a single refractive optical element, it is to be understood that, in practice, lens 20 may include a number of refractive and/or reflective optical elements that in combination focus the light from the optical element to the sample. The illumination subsystem shown in FIG. 1 and described herein may include any other suitable optical elements (not shown). Examples of such optical elements include, but are not limited to, polarizing components, spectral filters, spatial filters, reflective optical elements, apodizers, beam splitters, apertures, and the like, which may include any such suitable optical elements known in the art. In addition, the imaging system may be configured to alter one or more of the elements of the illumination subsystem based on the type of illumination to be used for imaging.

成像系統亦可包含經組態以致使光對樣品進行掃描之一掃描子系統。舉例而言,成像系統可包含載台22,在檢驗期間將樣品14安置於該載台上。掃描子系統可包含可經組態以移動樣品使得光可對樣品進行掃描之任何適合機械及/或機器人總成(其包含載台22)。另外或另一選擇係,成像系統可經組態使得成像系統之一或多個光學元件執行光對樣品之某一掃描。光可以任何適合方式(諸如以一蛇形路徑或以一螺旋路徑)對樣品進行掃描。The imaging system may also include a scanning subsystem configured to cause the light to scan the sample. For example, the imaging system may include stage 22 on which sample 14 is placed during inspection. The scanning subsystem can include any suitable mechanical and/or robotic assembly (including stage 22) that can be configured to move the sample so that light can scan the sample. Additionally or alternatively, the imaging system may be configured such that one or more optical elements of the imaging system perform some scan of the sample with light. The light may scan the sample in any suitable manner, such as in a serpentine path or in a helical path.

The imaging system further includes one or more detection channels. At least one of the one or more detection channels includes a detector configured to detect light from the sample due to illumination of the sample by the system and to generate output responsive to the detected light. For example, the imaging system shown in FIG. 1 includes two detection channels, one formed by collector 24, element 26, and detector 28 and another formed by collector 30, element 32, and detector 34. As shown in FIG. 1, the two detection channels are configured to collect and detect light at different angles of collection. In some instances, both detection channels are configured to detect scattered light, and the detection channels are configured to detect light that is scattered at different angles from the sample. However, one or more of the detection channels may be configured to detect another type of light from the sample (e.g., reflected light).

As further shown in FIG. 1, both detection channels are shown positioned in the plane of the paper, and the illumination subsystem is also shown positioned in the plane of the paper. Therefore, in this embodiment, both detection channels are positioned in (e.g., centered in) the plane of incidence. However, one or more of the detection channels may be positioned out of the plane of incidence. For example, the detection channel formed by collector 30, element 32, and detector 34 may be configured to collect and detect light that is scattered out of the plane of incidence. Therefore, such a detection channel may be commonly referred to as a "side" channel, and such a side channel may be centered in a plane that is substantially perpendicular to the plane of incidence.

Although FIG. 1 shows an embodiment of the imaging system that includes two detection channels, the imaging system may include a different number of detection channels (e.g., only one detection channel or two or more detection channels). In one such instance, the detection channel formed by collector 30, element 32, and detector 34 may form one side channel as described above, and the imaging system may include an additional detection channel (not shown) formed as another side channel that is positioned on the opposite side of the plane of incidence. Therefore, the imaging system may include the detection channel that includes collector 24, element 26, and detector 28, that is centered in the plane of incidence, and that is configured to collect and detect light at scattering angle(s) that are at or close to normal to the sample surface. This detection channel may therefore be commonly referred to as a "top" channel, and the imaging system may also include two or more side channels configured as described above. As such, the imaging system may include at least three channels (i.e., one top channel and two side channels), and each of the at least three channels has its own collector, each of which is configured to collect light at different scattering angles than each of the other collectors.

As described further above, each of the detection channels included in the imaging system may be configured to detect scattered light. Therefore, the imaging system shown in FIG. 1 may be configured for dark field (DF) imaging of samples. However, the imaging system may also or alternatively include detection channel(s) that are configured for bright field (BF) imaging of samples. In other words, the imaging system may include at least one detection channel that is configured to detect light specularly reflected from the sample. Therefore, the imaging systems described herein may be configured for only DF imaging, only BF imaging, or both DF and BF imaging. Although each of the collectors is shown in FIG. 1 as a single refractive optical element, it is to be understood that each of the collectors may include one or more refractive optical elements and/or one or more reflective optical elements.

The one or more detection channels may include any suitable detectors known in the art. For example, the detectors may include photo-multiplier tubes (PMTs), charge coupled devices (CCDs), time delay integration (TDI) cameras, and any other suitable detectors known in the art. The detectors may also include non-imaging detectors or imaging detectors. In this manner, if the detectors are non-imaging detectors, each of the detectors may be configured to detect certain characteristics of the scattered light such as intensity, but may not be configured to detect such characteristics as a function of position within the imaging plane. As such, the output that is generated by each of the detectors included in each of the detection channels of the imaging system may be signals or data, but not image signals or image data. In such instances, a computer subsystem such as computer subsystem 36 may be configured to generate images of the sample from the non-imaging output of the detectors. However, in other instances, the detectors may be configured as imaging detectors that are configured to generate image signals or image data. Therefore, the imaging system may be configured to generate the images described herein in a number of ways.

It is noted that FIG. 1 is provided herein to generally illustrate a configuration of an imaging system or subsystem that may be included in the system embodiments described herein or that may generate images that are used by the system embodiments described herein. Obviously, the imaging system configuration described herein may be altered to optimize the performance of the imaging system as is normally performed when designing a commercial imaging system. In addition, the systems described herein may be implemented using an existing system (e.g., by adding functionality described herein to an existing system) such as the 29xx/39xx and Puma 9xxx series of tools that are commercially available from KLA-Tencor, Milpitas, Calif. For some such systems, the embodiments described herein may be provided as optional functionality of the system (e.g., in addition to other functionality of the system). Alternatively, the imaging system described herein may be designed "from scratch" to provide a completely new imaging system.

Computer subsystem 36 of the imaging system may be coupled to the detectors of the imaging system in any suitable manner (e.g., via one or more transmission media, which may include "wired" and/or "wireless" transmission media) such that the computer subsystem can receive the output generated by the detectors during scanning of the sample. Computer subsystem 36 may be configured to perform a number of functions described further herein using the output of the detectors.

The computer subsystem shown in FIG. 1 (as well as the other computer subsystems described herein) may also be referred to herein as computer system(s). Each of the computer subsystem(s) or system(s) described herein may take various forms, including a personal computer system, image computer, mainframe computer system, workstation, network appliance, Internet appliance, or other device. In general, the term "computer system" may be broadly defined to encompass any device having one or more processors that executes instructions from a memory medium. The computer subsystem(s) or system(s) may also include any suitable processor known in the art such as a parallel processor. In addition, the computer subsystem(s) or system(s) may include a computer platform with high speed processing and software, either as a standalone or a networked tool.

If the system includes more than one computer subsystem, then the different computer subsystems may be coupled to each other such that images, data, information, instructions, etc. can be sent between the computer subsystems as described further herein. For example, computer subsystem 36 may be coupled to computer subsystem(s) 102 (as shown by the dashed line in FIG. 1) by any suitable transmission media, which may include any suitable wired and/or wireless transmission media known in the art. Two or more of such computer subsystems may also be effectively coupled by a shared computer-readable storage medium (not shown).

Although the imaging system is described above as being an optical or light-based imaging system, the imaging system may be an electron beam-based imaging system. In one such embodiment shown in FIG. 1a, the imaging system includes electron column 122 coupled to computer subsystem 124. As also shown in FIG. 1a, the electron column includes electron beam source 126 configured to generate electrons that are focused to sample 128 by one or more elements 130. The electron beam source may include, for example, a cathode source or emitter tip, and the one or more elements 130 may include, for example, a gun lens, an anode, a beam limiting aperture, a gate valve, a beam current selection aperture, an objective lens, and a scanning subsystem, all of which may include any such suitable elements known in the art.

Electrons returned from the sample (e.g., secondary electrons) may be focused by one or more elements 132 to detector 134. The one or more elements 132 may include, for example, a scanning subsystem, which may be the same scanning subsystem included in element(s) 130.

The electron column may include any other suitable elements known in the art. In addition, the electron column may be further configured as described in U.S. Patent No. 8,664,594 issued April 4, 2014 to Jiang et al., U.S. Patent No. 8,692,204 issued April 8, 2014 to Kojima et al., U.S. Patent No. 8,698,093 issued April 15, 2014 to Gubbens et al., and U.S. Patent No. 8,716,662 issued May 6, 2014 to MacDonald et al., which are incorporated by reference as if fully set forth herein.

Although the electron column is shown in FIG. 1a as being configured such that the electrons are directed to the sample at an oblique angle of incidence and are scattered from the sample at another oblique angle, it is to be understood that the electron beam may be directed to and scattered from the sample at any suitable angles. In addition, the electron beam-based imaging system may be configured to use multiple modes to generate images of the sample (e.g., with different illumination angles, collection angles, etc.) as described further herein. The multiple modes of the electron beam-based imaging system may be different in any image generation parameters of the imaging system.

Computer subsystem 124 may be coupled to detector 134 as described above. The detector may detect electrons returned from the surface of the sample, thereby forming electron beam images of the sample. The electron beam images may include any suitable electron beam images. Computer subsystem 124 may be configured to perform one or more functions described further herein for the sample using output generated by detector 134. Computer subsystem 124 may be configured to perform any additional step(s) described herein. A system that includes the imaging system shown in FIG. 1a may be further configured as described herein.

It is noted that FIG. 1a is provided herein to generally illustrate a configuration of an electron beam-based imaging system that may be included in the embodiments described herein. As with the optics-based imaging system described above, the electron beam-based imaging system configuration described herein may be altered to optimize the performance of the imaging system as is normally performed when designing a commercial imaging system. In addition, the systems described herein may be implemented using an existing system (e.g., by adding functionality described herein to an existing system) such as the eSxxx and eDR-xxxx series of tools that are commercially available from KLA-Tencor. For some such systems, the embodiments described herein may be provided as optional functionality of the system (e.g., in addition to other functionality of the system). Alternatively, the system described herein may be designed "from scratch" to provide a completely new system.

Although the imaging system is described above as being an optics-based or electron beam-based imaging system, the imaging system may be an ion beam-based imaging system. Such an imaging system may be configured as shown in FIG. 2 except that the electron beam source may be replaced with any suitable ion beam source known in the art. In addition, the imaging system may be any other suitable ion beam-based imaging system such as those included in commercially available focused ion beam (FIB) systems, helium ion microscopy (HIM) systems, and secondary ion mass spectroscopy (SIMS) systems.

As noted above, the imaging system is configured for scanning energy (e.g., light or electrons) over a physical version of the sample, thereby generating actual images for the physical version of the sample. In this manner, the imaging system may be configured as an "actual" system, rather than a "virtual" system. For example, a storage medium (not shown) and computer subsystem(s) 102 shown in FIG. 1 may be configured as a "virtual" system. In particular, the storage medium and the computer subsystem(s) are not part of imaging system 10 and do not have any capability for handling the physical version of the sample. In other words, in systems configured as virtual systems, the output of its one or more "detectors" may be output that was previously generated by one or more detectors of an actual system and that is stored in the virtual system, and during the "scanning," the virtual system may replay the stored output as though the sample is being scanned. In this manner, scanning the sample with a virtual system may appear to be the same as though a physical sample is being scanned with an actual system, while, in reality, the "scanning" involves simply replaying output for the sample in the same manner as the sample may be scanned. Systems and methods configured as "virtual" inspection systems are described in commonly assigned U.S. Patent No. 8,126,255 issued February 28, 2012 to Bhaskar et al. and U.S. Patent No. 9,222,895 issued December 29, 2015 to Duffy et al., both of which are incorporated by reference as if fully set forth herein. The embodiments described herein may be further configured as described in these patents. For example, the one or more computer subsystems described herein may be further configured as described in these patents. In addition, arranging the one or more virtual systems as a central compute and storage (CCS) system may be performed as described in the above-referenced patent to Duffy. The persistent storage mechanisms described herein can have distributed computing and storage such as the CCS architecture, but the embodiments described herein are not limited to that architecture.
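The record-then-replay behavior of such a "virtual" system can be sketched as follows. This is a minimal illustration only; the class and method names are assumptions for the sketch and are not drawn from the referenced patents.

```python
class VirtualInspector:
    """Stores detector output recorded on an actual tool and replays it
    during a simulated "scan," with no physical sample present."""

    def __init__(self):
        self._stored = {}  # (row, col) position -> detector output

    def record(self, position, detector_output):
        # Done once, on the actual tool, while physically scanning the sample.
        self._stored[position] = detector_output

    def scan(self):
        # Replays the stored output in scan order, as though the sample
        # were being scanned again.
        for position in sorted(self._stored):
            yield position, self._stored[position]
```

A downstream consumer of `scan()` sees the same stream of per-position output it would see from an actual tool.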

As further noted above, the imaging system may be configured to generate images of the sample with multiple modes. In general, a "mode" is defined by the values of parameters of the imaging system used for generating images of a sample or the output used to generate images of the sample. Therefore, modes that are different may be different in the values for at least one of the imaging parameters of the imaging system. For example, in one embodiment of an optics-based imaging system, at least one of the multiple modes uses at least one wavelength of light for illumination that is different from at least one wavelength of light used for illumination by at least one other of the multiple modes. The modes may be different in the illumination wavelength (e.g., by using different light sources, different spectral filters, etc.) as described further herein for different modes. In another embodiment, at least one of the multiple modes uses an illumination channel of the imaging system that is different from an illumination channel of the imaging system used for at least one other of the multiple modes. For example, as noted above, the imaging system may include more than one illumination channel. As such, different illumination channels may be used for different modes.
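The definition above, that a mode is a set of imaging-parameter values and two modes are distinct when they differ in at least one value, can be made concrete with a small sketch. The parameter names below are illustrative assumptions, not taken from any actual tool.

```python
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class Mode:
    """One imaging 'mode' as a bundle of parameter values (illustrative)."""
    wavelength_nm: float
    illumination_channel: str
    polarization: str


def modes_differ(a: Mode, b: Mode) -> bool:
    # Two modes are distinct if at least one imaging parameter differs.
    return asdict(a) != asdict(b)
```

For example, `Mode(266.0, "oblique", "p")` and `Mode(365.0, "oblique", "p")` are distinct modes because they differ in illumination wavelength only.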

In one embodiment, the imaging system is an inspection system. For example, the optical and electron beam imaging systems described herein may be configured as inspection systems. In another embodiment, the imaging system is a defect review system. For example, the optical and electron beam imaging systems described herein may be configured as defect review systems. In a further embodiment, the imaging system is a metrology system. For example, the optical and electron beam imaging systems described herein may be configured as metrology systems. In particular, the embodiments of the imaging systems described herein and shown in FIG. 1 and FIG. 1a may be modified in one or more parameters to provide different imaging capabilities depending on the application for which they will be used. In one such example, the imaging system shown in FIG. 1 may be configured to have a higher resolution if it is to be used for defect review or metrology rather than for inspection. In other words, the embodiments of the imaging system shown in FIG. 1 and FIG. 1a describe some general and various configurations for an imaging system that can be tailored in a number of manners that will be obvious to one skilled in the art to produce imaging systems having different imaging capabilities that are more or less suitable for different applications.

The one or more computer subsystems are configured for acquiring a low resolution image of a sample. Acquiring the low resolution image may be performed using one of the imaging systems described herein (e.g., by directing light or an electron beam to the sample and detecting light or an electron beam, respectively, from the sample). In this manner, acquiring the low resolution image may be performed using the physical sample itself and some sort of imaging hardware. However, acquiring the low resolution image does not necessarily include imaging the sample using imaging hardware. For example, another system and/or method may generate the low resolution image and may store the generated low resolution image in one or more storage media such as a virtual inspection system as described herein or another storage medium described herein. Therefore, acquiring the low resolution image may include acquiring the low resolution image from the storage medium in which it has been stored.

In some embodiments, the low resolution image is generated by an inspection system. For example, as described herein, the low resolution image may be generated by an inspection system that is configured to have a lower resolution to thereby increase its throughput. The inspection system may be an optical inspection system or an electron beam inspection system. The inspection system may have any configuration described further herein.

In one embodiment, the low resolution image is generated by an electron beam-based imaging system. In another embodiment, the low resolution image is generated by an optics-based imaging system. For example, the low resolution image may be generated by any of the electron beam-based or optics-based imaging systems described herein.

In one embodiment, the low resolution image is generated with a single mode of an imaging system. In another embodiment, one or more low resolution images of the sample are generated with multiple modes of an imaging system. For example, the low resolution image(s) input to the deep convolutional neural network (deep CNN) as described further herein may include only a single low resolution image generated with only a single mode of the imaging system. Alternatively, the low resolution image(s) input to the deep CNN as described further herein may include multiple low resolution images generated with multiple modes of the imaging system (e.g., a first image generated with a first mode, a second image generated with a second mode, and so on). The single mode and the multiple modes may include any of the modes described further herein.

The component(s) (e.g., component(s) 100 shown in FIG. 1) executed by the computer subsystem(s) (e.g., computer subsystem 36 and/or computer subsystem(s) 102) include deep CNN 104. The deep CNN includes one or more first layers configured for generating a representation of the low resolution image and one or more second layers configured for generating a high resolution image of the sample from the representation of the low resolution image. In this manner, the embodiments described herein may use one of the deep CNNs (e.g., one or more machine learning techniques) described herein for transforming a low resolution image of a sample into a high resolution image of the sample. For example, as shown in FIG. 2, the deep CNN is shown as image transformation network 200. During production and/or runtime (i.e., after the image transformation network has been set up and/or trained, which may be performed as described further herein), the input to the image transformation network may be input low resolution (high throughput) image 202, and the output of the image transformation network may be output high resolution (high sensitivity) image 204.

The one or more second layers include a final layer configured to output the high-resolution image, and the final layer is configured as a sub-pixel convolutional layer. FIG. 3 illustrates one embodiment of an image transformation network architecture that may be suitable for use in the embodiments described herein. In this embodiment, the image transformation network is a deep CNN with a sub-pixel layer as the final layer. In this architecture, the input may be low-resolution image 300, which is shown in FIG. 3 as merely a grid of pixels and does not represent any particular low-resolution image that may be generated by the embodiments described herein. The low-resolution image may be input to one or more first layers 302 and 304, which may be configured as convolutional layers configured for feature map extraction. These first layers may form the hidden layers of the image transformation network architecture.

The representation of the low-resolution image generated by the one or more first layers may thus be one or more features and/or feature maps. The features may be any suitable feature type known in the art that can be inferred from the input and used to generate the output described further herein. For example, the features may include a vector of intensity values per pixel. The features may also include any other types of features described herein, e.g., vectors of scalar values, vectors of independent distributions, vectors of joint distributions, or any other suitable feature type known in the art. As described further herein, the features are learned by the network during training and may or may not be related to any real features known in the art.

The one or more second layers include final layer 306, which is configured as a sub-pixel convolutional layer that aggregates the feature maps from low-resolution space and builds high-resolution image 308 in a single step. The sub-pixel convolutional layer learns an array of upscaling filters to upscale the final low-resolution feature maps into the high-resolution output image. In this manner, the image transformation network can take a noisy, poorly resolved, high-throughput input image, compute feature maps across many convolutional layers, and then use the sub-pixel layer to transform the feature maps into a relatively quiet, super-resolved image. The sub-pixel convolutional layer advantageously provides relatively complex upscaling filters specifically trained for each feature map while also reducing the computational complexity of the overall operation. The deep CNNs used in the embodiments described herein may be further configured as described in "Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network," by Shi et al., arXiv:1609.05158v2, September 2016, which is incorporated by reference as if fully set forth herein.
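For illustration only, the periodic shuffling (depth-to-space) rearrangement performed by such a sub-pixel layer may be sketched as follows. The function name and the list-of-lists image representation are illustrative assumptions, not part of the embodiments described herein; the learned upscaling filters themselves are omitted, and only the rearrangement of r*r feature maps into one upscaled image is shown.

```python
def pixel_shuffle(feature_maps, r):
    """Rearrange r*r low-resolution feature maps (each H x W) into a single
    (H*r) x (W*r) high-resolution image, as in the periodic shuffling step of
    a sub-pixel convolutional layer (Shi et al., 2016).

    feature_maps: list of r*r maps, each a list of H rows of W values.
    """
    assert len(feature_maps) == r * r
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    out = [[0.0] * (w * r) for _ in range(h * r)]
    for c, fmap in enumerate(feature_maps):
        # The channel index encodes the sub-pixel offset within each r x r block.
        dy, dx = divmod(c, r)
        for y in range(h):
            for x in range(w):
                out[y * r + dy][x * r + dx] = fmap[y][x]
    return out
```

With r = 2, four 2 x 2 feature maps are interleaved into one 4 x 4 output, so the upscaling happens in a single step rather than through repeated interpolation.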

The deep CNNs described herein may generally be classified as deep learning models. Generally speaking, "deep learning" (also known as deep structured learning, hierarchical learning, or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data. In a simple case, there may be two sets of neurons: ones that receive an input signal and ones that send an output signal. When the input layer receives an input, it passes a modified version of the input to the next layer. In a deep network, there are many layers between the input and the output (and the layers are not made of neurons, but it can help to think of them that way), allowing the algorithm to use multiple processing layers composed of multiple linear and non-linear transformations.

Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of a particular shape, etc. Some representations are better than others at simplifying the learning task (e.g., face recognition or facial expression recognition). One of the promises of deep learning is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.

Research in this area attempts to make better representations and create models to learn these representations from large-scale unlabeled data. Some of these representations are inspired by advances in neuroscience and are loosely based on interpretation of information processing and communication patterns in a nervous system, such as neural coding, which attempts to define a relationship between various stimuli and the associated neuronal responses in the brain.

The deep CNNs described herein may also be classified as machine learning models. Machine learning can be generally defined as a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. In other words, machine learning can be defined as the subfield of computer science that "gives computers the ability to learn without being explicitly programmed." Machine learning explores the study and construction of algorithms that can learn from and make predictions on data; such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs.

The machine learning described herein may be further performed as described in "Introduction to Statistical Machine Learning," by Sugiyama, Morgan Kaufmann, 2016, 534 pages; "Discriminative, Generative, and Imitative Learning," by Jebara, MIT Thesis, 2002, 212 pages; and "Principles of Data Mining (Adaptive Computation and Machine Learning)," by Hand et al., MIT Press, 2001, 578 pages, which are incorporated by reference as if fully set forth herein. The embodiments described herein may be further configured as described in these references.

The deep CNN is also a generative model. A "generative" model can be generally defined as a model that is probabilistic in nature. In other words, a "generative" model is not one that performs forward simulation or a rule-based approach and, as such, a model of the physics of the processes involved in generating an actual image (for which a simulated image is being generated) is not necessary. Instead, as described further herein, the generative model can be learned (in that its parameters can be learned) based on a suitable training set of data.

In one embodiment, the deep CNN is a deep generative model. For example, the deep CNN may be configured to have a deep learning architecture in that the model may include multiple layers that perform a number of algorithms or transformations. The number of layers on one or both sides of the deep CNN may vary from the numbers of layers shown in the drawings described herein. For practical purposes, a suitable range of layers on both sides is from 2 layers to a few tens of layers.

The deep CNN may also be a deep neural network with a set of weights that model the world according to the data that it has been fed to train it. Neural networks can be generally defined as a computational approach based on a relatively large collection of neural units, loosely modeling the way a biological brain solves problems with relatively large clusters of biological neurons connected by axons. Each neural unit is connected with many others, and links can enforce or inhibit their effect on the activation state of the connected neural units. These systems are self-learning and trained rather than explicitly programmed and excel in areas where the solution or feature detection is difficult to express in a traditional computer program.

Neural networks typically consist of multiple layers, and the signal path traverses from front to back. The goal of the neural network is to solve problems in the same way that the human brain would, although several neural networks are much more abstract. Modern neural network projects typically work with a few thousand to a few million neural units and millions of connections. The neural network may have any suitable architecture and/or configuration known in the art.

The embodiments described herein may or may not be configured for training the deep CNN used for generating a high-resolution image from a low-resolution image. For example, another method and/or system may be configured to generate a trained deep CNN, which can then be accessed and used by the embodiments described herein. In general, training the deep CNN may include acquiring data (e.g., both low- and high-resolution images, which may include any of the low- and high-resolution images described herein). A training, test, and validation dataset can then be constructed using a list of input tuples and expected output tuples. The input tuples may have the form of low-resolution images, and the output tuples may be the high-resolution images corresponding to the low-resolution images. The training dataset may then be used to train the deep CNN.
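As an illustrative sketch of the dataset-construction step just described, the list of (low-resolution, high-resolution) image pairs may be shuffled and partitioned into training, test, and validation subsets. The function name, the split fractions, and the fixed seed below are assumptions for illustration and are not specified by the embodiments described herein.

```python
import random

def split_dataset(pairs, train_frac=0.8, test_frac=0.1, seed=0):
    """Shuffle (low_res, high_res) tuples and split them into
    training, test, and validation subsets."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # deterministic shuffle for repeatability
    n = len(pairs)
    n_train = int(n * train_frac)
    n_test = int(n * test_frac)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_test],
            pairs[n_train + n_test:])
```

The training subset is used to fit the network, while the held-out test and validation subsets help detect over-fitting of the learned transformation.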

In one embodiment, the one or more components include a context-aware loss module configured for training the deep CNN, and during the training of the deep CNN, the one or more computer subsystems input the high-resolution image generated by the one or more second layers and a corresponding known high-resolution image of the specimen into the context-aware loss module, and the context-aware loss module determines a context-aware loss of the high-resolution image generated by the one or more second layers compared to the corresponding known high-resolution image. For example, as shown in FIG. 4, the deep CNN is shown as image transformation network 400. This figure shows the deep CNN during training or at setup time. The input to the image transformation network is low-resolution (high-throughput) image 402, which may be generated as described further herein. The image transformation network may then output high-resolution (high-sensitivity) image 404, as described further herein. The output high-resolution image and the corresponding known high-resolution image (e.g., a "ground truth" high-sensitivity image) 406 may be input to context-aware loss module 408. In this manner, the complete network architecture of the embodiments described herein may include two blocks: the image transformation network and the context-aware loss. Context-aware loss module 408 may compare the two images that it receives as input (i.e., the high-resolution image generated by the image transformation network and the ground truth high-resolution image generated by, for example, an imaging system) to determine one or more differences between the two input images. The context-aware loss module may be further configured as described herein.

In this manner, at setup time, the embodiments take pairs of noisy, poorly resolved images and quiet, super-resolved images and then learn the transformation matrix between the images through a neural network using the context-aware loss. The term "noisy" as used herein can be generally defined as an image having a relatively low signal-to-noise ratio (SNR), while the term "quiet" as used herein can be generally defined as an image having a relatively high SNR. These terms are therefore used in those senses interchangeably herein. The image pairs may come from any of the imaging platforms that are commercially available from KLA-Tencor (and other companies), such as electron beam (ebeam) tools, BBP tools, a limited-resolution imaging tool, etc. Once training is complete, the network learns the transformation from noisy, poorly resolved images to quiet, super-resolved images while maintaining spatial fidelity. In this manner, the embodiments described herein use a data-driven approach to exploit the data redundancy observed in semiconductor images by learning a transformation between noisy, poorly resolved images and quiet, super-resolved images. The trained network can then be deployed in production, where the imaging system generates noisy, high-throughput data, which is then transformed into corresponding low-noise, super-resolved data using the trained image transformation network. Once in production, the network executes like a typical post-processing algorithm.

In one such embodiment, the context-aware loss includes content loss, style loss, and total variation (TV) regularization. FIG. 5 shows one such embodiment. In particular, context-aware loss module 408 shown in FIG. 4 may include content loss module 500, style loss module 502, and TV regularization module 504, as shown in FIG. 5. For example, the context-aware loss is a generic framework and is expressed through the style and content losses. Deep neural networks tend to learn image features gradually, starting from edges and shapes in the lower layers up to more complex features (e.g., faces), or possibly entire objects, in subsequent layers. This correlates well with biological vision. We hypothesize that the lower layers of a convolutional network learn features that are considered perceptually important. Therefore, we design our context-aware loss based on the activations of a learned network. The context-aware loss is primarily composed of style, content, and regularization losses.

In one such embodiment, the content loss includes loss in the low-level features of the corresponding known high-resolution image. For example, the content of an image is defined as its lower-level features such as edges, shapes, etc. Minimizing the content loss helps retain these low-level features, which are important for generating metrology-grade, super-resolved images. To be clearer, the content loss is included in the loss function to preserve the edges and shapes in the image, since these are important for performing measurements, etc. on the high-resolution images. Traditional techniques such as bicubic interpolation or training with an L2 loss do not necessarily guarantee this preservation of edges and shapes.
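A feature-reconstruction content loss of this kind compares activations of the generated and ground-truth images at an early layer of a fixed network. The mean-squared-difference formulation sketched below is a common choice for such losses, assumed here for illustration; the patent itself does not spell out the exact formula, and the nested-list feature maps stand in for real network activations.

```python
def content_loss(feats_generated, feats_truth):
    """Mean squared difference between two low-level feature maps, e.g.,
    early-layer activations of a pre-trained network computed on the
    generated and the ground-truth high-resolution image."""
    flat_g = [v for row in feats_generated for v in row]
    flat_t = [v for row in feats_truth for v in row]
    assert len(flat_g) == len(flat_t)
    return sum((g - t) ** 2 for g, t in zip(flat_g, flat_t)) / len(flat_g)
```

Because the comparison is made in feature space rather than pixel space, a low content loss indicates that edges and shapes are reproduced, not merely that pixel intensities match on average.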

The next major part of the loss is called the style transfer loss. In one such embodiment, the style loss includes loss in one or more abstract entities that qualitatively define the corresponding known high-resolution image. For example, we define style as an abstract entity that qualitatively defines an image, including properties such as sharpness, texture, color, etc. One reason for using deep learning as described herein is that the difference between the low- and high-resolution images described herein is not only resolution; the images may also have different noise characteristics, charging artifacts, texture, etc. Therefore, merely super-resolving the low-resolution images is not sufficient, and deep learning is used to learn a mapping from the low-resolution images to the high-resolution images. The style of an image is characterized by the activations of the upper layers of a trained network. The combination of the style loss with the content loss enables the image transformation network to learn the transformation between noisy, poorly resolved images and quiet, super-resolved images. Once the image transformation network is trained using the context-aware loss, it can be deployed in production to generate quiet, super-resolved images from noisy, poorly resolved images while maintaining spatial fidelity. In some embodiments, the style transfer loss is defined as the loss between the final-layer features of the super-resolved high-resolution image (i.e., the image generated by the one or more second layers) and the ground truth high-resolution image, especially when we want to classify the super-resolved high-resolution images.
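Style losses of this family are typically computed from Gram matrices of upper-layer activations, as in classic neural style transfer. The sketch below assumes that convention, since the patent does not spell out the exact formula; the normalization and function names are illustrative.

```python
def gram_matrix(feature_maps):
    """Channel-by-channel inner products of flattened feature maps,
    normalized by map size; captures texture-like ('style') statistics
    while discarding spatial arrangement."""
    flat = [[v for row in fmap for v in row] for fmap in feature_maps]
    n = len(flat[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in flat]
            for fi in flat]

def style_loss(feats_generated, feats_truth):
    """Mean squared difference between the Gram matrices of the generated
    and ground-truth feature maps."""
    g1, g2 = gram_matrix(feats_generated), gram_matrix(feats_truth)
    c = len(g1)
    return sum((g1[i][j] - g2[i][j]) ** 2
               for i in range(c) for j in range(c)) / (c * c)
```

Because the Gram matrix discards spatial arrangement, this term penalizes differences in texture, sharpness, and noise character rather than differences in where structures sit, which is why it complements rather than replaces the content loss.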

In another such embodiment, the context-aware loss module includes a pre-trained VGG network. FIG. 6 shows how activations from a pre-defined network are used to compute the style and content losses. For example, as shown in FIG. 6, pre-trained VGG network 600 may be coupled to content loss module 500 and style loss module 502. VGG16 (also called OxfordNet) is a convolutional neural network architecture named after the Visual Geometry Group at the University of Oxford, which developed it. The VGG network may be further configured as described in "Very Deep Convolutional Networks for Large-Scale Image Recognition," by Simonyan et al., arXiv:1409.1556v6, April 2015, 14 pages, which is incorporated by reference as if fully set forth herein. As shown in FIG. 6, the pre-trained VGG network may input an image to a number of layers, including convolutional layers (e.g., conv-64, conv-128, conv-256, and conv-512), max pooling (maxpool) layers, fully connected layers (e.g., FC-4096), and a softmax layer, all of which may have any suitable configuration known in the art.

The activations from the VGG network may be acquired by content loss module 500 and style loss module 502 to thereby compute the style and content losses. The embodiments described herein thus define a novel loss framework for training neural networks using a pre-trained network. This helps optimize the neural network while preserving the use-case-critical features in the generated images.

Therefore, the embodiments described herein use a pre-trained deep learning network to introduce use-case-dependent loss functions. Traditional techniques include methods such as bicubic interpolation as well as the L2 loss (when training deep networks), but we introduce different losses during the training of our network. For example, bicubic interpolation reduces contrast at sharp edges, and an L2 loss over the whole image focuses on preserving all aspects of an image, but preservation of most of those aspects is not necessarily a requirement of the use cases of the embodiments described herein, and we can build loss functions depending on which features of an image we want to preserve. In some such examples, the content loss may be used to ensure that edges and shapes are preserved, and the style loss may be used to ensure that texture, color, etc. are preserved.

The embodiments described herein may use the outputs from the layers of a pre-trained network to define a use-case-dependent loss function for training the network. If the use case is critical dimension uniformity or metrology measurements, the embodiments may give weight to the content loss, and if the images are to be "beautified," the style loss may be used to preserve texture, color, etc. In addition, for cases in which classification is important, the last-layer features may be matched between the generated high-resolution image and the ground truth image, and a loss may be defined on the last-layer features of the pre-trained network, since these are the features used for performing classification.

In signal processing, total variation denoising (also known as total variation regularization) is one of the most commonly used procedures in digital image processing and is applied for noise removal. The procedure is based on the principle that signals with excessive and possibly spurious detail have high total variation, that is, the integral of the absolute gradient of the signal is high. According to this principle, reducing the total variation of the signal, subject to it being a close match to the original signal, removes unwanted detail while preserving important details such as edges. The concept was pioneered by Rudin, Osher, and Fatemi in 1992 and is therefore known today as the ROF model.

This noise removal technique has advantages over simple techniques such as linear smoothing or median filtering, which reduce noise but at the same time smooth away edges to a greater or lesser extent. In contrast, total variation denoising is remarkably effective at simultaneously preserving edges while smoothing away noise in flat regions, even at relatively low signal-to-noise ratios.
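The quantity that TV regularization penalizes may be sketched as follows. The anisotropic absolute-difference form is one common discretization of total variation, chosen here for simplicity; the patent does not commit to a particular discretization.

```python
def total_variation(image):
    """Anisotropic discrete total variation: sum of absolute differences
    between horizontally and vertically adjacent pixels."""
    h, w = len(image), len(image[0])
    tv = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(image[y][x + 1] - image[y][x])
            if y + 1 < h:
                tv += abs(image[y + 1][x] - image[y][x])
    return tv
```

A flat region contributes zero; pixel-level noise raises the value at every pixel, while a single sharp edge contributes only once per row or column. This asymmetry is why minimizing total variation suppresses noise in flat areas while largely preserving edges.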

In some such embodiments, the one or more components include a tuning module configured to determine one or more parameters of the deep CNN based on the context-aware loss. For example, as shown in FIG. 4, the one or more components may include tuning module 410, which is configured for back-propagating the errors determined by the context-aware loss module and/or changing the network parameters. Each of the layers of the deep CNN described above may have one or more parameters, such as weights W and biases B, whose values can be determined by training the model, which may be performed as described further herein. For example, the weights and biases of the various layers included in the deep CNN may be determined during training by minimizing the context-aware loss.

In one embodiment, the deep CNN is configured such that the high-resolution image generated by the one or more second layers has less noise than the low-resolution image. For example, the embodiments described herein provide a generalized framework for transforming noisy and under-resolved images into low-noise, super-resolved images using learned representations.

In another embodiment, the deep CNN is configured such that the high-resolution image generated by the one or more second layers retains the structural and spatial characteristics of the low-resolution image. For example, the embodiments described herein provide a generalized framework for transforming noisy and under-resolved images into low-noise, super-resolved images using learned representations while preserving structural and spatial fidelity.

In some embodiments, the deep convolutional neural network outputs the high-resolution image with a throughput higher than a throughput of generating a high-resolution image with a high-resolution imaging system. For example, the embodiments described herein can be used for deep learning based super-resolution to obtain higher throughput on electron beam tools. The embodiments described herein may therefore be particularly useful when it is advantageous to use a relatively low dose (of electrons, light, etc.) for image acquisition to prevent changes (such as damage, contamination, etc.) to the specimen. However, using a relatively low dose to avoid changes to the specimen will generally produce low-resolution images. The challenge, then, is to generate high-resolution images without causing changes to the specimen. The embodiments described herein provide this capability. In particular, specimen images may be acquired with a relatively high throughput and a relatively low resolution (or lower quality), and the embodiments described herein can convert those higher-throughput, lower-quality images into super-resolved or higher-quality images without causing changes to the specimen (since the specimen itself is not needed to generate the super-resolved or higher-quality images).

Therefore, the embodiments described herein are particularly advantageous for the review use case, in which a wafer may go through an inspection (e.g., BBP inspection) and electron beam review sequence. In addition, in some instances, a user wants to put the wafer back on the inspection tool after inspection to try another inspection recipe condition (e.g., to optimize the inspection recipe conditions for a defect that was detected in inspection and possibly classified in review). However, if the electron beam (or other) review damages or changes the locations that are reviewed, those sites are no longer valid for sensitivity analysis (i.e., inspection recipe alteration and/or optimization). Therefore, preventing damage or changes to the specimen by using low frame average electron beam image acquisition is one of the advantages of deep learning based classification of electron beam review images (e.g., the original high frame average images are not needed). As such, deep learning classification and deep learning image enhancement can arguably be used in combination. Deep learning based defect classification may be performed by the embodiments described herein as described in commonly assigned U.S. Patent Application Serial No. 15/697,426, filed September 6, 2017 by He et al., which is incorporated by reference as if fully set forth herein. The embodiments described herein may be further configured as described in this patent application.

FIG. 7 illustrates an example of results that may be generated using the embodiments described herein. The results show a comparison between horizontal profile 700 of noisy, poorly resolved, high-throughput image 702; horizontal profile 704 of higher-quality, better-resolved, low-throughput image 706; and horizontal profile 708 of quiet, super-resolved image 710 obtained by processing the low-resolution image using the embodiments described herein. High-throughput image 702 and low-throughput image 706 were generated by a low-resolution imaging system and a high-resolution imaging system, respectively, as described herein. In this manner, the results shown in FIG. 7 illustrate the variations between the different images along the same line profile through the images. The results shown in FIG. 7 demonstrate the ability of the embodiments described herein to generate substantially noise-free, high-resolution images from lower-quality images while maintaining the structural and spatial fidelity in the images, as confirmed by the correlation between the profiles (708 and 704) of the super-resolved image generated by the embodiments described herein and the high-resolution image generated by an imaging system.

In one embodiment, the one or more computer subsystems are configured for performing one or more metrology measurements on the sample based on the high resolution image generated by the one or more second layers. Fig. 8 demonstrates that the embodiments described herein work by closing the loop with ground truth data. To further test the embodiments described herein in a real world metrology use case, overlay measurements were performed on the three sets of images shown in Fig. 7, and the results are compiled in Fig. 8. Plots 800 and 802 in Fig. 8 show the correlation in the overlay measurements along the x and y axes, respectively, between the images produced by the high resolution imaging system and the high resolution images produced by the deep CNN of the embodiments described herein, and plots 804 and 806 in Fig. 8 show the correlation in the overlay measurements along the x and y axes, respectively, between the images produced by the high resolution imaging system and the lower resolution images. The metric used to calculate the correlation is R², r-squared. An r-squared value of 1 indicates a perfect fit. The close to perfect R² value (>0.99) between the high resolution images produced by the imaging system and the high resolution images produced by the deep CNN shows that the images produced by the deep CNN can be used in metrology measurements in place of the images produced by the higher resolution imaging system without affecting performance. Given the relatively high accuracy required in the metrology use case, the R² value of ~0.8 between the images produced by the low resolution imaging system and the images produced by the high resolution imaging system proves too low to obtain accurate measurements, and therefore measurements would have to be performed from higher resolution images, which significantly reduces the throughput of the use case (e.g., from about 18K defects/hour to about 8K defects/hour in the experiments described herein).
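The R² (r-squared) metric referenced above can be computed as follows; the overlay values below are invented placeholders purely for illustration, not data from the experiments described herein:

```python
import numpy as np

def r_squared(reference, predicted):
    """Coefficient of determination of predicted values against a reference."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((reference - predicted) ** 2)      # residual sum of squares
    ss_tot = np.sum((reference - reference.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Invented overlay measurements (nm), purely for illustration.
overlay_hi_res = np.array([1.2, -0.4, 0.8, 2.1, -1.3, 0.2])
overlay_deep_cnn = overlay_hi_res + np.array([0.02, -0.01, 0.03, -0.02, 0.01, 0.00])
overlay_low_res = overlay_hi_res + np.array([0.9, -0.6, -0.8, 0.7, 1.0, -0.5])

print(f"R^2 (deep CNN vs high resolution): {r_squared(overlay_hi_res, overlay_deep_cnn):.3f}")
print(f"R^2 (low res vs high resolution):  {r_squared(overlay_hi_res, overlay_low_res):.3f}")
```

An R² near 1 between two sets of measurements, as in plots 800 and 802, is what justifies substituting one image source for the other in metrology.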

In another embodiment, the deep CNN functions independently of the imaging system that produces the low resolution image. In some embodiments, the low resolution image is produced by one imaging system having a first imaging platform, the one or more computer subsystems are configured for acquiring another low resolution image produced for another sample by another imaging system having a second imaging platform different from the first imaging platform, the one or more first layers are configured for generating a representation of the other low resolution image, and the one or more second layers are configured for generating a high resolution image of the other sample from the representation of the other low resolution image. For example, one important benefit of the embodiments described herein is that the same network architecture can be used to enhance images from different platforms (e.g., BBP tools, tools specifically configured for low resolution imaging, etc.). In addition, the entire burden of optimization and learning the representation is taken on offline, since training only happens during recipe setup time. Once training is complete, the runtime computation is dramatically reduced. The learning process also helps to adaptively enhance the images without having to change parameters each time, as was required in the case of the older methods.

In one such embodiment, the first imaging platform is an electron beam imaging platform, and the second imaging platform is an optical imaging platform. For example, the embodiments described herein may transform the low resolution images produced using both an electron beam imaging system and an optical imaging system. The embodiments described herein are also capable of performing the transformation for other different types of imaging platforms (e.g., other charged particle type imaging systems).

In another such embodiment, the first and second imaging platforms are different optical imaging platforms. In a further such embodiment, the first and second imaging platforms are different electron beam imaging platforms. For example, the first and second imaging platforms may be the same type of imaging platform but may differ significantly in their imaging capabilities. In one such example, the first and second optical imaging platforms may be a laser scattering imaging platform and a BBP imaging platform. These imaging platforms clearly have substantially different capabilities and will produce substantially different low resolution images. Nevertheless, the embodiments described herein can use the learned representations generated by training the deep CNN to generate high resolution images for all such low resolution images.

Another embodiment of a system configured to generate a high resolution image of a sample from a low resolution image of the sample includes an imaging subsystem configured for generating a low resolution image of a sample. The imaging subsystem may have any configuration described herein. The system also includes one or more computer subsystems (e.g., computer subsystem(s) 102 shown in Fig. 1), which may be configured as described further herein, and one or more components (e.g., component(s) 100) executed by the one or more computer subsystems, which may include any of the components described herein. The component(s) include a deep CNN (e.g., deep CNN 104), which may be configured as described herein. For example, the deep CNN includes one or more first layers configured for generating a representation of the low resolution image and one or more second layers configured for generating a high resolution image of the sample from the representation of the low resolution image. The one or more second layers include a final layer configured to output the high resolution image. The final layer is also configured as a sub-pixel convolutional layer. The one or more first layers and the one or more second layers may be further configured as described further herein. This system embodiment may be further configured as described herein.
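The sub-pixel convolutional final layer is commonly implemented as a "pixel shuffle" (depth-to-space) rearrangement, in which a feature map with r² times as many channels is reorganized into an image upscaled by r in each spatial dimension. The following is a numpy-only sketch of that rearrangement, not necessarily the exact implementation used in the embodiments described herein:

```python
import numpy as np

def pixel_shuffle(features, r):
    """Rearrange a (C*r*r, H, W) feature map into a (C, H*r, W*r) image."""
    crr, h, w = features.shape
    assert crr % (r * r) == 0, "channel count must be divisible by r*r"
    c = crr // (r * r)
    out = features.reshape(c, r, r, h, w)  # split the channel axis into (c, r, r)
    out = out.transpose(0, 3, 1, 4, 2)     # interleave: (c, h, r, w, r)
    return out.reshape(c, h * r, w * r)

# C=1 output channel, upscale factor r=2, so 4 input channels of 3x3 features.
features = np.arange(4 * 3 * 3).reshape(4, 3, 3)
upscaled = pixel_shuffle(features, r=2)
print(upscaled.shape)  # (1, 6, 6)
```

Each 2x2 output block draws one pixel from each of the four input channels at the same spatial location, which is why a preceding convolution with r² times the channels suffices to produce the upscaled image.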

The embodiments described herein have a number of advantages, as can be seen from the description provided above. For example, the embodiments described herein provide a generic, platform independent, data driven framework. The embodiments use training data during setup time to learn the transformation between high quality and low quality images. Learning this transformation enables the embodiments to use the learned transformation to transform a noisy, poorly resolved input into a relatively quiet, super-resolved output of metrology quality. The older methods are parametric methods that rely only on the current input image and do not leverage any other training data. The embodiments described herein are also generic and platform independent. Since the embodiments are generic and platform independent, the same framework can be used to produce metrology quality images on different platforms (e.g., electron beam, BBP, laser scattering, low resolution imaging, and metrology platforms). The embodiments also achieve higher throughput by using only low quality (high throughput) images in production to produce images of the desired quality. The embodiments further achieve noise reduction in the output image compared to the input image without affecting important features such as edges and shapes in the image.

Each of the embodiments of each of the systems described above may be combined together into one single embodiment.

Another embodiment relates to a computer-implemented method for generating a high resolution image of a sample from a low resolution image of the sample. The method includes acquiring a low resolution image of a sample. The method also includes generating a representation of the low resolution image by inputting the low resolution image into one or more first layers of a deep CNN. In addition, the method includes generating a high resolution image of the sample based on the representation. Generating the high resolution image is performed by one or more second layers of the deep CNN. The one or more second layers include a final layer configured to output the high resolution image, and the final layer is configured as a sub-pixel convolutional layer. The acquiring, generating the representation, and generating the high resolution image steps are performed by one or more computer systems. One or more components are executed by the one or more computer systems, and the one or more components include the deep CNN.

Each of the steps of the method may be performed as described further herein. The method may also include any other step(s) that can be performed by the system(s), computer system(s) or subsystem(s), and/or imaging system(s) or subsystem(s) described herein. The one or more computer systems, the one or more components, and the deep CNN may be configured according to any of the embodiments described herein (e.g., computer subsystem(s) 102, component(s) 100, and deep CNN 104). In addition, the method described above may be performed by any of the system embodiments described herein.
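The context aware loss recited in the claims combines content loss, style loss, and total variation regularization. The following is a sketch of one common way such terms are computed, assuming a Gram-matrix style loss as in the style transfer literature; the exact formulation and weights used by the embodiments may differ:

```python
import numpy as np

def content_loss(feat_pred, feat_true):
    """Mean squared error between low level feature maps."""
    return np.mean((feat_pred - feat_true) ** 2)

def gram_matrix(feat):
    """Channel-by-channel correlation of a (C, H, W) feature map."""
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(feat_pred, feat_true):
    """Difference between Gram matrices: an abstract, qualitative comparison."""
    return np.mean((gram_matrix(feat_pred) - gram_matrix(feat_true)) ** 2)

def total_variation(image):
    """Penalize local pixel-to-pixel variation (noise) in a 2-D image."""
    return np.abs(np.diff(image, axis=0)).sum() + np.abs(np.diff(image, axis=1)).sum()

rng = np.random.default_rng(2)
feat_true = rng.random((8, 16, 16))                      # features of the known image
feat_pred = feat_true + 0.05 * rng.random((8, 16, 16))   # features of the generated image
image = rng.random((32, 32))                             # the generated image itself

# Illustrative weights; real weights would be chosen during training setup.
loss = (content_loss(feat_pred, feat_true)
        + style_loss(feat_pred, feat_true)
        + 1e-4 * total_variation(image))
print(f"combined context aware loss (illustrative weights): {loss:.4f}")
```

In the claimed arrangement, a module of this kind receives the generated high resolution image and the corresponding known high resolution image and the resulting loss drives tuning of the network parameters.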

All of the methods described herein may include storing results of one or more steps of the method embodiments in a computer-readable storage medium. The results may include any of the results described herein and may be stored in any manner known in the art. The storage medium may include any storage medium described herein or any other suitable storage medium known in the art. After the results have been stored, the results can be accessed in the storage medium and used by any of the method or system embodiments described herein, formatted for display to a user, used by another software module, method, or system, etc. For example, the generated high resolution image may be used to perform metrology measurements on the sample, to classify one or more defects detected on the sample, to verify one or more defects detected on the sample, and/or to determine, based on one or more of the above, whether a process used to form patterned features on the sample should be altered in some manner to thereby alter the patterned features formed on other samples in the same process.

An additional embodiment relates to a non-transitory computer-readable medium storing program instructions executable on one or more computer systems for performing a computer-implemented method for generating a high resolution image of a sample from a low resolution image of the sample. One such embodiment is shown in Fig. 9. In particular, as shown in Fig. 9, non-transitory computer-readable medium 900 includes program instructions 902 executable on computer system(s) 904. The computer-implemented method may include any step(s) of any method(s) described above.

Program instructions 902 implementing methods such as those described herein may be stored on computer-readable medium 900. The computer-readable medium may be a storage medium such as a magnetic or optical disk, a magnetic tape, or any other suitable non-transitory computer-readable medium known in the art.

The program instructions may be implemented in any of various ways, including procedure-based techniques, component-based techniques, and/or object-oriented techniques, among others. For example, the program instructions may be implemented using ActiveX controls, C++ objects, JavaBeans, Microsoft Foundation Classes ("MFC"), SSE (Streaming SIMD Extension), or other technologies or methodologies, as desired.

Computer system(s) 904 may be configured according to any of the embodiments described herein.

Further modifications and alternative embodiments of various aspects of the invention will be apparent to those skilled in the art in view of this description. For example, methods and systems for generating a high resolution image of a sample from a low resolution image of the sample are provided. Accordingly, this description is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the general manner of carrying out the invention. It is to be understood that the forms of the invention shown and described herein are to be taken as the presently preferred embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed, and certain features of the invention may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the invention. Changes may be made in the elements described herein without departing from the spirit and scope of the invention as described in the following claims.

10‧‧‧Imaging system/imaging subsystem/optics-based imaging system
14‧‧‧Sample
16‧‧‧Light source/source
18‧‧‧Optical element
20‧‧‧Lens
22‧‧‧Stage
24‧‧‧Collector
26‧‧‧Element
28‧‧‧Detector
30‧‧‧Collector
32‧‧‧Element
34‧‧‧Detector
36‧‧‧Computer subsystem
100‧‧‧Component(s)
102‧‧‧Computer subsystem(s)
104‧‧‧Deep convolutional neural network
122‧‧‧Electron column
124‧‧‧Computer subsystem
126‧‧‧Electron beam source
128‧‧‧Sample
130‧‧‧Element
132‧‧‧Element
134‧‧‧Detector
200‧‧‧Image transformation network
202‧‧‧Low resolution image/high throughput image
204‧‧‧High resolution image/high sensitivity image
300‧‧‧Low resolution image
302‧‧‧First layer
304‧‧‧First layer
306‧‧‧Final layer
308‧‧‧High resolution image
400‧‧‧Image transformation network
402‧‧‧Low resolution image/high throughput image
404‧‧‧High resolution image/high sensitivity image
406‧‧‧High resolution image/ground truth high sensitivity image
408‧‧‧Context aware loss module
410‧‧‧Tuning module
500‧‧‧Content loss module
502‧‧‧Style loss module
504‧‧‧Total variation regularization module
600‧‧‧Pre-trained VGG (Visual Geometry Group) network
700‧‧‧Horizontal profile
702‧‧‧Noisy, poorly resolved, high throughput image
704‧‧‧Horizontal profile
706‧‧‧Higher quality, better resolved, low throughput image
708‧‧‧Horizontal profile
710‧‧‧Quiet, super-resolved image
800‧‧‧Plot
802‧‧‧Plot
804‧‧‧Plot
806‧‧‧Plot
900‧‧‧Non-transitory computer-readable medium/computer-readable medium
902‧‧‧Program instructions
904‧‧‧Computer system(s)

Other advantages of the present invention will become apparent to those skilled in the art with the benefit of the following detailed description of the preferred embodiments and upon reference to the accompanying drawings, in which:
Figs. 1 and 1a are schematic diagrams illustrating side views of embodiments of a system configured as described herein;
Fig. 2 is a block diagram illustrating one embodiment of a deep convolutional neural network that may be included in the embodiments described herein;
Fig. 3 is a schematic diagram illustrating one embodiment of a deep convolutional neural network that may be included in the embodiments described herein;
Figs. 4 and 5 are block diagrams illustrating embodiments of one or more components that may be included in the embodiments described herein;
Fig. 6 is a block diagram illustrating one embodiment of a pre-trained VGG network that may be included in an embodiment of a context aware loss module;
Fig. 7 includes examples of corresponding high and low resolution images produced by an imaging system, a high resolution image produced from the low resolution image by the embodiments described herein, and profiles produced for each of the images;
Fig. 8 includes examples of the correlation in the results of overlay measurements performed along the x and y axes between a high resolution image produced by an imaging system and a high resolution image produced by the embodiments described herein, and between the low resolution and high resolution images produced by the imaging system; and
Fig. 9 is a block diagram illustrating one embodiment of a non-transitory computer-readable medium storing program instructions for causing one or more computer systems to perform a computer-implemented method described herein.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will be described in detail herein. The drawings may not be to scale. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.


Claims (23)

一種經組態以由一樣品之一低解析度影像產生該樣品之一高解析度影像之系統,其包括:一或多個電腦子系統,其經組態以用於獲取一樣品之一低解析度影像;及一或多個組件,其由該一或多個電腦子系統執行,其中該一或多個組件包括:一深度迴旋(convolutional)神經網路,其中該深度迴旋神經網路包括:一或多個第一層,其經組態以用於產生該低解析度影像之一表示;一或多個第二層,其經組態以用於自該低解析度影像之該表示產生該樣品之一高解析度影像,其中該一或多個第二層包括經組態以輸出該高解析度影像之一最終層,且其中該最終層進一步經組態為一子像素迴旋層,且其中該一或多個組件進一步包括經組態以訓練該深度迴旋神經網路之一情境感知(context aware)損失模組,其中在訓練該深度迴旋神經網路期間,該一或多個電腦子系統將由該一或多個第二層產生之該高解析度影像及該樣品之一對應已知高解析度影像輸入至該情境感知損失模組中且該情境感知損失模組判定由該一或多個第二層產生之該高解析度影像中與該對應已知高解析度影像相比之情境感知損失。 A system configured to generate a high-resolution image of a sample from a low-resolution image of the sample, comprising: one or more computer subsystems configured for acquiring a low-resolution image of a sample high-resolution images; and one or more components executed by the one or more computer subsystems, wherein the one or more components include: a deep convolutional neural network, wherein the deep convolutional neural network includes : one or more first layers configured for generating a representation of the low-resolution image; one or more second layers configured for the representation from the low-resolution image generating a high-resolution image of the sample, wherein the one or more second layers include a final layer configured to output the high-resolution image, and wherein the final layer is further configured as a subpixel convolutional layer , and wherein the one or more components further include a context aware loss module configured to train the deep convolutional neural network, wherein during training of the deep convolutional neural network, the one or more The computer subsystem inputs the high-resolution image generated by the one or more second layers and a corresponding known high-resolution image of the sample into the context-awareness loss module, and the context-awareness loss module determines that the A loss of context awareness in the high-resolution image generated by one or more second layers compared to the corresponding known high-resolution image. 
如請求項1之系統,其中該深度迴旋神經網路經組態使得由該一或多個第二層產生之該高解析度影像具有比該低解析度影像少之雜訊。 The system of claim 1, wherein the deep convolutional neural network is configured such that the high resolution image generated by the one or more second layers has less noise than the low resolution image. 如請求項1之系統,其中該深度迴旋神經網路經組態使得由該一或多個第二層產生之該高解析度影像保留該低解析度影像之結構及空間特徵。 The system of claim 1, wherein the deep convolutional neural network is configured such that the high-resolution images generated by the one or more second layers retain the structural and spatial characteristics of the low-resolution images. 如請求項1之系統,其中該情境感知損失包括內容損失、風格損失及總變差正則化(variation regularization)。 The system of claim 1, wherein the context-aware loss includes content loss, style loss, and total variation regularization. 如請求項4之系統,其中該內容損失包括該對應已知高解析度影像之低階特徵中之損失。 The system of claim 4, wherein the content loss includes a loss in the low-level features corresponding to known high-resolution images. 如請求項4之系統,其中該風格損失包括定性地(qualitatively)定義該對應已知高解析度影像之一或多個抽象實體中之損失。 The system of claim 4, wherein the stylistic loss includes qualitatively defining a loss in one or more abstract entities corresponding to the known high-resolution images. 如請求項1之系統,其中該情境感知損失模組包括一經預訓練VGG網路。 The system of claim 1, wherein the context-aware loss module includes a pretrained VGG network. 如請求項1之系統,其中該一或多個組件進一步包括經組態以基於該情境感知損失而判定該深度迴旋神經網路之一或多個參數之一調諧模組。 The system of claim 1, wherein the one or more components further comprise a tuning module configured to determine one or more parameters of the deep convolutional neural network based on the situational awareness loss. 如請求項1之系統,其中該一或多個電腦子系統進一步經組態以基於 由該一或多個第二層產生之該高解析度影像而對該樣品執行一或多個計量量測。 The system of claim 1, wherein the one or more computer subsystems are further configured to be based on One or more metrological measurements are performed on the sample from the high-resolution images generated by the one or more second layers. 
如請求項1之系統,其中該深度迴旋神經網路獨立於產生該低解析度影像之一成像系統而起作用。 The system of claim 1, wherein the deep convolutional neural network functions independently of an imaging system that produces the low-resolution image. 如請求項1之系統,其中該低解析度影像由具有一第一成像平台之一個成像系統產生,其中該一或多個電腦子系統進一步經組態以用於獲取由具有不同於該第一成像平台之一第二成像平台之另一成像系統針對另一樣品產生之另一低解析度影像,其中該一或多個第一層經組態以用於產生該另一低解析度影像之一表示,且其中該一或多個第二層進一步經組態以用於自該另一低解析度影像之該表示產生該另一樣品之一高解析度影像。 The system of claim 1, wherein the low-resolution image is generated by an imaging system having a first imaging platform, wherein the one or more computer subsystems are further configured for acquiring images having a different image than the first imaging platform Another low-resolution image generated for another sample by another imaging system of a second imaging platform, one of the imaging platforms, wherein the one or more first layers are configured for use in generating the other low-resolution image A representation, and wherein the one or more second layers are further configured for generating a high-resolution image of the other sample from the representation of the other low-resolution image. 如請求項11之系統,其中該第一成像平台係一電子束成像平台,且其中該第二成像平台係一光學成像平台。 The system of claim 11, wherein the first imaging stage is an electron beam imaging stage, and wherein the second imaging stage is an optical imaging stage. 如請求項11之系統,其中該第一成像平台及該第二成像平台係不同光學成像平台。 The system of claim 11, wherein the first imaging stage and the second imaging stage are different optical imaging stages. 如請求項11之系統,其中該第一成像平台及該第二成像平台係不同電子束成像平台。 The system of claim 11, wherein the first imaging stage and the second imaging stage are different electron beam imaging stages. 如請求項1之系統,其中該低解析度影像由一基於電子束之成像系統 產生。 The system of claim 1, wherein the low-resolution image is produced by an electron beam-based imaging system produce. 如請求項1之系統,其中該低解析度影像由一基於光學之成像系統產生。 The system of claim 1, wherein the low-resolution image is produced by an optical-based imaging system. 
如請求項1之系統,其中該低解析度影像由一檢驗系統產生。 The system of claim 1, wherein the low-resolution image is generated by an inspection system. 如請求項1之系統,其中該樣品係一晶圓。 The system of claim 1, wherein the sample is a wafer. 如請求項1之系統,其中該樣品係一倍縮光罩(reticle)。 The system of claim 1, wherein the sample is a reticle. 如請求項1之系統,其中該深度迴旋神經網路以比利用一高解析度成像系統產生該高解析度影像之一通量高之一通量輸出該高解析度影像。 The system of claim 1, wherein the deep convolutional neural network outputs the high-resolution image at a higher throughput than using a high-resolution imaging system to generate the high-resolution image. 一種經組態以由一樣品之一低解析度影像產生該樣品之一高解析度影像之系統,其包括:一成像子系統,其經組態以用於產生一樣品之一低解析度影像;一或多個電腦子系統,其經組態以用於獲取該樣品之該低解析度影像;及一或多個組件,其由該一或多個電腦子系統執行,其中該一或多個組件包括:一深度迴旋神經網路,其中該深度迴旋神經網路包括:一或多個第一層,其經組態以用於產生該低解析度影像之一 表示;及一或多個第二層,其經組態以用於自該低解析度影像之該表示產生該樣品之一高解析度影像,其中該一或多個第二層包括經組態以輸出該高解析度影像之一最終層,且其中該最終層進一步經組態為一子像素迴旋層,且其中該一或多個組件進一步包括經組態以訓練該深度迴旋神經網路之一情境感知(context aware)損失模組,其中在訓練該深度迴旋神經網路期間,該一或多個電腦子系統將由該一或多個第二層產生之該高解析度影像及該樣品之一對應已知高解析度影像輸入至該情境感知損失模組中且該情境感知損失模組判定由該一或多個第二層產生之該高解析度影像中與該對應已知高解析度影像相比之情境感知損失。 A system configured to generate a high-resolution image of a sample from a low-resolution image of the sample, comprising: an imaging subsystem configured for generating a low-resolution image of a sample ; one or more computer subsystems configured for acquiring the low-resolution image of the sample; and one or more components executed by the one or more computer subsystems, wherein the one or more The components include: a deep convolutional neural network, wherein the deep convolutional neural network includes: one or more first layers configured for generating one of the low-resolution images a representation; and one or more second layers configured for generating a high-resolution image of the sample from the representation of the low-resolution image, wherein the one or more second layers include configured to output a final layer of the high-resolution image, and wherein the final layer is further configured as a sub-pixel convolutional layer, and 
wherein the one or more components further comprise a device configured to train the deep convolutional neural network a context aware loss module, wherein during training of the deep convolutional neural network, the one or more computer subsystems will be generated by the one or more second layers of the high-resolution image and the sample A corresponding known high-resolution image is input into the context-aware loss module and the context-aware loss module determines that the high-resolution image generated by the one or more second layers matches the corresponding known high-resolution Image vs. situational awareness loss. 一種儲存程式指令之非暫時性電腦可讀媒體,該等程式指令可在一或多個電腦系統上執行以用於執行一電腦實施方法,該電腦實施方法用於由一樣品之一低解析度影像產生該樣品之一高解析度影像,其中該電腦實施方法包括:獲取一樣品之一低解析度影像;藉由將該低解析度影像輸入至一深度迴旋神經網路之一或多個第一層中而產生該低解析度影像之一表示;及基於該表示而產生該樣品之一高解析度影像,其中產生該高解析度影像係由該深度迴旋神經網路之一或多個第二層執行,其中該一或多個第二層包括經組態以輸出該高解析度影像之一最終層,其中 該最終層進一步經組態為一子像素迴旋層,其中該獲取、該產生該表示及該產生該高解析度影像係由該一或多個電腦系統執行,其中一或多個組件由該一或多個電腦系統執行,且其中該一或多個組件包括該深度迴旋神經網路,且其中該一或多個組件進一步包括經組態以訓練該深度迴旋神經網路之一情境感知(context aware)損失模組,其中在訓練該深度迴旋神經網路期間,該一或多個電腦子系統將由該一或多個第二層產生之該高解析度影像及該樣品之一對應已知高解析度影像輸入至該情境感知損失模組中且該情境感知損失模組判定由該一或多個第二層產生之該高解析度影像中與該對應已知高解析度影像相比之情境感知損失。 A non-transitory computer-readable medium storing program instructions executable on one or more computer systems for performing a computer-implemented method for recording a low-resolution sample from a sample The image generates a high-resolution image of the sample, wherein the computer-implemented method comprises: acquiring a low-resolution image of a sample; by inputting the low-resolution image into one or more first steps of a deep convolutional neural network generating a representation of the low-resolution image in one layer; and generating a high-resolution image of the sample based on the representation, wherein the high-resolution image is generated by one or more first steps of the deep convolutional neural network Two-layer implementation, wherein the one or more second layers include a final layer 
configured to output the high-resolution image, wherein The final layer is further configured as a sub-pixel convolution layer, wherein the acquiring, the generating the representation, and the generating the high-resolution image are performed by the one or more computer systems, wherein one or more components are performed by the one or more computer systems executing, and wherein the one or more components include the deep convolutional neural network, and wherein the one or more components further include a context awareness configured to train the deep convolutional neural network aware) loss module, wherein during training of the deep convolutional neural network, the one or more computer subsystems correspond to one of the high-resolution images and the samples generated by the one or more second layers corresponding to known high-resolution A high-resolution image is input into the context-aware loss module and the context-aware loss module determines a context in the high-resolution image generated by the one or more second layers compared to the corresponding known high-resolution image Perceived loss. 
A computer-implemented method for generating a high-resolution image of a sample from a low-resolution image of the sample, comprising: acquiring a low-resolution image of a sample; generating a representation of the low-resolution image by inputting the low-resolution image into one or more first layers of a deep convolutional neural network; and generating a high-resolution image of the sample based on the representation, wherein generating the high-resolution image is performed by one or more second layers of the deep convolutional neural network, wherein the one or more second layers include a final layer configured to output the high-resolution image, wherein the final layer is further configured as a sub-pixel convolution layer, wherein the acquiring, the generating of the representation, and the generating of the high-resolution image are performed by one or more computer systems, wherein one or more components are executed by the one or more computer systems, wherein the one or more components include the deep convolutional neural network, and wherein the one or more components further include a context-aware loss module configured to train the deep convolutional neural network, wherein during training of the deep convolutional neural network, the one or more computer subsystems input the high-resolution image generated by the one or more second layers and a corresponding known high-resolution image of the sample into the context-aware loss module, and the context-aware loss module determines a context-aware loss in the high-resolution image generated by the one or more second layers compared to the corresponding known high-resolution image.
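The claims state only that the context-aware loss module compares the generated high-resolution image against a known high-resolution image of the sample; they do not define the loss itself. One plausible reading is a per-pixel error augmented with a term over local structure, so that mismatched edges are penalized beyond raw intensity differences. The minimal sketch below uses finite-difference gradients as the "context" term; the function names, the gradient-based term, and the `weight` parameter are all our illustrative assumptions, not the patented formulation:

```python
def mse(a, b):
    """Mean squared error between two equal-sized 2-D lists."""
    h, w = len(a), len(a[0])
    return sum((a[y][x] - b[y][x]) ** 2 for y in range(h) for x in range(w)) / (h * w)

def finite_differences(img):
    """Horizontal and vertical first differences, a crude stand-in for local context."""
    h, w = len(img), len(img[0])
    gx = [[img[y][x + 1] - img[y][x] for x in range(w - 1)] for y in range(h)]
    gy = [[img[y + 1][x] - img[y][x] for x in range(w)] for y in range(h - 1)]
    return gx, gy

def context_aware_loss(generated, known, weight=0.5):
    """Pixel error plus a weighted penalty on mismatched local structure."""
    gx_g, gy_g = finite_differences(generated)
    gx_k, gy_k = finite_differences(known)
    return mse(generated, known) + weight * (mse(gx_g, gx_k) + mse(gy_g, gy_k))

# Identical generated and known images incur zero loss.
print(context_aware_loss([[0.0, 1.0], [2.0, 3.0]], [[0.0, 1.0], [2.0, 3.0]]))
# → 0.0
```

During training, a scalar of this form computed between the network's output and the known high-resolution image would be minimized by gradient descent over the network's parameters.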
TW107122445A 2017-06-30 2018-06-29 Generating high resolution images from low resolution images for semiconductor applications TWI754764B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
IN201741023063 2017-06-30
US201762545906P 2017-08-15 2017-08-15
US62/545,906 2017-08-15
US16/019,422 2018-06-26
US16/019,422 US10769761B2 (en) 2017-06-30 2018-06-26 Generating high resolution images from low resolution images for semiconductor applications

Publications (2)

Publication Number Publication Date
TW201910929A TW201910929A (en) 2019-03-16
TWI754764B true TWI754764B (en) 2022-02-11

Family

ID=66590463

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107122445A TWI754764B (en) 2017-06-30 2018-06-29 Generating high resolution images from low resolution images for semiconductor applications

Country Status (3)

Country Link
KR (1) KR102351349B1 (en)
CN (1) CN110785709B (en)
TW (1) TWI754764B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365556B (en) * 2020-11-10 2021-09-28 成都信息工程大学 Image extension method based on perception loss and style loss
TWI775586B (en) * 2021-08-31 2022-08-21 世界先進積體電路股份有限公司 Multi-branch detection system and multi-branch detection method
KR102616400B1 (en) * 2022-04-12 2023-12-27 한국항공우주연구원 Deep learning based image resolution improving system and method by reflecting characteristics of optical system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150086131A1 (en) * 2013-09-26 2015-03-26 Siemens Aktiengesellschaft Single-image super resolution and denoising using multiple wavelet domain sparsity
US20150324965A1 (en) * 2014-05-12 2015-11-12 Kla-Tencor Corporation Using High Resolution Full Die Image Data for Inspection
CN106796716A (en) * 2014-08-08 2017-05-31 北京市商汤科技开发有限公司 Apparatus and method for providing super-resolution for low-resolution image

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2431889C1 (en) * 2010-08-06 2011-10-20 Дмитрий Валерьевич Шмунк Image super-resolution method and nonlinear digital filter for realising said method
JP6382354B2 (en) * 2014-03-06 2018-08-29 プログレス インコーポレイテッドProgress,Inc. Neural network and neural network training method
US10417525B2 (en) * 2014-09-22 2019-09-17 Samsung Electronics Co., Ltd. Object recognition with reduced neural network weight precision
CN105976318A (en) * 2016-04-28 2016-09-28 北京工业大学 Image super-resolution reconstruction method
CN106228512A (en) * 2016-07-19 2016-12-14 北京工业大学 Image super-resolution reconstruction method based on learning-rate-adaptive convolutional neural networks
CN106339984B (en) * 2016-08-27 2019-09-13 中国石油大学(华东) Distributed image super-resolution method based on K-means-driven convolutional neural networks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONG, C. et al., 13th European Conference on Computer Vision, 2014, LNCS 8692, pp. 184-199 *

Also Published As

Publication number Publication date
CN110785709B (en) 2022-07-15
KR102351349B1 (en) 2022-01-13
CN110785709A (en) 2020-02-11
TW201910929A (en) 2019-03-16
KR20200015804A (en) 2020-02-12

Similar Documents

Publication Publication Date Title
US10769761B2 (en) Generating high resolution images from low resolution images for semiconductor applications
TWI734724B (en) Systems, methods and non-transitory computer-readable media for generating high resolution images from low resolution images for semiconductor applications
KR102321953B1 (en) A learning-based approach for the alignment of images acquired with various modalities
TWI715773B (en) Systems and methods incorporating a neural network and a forward physical model for semiconductor applications
CN108475350B (en) Method and system for accelerating semiconductor defect detection using learning-based model
CN109074650B (en) Generating simulated images from input images for semiconductor applications
TWI722050B (en) Single image detection
KR102622720B1 (en) Image noise reduction using stacked denoising autoencoders
TWI809094B (en) Cross layer common-unique analysis for nuisance filtering
TWI754764B (en) Generating high resolution images from low resolution images for semiconductor applications
TW202206800A (en) Deep learning based defect detection
TW202211092A (en) Training a machine learning model to generate higher resolution images from inspection images
CN115516295A (en) Defect size measurement using deep learning method
TW202134641A (en) Deep learning networks for nuisance filtering