TW201825860A - Optical measurement of bump height - Google Patents

Optical measurement of bump height

Info

Publication number
TW201825860A
TW201825860A (Application TW106127073A)
Authority
TW
Taiwan
Prior art keywords
pixel
captured image
sample
determining
image
Prior art date
Application number
TW106127073A
Other languages
Chinese (zh)
Other versions
TWI769172B (en)
Inventor
Ronny Soetarman (隆尼 索塔曼)
James Jianguo Xu (詹姆士 建國 許)
Original Assignee
Zeta Instruments, Inc. (美商澤塔儀器公司)
Priority date
Priority claimed from US 15/233,812 (US20180045937A1)
Priority claimed from US 15/338,838 (US10157457B2)
Priority claimed from US 15/346,594 (US10359613B2)
Priority claimed from US 15/346,607 (US10168524B2)
Application filed by Zeta Instruments, Inc. (美商澤塔儀器公司)
Publication of TW201825860A
Application granted
Publication of TWI769172B


Classifications

    • G01B 11/022 - Measuring length, width or thickness by optical means, by TV-camera scanning
    • G01B 11/0608 - Height gauges
    • G01B 11/22 - Measuring depth by optical means
    • G01B 11/24 - Measuring contours or curvatures by optical means
    • G01B 11/2441 - Measuring contours or curvatures using interferometry
    • G01B 11/25 - Measuring contours or curvatures by projecting a pattern (e.g. moiré fringes) on the object
    • G01B 9/04 - Measuring microscopes
    • G01B 2210/56 - Measuring geometric parameters of semiconductor structures (e.g. profile, critical dimensions, trench depth)
    • G01N 21/956 - Inspecting patterns on the surface of objects
    • G02B 21/0016 - Technical microscopes (e.g. for inspection or measuring in industrial production processes)
    • G02B 21/365, 21/367 - Control or image processing arrangements for digital or video microscopes, including outputs produced by processing a plurality of source images (e.g. depth sectioning)
    • G06T 1/0007 - Image acquisition
    • G06T 7/571 - Depth or shape recovery from multiple images, from focus
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 2207/10056 - Microscopic image
    • G06T 2207/30108, 2207/30148 - Industrial image inspection; semiconductor, IC, wafer

Abstract

A method of generating 3-D information of a sample includes: varying the distance between the sample and an objective lens of an optical microscope in predetermined steps; capturing an image at each predetermined step; determining a characteristic value of each pixel in each captured image; determining, for each captured image, the greatest characteristic value across a first portion of pixels in the captured image; comparing the greatest characteristic value for each captured image to determine whether a surface of the sample is present at each predetermined step; determining a first captured image that is focused on an apex of a bump of the sample; determining a second captured image that is focused on a first surface of the sample based on the characteristic value of each pixel in each captured image; and determining a first distance between the apex of the bump and the first surface.

Description

Optical measurement of bump height

The described embodiments relate generally to measuring three-dimensional information of a sample and, more particularly, to automatically measuring three-dimensional information in a fast and reliable manner.

Three-dimensional (3-D) measurement of various objects or samples is useful in many different applications. One such application is during wafer-level packaging processing. Three-dimensional measurement information of a wafer during the different steps of wafer-level fabrication can provide insight into wafer processing defects that may be present on the wafer. Three-dimensional measurement information of the wafer during wafer-level fabrication can also provide insight into the absence of defects before additional funds are spent continuing to process the wafer. Currently, three-dimensional measurement information of a sample is collected by human manipulation of a microscope. A human user focuses the microscope by eye to determine when the microscope is focused on a surface of the sample. An improved method of collecting three-dimensional measurement information is needed.

In a first novel aspect, three-dimensional (3-D) information of a sample is generated using an optical microscope by: varying the distance between the sample and an objective lens of the optical microscope in predetermined steps; capturing an image at each predetermined step; determining a characteristic value of each pixel in each captured image; determining, for each captured image, the greatest characteristic value across all pixels in the captured image; comparing the greatest characteristic value of each captured image to determine whether a surface of the sample is present at each predetermined step; determining, based on the characteristic value of each pixel in each captured image, a first captured image that is focused on a first surface of the sample; determining, based on the characteristic value of each pixel in each captured image, a second captured image that is focused on a second surface of the sample; and determining a first distance between the first surface and the second surface.

In a second novel aspect, a three-dimensional (3-D) measurement system determines a thickness of a translucent layer of the sample and a thickness of a metal layer of the sample, where the thickness of the metal layer is equal to the difference between the thickness of the translucent layer and the first distance, the first surface is a top surface of a photoresist layer, and the second surface is a top surface of a metal layer.

In a third novel aspect, three-dimensional (3-D) information of a sample is generated using an optical microscope by: varying the distance between the sample and an objective lens of the optical microscope in predetermined steps; capturing an image at each predetermined step; determining a characteristic value of each pixel in each captured image; determining, for each captured image, the greatest characteristic value across a first portion of the pixels in the captured image; comparing the greatest characteristic value of each captured image to determine whether a surface of the sample is present at each predetermined step; determining a first captured image that is focused on an apex of a bump of the sample; determining, based on the characteristic value of each pixel in each captured image, a second captured image that is focused on a first surface of the sample; and determining a first distance between the apex of the bump and the first surface.

In a fourth novel aspect, a maximum characteristic value is determined for each x-y pixel position within a second portion of the x-y pixel positions across all captured images, where the second portion includes at least some of the x-y pixel positions included in each captured image; a subset of the captured images is determined, where only captured images containing a maximum characteristic value of an x-y pixel position are included in the subset; and it is determined that, among all captured images in the subset, the first captured image is focused at a highest z position compared with all other captured images in the subset.

Further details, embodiments, and techniques are described in the detailed description below. This summary does not define the invention. The invention is defined by the claims.

CROSS-REFERENCE TO RELATED APPLICATIONS: This application is a continuation-in-part of, and claims priority under 35 U.S.C. §120 from, nonprovisional U.S. patent application Serial No. 15/338,838, entitled "OPTICAL MEASUREMENT OF OPENING DIMENSIONS IN A WAFER", filed October 31, 2016. The entire disclosure of that application is incorporated herein by reference. Application 15/338,838 is a continuation-in-part of, and claims priority under 35 U.S.C. §120 from, nonprovisional U.S. patent application Serial No. 15/233,812, entitled "AUTOMATED 3-D MEASUREMENT", filed August 10, 2016. The entire disclosure of that application is incorporated herein by reference.

Reference will now be made in detail to background examples and some embodiments of the invention, examples of which are illustrated in the accompanying drawings. In the description and claims below, relational terms such as "top", "below", "upper", "lower", "bottom", "left", and "right" may be used to describe relative orientations between different parts of the structures described, and it is to be understood that the overall structure described can actually be oriented in any way in three-dimensional space.

FIG. 1 is a diagram of a semi-automated 3-D metrology system 1. Semi-automated 3-D metrology system 1 includes an optical microscope (not shown), an on/off button 5, a computer 4, and a stage 2. In operation, a wafer 3 is placed on stage 2. The function of semi-automated 3-D metrology system 1 is to capture multiple images of an object and automatically generate 3-D information describing the various surfaces of the object. This is also referred to as a "scan" of an object. Wafer 3 is an example of an object analyzed by semi-automated 3-D metrology system 1. An object may also be referred to as a sample. In operation, wafer 3 is placed on stage 2 and semi-automated 3-D metrology system 1 begins the process of automatically generating 3-D information describing the surfaces of wafer 3. In one example, semi-automated 3-D metrology system 1 is started by pressing a designated key on a keyboard (not shown) connected to computer 4. In another example, semi-automated 3-D metrology system 1 is started by sending a start command to computer 4 across a network (not shown). Semi-automated 3-D metrology system 1 can also be configured to mate with a semi-automated wafer handling system (not shown) that removes a wafer after a scan of the wafer is completed and inserts a new wafer for scanning.

A fully automated 3-D metrology system (not shown) is similar to the semi-automated 3-D metrology system of FIG. 1; however, a fully automated 3-D metrology system also includes a robotic handler that can automatically pick up a wafer and place it on the stage without human intervention. In a similar fashion, a fully automated 3-D metrology system can also use the robotic handler to automatically pick up a wafer from the stage and remove the wafer from the stage. A fully automated 3-D metrology system may be desirable during production of many wafers because it avoids possible contamination by a human operator and improves time efficiency and overall cost. Alternatively, semi-automated 3-D metrology system 1 may be desirable during research and development activities when only a small number of wafers need to be measured.

FIG. 2 is a diagram of a 3-D imaging microscope 10 including multiple objective lenses 11 and an adjustable stage 12. The 3-D imaging microscope may be a confocal microscope, a structured illumination microscope, an interferometer microscope, or any other type of microscope well known in the art. A confocal microscope measures intensity. A structured illumination microscope measures the contrast of a projected structure. An interferometer microscope measures interference fringe contrast.

In operation, a wafer is placed on adjustable stage 12 and an objective lens is selected. The 3-D imaging microscope 10 captures multiple images of the wafer as the height of the stage on which the wafer rests is adjusted. This results in multiple images of the wafer being captured while the wafer is positioned at various distances away from the selected lens. In one alternative example, the wafer is placed on a fixed stage and the position of the objective lens is adjusted, thereby varying the distance between the objective lens and the sample without moving the stage. In another example, the stage may be adjustable in the x-y directions and the objective lens adjustable in the z direction.

The captured images may be stored locally in a memory included in 3-D imaging microscope 10. Alternatively, the captured images may be stored in a data storage device included in a computer system, where 3-D microscope 10 communicates the captured images to the computer system across a data communication link. Examples of a data communication link include: a Universal Serial Bus (USB) interface, an Ethernet connection, a FireWire bus interface, and a wireless network such as WiFi.

FIG. 3 is a diagram of a 3-D metrology system 20 including a 3-D microscope 21, a sample handler 22, a computer 23, a display 27 (optional), and input devices 28. 3-D metrology system 20 is an example of a system included in semi-automated 3-D metrology system 1. Computer 23 includes a processor 24, a storage device 25, and a network device 26 (optional). The computer outputs information to a user via display 27. Display 27 can also be used as an input device if it is a touch-screen device. Input devices 28 may include a keyboard and a mouse. Computer 23 controls the operation of 3-D microscope 21 and sample handler/stage 22. When a start-scan command is received by computer 23, the computer sends one or more commands to configure the 3-D microscope for image capture ("microscope control data"). For example, the correct objective lens needs to be selected, the resolution of the images to be captured needs to be selected, and the mode of storing captured images needs to be selected. When a start-scan command is received by computer 23, the computer also sends one or more commands to configure sample handler/stage 22 ("handler control data"). For example, the correct height (z direction) adjustment needs to be selected and the correct horizontal (x-y direction) alignment needs to be selected.
During operation, computer 23 causes sample handler/stage 22 to be adjusted to the appropriate position. Once sample handler/stage 22 is properly positioned, computer 23 causes the 3-D microscope to focus on a focal plane and capture at least one image. Computer 23 then causes the stage to move in the z direction such that the distance between the sample and the objective lens of the optical microscope is changed. Once the stage is moved to the new position, computer 23 causes the optical microscope to capture a second image. This process continues until an image has been captured at each desired distance between the objective lens of the optical microscope and the sample. The images captured at each distance are communicated from 3-D microscope 21 to computer 23 ("image data"). The captured images are stored in storage device 25 included in computer 23. In one example, computer 23 analyzes the captured images and outputs 3-D information to display 27. In another example, computer 23 analyzes the captured images and outputs 3-D information to a remote device via network 29. In yet another example, computer 23 does not analyze the captured images, but rather sends them via network 29 to another device for processing. The 3-D information may include a 3-D image rendered based on the captured images. The 3-D information may instead include no images at all, but rather data based on various characteristics of each captured image.

FIG. 4 is a diagram illustrating a method of capturing images as the distance between the objective lens of the optical microscope and the sample is varied. In the embodiment illustrated in FIG. 4, each image includes 1000 by 1000 pixels. In other embodiments, the images may include various pixel configurations. In one example, the spacing between consecutive distances is fixed at a predetermined amount. In another example, the spacing between consecutive distances may not be fixed. Such non-fixed spacing between images in the z direction may be advantageous if additional z-direction resolution is needed for only a portion of the z-direction scan of the sample. The z-direction resolution is based on the number of images captured per unit length in the z direction, so capturing additional images per unit length in the z direction increases the measured z-direction resolution. Conversely, capturing fewer images per unit length in the z direction decreases the measured z-direction resolution.

As discussed above, the optical microscope is first adjusted to focus on a focal plane positioned at distance 1 away from an objective lens of the optical microscope. The optical microscope then captures an image, which is stored in a storage device (i.e. "memory"). The stage is then adjusted so that the distance between the objective lens of the optical microscope and the sample is distance 2, and another image is captured and stored. This is repeated at distances 3, 4, and 5, and the process continues for N different distances between the objective lens of the optical microscope and the sample. Information indicating which image is associated with each distance is also stored in the storage device for processing.

In an alternative embodiment, the distance between the objective lens of the optical microscope and the sample is fixed. Rather, the optical microscope includes a zoom lens that allows the optical microscope to vary its focal plane. In this fashion, while the stage and the sample supported by the stage are fixed, the focal plane of the optical microscope is varied across N different focal planes. An image is captured for each focal plane and the images are stored in a storage device. The captured images across all of the various focal planes are then processed to determine 3-D information of the sample. This embodiment requires a zoom lens that can provide sufficient resolution across all focal planes while introducing minimal image distortion. In addition, calibration between each zoom position and the resulting focal length of the zoom lens is required.

FIG. 5 is a chart of the distance between the objective lens of the optical microscope and the sample at which each x-y coordinate has its maximum characteristic value. Once the images are captured and stored for each distance, the characteristics of each pixel of each image can be analyzed. For example, the light intensity of each pixel of each image can be analyzed. In another example, the contrast of each pixel of each image can be analyzed. In yet another example, the fringe contrast of each pixel of each image can be analyzed. The contrast of a pixel may be determined by comparing the intensity of the pixel with the intensities of a preset number of surrounding pixels. For additional description regarding how contrast information is generated, see U.S. patent application Serial No. 12/699,824, entitled "3-D Optical Microscope", filed February 3, 2010 by James Jianguo Xu et al. (the subject matter of which is incorporated herein by reference).

FIG. 6 is a 3-D diagram of a 3-D image rendered using the maximum characteristic value at each x-y coordinate shown in FIG. 5. All pixels with an X position between 1 and 19 have a maximum characteristic value at z-direction distance 7. All pixels with an X position between 20 and 29 have a maximum characteristic value at z-direction distance 2. All pixels with an X position between 30 and 49 have a maximum characteristic value at z-direction distance 7. All pixels with an X position between 50 and 59 have a maximum characteristic value at z-direction distance 2. All pixels with an X position between 60 and 79 have a maximum characteristic value at z-direction distance 7. In this fashion, the 3-D image illustrated in FIG. 6 can be generated using the maximum characteristic value per x-y pixel across all captured images. In addition, given knowledge of distance 2 and distance 7, the well depth illustrated in FIG. 6 can be calculated by subtracting distance 7 from distance 2.
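The per-pixel analysis of FIG. 5 and FIG. 6 can be sketched as follows. This is a minimal illustration, not the system's actual implementation: it assumes the focus stack is available as plain nested lists of characteristic values (intensity, contrast, or fringe contrast).

```python
def height_map(stack, z_positions):
    """For every x-y pixel, find the z distance whose image has the
    greatest characteristic value at that pixel (FIG. 5 / FIG. 6).

    stack: list of N images, each a 2-D list [row][col] of
    characteristic values; z_positions: the N objective-to-sample
    distances. Returns a 2-D list of z distances (the height map).
    """
    rows, cols = len(stack[0]), len(stack[0][0])
    heights = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # index of the image in which this pixel is sharpest
            best = max(range(len(stack)), key=lambda i: stack[i][r][c])
            heights[r][c] = z_positions[best]
    return heights
```

A well depth such as the one in FIG. 6 then follows by subtracting the two z values found for the two surfaces (distance 2 minus distance 7 in the example).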
Peak Mode Operation

FIG. 7 is a diagram illustrating peak mode operation using images captured at various distances. As discussed above with respect to FIG. 4, the optical microscope is first adjusted to focus on a plane positioned at distance 1 from an objective lens of the optical microscope, and an image is captured and stored in a storage device (i.e. "memory"). The stage is then adjusted so that the distance between the objective lens of the optical microscope and the sample is distance 2, and another image is captured and stored. This is repeated at distances 3, 4, and 5, and the process continues for N different distances between the objective lens of the optical microscope and the stage. Information indicating which image is associated with each distance is also stored in the storage device for processing.

In peak mode operation, instead of determining the maximum characteristic value at each x-y position across all captured images at the various z distances, the maximum characteristic value across all x-y positions within a single captured image at one z distance is determined. In other words, for each captured image, the maximum characteristic value across all pixels included in the captured image is selected. As illustrated in FIG. 7, the pixel location having the maximum characteristic value is likely to vary between different captured images. The characteristic may be intensity, contrast, or fringe contrast.

FIG. 8 is a diagram illustrating peak mode operation using images captured at various distances when a photoresist (PR) opening is within the field of view of the optical microscope. The top view of the object shows the cross-sectional area of the PR opening in the x-y plane. The PR opening also has a specific depth in the z direction. The top views in FIG. 8 below show the images captured at each distance. At distance 1, the optical microscope is not focused on the top surface of the wafer or on the bottom surface of the PR opening. At distance 2, the optical microscope is focused on the bottom surface of the PR opening, but not on the top surface of the wafer. This causes an increased characteristic value (intensity/contrast/fringe contrast) in the pixels that receive light reflected from the bottom surface of the PR opening compared with pixels that receive light reflected from other, out-of-focus surfaces (the top surface of the wafer). At distance 3, the optical microscope is not focused on the top surface of the wafer or on the bottom surface of the PR opening. Therefore, at distance 3, the maximum characteristic value will be substantially lower than the characteristic value measured at distance 2. At distance 4, the optical microscope is not focused on any surface of the sample; however, due to the difference between the index of refraction of air and the index of refraction of the photoresist layer, an increase in the maximum characteristic value (intensity/contrast/fringe contrast) is measured. FIG. 11 and the accompanying text describe this phenomenon in greater detail. At distance 6, the optical microscope is focused on the top surface of the wafer, but not on the bottom surface of the PR opening. This causes an increased characteristic value (intensity/contrast/fringe contrast) in the pixels that receive light reflected from the top surface of the wafer compared with pixels that receive light reflected from other, out-of-focus surfaces (the bottom surface of the PR opening). Once the maximum characteristic value from each captured image is determined, the results can be used to determine at which distances a surface of the wafer is located.

FIG. 9 is a chart illustrating the 3-D information that results from peak mode operation. As discussed with respect to FIG. 8, the maximum characteristic values of the images captured at distances 1, 3, and 5 are smaller than the maximum characteristic values of the images captured at distances 2, 4, and 6. The curve of maximum characteristic values at the various z distances may contain noise due to environmental effects such as vibration. To minimize this noise, a standard smoothing method, such as Gaussian filtering with a certain kernel size, can be applied before further data analysis.

One method of comparing the maximum characteristic values is performed by a peak-finding algorithm. In one example, a derivative method is used to locate zero crossings along the z axis to determine the distances at which each "peak" is present. The maximum characteristic values at each distance where a peak is found are then compared to determine the distance at which the largest characteristic value was measured. In the case of FIG. 9, a peak will be found at distance 2, which serves as an indication that a surface of the wafer is located at distance 2.

Another method of comparing the maximum characteristic values is performed by comparing each maximum characteristic value with a preset threshold. The threshold may be calculated based on the wafer material, the distance, and the specifications of the optical microscope. Alternatively, the threshold may be determined by empirical testing before automated processing. In either case, the maximum characteristic value of each captured image is compared with the threshold. If the maximum characteristic value is greater than the threshold, it is determined that the maximum characteristic value indicates the presence of a surface of the wafer. If the maximum characteristic value is not greater than the threshold, it is determined that the maximum characteristic value does not indicate a surface of the wafer.
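The peak-mode steps above (per-image maximum, Gaussian smoothing, derivative zero crossings, optional threshold) can be sketched as below. This is a minimal sketch, not the production algorithm; the kernel width and the zero-padded smoothing at the curve edges are assumed choices.

```python
import math

def peak_mode_surfaces(stack, z_positions, sigma=1.0, threshold=None):
    """Peak-mode sketch: take the maximum characteristic value over all
    pixels of each captured image, smooth the resulting curve with a
    small Gaussian kernel (to suppress vibration noise), and report the
    z positions of local peaks (zero crossings of the first
    derivative). Peaks below `threshold`, if given, are discarded.
    """
    curve = [max(max(row) for row in img) for img in stack]
    radius = int(3 * sigma)
    kernel = [math.exp(-k * k / (2 * sigma * sigma))
              for k in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [w / total for w in kernel]
    n = len(curve)
    smooth = [sum(kernel[k + radius] * curve[i + k]
                  for k in range(-radius, radius + 1)
                  if 0 <= i + k < n)
              for i in range(n)]
    d = [smooth[i + 1] - smooth[i] for i in range(n - 1)]
    # a peak: derivative changes from positive to non-positive
    peaks = [i for i in range(1, n - 1) if d[i - 1] > 0 >= d[i]]
    if threshold is not None:
        peaks = [i for i in peaks if smooth[i] > threshold]
    return [z_positions[i] for i in peaks]
```

For the FIG. 9 situation, a stack whose per-image maxima spike at distances 2 and 6 yields exactly those two z positions.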
Summation Mode Operation

FIG. 10 is a diagram illustrating summation mode operation using images captured at various distances. As discussed above with respect to FIG. 4, the optical microscope is first adjusted to focus on a plane positioned at distance 1 from an objective lens of the optical microscope, and an image is captured and stored in a storage device (i.e. "memory"). The stage is then adjusted so that the distance between the objective lens of the optical microscope and the sample is distance 2, and another image is captured and stored. This is repeated at distances 3, 4, and 5, and the process continues for N different distances between the objective lens of the optical microscope and the sample. Information indicating which image is associated with each distance is also stored in the storage device for processing.

Instead of determining the maximum characteristic value across all x-y positions within a single captured image at one z distance, the characteristic values of all x-y positions of each captured image are added together. In other words, for each captured image, the characteristic values of all pixels included in the captured image are summed. The characteristic may be intensity, contrast, or fringe contrast. A summed characteristic value that is substantially greater than the average summed characteristic value of adjacent z distances indicates that a surface of the wafer is present at that distance. However, this method can also produce false positives, as described in FIG. 11.

FIG. 11 is a diagram illustrating erroneous surface detection when using summation mode operation. The wafer illustrated in FIG. 11 includes a silicon substrate 30 and a photoresist layer 31 deposited on top of silicon substrate 30. The top surface of silicon substrate 30 is located at distance 2. The top surface of photoresist layer 31 is located at distance 6. The image captured at distance 2 will produce a sum of characteristic values substantially greater than that of other images captured at distances where no surface of the wafer is present. The image captured at distance 6 will likewise produce a sum of characteristic values substantially greater than that of other images captured at distances where no surface is present. So far, summation mode operation appears to be a valid indicator of the presence of a surface of the wafer. However, the image captured at distance 4 will also produce a sum of characteristic values substantially greater than that of other images captured at distances where no surface of the wafer is present. This is a problem because, as clearly shown in FIG. 11, no surface of the wafer is located at distance 4. Rather, the increase in the sum of characteristic values at distance 4 is an artifact of the surfaces located at distances 2 and 6. A major portion of the light irradiating the photoresist layer is not reflected, but travels into the photoresist layer. The angle at which this light travels changes due to the difference in index of refraction between air and photoresist. The new angle is closer to normal than the angle of the light irradiating the top surface of the photoresist. The light travels to the top surface of the silicon substrate below the photoresist layer. The light is then reflected by the highly reflective silicon substrate layer. As the reflected light exits the photoresist layer and enters the air, its angle changes again due to the difference in index of refraction between the photoresist layer and air. This first redirection, reflection, and second redirection of the irradiating light causes the optical microscope to observe an increase in characteristic value (intensity/contrast/fringe contrast) at distance 4. This example illustrates that whenever a sample includes a transparent material, summation mode operation will detect surfaces that are not present on the sample.

FIG. 12 is a chart illustrating the 3-D information that results from summation mode operation. This chart illustrates the consequence of the phenomenon illustrated in FIG. 11. The large summed characteristic value at distance 4 erroneously indicates that a surface is present at distance 4. A method that does not produce false positive indications of the presence of a surface of the wafer is needed.

Range Mode Operation

FIG. 13 is a diagram illustrating range mode operation using images captured at various distances. As discussed above with respect to FIG. 4, the optical microscope is first adjusted to focus on a plane positioned at distance 1 from an objective lens of the optical microscope, and an image is captured and stored in a storage device (i.e. "memory"). The stage is then adjusted so that the distance between the objective lens of the optical microscope and the sample is distance 2, and another image is captured and stored. This is repeated at distances 3, 4, and 5, and the process continues for N different distances between the objective lens of the optical microscope and the sample. Information indicating which image is associated with each distance is also stored in the storage device for processing.

Instead of determining the sum of all characteristic values across all x-y positions within a single captured image at one z distance, a count of the pixels in the single captured image that have a characteristic value within a specific range is determined. In other words, for each captured image, a count of pixels having a characteristic value within a specific range is determined. The characteristic may be intensity, contrast, or fringe contrast. A pixel count at one particular z distance that is substantially greater than the average pixel count at adjacent z distances indicates that a surface of the wafer is present at that distance. This method reduces the false positives described in FIG. 11.
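The range-mode pixel count described above can be sketched as follows. This is a minimal illustration under assumed inputs: the band `[lo, hi]` stands for the expected characteristic value range of one material, and the stack is given as nested lists.

```python
def range_mode_counts(stack, lo, hi):
    """Range-mode sketch: for each captured image, count the pixels
    whose characteristic value falls inside the band [lo, hi] expected
    for one material, so that reflections from other materials are
    filtered out before any peak search.
    """
    return [sum(1 for row in img for v in row if lo <= v <= hi)
            for img in stack]
```

A peak search (or neighbour comparison) over the returned counts then indicates the z step at which the surface of that material is in focus.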
FIG. 14 is a chart illustrating the 3-D information that results from range mode operation. Given knowledge of the different material types present on the wafer and of the optical microscope configuration, an expected range of characteristic values can be determined for each material type. For example, a photoresist layer will reflect a relatively small amount of the light irradiating its top surface (i.e. 4%). A silicon layer will reflect 37% of the light irradiating its top surface. The redirected reflection from the top surface of the photoresist layer observed at distance 4 (i.e. 21%) will be substantially greater than the reflection observed at distance 6; however, the redirected reflection from the top surface of the silicon substrate observed at distance 4 (i.e. 21%) will be substantially smaller than the reflection observed at distance 2. Therefore, when looking for the top surface of the photoresist layer, a first range centered on the expected characteristic value of photoresist can be used to filter out pixels with characteristic values outside the first range, thereby filtering out pixels whose characteristic values do not originate from reflection off the top surface of the photoresist layer. The pixel counts across all distances that result from applying the first characteristic value range are illustrated in FIG. 15. As shown in FIG. 15, applying the first range filters out some, but not necessarily all, pixels from other distances (surfaces). This occurs when characteristic values measured at multiple distances fall within the first range. Nevertheless, applying the first range before counting pixels still serves to make the pixel count at the desired surface stand out over the pixel counts at other distances. This is illustrated in FIG. 15. After applying the first range, the pixel count at distance 6 is greater than the pixel counts at distances 2 and 4, whereas before applying the first range the pixel count at distance 6 was less than the pixel counts at distances 2 and 4 (as shown in FIG. 14).

In a similar fashion, when looking for the top surface of the silicon substrate layer, a second range centered on the expected characteristic value of the silicon substrate layer can be used to filter out pixels with characteristic values outside the second range, thereby filtering out pixels whose characteristic values do not originate from reflection off the top surface of the silicon substrate layer. The pixel counts across all distances that result from applying the second characteristic value range are illustrated in FIG. 16. This application of ranges, by virtue of knowing the expected characteristic values of all materials present on the scanned wafer, reduces the erroneous indication that a wafer surface is located at distance 4. As discussed with respect to FIG. 15, applying a range filters out some but not necessarily all pixels from other distances (surfaces). However, when the characteristic values measured at the other distances do not fall within the same range, applying the range eliminates all pixel counts from those other distances (surfaces). FIG. 16 illustrates this case. In FIG. 16, the second range is applied before generating the pixel counts at each distance. The result of applying the second range is that only pixels at distance 2 are counted. This produces a very clear indication that the surface of the silicon substrate is located at distance 2.

It is noted that, to reduce effects caused by potential noise sources such as environmental vibration, a standard smoothing operation (such as Gaussian filtering) can be applied to the total pixel counts along the z distances before performing any peak search operation.

FIG. 17 is a flowchart 200 illustrating the various steps included in peak mode operation. In step 201, the distance between the sample and the objective lens of an optical microscope is varied in predetermined steps. In step 202, an image is captured at each predetermined step. In step 203, a characteristic of each pixel in each captured image is determined. In step 204, for each captured image, the maximum characteristic across all pixels in the captured image is determined. In step 205, the maximum characteristic of each captured image is compared to determine whether a surface of the sample is present at each predetermined step.

FIG. 18 is a flowchart 300 illustrating the various steps included in range mode operation. In step 301, the distance between the sample and the objective lens of an optical microscope is varied in predetermined steps. In step 302, an image is captured at each predetermined step. In step 303, a characteristic of each pixel in each captured image is determined. In step 304, for each captured image, a count of pixels having a characteristic value within a first range is determined. In step 305, it is determined, based on the pixel count of each captured image, whether a surface of the sample is present at each predetermined step.

FIG. 19 is a diagram of a captured image including a single feature. One example of a feature is a circular opening in a photoresist layer. Another example of a feature is a trench-shaped opening in the photoresist layer, such as an unplated redistribution line (RDL) structure. During wafer processing it is advantageous to measure various features of a photoresist opening in a wafer layer. Measurement of a photoresist opening provides detection of flaws in the structure before metal is plated into the hole. For example, if a photoresist opening does not have the correct size, the plated RDL width will be wrong. Detecting this type of defect can prevent further fabrication of a defective wafer, which saves material and processing expense. FIG. 19 illustrates that when the captured image is focused on the top surface of the photoresist layer, the measured intensity of light reflected from the top surface of the photoresist layer is greater than the measured intensity of light reflected from the opening in the photoresist layer. As discussed in greater detail below, information associated with each pixel in the captured image can be used to generate an intensity value for each pixel in the captured image. The intensity value of each pixel can then be compared with an intensity threshold to determine whether each pixel is associated with a first region of the captured image (such as the top surface of the photoresist layer) or with a second region of the captured image (such as the photoresist opening area). This can be accomplished by: (i) first applying an intensity threshold to the measured intensity of each pixel in the captured image; (ii) classifying all pixels with an intensity value below the intensity threshold as associated with a first region of the captured image; (iii) classifying all pixels with an intensity value above the intensity threshold as associated with a second region of the captured image; and (iv) defining a feature as a group of pixels within the same region that adjoin other pixels associated with the same region.
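The per-pixel intensity conversion and threshold classification described for FIG. 19 can be sketched as below. This is a minimal sketch of the first conversion method only; the weights shown are the common luma factors, used here purely as a stand-in for the conversion factors a system recipe would define, and the threshold value is likewise an assumption.

```python
def classify_pixels(rgb_image, weights=(0.299, 0.587, 0.114),
                    threshold=128.0):
    """Convert RGB pixels to intensity using one conversion factor per
    colour channel, then split pixels into two regions by an intensity
    threshold. True marks a pixel whose intensity is at or above the
    threshold (e.g. the photoresist top surface in FIG. 19).
    """
    intensity = [[r * weights[0] + g * weights[1] + b * weights[2]
                  for (r, g, b) in row] for row in rgb_image]
    region = [[v >= threshold for v in row] for row in intensity]
    return intensity, region
```

Grouping adjoining same-region pixels (step (iv) in the text) would then yield the feature itself, e.g. via a connected-component pass.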
The captured image shown in FIG. 19 may be a color image. Each pixel of the color image includes red, blue, and green (RGB) channel values. These color values can be combined to produce a single intensity value for each pixel. Various methods for converting the RGB values of each pixel to a single intensity value are described below.

A first method converts the three color channels to an intensity value using three weighting values. In other words, each color channel has its own weighting value, or conversion factor. One can use a default set of three conversion factors defined in a system recipe, or modify the three conversion factors based on one's sample measurement needs. A second method subtracts each pixel's color channels from a default value for each channel, and then converts the result to an intensity value using the conversion factors discussed in the first method. A third method converts color to an intensity value using a "color difference" scheme. In a color difference scheme, the resulting pixel intensity is defined by how close the color of the pixel is to a predefined fixed red, green, and blue color value. One example of a color difference is the weighted vector distance between the color value of a pixel and the fixed color value. Yet another color difference method uses a fixed color value automatically derived from the image. In one example, a border region of an image is known to have the background color; a weighted average of the colors of the border region pixels can then be used as the fixed color value of the color difference scheme.

Once the color image has been converted to an intensity image, an intensity threshold can be compared with the intensity of each pixel to determine the image region to which the pixel belongs. In other words, a pixel with an intensity value above the intensity threshold indicates that the pixel received light reflected from a first surface of the sample, and a pixel with an intensity value below the intensity threshold indicates that the pixel did not receive light reflected from the first surface of the sample. Once each pixel in the image is mapped to a region, the approximate shape of the feature in focus in the image can be determined.

FIGS. 20, 21, and 22 illustrate three different methods of generating an intensity threshold that can be used to distinguish pixels that measure light reflected from the top surface of the photoresist layer from pixels that measure light not reflected from the top surface of the photoresist layer.

FIG. 20 illustrates a first method of generating an intensity threshold for analyzing the captured image. In this first method, a pixel count is generated for each measured intensity value. This type of chart is also referred to as a histogram. Once the pixel count per intensity value is generated, the intensity range between the peak count of pixels originating from measured light reflected from the photoresist layer and the peak count of pixels originating from measured light not reflected from the photoresist layer can be determined. An intensity value within that range is selected as the intensity threshold. In one example, the midpoint between the two peak counts is selected as the threshold intensity. In other examples falling within the disclosure of the present invention, other intensity values between the two peak counts may be used.

FIG. 21 is a second method of generating an intensity threshold for analyzing the captured image. In step 311, a determination is made as to a first percentage of the captured image that represents photoresist area. This determination can be made by physical measurement, optical inspection, or based on production specifications. In step 312, a determination is made as to a second percentage of the captured image that represents photoresist opening area. This determination can likewise be made by physical measurement, optical inspection, or based on production specifications. In step 313, all pixels in the captured image are sorted according to the intensity measured by each pixel. In step 314, all pixels with an intensity within the bottom second percentage of all pixel intensities are selected. In step 315, all selected pixels are analyzed.

FIG. 22 illustrates a third method of determining an intensity threshold. In step 321, a predetermined intensity threshold is stored into memory. In step 322, the intensity of each pixel is compared with the stored intensity threshold. In step 323, all pixels with an intensity value less than the intensity threshold are selected. In step 324, the selected pixels are analyzed.

Regardless of how the intensity threshold is generated, the threshold intensity value is used to roughly determine where the boundary of the feature in the captured image is located. The approximate boundary of the feature will then be used to determine a more precise measurement of the boundary of the feature, as discussed below.

FIG. 23 is a 3-D diagram of the photoresist opening shown in FIG. 19. Various photoresist opening measurements are of interest during processing, such as the areas of the top and bottom openings, the diameters of the top and bottom openings, the circumferences of the top and bottom openings, the cross-sectional widths of the top and bottom openings, and the depth of the opening. A first measurement is the top surface opening area. FIG. 8 (and the accompanying text) describes how to select, from a plurality of images captured at different distances from the sample, an image focused on the top surface of the photoresist opening and an image focused on the bottom surface of the photoresist opening. Once the image focused on the top surface is selected, it can be used to determine the top opening measurements described above. Likewise, once the image focused on the bottom surface of the photoresist opening is selected, it can be used to determine the bottom opening measurements described above. As discussed above and in U.S. patent application Serial No. 12/699,824 to James Jianguo Xu et al., entitled "3-D Optical Microscope" (the subject matter of which is incorporated herein by reference), a pattern or grid may be projected onto the surface of the sample while the multiple images are captured. In one example, an image including the projected pattern or grid is used to determine the photoresist opening measurements. In another example, a new image captured at the same z distance without the pattern or grid is used to determine the photoresist opening measurements. In the latter example, the new image without a projected pattern or grid on the sample provides a "cleaner" image, which allows easier detection of the boundary of the photoresist opening.

FIG. 24 is a 2-D diagram of the top surface opening shown in FIG. 23. The 2-D diagram clearly shows the boundary (solid line) 40 of the top surface opening. The boundary is traced using a best fit line (dashed line 41). Once the best fit line trace is generated, the diameter, area, and circumference of best fit line 41 can be generated.
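For a circular opening such as FIG. 24, the best fit trace can be sketched as a least-squares circle through the detected boundary points. The text does not name a specific fitting method, so the algebraic (Kasa-style) fit below is an assumed, standard choice; diameter, area, and circumference follow directly from the returned radius.

```python
def fit_circle(points):
    """Least-squares circle fit through boundary points (x, y).
    Fits 2*cx*x + 2*cy*y + c = x^2 + y^2 and returns (cx, cy, r).
    """
    # accumulate the 3x3 normal equations
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = (2 * x, 2 * y, 1.0)
        rhs = x * x + y * y
        for i in range(3):
            atb[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # solve by Gauss-Jordan elimination with partial pivoting
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    cx, cy, c = (m[i][3] / m[i][i] for i in range(3))
    return cx, cy, (c + cx * cx + cy * cy) ** 0.5
```

For non-circular openings (squares, trenches, etc.), the same normal-equation approach applies with a different shape model.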
FIG. 25 is a 2-D diagram of the bottom surface opening shown in FIG. 23. The 2-D diagram clearly shows the boundary (solid line 42) of the bottom surface opening. The boundary is traced using a best fit line (dashed line 43). Once the best fit line trace is generated, the bottom surface opening diameter, area, and circumference of the best fit line can be calculated.

In the present example, the best fit line is automatically generated by a computer system in communication with the optical microscope. The best fit line can be generated by analyzing the transition between the dark and bright portions of the selected image, as discussed in greater detail below.

FIG. 26 is a 2-D image of an opening in a photoresist layer. The image is focused on the top surface of the photoresist layer. In this example, the light reflected from the top surface of the photoresist layer is bright because the microscope is focused on the top surface of the photoresist layer. The light intensity measured from the photoresist opening is dark because there is no reflective surface in the photoresist opening. The intensity of each pixel is used to determine whether the pixel belongs to the top surface of the photoresist or to the opening in the photoresist. The intensity change across the transition between the top surface of the photoresist and the opening in the photoresist can span multiple pixels and multiple intensity levels. The image background intensity may also not be uniform. Therefore, further analysis is required to determine the exact pixel location of the boundary of the photoresist. To determine the pixel location of a single surface transition point, an intensity average is obtained within an adjacent bright area outside the transition area, and an intensity average is obtained within an adjacent dark area outside the transition area. The intensity value midway between the average of the adjacent bright area and the average of the adjacent dark area is used as the intensity threshold for deciding whether a pixel belongs to the top surface of the photoresist or to the opening in the photoresist. This intensity threshold may differ from the previously discussed intensity threshold used for selecting a feature within a single captured image. Once the midpoint intensity threshold is determined, it is compared with all pixels to distinguish pixels belonging to the top surface of the photoresist from pixels belonging to the opening in the photoresist. If a pixel intensity is above the intensity threshold, the pixel is determined to be a photoresist pixel. If a pixel intensity is below the intensity threshold, the pixel is determined to be an opening area pixel. Multiple boundary points can be determined in this fashion and used to fit a shape. The fitted shape is then used to calculate all desired dimensions of the top opening of the photoresist. In one example, the fitted shape may be selected from the group of: a circle, a square, a rectangle, a triangle, an oval, a hexagon, a pentagon, etc.

FIG. 27 illustrates the variation in measured intensity across an adjacent area around the brightness transition of FIG. 26. At the leftmost portion of the adjacent area, the measured intensity is high because the microscope is focused on the top surface of the photoresist layer. The measured light intensity decreases through the brightness transition of the adjacent area. The measured light intensity falls to a minimum range at the rightmost portion of the adjacent area because the top surface of the photoresist layer is not present in the rightmost portion of the adjacent area. FIG. 27 plots this variation of measured intensity across the adjacent area. A boundary point indicating where the top surface of the photoresist layer ends can then be determined by applying a threshold intensity. The boundary point where the top surface of the photoresist ends is located at the intersection of the measured intensity and the threshold intensity. This process is repeated at different adjacent areas located along the brightness transition. A boundary point is determined for each adjacent area. The boundary points of the respective adjacent areas are then used to determine the size and shape of the top surface boundary.

FIG. 28 is a 2-D image of an opening in a photoresist layer. The image is focused on the bottom surface of the photoresist opening. In this example, the light reflected from the bottom surface of the photoresist opening area is bright because the microscope is focused on the bottom surface of the photoresist opening. The light reflected from the photoresist area is also relatively bright because the substrate is silicon or a metal seed layer with high reflectivity. The light reflected from the boundary of the photoresist layer is darker due to light scattering caused by the photoresist boundary. The measured intensity of each pixel is used to determine whether the pixel belongs to the bottom surface of the photoresist opening. The intensity change across the transition between the bottom surface of the photoresist and the photoresist opening area can span multiple pixels and multiple intensity levels. The image background intensity may also not be uniform. Therefore, further analysis is required to determine the exact pixel location of the photoresist opening. To determine the pixel location of a boundary point, the location of the pixel with minimum intensity among adjacent pixels is determined. Multiple boundary points can be determined in this fashion and used to fit a shape. The fitted shape is then used to calculate the desired dimensions of the bottom opening.

FIG. 29 illustrates the variation in measured intensity across an adjacent area around the brightness transition of FIG. 28. At the leftmost portion of the adjacent area, the measured intensity is high because the microscope is focused on the bottom surface of the photoresist opening. The measured light intensity decreases to a minimum intensity through the brightness transition of the adjacent area. Due to light reflection from the substrate surface, the measured light intensity rises to a relatively high intensity range at the rightmost portion of the adjacent area. FIG. 29 plots this variation of measured intensity across the adjacent area. A boundary point indicating where the boundary of the photoresist opening is located can then be determined by finding the location of the minimum measured intensity. The boundary point is located where the minimum measured intensity is found. The process is repeated at different adjacent areas located along the brightness transition. A boundary point is determined for each adjacent area. The boundary points of the respective adjacent areas are then used to determine the size and shape of the bottom surface boundary.
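The FIG. 27 boundary localisation along one scan line can be sketched as below. This is a minimal sketch: sampling the bright and dark levels from the first and last few samples of the profile is an assumed choice, and the sub-pixel refinement by linear interpolation is one reasonable way to realise "the intersection of the measured intensity and the threshold intensity".

```python
def boundary_position(profile, bright_n=3, dark_n=3):
    """Locate a top-surface boundary point along one scan line across
    a bright-to-dark transition. The threshold is the midpoint of the
    mean bright level and the mean dark level measured just outside
    the transition; the crossing is refined to sub-pixel position by
    linear interpolation between the two straddling samples.
    Returns the fractional sample index of the boundary, or None.
    """
    bright = sum(profile[:bright_n]) / bright_n
    dark = sum(profile[-dark_n:]) / dark_n
    thresh = (bright + dark) / 2.0
    for i in range(1, len(profile)):
        if profile[i - 1] >= thresh > profile[i]:
            # interpolate between samples i-1 and i
            return (i - 1) + (profile[i - 1] - thresh) / (profile[i - 1] - profile[i])
    return None
```

For the FIG. 29 (bottom surface) case, the analogous step would instead return the index of the minimum of the profile.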
FIG. 30 is a 2-D image of a trench structure in a photoresist layer, such as an unplated redistribution line (RDL) structure. The image is focused on the top surface of the photoresist layer. In this example, the light reflected from the top surface of the photoresist layer is bright because the microscope is focused on the top surface of the photoresist layer. The light reflected from the opening in the photoresist layer is darker because less light is reflected from the open trench area. The intensity of each pixel is used to determine whether the pixel belongs to the top surface of the photoresist or to the opening area in the photoresist. The intensity change across the transition between the top surface of the photoresist and the opening area in the photoresist can span multiple pixels and multiple intensity levels. The image background intensity may also not be uniform. Therefore, further analysis is required to determine the exact pixel location of the boundary of the photoresist. To determine the pixel location of a single surface transition point, an intensity average is obtained within an adjacent bright area outside the transition area, and an intensity average is obtained within an adjacent dark area outside the transition area. The intensity value midway between the average of the adjacent bright area and the average of the adjacent dark area is used as the intensity threshold for distinguishing top-surface photoresist reflections from non-top-surface photoresist reflections. Once the midpoint intensity threshold is determined, it is compared with all adjacent pixels to determine a boundary between top-surface pixels and the photoresist opening area. If a pixel intensity is above the intensity threshold, the pixel is determined to be a top-surface photoresist pixel. If a pixel intensity is below the intensity threshold, the pixel is determined to be a photoresist opening area pixel. Multiple boundary points can be determined in this fashion and used to fit a shape. The fitted shape is then used to calculate all desired dimensions of the photoresist opening of the trench, such as the trench width.

FIG. 31 illustrates the variation in measured intensity across an adjacent area around the brightness transition of FIG. 30. At the leftmost portion of the adjacent area, the measured intensity is high because the microscope is focused on the top surface of the photoresist layer. The measured light intensity decreases through the brightness transition of the adjacent area. The measured light intensity falls to a minimum range at the rightmost portion of the adjacent area because the top surface of the photoresist layer is not present in the rightmost portion of the adjacent area. FIG. 31 plots this variation of measured intensity across the adjacent area. A boundary point indicating where the top surface of the photoresist layer ends can then be determined by applying a threshold intensity. The boundary point where the top surface of the photoresist ends is located at the intersection of the measured intensity and the threshold intensity. This process is repeated at different adjacent areas located along the brightness transition. A boundary point is determined for each adjacent area. The boundary points of the respective adjacent areas are then used to determine the size and shape of the top surface boundary.

With respect to FIGS. 26 through 31, pixel intensity is only one example of a pixel characteristic that can be used to distinguish pixels of different regions in an image. For example, the wavelength, or color, of each pixel can also be used in a similar fashion to distinguish pixels from different regions in an image. Once the boundary between regions is precisely defined, the boundary is then used to determine a critical dimension (CD) of a PR opening, such as its diameter or width. Typically, the measured CD value is then compared with values measured on other types of tools, such as a critical dimension scanning electron microscope (CD-SEM). This kind of cross-calibration is necessary to ensure measurement accuracy in production monitoring tools.

FIG. 32 is a 3-D diagram of a photoresist opening partially filled with plated metal. The opening in the photoresist layer is trench shaped, such as a plated redistribution line (RDL) structure. During wafer processing, it is advantageous to measure various characteristics of the plated metal deposited into the photoresist opening while the photoresist is still intact. For example, if the metal is not thick enough, one can always plate additional metal as long as the photoresist has not been stripped. The ability to find potential problems while the wafer is still in a workable stage prevents further fabrication of a defective wafer and saves material and processing expense.

FIG. 33 is a cross-sectional diagram of a photoresist opening partially filled with plated metal. FIG. 33 clearly shows that the height of the top surface of the photoresist ("PR") area is greater than the height of the top surface of the plated metal. The width of the top surface of the plated metal is also illustrated in FIG. 33. Using the various methods described above, the z position of the top surface of the photoresist area and the z position of the top surface of the plated metal can be determined. The distance between the top surface of the photoresist area and the top surface of the plated metal (also referred to as the "step height") is equal to the difference between the height of the top surface of the photoresist area and the height of the top surface of the plated metal. To determine the thickness of the plated metal, another measurement is needed: the thickness of the photoresist area. As discussed above with respect to FIG. 11, the photoresist area is translucent and has an index of refraction different from that of open air. Therefore, the focal plane of the captured image focused on the light reflected from the bottom surface of the photoresist area is not actually located at the bottom surface of the photoresist area. At this point, however, our objective is different. We do not want to filter out erroneous surface measurements; rather, the thickness of the photoresist area is now needed. FIG. 40 illustrates how a portion of the incident light that is not reflected from the top surface of the photoresist area travels through the photoresist area at an angle different from that of the incident light, due to the index of refraction of the photoresist material. If this error is not addressed, the measured thickness of the photoresist area is D' (the measured z position of the captured image focused on the light reflected from the top surface of the photoresist area minus the measured z position of the captured image focused on the light reflected from the bottom surface of the photoresist area); as FIG. 40 clearly illustrates, the measured thickness D' is not close to the actual thickness D of the photoresist area. However, the error introduced by the index of refraction of the photoresist area can be removed by applying a correction calculation to the measured thickness. A first correction calculation is shown in FIG. 40, in which the actual thickness (D) of the photoresist area is equal to the measured thickness (D') multiplied by the index of refraction of the photoresist area. A second correction calculation is shown in FIG. 40, in which the actual thickness (D) is equal to the measured thickness (D') multiplied by the index of refraction of the photoresist area, plus an offset value. The second correction calculation is more general and takes into account the facts that the index of refraction of photoresist varies with wavelength and that the spherical aberration of an objective lens can affect z position measurement when imaging through a transparent medium. Therefore, as long as the proper calibration procedure is followed, the z position of the focal plane of the captured image focused on the light reflected from the bottom surface of the photoresist area can be used to calculate the actual thickness of the photoresist area.
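The FIG. 33 calculation with the FIG. 40 refraction correction can be sketched numerically as follows. This is a minimal sketch; the variable names and example values are illustrative assumptions, not values from the patent.

```python
def plated_metal_thickness(z_pr_top, z_pr_bottom_apparent, z_metal_top,
                           n_pr, offset=0.0):
    """Sketch of the FIG. 33 / FIG. 40 calculation. The apparent
    photoresist thickness D' (top-surface focus z minus apparent
    bottom-surface focus z) is corrected for refraction using the
    second correction calculation, D = D' * n_pr + offset (set
    offset=0 to recover the first correction calculation). The
    plated-metal thickness is the corrected photoresist thickness
    minus the step height between the PR top and the metal top.
    """
    d_apparent = z_pr_top - z_pr_bottom_apparent      # D'
    d_actual = d_apparent * n_pr + offset             # D
    step_height = z_pr_top - z_metal_top
    return d_actual - step_height
```

For example, with a PR top focus at z = 10 µm, an apparent PR bottom focus at z = 4 µm, a metal top at z = 7 µm, and an assumed index of 1.6, the corrected PR thickness is 9.6 µm and the metal thickness 6.6 µm.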
Once the correction equation is applied to the measured thickness of the photoresist area, the true thickness of the photoresist area is obtained. Referring again to FIG. 33, the thickness of the plated metal can now be calculated. The thickness of the plated metal is equal to the thickness of the photoresist area minus the difference between the z position of the top surface of the photoresist area and the z position of the top surface of the plated metal.

FIG. 34 is a 3-D diagram of a circular photoresist opening with plated metal. FIG. 35 is a cross-sectional diagram of the circular photoresist opening with plated metal shown in FIG. 34. The cross-sectional diagram of FIG. 35 is similar to that of FIG. 33. FIG. 35 clearly shows that the height of the top surface of the photoresist ("PR") area is greater than the height of the top surface of the plated metal. Using the various methods described above, the z position of the top surface of the photoresist area and the z position of the top surface of the plated metal can be determined; the step height between them is the difference of the two heights. To determine the thickness of the plated metal, the thickness of the photoresist area is again needed. As discussed with respect to FIG. 11 and FIG. 40, the photoresist area is translucent, so the focal plane of the captured image focused on light reflected from the bottom surface of the photoresist area is not actually located at the bottom surface; the measured thickness D' must be corrected, either as D equal to D' multiplied by the index of refraction of the photoresist area (the first correction calculation of FIG. 40) or as D equal to D' multiplied by the index of refraction plus an offset value (the second, more general correction calculation, which accounts for the wavelength dependence of the index of refraction and for the spherical aberration of an objective lens when imaging through a transparent medium). As long as the proper calibration procedure is followed, the z position of the focal plane of the captured image focused on light reflected from the bottom surface of the photoresist area can thus be used to calculate the actual thickness of the photoresist area.

Once the correction equation is applied to the measured thickness of the photoresist area, the true thickness of the photoresist area is obtained. Referring again to FIG. 35, the thickness of the plated metal can now be calculated: it is equal to the thickness of the photoresist area minus the difference between the z position of the top surface of the photoresist area and the z position of the top surface of the plated metal.

FIG. 36 is a 3-D diagram of a metal pillar above a passivation layer. FIG. 37 is a cross-sectional diagram of the metal pillar above the passivation layer shown in FIG. 36. FIG. 37 clearly shows that the height of the top surface of the passivation layer is less than the height of the top surface of the metal layer. The diameter of the top surface of the plated metal is also illustrated in FIG. 37. Using the various methods described above, the z position of the top surface of the passivation layer and the z position of the top surface of the metal layer can be determined. The distance between the top surface of the passivation layer and the top surface of the metal layer (also referred to as the "step height") is equal to the difference between the height of the top surface of the metal layer and the height of the top surface of the passivation layer. To determine the thickness of the metal layer, another measurement is needed: the thickness of the passivation layer. As discussed above with respect to FIG. 11, a translucent material such as a photoresist area or a passivation layer has an index of refraction different from that of open air. Therefore, the focal plane of the captured image focused on light reflected from the bottom surface of the passivation layer is not actually located at the bottom surface of the passivation layer. Here again, the thickness of the passivation layer is what is needed. FIG. 47 illustrates how a portion of the incident light that is not reflected from the top surface of the passivation layer travels through the passivation layer at an angle different from that of the incident light, due to the index of refraction of the passivation material. If this error is not addressed, the measured thickness of the passivation layer is D' (the measured z position of the captured image focused on light reflected from the top surface of the passivation area minus the measured z position of the captured image focused on light reflected from the bottom surface of the passivation area); as FIG. 47 clearly illustrates, the measured thickness D' is not close to the actual thickness D of the passivation layer. However, the error introduced by the index of refraction of the passivation layer can be removed by applying a correction calculation to the measured thickness: the actual thickness (D) equals the measured thickness (D') multiplied by the index of refraction of the passivation layer (first correction calculation), or multiplied by the index of refraction plus an offset value (second, more general correction calculation, which accounts for the wavelength dependence of the index of refraction and for the spherical aberration of an objective lens when imaging through a transparent medium). As long as the proper calibration procedure is followed, the z position of the focal plane of the captured image focused on light reflected from the bottom surface of the passivation layer is used to calculate the actual thickness of the passivation layer.

Once the correction equation is applied to the measured thickness of the passivation layer, the true thickness of the passivation layer is obtained. Referring again to FIG. 37, the thickness of the metal layer can now be calculated. The thickness of the metal layer is equal to the sum of the thickness of the passivation layer and the difference between the z position of the top surface of the passivation layer and the z position of the top surface of the metal layer.
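The FIG. 37 case mirrors the plated-metal case, but with the sign flipped because the metal top is above the passivation top, so the step height is added rather than subtracted. A minimal sketch, with illustrative variable names and values:

```python
def metal_layer_thickness(z_pass_top, z_pass_bottom_apparent,
                          z_metal_top, n_pass, offset=0.0):
    """Sketch of the FIG. 37 / FIG. 47 calculation. The apparent
    passivation thickness D' is corrected for refraction as
    D = D' * n_pass + offset, and the metal layer thickness is the
    corrected passivation thickness plus the step height between the
    passivation top and the (higher) metal top.
    """
    d_actual = (z_pass_top - z_pass_bottom_apparent) * n_pass + offset
    step_height = z_metal_top - z_pass_top
    return d_actual + step_height
```

With an assumed passivation top at z = 5 µm, apparent bottom at z = 3 µm, metal top at z = 9 µm, and index 1.5, the corrected passivation thickness is 3 µm and the metal layer thickness 7 µm.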
FIG. 38 is a 3-D diagram of metal above a passivation layer. In this particular case, the metal structure shown is a redistribution line (RDL). FIG. 39 is a cross-sectional diagram of the metal above the passivation layer shown in FIG. 38. FIG. 39 clearly shows that the height of the top surface of the passivation layer is less than the height of the top surface of the metal layer. Using the various methods described above, the z position of the top surface of the passivation layer and the z position of the top surface of the metal layer can be determined. The distance between them (the "step height") is equal to the difference between the height of the top surface of the metal layer and the height of the top surface of the passivation layer. To determine the thickness of the metal layer, the thickness of the passivation layer is again needed. As discussed with respect to FIG. 11, a translucent material such as a passivation layer has an index of refraction different from that of open air, so the focal plane of the captured image focused on light reflected from the bottom surface of the passivation layer is not actually located at the bottom surface. FIG. 40 illustrates how a portion of the incident light that is not reflected from the top surface of the passivation layer travels through the passivation layer at an angle different from that of the incident light, due to the index of refraction of the passivation material. If this error is not addressed, the measured thickness of the passivation layer is D' (the measured z position of the captured image focused on light reflected from the top surface of the passivation area minus the measured z position of the captured image focused on light reflected from the bottom surface of the passivation area); as FIG. 40 clearly illustrates, the measured thickness D' is not close to the actual thickness D. The error is removed by applying a correction calculation: D equals D' multiplied by the index of refraction of the passivation layer (first correction calculation of FIG. 40), or D' multiplied by the index of refraction plus an offset value (second, more general correction calculation, which accounts for the wavelength dependence of the index of refraction and for the spherical aberration of an objective lens when imaging through a transparent medium). As long as the proper calibration procedure is followed, the z position of the focal plane of the captured image focused on light reflected from the bottom surface of the passivation layer can be used to calculate the actual thickness of the passivation layer.

Once the correction equation is applied to the measured thickness of the passivation layer, the true thickness of the passivation layer is obtained. Referring again to FIG. 39, the thickness of the metal layer can now be calculated. The thickness of the metal layer is equal to the sum of the thickness of the passivation layer and the difference between the z position of the top surface of the passivation layer and the z position of the top surface of the metal layer.

FIG. 41 is a diagram illustrating peak mode operation using images captured at various distances when a photoresist opening is within the field of view of the optical microscope. The captured images illustrated in FIG. 41 were obtained from a sample similar to the sample structure shown in FIG. 32. This structure is a plated-metal trench structure. The top view of the sample shows the area of the photoresist opening (a plated metal) in the x-y plane. The PR opening also has a specific depth in the z direction (above the plated metal). The top views in FIG. 41 below show the images captured at each distance. At distance 1, the optical microscope is not focused on the top surface of the photoresist area or on the top surface of the plated metal. At distance 2, the optical microscope is focused on the top surface of the plated metal, but not on the top surface of the photoresist area. This causes an increased characteristic value (intensity/contrast/fringe contrast) in the pixels that receive light reflected from the top surface of the plated metal compared with pixels that receive light reflected from other, out-of-focus surfaces (the top surface of the photoresist area). At distance 3, the optical microscope is not focused on the top surface of the photoresist area or on the top surface of the plated metal; therefore, at distance 3 the maximum characteristic value will be substantially lower than the maximum characteristic value measured at distance 2. At distance 4, the optical microscope is not focused on any surface of the sample; however, due to the difference between the index of refraction of air and the index of refraction of the photoresist area, an increase in the maximum characteristic value (intensity/contrast/fringe contrast) is measured. FIG. 11, FIG. 40, and the accompanying text describe this phenomenon in greater detail. At distance 6, the optical microscope is focused on the top surface of the photoresist area, but not on the top surface of the plated metal. This causes an increased characteristic value (intensity/contrast/fringe contrast) in the pixels that receive light reflected from the top surface of the photoresist area compared with pixels that receive light reflected from other, out-of-focus surfaces (the top surface of the plated metal). Once the maximum characteristic value from each captured image is determined, the results can be used to determine at which distances each surface of the wafer is located.

FIG. 42 is a chart illustrating the 3-D information that results from the peak mode operation illustrated in FIG. 41. As discussed with respect to FIG. 41, the maximum characteristic values of the images captured at distances 1, 3, and 5 are lower than the maximum characteristic values of the images captured at distances 2, 4, and 6. The curve of maximum characteristic values at the various z distances may contain noise due to environmental effects such as vibration. To minimize this noise, a standard smoothing method, such as Gaussian filtering with a specific kernel size, can be applied before further data analysis. One method of comparing the maximum characteristic values is performed by a peak-finding algorithm. In one example, a derivative method is used to locate zero crossings along the z axis to determine the distances at which each "peak" is present. The maximum characteristic values at each distance where a peak is found are then compared to determine the distance at which the largest characteristic value was measured. In the case shown in FIG. 42, a peak will be found at distance 2, which serves as an indication that a surface of the sample is located at distance 2.
Another method of comparing the maximum characteristic values is performed by comparing each maximum characteristic value with a preset threshold. The threshold can be calculated based on the wafer material, the distances, and the specifications of the optical microscope. Alternatively, the threshold can be determined by empirical testing before automated processing. In either case, the maximum characteristic value of each captured image is compared with the threshold. If the maximum characteristic value is greater than the threshold, it is determined that the maximum characteristic value indicates the presence of a surface of the wafer. If the maximum characteristic value is not greater than the threshold, it is determined that the maximum characteristic value does not indicate a surface of the wafer.

As an alternative to the peak mode method described above, the range mode method described in Figure 13 and the related text can be used to determine the z positions of the different surfaces of a sample.

Figure 43 is a diagram of a captured image focused on the top surface of a photoresist layer in a trench structure, including the outlines of a first analysis region A and a second analysis region B. As discussed above, the entire field of view of each captured image can be used to generate three-dimensional information. However, it is advantageous to have the option of generating three-dimensional information using only a selectable portion of the field of view (region A or region B). In one example, a user selects the regions using a mouse or a touchscreen device in communication with the computer that processes the captured images. Once the regions are selected, different thresholds can be applied to each region to more effectively pick out a specific surface peak such as the one shown in Figure 42. This case is illustrated in Figure 43. When it is desired to obtain three-dimensional information about the top surface of the metal plating, the selectable portion of the field of view (region A) is set to include multiple areas of metal plating. Because the characteristic values associated with a metal surface are generally greater than the characteristic values associated with photoresist, a high threshold can be applied to region A to filter out the characteristic values associated with the photoresist and thereby improve detection of the metal surface peak. Alternatively, when it is desired to obtain three-dimensional information about the top surface of a photoresist region, the selectable portion of the field of view (region B) is set to a small area located at the center of an image. The characteristic values associated with a photoresist surface are generally weak relative to the characteristic values associated with a metal surface, and the quality of the raw signal used to determine the characteristic values is best around the center of the field of view enclosed within region B. By setting an appropriate threshold for region B, a weak characteristic value peak of the photoresist surface can be detected more effectively. The user can set and adjust region A and region B, as well as the thresholds used within each region, via a graphical interface displaying a top view image of the sample, and can save them in a recipe for automated measurement.

Figure 44 is a three-dimensional view of a bump above a passivation structure. Figure 45 is a top view of the bump above the passivation structure shown in Figure 44, including the outlines of a first analysis region A and a second analysis region B. Region A can be set such that, during an automated measurement sequence, region A will always contain the apex of the metal bump. Region B does not enclose any part of the metal bump and encloses only a portion of the passivation layer. Analyzing only region A of all captured images provides pixel filtering such that most of the analyzed pixels contain information about the metal bump. Analyzing region B of all captured images provides pixel filtering such that all of the analyzed pixels contain information about the passivation layer. The application of user-selectable analysis regions provides pixel filtering based on position rather than on pixel value. For example, when the position of the top surface of the passivation layer is required, region B can be applied, and all effects caused by the metal bump are immediately eliminated from the analysis. In another example, when the position of the apex of the metal bump is required, region A can be applied, and all effects caused by the large passivation layer area are immediately eliminated from the analysis.

In some examples, it is also useful to fix the spatial relationship between region A and region B. When measuring a metal bump of a known size (such as the one illustrated in Figures 44 and 45), fixing the spatial relationship between region A and region B is useful for providing consistent measurements, because region A is always used to measure the three-dimensional information of the metal bump and region B is always used to measure the three-dimensional information of the passivation layer. Moreover, when region A and region B have a fixed spatial relationship, an adjustment of one region automatically causes an adjustment of the other region. This scenario is illustrated in Figure 46. Figure 46 is a top view illustrating the adjustment of analysis region A and analysis region B when the entire bump is not located within the original analysis region A. This can occur for a variety of reasons, such as imprecise placement of the sample by the handler or process variations during sample fabrication. Whatever the cause, region A needs to be adjusted so that it is properly centered on the apex of the metal bump. Region B also needs to be adjusted to ensure that it does not contain any part of the metal bump. When the spatial relationship between region A and region B is fixed, an adjustment of region A automatically causes a realignment of region B.

Figure 47 is a cross-sectional view of the bump above the passivation structure illustrated in Figure 44. When the thickness of the passivation layer is substantially greater than the distance between the predetermined steps of the optical microscope during image acquisition, the z position of the top surface of the passivation layer can be easily detected as discussed above. However, when the thickness of the passivation layer is not substantially greater than the distance between the predetermined steps of the optical microscope (that is, when the passivation layer is relatively thin), the z position of the top surface of the passivation layer may not be easily detected and measured. The difficulty arises because only a small percentage of the light is reflected from the top surface of the passivation layer, compared to the large percentage reflected from its bottom surface. In other words, the characteristic value peak associated with the top surface of the passivation layer is very weak compared to the characteristic value peak associated with the bottom surface of the passivation layer. When the captured image at the predetermined step focused on the high-intensity reflection from the bottom surface of the passivation layer is fewer than a few predetermined steps away from the captured image at the predetermined step focused on the low-intensity reflection from the top surface of the passivation layer, the reflection received from the bottom surface of the passivation layer cannot be distinguished from the reflection received from the top surface of the passivation layer. This problem can be resolved by the operation of several different methods.
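The analysis-region scheme above (a high-threshold region A over the metal, a lower-threshold region B over the photoresist or passivation) amounts to masking the pixel grid before each image's maximum is taken. A minimal sketch, assuming rectangular regions and made-up threshold values:

```python
import numpy as np

def region_max(image, region, threshold):
    """Maximum characteristic value inside a rectangular analysis
    region, suppressed to 0.0 when it does not exceed that region's
    threshold.

    image  : 2-D array of per-pixel characteristic values
    region : (row0, row1, col0, col1) slice bounds (assumed rectangular
             here; the patent's regions may be any shape)
    """
    r0, r1, c0, c1 = region
    peak = float(image[r0:r1, c0:c1].max())
    return peak if peak > threshold else 0.0

# Region A (over the metal) gets a high threshold; region B (over the
# passivation) a lower one, since dielectric reflections are weak.
img = np.zeros((8, 8))
img[1, 1] = 0.9   # bright metal pixel inside region A
img[6, 6] = 0.2   # weak passivation pixel inside region B
a = region_max(img, (0, 4, 0, 4), threshold=0.5)  # passes: 0.9 > 0.5
b = region_max(img, (4, 8, 4, 8), threshold=0.1)  # passes: 0.2 > 0.1
```

Running `region_max` per captured image, per region, yields two independent maximum-value curves along z, one dominated by the bump and one by the passivation layer.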
In a first method, the total number of predetermined steps across the scan can be increased so as to provide additional resolution across the entire scan. For example, the number of predetermined steps across the same scan distance can be doubled, which doubles the z resolution of the scan. This method also doubles the number of images captured during a single scan. The resolution of the scan can be increased until the characteristic peak measured from the top-surface reflection can be distinguished from the characteristic peak measured from the bottom-surface reflection. Figure 49 illustrates a scenario in which sufficient resolution is provided in the scan to distinguish the reflections from the top and bottom surfaces of the passivation layer.

In a second method, the total number of predetermined steps is also increased; however, only a portion of the steps are used to capture images and the remaining steps are skipped.

In a third method, the distance between predetermined steps can be varied such that the distance between steps is smaller in the vicinity of the passivation layer and larger away from the passivation layer. This method provides greater resolution near the passivation layer and less resolution elsewhere. This method does not require adding extra predetermined steps to the scan; rather, it redistributes the predetermined steps in a nonlinear fashion, sacrificing resolution where high resolution is not needed in order to provide additional resolution where it is needed.

For additional description of how to improve scan resolution, see U.S. Patent Application No. 13/333,938, entitled "3D Microscope Including Insertable Components To Provide Multiple Imaging and Measurement Capabilities," filed by James Jianguo Xu et al. on December 21, 2011 (the subject matter of which is incorporated herein by reference).

Using any of the methods discussed above, the z position of the top surface of the passivation layer can be determined.

The height of the apex of the metal bump relative to the top surface of the passivation layer (the "bump height above the passivation layer") is also a measurement of interest. The bump height above the passivation layer is equal to the z position of the apex of the bump minus the z position of the top surface of the passivation layer. The determination of the z position of the top surface of the passivation layer is described above. The determination of the z position of the apex of the bump can be performed using different methods.

In a first method, the z position of the apex of the bump is determined by determining, for each x-y pixel position, the z position of the peak characteristic value across all captured images. In other words, for each x-y pixel position, the measured characteristic values are compared across all captured images at every z position, and the z position containing the maximum characteristic value is stored in an array. The result of performing this procedure across all x-y pixel positions is an array of all x-y pixel positions and the associated peak z position for each x-y pixel position. The largest z position in the array is taken as the z position of the apex of the bump. For additional description of how to generate three-dimensional information, see U.S. Patent Application No. 12/699,824 and U.S. Patent No. 8,174,762, entitled "3-D Optical Microscope," filed by James Jianguo Xu et al. on February 3, 2010 (the subject matter of which is incorporated herein by reference).

In a second method, the z position of the apex of the bump is determined by generating a fitted three-dimensional model of the surface of the bump and then using the three-dimensional model to calculate the peak of the bump's surface. In one example, this can be done by generating the same array described above with respect to the first method; once the array is complete, however, the array is used to generate a three-dimensional model. The three-dimensional model can be generated using a second-order polynomial function fitted to the data. Once the three-dimensional model has been generated, the derivative of the surface slope of the bump is determined. The apex of the bump is calculated to be located where the derivative of the surface slope of the bump equals zero.

Once the z position of the apex of the bump has been determined, the bump height above the passivation layer can be calculated by subtracting the z position of the top surface of the passivation layer from the z position of the apex of the bump.

Figure 48 is a diagram illustrating peak mode operation using images captured at various distances when only a passivation layer is within region B of the field of view of the optical microscope. By analyzing only the pixels within region B (shown in Figure 45), all pixel information relating to the metal bump is excluded. Therefore, the three-dimensional information generated by analyzing the pixels within region B is affected only by the passivation layer present in region B. The captured images illustrated in Figure 48 were obtained from a sample similar to the sample structure shown in Figure 44. This structure is a metal bump above a passivation structure. The top view of the sample shows the area of the passivation layer in the x-y plane. With only the pixels within region B selected, the metal bump is not visible in the top view. The top views in Figure 48 below show the images captured at each distance. At distance 1, the optical microscope is focused on neither the top surface nor the bottom surface of the passivation layer. At distance 2, the optical microscope is not focused on any surface of the sample; however, owing to the difference between the refractive index of air and the refractive index of the passivation layer, an increase in the maximum characteristic value (intensity/contrast/fringe contrast) is measured. Figure 11, Figure 40, and the accompanying text describe this phenomenon in more detail. At distance 3, the optical microscope is focused on neither the top surface nor the bottom surface of the passivation layer. Therefore, at distance 3 the maximum characteristic value will be substantially lower than the characteristic value measured at distance 2. At distance 4, the optical microscope is focused on the top surface of the passivation layer, which results in an increased characteristic value (intensity/contrast/fringe contrast) in the pixels receiving light reflected from the top surface of the passivation layer compared to the pixels receiving light reflected from other, out-of-focus surfaces. At distances 5, 6, and 7, the optical microscope is focused on neither the top surface nor the bottom surface of the passivation layer. Therefore, at distances 5, 6, and 7, the maximum characteristic values will be substantially lower than the characteristic values measured at distances 2 and 4. Once the maximum characteristic value from each captured image has been determined, the results can be used to determine the distances at which each surface of the sample is located.
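The second apex method above (fit a second-order polynomial to the per-pixel peak-z data, then solve for the zero of its derivative) can be sketched as follows. The quadratic model without a cross term, the function name, and the synthetic test surface are assumptions for illustration:

```python
import numpy as np

def fit_apex(xs, ys, zs):
    """Fit z = c0 + c1*x + c2*y + c3*x^2 + c4*y^2 to the per-pixel
    peak-z samples near the bump top (least squares), then solve
    dz/dx = dz/dy = 0 for the apex location and height."""
    A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, ys**2])
    c = np.linalg.lstsq(A, zs, rcond=None)[0]
    x0 = -c[1] / (2.0 * c[3])        # zero of dz/dx = c1 + 2*c3*x
    y0 = -c[2] / (2.0 * c[4])        # zero of dz/dy = c2 + 2*c4*y
    z0 = c[0] + c[1]*x0 + c[2]*y0 + c[3]*x0**2 + c[4]*y0**2
    return x0, y0, z0

# Synthetic peak-z array: a paraboloid with its apex at (1, -2, 10).
xs, ys = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
xs = xs.ravel().astype(float)
ys = ys.ravel().astype(float)
zs = 10.0 - (xs - 1.0) ** 2 - (ys + 2.0) ** 2
x0, y0, z0 = fit_apex(xs, ys, zs)
```

Fitting a smooth model in this way makes the apex estimate robust to the z quantization of the predetermined steps, which is the motivation the text gives for the second method.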
Figure 49 is a chart illustrating three-dimensional information derived from the peak mode operation of Figure 48. Owing to the pixel filtering provided by analyzing only the pixels within region B of all captured images, the peak mode operation provides an indication of a surface of the passivation layer at only two z positions (2 and 4). The top surface of the passivation layer is located at the higher of the two indicated z positions. The lower of the two indicated z positions is an erroneous "artifact surface," at which light reflected from the bottom surface of the passivation layer is measured owing to the refractive index of the passivation layer. Measuring the z position of the top surface of the passivation layer using only the pixels located within region B simplifies the peak mode operation and reduces the possibility of erroneous measurements attributable to light reflections from the metal bumps located on the same sample.

As an alternative to the peak mode method described above, the range mode method described in Figure 13 and the related text can be used to determine the z positions of the different surfaces of a sample.

Although certain specific embodiments are described above for instructional purposes, the teachings of this patent document have general applicability and are not limited to the specific embodiments described above. Accordingly, various modifications, adaptations, and combinations of the various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims.

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority under 35 U.S.C. § 120 from nonprovisional U.S. Patent Application Serial No. 15/338,838, the entire subject matter of which is incorporated herein by reference. Application 15/338,838 is in turn a continuation-in-part of, and claims priority under 35 U.S.C. § 120 from, nonprovisional U.S. Patent Application Serial No. 15/233,812, entitled "AUTOMATED 3-D MEASUREMENT," filed on August 10, 2016, the entire subject matter of which is also incorporated herein by reference.

Reference will now be made in detail to some embodiments of the invention, examples of which are illustrated in the accompanying drawings. In the following description and claims, relational terms such as "top," "below," "on," "under," "bottom," "left," and "right" may be used to describe relative orientations between different parts of the described structure, and it is to be understood that the overall structure described can be oriented in virtually any way in three dimensions.

Figure 1 is a diagram of a semi-automated three-dimensional metrology system 1. The semi-automated three-dimensional metrology system 1 comprises an optical microscope (not shown), an on/off button 5, a computer 4, and a stage 2. In operation, a wafer 3 is placed on the stage 2.
The function of the semi-automated 3D metrology system 1 is to capture multiple images of an object and automatically generate three-dimensional information describing the various surfaces of the object. This is also referred to as "scanning" an object. Wafer 3 is an example of an object analyzed by the semi-automated three-dimensional metrology system 1. An object may also be referred to as a sample. In operation, the wafer 3 is placed on the stage 2 and the semi-automated three-dimensional metrology system 1 begins the procedure of automatically generating three-dimensional information describing the surfaces of the wafer 3. In one example, the semi-automated three-dimensional metrology system 1 is started by pressing a designated button on a keyboard (not shown) connected to the computer 4. In another example, the semi-automated three-dimensional metrology system 1 is started by sending a start command to the computer 4 across a network (not shown). The semi-automated 3D metrology system 1 can also be configured to be paired with an automated wafer handling system (not shown) that removes each wafer after it has been scanned and inserts a new wafer for scanning.

A fully automated three-dimensional metrology system (not shown) is similar to the semi-automated three-dimensional metrology system of Figure 1; however, a fully automated 3D metrology system also includes a robotic handler that can automatically pick up a wafer and place it on the stage without human intervention. In a similar fashion, a fully automated 3D metrology system can also use the robotic handler to automatically pick up a wafer from the stage and remove it from the stage. A fully automated 3D metrology system may be desirable during the production of many wafers, because it avoids possible contamination by a human operator and improves time efficiency and total cost.
Alternatively, when only a small number of wafers need to be measured, the semi-automated three-dimensional metrology system 1 may be desirable during research and development activities.

Figure 2 is a diagram of a three-dimensional imaging microscope 10 including multiple objective lenses 11 and an adjustable stage 12. The three-dimensional imaging microscope can be a confocal microscope, a structured illumination microscope, an interferometer microscope, or any other type of microscope well known in the art. A confocal microscope measures intensity. A structured illumination microscope measures the contrast of a projected structure. An interferometer microscope measures interference fringe contrast.

In operation, a wafer is placed on the adjustable stage 12 and an objective lens is selected. The three-dimensional imaging microscope 10 captures multiple images of the wafer while adjusting the height of the stage on which the wafer rests. This results in multiple images of the wafer captured while the wafer is positioned at various distances from the selected lens. In an alternative example, the wafer is placed on a fixed stage and the position of the objective lens is adjusted, thereby changing the distance between the objective lens and the sample without moving the stage. In another example, the stage can be adjusted in the x-y directions and the objective lens can be adjusted in the z direction.

The captured images can be stored locally in a memory included in the three-dimensional imaging microscope 10. Alternatively, the captured images can be stored in a data storage device included in a computer system, in which case the three-dimensional microscope 10 communicates the captured images to the computer system across a data communication link. Examples of a data communication link include a universal serial bus (USB) interface, an Ethernet connection, a FireWire bus interface, and a wireless network (such as WiFi).
Figure 3 is a diagram of a three-dimensional metrology system 20 including a three-dimensional microscope 21, a sample handler 22, a computer 23, a display 27 (optional), and an input device 28. The three-dimensional metrology system 20 is an example of a system included in the semi-automated three-dimensional metrology system 1. The computer 23 includes a processor 24, a storage device 25, and a network device 26 (optional). The computer outputs information to a user via the display 27. If the display 27 is a touchscreen device, the display can also be used as an input device. The input device 28 can include a keyboard and a mouse. The computer 23 controls the operation of the three-dimensional microscope 21 and the sample handler/stage 22. When a start scan command is received by the computer 23, the computer sends one or more commands to configure the 3D microscope for image capture ("microscope control data"). For example, the correct objective lens needs to be selected, the resolution of the images to be captured needs to be selected, and the mode for storing captured images needs to be selected. When a start scan command is received by the computer 23, the computer also sends one or more commands to configure the sample handler/stage 22 ("handler control data"). For example, the correct height (z-direction) adjustment needs to be selected and the correct horizontal (x-y direction) alignment needs to be selected.

During operation, the computer 23 causes the sample handler/stage 22 to be adjusted to the appropriate position. Once the sample handler/stage 22 is properly positioned, the computer 23 causes the three-dimensional microscope to focus on a focal plane and capture at least one image. The computer 23 then causes the stage to move in the z direction, which changes the distance between the sample and the objective lens of the optical microscope. Once the stage has moved to the new position, the computer 23 causes the optical microscope to capture a second image.
This procedure continues until an image has been captured at each desired distance between the objective lens of the optical microscope and the sample. The images captured at each distance are communicated from the three-dimensional microscope 21 to the computer 23 ("image data"). The captured images are stored in the storage device 25 included in the computer 23. In one example, the computer 23 analyzes the captured images and outputs three-dimensional information to the display 27. In another example, the computer 23 analyzes the captured images and outputs three-dimensional information to a remote device via the network 29. In yet another example, the computer 23 does not analyze the captured images, but instead sends the captured images to another device for processing via the network 29. The three-dimensional information can include a three-dimensional image rendered based on the captured images. The three-dimensional information may also contain no images, but rather information based on various characteristics of each captured image.

Figure 4 is a diagram illustrating a method of capturing images while varying the distance between the objective lens of the optical microscope and the sample. In the embodiment illustrated in Figure 4, each image contains 1000 by 1000 pixels. In other embodiments, the images can contain various pixel configurations. In one example, the interval between successive distances is fixed at a predetermined amount. In another example, the interval between successive distances may not be fixed. Such unfixed spacing between images in the z direction can be advantageous if additional z-direction resolution is required in only one part of the z-direction scan of the sample. The z-direction resolution is based on the number of images captured per unit length in the z direction, so capturing additional images per unit length in the z direction increases the measured z-direction resolution.
Conversely, capturing fewer images per unit length in the z direction reduces the measured z-direction resolution.

As discussed above, the optical microscope is first adjusted to focus on a focal plane located at distance 1 from the objective lens of the optical microscope. An image is then captured with the optical microscope and stored in a storage device (i.e., in "memory"). The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 2, and an image is captured and stored. This is repeated at distance 3, distance 4, and distance 5, with an image captured and stored at each step. The procedure continues for N different distances between the objective lens of the optical microscope and the sample. Information indicating which image is associated with each distance is also stored in the storage device for processing.

In an alternative embodiment, the distance between the objective lens of the optical microscope and the sample is fixed. Instead, the optical microscope includes a zoom lens that allows the optical microscope to change its focal plane. In this fashion, while the stage and the sample supported by the stage are fixed, the focal plane of the optical microscope is varied across N different focal planes.
An image is captured for each focal plane and stored in a storage device. The captured images across all focal planes are then processed to determine the three-dimensional information of the sample. This embodiment requires a zoom lens that provides sufficient resolution across all focal planes and introduces minimal image distortion. In addition, calibration between each zoom position and the resulting focal length of the zoom lens is required.

Figure 5 is a chart illustrating, for each x-y coordinate, the distance between the objective lens of the optical microscope and the sample at which the maximum characteristic value occurs. Once images have been captured and stored for each distance, a characteristic of each pixel of each image can be analyzed. For example, the light intensity of each pixel of each image can be analyzed. In another example, the contrast of each pixel of each image can be analyzed. In yet another example, the fringe contrast of each pixel of each image can be analyzed. The contrast of a pixel can be determined by comparing the intensity of the pixel with the intensities of a predetermined number of surrounding pixels. For additional description of how to generate contrast information, see U.S. Patent Application Serial No. 12/699,824, entitled "3-D Optical Microscope," filed on February 3, 2010 (the subject matter of which is incorporated herein by reference).

Figure 6 is a three-dimensional diagram rendered using the maximum characteristic values of the x-y coordinates shown in Figure 5. All pixels having an x position between 1 and 19 have a maximum characteristic value at distance 7 in the z direction. All pixels having an x position between 20 and 29 have a maximum characteristic value at distance 2 in the z direction. All pixels having an x position between 30 and 49 have a maximum characteristic value at distance 7 in the z direction.
All pixels having an x position between 50 and 59 have a maximum characteristic value at distance 2 in the z direction. All pixels having an x position between 60 and 79 have a maximum characteristic value at distance 7 in the z direction. In this fashion, the three-dimensional image depicted in Figure 6 can be generated using the maximum characteristic value for each x-y pixel of all captured images. In addition, given knowledge of distance 2 and distance 7, the depth of the well depicted in Figure 6 can be calculated as the difference between distance 2 and distance 7.

Peak Mode Operation

Figure 7 is a diagram illustrating peak mode operation using images captured at various distances. As discussed above with respect to Figure 4, the optical microscope is first adjusted to focus on a plane positioned at distance 1 from the objective lens of the optical microscope, and an image is captured and stored in a storage device (i.e., in "memory"). The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 2, and an image is captured and stored. This is repeated at distance 3, distance 4, and distance 5, with an image captured and stored at each step.
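The per-pixel construction behind Figures 5 and 6 (for each x-y pixel, take the z distance at which its characteristic value peaks, then difference surface levels to obtain the well depth) can be sketched as below. The function names are assumptions:

```python
import numpy as np

def height_map(stack, z_positions):
    """For each x-y pixel, the z distance at which its characteristic
    value is maximal across the image stack (the Figure 5 data).

    stack       : array of shape (n_z, rows, cols) of per-pixel
                  characteristic values, one slice per distance
    z_positions : the z distance associated with each slice
    """
    return np.asarray(z_positions)[np.argmax(stack, axis=0)]

def well_depth(hmap):
    """Depth of the well in Figure 6: difference between the highest
    and lowest surface levels in the height map."""
    return hmap.max() - hmap.min()

# Two images of a 1x3 pixel region: outer pixels peak at distance 2,
# the center pixel peaks at distance 7 (values are illustrative).
stack = np.array([[[5, 1, 5]],
                  [[1, 5, 1]]])
hmap = height_map(stack, [2, 7])   # [[2, 7, 2]]
depth = well_depth(hmap)           # 5
```

This is the per-pixel counterpart of peak mode: peak mode keeps one maximum per image, while the height map keeps one maximizing z per pixel.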
The procedure continues for N different distances between the objective lens of the optical microscope and the stage. Information indicating which image is associated with each distance is also stored in the storage device for processing.

In peak mode operation, the maximum characteristic value across all x-y positions in a single captured image at one z distance is determined, rather than the maximum characteristic value for each x-y position across all captured images at the various z distances. In other words, for each captured image, the maximum characteristic value across all pixels included in the captured image is selected. As shown in Figure 7, the pixel location having the maximum characteristic value will likely vary between different captured images. The characteristic can be intensity, contrast, or fringe contrast.

Figure 8 is a diagram illustrating peak mode operation using images captured at various distances when a photoresist (PR) opening is in the field of view of the optical microscope. The top view of the object shows the cross-sectional area of the PR opening in the x-y plane. The PR opening also has a particular depth in the z direction. The top views in Figure 8 below show the images captured at each distance. At distance 1, the optical microscope is focused on neither the top surface of the wafer nor the bottom surface of the PR opening. At distance 2, the optical microscope is focused on the bottom surface of the PR opening but not on the top surface of the wafer. This results in an increased characteristic value (intensity/contrast/fringe contrast) in the pixels receiving light reflected from the bottom surface of the PR opening compared to the pixels receiving light reflected from the other, out-of-focus surface (the top surface of the wafer). At distance 3, the optical microscope is focused on neither the top surface of the wafer nor the bottom surface of the PR opening.
Therefore, at distance 3 the maximum characteristic value will be substantially lower than the characteristic value measured at distance 2. At distance 4, the optical microscope is not focused on any surface of the sample; however, owing to the difference between the refractive index of air and the refractive index of the photoresist layer, an increase in the measured maximum characteristic value (intensity/contrast/fringe contrast) occurs. Figure 11 and the accompanying text describe this phenomenon in more detail. At distance 6, the optical microscope is focused on the top surface of the wafer but not on the bottom surface of the PR opening. This results in an increased characteristic value (intensity/contrast/fringe contrast) in the pixels receiving light reflected from the top surface of the wafer compared to the pixels receiving light reflected from the other, out-of-focus surface (the bottom surface of the PR opening). Once the maximum characteristic value from each captured image has been determined, the results can be used to determine the distances at which each surface of the wafer is located.

Figure 9 is a diagram illustrating three-dimensional information derived from peak mode operation. As discussed with respect to Figure 8, the maximum characteristic values of the images captured at distances 1, 3, and 5 are lower than the maximum characteristic values of the images captured at distances 2, 4, and 6. The curve of maximum characteristic values across the various z distances may contain noise attributable to environmental effects such as vibration. To minimize this noise, a standard smoothing method, such as Gaussian filtering with a certain kernel size, can be applied before further data analysis.

One method of comparing the maximum characteristic values is performed by a peak-finding algorithm.
In one example, a derivative method is used to locate zero crossings along the z axis to determine the distances at which each "peak" is present. The maximum characteristic values at each distance where a peak is found are then compared to determine the distance at which the largest maximum characteristic value was measured. In the case of Figure 9, a peak will be found at distance 2, which serves as an indication that a surface of the wafer is positioned at distance 2.

Another method of comparing the maximum characteristic values is performed by comparing each maximum characteristic value with a predetermined threshold. The threshold can be calculated based on the wafer material, the distances, and the specifications of the optical microscope. Alternatively, the threshold can be determined by empirical testing prior to automated processing. In either case, the maximum characteristic value of each captured image is compared with the threshold. If the maximum characteristic value is greater than the threshold, it is determined that the maximum characteristic value indicates the presence of a surface of the wafer. If the maximum characteristic value is not greater than the threshold, it is determined that the maximum characteristic value does not indicate a surface of the wafer.

Summation Mode Operation

Figure 10 is a diagram illustrating summation mode operation using images captured at various distances. As discussed above with respect to Figure 4, the optical microscope is first adjusted to focus on a plane positioned at distance 1 from the objective lens of the optical microscope, and an image is captured and stored in a storage device (i.e., in "memory"). The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 2, and an image is captured and stored.
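The derivative-based peak search described above, preceded by the Gaussian smoothing recommended for the Figure 9 data, might be sketched as follows. The kernel radius of three sigma and the function name are assumed choices, not specified in the text:

```python
import numpy as np

def find_peaks(max_vals, sigma=1.0):
    """Peak mode: Gaussian-smooth the per-distance maximum
    characteristic values, then locate zero crossings of the first
    derivative (positive-to-negative slope) to find the distances at
    which surfaces may lie.  Returns indices into max_vals."""
    # Build a simple normalized Gaussian kernel (radius ~ 3*sigma).
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    smooth = np.convolve(max_vals, k / k.sum(), mode="same")
    d = np.diff(smooth)
    # A peak sits where the derivative crosses from positive to
    # non-positive between consecutive samples.
    return [i + 1 for i in range(len(d) - 1) if d[i] > 0 and d[i + 1] <= 0]
```

For a curve with surface peaks at two distances, e.g. `[0, 1, 5, 1, 0, 1, 4, 1, 0]`, the zero crossings of the smoothed derivative fall at indices 2 and 6; the larger of the two maxima then identifies the dominant surface, as in the Figure 9 discussion.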
This is repeated at distance 3, distance 4, and distance 5, with an image captured and stored at each step. The procedure continues for N different distances between the objective lens of the optical microscope and the sample. Information indicating which image is associated with each distance is also stored in the storage device for processing.

In summation mode operation, the characteristic values of all x-y positions of each captured image are added together, rather than determining the maximum characteristic value across all x-y positions in a single captured image at one z distance. In other words, for each captured image, the characteristic values of all the pixels included in the captured image are summed. The characteristic can be intensity, contrast, or fringe contrast. A summed characteristic value that is substantially greater than the average summed characteristic value at adjacent z distances indicates that a surface of the wafer is present at that distance. However, this method can also produce a false positive, as described with respect to Figure 11.

Figure 11 is a diagram illustrating erroneous surface detection when operating in summation mode. The wafer shown in Figure 11 includes a silicon substrate 30 and a photoresist layer 31 deposited on top of the silicon substrate 30. The top surface of the silicon substrate 30 is positioned at distance 2.
The top surface of the photoresist layer 31 is positioned at distance 6. An image captured at distance 2 results in a sum of characteristic values substantially greater than that of images captured at distances where no surface of the wafer is located. An image captured at distance 6 likewise results in a sum of characteristic values substantially greater than that of images captured at distances where no surface of the wafer is located. Thus far, summation mode operation appears to be a valid indicator of the presence of a surface of the wafer. However, an image captured at distance 4 also results in a sum of characteristic values substantially greater than that of images captured at distances where no surface of the wafer is located. This is a problem because, as clearly shown in Figure 11, no surface of the wafer is positioned at distance 4. In fact, the increase in the sum of the characteristic values at distance 4 is an artifact of the surfaces located at distances 2 and 6. The main portion of the light irradiating the photoresist layer is not reflected; rather, it travels into the photoresist layer. The angle at which this light travels changes owing to the difference in refractive index between air and photoresist. The new angle is closer to the normal than the angle of the light irradiating the top surface of the photoresist. The light travels to the top surface of the silicon substrate below the photoresist layer, where it is reflected by the highly reflective silicon substrate layer. When the reflected light leaves the photoresist layer and enters the air, the angle of the reflected light changes again owing to the difference in refractive index between the air and the photoresist layer.
This first redirection of the irradiating light, the reflection, and the second redirection cause the increase in the characteristic value (intensity/contrast/fringe contrast) observed by the optical microscope at distance 4. This example shows that whenever a transparent material is included, summation mode operation will detect surfaces that are not present on the sample.

Figure 12 is a diagram illustrating three-dimensional information derived from summation mode operation. This chart shows the result of the phenomenon depicted in Figure 11. The large summed characteristic value at distance 4 incorrectly indicates that a surface is present at distance 4. A method that does not produce false positive indications of the presence of a wafer surface is required.

Range Mode Operation

Figure 13 is a diagram illustrating range mode operation using images captured at various distances. As discussed above with respect to Figure 4, the optical microscope is first adjusted to focus on a plane positioned at distance 1 from the objective lens of the optical microscope, and an image is captured and stored in a storage device (i.e., in "memory"). The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 2, and an image is captured and stored. This is repeated at distance 3 and distance 4, with an image captured and stored at each step.
The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 5, and an image is captured and stored. The procedure continues for N different distances between the objective lens of the optical microscope and the sample. Information indicating which image is associated with each distance is also stored in the storage device for processing.

In range mode operation, a count of the pixels having a characteristic value within a specific range is determined for a single captured image at one z distance, rather than the sum of all characteristic values across all x-y positions in that single captured image. In other words, for each captured image, a count of the pixels having a characteristic value within a specific range is determined. The characteristic can be intensity, contrast, or fringe contrast. A pixel count at a particular z distance that is substantially greater than the average pixel count at adjacent z distances indicates that a surface of the wafer is present at that distance. This method reduces the false positives described with respect to Figure 11.

Figure 14 is a diagram illustrating three-dimensional information derived from range mode operation. Given knowledge of the different material types present on the wafer and of the optical microscope configuration, an expected range of characteristic values can be determined for each material type. For example, the photoresist layer will reflect a relatively small portion of the light irradiating its top surface (i.e., 4%). The silicon layer will reflect a substantially larger portion of the light irradiating its top surface (i.e., 37%).
The redirected reflection from the top surface of the photoresist layer observed at distance 4 (e.g., 21%) will be substantially larger than the reflection observed at distance 6; however, the redirected reflection from the top surface of the silicon substrate observed at distance 4 (e.g., 21%) will be substantially less than the reflection observed at distance 2. Therefore, when searching for the top surface of the photoresist layer, a first range centered on the expected characteristic value of the photoresist can be used to filter out pixels having characteristic values outside the first range, thereby filtering out pixels whose characteristic values do not originate from reflection off the top surface of the photoresist layer. The pixel counts across all distances generated by applying the first range of characteristic values are depicted in Figure 15. As shown in Figure 15, some, but not necessarily all, pixels from other distances (surfaces) are filtered out by applying the first range. This occurs when characteristic values measured at multiple distances fall within the first range. However, applying the first range before counting the pixels still serves to make the pixel count at the desired surface more prominent than the pixel counts at other distances. This is illustrated in Figure 15: after applying the first range, the pixel count at distance 6 is greater than the pixel counts at distances 2 and 4, whereas before applying the first range, the pixel count at distance 6 is less than the pixel counts at distances 2 and 4 (as shown in Figure 14). In a similar fashion, when searching for the top surface of the silicon substrate layer, a second range centered on the expected characteristic value of the silicon substrate layer can be used to filter out pixels having characteristic values outside the second range, thereby filtering out pixels whose characteristic values do not originate from reflection off the top surface of the silicon substrate layer.
The pixel counts across all distances generated by applying the second range of characteristic values are depicted in Figure 16. This application of a range reduces the erroneous indication that a wafer surface is positioned at distance 4 by exploiting knowledge of the expected characteristic values of all materials present on the scanned wafer. As discussed with respect to Figure 15, some, but not necessarily all, pixels from other distances (surfaces) are filtered out by applying a range. However, when the characteristic values measured at the other distances do not fall within the applied range, applying the range eliminates all pixel counts from the other distances (surfaces). Figure 16 shows this case. In Figure 16, the second range is applied before the pixel count at each distance is generated. The result of applying the second range is that only pixels at distance 2 are counted. This is a very clear indication that the surface of the silicon substrate is positioned at distance 2. It should be noted that, to reduce the effects of potential noise such as environmental vibrations, a standard smoothing operation, such as Gaussian filtering, can be applied to the total pixel counts along the z-distance before any peak-seeking operation is performed. Figure 17 is a flowchart 200 of the steps involved in peak mode operation. In step 201, the distance between the sample and the objective lens of an optical microscope is varied in predetermined steps. In step 202, an image is captured at each predetermined step. In step 203, a characteristic of each pixel in each captured image is determined. In step 204, for each captured image, the maximum characteristic value across all pixels in the captured image is determined. In step 205, the maximum characteristic values of the captured images are compared to determine whether a surface of the sample is present at each predetermined step.
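The smoothing-then-peak-seeking step noted above can be sketched as follows. This is a pure-Python illustration under stated assumptions: the Gaussian kernel width (`sigma`) and the simple strict-local-maximum peak test are assumed parameters and criteria, not values specified in the patent:

```python
import math

def gaussian_smooth(counts, sigma=1.0):
    """Smooth per-z-distance pixel counts with a normalized Gaussian kernel
    to suppress noise (e.g., environmental vibration) before peak seeking."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    smoothed = []
    for i in range(len(counts)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(counts) - 1)  # clamp at edges
            acc += k * counts[idx]
        smoothed.append(acc)
    return smoothed

def find_peaks(values):
    """Indices strictly greater than both neighbors (candidate surfaces)."""
    return [i for i in range(1, len(values) - 1)
            if values[i] > values[i - 1] and values[i] > values[i + 1]]

counts = [2, 3, 40, 3, 2, 1, 30, 2, 1]  # pixel counts along z
print(find_peaks(gaussian_smooth(counts, sigma=0.8)))  # -> [2, 6]
```

Each returned index corresponds to a z-distance at which a sample surface is indicated.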
Figure 18 is a flowchart 300 of the steps involved in range mode operation. In step 301, the distance between the sample and the objective lens of an optical microscope is varied in predetermined steps. In step 302, an image is captured at each predetermined step. In step 303, a characteristic of each pixel in each captured image is determined. In step 304, for each captured image, a count of the pixels having a characteristic value within a first range is determined. In step 305, whether a surface of the sample is present at each predetermined step is determined based on the pixel count of each captured image. Figure 19 is a diagram of a captured image containing a single feature. One example of a feature is a circular opening in a photoresist layer. Another example of a feature is a trench-shaped opening in the photoresist layer, such as an unplated redistribution layer (RDL) structure. During wafer processing, it is advantageous to measure various features of the photoresist openings in a wafer layer. Measurement of a photoresist opening allows detection of defects in the structure before metal is plated into the openings. For example, if a photoresist opening does not have the correct dimensions, the plated RDL width will be wrong. Detecting this type of defect prevents further fabrication of a defective wafer, which saves material and processing costs. Figure 19 illustrates that when the captured image is focused on the top surface of the photoresist layer, the measured intensity of light reflected from the top surface of the photoresist layer is greater than the measured intensity of light reflected from the opening in the photoresist layer. As discussed in more detail below, the information associated with each pixel in the captured image can be used to generate an intensity value for each pixel in the captured image.
The intensity value of each pixel can then be compared with an intensity threshold to determine whether each pixel is associated with a first region of the captured image (such as the top surface of the photoresist layer) or with a second region of the captured image (such as a photoresist open area). This can be done by the following steps: (i) applying an intensity threshold to the measured intensity of each pixel in the captured image; (ii) classifying all pixels having an intensity value above the intensity threshold as associated with the first region of the captured image; (iii) classifying all pixels having an intensity value below the intensity threshold as associated with the second region of the captured image; and (iv) defining a feature as a group of pixels within the same region that are adjacent to other pixels associated with that region. The captured image shown in Figure 19 can be a color image. Each pixel of the color image contains red, green, and blue (RGB) channel values. These color values can be combined to produce a single intensity value for each pixel. Various methods for converting the RGB values of each pixel into a single intensity value are described below. A first method uses three weighting values to convert the three color channels into one intensity value. In other words, each color channel has its own weighting or conversion factor. A user can modify the three conversion factors defined in a system recipe, or modify the three conversion factors based on the measurement requirements of the sample. A second method subtracts a preset color channel value from the corresponding color channel of each pixel, for each color channel. The result is then converted to an intensity value using the conversion factors discussed in the first method. A third method uses a "color difference" scheme to convert color to intensity values.
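The first two RGB-to-intensity conversion methods can be sketched as follows. The weighting factors shown are illustrative defaults (the common luma coefficients), not values from the patent, which leaves the conversion factors recipe-adjustable:

```python
def rgb_to_intensity(pixel, weights=(0.299, 0.587, 0.114)):
    """First method: per-channel conversion factors combine R, G, and B
    into a single intensity value. The weights are recipe-adjustable."""
    r, g, b = pixel
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

def rgb_to_intensity_offset(pixel, preset=(0, 0, 0),
                            weights=(0.299, 0.587, 0.114)):
    """Second method: subtract a preset value from each color channel first,
    then apply the same per-channel conversion factors."""
    shifted = tuple(c - p for c, p in zip(pixel, preset))
    return rgb_to_intensity(shifted, weights)

print(rgb_to_intensity((255, 255, 255)))                               # ~255.0
print(rgb_to_intensity_offset((100, 100, 100), preset=(10, 10, 10)))   # ~90.0
```

Because the example weights sum to one, a gray pixel maps to (approximately) its own channel value, which makes the conversion easy to sanity-check.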
In a color difference scheme, the resulting pixel intensity is defined by the proximity of a pixel's color to predefined fixed red, green, and blue color values. One example of a color difference is the weighted vector distance between the color value of a pixel and the fixed color value. In yet another variant of the "color difference" scheme, the fixed color value is derived automatically from the image. In one example, the boundary area of an image is known to contain the background color; the weighted average of the colors of the pixels in the boundary region can then be used as the fixed color value for the color difference scheme. Once the color image has been converted to an intensity image, the intensity of each pixel can be compared with the intensity threshold to determine the image region to which the pixel belongs. In other words, a pixel having an intensity value above the intensity threshold indicates that the pixel received light reflected from a first surface of the sample, and a pixel having an intensity value below the intensity threshold indicates that the pixel did not receive light reflected from the first surface of the sample. Once the pixels in the image are mapped to regions, the approximate shape of the feature in the focused image can be determined. Figures 20, 21, and 22 illustrate three different methods of generating an intensity threshold. The intensity threshold can be used to distinguish pixels that measure light reflected from the top surface of the photoresist layer from pixels that measure light not reflected from the top surface of the photoresist layer. Figure 20 illustrates a first method of generating an intensity threshold for analyzing a captured image. In this first method, a pixel count is generated for each measured intensity value. This type of graph is also known as a histogram.
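The color difference scheme, including the automatic derivation of the fixed color from the image border, can be sketched as follows. This is a minimal sketch; the unit weights and the Euclidean form of the weighted vector distance are assumptions, since the patent does not fix them:

```python
def color_difference_intensity(pixel, reference, weights=(1.0, 1.0, 1.0)):
    """Color-difference scheme: the resulting 'intensity' is the weighted
    vector distance between a pixel's RGB color and a fixed reference color
    (smaller distance means closer to the reference color)."""
    return sum(w * (c - r) ** 2
               for w, c, r in zip(weights, pixel, reference)) ** 0.5

def background_reference(border_pixels):
    """Derive the fixed color automatically: average the colors of the
    image border region, which is assumed to contain only background."""
    n = len(border_pixels)
    return tuple(sum(p[ch] for p in border_pixels) / n for ch in range(3))

ref = background_reference([(10, 10, 10), (14, 10, 10), (12, 14, 10)])
print(ref)                                      # averaged border color
print(color_difference_intensity((12, 12, 10), ref))  # small distance
```

A pixel matching the background yields a distance near zero; feature pixels yield larger values, so the distance map can be thresholded like an ordinary intensity image.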
Once the pixel count for each intensity value is generated, the range of intensities between the peak count of pixels originating from measured light reflected from the photoresist layer and the peak count of pixels originating from measured light not reflected from the photoresist layer can be determined. An intensity value within this intensity range is selected as the intensity threshold. In one example, the midpoint between the two peak counts is chosen as the threshold intensity. In other examples falling within the present disclosure, other intensity values between the two peak counts can be used. Figure 21 illustrates a second method of generating an intensity threshold for analyzing a captured image. In step 311, a first percentage of the captured image representing the photoresist region is determined. This determination can be made by physical measurement, by optical inspection, or based on production specifications. In step 312, a second percentage of the captured image representing the photoresist open area is determined. This determination can likewise be made by physical measurement, by optical inspection, or based on production specifications. In step 313, all pixels in the captured image are sorted according to the intensity measured at each pixel. In step 314, all pixels whose intensity falls within the lowest second-percentage of all pixel intensities are selected. In step 315, all selected pixels are analyzed. Figure 22 illustrates a third method of determining an intensity threshold. In step 321, a predetermined intensity threshold is stored in memory. In step 322, the intensity of each pixel is compared with the stored intensity threshold. In step 323, all pixels having an intensity value less than the intensity threshold are selected. In step 324, the selected pixels are analyzed.
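The histogram-based first method (Figure 20) can be sketched as follows. This minimal version assumes exactly two well-separated peaks and uses an assumed minimum peak separation of 10 intensity bins; neither detail comes from the patent:

```python
def histogram(intensities, bins):
    """Pixel count per integer intensity value (a histogram)."""
    counts = [0] * bins
    for v in intensities:
        counts[min(int(v), bins - 1)] += 1
    return counts

def midpoint_threshold(intensities, bins=256):
    """Locate the two highest pixel-count peaks in the intensity histogram
    (dark peak = open area, bright peak = photoresist top surface) and
    return the midpoint intensity between them as the threshold."""
    counts = histogram(intensities, bins)
    peak1 = max(range(bins), key=lambda i: counts[i])
    # Second peak: strongest bin at least 10 bins from the first (assumed).
    peak2 = max((i for i in range(bins) if abs(i - peak1) >= 10),
                key=lambda i: counts[i])
    return (peak1 + peak2) / 2

# Dark cluster near intensity 30, bright cluster near intensity 200.
pixels = [29, 30, 30, 31, 30] * 4 + [199, 200, 200, 201, 200] * 5
print(midpoint_threshold(pixels))  # -> 115.0
```

Per the text, any intensity between the two peaks is an acceptable threshold; the midpoint is simply the example the patent gives.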
Regardless of how the intensity threshold is generated, the threshold intensity value is used to roughly determine the location of the boundary of the feature in the captured image. The approximate boundary of the feature is then used to measure the boundary of the feature more accurately, as discussed below. Figure 23 is a three-dimensional view of the photoresist opening shown in Figure 19. Various photoresist opening measurements are of interest during processing, such as the areas of the top opening and the bottom opening, the diameters of the top opening and the bottom opening, the circumferences of the top opening and the bottom opening, the cross-sectional widths of the top opening and the bottom opening, and the depth of the opening. A first measurement is the top surface opening area. Figure 8 (and its accompanying text) describes how, from multiple images obtained at different distances from the sample, an image focused on the top surface of the photoresist opening and an image focused on the bottom surface of the photoresist opening are selected. Once an image focused on the top surface is selected, the top opening measurements can be determined using that image. Similarly, once an image focused on the bottom surface of the photoresist opening is selected, the bottom opening measurements can be determined using that image. As described in U.S. Patent Application Serial No. 12/699,824, entitled "3-D Optical Microscope" by James Jianguo Xu et al. (the subject matter of which is incorporated herein by reference), a pattern or grid can be projected onto the surface of the sample while multiple images are captured. In one example, an image containing the projected pattern or grid is used to determine the photoresist opening measurements.
In another example, a new image captured at the same z-distance without the projected pattern or grid is used to determine the photoresist opening measurements. In the latter instance, the new image, which does not have a projected pattern or grid on the sample, provides a "clearer" image that makes detection of the boundaries of the photoresist opening easier. Figure 24 is a two-dimensional view of the top surface opening shown in Figure 23. The two-dimensional view clearly shows the boundary of the top surface opening (solid line 40). The boundary is traced using a best fit line (dashed line 41). Once the best fit line trace is produced, the diameter, area, and circumference of the best fit line 41 can be calculated. Figure 25 is a two-dimensional view of the bottom surface opening shown in Figure 23. The two-dimensional view clearly shows the boundary of the bottom surface opening (solid line 42). The boundary is traced using a best fit line (dashed line 43). Once the best fit line trace is produced, the diameter, area, and circumference of the bottom surface opening are calculated from the best fit line. In this example, the best fit line is automatically generated by a computer system in communication with the optical microscope. A best fit line can be generated by analyzing the transition between the dark and bright portions of the selected image, as discussed in more detail below. Figure 26 is a two-dimensional image of an opening in a photoresist layer. The image is focused on the top surface of the photoresist layer. In this example, the light reflected from the top surface of the photoresist layer is bright because the microscope is focused on the top surface of the photoresist layer. The light intensity measured from the photoresist opening is dark because there is no reflective surface in focus within the photoresist opening. The intensity of each pixel is used to determine whether the pixel belongs to the top surface of the photoresist or to the opening in the photoresist.
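For a circular opening, one common way to realize the "best fit line" through the detected boundary points is an algebraic least-squares circle fit. The sketch below uses the Kasa formulation; this particular fitting algorithm is an assumption for illustration, since the patent does not specify how the best fit line is computed:

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 = a*x + b*y + c
    for (a, b, c); center is (a/2, b/2), radius is sqrt(c + a^2/4 + b^2/4)."""
    # Build the 3x3 normal equations of the overdetermined linear system.
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        z = x * x + y * y
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            rhs[i] += row[i] * z
    # Gaussian elimination with partial pivoting, then back-substitution.
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        rhs[col], rhs[pivot] = rhs[pivot], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            rhs[r] -= f * rhs[col]
    sol = [0.0] * 3
    for r in (2, 1, 0):
        sol[r] = (rhs[r] - sum(A[r][j] * sol[j] for j in range(r + 1, 3))) / A[r][r]
    a, b, c = sol
    cx, cy = a / 2, b / 2
    radius = math.sqrt(c + cx * cx + cy * cy)
    return cx, cy, radius

# Boundary points sampled from a circle of radius 5 centered at (2, 3).
pts = [(2 + 5 * math.cos(t), 3 + 5 * math.sin(t))
       for t in [k * 2 * math.pi / 12 for k in range(12)]]
cx, cy, r = fit_circle(pts)
# Diameter, area, and circumference then follow directly from the fit:
print(2 * r, math.pi * r * r, 2 * math.pi * r)
```

The diameter, area, and circumference reported for the opening are those of the fitted circle, which is why a robust fit matters more than any single boundary pixel.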
The intensity change across the transition between the top surface of the photoresist and the opening in the photoresist can span multiple pixels and multiple intensity levels, and the background intensity of the image may not be uniform. Therefore, further analysis is needed to determine the exact pixel location of the photoresist boundary. To determine the pixel location of a single surface transition point, an average intensity value is obtained in a neighboring bright area outside the transition zone, and an average intensity value is obtained in a neighboring dark area outside the transition zone. The intermediate intensity value between the average of the neighboring bright area and the average of the neighboring dark area is used as the intensity threshold for distinguishing whether a pixel belongs to the top surface of the photoresist or to the opening in the photoresist. This intensity threshold can be different from the previously discussed intensity threshold used for selecting features within a single captured image. Once the intermediate intensity threshold is determined, it is compared with all pixels to distinguish pixels belonging to the top surface of the photoresist from pixels belonging to the opening in the photoresist. If the pixel intensity is higher than the intensity threshold, the pixel is determined to be a photoresist pixel; if the pixel intensity is below the intensity threshold, the pixel is determined to be an open area pixel. Multiple boundary points can be determined in this manner and used to fit a shape. The fitted shape is then used to calculate all the desired dimensions of the top opening of the photoresist. In one example, the fitted shape can be selected from the following group: circle, square, rectangle, triangle, oval, hexagon, pentagon, and so on. Figure 27 illustrates the variation in measured intensity across a neighboring region around the brightness transition of Figure 26.
In the leftmost portion of the neighboring region, the measured intensity is high because the microscope is focused on the top surface of the photoresist layer. The measured light intensity decreases across the brightness transition of the neighboring region. The measured light intensity drops to a minimum range at the rightmost portion of the neighboring region because the top surface of the photoresist layer is not present there. Figure 27 plots this variation in measured intensity across the neighboring region. A boundary point indicating where the top surface of the photoresist layer ends can then be determined by applying a threshold intensity: the boundary point is located at the intersection of the measured intensity and the threshold intensity. This procedure is repeated at different neighboring locations along the brightness transition, and a boundary point is determined for each neighboring region. The boundary points of the neighboring regions are then used to determine the size and shape of the top surface boundary. Figure 28 is a two-dimensional image of an opening in a photoresist layer. The image is focused on the bottom surface of the photoresist opening. In this example, the light reflected from the bottom surface of the photoresist open area is bright because the microscope is focused on the bottom surface of the photoresist opening. The light reflected from the photoresist area is also relatively bright, because the substrate has a high reflectivity or carries a metal seed layer. Due to light scattering caused by the photoresist boundary, the light reflected from the boundary of the photoresist layer is dark. The measured intensity of each pixel is used to determine whether the pixel belongs to the bottom surface of the photoresist opening.
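The Figure 27 procedure — bright-side average, dark-side average, midpoint threshold, then the crossing location — can be sketched as follows. The sub-pixel linear interpolation at the crossing is an assumed refinement; the patent only requires the intersection of the measured intensity with the threshold:

```python
def boundary_point(profile, edge=3):
    """Given a 1-D intensity profile across a bright-to-dark transition,
    average the `edge` outermost bright samples and the `edge` outermost
    dark samples, use their midpoint as the threshold, and return the
    (sub-pixel) position where the profile crosses that threshold."""
    bright = sum(profile[:edge]) / edge   # neighboring bright area average
    dark = sum(profile[-edge:]) / edge    # neighboring dark area average
    threshold = (bright + dark) / 2
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - threshold) * (b - threshold) <= 0 and a != b:
            return i + (a - threshold) / (a - b)  # linear interpolation
    return None  # no crossing found in this profile

# Bright photoresist top surface on the left, dark opening on the right.
profile = [100, 100, 100, 90, 60, 30, 20, 20, 20]
print(boundary_point(profile))  # -> 4.0
```

Repeating this along the brightness transition yields one boundary point per profile, which are then fitted to a shape as the text describes.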
The intensity change across the transition between the bottom surface of the photoresist opening and the photoresist area can span multiple pixels and multiple intensity levels, and the background intensity of the image may not be uniform. Therefore, further analysis is needed to determine the exact pixel location of the photoresist opening. To determine the pixel position of a boundary point, the position of the minimum-intensity pixel within the neighboring region is determined. Multiple boundary points can be determined in this manner and used to fit a shape. The fitted shape is then used to calculate the desired dimensions of the bottom opening. Figure 29 illustrates the variation in measured intensity across a neighboring region around the brightness transition of Figure 28. In the leftmost portion of the neighboring region, the measured intensity is high because the microscope is focused on the bottom surface of the photoresist opening. The measured light intensity decreases to a minimum across the brightness transition of the neighboring region, and then, due to light reflection from the surface of the substrate, rises to a relatively high intensity range at the rightmost portion of the neighboring region. Figure 29 plots this variation in measured intensity across the neighboring region. A boundary point indicating the location of the boundary of the photoresist opening can then be determined by finding the position of the minimum measured intensity; the boundary point is located where the minimum measured intensity occurs. The procedure is repeated at different neighboring regions along the brightness transition, and a boundary point is determined for each neighboring region. The boundary points of the neighboring regions are then used to determine the size and shape of the bottom surface boundary.
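The Figure 29 procedure — locating the boundary at the intensity minimum of each profile — can be sketched as follows. The parabolic sub-pixel refinement around the minimum is an assumption added for illustration, not a step stated in the text:

```python
def min_intensity_boundary(profile):
    """Return the position of the minimum of a 1-D intensity profile taken
    across the dark photoresist-boundary band, with parabolic sub-pixel
    refinement when the minimum has neighbors on both sides."""
    i = min(range(len(profile)), key=lambda k: profile[k])
    if 0 < i < len(profile) - 1:
        a, b, c = profile[i - 1], profile[i], profile[i + 1]
        denom = a - 2 * b + c
        if denom != 0:
            return i + 0.5 * (a - c) / denom  # vertex of fitted parabola
    return float(i)

# Bright opening bottom, dark scattering boundary, bright photoresist
# over a reflective substrate (the Figure 29 profile shape).
profile = [90, 80, 40, 15, 40, 80, 90]
print(min_intensity_boundary(profile))  # -> 3.0 (symmetric minimum)
```

As with the threshold-crossing method, one such point per profile along the transition gives the set of boundary points used to fit the bottom-opening shape.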
Figure 30 is a two-dimensional image of a trench structure, such as an unplated redistribution layer (RDL) structure, in a photoresist layer. The image is focused on the top surface of the photoresist layer. In this example, the light reflected from the top surface of the photoresist layer is bright because the microscope is focused on the top surface of the photoresist layer. The light reflected from the opening in the photoresist layer is dark because less light is reflected from the open trench region. The intensity of each pixel is used to determine whether the pixel belongs to the top surface of the photoresist or to the open area in the photoresist. The intensity change across the transition between the top surface of the photoresist and the open region in the photoresist can span multiple pixels and multiple intensity levels, and the background intensity of the image may not be uniform. Therefore, further analysis is needed to determine the exact pixel location of the photoresist boundary. To determine the pixel location of a single surface transition point, an average intensity value is obtained in a neighboring bright area outside the transition zone, and an average intensity value is obtained in a neighboring dark area outside the transition zone. The intermediate intensity value between the average of the neighboring bright area and the average of the neighboring dark area is used as the intensity threshold for distinguishing top surface photoresist reflection from non-top-surface photoresist reflection. Once the intermediate intensity threshold is determined, it is compared with all neighboring pixels to determine the boundary between the top surface pixels and the photoresist open area. If the pixel intensity is higher than the intensity threshold, the pixel is determined to be a top surface photoresist pixel; if the pixel intensity is below the intensity threshold, the pixel is determined to be a photoresist open area pixel.
Multiple boundary points can be determined in this manner and used to fit a shape. The fitted shape is then used to calculate all the desired dimensions of the trench photoresist opening, such as the trench width. Figure 31 illustrates the variation in measured intensity across a neighboring region around the brightness transition of Figure 30. In the leftmost portion of the neighboring region, the measured intensity is high because the microscope is focused on the top surface of the photoresist layer. The measured light intensity decreases across the brightness transition of the neighboring region. The measured light intensity drops to a minimum range at the rightmost portion of the neighboring region because the top surface of the photoresist layer is not present there. Figure 31 plots this variation in measured intensity across the neighboring region. A boundary point indicating the end of the top surface of the photoresist layer can then be determined by applying a threshold intensity: the boundary point is located at the intersection of the measured intensity and the threshold intensity. This procedure is repeated at different neighboring locations along the brightness transition, and a boundary point is determined for each neighboring region. The boundary points of the neighboring regions are then used to determine the size and shape of the top surface boundary. Regarding Figures 26 to 31, pixel intensity is only one example of a pixel characteristic that can be used to distinguish pixels of different regions in an image. For example, the wavelength or color of each pixel can also be used in a similar manner to distinguish pixels from different regions of an image. Once the boundaries between the regions are precisely defined, the boundaries are used to determine the critical dimension (CD) of a PR opening, such as its diameter or width.
Usually, the measured CD values are then compared with values measured on other types of tools, such as a critical dimension scanning electron microscope (CD-SEM). This type of cross-calibration is necessary to ensure measurement accuracy in production monitoring tools. Figure 32 is a three-dimensional view of a photoresist opening partially filled with metallization. The opening in the photoresist layer has a trench shape, such as a plated redistribution layer (RDL) structure. During wafer processing, it is advantageous to measure the various features of the metallization deposited into the photoresist opening while the photoresist is still intact. For example, if the metal is not thick enough, additional metal can still be plated as long as the photoresist has not been stripped. The ability to spot potential problems while the wafer is still in process prevents further fabrication of a defective wafer and saves material and processing costs. Figure 33 is a cross-sectional view of the photoresist opening partially filled with metallization. Figure 33 clearly shows that the height of the top surface of the photoresist ("PR") region is greater than the height of the top surface of the metallization. The width of the top surface of the metallization is also shown in Figure 33. Using the various methods described above, the z position of the top surface of the photoresist region and the z position of the top surface of the metallization can be determined. The distance between the top surface of the photoresist region and the top surface of the metallization (also referred to as the "step height") is equal to the difference between the height of the top surface of the photoresist region and the height of the top surface of the metallization. To determine the thickness of the metallization, an additional measurement of the thickness of the photoresist region is required.
As discussed above with respect to Figure 11, the photoresist region is translucent and has a refractive index different from that of the surrounding air. Therefore, the focal plane of the captured image that is focused on light reflected from the bottom surface of the photoresist region is not actually positioned at the bottom surface of the photoresist region. Here, however, the goal is different: rather than filtering out an erroneous surface measurement, the thickness of the photoresist region is now required. Figure 40 illustrates how the portion of the incident light that is not reflected from the top surface of the photoresist region travels through the photoresist region at an angle different from that of the incident light, due to the refractive index of the photoresist material. If this error is not corrected, the measured thickness D' of the photoresist region (the measured z position of the captured image focused on light reflected from the top surface of the photoresist region, minus the measured z position of the captured image focused on light reflected from the bottom surface of the photoresist region) will be in error; Figure 40 clearly shows that the measured thickness D' is not close to the actual thickness D of the photoresist region. However, the error introduced by the refractive index of the photoresist region can be removed by applying a correction calculation to the measured thickness of the photoresist region. A first correction calculation is shown in FIG.: the actual thickness (D) of the photoresist region is equal to the measured thickness (D') of the photoresist region multiplied by the refractive index of the photoresist region. A second correction calculation is shown in FIG.: the actual thickness (D) of the photoresist region is equal to the measured thickness (D') of the photoresist region multiplied by the refractive index of the photoresist region, plus an offset value.
The second correction calculation is more general, accounting for the facts that the refractive index of the photoresist varies with wavelength and that, when imaging through a transparent medium, the spherical aberration of an objective lens can affect the z position measurement. Therefore, as long as an appropriate calibration procedure is followed, the actual thickness of the photoresist region can be calculated from the measured focal plane of the captured image focused on light reflected from the bottom surface of the photoresist region. Once the correction equation is applied to the measured thickness of the photoresist region, the true thickness of the photoresist region is obtained. Referring again to Figure 33, the thickness of the metallization can now be calculated. The thickness of the metallization is equal to the thickness of the photoresist region minus the difference between the z position of the top surface of the photoresist region and the z position of the top surface of the metallization. Figure 34 is a three-dimensional view of a circular photoresist opening with metallization. Figure 35 is a cross-sectional view of the circular photoresist opening with metallization shown in Figure 34, and is similar to the cross-sectional view of Figure 33. Figure 35 clearly shows that the height of the top surface of the photoresist ("PR") region is greater than the height of the top surface of the metallization. Using the various methods described above, the z position of the top surface of the photoresist region and the z position of the top surface of the metallization can be determined. The distance between the top surface of the photoresist region and the top surface of the metallization (also referred to as the "step height") is equal to the difference between the height of the top surface of the photoresist region and the height of the top surface of the metallization.
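The correction and step-height arithmetic described above can be collected into a short sketch. The numeric values below (z positions in arbitrary length units, refractive index 1.6) are illustrative only, not values from the patent:

```python
def photoresist_thickness(z_top, z_bottom_focus, n_pr, offset=0.0):
    """Second correction calculation: actual thickness D = n * D' + offset,
    where D' is the measured (apparent) thickness from the two focal planes
    and n is the photoresist refractive index. With offset=0 this reduces
    to the first correction calculation, D = n * D'."""
    measured = z_top - z_bottom_focus  # apparent thickness D'
    return n_pr * measured + offset

def metallization_thickness(pr_thickness, z_pr_top, z_metal_top):
    """Metal thickness = photoresist thickness minus the step height
    (z of the photoresist top surface minus z of the metallization top)."""
    step_height = z_pr_top - z_metal_top
    return pr_thickness - step_height

# Apparent PR thickness D' = 40 units, so corrected D = 1.6 * 40 = 64.
d = photoresist_thickness(z_top=100.0, z_bottom_focus=60.0, n_pr=1.6)
print(d)                                                       # -> 64.0
print(metallization_thickness(d, z_pr_top=100.0, z_metal_top=80.0))  # -> 44.0
```

In practice the offset (and effective index) would come from the calibration procedure the text mentions, since wavelength dependence and objective spherical aberration both shift the measured focal planes.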
To determine the thickness of the metallization, an additional measurement of the thickness of the photoresist region is required. As discussed above with respect to Figure 11, the photoresist region is translucent and has a refractive index different from that of the surrounding air. Therefore, the focal plane of the captured image that is focused on light reflected from the bottom surface of the photoresist region is not actually positioned at the bottom surface of the photoresist region. Here, however, the goal is different: the thickness of the photoresist region is now required. Figure 40 illustrates how the portion of the incident light that is not reflected from the top surface of the photoresist region travels through the photoresist region at an angle different from that of the incident light, due to the refractive index of the photoresist material. If this error is not corrected, the measured thickness D' of the photoresist region (the measured z position of the captured image focused on light reflected from the top surface of the photoresist region, minus the measured z position of the captured image focused on light reflected from the bottom surface of the photoresist region) will be in error; Figure 40 clearly shows that the measured thickness D' is not close to the actual thickness D of the photoresist region. However, the error introduced by the refractive index of the photoresist region can be removed by applying a correction calculation to the measured thickness of the photoresist region. A first correction calculation is shown in FIG.: the actual thickness (D) of the photoresist region is equal to the measured thickness (D') of the photoresist region multiplied by the refractive index of the photoresist region. A second correction calculation is shown in FIG.
The actual thickness (D) of the photoresist region is equal to the measured thickness (D') of the photoresist region multiplied by the refractive index of the photoresist region plus an offset value. The second correction calculation is more general and considers the following facts: The refractive index of the photoresist varies depending on the wavelength and the spherical aberration of an objective lens can affect the z-position measurement when imaged through a transparent medium. therefore, As long as the appropriate calibration procedure is followed, The actual thickness of the photoresist region can be calculated using one of the focal planes of the focal plane of the captured image focused on the light reflected from the bottom surface of the photoresist region.  Once the correction equation is applied to the measured thickness of the photoresist region, The true thickness of the photoresist area is obtained. Referring again to Figure 35, The thickness of the metallization can now be calculated. The thickness of the metallization is equal to the difference between the thickness of the photoresist region minus the z-position of the top surface of the photoresist region and the z-position of the top surface of the metallization.  Figure 36 is a three dimensional view of one of the metal pillars above the passivation layer. Figure 37 is a cross-sectional view of one of the metal posts above the passivation layer shown in Figure 36. Figure 37 clearly shows that the height of the top surface of the passivation layer is less than the height of the top surface of the metal layer. The diameter of the top surface of the metallization is also shown in FIG. Using the various methods described above, The z position of the top surface of the passivation layer and the z position of the top surface of the metal layer can be determined. 
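The two correction calculations and the metallization-thickness arithmetic described above can be sketched in a few lines; the function names, the refractive index value, and the sample numbers are hypothetical illustrations, not values from the patent:

```python
def corrected_thickness(measured: float, refractive_index: float,
                        offset: float = 0.0) -> float:
    """First correction:  D = D' * n            (offset = 0)
    Second correction:    D = D' * n + offset,  where the offset comes from
    a calibration accounting for wavelength dependence of the refractive
    index and for objective-lens spherical aberration."""
    return measured * refractive_index + offset

def metallization_thickness(resist_thickness: float, z_resist_top: float,
                            z_metal_top: float) -> float:
    """Metallization thickness: resist thickness minus the step height
    between the resist top surface and the metallization top surface."""
    return resist_thickness - (z_resist_top - z_metal_top)

# Hypothetical measured thickness 2.0 um and refractive index 1.6.
d_true = corrected_thickness(2.0, 1.6)
print(metallization_thickness(d_true, 12.0, 9.5))
```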
The distance between the top surface of the passivation layer and the top surface of the metal layer (also referred to as "step height") is equal to the difference between the height of the top surface of the metal layer and the height of the top surface of the passivation layer. To determine the thickness of the metal layer, another measurement, the thickness of the passivation layer, is required. As discussed above with respect to Figure 11, a translucent material, such as a photoresist region or a passivation layer, has a refractive index that differs from the refractive index of the open air. Therefore, the focal plane of the captured image that is focused on the light reflected from the bottom surface of the passivation layer is not actually positioned at the bottom surface of the passivation layer. At this point, however, the goal is different: the thickness of the passivation layer is now required. Figure 47 illustrates how the portion of the incident light that is not reflected from the top surface of the passivation layer travels through the passivation layer at an angle different from that of the incident light, due to the refractive index of the passivation material. If this error is not addressed, the measured thickness of the passivation layer D' (the measured z position of the captured image focused on the light reflected from the top surface of the passivation layer minus the measured z position of the captured image focused on the light reflected from the bottom surface of the passivation layer) will be inaccurate; Figure 47 clearly shows that the measured thickness D' is not close to the actual thickness D of the passivation layer. However, the error introduced by the refractive index of the passivation layer can be removed by applying a correction calculation to the measured thickness of the passivation layer. In a first correction calculation, shown in Figure 47, the actual thickness (D) of the passivation layer is equal to the measured thickness (D') of the passivation layer multiplied by the refractive index of the passivation layer. In a second correction calculation, the actual thickness (D) of the passivation layer is equal to the measured thickness (D') of the passivation layer multiplied by the refractive index of the passivation layer, plus an offset value. The second correction calculation is more general and accounts for the facts that the refractive index of the passivation layer varies with wavelength and that the spherical aberration of an objective lens can affect the z position measurement when imaging through a transparent medium. Therefore, as long as an appropriate calibration procedure is followed, the actual thickness of the passivation layer can be calculated using the focal plane of the captured image that is focused on the light reflected from the bottom surface of the passivation layer. Once the correction equation is applied to the measured thickness of the passivation layer, the true thickness of the passivation layer is obtained. Referring again to Figure 37, the thickness of the metal layer can now be calculated. The thickness of the metal layer is equal to the thickness of the passivation layer plus the difference between the z position of the top surface of the metal layer and the z position of the top surface of the passivation layer. Figure 38 is a three-dimensional view of metal above a passivation layer; in this particular case, the metal structure shown is a redistribution layer (RDL). Figure 39 is a cross-sectional view of the metal above the passivation layer shown in Figure 38. Figure 39 clearly shows that the height of the top surface of the passivation layer is less than the height of the top surface of the metal layer.
Using the various methods described above, the z position of the top surface of the passivation layer and the z position of the top surface of the metal layer can be determined. The distance between the top surface of the passivation layer and the top surface of the metal layer (also referred to as "step height") is equal to the difference between the height of the top surface of the metal layer and the height of the top surface of the passivation layer. To determine the thickness of the metal layer, another measurement, the thickness of the passivation layer, is required. As discussed above with respect to Figure 11, a translucent material, such as a photoresist region or a passivation layer, has a refractive index that differs from the refractive index of the open air. Therefore, the focal plane of the captured image that is focused on the light reflected from the bottom surface of the passivation layer is not actually positioned at the bottom surface of the passivation layer. At this point, however, the goal is different: the thickness of the passivation layer is now required. Figure 40 illustrates how the portion of the incident light that is not reflected from the top surface of the passivation layer travels through the passivation layer at an angle different from that of the incident light, due to the refractive index of the passivation material. If this error is not addressed, the measured thickness of the passivation layer D' (the measured z position of the captured image focused on the light reflected from the top surface of the passivation layer minus the measured z position of the captured image focused on the light reflected from the bottom surface of the passivation layer) will be inaccurate; Figure 40 clearly shows that the measured thickness D' is not close to the actual thickness D of the passivation layer. However, the error introduced by the refractive index of the passivation layer can be removed by applying a correction calculation to the measured thickness of the passivation layer. In a first correction calculation, the actual thickness (D) of the passivation layer is equal to the measured thickness (D') of the passivation layer multiplied by the refractive index of the passivation layer. In a second correction calculation, the actual thickness (D) of the passivation layer is equal to the measured thickness (D') of the passivation layer multiplied by the refractive index of the passivation layer, plus an offset value. The second correction calculation is more general and accounts for the facts that the refractive index of the passivation layer varies with wavelength and that the spherical aberration of an objective lens can affect the z position measurement when imaging through a transparent medium. Therefore, as long as an appropriate calibration procedure is followed, the actual thickness of the passivation layer can be calculated using the focal plane of the captured image that is focused on the light reflected from the bottom surface of the passivation layer. Once the correction equation is applied to the measured thickness of the passivation layer, the true thickness of the passivation layer can be obtained. Referring again to Figure 39, the thickness of the metal layer can now be calculated. The thickness of the metal layer is equal to the thickness of the passivation layer plus the difference between the z position of the top surface of the metal layer and the z position of the top surface of the passivation layer. Figure 41 is a graph showing the peak mode operation of images captured at various distances when a photoresist opening is in the field of view of the optical microscope.
The captured images depicted in Figure 41 are obtained from a sample similar to the sample structure shown in Figure 32: a metallized trench structure. The top view of the sample shows the area of the photoresist opening (and the metallization) in the x-y plane. The photoresist opening also has a depth in the z direction, with the top surface of the photoresist higher than the metallization. The images below the top view in Figure 41 show the images captured at various distances. At distance 1, the optical microscope is focused on neither the top surface of the photoresist region nor the top surface of the metallization. At distance 2, the optical microscope is focused on the top surface of the metallization, but not on the top surface of the photoresist region. This results in an increase in the characteristic value (intensity/contrast/fringe contrast) of the pixels receiving light reflected from the top surface of the metallization, compared to the pixels receiving light reflected from the defocused surfaces (such as the top surface of the photoresist region). At distance 3, the optical microscope is focused on neither the top surface of the photoresist region nor the top surface of the metallization; therefore, at distance 3 the maximum characteristic value will be substantially lower than the maximum characteristic value measured at distance 2. At distance 4, the optical microscope is not focused on any surface of the sample; however, due to the difference between the refractive index of air and the refractive index of the photoresist region, an increase in the maximum measured characteristic value (intensity/contrast/fringe contrast) is observed. Figures 11 and 40 and the accompanying text describe this phenomenon in greater detail. At distance 6, the optical microscope is focused on the top surface of the photoresist region, but not on the top surface of the metallization. This results in an increase in the characteristic value (intensity/contrast/fringe contrast) of the pixels receiving light reflected from the top surface of the photoresist region, compared to the pixels receiving light reflected from the defocused surfaces (such as the top surface of the metallization). Once the maximum characteristic value from each captured image is determined, the results can be used to determine the distances at which each surface of the sample is positioned. Figure 42 is a graph showing three-dimensional information derived from the peak mode operation illustrated in Figure 41. As discussed with respect to Figure 41, the maximum characteristic values of the images captured at distances 1, 3, and 5 are lower than the maximum characteristic values of the images captured at distances 2, 4, and 6. The curve of the maximum characteristic value across the various z distances may contain noise due to environmental effects such as vibration. To minimize this noise, a standard smoothing method, such as Gaussian filtering with a specific kernel size, can be applied before further data analysis. One method of comparing the maximum characteristic values uses a peak finding algorithm. In one example, a derivative method is used to locate the zero crossing points along the z axis to determine the distance of each "peak". The maximum characteristic value at each distance at which a peak is found is then compared to determine the distance at which the largest characteristic value is measured. In the case shown in Figure 42, a peak will be found at distance 2, which serves as an indication that one of the surfaces of the sample is positioned at distance 2. Another method of comparing the maximum characteristic values is performed by comparing each of the maximum characteristic values with a predetermined threshold.
The threshold can be calculated based on the wafer material, the scan distances, and the optical microscope specifications. Alternatively, the threshold can be determined by empirical testing prior to automated processing. In either case, the maximum characteristic value of each captured image is compared with the threshold. If the maximum characteristic value is greater than the threshold, it is determined that the maximum characteristic value indicates the presence of a surface of the wafer. If the maximum characteristic value is not greater than the threshold, it is determined that the maximum characteristic value does not indicate a surface of the wafer. As an alternative to the peak mode methods described above, the range mode method depicted in Figure 13 and the associated text can be used to determine the z positions of the different surfaces of the sample. Figure 43 is a view of a captured image focused on the top surface of a photoresist layer in a trench structure, including the outlines of a first analysis region A and a second analysis region B. As discussed above, the entire field of view of each captured image can be used to generate three-dimensional information. However, it is advantageous to have the option of generating three-dimensional information using only a selectable portion of the field of view (region A or region B). In one example, a user selects a region using a mouse or touch screen device that communicates with the processor. Once a region is selected, different thresholds can be applied to each region to more efficiently pick out a particular surface peak, as shown in Figure 42. This case is illustrated in Figure 43.
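The two comparison methods described above, derivative-based peak finding and thresholding (possibly with a different threshold per analysis region), might be sketched as follows; the smoothing kernel, curve values, and thresholds are all hypothetical illustrations:

```python
def smooth(values, kernel=(0.25, 0.5, 0.25)):
    """Edge-clamped convolution; a small stand-in for the Gaussian
    filtering mentioned in the text."""
    half = len(kernel) // 2
    out = []
    for i in range(len(values)):
        acc = 0.0
        for j, w in enumerate(kernel):
            k = min(max(i + j - half, 0), len(values) - 1)
            acc += w * values[k]
        out.append(acc)
    return out

def peak_distances(values):
    """Local maxima of the characteristic-value curve, i.e. distances where
    the first difference changes sign from positive to negative."""
    return [i for i in range(1, len(values) - 1)
            if values[i] >= values[i - 1] and values[i] > values[i + 1]]

def surface_distances(values, threshold):
    """Distances whose maximum characteristic value exceeds a predetermined
    threshold, taken to indicate the presence of a surface."""
    return [i for i, v in enumerate(values) if v > threshold]

# Hypothetical maximum characteristic values at scan distances 0..6.
curve = [1.0, 5.0, 1.2, 4.0, 1.1, 3.0, 1.0]
print(peak_distances(curve))          # [1, 3, 5]
# A high threshold for a metal region, a lower one for photoresist.
print(surface_distances(curve, 4.5))  # [1]
print(surface_distances(curve, 2.5))  # [1, 3, 5]
```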
When it is desired to obtain three-dimensional information about the top surface of the metallization, the selectable portion of the field of view (region A) is set to include multiple areas of metallization. Because the characteristic value associated with a metal surface is typically greater than the characteristic value associated with the photoresist, a high threshold can be applied to region A to filter out the characteristic values associated with the photoresist, thereby improving the detection of metal surface peaks. Alternatively, when it is desired to obtain three-dimensional information about the top surface of a photoresist region, the selectable portion of the field of view (region B) is set to be located near the center of the field of view. Compared to the characteristic values associated with metal surfaces, the characteristic values associated with a photoresist surface are typically relatively weak, and the quality of the original signal used in the characteristic value calculation is best near the center of the field of view enclosed within region B. By setting an appropriate threshold for region B, the peak of the weak characteristic value of the photoresist surface can be detected more effectively. The user can set and adjust region A and region B, and the thresholds used in each region, via the graphical interface on the top view image of the sample, and store them in a recipe for automated measurement. Figure 44 is a three-dimensional view of a bump above a passivation structure. Figure 45 is a top view of the bump above the passivation structure shown in Figure 44, including the outlines of a first analysis region A and a second analysis region B. Region A can be set such that it always contains the apex of the metal bump during an automated sequence of measurements. Region B does not enclose any portion of the metal bump and encloses only a portion of the passivation layer.
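Restricting the analysis to a selectable portion of the field of view amounts to masking pixels by location. A minimal sketch of such a rectangular analysis region, with hypothetical region coordinates and pixel values:

```python
def pixels_in_region(image, region):
    """Return only the pixel values inside a rectangular analysis region.
    `image` is a list of pixel rows; `region` is (x, y, width, height)."""
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

image = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
]
# A 2x2 analysis region whose top-left corner is at (x=1, y=1).
print(pixels_in_region(image, (1, 1, 2, 2)))  # [[6, 7], [10, 11]]
```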
Analyzing only region A of each captured image provides pixel filtering such that most of the pixels analyzed contain information about the metal bump. Analyzing only region B of each captured image provides pixel filtering such that all of the pixels analyzed contain information about the passivation layer. The user can select which analysis region to apply, providing pixel filtering based on location rather than on pixel values. For example, when the position of the top surface of the passivation layer is required, region B can be applied, and all effects caused by the metal bump are immediately eliminated from the analysis. In another example, when the position of the apex of the metal bump is required, region A can be applied, and all effects caused by the large passivation layer area are immediately eliminated from the analysis. In some instances, fixing the spatial relationship between region A and region B is also useful. When metal bumps of a known size are measured repeatedly (such as shown in Figures 44 and 45), a fixed spatial relationship between region A and region B helps provide a consistent measurement system, because region A is always used to measure the three-dimensional information of the metal bump and region B is always used to measure the three-dimensional information of the passivation layer. Furthermore, when region A and region B have a fixed spatial relationship, adjustment of one region automatically causes adjustment of the other region. This scenario is illustrated in Figure 46, which is a top view showing the adjustment of analysis region A and analysis region B when the entire bump is not positioned within the original analysis region A. This can happen for a variety of reasons, such as inaccurate placement of the sample by a handler or process changes during sample manufacture. Whatever the reason, region A needs to be adjusted to properly center the apex of the metal bump.
Region B also needs to be adjusted to ensure that region B does not contain any part of the metal bump. When the spatial relationship between region A and region B is fixed, the adjustment of region A automatically causes the realignment of region B. Figure 47 is a cross-sectional view of the bump above the passivation structure illustrated in Figure 44. When the thickness of the passivation layer is substantially greater than the distance between the predetermined steps of the optical microscope during image acquisition, the z position of the top surface of the passivation layer can be easily detected, as discussed above. However, when the thickness of the passivation layer is not substantially greater than the distance between the predetermined steps of the optical microscope (i.e., when the passivation layer is relatively thin), it may not be easy to detect and measure the z position of the top surface of the passivation layer. The difficulty arises because a small percentage of the light is reflected from the top surface of the passivation layer, compared to the large percentage of the light reflected from the bottom surface of the passivation layer. In other words, compared to the peak of the characteristic value associated with the bottom surface of the passivation layer, the peak of the characteristic value associated with the top surface of the passivation layer is very weak. When the captured image capturing the high intensity reflection from the bottom surface of the passivation layer and the captured image capturing the low intensity reflection from the top surface of the passivation layer are separated by less than a few predetermined steps, it is not possible to distinguish between the reflection received from the bottom surface of the passivation layer and the reflection received from the top surface of the passivation layer. This problem can be solved by different methods of operation.
In a first method, the total number of steps in the scan is increased in order to provide additional resolution across the entire scan. For example, doubling the number of predetermined steps across the same scan distance will double the z resolution of the scan. This method will also double the number of images captured during a single scan. The resolution of the scan can be increased until the characteristic value peak measured from the top surface reflection and the characteristic value peak measured from the bottom surface reflection can be distinguished. Figure 49 depicts a scenario in which sufficient resolution is provided in the scan to distinguish the reflections from the top and bottom surfaces of the passivation layer. In a second method, the total number of predetermined steps is also increased; however, only a portion of the steps are used to capture images and the rest are skipped. In a third method, the distance between the predetermined steps is varied, such that the distance between steps is smaller near the passivation layer and larger away from the passivation layer. This method provides greater resolution near the passivation layer and lower resolution away from the passivation layer, without adding additional predetermined steps to the scan. Rather, the predetermined steps are redistributed in a non-linear fashion to provide additional resolution where it is needed and lower resolution where high resolution is not required. For an additional description of how to improve the scanning resolution, see U.S. Patent Application Serial No. 13/333,938, entitled "3D Microscope Including Insertable Components To Provide Multiple Imaging and Measurement Capabilities", filed on December 21, 2011 by James Jiang (the subject matter of which is incorporated herein by reference).
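The third method, redistributing the predetermined steps non-linearly, can be illustrated with a short sketch that places a fine step inside a band around the passivation layer and a coarse step elsewhere; the step sizes and band location are hypothetical:

```python
def z_steps(z_start, z_stop, coarse, fine, fine_band):
    """Generate scan positions with a smaller step inside the band around
    the passivation layer (fine_band = (low, high)) and a larger step
    elsewhere, without changing the overall scan range."""
    steps, z = [], z_start
    while z <= z_stop:
        steps.append(round(z, 6))
        z += fine if fine_band[0] <= z < fine_band[1] else coarse
    return steps

# Hypothetical scan from 0 to 10 um; fine 0.5 um steps near z = 4..6 um.
print(z_steps(0.0, 10.0, 2.0, 0.5, (4.0, 6.0)))
# [0.0, 2.0, 4.0, 4.5, 5.0, 5.5, 6.0, 8.0, 10.0]
```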
Using any of the methods discussed above, the z position of the top surface of the passivation layer can be determined. The height of the apex of the metal bump relative to the top surface of the passivation layer ("bump height above the passivation layer") is also a measurement of interest. The bump height above the passivation layer is equal to the z position of the apex of the bump minus the z position of the top surface of the passivation layer. The determination of the z position of the top surface of the passivation layer is described above. The determination of the z position of the apex of the bump can be performed using different methods. In a first method, the z position of the apex of the bump is determined by determining the z position of the peak characteristic value for each x-y pixel location across all captured images. In other words, for each x-y pixel location, the measured characteristic values at each z position are compared across all captured images, and the z position containing the largest characteristic value is stored in an array. The result of performing this procedure across all x-y pixel locations is an array of all x-y pixel locations and an associated peak z position for each x-y pixel location. The maximum z position in the array is taken as the z position of the apex of the bump. For an additional description of how to generate three-dimensional information, see U.S. Patent Application Serial No. 12/699,824, entitled "3-D Optical Microscope", filed on February 3, 2010, and U.S. Patent No. 8,174,762 (the subject matter of which is incorporated herein by reference). In a second method, the z position of the apex of the bump is determined by fitting a three-dimensional model to the surface of the bump and then calculating the peak of the surface of the bump using the three-dimensional model.
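Both apex-finding methods introduced above might be sketched as follows: the per-pixel peak-z array of the first method, and, for the second method, a one-dimensional reduction in which the fitted second-order polynomial has its apex where the derivative is zero. The stack values and polynomial coefficients are hypothetical, and the sketch is an illustration rather than the patent's implementation:

```python
def peak_z_array(stack):
    """First method: for each x-y pixel, the z index at which the
    characteristic value is largest across all captured images.
    `stack[z][y][x]` is the characteristic value of pixel (x, y) at step z."""
    ny, nx = len(stack[0]), len(stack[0][0])
    return [[max(range(len(stack)), key=lambda z: stack[z][y][x])
             for x in range(nx)] for y in range(ny)]

def apex_of_parabola(a, b, c):
    """Second method, reduced to one dimension: for a fitted second-order
    polynomial z(x) = a*x^2 + b*x + c, the apex lies where dz/dx = 0,
    i.e. at x = -b / (2*a)."""
    x = -b / (2 * a)
    return x, a * x * x + b * x + c

stack = [  # two z steps, 2x2 pixels, hypothetical characteristic values
    [[1.0, 0.2], [0.1, 0.3]],
    [[0.4, 0.9], [0.8, 0.1]],
]
print(peak_z_array(stack))          # [[0, 1], [1, 0]]
x, z = apex_of_parabola(-1.0, 4.0, 1.0)
print(x, z)                         # 2.0 5.0
```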
In one example, this can be done by generating the same array described above with respect to the first method; however, once the array is completed, the array is used to generate a three-dimensional model. The three-dimensional model can be generated using a second-order polynomial function fitted to the data. Once the three-dimensional model is generated, the derivative (slope) of the surface of the bump is determined. The apex of the bump is calculated to be located where the derivative of the surface of the bump is equal to zero. Once the z position of the apex of the bump is determined, the bump height above the passivation layer can be calculated by subtracting the z position of the top surface of the passivation layer from the z position of the apex of the bump. Figure 48 is a graph showing the peak mode operation of images captured at various distances when only a passivation layer is in region B of the field of view of the optical microscope. By analyzing only the pixels in region B (shown in Figure 45), all pixel information regarding the metal bump is excluded. Therefore, the three-dimensional information generated by analyzing the pixels in region B is affected only by the passivation layer present in region B. The captured images depicted in Figure 48 are obtained from a sample similar to the sample structure shown in Figure 44: a metal bump above a passivation structure. The top view of the sample shows the area of the passivation layer in the x-y plane. Because only pixels in region B are selected, the metal bump is not visible in the top view. The images below the top view in Figure 48 show the images captured at various distances. At distance 1, the optical microscope is focused on neither the top surface of the passivation layer nor the bottom surface of the passivation layer.
At distance 2, the optical microscope is not focused on any surface of the sample; however, due to the difference between the refractive index of air and the refractive index of the passivation layer, an increase in the maximum measured characteristic value (intensity/contrast/fringe contrast) is observed. Figures 11 and 40 and the accompanying text describe this phenomenon in greater detail. At distance 3, the optical microscope is focused on neither the top surface of the passivation layer nor the bottom surface of the passivation layer; therefore, at distance 3 the maximum characteristic value will be substantially lower than the characteristic value measured at distance 2. At distance 4, the optical microscope is focused on the top surface of the passivation layer. This results in an increase in the characteristic value (intensity/contrast/fringe contrast) of the pixels receiving light reflected from the top surface of the passivation layer, compared to the pixels receiving light reflected from the defocused surfaces. At distances 5, 6, and 7, the optical microscope is focused on neither the top surface of the passivation layer nor the bottom surface of the passivation layer; therefore, at distances 5, 6, and 7, the maximum characteristic value will be substantially lower than the characteristic values measured at distances 2 and 4. Once the maximum characteristic value from each captured image is determined, the results can be used to determine the distances at which each surface of the sample is located. Figure 49 is a graph showing the three-dimensional information derived from the peak mode operation of Figure 48. Due to the pixel filtering provided by analyzing only the pixels in region B of each captured image, the peak mode operation indicates the presence of a surface at only two z positions (distances 2 and 4).
The top surface of the passivation layer is positioned at the higher of the two indicated z positions. The lower of the two indicated z positions is an erroneous "artifact surface": due to the refractive index of the passivation layer, light reflected from the bottom surface of the passivation layer is measured at this position. Measuring the z position of the top surface of the passivation layer using only the pixels positioned within region B simplifies the peak mode operation and reduces the likelihood of false measurements due to light reflections from the metal bumps positioned on the same sample. As an alternative to the peak mode methods described above, the range mode method depicted in Figure 13 and the associated text can be used to determine the z positions of the different surfaces of the sample. Although certain specific embodiments are described above for instructional purposes, the teachings of this patent document have general applicability and are not limited to the specific embodiments described above. Accordingly, various modifications, adaptations, and combinations of the various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims.

1‧‧‧Semi-automated three-dimensional metrology system
2‧‧‧Stage
3‧‧‧Wafer
4‧‧‧Computer
5‧‧‧On/off button
10‧‧‧Three-dimensional imaging microscope
11‧‧‧Adjustable objective
12‧‧‧Adjustable stage
20‧‧‧Three-dimensional metrology system
21‧‧‧Three-dimensional microscope
22‧‧‧Sample handler/stage
23‧‧‧Computer
24‧‧‧Processor
25‧‧‧Storage device
26‧‧‧Network device
27‧‧‧Display
28‧‧‧Input device
29‧‧‧Network
30‧‧‧Silicon substrate
31‧‧‧Photoresist layer
40‧‧‧Boundary of the top surface opening
41‧‧‧Best-fit line
42‧‧‧Boundary of the bottom surface opening
43‧‧‧Best-fit line
200‧‧‧Flowchart
201‧‧‧Step
202‧‧‧Step
203‧‧‧Step
204‧‧‧Step
205‧‧‧Step
300‧‧‧Flowchart
301‧‧‧Step
302‧‧‧Step
303‧‧‧Step
304‧‧‧Step
305‧‧‧Step
311‧‧‧Step
312‧‧‧Step
313‧‧‧Step
314‧‧‧Step
315‧‧‧Step
321‧‧‧Step
322‧‧‧Step
323‧‧‧Step
324‧‧‧Step

The accompanying drawings, in which like numerals indicate like components, illustrate embodiments of the present invention.

FIG. 1 is a diagram of a semi-automated three-dimensional metrology system 1 that performs automated three-dimensional measurement of a sample.

FIG. 2 is a diagram of a three-dimensional imaging microscope 10 including an adjustable objective lens 11 and an adjustable stage 12.

FIG. 3 is a diagram of a three-dimensional metrology system 20 including a three-dimensional microscope, a sample handler, a computer, a display, and input devices.

FIG. 4 is a diagram illustrating a method of capturing images while varying the distance between the objective lens of the optical microscope and the stage.

FIG. 5 is a chart of the distance between the objective lens of the optical microscope and the sample surface at which each x-y coordinate has its maximum characteristic value.

FIG. 6 is a three-dimensional rendering of an image generated using the maximum characteristic value of each x-y coordinate shown in FIG. 5.

FIG. 7 is a diagram illustrating peak mode operation using images captured at various distances.

FIG. 8 is a diagram illustrating peak mode operation using images captured at various distances when a photoresist opening is within the field of view of the optical microscope.

FIG. 9 is a chart of three-dimensional information derived from peak mode operation.

FIG. 10 is a diagram illustrating summation mode operation using images captured at various distances.

FIG. 11 is a diagram illustrating erroneous surface detection when using summation mode operation.

FIG. 12 is a chart of three-dimensional information derived from summation mode operation.

FIG. 13 is a diagram illustrating range mode operation using images captured at various distances.

FIG. 14 is a chart of three-dimensional information derived from range mode operation.

FIG. 15 is a chart of the count of only those pixels having a characteristic value within a first range.

FIG. 16 is a chart of the count of only those pixels having a characteristic value within a second range.

FIG. 17 is a flowchart of the various steps included in peak mode operation.

FIG. 18 is a flowchart of the various steps included in range mode operation.

FIG. 19 is a diagram of a captured image (containing a single feature) focused on the top surface of a photoresist layer.

FIG. 20 is a diagram illustrating a first method of generating an intensity threshold.

FIG. 21 is a diagram illustrating a second method of generating an intensity threshold.

FIG. 22 is a diagram illustrating a third method of generating an intensity threshold.

FIG. 23 is a three-dimensional diagram of a photoresist opening in a sample.

FIG. 24 is a two-dimensional diagram of the top surface opening of the photoresist shown in FIG. 23.

FIG. 25 is a two-dimensional diagram of the bottom surface opening of the photoresist shown in FIG. 23.

FIG. 26 is a captured image focused on the top surface of a photoresist layer.

FIG. 27 is a diagram illustrating detection of a boundary of the photoresist layer shown in FIG. 26.

FIG. 28 is a captured image focused on the bottom surface of a photoresist layer.

FIG. 29 is a diagram illustrating detection of a boundary of the photoresist layer shown in FIG. 28.

FIG. 30 is a captured image focused on the top surface of a photoresist layer in a trench structure.

FIG. 31 is a diagram illustrating detection of a boundary of the photoresist layer shown in FIG. 30.

FIG. 32 is a three-dimensional diagram of a photoresist opening partially filled with plated metal.

FIG. 33 is a cross-sectional diagram of a photoresist opening partially filled with plated metal.

FIG. 34 is a three-dimensional diagram of a photoresist opening with plated metal.

FIG. 35 is a cross-sectional diagram of a photoresist opening with plated metal.

FIG. 36 is a three-dimensional diagram of a metal pillar above a passivation layer.

FIG. 37 is a cross-sectional diagram of a metal pillar above a passivation layer.

FIG. 38 is a three-dimensional diagram of metal above a passivation layer.

FIG. 39 is a cross-sectional diagram of metal above a passivation layer.

FIG. 40 is a cross-sectional diagram illustrating measurement of a semitransparent material close to a plated metal surface.

FIG. 41 is a diagram illustrating peak mode operation using images captured at various distances when a photoresist opening is within the field of view of the optical microscope.

FIG. 42 is a chart of three-dimensional information derived from the peak mode operation illustrated in FIG. 41.

FIG. 43 is a diagram of a captured image focused on the top surface of a photoresist layer in a trench structure, including outlines of a first analysis region A and a second analysis region B.

FIG. 44 is a three-dimensional diagram of a bump above a passivation structure.

FIG. 45 is a top view of the bump above the passivation structure, including outlines of a first analysis region A and a second analysis region B.

FIG. 46 is a top view illustrating adjustment of analysis region A and analysis region B when the entire bump is not positioned within the original analysis region A.

FIG. 47 is a cross-sectional diagram of the bump above the passivation structure.

FIG. 48 is a diagram illustrating peak mode operation using images captured at various distances when only a photoresist layer is within region B of the field of view of the optical microscope.

FIG. 49 is a chart of three-dimensional information derived from the peak mode operation of FIG. 48.
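The focus-stacking procedure summarized by FIGs. 4 through 6 (capture an image at each z-step, compute a per-pixel characteristic value, and record for each x-y coordinate the z-step at which that value peaks) can be sketched as follows. This is an illustrative sketch only, not code from the patent; the global-mean contrast metric is an assumed stand-in for the intensity, contrast, or fringe-contrast characteristic computed by the instrument.

```python
def focus_characteristic(image):
    """Per-pixel focus metric: absolute deviation from the image mean.
    An assumed stand-in for the characteristic values named in the
    disclosure (intensity, contrast, fringe contrast)."""
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    return [[abs(v - mean) for v in row] for row in image]

def best_focus_z(z_stack):
    """For every x-y coordinate, return the index of the z-step whose
    image maximizes the characteristic value at that pixel (FIG. 5)."""
    chars = [focus_characteristic(img) for img in z_stack]
    rows, cols = len(z_stack[0]), len(z_stack[0][0])
    return [[max(range(len(z_stack)), key=lambda k: chars[k][r][c])
             for c in range(cols)] for r in range(rows)]

# Three z-steps of a 2x2 sensor; only step 2 shows a sharp feature.
stack = [
    [[0.0, 0.0], [0.0, 0.0]],
    [[0.0, 0.0], [0.0, 0.0]],
    [[10.0, 0.0], [0.0, 0.0]],
]
z_map = best_focus_z(stack)  # every coordinate peaks at step 2
```

The resulting z-map, scaled by the step size, is what the three-dimensional rendering of FIG. 6 displays.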

Claims (22)

1. A method of generating three-dimensional (3-D) information of a sample using an optical microscope, the method comprising: varying a distance between the sample and an objective lens of the optical microscope in predetermined steps; capturing an image at each predetermined step, wherein a first surface of the sample and a second surface of the sample are within a field of view of each of the captured images; determining a characteristic value of each pixel in each captured image; determining, for each captured image, a maximum characteristic value across a first portion of the pixels in the captured image; comparing the maximum characteristic value of each captured image to determine whether a surface of the sample is present at each predetermined step; determining a first captured image that is focused on an apex of a bump of the sample; determining, based on the characteristic value of each pixel in each captured image, a second captured image that is focused on a first surface of the sample; and determining a first distance between the apex of the bump and the first surface.

2. The method of claim 1, wherein the optical microscope includes a stage, wherein the sample is supported by the stage, wherein the optical microscope is adapted to communicate with a computer system, wherein the computer system includes a memory device adapted to store each captured image, and wherein the optical microscope is selected from the group consisting of a confocal microscope, a structured illumination microscope, and an interferometer.

3. The method of claim 1, wherein the determining of the first captured image further comprises: determining a maximum characteristic value for each x-y pixel position within a second portion of the x-y pixel positions across all captured images, wherein the second portion of x-y pixel positions includes at least some of the x-y pixel positions included in each captured image; determining a subset of the captured images, wherein only captured images that contain an x-y pixel position maximum characteristic value are included in the subset; and determining that, among all captured images within the subset, the first captured image is focused at a highest z-position compared to all other captured images within the subset.

4. The method of claim 1, wherein the first portion of pixels includes all pixels included in the captured image, and wherein the characteristic value of each pixel is selected from the group consisting of intensity, contrast, and fringe contrast.

5. The method of claim 1, wherein the first portion of pixels includes fewer than all pixels included in the captured image.

6. The method of claim 3, wherein the second portion of pixels includes all pixels included in the captured image.

7. The method of claim 3, wherein the second portion of pixels includes fewer than all pixels included in the captured image.

8. The method of claim 1, wherein the first portion of pixels does not receive light reflected from the metal bump.
9. The method of claim 3, wherein the second portion of pixels receives light reflected from the apex of the metal bump.

10. The method of claim 3, wherein a spatial relationship between the first portion of pixels and the second portion of pixels is fixed.

11. The method of claim 3, wherein the second portion of pixels is contiguous and centered on the apex of the bump.

12. The method of claim 1, wherein the bump is a metal bump and wherein the first surface is a top surface of a passivation layer.

13. A method of generating three-dimensional (3-D) information of a sample using an optical microscope, the method comprising: varying a distance between the sample and an objective lens of the optical microscope in predetermined steps; capturing an image at each predetermined step, wherein a first surface of the sample and a second surface of the sample are within a field of view of each of the captured images; determining a characteristic value of each pixel in each captured image; determining, for each captured image, a count of pixels across a first portion of the pixels having a characteristic value within a first range, wherein all pixels not having a characteristic value within the first range are excluded from the pixel count; determining, based on the pixel count of each captured image, whether a surface of the sample is present at each predetermined step; determining a first captured image that is focused on an apex of a bump of the sample; determining, based on the characteristic value of each pixel in each captured image, a second captured image that is focused on a first surface of the sample; and determining a first distance between the apex of the bump and the first surface.

14. The method of claim 13, wherein the optical microscope includes a stage, wherein the sample is supported by the stage, wherein the optical microscope is adapted to communicate with a computer system, wherein the computer system includes a memory device adapted to store each captured image, and wherein the optical microscope is selected from the group consisting of a confocal microscope, a structured illumination microscope, and an interferometer.

15. The method of claim 13, wherein the determining of the first captured image further comprises: determining a maximum characteristic value for each x-y pixel position within a second portion of the x-y pixel positions across all captured images, wherein the second portion of x-y pixel positions includes at least some of the x-y pixel positions included in each captured image; determining a subset of the captured images, wherein only captured images that contain an x-y pixel position maximum characteristic value are included in the subset; and determining that, among all captured images within the subset, the first captured image is focused at a highest z-position compared to all other captured images within the subset.
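The range mode operation of claim 13 replaces the per-step maximum with a per-step count of pixels whose characteristic value falls inside a first range (FIGs. 13 through 16). A minimal sketch under assumed example values; the range bounds and the take-the-largest-count surface rule are illustrative, not recited limits:

```python
def range_mode_counts(z_stack, lo, hi):
    """For each captured image, count the pixels whose characteristic
    value lies within the first range [lo, hi]; pixels outside the
    range are excluded from the count."""
    return [sum(1 for row in img for v in row if lo <= v <= hi)
            for img in z_stack]

# Characteristic values for three z-steps of a 2x2 sensor; the middle
# step has the most pixels inside the assumed first range [5, 10].
stack = [
    [[1.0, 2.0], [6.0, 3.0]],
    [[7.0, 9.0], [5.5, 2.0]],
    [[4.0, 1.0], [0.5, 11.0]],
]
counts = range_mode_counts(stack, 5.0, 10.0)  # [1, 3, 0]
surface_step = counts.index(max(counts))      # step 1
```

Because the count ignores out-of-range pixels entirely, a few bright outliers cannot dominate the surface decision the way they can in summation mode (the failure illustrated in FIG. 11).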
16. The method of claim 13, wherein the first portion of pixels includes all pixels included in the captured image, and wherein the characteristic value of each pixel is selected from the group consisting of intensity, contrast, and fringe contrast.

17. The method of claim 13, wherein the first portion of pixels includes fewer than all pixels included in the captured image.

18. The method of claim 15, wherein the second portion of pixels includes all pixels included in the captured image.

19. The method of claim 15, wherein a spatial relationship between the first portion of pixels and the second portion of pixels is fixed.

20. The method of claim 15, wherein the second portion of pixels is contiguous and centered on the apex of the bump.

21. A method of generating three-dimensional (3-D) information of a sample using an optical microscope, the method comprising: varying a distance between the sample and an objective lens of the optical microscope in predetermined steps; capturing an image at each predetermined step, wherein a first surface of the sample and a second surface of the sample are within a field of view of each of the captured images; determining a characteristic value of each pixel in each captured image; determining a z-position of an apex of a bump of the sample; determining, based on the characteristic value of each pixel in each captured image, a first captured image that is focused on a first surface of the sample; and determining a first distance between the apex of the bump and the first surface.

22. The method of claim 21, wherein the determining of the z-position of the apex comprises: identifying a plurality of x, y, z pixel locations across all captured images, wherein the plurality of x, y, z pixel locations are associated with a top surface of the bump; applying a best-fit algorithm to generate a continuous three-dimensional estimate of the top surface of the bump; and determining a maximum height of the continuous three-dimensional estimate.
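Claim 22 recites fitting a continuous estimate to the (x, y, z) locations of the bump's top surface and taking its maximum height, which lets the apex land between discrete z-steps. One way to sketch this; the paraboloid model and least-squares fit are assumptions standing in for whichever best-fit algorithm the method uses:

```python
import numpy as np

def bump_apex_height(points):
    """Fit a paraboloid z = a*x^2 + b*y^2 + c*x + d*y + e to (x, y, z)
    samples of the bump's top surface and return the maximum height of
    the continuous estimate. Assumes a downward-opening fit (a, b < 0)
    so the gradient-zero point is the apex."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    design = np.column_stack([x**2, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e = np.linalg.lstsq(design, z, rcond=None)[0]
    x0, y0 = -c / (2.0 * a), -d / (2.0 * b)  # where the gradient vanishes
    return a * x0**2 + b * y0**2 + c * x0 + d * y0 + e

# Noise-free samples of z = 5 - x^2 - y^2; the true apex height is 5.
samples = [(x, y, 5.0 - x**2 - y**2)
           for x in (-1, 0, 1) for y in (-1, 0, 1)]
apex = bump_apex_height(samples)  # ~5.0
```

Subtracting the z-position of the first surface (e.g. the passivation top) from this apex height yields the first distance, i.e. the bump height, recited in claim 21.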
TW106127073A 2016-08-10 2017-08-10 Methods of generating three-dimensional (3-d) information of a sample using an optical microscope TWI769172B (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US15/233,812 US20180045937A1 (en) 2016-08-10 2016-08-10 Automated 3-d measurement
US15/233,812 2016-08-10
US15/338,838 2016-10-31
US15/338,838 US10157457B2 (en) 2016-08-10 2016-10-31 Optical measurement of opening dimensions in a wafer
US15/346,607 2016-11-08
US15/346,594 2016-11-08
US15/346,594 US10359613B2 (en) 2016-08-10 2016-11-08 Optical measurement of step size and plated metal thickness
US15/346,607 US10168524B2 (en) 2016-08-10 Optical measurement of bump height

Publications (2)

Publication Number Publication Date
TW201825860A true TW201825860A (en) 2018-07-16
TWI769172B TWI769172B (en) 2022-07-01

Family

ID=61162501

Family Applications (1)

Application Number Title Priority Date Filing Date
TW106127073A TWI769172B (en) 2016-08-10 2017-08-10 Methods of generating three-dimensional (3-d) information of a sample using an optical microscope

Country Status (5)

Country Link
KR (1) KR102226779B1 (en)
CN (1) CN109791039B (en)
SG (1) SG11201901045UA (en)
TW (1) TWI769172B (en)
WO (1) WO2018031574A1 (en)

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09113235A (en) * 1995-10-16 1997-05-02 Dainippon Screen Mfg Co Ltd Three-dimensional measuring method and indication method and three-dimensional measuring device
US6366357B1 (en) * 1998-03-05 2002-04-02 General Scanning, Inc. Method and system for high speed measuring of microscopic targets
TW555954B (en) * 2001-02-28 2003-10-01 Olympus Optical Co Confocal microscope, optical height-measurement method, automatic focusing method
JP2004354469A (en) * 2003-05-27 2004-12-16 Yokogawa Electric Corp Confocal microscope display device
JP2005055540A (en) * 2003-08-07 2005-03-03 Olympus Corp Confocal microscope and height measuring instrument
US7512436B2 (en) * 2004-02-12 2009-03-31 The Regents Of The University Of Michigan Method of evaluating metabolism of the eye
EP2653861B1 (en) 2006-12-14 2014-08-13 Life Technologies Corporation Method for sequencing a nucleic acid using large-scale FET arrays
US7729049B2 (en) * 2007-05-26 2010-06-01 Zeta Instruments, Inc. 3-d optical microscope
TWI414817B (en) * 2010-07-23 2013-11-11 Univ Nat Taipei Technology Linear chromatic confocal microscope system
US9389408B2 (en) 2010-07-23 2016-07-12 Zeta Instruments, Inc. 3D microscope and methods of measuring patterned substrates
US10048480B2 (en) 2011-01-07 2018-08-14 Zeta Instruments, Inc. 3D microscope including insertable components to provide multiple imaging and measurement capabilities
JP6488073B2 (en) * 2014-02-28 2019-03-20 株式会社日立ハイテクノロジーズ Stage apparatus and charged particle beam apparatus using the same

Also Published As

Publication number Publication date
KR102226779B1 (en) 2021-03-10
CN109791039A (en) 2019-05-21
CN109791039B (en) 2021-03-23
WO2018031574A1 (en) 2018-02-15
KR20190029766A (en) 2019-03-20
SG11201901045UA (en) 2019-03-28
TWI769172B (en) 2022-07-01
WO2018031574A9 (en) 2018-04-05

Similar Documents

Publication Publication Date Title
TWI729186B (en) Optical measurement of opening dimensions in a wafer
KR101680558B1 (en) Defect observation method and defect observation device
TWI551855B (en) System and method for inspecting a wafer and a program storage device readable by the system
JP4585876B2 (en) Sample observation method and apparatus using a scanning electron microscope
TWI733877B (en) Optical measurement of step size and plated metal thickness
JP2009259036A (en) Image processing device, image processing method, image processing program, recording medium, and image processing system
WO2014011182A1 (en) Convergence/divergence based depth determination techniques and uses with defocusing imaging
TW202004939A (en) Performance monitoring of design-based alignment
JP2019194670A (en) Range differentiators for auto-focusing in optical imaging systems
US6295384B1 (en) Removing noise caused by artifacts from a digital image signal
US10168524B2 (en) Optical measurement of bump height
TWI751184B (en) Methods of generating three-dimensional (3-d) information of a sample and three-dimensional (3-d) measurement systems
JP2010185692A (en) Device, system and method for inspecting disk surface
TWI769172B (en) Methods of generating three-dimensional (3-d) information of a sample using an optical microscope
JP5891717B2 (en) Hole internal inspection device, hole internal inspection method, and program
CN112197942A (en) Method and system for analyzing imaging performance of ultra-precision machining optical micro-lens array
US6787378B2 (en) Method for measuring height of sphere or hemisphere