CN117242400A - Measuring tool calibration method and related measuring tool



Publication number
CN117242400A
Authority
CN
China
Prior art keywords
target
illumination
point
parameter values
intensity
Prior art date
Legal status
Pending
Application number
CN202280029347.1A
Other languages
Chinese (zh)
Inventor
廉晋
A·E·A·科伦
塞巴斯蒂安努斯·阿德里安努斯·古德恩
林慧全
Current Assignee
ASML Holding NV
Original Assignee
ASML Holding NV
Priority date
Filing date
Publication date
Application filed by ASML Holding NV filed Critical ASML Holding NV
Priority claimed from PCT/EP2022/057659 external-priority patent/WO2022223230A1/en
Publication of CN117242400A publication Critical patent/CN117242400A/en


Abstract

A method and associated apparatus for determining a correction to a measurement of a target is disclosed. The measurement is affected by a target-dependent correction parameter having a dependence on the target and/or a stack comprising the target. The method comprises: obtaining first measurement data relating to measurement of a reference target, the first measurement data comprising at least a first set of intensity parameter values and a corresponding second set of intensity parameter values; and obtaining second measurement data relating to measurement of the reference target, the second measurement data comprising a third set of intensity parameter values. A target-invariant correction parameter is determined from the first and second measurement data, the target-invariant correction parameter being a component of the target-dependent correction parameter that is independent of the target and/or stack; and the correction is determined from the target-invariant correction parameter.

Description

Measuring tool calibration method and related measuring tool
Cross Reference to Related Applications
The present application claims priority from European application 21169097.9, filed on 19 April 2021, and European application 21176858.5, filed on 31 May 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to metrology applications, and in particular to metrology applications in integrated circuit fabrication.
Background
A lithographic apparatus is a machine that is configured to apply a desired pattern onto a substrate. For example, lithographic apparatus can be used in the manufacture of Integrated Circuits (ICs). The lithographic apparatus may, for example, project a pattern (also often referred to as a "design layout" or "design") at a patterning device (e.g., a mask) onto a layer of radiation-sensitive material (resist) that is disposed on a substrate (e.g., a wafer).
To project a pattern onto a substrate, a lithographic apparatus may use electromagnetic radiation. The wavelength of this radiation determines the smallest dimension of the features that can be formed on the substrate. Typical wavelengths currently in use are 365nm (i-line), 248nm, 193nm and 13.5nm. Lithographic apparatus using Extreme Ultraviolet (EUV) radiation (having a wavelength in the range of 4nm to 20nm, e.g. 6.7nm or 13.5 nm) may be used to form smaller features on a substrate than lithographic apparatus using radiation, e.g. having a wavelength of 193 nm.
Low-k1 lithography may be used to process features with dimensions smaller than the classical resolution limit of a lithographic apparatus. In such a process, the resolution formula may be expressed as CD = k1×λ/NA, where λ is the wavelength of the radiation employed, NA is the numerical aperture of the projection optics in the lithographic apparatus, CD is the "critical dimension" (generally the smallest feature size printed, but in this case half-pitch) and k1 is an empirical resolution factor. In general, the smaller k1, the more difficult it becomes to reproduce on the substrate a pattern which resembles the shape and dimensions planned by the circuit designer in order to achieve particular electrical functionality and performance. To overcome these difficulties, sophisticated fine-tuning steps may be applied to the lithographic projection apparatus and/or the design layout, including, for example but not limited to, optimization of NA, customized illumination schemes, or other methods generally defined as resolution enhancement techniques (RET). Alternatively, a tight control loop for controlling the stability of the lithographic apparatus may be used to improve the reproduction of the pattern at low k1.
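By way of illustration only, the resolution formula above may be evaluated numerically; the following short Python sketch uses example values (the wavelengths, NA and k1 values are illustrative assumptions, not tied to any particular apparatus):

def critical_dimension(k1: float, wavelength_nm: float, na: float) -> float:
    # CD = k1 * lambda / NA, returned in nanometres (half-pitch in this context)
    return k1 * wavelength_nm / na

# Example: DUV immersion (193 nm, NA 1.35, k1 0.3) and EUV (13.5 nm, NA 0.33, k1 0.4)
print(critical_dimension(0.3, 193.0, 1.35))  # approx. 42.9 nm
print(critical_dimension(0.4, 13.5, 0.33))   # approx. 16.4 nm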
During the manufacturing process, it is necessary to inspect the manufactured structure and/or measure characteristics of the manufactured structure. Suitable inspection and measurement equipment is known, including, for example, spectroscatterometers and angle-resolved scatterometers. A spectroscatterometer may direct a broadband radiation beam onto the substrate and measure the spectrum (intensity as a function of wavelength) of radiation scattered into a particular narrow angular range. Angle-resolved scatterometers may use a monochromatic radiation beam and measure the intensity of the scattered radiation as a function of angle.
Asymmetry in the scatterometer, which manifests as a sensor error or tool-induced shift (TIS), can lead to difficulties in measuring overlay or other parameters of interest.
Disclosure of Invention
In a first aspect of the invention, there is provided a method of determining a correction for a measurement of a target, the measurement being affected by a target-dependent correction parameter having a dependence on the target and/or a stack comprising the target, the method comprising: obtaining first measurement data relating to measurement of a reference target, the first measurement data comprising at least a first set of intensity parameter values and a corresponding second set of intensity parameter values; obtaining second measurement data relating to measurement of the reference target, the second measurement data comprising a third set of intensity parameter values; determining a target-invariant correction parameter from the first and second measurement data, the target-invariant correction parameter being a component of the target-dependent correction parameter that is independent of the target and/or stack; and determining the correction from the target-invariant correction parameter.
Also disclosed are a processing device with associated program storage, and a computer program, each comprising instructions for a processor which, when executed, cause the processor to perform the method of the first aspect.
Drawings
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying schematic drawings in which:
FIG. 1 depicts a schematic diagram of a lithographic apparatus;
FIG. 2 depicts a schematic overview of a lithography unit;
FIG. 3 depicts a schematic representation of holistic lithography, representing the cooperation between three key technologies to optimize semiconductor manufacturing;
FIG. 4 depicts a schematic overview of a scatterometry apparatus used as a metrology device, which may comprise a dark-field and/or bright-field microscope according to embodiments of the invention;
FIG. 5 comprises: (a) a schematic diagram of a dark-field scatterometer measuring a target according to an embodiment of the invention using a first pair of illumination apertures; (b) a detail of the diffraction spectrum of a target grating for a given illumination direction; (c) a second pair of illumination apertures providing further illumination modes when using the scatterometer for diffraction-based overlay (DBO) measurements; and (d) a third pair of illumination apertures combining the first and second pairs;
FIG. 6 depicts a schematic overview of a scatterometry apparatus for use as a metrology device with an illumination arrangement capable of performing the method of the embodiments;
FIG. 7 depicts a plurality of aperture profiles defined by moving apertures so as to provide an illumination arrangement capable of performing the method of the embodiments;
FIG. 8 schematically depicts a metrology device operable to measure a parameter of interest;
FIG. 9(a), (b), (c) and (d) schematically depict scan paths of the illumination beam; and
FIG. 10 depicts a block diagram of a computer system for controlling a system and/or method as disclosed herein.
Detailed Description
In this context, the terms "radiation" and "beam" are used to encompass all types of electromagnetic radiation, including ultraviolet radiation (e.g. having a wavelength of 365nm, 248nm, 193nm, 157nm or 126 nm) and EUV radiation (extreme ultra-violet radiation, e.g. having a wavelength in the range of about 5nm to 100 nm).
The terms "reticle", "mask" or "patterning device" as used in the present invention may be broadly interpreted as referring to a generic patterning device that can be used to impart an incoming radiation beam with a patterned cross-section that corresponds to a pattern to be created in a target portion of the substrate. In this context, the term "light valve" may also be used. Examples of other such patterning devices, in addition to classical masks (transmissive or reflective, binary, phase-shift, hybrid, etc.), include programmable mirror arrays and programmable LCD arrays.
FIG. 1 schematically depicts a lithographic apparatus LA. The lithographic apparatus LA comprises: an illumination system (also referred to as an illuminator) ILL configured to condition a radiation beam B (e.g., UV radiation, DUV radiation, or EUV radiation); a mask support (e.g. a mask table) MT constructed to support a patterning device (e.g. a mask) MA and connected to a first positioner PM configured to accurately position the patterning device MA in accordance with certain parameters; a substrate support (e.g., a wafer table) WT configured to hold a substrate (e.g., a resist-coated wafer) W connected to a second positioner PW configured to accurately position the substrate support in accordance with certain parameters; and a projection system (e.g., a refractive projection lens system) PS configured to project a pattern imparted to the radiation beam B by patterning device MA onto a target portion C (e.g., a portion including one or more dies) of the substrate W.
In operation, the illumination system ILL receives a radiation beam from a radiation source SO, for example, via the beam delivery system BD. The illumination system ILL may include various types of optical components, such as refractive, reflective, magnetic, electromagnetic, electrostatic and/or other types of optical components, or any combination thereof, for directing, shaping, and/or controlling radiation. The illuminator ILL may be used to condition the radiation beam B to have a desired spatial and angular intensity distribution in its cross-section at the plane of the patterning device MA.
The term "projection system" PS used in the present invention should be broadly interpreted as encompassing various types of projection system, including refractive, reflective, catadioptric, anamorphic, magnetic, electromagnetic and/or electrostatic optical systems, or any combination thereof, as appropriate for the exposure radiation being used, and/or for other factors such as the use of an immersion liquid or the use of a vacuum. Any term "projection lens" used herein may be considered as synonymous with the more general term "projection system" PS.
The lithographic apparatus LA may be of the type: wherein at least a portion of the substrate may be covered by an immersion liquid (e.g. water) having a relatively high refractive index in order to fill the space between the projection system PS and the substrate W, which is also referred to as immersion lithography. Further information about immersion techniques is given in US 6952253, which is incorporated by reference in the present invention.
The lithographic apparatus LA may also be of a type having two or more substrate supports WT (also referred to as "dual stage"). In such a "multiple stage" machine, the substrate supports WT may be used in parallel, and/or steps in preparation of a subsequent exposure of the substrate W located on one of the substrate supports WT may be carried out while another substrate W on the other substrate support WT is being used for exposing a pattern on that other substrate W.
In addition to the substrate support WT, the lithographic apparatus LA may also comprise a measurement table. The measuring platform is arranged to hold the sensor and/or the cleaning device. The sensor may be arranged to measure a property of the projection system PS or a property of the radiation beam B. The measurement platform may hold a plurality of sensors. The cleaning device may be arranged to clean a part of the lithographic apparatus, for example a part of the projection system PS or a part of the system providing the immersion liquid. The measurement table may be moved under the projection system PS when the substrate support WT is remote from the projection system PS.
In operation, the radiation beam B is incident on the patterning device (e.g., mask) MA, which is held on the mask support MT, and is patterned by a pattern (design layout) present on the patterning device MA. Having traversed the mask MA, the radiation beam B passes through the projection system PS, which focuses the beam onto a target portion C of the substrate W. By means of the second positioner PW and position measurement system IF, the substrate support WT can be moved accurately, e.g. so as to position different target portions C in the path of the radiation beam B in a focused and aligned position. Similarly, the first positioner PM and possibly another position sensor (which is not explicitly depicted in fig. 1) can be used to accurately position the patterning device MA with respect to the path of the radiation beam B. Patterning device MA and substrate W may be aligned using mask alignment marks M1, M2 and substrate alignment marks P1, P2. Although the substrate alignment marks P1, P2 occupy dedicated target portions as illustrated, the substrate alignment marks P1, P2 may be located in spaces between target portions. When the substrate alignment marks P1, P2 are located between the target portions C, these substrate alignment marks are referred to as scribe-lane alignment marks.
As shown in FIG. 2, the lithographic apparatus LA may form part of a lithographic cell LC, sometimes also referred to as a lithocell or (litho)cluster, which often also includes apparatus for performing pre- and post-exposure processes on a substrate W. Conventionally, these include, for example, spin coaters SC for depositing resist layers, developers DE for developing exposed resist, chill plates CH and bake plates BK, e.g. for conditioning the temperature of substrates W (e.g., for conditioning solvents in the resist layers). A substrate handler, or robot, RO picks up substrates W from input/output ports I/O1, I/O2, moves them between the different process apparatus, and delivers the substrates W to the loading bay LB of the lithographic apparatus LA. The devices in the lithocell, which are often also collectively referred to as the track or coating and development system, are typically under the control of a track or coating and development system control unit TCU, which may itself be controlled by a supervisory control system SCS, which may also control the lithographic apparatus LA, e.g. via lithography control unit LACU.
In order for the substrates W exposed by the lithographic apparatus LA to be exposed correctly and consistently, it is desirable to inspect substrates to measure properties of patterned structures, such as overlay errors between subsequent layers, line thicknesses, critical dimensions (CD), etc. For this purpose, inspection tools (not shown) may be included in the lithocell LC. Especially if an error is detected while other substrates W of the same batch or lot are still to be exposed or processed, adjustments may, for example, be made to exposures of subsequent substrates or to other processing steps that are to be performed on the substrates W.
An inspection apparatus, which may also be referred to as a metrology apparatus, is used to determine the properties of the substrate W and, in particular, how the properties of different substrates W change or how properties associated with different layers of the same substrate W change from layer to layer. The inspection apparatus may alternatively be configured to identify defects on the substrate W, and may for example be part of the lithographic cell LC, or may be integrated into the lithographic apparatus LA, or may even be a separate device. The inspection apparatus may measure properties on the latent image (the image in the resist layer after exposure), or on the semi-latent image (the image in the resist layer after the post-exposure bake step PEB), or on the developed resist image (where the exposed or unexposed portions of the resist have been removed), or even on the etched image (after a pattern transfer step such as etching).
Typically, the patterning process in a lithographic apparatus LA is one of the most critical steps in the processing, and requires high accuracy of dimensioning and placement of structures on the substrate W. To ensure this high accuracy, three systems may be combined in a so-called "holistic" control environment, as schematically depicted in FIG. 3. One of these systems is the lithographic apparatus LA, which is (virtually) connected to a metrology tool MT (a second system) and to a computer system CL (a third system). The key of such a "holistic" environment is to optimize the cooperation between these three systems to enhance the overall process window and to provide tight control loops to ensure that the patterning performed by the lithographic apparatus LA stays within the process window. The process window defines a range of process parameters (e.g., dose, focus, overlay) within which a specific manufacturing process yields a defined result (e.g., a functional semiconductor device); typically, the process parameters of the lithographic process or patterning process are allowed to vary within this range.
The computer system CL may use (part of) the design layout to be patterned to predict which resolution enhancement techniques to use, and to perform computational lithography simulations and calculations to determine which mask layout and lithographic apparatus settings achieve the largest overall process window of the patterning process (depicted in FIG. 3 by the double arrow in the first scale SC1). Typically, the resolution enhancement techniques are arranged to match the patterning possibilities of the lithographic apparatus LA. The computer system CL may also be used to detect where within the process window the lithographic apparatus LA is currently operating (e.g., using input from the metrology tool MT) to predict whether defects may be present due to, for example, sub-optimal processing (depicted in FIG. 3 by the arrow pointing to "0" in the second scale SC2).
The metrology tool MT may provide input to the computer system CL to enable accurate simulation and prediction, and may provide feedback to the lithographic apparatus LA to identify, for example, possible drift in the calibration state of the lithographic apparatus LA (depicted in fig. 3 by the plurality of arrows in the third scale SC 3).
In lithographic processes, it is desirable to make frequent measurements of the structures created, e.g., for process control and verification. Tools to make such measurements are typically called metrology tools MT. Different types of metrology tools MT for making such measurements are known, including scanning electron microscopes or various forms of scatterometer metrology tools MT. Scatterometers are versatile instruments which allow measurement of parameters of a lithographic process by having a sensor in the pupil or in a plane conjugate with the pupil of the objective of the scatterometer (measurements usually referred to as pupil-based measurements), or by having a sensor in the image plane or in a plane conjugate with the image plane (in which case the measurements are usually referred to as image- or field-based measurements). Such scatterometers and the associated measurement techniques are further described in patent applications US20100328655, US2011102753A1, US20120044470A, US20110249244, US20110026032 or EP1,628,164A, incorporated herein by reference in their entirety. The aforementioned scatterometers may measure gratings using light from the soft x-ray and visible to near-IR wavelength range.
In a first embodiment, the scatterometer MT is an angle-resolved scatterometer. In such a scatterometer, a reconstruction method may be applied to the measured signal to reconstruct or calculate the properties of the grating. Such reconstruction may for example be caused by simulating the interaction of the scattered radiation with a mathematical model of the target structure and comparing the simulation result with the measured result. Parameters of the mathematical model are adjusted until the simulated interactions produce a diffraction pattern similar to that observed from a real target.
In a second embodiment, the scatterometer MT is a spectroscatterometer MT. In such a spectroscatterometer MT, radiation emitted by a radiation source is directed onto a target, and reflected or scattered radiation from the target is directed to a spectrometer detector that measures the spectrum of the specularly reflected radiation (i.e. a measurement of intensity as a function of wavelength). From such data, the structure or profile of the target that produced the detected spectrum can be reconstructed, for example, by rigorous coupled wave analysis and nonlinear regression or by comparison with a library of simulated spectra.
In a third embodiment, the scatterometer MT is an ellipsometer. The ellipsometer allows the parameters of the lithographic process to be determined by measuring the scattered radiation for each polarization state. Such a metrology device emits polarized light (such as linearly polarized light, circularly polarized light or elliptically polarized light) by using, for example, a suitable polarization filter in the illumination section of the metrology device. Sources suitable for use in the metrology apparatus may also provide polarized radiation. Various embodiments of existing ellipsometers are described in U.S. patent applications 11/451,599, 11/708,678, 12/256,780, 12/486,449, 12/920,968, 12/922,587, 13/000,229, 13/033,135, 13/533,110, and 13/891,410, which are incorporated by reference herein in their entireties.
In an embodiment of the scatterometer MT, the scatterometer MT is adapted to measure the overlay of two misaligned gratings or periodic structures by measuring the reflection spectrum and/or detecting asymmetry in the configuration, the asymmetry being related to the extent of the overlay. The two (typically overlapping) grating structures may be applied in two different layers (not necessarily consecutive layers), and may be formed at substantially the same position on the wafer. The scatterometer may have a symmetrical detection configuration as described, e.g., in co-owned patent application EP1,628,164A, such that any asymmetry is clearly distinguishable. This provides a simple and straightforward way to measure misalignment in gratings. Further examples for measuring overlay error between two layers containing periodic structures as target through asymmetry of the periodic structures may be found in PCT patent application publication no. WO 2011/012624 or US patent application US 20160161863, incorporated herein by reference in their entirety.
Other parameters of interest may be focus and dose. Focus and dose may be determined simultaneously by scatterometry (or alternatively by scanning electron microscopy), as described in US patent application US2011-0249244, incorporated herein by reference in its entirety. A single structure may be used which has a unique combination of critical dimension and sidewall angle measurements for each point in a focus energy matrix (FEM, also referred to as a focus exposure matrix). If these unique combinations of critical dimension and sidewall angle are available, the focus and dose values may be uniquely determined from these measurements.
The metrology target may be the totality of the composite grating formed mainly in the resist by a lithographic process and also formed after, for example, an etching process. Typically, the pitch and linewidth of the structures in the grating are largely dependent on the measurement optics (in particular the NA of the optics) to be able to obtain the diffraction orders from the metrology targets. As indicated previously, the diffraction signal may be used to determine an offset (also referred to as an "overlay") between two layers or may be used to reconstruct at least a portion of the original grating as produced by the lithographic process. Such reconstruction may be used to provide guidance of the quality of the lithographic process and may be used to control at least a portion of the lithographic process. The target may have smaller sub-segments configured to mimic the dimensions of the functional portion of the design layout in the target. Due to such sub-segmentation, the target will behave more like a functional part of the design layout, so that the overall process parameter measurement is better like the functional part of the design layout. The target may be measured in either an underfill mode or an overfill mode. In the underfill mode, the measurement beam produces a spot less than the entire target. In the overfill mode, the measurement beam produces a spot greater than the entire target. In this overfill mode, different targets can also be measured simultaneously, thereby determining different process parameters simultaneously.
The overall measurement quality of a lithographic parameter using a specific target is at least partially determined by the measurement recipe used to measure this lithographic parameter. The term "substrate measurement recipe" may include one or more parameters of the measurement itself, one or more parameters of the one or more patterns measured, or both. For example, if the measurement used in a substrate measurement recipe is a diffraction-based optical measurement, one or more of the parameters of the measurement may include the wavelength of the radiation, the polarization of the radiation, the angle of incidence of radiation relative to the substrate, the orientation of radiation relative to a pattern on the substrate, and so on. One of the criteria to select a measurement recipe may, for example, be the sensitivity of one of the measurement parameters to processing variations. More examples are described in US patent application US2016-0161863 and published US patent application US20160370717A1, incorporated herein by reference in their entirety.
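Purely as an illustration, such a substrate measurement recipe might be represented as a simple set of parameters; the field names and values below are hypothetical and do not correspond to any actual tool interface:

measurement_recipe = {
    "wavelength_nm": 532,           # wavelength of the measurement radiation
    "polarization": "H",            # polarization of the radiation
    "angle_of_incidence_deg": 70,   # angle of incidence relative to the substrate
    "target_orientation_deg": 0,    # orientation of radiation relative to the pattern
    "target_type": "uDBO",          # type of pattern measured
}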
A metrology apparatus, such as a scatterometer, is depicted in FIG. 4. It comprises a broadband (white light) radiation projector 2 which projects radiation onto a substrate 6. The reflected or scattered radiation is passed to a spectrometer detector 4, which measures a spectrum 6 (i.e., a measurement of intensity as a function of wavelength) of the specularly reflected radiation. From this data, the structure or profile 8 giving rise to the detected spectrum may be reconstructed by processing unit PU, e.g., by rigorous coupled wave analysis and non-linear regression, or by comparison with a library of simulated spectra as shown at the bottom of FIG. 3. In general, for the reconstruction, the general form of the structure is known and some parameters are assumed from knowledge of the process by which the structure was made, leaving only a few parameters of the structure to be determined from the scatterometry data. Such a scatterometer may be configured as a normal-incidence scatterometer or an oblique-incidence scatterometer.
To monitor the lithographic process, parameters of the patterned substrate are measured. Parameters may include, for example, the overlay error between successive layers formed in or on the patterned substrate. This measurement may be performed on product substrates and/or on dedicated metrology targets. There are various techniques for making measurements of the microscopic structures formed in lithographic processes, including the use of a scanning electron microscope and various specialized tools. A fast and non-invasive form of specialized inspection tool is a scatterometer in which a beam of radiation is directed onto a target on the surface of the substrate and properties of the scattered or reflected beam are measured.
Examples of known scatterometers include angle-resolved scatterometers of the type described in US2006033921A1 and US2010201963A1. The targets used by such scatterometers are relatively large (e.g., 40 μm by 40 μm) gratings, and the measurement beam generates a spot that is smaller than the grating (i.e., the grating is underfilled). In addition to measurement of feature shapes by reconstruction, diffraction-based overlay can be measured using such apparatus, as described in published patent application US2006066855A1. Diffraction-based overlay metrology using dark-field imaging of the diffraction orders enables overlay measurements on smaller targets. Examples of dark-field imaging metrology can be found in international patent applications WO 2009/078708 and WO 2009/106279, which are hereby incorporated by reference in their entirety. Further developments of the technique have been described in published patent publications US20110027704A, US20110043791A, US2011102753A1, US20120044470A, US20120123581A, US20130258310A, US20130271740A and WO2013178422A1. These targets can be smaller than the illumination spot and may be surrounded by product structures on a wafer. Multiple gratings can be measured in one image using a composite grating target. The contents of all of these applications are also incorporated herein by reference.
In a diffraction-based dark-field metrology apparatus, a beam of radiation is directed onto a metrology target and one or more properties of the scattered radiation are measured in order to determine a property of interest of the target. The properties of the scattered radiation may include, for example, intensity at a single scattering angle (e.g., as a function of wavelength), or intensity at one or more wavelengths as a function of scattering angle.
FIG. 5(a) presents an embodiment of a metrology apparatus and, more specifically, a dark-field scatterometer. A target T and the diffracted rays of measurement radiation used to illuminate the target are illustrated in more detail in FIG. 5(b). The metrology apparatus illustrated is of a type known as a dark-field metrology apparatus. The metrology apparatus may be a stand-alone device or incorporated in the lithographic apparatus LA, e.g., at the measurement station, or in the lithographic cell LC. An optical axis, which has several branches throughout the apparatus, is represented by the dashed line O. In this apparatus, light emitted by a source 11 (e.g., a xenon lamp) is directed onto substrate W via a beam splitter 15 by an optical system comprising lenses 12, 14 and objective lens 16. These lenses are arranged in a double sequence of a 4F arrangement. A different lens arrangement can be used, provided that it still provides a substrate image onto a detector and simultaneously allows access to an intermediate pupil plane for spatial-frequency filtering. The angular range at which the radiation is incident on the substrate can therefore be selected by defining a spatial intensity distribution in a plane that presents the spatial spectrum of the substrate plane, here referred to as a (conjugate) pupil plane. In particular, this can be done by inserting an aperture plate 13 of suitable form between lenses 12 and 14, in a plane which is a back-projected image of the objective lens pupil plane. In the example illustrated, aperture plate 13 has different forms, labeled 13N and 13S, allowing different illumination modes to be selected. The illumination system in the present example forms an off-axis illumination mode. In the first illumination mode, aperture plate 13N provides off-axis illumination from a direction designated, for the sake of description only, as "north". In the second illumination mode, aperture plate 13S is used to provide similar illumination, but from the opposite direction, labeled "south". Other modes of illumination are possible by using different apertures. The rest of the pupil plane is desirably dark, as any unnecessary light outside the desired illumination mode will interfere with the desired measurement signals.
As shown in FIG. 5(b), the target T is placed with the substrate W normal to the optical axis O of the objective lens 16. The substrate W may be supported by a support (not shown). A ray of measurement radiation I impinging on target T at an angle off the axis O gives rise to a zeroth-order ray (solid line 0) and two first-order rays (dash-dot line denoting the +1 order and two-dot chain line denoting the −1 order). It should be remembered that with a small, overfilled target, these rays are just one of many parallel rays covering the area of the substrate including metrology target T and other features. Since the apertures in plate 13 have a finite width (necessary to admit a useful quantity of light), the incident rays I will in fact occupy a range of angles, and the diffracted rays 0 and +1/−1 will be spread out somewhat. According to the point spread function of a small target, each order +1 and −1 will be further spread over a range of angles, not a single ideal ray as shown.
At least one of the first orders diffracted by the target T on substrate W is collected by the objective lens 16 and directed back through the beam splitter 15. Returning to FIG. 5(a), both the first and second illumination modes are illustrated by designating diametrically opposite apertures labeled north (N) and south (S). When the incident ray I of the measurement radiation comes from the north side of the optical axis, that is when the first illumination mode is applied using aperture plate 13N, the +1 diffracted rays, labeled +1(N), enter the objective lens 16. Conversely, when the second illumination mode is applied using aperture plate 13S, the −1 diffracted rays (labeled −1(S)) are the ones which enter the lens 16.
The second beam splitter 17 splits the diffracted beam into two measurement branches. In the first measurement branch, the optical system 18 forms a diffraction spectrum (pupil plane image) of the target on a first sensor 19 (e.g. a CCD or CMOS sensor) using the zero-order diffracted beam and the first-order diffracted beam. Each diffraction order illuminates a different point on the sensor so that image processing can compare and contrast multiple orders. The pupil plane image acquired by the sensor 19 may be used to focus the metrology device and/or normalize the intensity measurements of the first order beam. Pupil plane images can also be used for many measurement purposes, such as reconstruction.
In the second measurement branch, the optical systems 20, 22 form an image of the target T on a sensor 23 (e.g., a CCD or CMOS sensor). In the second measurement branch, an aperture stop 21 is provided in a plane conjugate to the pupil plane. The aperture stop 21 functions to block the zeroth-order diffracted beam, so that the image of the target formed on sensor 23 is formed only from the −1 or +1 first-order beam. The images captured by sensors 19 and 23 are output to processor PU which processes the images, the function of which will depend on the particular type of measurement being performed. Note that the term "image" is used here in a broad sense: an image of the grating lines as such will not be formed if only one of the −1 and +1 orders is present.
The particular form of aperture plate 13 and field stop 21 shown in fig. 5 is merely an example. In another embodiment of the invention, on-axis illumination of the target is used, and an aperture stop with an off-axis aperture is used to pass substantially only one first order diffracted light to the sensor. In still other embodiments, second, third and higher order beams (not shown in fig. 5) may be used in the measurement instead of or in addition to the first order beam.
In order to make the measurement radiation adaptable to these different types of measurement, the aperture plate 13 may comprise a number of aperture patterns formed around a disc, which rotates to bring a desired pattern into place. Note that aperture plate 13N or 13S can only be used to measure gratings oriented in one direction (X or Y, depending on the set-up). For measurement of an orthogonal grating, rotation of the target through 90° and 270° might be implemented. Different aperture plates are shown in FIGS. 5(c) and (d). The use of these, and numerous other variations and applications of the apparatus, are described in the previously published applications mentioned above.
Measurement of a target in dark-field metrology may comprise, for example, measuring the +1st-order diffraction intensity I+1 and the −1st-order diffraction intensity I−1, and calculating the intensity asymmetry A = I+1 − I−1, which is indicative of asymmetry in the target. The metrology target may comprise one or more grating structures from which a parameter of interest may be inferred from such intensity asymmetry measurements, e.g., the target is designed such that the asymmetry in the target varies with the parameter of interest. For example, in overlay metrology a target may comprise at least one composite grating formed by at least a pair of overlapping sub-gratings that are patterned in different layers of the semiconductor device. Asymmetry of the target will therefore be dependent on the alignment of the two layers and hence on overlay. Other targets may be formed with structures which are exposed with different degrees of variation based on the focus setting used during the exposure; the measurement of which enables that focus setting to be inferred back (again through intensity asymmetry).
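A minimal sketch (in Python) of the intensity asymmetry computation described above, assuming a dark-field image with one region of interest per first diffraction order; the image, ROI coordinates and function names are illustrative assumptions only:

import numpy as np

def roi_mean(image: np.ndarray, roi: tuple) -> float:
    # Average intensity over a region of interest (ROI) of a dark-field image
    return float(image[roi].mean())

def intensity_asymmetry(image: np.ndarray, roi_plus: tuple, roi_minus: tuple) -> float:
    # A = I(+1) - I(-1); indicative of target asymmetry (e.g. overlay)
    return roi_mean(image, roi_plus) - roi_mean(image, roi_minus)

# Example with a synthetic image and assumed ROI positions
image = np.random.rand(256, 256)
A = intensity_asymmetry(image,
                        roi_plus=(slice(40, 100), slice(40, 100)),
                        roi_minus=(slice(150, 210), slice(150, 210)))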
Metrology measurements, such as those performed using the apparatus and methods described above, may be subject to a sensor error ε (sometimes referred to in the art as tool-induced shift (TIS)), which can reduce the accuracy of the measurement. The sensor error ε arises because the sensor optics of the measurement sensor are not perfect and may be asymmetric.
The result of this sensor error is a contribution to the measured intensity signal. With respect to the dark-field metrology measurement just described, the asymmetry A now includes the sensor error contribution: A = I+1(1+ε) − I−1(1−ε), where I+1 and I−1 are the intensities of the +1 and −1 order diffraction in the absence of sensor error.
US7656518 (incorporated herein by reference) discloses a method of measuring and correcting for this sensor error. A target pattern (e.g., a grating or periodic structure) is illuminated twice: a first image is obtained at a first substrate orientation (e.g., 0°) and a second image is obtained at a second substrate orientation rotated 180° with respect to the first. One of these images is rotated 180° relative to the other and subtracted from it. In this way, the asymmetry of the scatterometer can be corrected for.
In a DBO setup (e.g., based on angle-resolved pupil plane measurements), a target pattern is illuminated with radiation, and the intensity of the resulting scattered radiation (typically integrated over time) is measured at a plurality of predetermined locations in a two-dimensional array at the detector (e.g., at each detector pixel). The target portion is then rotated about 180 deg. in the plane of the substrate or parallel to the plane of the substrate (i.e. a plane substantially perpendicular to the optical axis of the sensor optics) and the measurement is again taken. Such rotation of the target portion may be achieved via rotation of the substrate, the sensor, or both. Sensor asymmetry can be calculated and stored on a pixel-by-pixel basis. This means that intensity measurements at substrate rotations of 0 deg. and 180 deg. are made for each pixel, each pixel corresponding to a corresponding angular position relative to the target pattern, to obtain a pair of two-dimensional angular scatter spectra. One of these two-dimensional angular scatter spectra is rotated 180 °. If there is no sensor asymmetry, the rotated two-dimensional angular scatter spectrum should be the same as the other, non-rotated angular scatter spectrum. Any sensor asymmetry or sensor error will be displayed as a difference between the two images. Thus, the asymmetry error correction value (or sensor error correction value) for each pixel may be calculated by subtracting the intensity of one of the two-dimensional angular scatter spectra from the intensity of the other two-dimensional angular scatter spectrum. This value may then be divided by 2 for each pixel.
For μDBO (micro-diffraction based overlay), the μDBO image is captured at the image plane. Typically, a μDBO image may comprise one or more regions of interest (ROIs), each ROI being associated with a particular diffraction order. For example, a μDBO image may comprise two ROIs (e.g., a first ROI for the +1 order diffraction and a second ROI for the −1 order diffraction), or four ROIs (+1 and −1 orders for each of two grating directions). In other examples, the +1 and −1 orders may be imaged sequentially, in which case each μDBO image may comprise only a single ROI (one direction) or two ROIs (two directions). A single intensity value is typically determined for each ROI (e.g., as an average over the ROI). Correction of the sensor error in a μDBO approach is therefore based on the selected ROIs and the average intensity therein, rather than at the pixel level. The remainder of the description will concentrate on μDBO embodiments, but the concepts disclosed are applicable to DBO and other measurement methods (e.g., including measurement of other parameters such as focus; more specifically DBF and μDBF).
For a 0 degree orientation, the measured intensities of the +1 and −1 diffraction orders are respectively I+1^0 and I−1^0, and for a 180 degree orientation the measured intensities of the +1 and −1 diffraction orders are respectively I+1^180 and I−1^180. Since the intensities in the absence of sensor error would be the same in both orientations, i.e., both orientations share the same error-free intensities I+1 and I−1, then:
I+1^0 = I+1(1+ε); I−1^0 = I−1(1−ε); I+1^180 = I+1(1−ε); I−1^180 = I−1(1+ε).
The sensor error (and hence the corresponding correction value) can then be determined from the measured intensities I+1^0 and I+1^180 (and equivalently from I−1^0 and I−1^180) as:
ε = (I+1^0 − I+1^180) / (I+1^0 + I+1^180) = (I−1^180 − I−1^0) / (I−1^180 + I−1^0).
the correction value may be used directly to correct further measurements of the target portion with a 0 deg. substrate rotation. This may be accomplished by dividing the measured intensity value(s) by (1 +. Epsilon.) or (1 +. Epsilon.) as appropriate. Correction values can be saved and applied to many measurements, thereby reducing the impact on throughput. This is because the correction value does not substantially change over time.
The calibration method described requires the sampling of many targets over the wafer in order to set up each correction recipe. In addition, the method is strongly target dependent; this means that the tool needs to be recalibrated for each different use case (e.g., for a different stack, or as the stack varies). In other words, the sensor error correction parameter depends on local variations of the stack. Because the stack may change rapidly during the lithographic process, new calibrations should be performed at regular intervals. This requires many wafer rotations, which cost a large amount of expensive stage and measurement time.
It would be desirable to perform the sensor error calibration with a smaller demand on stage time and/or fewer wafer rotations.
The method comprises determining a target-invariant correction parameter ε(px, py). This describes the contribution of the common sensor error of the sensor optics resulting from illumination through an illumination location, or pixel (px, py), of the illumination pupil (e.g., the Fourier plane or angle-resolved plane of the detection optics). Illumination from each individual illumination pixel or location will have a unique path through the sensor (detection) optics. Note that the sensor error itself is a target-independent parameter, being caused purely by lens aberrations, system transmission and the like. However, calibrating/determining the sensor error requires the use of a target, and therefore the calibrated/determined correction is in practice target dependent. A calibration method is described herein which can decouple or separate the target contribution from the determined sensor error contribution.
It can be shown that the actual target-dependent correction parameter εTD(px, py) is a combination of the target-invariant correction parameter ε(px, py) and a target-dependent intensity distribution WTD(px, py) of the scattered light in the detection pupil plane caused by the specific structure and/or stack. In this context, target dependent describes a dependence on the structure and/or stack, i.e., on the sample or structure being measured. More specifically, the target-dependent correction parameter εTD(px, py) (e.g., the sensor error) is the product of the target-invariant correction parameter ε(px, py) and the target-dependent intensity distribution WTD(px, py); i.e.:
εTD(px, py) = WTD(px, py) ε(px, py)
Thus, the sensor error for a target A will be εA(px, py) = WA(px, py) ε(px, py) and the sensor error for a target B will be εB(px, py) = WB(px, py) ε(px, py). For μDBO measurements, there will typically only be a single εTD value (e.g., based on the average intensity over the ROI) rather than a per-pixel distribution. In DBO, since DBO is based on pupil measurements, εTD(px, py) can be measured directly.
The method comprises determining the target-dependent correction parameter for a reference target, or fiducial, from first measurement data relating to the fiducial, to obtain a (target-dependent) reference correction parameter εFID(px, py). The first measurement data may comprise a first set of intensity parameter values relating to point illumination at a plurality of first point illumination locations in the pupil plane, and a second set of intensity parameter values relating to point illumination at a plurality of second point illumination locations in the pupil plane. Each of the plurality of first point illumination locations has a corresponding point-symmetric location within the plurality of second point illumination locations. For example, the first set of intensity parameter values may be obtained from point illumination over a plurality of first point illumination locations located in a first region (e.g., a first half) of the illumination pupil, and the second set of intensity parameter values may be obtained from point illumination over a plurality of second point illumination locations located in a second region (e.g., a second half) of the illumination pupil, the second region being located point-symmetrically with respect to the first region within the illumination pupil. A further description of how the first and second sets of intensity parameter values may be obtained is given later herein.
The method further comprises measuring second measurement data on the same fiducial, the second measurement data comprising a third set of intensity parameter values, or a reference-target-dependent intensity distribution WFID(px, py) (e.g., an angle-resolved intensity distribution). Once these quantities have been measured, the relationship above can be used to determine the target-invariant correction parameter ε(px, py) (i.e., εFID(px, py) divided by WFID(px, py)). In this way, the error contribution of the optics is separated from the error contribution of stack variations. The intensity parameter may be an intensity captured at a detector, or a related metric (e.g., a normalized or otherwise processed intensity). A further description of how the third set of intensity parameter values may be obtained is given later herein.
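A sketch of the separation step described above, under the assumption that the reference correction εFID and the angle-resolved distribution WFID are available as per-pixel arrays over the illumination pupil; the guard against very weak pixels is an added illustrative detail:

import numpy as np

def target_invariant_correction(eps_fid: np.ndarray, w_fid: np.ndarray,
                                min_intensity: float = 1e-6) -> np.ndarray:
    # epsilon(px, py) = eps_FID(px, py) / W_FID(px, py), per illumination-pupil pixel
    w = np.where(np.abs(w_fid) < min_intensity, np.nan, w_fid)  # avoid dividing by ~0
    return eps_fid / w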
The fiducial may be any suitable diffractive structure (e.g., a grating), and may be located, for example, on a stage of the metrology apparatus (e.g., outside the perimeter of a loaded wafer on the stage). In a specific example, the fiducial may comprise a small piece of silicon with a diffractive structure in resist thereon. The fiducial may be mounted on a rotatable portion of the wafer stage, such that it can be rotated (e.g., through 180 degrees) independently of the wafer stage and/or wafer. However, if the fiducial is known to be sufficiently symmetric, the calibration may not require any rotation of the fiducial.
Using the scatterometry apparatus already described (e.g., in relation to FIG. 5), which typically uses an incoherent illumination mode, measuring the quantity εFID(px, py) is not straightforward. Instead, embodiments comprise using a scatterometry apparatus with an illumination arrangement optimized for this sensor error calibration. Such illumination may comprise a steerable point illumination source or narrow beam of (e.g., coherent) radiation which can be scanned over the illumination pupil (e.g., over the plurality of first point illumination locations and the plurality of second point illumination locations). As such, the illumination point source or beam may be a laser beam or otherwise laser-like, having a small étendue, and may be used in a partially coherent illumination mode to perform partially coherent imaging; integrating over the temporal trajectory of a full scan of the point illumination source over the pupil is equivalent to partially coherent imaging. The area of each point illumination location within the illumination pupil, or of the illumination beam, may for example correspond to a single pixel, or a small number of pixels (e.g., fewer than five or fewer than three), of a detector at the detection pupil plane (or equivalent plane), assuming such a detector is present.
At each point along the scan path of the illumination point within the illumination pupil, an image is captured, for example as measured using an imaging branch (detector at an image plane) or a pupil branch (detector at a pupil plane). In a particular implementation, for example, a μDBO image is obtained at each of these locations, from which an intensity parameter value is determined (e.g., as an average within an ROI). As such, each intensity parameter value within the first set of intensity parameter values may be obtained from a μDBO image captured with point illumination at a particular location in the first portion of the illumination pupil, and each intensity parameter value within the second set of intensity parameter values may be obtained from a μDBO image captured with point illumination at a particular location in the second portion of the illumination pupil, such that for each intensity parameter value in the first set there is a corresponding intensity parameter value in the second set, in that the respective values relate to illumination from point-symmetric illumination locations in the illumination pupil.
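The acquisition of the first and second sets of intensity parameter values might be sketched as follows; the tool interface (grab_udbo_image), the scan point list and the ROI are assumptions for illustration only:

def acquire_intensity_sets(scan_points, grab_udbo_image, roi):
    # scan_points: iterable of (px, py) coordinates in the first half of the illumination pupil
    # grab_udbo_image: callable (px, py) -> 2-D uDBO image for point illumination at (px, py)
    # roi: tuple of slices selecting the region of interest in the image
    first_set, second_set = {}, {}
    for (px, py) in scan_points:
        first_set[(px, py)] = float(grab_udbo_image(px, py)[roi].mean())
        # point-symmetric location in the other half of the illumination pupil
        second_set[(px, py)] = float(grab_udbo_image(-px, -py)[roi].mean())
    return first_set, second_set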
At each of these point illumination locations, the illuminating radiation and the resulting scattered radiation (which has been scattered/diffracted by the fiducial) will travel a unique path through the sensor optics. The target-dependent correction parameter for the fiducial, εFID(px, py), can be calculated from each point-symmetric pair (i.e., pairs of illumination pupil points which are symmetric about the center of the illumination pupil, each point-symmetric pair corresponding to a different pixel described by the pixel coordinates (px, py) associated with the first (or second) point illumination locations over one half of the illumination pupil, as illustrated in FIG. 6). For an example single-orientation fiducial approach (i.e., where the fiducial has a high degree of symmetry), this may be done using the following relationships:
I1(px, py) = I+1(px, py)(1+εFID(px, py)); I2(px, py) = I−1(px, py)(1−εFID(px, py))
where I1(px, py) and I2(px, py) are, respectively, the first set of intensity parameter values over the plurality of first point illumination locations and the corresponding second set of intensity parameter values over the plurality of (point-symmetric) second point illumination locations, and I+1(px, py), I−1(px, py) are the intensities which would have been measured in the absence of sensor error (i.e., with a symmetric sensor). These equations can be solved for εFID(px, py) from the measured intensity values, provided the fiducial is sufficiently symmetric that I+1 = I−1 can be assumed for each coordinate (px, py).
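Continuing the sketch above for a sufficiently symmetric fiducial (I+1 = I−1 for each coordinate), a per-coordinate correction parameter can be computed from the two sets of values; this is an illustrative sketch, not the only possible formulation:

def fiducial_correction(first_set: dict, second_set: dict) -> dict:
    # eps_FID(px, py) = (I1 - I2) / (I1 + I2) for each point-symmetric pair
    return {pt: (first_set[pt] - second_set[pt]) / (first_set[pt] + second_set[pt])
            for pt in first_set}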
In the case that the fiducial is not symmetric in this way, or at least cannot be assumed to be so, then the fiducial is measured in a first orientation and a second orientation, yielding measured values I1,0(px, py), I2,0(px, py), I1,180(px, py), I2,180(px, py) (where the subscripts 0, 180 denote the orientation), thereby obtaining two further sets of intensity parameter values (a fourth set of intensity parameter values and a fifth set of intensity parameter values), the fourth set corresponding to the first set in the opposite orientation and the fifth set corresponding to the second set in the opposite orientation. The target-dependent correction parameter for the fiducial, εFID(px, py), can then be determined per illumination pupil coordinate from these values, for example via:
εFID(px, py) = [(I1,0 + I1,180) − (I2,0 + I2,180)] / [(I1,0 + I1,180) + (I2,0 + I2,180)].
the measured intensities (i.e., the first, second, fourth, and fifth sets of intensity parameter values associated with the point illumination source) may be detected by a camera in a detection pupil plane or a detection image plane (e.g., a μdbo type measurement). The latter is possible because when a certain point in the illumination pupil is illuminated, said certain point in the illumination pupil has a unique path through the optics (given a certain target with a certain pitch), and thus the total intensity measured in the image plane is the intensity of the unique point in the detection pupil plane.
FIG. 6 depicts a proposed metrology device which may be used to perform the fiducial-based calibration as described. The apparatus is shown in simplified form, and may be similar to that of FIG. 5 other than in terms of the illumination arrangement. Coherent (or partially coherent) illumination radiation ILL may be delivered by a single-mode fiber SMF (or other suitable delivery method) to an input lens INL and a steering element or scanning mirror (or galvo mirror) SM. The scanning mirror is only one example of a steering element; another option may comprise a (e.g., phase) spatial light modulator (SLM) which adds a programmable phase gradient to the wavefront. A binary amplitude SLM (e.g., a digital micromirror device DMD) may also be used to modulate the phase. The scanning mirror SM or steering element is controlled so as to scan the illumination ILL over the illumination pupil. The scanned beam is delivered to a fiducial FID (or other target) on the stage STA via a beam splitter BS, a lens system L1 and an objective lens OB. The scattered radiation SC is directed by the beam splitter BS to a pupil imaging branch comprising lens systems L2, L3, a wedge W and a detector DET.
Fig. 6 also shows the pupil plane PP. In the particular arrangement illustrated, the illumination pupil comprises the upper-left and lower-right quadrants of the pupil plane (or a conjugate plane thereof) of the objective lens OB. The other two quadrants define the detection pupil. This is purely an exemplary arrangement and other illumination pupil profiles are possible, including those in which the illumination does not pass through the objective OB. The actual illumination arrangement is not important to the concepts disclosed herein. The scanned illumination path is shown as a meandering path through the two illumination quadrants. For each pixel coordinate (p_x, p_y), a sensor term ε_FID(p_x, p_y) is calculated from the pair of intensity values of a pair of point-symmetric illumination pupil points. Thus, for the exemplary coordinate system used herein, the illustrated pixel coordinates describe only a first portion or half (one quadrant) of the illumination pupil, associated with the plurality of first point illumination locations and the first set of intensity parameter values. Each of the plurality of second point illumination locations (associated with the second set of intensity parameter values) is assigned the coordinates of its point-symmetric location among the plurality of first point illumination locations. For illustration, a single such pair of illumination pupil points IPP1, IPP2 is shown. Other aspects of the illumination branch may take the form of the illumination branch of Fig. 5; for example, the illumination branch may comprise elements labeled 12, 13 and 14 between the input lens INL and the scanning mirror SM.
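As a minimal sketch of how the point-symmetric pairing in Fig. 6 might be enumerated, the following routine lists first point illumination locations in one illumination quadrant and assigns each its point-symmetric partner in the opposite quadrant. The normalised pupil coordinates, grid spacing and quadrant choice are assumptions for illustration only.

```python
import numpy as np

def point_symmetric_pairs(n=16, na_max=1.0):
    """Enumerate (pixel coordinate, first location, second location) triples.

    First point illumination locations fill the upper-left quadrant of a
    normalised illumination pupil; each second location is the point-symmetric
    partner about the pupil centre (lower-right quadrant), so both locations
    share the same pixel coordinate (p_x, p_y) in the correction maps.
    """
    coords = (np.arange(n) + 0.5) / n * na_max
    pairs = []
    for p_x, x in enumerate(-coords):            # x < 0: left half of the pupil
        for p_y, y in enumerate(coords):         # y > 0: upper half of the pupil
            if x * x + y * y <= na_max ** 2:     # stay inside the pupil (NA) circle
                first = (x, y)                   # e.g. IPP1
                second = (-x, -y)                # point-symmetric partner, e.g. IPP2
                pairs.append(((p_x, p_y), first, second))
    return pairs
```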
FIG. 7 illustrates an alternative illumination concept for determining the sensor term ε_FID(p_x, p_y). The method may be used with an apparatus such as that of Fig. 5 (or similar), having an illumination branch as shown in the figure. To obtain the necessary scanning beam (point illumination source) from such an illumination branch, one or two moving apertures may be provided. The figure shows the aperture AP in five different positions in the illumination pupil (here again illustrated as two quadrants of the objective pupil); of course, there would be many more positions to cover the illumination pupil over a complete scan. Note that this illustrated example shows simultaneous illumination in two directions (two apertures providing two illumination point sources). This is not necessary, but it of course halves the measurement time. The aperture size may be matched to the width of the Fourier transform of the illumination mode selector of the metrology apparatus.
The quantity W_FID(p_x, p_y) (i.e., the third set of intensity parameter values) may be measured (e.g., directly) using conventional pupil plane measurements, for example using the pupil imaging branch of the apparatus of Fig. 5 or Fig. 6, to obtain an angle-resolved intensity distribution. In the latter case, the incoherent illumination mode typically used for such measurements may be approximated by a partially incoherent mode comprising integrating the measured intensity on the detector during a fast scan of the point illumination source over the illumination pupil during acquisition. Alternatively, this information may be obtained via the imaging branch detector (image sensor) by integrating per point or location of the illumination pupil, each such point corresponding to a point in the detection pupil.
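The alternative mentioned above (obtaining the angle-resolved distribution via the imaging-branch detector) can be sketched as follows; the data layout, with one image-plane frame stored per illumination pupil point, is an assumption made here for illustration.

```python
import numpy as np

def w_fid_from_point_scan(frames_by_pupil_point):
    """Build an angle-resolved map W_FID(p_x, p_y) from a point-illumination scan.

    frames_by_pupil_point maps each illumination-pupil pixel coordinate
    (p_x, p_y) to the image-plane frame acquired while that point was lit.
    Integrating each frame gives one pupil intensity value, since for a given
    target pitch each illumination point corresponds to a unique detection-pupil point.
    """
    n_x = max(p[0] for p in frames_by_pupil_point) + 1
    n_y = max(p[1] for p in frames_by_pupil_point) + 1
    w_fid = np.zeros((n_x, n_y))
    for (p_x, p_y), frame in frames_by_pupil_point.items():
        w_fid[p_x, p_y] = np.asarray(frame, dtype=float).sum()
    return w_fid
```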
Once the target-invariant correction parameter ε(p_x, p_y) has been determined, it may be combined with a pupil intensity measurement W_C(p_x, p_y) from a particular target (e.g., target C), comprising target measurement data with a set of target intensity parameter values, to determine a target-dependent correction parameter ε_C(p_x, p_y) (i.e., a target correction for sensor errors specific to that target/stack). This calibration does not require wafer rotation. The target-dependent correction parameter ε_C(p_x, p_y) can then be used to correct measurements from target C, for example by applying the per-pixel target-dependent correction parameter ε_C(p_x, p_y) to the intensity value measured for the corresponding pixel. The asymmetry correction values for each pixel may be saved and applied to any number of measurements of target C. The set of target intensity parameter values may be obtained in the same way as the third set of intensity parameter values, but measuring the target instead of the reference, and may therefore be measured in a pupil plane or an image plane.
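A minimal sketch of this last step follows. The per-pixel product used to combine the target-invariant parameter with the target pupil measurement reflects the combination stated in the aspects below; applying the result by dividing subsequently measured intensities is an assumption, since the text above only states that ε_C(p_x, p_y) is applied per pixel.

```python
import numpy as np

def target_dependent_correction(eps_invariant, w_c):
    """Target-dependent correction parameter eps_C(p_x, p_y) for target C,
    taken here as the per-pixel product of the target-invariant parameter
    eps(p_x, p_y) and the target pupil intensity measurement W_C(p_x, p_y)."""
    return eps_invariant * w_c

def correct_target_measurement(measured_intensity, eps_c, floor=1e-12):
    """Apply the saved per-pixel correction to a subsequent measurement of
    target C (assumed form of the correction: divide the measured intensity
    by eps_C for the corresponding pixel)."""
    return measured_intensity / np.maximum(eps_c, floor)
```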
This calibration may be repeated as often as necessary (e.g., whenever the process causes the target/stack to change sufficiently to require recalibration). It should be appreciated that the reference used to determine the correction parameter ε(p_x, p_y) should have the same pitch as the target to which the calibration is subsequently applied.
In summary, performing sensor error (TIS) calibration on the reference will significantly reduce the measurement burden and design complexity. At most, only the reference need be rotatable, and if it is sufficiently symmetric, even this is not necessary in practice. Furthermore, it should be appreciated that performing sensor error correction on the reference will also simplify the sensor design and reduce the associated cost of such sensors. "Wafer rotation" actually involves both rotating and repositioning the wafer: for example, a target at the edge of the wafer will move to the other side relative to the sensor after the wafer is rotated, and the sensor or stage must then travel over the entire range of the wafer. This becomes a constraint in sensor or stage design.
The hardware implementation described above and illustrated in fig. 6 is disclosed in the specific context of quantifying and correcting sensor errors. However, the main features of this implementation may also have utility in direct metrology for lithographic process monitoring and/or control (e.g., for measuring parameters of interest, such as, for example, overlay).
Currently, such measurements may be performed on a metrology tool such as that illustrated in Fig. 5(a). Such tools typically use an incoherent (e.g., laser-generated plasma) source and may be operable to perform dark-field imaging. A known configuration may divide the (conjugate) pupil plane of the objective lens into an illumination pupil formed by two diagonally opposite quadrants and a detection pupil formed by the other two diagonally opposite quadrants (e.g., substantially as illustrated in Fig. 6). It is known that, by using wedges (e.g., four wedges at the pupil plane), two complementary higher diffraction orders (e.g., +1 and -1) can be imaged simultaneously.
In many current applications, the illumination spot is larger than the overlay target being measured, resulting in an overfilled measurement mode. In such measurement modes, it is not possible to prevent the measurement signal from the target from being contaminated by unwanted background signals (e.g., generated by dummy fill patterns) due to crosstalk, which may be incoherent and/or coherent. In a few cases, there may be additional targets that can be used as calibration targets in order to mitigate incoherent crosstalk; however, this is not always the case.
It is often desirable to use spatially coherent light sources. However, calibration targets for mitigating incoherent crosstalk are not guaranteed to be available or obtainable on production wafers, and even when present they cannot be used to correct the effects of coherent crosstalk.
FIG. 8 illustrates a metrology apparatus operable to measure a parameter of interest (such as overlay, focus, critical dimension, etc.). Many of the elements making up the apparatus are identical to those of the apparatus of Fig. 6 and operate in the same manner as before; they will therefore not be described again.
It is proposed to use such an arrangement in a selective dark field mode with a coherent illumination source and to use pixel selection techniques at the pupil plane to reduce coherent and incoherent crosstalk from surrounding structures.
A coherent laser source (introduced, for example, via a single-mode optical fiber SMF) is focused on a pupil plane of the objective lens OB, with a scanning mirror SM or other guiding element (e.g., a galvanometer mirror) used to control the focus position of the coherent laser source. In metrology applications, the pitch of the metrology targets and the illumination wavelength used are always known. Thus, at any point along the illumination scan path, the location in the pupil plane (e.g., the conjugate pupil plane of objective OB) of a corresponding diffraction order from a particular target is always deterministic. In contrast, background structures will typically have a pitch that is unknown but different from the target pitch.
It is therefore proposed to select only the desired diffraction order from the metrology target and to image it and, for example, to block other scattered radiation not included in the desired diffraction order. Such an embodiment may perform this selection using the pixel selector PS, so that only pixels corresponding to these diffraction orders are selected based on the illumination wavelength and the target pitch. In this way, crosstalk may be eliminated or at least reduced.
The synchronization module SYNC (e.g., any suitably programmed processing means, whether dedicated to this synchronization task or otherwise) may control the synchronization between the scanning mirror SM and the pixel selector PS so that the appropriate pixels are selected during the focus scan; i.e., those pixels corresponding to the target signal (e.g., such that the selected pixels move with the illumination position based on the target pitch and the illumination wavelength).
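The deterministic relationship exploited by this synchronization follows directly from the grating equation: the diffracted direction cosine is offset from the illumination direction cosine by λ/pitch along the grating direction. The following sketch predicts where the +1 orders land in a normalised (direction-cosine) pupil for a given illumination point; the coordinate convention, the unit-pupil (NA = 1) cut-off and the example numbers are assumptions for illustration.

```python
def predict_first_orders(kx_in, ky_in, wavelength, pitch_x=None, pitch_y=None):
    """Predict +1-order positions in the detection pupil for one illumination point.

    kx_in, ky_in   : illumination position in direction-cosine (NA) coordinates.
    wavelength, pitch_x, pitch_y : in the same length unit (e.g. metres).
    Orders falling outside the unit pupil are dropped; the synchronization
    module would select only detector pixels around the returned positions
    as the illumination point moves along the scan path.
    """
    orders = {}
    if pitch_x is not None:
        kx, ky = kx_in + wavelength / pitch_x, ky_in
        if kx * kx + ky * ky <= 1.0:
            orders["+1x"] = (kx, ky)
    if pitch_y is not None:
        kx, ky = kx_in, ky_in + wavelength / pitch_y
        if kx * kx + ky * ky <= 1.0:
            orders["+1y"] = (kx, ky)
    return orders

# Example: a 600 nm wavelength and a 1.2 um target pitch shift each order by 0.5 in NA.
print(predict_first_orders(-0.3, 0.4, 600e-9, pitch_x=1.2e-6, pitch_y=1.2e-6))
```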
Fig. 8 shows, by way of example, details of a possible implementation of the selection module or pixel selector PS. The input beam PS_in is incident on a digital micromirror device DMD, which selects the desired pixels and thereby directs that part of the input beam PS_in to the pixel selector output PS_out; the remaining illumination is directed to a beam dump BP. Other DMD-based pixel selection arrangements are contemplated, as are non-DMD-based pixel selection arrangements (e.g., in a more basic arrangement, two pinholes with sufficiently fast actuation may be used).
In an embodiment, incoherent normal and complementary images may be obtained by integrating the acquired diffraction orders over the detector DET during a normal and a complementary scanning trajectory, respectively. Alternatively, for example if the camera is slow and it is not desired to take consecutive camera images, a standard wedge configuration may still be used to separate the +1 and -1 diffraction orders on the camera.
Fig. 9 illustrates a scan path of an illumination beam (e.g., as controlled via the scanning mirror SM), with the illumination beam at (a) a first illumination pupil position IPP and (c) a second illumination pupil position IPP'. As with the sensor error determination method already described, the illumination scan path may comprise a meandering path over an illumination pupil (which may comprise two diagonally opposite quadrants of the objective pupil plane). Fig. 9(b) again shows the pupil plane, but now with the scattered beams (from the target) resulting from the illumination position IPP illustrated in Fig. 9(a). Similarly, Fig. 9(d) shows the scattered beams resulting from the illumination position IPP' illustrated in Fig. 9(c). In each case, the illustrated scattered beams include a zero order beam, a +1 diffraction order in the X direction, +1x, and a +1 diffraction order in the Y direction, +1y. The synchronization module SYNC, synchronized with the illumination scan, will select the pixels corresponding to diffraction orders +1x, +1y, so that only these pixels are selected (e.g., using the pixel selector PS) and radiation corresponding to other pixels in the detection pupil (the other two quadrants of the pupil plane) is rejected.
In an embodiment, the scan path in the pupil may be optimized (e.g., during the recipe setting phase) to maximize overlay accuracy. By way of example, this may be achieved by optimizing the scan path so that light from the surrounding structure has minimal interaction with edges in the optical system. Such an optimal scan path may depend on the pitch of the surrounding structures.
There is also the possibility of performing a gated measurement, i.e., a snapshot at each point. In this case, each obtained image is produced by coherent imaging, and a coherent aberration correction algorithm can then be applied directly. As an example, the corrected coherent images may then be averaged to effectively obtain an incoherent image, from which a metrology value of interest (e.g., overlay) may be determined. Alternatively, the metrology value of interest (e.g., overlay) may be determined separately from each coherent image, after which these values may be averaged or otherwise combined into a single final overlay value for the target. In either case, this would enable the scan path to be optimized in post-processing (as mentioned in the previous paragraph), for example by averaging only a desired/beneficial subset of the coherent images and discarding the remaining images. Examples of how this post-processing may be performed include:
averaging only those coherent images whose derived metrology values are substantially identical to each other (a minimal sketch of this option is given after this list); and/or
determining how to post-process (e.g., which coherent images to use in the averaging) based on external reference data (such as AEI overlay data); for example, selecting the post-processing of the images such that the correlation with the AEI overlay is optimized (in other words, such that the on-product overlay is optimized).
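A minimal sketch of the first of these options follows; the use of the median as the consensus value and the fixed tolerance are assumptions introduced here, not prescriptions from the text above.

```python
import numpy as np

def combine_coherent_overlay(per_image_overlay, tol=0.1):
    """Combine per-coherent-image overlay values into one value for the target.

    Keeps only images whose derived overlay is substantially identical to the
    consensus (here: within `tol` of the median) and averages the retained values.
    Returns the combined overlay and the boolean mask of images used.
    """
    values = np.asarray(per_image_overlay, dtype=float)
    keep = np.abs(values - np.median(values)) <= tol
    return float(values[keep].mean()), keep

# Example usage with hypothetical per-image overlay values (arbitrary units):
final_overlay, used = combine_coherent_overlay([1.02, 0.98, 1.01, 1.55, 0.99])
```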
Fig. 10 is a block diagram illustrating a computer system 1000 that may facilitate implementing the methods and processes disclosed herein. Computer system 1000 includes a bus 1002 or other communication mechanism for communicating information, and a processor 1004 (or multiple processors 1004 and 1005) coupled with bus 1002 for processing information. Computer system 1000 also includes a main memory 1006, such as a Random Access Memory (RAM) or other dynamic memory, coupled to bus 1002 for storing information and instructions to be executed by processor 1004. Main memory 1006 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1004. Computer system 1000 also includes a Read Only Memory (ROM) 1008 or other static storage device coupled to bus 1002 for storing static information and instructions for processor 1004. A storage device 1010, such as a magnetic disk or optical disk, is provided and coupled to bus 1002 for storing information and instructions.
Computer system 1000 may be coupled via bus 1002 to a display 1012, such as a Cathode Ray Tube (CRT) or flat panel display or touch panel display, for displaying information to a computer user. An input device 1014, including alphanumeric and other keys, is coupled to bus 1002 for communicating information and command selections to processor 1004. Another type of user input device is cursor control 1016, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to processor 1004 and for controlling cursor movement on display 1012. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), which allows the device to specify positions in a plane. A touch panel (screen) display may also be used as an input device.
Portions of one or more methods as described herein may be performed by computer system 1000 in response to processor 1004 executing one or more sequences of one or more instructions contained in main memory 1006. Such instructions may be read into main memory 1006 from another computer-readable medium, such as storage device 1010. Execution of the sequences of instructions contained in main memory 1006 causes processor 1004 to perform the process steps described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in main memory 1006. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, the description herein is not limited to any specific combination of hardware circuitry and software.
The term "computer-readable medium" as used herein refers to any medium that participates in providing instructions to processor 1004 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 1010. Volatile media includes dynamic memory, such as main memory 1006. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 1002. Transmission media can also take the form of acoustic or light waves, such as those generated during Radio Frequency (RF) and Infrared (IR) data communications. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, such as a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 1004 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 1000 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infrared detector coupled to bus 1002 can receive the data carried in the infrared signal and place the data on bus 1002. Bus 1002 carries the data to main memory 1006, from which processor 1004 retrieves and executes the instructions. The instructions received by main memory 1006 may optionally be stored on storage device 1010 either before or after execution by processor 1004.
Computer system 1000 also preferably includes a communication interface 1018 coupled to bus 1002. Communication interface 1018 provides a two-way data communication coupling to a network link 1020 that is connected to a local network 1022. For example, communication interface 1018 may be an Integrated Services Digital Network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 1018 may be a Local Area Network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 1018 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 1020 typically provides data communication through one or more networks to other data devices. For example, network link 1020 may provide a connection through local network 1022 to a host computer 1024 or to data equipment operated by an Internet Service Provider (ISP) 1026. ISP 1026 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet" 1028. Local network 1022 and Internet 1028 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 1020 and through communication interface 1018, which carry the digital data to and from computer system 1000, are exemplary forms of carrier waves transporting the information.
Computer system 1000 can send messages and receive data, including program code, through the network(s), network link 1020 and communication interface 1018. In the Internet example, a server 1030 may transmit requested code for an application program through Internet 1028, ISP 1026, local network 1022 and communication interface 1018. For example, one such downloaded application may provide one or more of the techniques described herein. The received code may be executed by processor 1004 as it is received, and/or stored in storage 1010, or other non-volatile storage for later execution. In this manner, computer system 1000 may obtain application code in the form of a carrier wave.
Additional embodiments are discussed in the following numbered aspect list:
1. A method of determining a target-dependent correction parameter for a measurement of a target, the measurement being affected by a target-dependent sensor error contribution having a dependence on the target and/or a stack comprising the target, the method comprising:
obtaining first measurement data relating to measurements of a reference target, the first measurement data comprising at least a first set of intensity parameter values and a corresponding second set of intensity parameter values;
Obtaining second measurement data relating to measurements of the reference target, the second measurement data comprising a third set of intensity parameter values;
determining a target invariant correction parameter from the first and second measurement data, the target invariant correction parameter being a component of the target dependent correction parameter that is independent of the target and/or stack; and
the target-dependent correction parameters are determined from the target invariant correction parameters.
2. The method according to aspect 1, comprising: the second measurement data is acquired by illuminating the reference target and detecting the third set of intensity parameter values from the scattered radiation, the third set of intensity parameter values comprising an angle-resolved intensity parameter distribution.
3. The method of aspect 2, wherein the illuminating the reference target comprises illuminating the target with incoherent radiation.
4. The method of aspect 2, wherein the illuminating the reference target comprises illuminating the reference target using a point radiation source scanned over an illumination pupil.
5. The method according to aspect 4, comprising: radiation scattered from the reference target is integrated during scanning over a detector located in the detection pupil plane.
6. The method according to any one of aspects 2 to 5, comprising:
the third set of intensity parameter values is detected at a detection pupil plane.
7. The method according to aspect 4, comprising: the third set of intensity parameter values is detected at the image plane by integrating radiation scattered from the reference target in accordance with the point of the illumination pupil.
8. A method according to any preceding aspect, wherein the determining a target invariant correction parameter comprises determining a reference correction parameter from the first measurement data, the reference correction parameter comprising a target-dependent correction parameter relating to the reference target; and
determining the target invariant correction parameter from the reference correction parameter and the second measurement data.
9. The method of aspect 8, wherein determining the target invariant correction parameter comprises dividing the reference correction parameter by the second measurement data.
10. The method of any of aspects 8 or 9, wherein the first set of intensity parameter values relates to intensity values obtained using at least one point illumination source at each of a plurality of first point illumination locations within an illumination pupil, and the second set of intensity parameter values relates to intensity values obtained using the at least one point illumination source at each of a plurality of second point illumination locations within the illumination pupil, wherein each of the plurality of first point illumination locations has a respective point symmetry location in the plurality of second point illumination locations such that the plurality of first point illumination locations and the plurality of second point illumination locations together comprise a plurality of point symmetry point illumination locations.
11. The method of aspect 10, wherein the reference correction parameter is determined from a plurality of pairs of the intensity parameter values, each pair of intensity parameter values comprising: intensity parameter values from the first set of intensity parameter values and intensity parameter values from the second set of intensity parameter values, and each pair of intensity parameter values corresponds to each pair of point-symmetrical point-illuminated locations of the plurality of pairs of point-symmetrical point-illuminated locations.
12. The method of aspect 10 or 11, comprising: acquiring the first measurement data by scanning the at least one point illumination source over the plurality of first point illumination locations and the plurality of second point illumination locations, and obtaining intensity parameter values from images acquired at each of the plurality of first point illumination locations and the plurality of second point illumination locations.
13. The method of aspect 12, wherein scanning the at least one point illumination source includes moving at least one illumination aperture within the illumination pupil.
14. The method of aspect 12, wherein scanning the at least one point illumination source comprises: the at least one point illumination source is directed over the plurality of first point illumination locations and the plurality of second point illumination locations using a directing element.
15. The method of any of aspects 10 to 14, wherein each of the first and second sets of intensity parameter values relates to a first orientation of the reference target relative to sensor optics used to obtain the first measurement data, and the first measurement data further comprises a fourth and fifth set of intensity parameter values, each of the fourth and fifth sets of intensity parameter values relating to intensity values obtained using the at least one point illumination source at each of the plurality of first and second point illumination locations, respectively, for a second orientation of the reference target relative to the sensor optics.
16. The method of any one of aspects 1 to 14, wherein the first measurement data relates only to a first orientation of the reference target.
17. The method of any preceding aspect, wherein the step of determining a target-dependent correction parameter comprises: determining a target-dependent correction parameter for a target, the steps comprising:
obtaining target measurement data comprising a set of target intensity parameter values related to the measurement of the target; and
Determining the target-dependent correction parameters for the target from a combination of the set of target intensity parameter values and the target invariant correction parameters.
18. The method of aspect 17, comprising: the target measurement data is acquired by illuminating the target and the set of target intensity parameter values is detected as an angularly resolved intensity value distribution from scattered radiation detected in a detection pupil plane.
19. The method of aspect 17, comprising: obtaining the target measurement data by illuminating the target with a point radiation source scanning over an illumination pupil, and detecting the set of target intensity parameter values at an image plane by integrating radiation scattered from the target in accordance with the point of the illumination pupil.
20. The method of aspects 18 or 19, wherein the target-dependent correction parameter is determined from a product of the set of target intensity parameter values and the target invariant correction parameter.
21. The method according to any one of aspects 17 to 20, comprising: the target-dependent correction parameters are used to correct the measurement of the target.
22. A computer program comprising instructions for a processor, the instructions causing the processor and/or associated device to perform the method according to any preceding aspect or any one of aspects 31 to 36.
23. A processing device and associated program memory comprising instructions for the processor to cause the processor to perform the method of any one of aspects 1 to 21 or aspects 31 to 36.
24. A metrology apparatus for determining a property of interest of a target, the metrology apparatus comprising the processing device of aspect 23.
25. A metrology apparatus for determining a property of interest of a target, the metrology apparatus being operable to perform the method of any one of aspects 1 to 21, the metrology apparatus comprising a point illumination source comprising:
a directing element operable to receive an illumination beam and to controllably direct the illumination beam onto a substrate within an illumination pupil of the metrology apparatus.
26. The metrology apparatus of aspect 25, wherein the guiding element comprises a scanning mirror or a spatial light modulator.
27. The metrology apparatus of aspect 25 or 26, wherein the point illumination source comprises a coherent or partially coherent radiation source providing the illumination beam to the guiding element.
28. The metrology apparatus of aspects 25, 26 or 27, wherein the point illumination source is operable in a first mode of operation in which the illumination beam is scanned over a plurality of point illumination locations within the illumination pupil; the metrology apparatus is further operable to acquire an image for each of the point illumination locations.
29. The metrology apparatus of aspect 28 wherein the point illumination source is operable in a second mode of operation in which the illumination beam is scanned within the illumination pupil at a faster scan speed than in the first mode of operation; the metrology apparatus is further operable to acquire images relating to the integration of scattered radiation during the scan.
30. The metrology apparatus of any one of aspects 25 to 29, comprising:
at least one first detector operable to acquire scattered radiation at an imaging plane of the metrology device, the scattered radiation having been scattered by a target after receipt of the illumination beam; and
at least one second detector operable to acquire the scattered radiation at a pupil plane of the metrology apparatus.
31. A method of measuring, comprising:
illuminating the target using a point radiation source scanned over an illumination pupil;
acquiring the resulting scattered radiation that has been scattered from the target; and
detecting only the scattered radiation comprised within one or more desired diffraction orders.
32. The method of aspect 31, wherein the detecting step comprises selectively passing scattered radiation that is included within the one or more desired diffraction orders and blocking the scattered radiation that is not included within the one or more desired diffraction orders.
33. The method of aspect 32, comprising: the scattered radiation is selectively passed in synchronism with the scanning of the point radiation source over the illumination pupil such that the desired diffraction order is always detected.
34. The method of any one of aspects 31 to 33, comprising: one or more regions in the detection pupil corresponding to the one or more desired diffraction orders are selected using a selection module in the detection pupil.
35. The method of aspect 34, wherein the one or more regions are selected based on an illumination wavelength of the point radiation source and a pitch of the target.
36. The method of any of aspects 31 to 35, wherein the point radiation source is a coherent radiation source.
37. A metrology apparatus for determining a parameter of interest of an object, comprising:
point illumination source:
a directing element operable to receive an illumination beam and controllably direct the point illumination source onto the target within an illumination pupil of the metrology apparatus;
An objective lens for acquiring scattered radiation that has been scattered from the target;
a selection module located within a detection pupil of the metrology apparatus and operable to select one or more regions in the detection pupil corresponding to one or more desired diffraction orders within the scattered radiation; and
a detector for detecting the one or more desired diffraction orders.
38. The metrology apparatus of aspect 37, wherein the selection module is operable to selectively pass the scattered radiation that is included within the one or more desired diffraction orders and block the scattered radiation that is not included within the one or more desired diffraction orders, such that only the scattered radiation included within the one or more desired diffraction orders is detected on the detector.
39. The metrology apparatus of aspect 38, comprising a synchronization module operable to selectively pass the scattered radiation in synchronization with a scan of the point radiation source over the illumination pupil such that a desired diffraction order is always detected.
40. The metrology apparatus of any one of aspects 37 to 39, wherein the guiding element comprises a scanning mirror or a spatial light modulator.
41. The metrology apparatus of any one of aspects 37 to 40, wherein the point illumination source comprises a coherent or partially coherent radiation source providing the illumination beam to the guiding element.
42. The metrology apparatus of any one of aspects 37 to 41, wherein the point illumination source is operable to scan the illumination beam over a plurality of point illumination locations within the illumination pupil.
43. The metrology apparatus of aspect 42, further operable to integrate, over the detector, the scattered radiation comprised within the one or more desired diffraction orders over the illumination scan.
44. The metrology apparatus of any one of aspects 37 to 43, wherein the selection module comprises a pixel selector.
45. The metrology apparatus of aspect 44, wherein the pixel selector comprises a digital micro-mirror array.
Although specific reference is made to "metrology apparatus/tool/system" or "inspection apparatus/tool/system," these terms may refer to the same or similar type of tool, apparatus or system. For example, inspection or metrology equipment including embodiments of the present invention can be used to determine characteristics of structures on a substrate or on a wafer. For example, an inspection apparatus or metrology apparatus including embodiments of the present invention may be used to detect defects in a substrate or in structures on a substrate or on a wafer. In such embodiments, a characteristic of interest of a structure on a substrate may relate to a defect in the structure, the absence of a particular portion of the structure, or the presence of an unwanted structure on the substrate or on a wafer.
The targets or target structures described herein (more generally, structures on a substrate) may be metrology targets specifically designed and formed for measurement purposes. In other embodiments, the property of interest may be measured on one or more structures that are functional portions of a device formed on a substrate. Many devices have a regular grating-like structure. The terms "structure", "target grating" and "target structure" as used in the present invention do not require that the structure has been provided specifically for the measurement being performed. In addition, the pitch of the metrology targets may be close to the resolution limit of the optical system of the scatterometer or may be smaller, but may be much larger than the size of typical non-target structures (optionally product structures) made by the lithographic process in the target portion C. In practice, the lines and/or spaces of overlapping gratings within the target structure may be made to include smaller structures that are similar in size to non-target structures.
Although specific reference may be made in this text to the use of lithographic apparatus in the manufacture of ICs, it should be understood that the lithographic apparatus described herein may have other applications. Possible other applications include the manufacture of integrated optical systems, guidance and detection patterns for magnetic domain memories, flat-panel displays, Liquid Crystal Displays (LCDs), thin-film magnetic heads, etc.
Although specific reference may be made herein to embodiments of the invention in the context of a lithographic apparatus, embodiments of the invention may be used in other apparatuses. Embodiments of the invention may form part of a mask inspection apparatus, metrology apparatus or any apparatus that measures or processes an object such as a wafer (or other substrate) or mask (or other patterning device). These devices may be generally referred to as lithographic tools. Such a lithographic tool may use vacuum conditions or ambient (non-vacuum) conditions.
While specific reference has been made above to the use of embodiments of the invention in the context of optical lithography, it will be appreciated that the invention is not limited to optical lithography and may be used in other applications, for example imprint lithography, where the context allows.
While specific embodiments of the invention have been described above, it will be appreciated that the invention may be practiced otherwise than as described. The above description is intended to be illustrative and not restrictive. Accordingly, it will be apparent to those skilled in the art that modifications may be made to the invention as described without departing from the scope of the claims set out below.

Claims (15)

1. A method of determining a target-dependent correction parameter for a measurement of a target, the measurement being affected by a target-dependent sensor error contribution having a dependence on the target and/or a stack comprising the target, the method comprising:
obtaining first measurement data relating to measurements of a reference target, the first measurement data comprising at least a first set of intensity parameter values and a corresponding second set of intensity parameter values;
obtaining second measurement data relating to measurements of the reference target, the second measurement data comprising a third set of intensity parameter values;
determining a target invariant correction parameter from the first and second measurement data, the target invariant correction parameter being a component of the target dependent correction parameter that is independent of the target and/or stack; and
the target-dependent correction parameters are determined from the target invariant correction parameters.
2. The method according to claim 1, comprising: the second measurement data is acquired by illuminating the reference target and detecting the third set of intensity parameter values from the scattered radiation, the third set of intensity parameter values comprising an angle-resolved intensity parameter distribution.
3. The method of claim 2, characterized by one of the following:
- said illuminating said reference target comprises illuminating said target with incoherent radiation, and
- said illuminating the reference target comprises illuminating the reference target with a point radiation source scanned over an illumination pupil, and optionally comprising: integrating radiation scattered from the reference target, during the scanning, over a detector located in the detection pupil plane.
4. A method according to any preceding claim, comprising:
detecting the third set of intensity parameter values at a detection pupil plane, and
optionally comprising: detecting the third set of intensity parameter values at the image plane by integrating radiation scattered from the reference target in accordance with the point of the illumination pupil.
5. The method of any preceding claim, wherein the determining a target invariant correction parameter comprises: determining a reference correction parameter from the first measurement data, the reference correction parameter comprising a target-dependent correction parameter related to the reference target; and
the target invariant correction parameter is determined from the reference correction parameter and the second measurement data.
6. The method of claim 5, wherein determining the target invariant correction parameter comprises: dividing the reference correction parameter by the second measurement data.
7. The method of any of claims 5 or 6, wherein the first set of intensity parameter values relates to intensity values obtained using at least one point illumination source at each of a plurality of first point illumination locations within an illumination pupil, and the second set of intensity parameter values relates to intensity values obtained using the at least one point illumination source at each of a plurality of second point illumination locations within the illumination pupil, wherein each of the plurality of first point illumination locations has a respective point symmetry location in the plurality of second point illumination locations such that the plurality of first point illumination locations and the plurality of second point illumination locations together comprise a plurality of point symmetry point illumination locations.
8. The method of claim 7, wherein the reference correction parameter is determined from a plurality of pairs of the intensity parameter values, each pair of intensity parameter values comprising: intensity parameter values from the first set of intensity parameter values and intensity parameter values from the second set of intensity parameter values, and each pair of intensity parameter values corresponds to each pair of point-symmetrical point-illuminated locations of the plurality of pairs of point-symmetrical point-illuminated locations.
9. The method according to claim 7 or 8, comprising: acquiring the first measurement data by scanning the at least one point illumination source over the plurality of first point illumination locations and the plurality of second point illumination locations, and obtaining intensity parameter values from images acquired at each of the plurality of first point illumination locations and the plurality of second point illumination locations.
10. The method of claim 9, wherein scanning the at least one point illumination source comprises moving at least one illumination aperture within the illumination pupil.
11. The method of claim 9, wherein scanning the at least one point illumination source comprises directing the at least one point illumination source over the plurality of first point illumination locations and the plurality of second point illumination locations using a directing element.
12. The method of any of claims 7 to 11, wherein each of the first and second sets of intensity parameter values relates to a first orientation of the reference target relative to sensor optics used to obtain the first measurement data, and the first measurement data further comprises a fourth and fifth set of intensity parameter values, each of the fourth and fifth sets of intensity parameter values relating to intensity values obtained using the at least one point illumination source at each of the plurality of first and second point illumination locations, respectively, for a second orientation of the reference target relative to the sensor optics.
13. The method according to any one of claims 1 to 11, wherein the first measurement data relates only to a first orientation of the reference target.
14. A metrology apparatus for determining a property of interest of a target, the metrology apparatus being operable to perform the method of any one of claims 1 to 13, the metrology apparatus comprising a point illumination source comprising:
a directing element operable to receive an illumination beam and controllably direct the illumination beam onto a substrate within an illumination pupil of the metrology apparatus.
15. The metrology apparatus of claim 14, wherein the guiding element comprises a scanning mirror or a spatial light modulator.