WO2024141235A1 - Multichannel lock-in camera for multi-parameter sensing in lithographic processes


Info

Publication number
WO2024141235A1
Authority
WO
WIPO (PCT)
Prior art keywords
illumination
aspects
camera
measurement signal
target
Prior art date
Application number
PCT/EP2023/084624
Other languages
French (fr)
Inventor
Henricus Petrus Maria Pellemans
Padmakumar RAMACHANDRA RAO
Original Assignee
Asml Netherlands B.V.
Priority date
Filing date
Publication date
Application filed by Asml Netherlands B.V. filed Critical Asml Netherlands B.V.
Publication of WO2024141235A1 publication Critical patent/WO2024141235A1/en

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F 7/00 Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F 7/70 Microphotolithographic exposure; Apparatus therefor
    • G03F 7/70483 Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
    • G03F 7/70605 Workpiece metrology
    • G03F 7/706835 Metrology information management or control
    • G03F 7/706837 Data analysis, e.g. filtering, weighting, flyer removal, fingerprints or root cause analysis
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F 7/00 Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F 7/70 Microphotolithographic exposure; Apparatus therefor
    • G03F 7/70483 Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
    • G03F 7/70605 Workpiece metrology
    • G03F 7/706843 Metrology apparatus
    • G03F 7/706849 Irradiation branch, e.g. optical system details, illumination mode or polarisation control
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F 7/00 Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F 7/70 Microphotolithographic exposure; Apparatus therefor
    • G03F 7/70483 Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
    • G03F 7/70605 Workpiece metrology
    • G03F 7/706843 Metrology apparatus
    • G03F 7/706851 Detection branch, e.g. detector arrangements, polarisation control, wavelength control or dark/bright field detection
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F 9/00 Registration or positioning of originals, masks, frames, photographic sheets or textured or patterned surfaces, e.g. automatically
    • G03F 9/70 Registration or positioning of originals, masks, frames, photographic sheets or textured or patterned surfaces, e.g. automatically for microlithography
    • G03F 9/7092 Signal processing

Definitions

  • the present disclosure relates to inspection sensors, for example, alignment and scatterometer sensors used in connection with lithographic processes.
  • a lithographic apparatus is a machine that applies a desired pattern onto a substrate, usually onto a target portion of the substrate.
  • a lithographic apparatus can be used, for example, in the manufacture of integrated circuits (ICs).
  • a patterning device which can be a mask or a reticle, can be used to generate a circuit pattern to be formed on an individual layer of the IC.
  • This pattern can be transferred onto a target portion (e.g., comprising part of, one, or several dies) on a substrate (e.g., a silicon wafer). Transfer of the pattern is typically via imaging onto a layer of radiation-sensitive material (photoresist or simply “resist”) provided on the substrate.
  • a single substrate will contain a network of adjacent target portions that are successively patterned.
  • lithographic apparatuses include so-called steppers, in which each target portion is irradiated by exposing an entire pattern onto the target portion at one time, and so-called scanners, in which each target portion is irradiated by scanning the pattern through a radiation beam in a given direction (the “scanning” direction) while synchronously scanning the target portions parallel or anti-parallel to this scanning direction. It is also possible to transfer the pattern from the patterning device to the substrate by imprinting the pattern onto the substrate.
  • During lithographic operation, different processing steps can require different layers to be sequentially formed on the substrate. Accordingly, it can be necessary to position the substrate relative to prior patterns formed thereon with a high degree of accuracy.
  • alignment marks are placed on the substrate to be aligned and are located with reference to a second object.
  • a lithographic apparatus can use an alignment apparatus for detecting positions of the alignment marks and for aligning the substrate using the alignment marks to ensure accurate exposure from a mask. Misalignment between the alignment marks at two different layers is measured as overlay error.
  • parameters of the patterned substrate are measured.
  • Parameters can include, for example, the overlay error between successive layers formed in or on the patterned substrate and critical linewidth of developed photosensitive resist. This measurement can be performed on a product substrate and/or on a dedicated metrology target.
  • a fast and non-invasive form of a specialized inspection tool is a scatterometer in which a beam of radiation is directed onto a target on the surface of the substrate and properties of the scattered or reflected beam are measured.
  • the properties of the substrate can be determined. This can be done, for example, by comparing the reflected beam with data stored in a library of known measurements associated with known substrate properties.
  • Spectroscopic scatterometers direct a broadband radiation beam onto the substrate and measure the spectrum (intensity as a function of wavelength) of the radiation scattered into a particular narrow angular range.
  • angularly resolved scatterometers use a monochromatic radiation beam and measure the intensity of the scattered radiation as a function of angle.
  • Such optical scatterometers can be used to measure parameters, such as critical dimensions of developed photosensitive resist or overlay error (OV) between two layers formed in or on the patterned substrate.
  • Properties of the substrate can be determined by comparing the properties of an illumination beam before and after the beam has been reflected or scattered by the substrate.
  • a lithographic system can output only a finite number of fabricated devices in a given timeframe. There is demand for faster lithographic fabrication, which in turn drives advances in faster inspection techniques.
  • Optical inspection of a target on a wafer can be performed using a plurality of photon wavelengths. A given wavelength can provide information about the target that may not be readily apparent with another wavelength. Using multiple parameters during inspection, such as multiple wavelengths, can come with a time cost, thereby slowing lithographic fabrication speeds.
  • a metrology system can comprise an illumination system, a camera, and an analyzer system.
  • the illumination system is configured to transmit illumination toward a target.
  • the illumination has a plurality of illumination parameters associated with a corresponding plurality of modulation frequencies.
  • the camera is configured to receive a scattered illumination from the target.
  • the camera is further configured to generate, per pixel of the camera, a measurement signal encoded with signatures of the plurality of modulation frequencies.
  • the analyzer system is configured to, per pixel of the camera, demodulate the measurement signal based on the plurality of modulation frequencies.
  • the analyzer system is further configured to output a phase, an amplitude, or the phase and amplitude of demodulated components of the measurement signal corresponding to the modulation frequencies.
  • a lithographic apparatus comprises an illumination source, a projection system, and a metrology system.
  • the illumination source is configured to illuminate a pattern of a patterning device.
  • the projection system is configured to project an image of the pattern onto a substrate.
  • the metrology system can comprise an illumination system, a camera, and an analyzer system.
  • the illumination system is configured to transmit illumination toward a target.
  • the illumination has a plurality of illumination parameters associated with a corresponding plurality of modulation frequencies.
  • the camera is configured to receive a scattered illumination from the target.
  • the camera is further configured to generate, per pixel of the camera, a measurement signal encoded with signatures of the plurality of modulation frequencies.
  • FIG. 2 shows more details of a reflective lithographic apparatus, according to some aspects.
  • FIG. 6 shows an inspection apparatus, according to some aspects.
  • the support structure MT holds the patterning device MA in a manner that depends on the orientation of the patterning device MA with respect to a reference frame, the design of at least one of the lithographic apparatus 100 and 100’, and other conditions, such as whether or not the patterning device MA is held in a vacuum environment.
  • the support structure MT can use mechanical, vacuum, electrostatic, or other clamping techniques to hold the patterning device MA.
  • the support structure MT can be a frame or a table, for example, which can be fixed or movable. By using sensors, the support structure MT can ensure that the patterning device MA is at a desired position, for example, with respect to the projection system PS.
  • the patterning device MA can be transmissive (as in lithographic apparatus 100’ of FIG. 1B) or reflective (as in lithographic apparatus 100 of FIG. 1A).
  • Examples of patterning devices MA include reticles, masks, programmable mirror arrays, or programmable LCD panels.
  • Masks are well known in lithography, and include mask types such as binary, alternating phase shift, or attenuated phase shift, as well as various hybrid mask types.
  • An example of a programmable mirror array employs a matrix arrangement of small mirrors, each of which can be individually tilted so as to reflect an incoming radiation beam in different directions. The tilted mirrors impart a pattern to the radiation beam B, which is reflected by the mirror matrix.
  • Lithographic apparatus 100 and/or lithographic apparatus 100’ can be of a type having two (dual stage) or more substrate tables WT (and/or two or more mask tables).
  • the additional substrate tables WT can be used in parallel, or preparatory steps can be carried out on one or more tables while one or more other substrate tables WT are being used for exposure.
  • the additional table may not be a substrate table WT.
  • the substrate table WT can be moved accurately (for example, so as to position different target portions C in the path of the radiation beam B).
  • the first positioner PM and another position sensor can be used to accurately position the mask MA with respect to the path of the radiation beam B (for example, after mechanical retrieval from a mask library or during a scan).
  • Mask table MT and patterning device MA can be in a vacuum chamber V, where an in-vacuum robot IVR can be used to move patterning devices such as a mask in and out of vacuum chamber.
  • an out-of-vacuum robot can be used for various transportation operations, similar to the in-vacuum robot IVR.
  • Both the in-vacuum and out-of-vacuum robots can be calibrated for a smooth transfer of any payload (e.g., mask) to a fixed kinematic mount of a transfer station.
  • the support structure (for example, mask table) MT and the substrate table WT are scanned synchronously while a pattern imparted to the radiation beam B is projected onto a target portion C (i.e., a single dynamic exposure).
  • the velocity and direction of the substrate table WT relative to the support structure (for example, mask table) MT can be determined by the (de-)magnification and image reversal characteristics of the projection system PS.
  • lithographic apparatus 100 includes an extreme ultraviolet (EUV) source, which is configured to generate a beam of EUV radiation for EUV lithography.
  • EUV extreme ultraviolet
  • the EUV source is configured in a radiation system, and a corresponding illumination system is configured to condition the EUV radiation beam of the EUV source.
  • FIG. 2 shows the lithographic apparatus 100 in more detail, including the source collector apparatus SO, the illumination system IL, and the projection system PS.
  • the source collector apparatus SO is constructed and arranged such that a vacuum environment can be maintained in an enclosing structure 220 of the source collector apparatus SO.
  • An EUV radiation emitting plasma 210 can be formed by a discharge produced plasma source.
  • a plasma of excited tin (Sn) (e.g., excited via a laser) is provided to produce EUV radiation.
  • the collector chamber 212 can include a radiation collector CO, which can be a so-called grazing incidence collector.
  • Radiation collector CO has an upstream radiation collector side 251 and a downstream radiation collector side 252. Radiation that traverses collector CO can be reflected off a grating spectral filter 240 to be focused in a virtual source point INTF.
  • the virtual source point INTF is commonly referred to as the intermediate focus, and the source collector apparatus is arranged such that the intermediate focus INTF is located at or near an opening 219 in the enclosing structure 220.
  • the virtual source point INTF is an image of the EUV radiation emitting plasma 210.
  • Grating spectral filter 240 is used in particular for suppressing infra-red (IR) radiation.
  • the radiation traverses the illumination system IL, which can include a faceted field mirror device 222 and a faceted pupil mirror device 224 arranged to provide a desired angular distribution of the radiation beam 221, at the patterning device MA, as well as a desired uniformity of radiation intensity at the patterning device MA.
  • More elements than shown can generally be present in illumination optics unit IL and projection system PS.
  • the grating spectral filter 240 can optionally be present, depending upon the type of lithographic apparatus. Further, there can be more mirrors present than those shown in FIG. 2; for example, one to six additional reflective elements can be present in the projection system PS beyond those shown in FIG. 2.
  • Collector optic CO is depicted as a nested collector with grazing incidence reflectors 253, 254, and 255, just as an example of a collector (or collector mirror).
  • the grazing incidence reflectors 253, 254, and 255 are disposed axially symmetrically around an optical axis O, and a collector optic CO of this type is preferably used in combination with a discharge produced plasma source, often called a DPP source.
  • FIG. 3 shows a lithographic cell 300, also sometimes referred to as a lithocell or cluster, according to some aspects.
  • Lithographic apparatus 100 or 100’ can form part of lithographic cell 300.
  • Lithographic cell 300 can also include one or more apparatuses to perform pre- and post-exposure processes on a substrate. Conventionally these include spin coaters SC to deposit resist layers, developers DE to develop exposed resist, chill plates CH, and bake plates BK.
  • a substrate handler, or robot, RO picks up substrates from input/output ports I/O1, I/O2, moves them between the different process apparatuses and delivers them to the loading bay LB of the lithographic apparatus 100 or 100’.
  • alignment marks are generally provided on the substrate, and the lithographic apparatus includes one or more inspection apparatuses for accurate positioning of marks on a substrate.
  • These alignment apparatuses are effectively position measuring apparatuses.
  • Different types of marks and different types of alignment apparatuses and/or systems are known from different times and different manufacturers.
  • a type of system widely used in current lithographic apparatus is based on a self-referencing interferometer as described in U.S. Patent No. 6,961,116 (den Boef et al.). Generally marks are measured separately to obtain X- and Y-positions.
  • a combined X- and Y-measurement can be performed using the techniques described in U.S. Publication No. 2009/195768 A (Bijnen et al.), however. The full contents of both of these disclosures are incorporated herein by reference.
  • FIG. 4A shows a cross-sectional view of an inspection apparatus 400 that can be implemented as a part of lithographic apparatus 100 or 100’, according to some aspects.
  • inspection apparatus 400 can be configured to align a substrate (e.g., substrate W) with respect to a patterning device (e.g., patterning device MA).
  • Inspection apparatus 400 can be further configured to detect positions of alignment marks on the substrate and to align the substrate with respect to the patterning device or other components of lithographic apparatus 100 or 100’ using the detected positions of the alignment marks.
  • Such alignment of the substrate can ensure accurate exposure of one or more patterns on the substrate.
  • the terms “inspection apparatus,” “metrology system,” or the like can be used herein to refer to, e.g., a device used for measuring a property of a structure (e.g., overlay sensor, critical dimension sensor, or the like), a device or system used in a lithographic apparatus to inspect an alignment of a wafer (e.g., alignment sensor), or the like.
  • inspection apparatus 400 can include an illumination system 412, a beam splitter 414, an interferometer 426, a detector 428, a beam analyzer 430, and an overlay calculation processor 432.
  • Illumination system 412 can be configured to provide an electromagnetic narrow band radiation beam 413 having one or more passbands.
  • the one or more passbands can be within a spectrum of wavelengths between about 500 nm and about 900 nm.
  • the one or more passbands can be discrete narrow passbands within a spectrum of wavelengths between about 500 nm and about 900 nm.
  • Illumination system 412 can be further configured to provide one or more passbands having substantially constant center wavelength (CWL) values over a long period of time (e.g., over a lifetime of illumination system 412).
  • Such configuration of illumination system 412 can help to prevent the shift of the actual CWL values from the desired CWL values, as discussed above, in current alignment systems. As a result, the use of constant CWL values can improve long-term stability and accuracy of alignment systems (e.g., inspection apparatus 400) compared to current alignment apparatuses.
  • beam splitter 414 can be configured to receive radiation beam 413 and split radiation beam 413 into at least two radiation sub-beams.
  • radiation beam 413 can be split into radiation sub-beams 415 and 417, as shown in FIG. 4A.
  • Beam splitter 414 can be further configured to direct radiation sub-beam 415 onto a substrate 420 placed on a stage 422.
  • the stage 422 is movable along direction 424.
  • Radiation sub-beam 415 can be configured to illuminate an alignment mark or a target 418 located on substrate 420. Alignment mark or target 418 can be coated with a radiation sensitive film.
  • alignment mark or target 418 can have one hundred and eighty degrees (i.e., 180°) symmetry. That is, when alignment mark or target 418 is rotated 180° about an axis of symmetry perpendicular to a plane of alignment mark or target 418, rotated alignment mark or target 418 can be substantially identical to an unrotated alignment mark or target 418.
  • the target 418 on substrate 420 can be (a) a resist layer grating comprising bars that are formed of solid resist lines, or (b) a product layer grating, or (c) a composite grating stack in an overlay target structure comprising a resist grating overlaid or interleaved on a product layer grating. The bars can alternatively be etched into the substrate.
  • This pattern is sensitive to chromatic aberrations in the lithographic projection apparatus, particularly the projection system PL, and to illumination symmetry; the presence of such aberrations will manifest itself in a variation in the printed grating.
  • One in-line method used in device manufacturing for measurements of line width, pitch, and critical dimension makes use of a technique known as “scatterometry”. Methods of scatterometry are described in Raymond et al., “Multiparameter Grating Metrology Using Optical Scatterometry”, J. Vac. Sci. Tech. B, Vol. 15, no. 2, pp. 361-368 (1997) and Niu et al., “Specular Spectroscopic Scatterometry in DUV Lithography”, SPIE, Vol.
  • beam splitter 414 can be further configured to receive diffraction radiation beam 419 and split diffraction radiation beam 419 into at least two radiation sub-beams, according to an aspect.
  • Diffraction radiation beam 419 can be split into diffraction radiation sub-beams 429 and 439, as shown in FIG. 4A.
  • beam splitter 414 is shown to direct radiation sub-beam 415 towards alignment mark or target 418 and to direct diffracted radiation sub-beam 429 towards interferometer 426, the disclosure is not so limiting. Other optical arrangements can be used to obtain the similar result of illuminating alignment mark or target 418 on substrate 420 and detecting an image of alignment mark or target 418.
  • interferometer 426 can be configured to receive radiation sub-beam 417 and diffracted radiation sub-beam 429 through beam splitter 414.
  • diffracted radiation sub-beam 429 can be at least a portion of radiation sub-beam 415 that can be reflected from alignment mark or target 418.
  • interferometer 426 comprises any appropriate set of optical elements, for example, a combination of prisms that can be configured to form two images of alignment mark or target 418 based on the received diffracted radiation sub-beam 429. It should be appreciated that a good quality image need not be formed. It can be enough to have the features of alignment mark 418 resolved.
  • Interferometer 426 can be further configured to rotate one of the two images with respect to the other of the two images 180° and recombine the rotated and unrotated images interferometrically.
  • measuring position variations for various polarizations (position shift between polarizations).
  • This data can be obtained using any type of alignment sensor, for example, a SMASH (SMart Alignment Sensor Hybrid) sensor, as described in U.S. Patent No. 6,961,116, which employs a self-referencing interferometer with a single detector and four different wavelengths and extracts the alignment signal in software, or Athena (Advanced Technology using High order ENhancement of Alignment), as described in U.S. Patent No. 6,297,876, which directs each of seven diffraction orders to a dedicated detector; both patents are incorporated by reference herein in their entireties.
  • beam analyzer 430 can be configured to receive and determine an optical state of diffracted radiation sub-beam 439.
  • the optical state can be a measure of beam wavelength, polarization, or beam profile.
  • Beam analyzer 430 can be further configured to determine a position of stage 422 and correlate the position of stage 422 with the position of the center of symmetry of alignment mark or target 418. As such, the position of alignment mark or target 418 and, consequently, the position of substrate 420 can be accurately known with reference to stage 422.
  • beam analyzer 430 can be configured to determine a position of inspection apparatus 400 or any other reference element such that the center of symmetry of alignment mark or target 418 can be known with reference to inspection apparatus 400 or any other reference element.
  • Beam analyzer 430 can be a point or an imaging polarimeter with some form of wavelength-band selectivity. In some aspects, beam analyzer 430 can be directly integrated into inspection apparatus 400, or connected via fiber optics of several types: polarization preserving single mode, multimode, or imaging, according to other aspects.
  • In some aspects, beam analyzer 430 can be further configured to determine the overlay data between two patterns on substrate 420. One of these patterns can be a reference pattern on a reference layer. The other pattern can be an exposed pattern on an exposed layer. The reference layer can be an etched layer already present on substrate 420. The reference layer can be generated by a reference pattern exposed on the substrate by lithographic apparatus 100 and/or 100’.
  • Beam analyzer 430 can be further configured to process information related to a particular property of an exposed pattern in that layer.
  • beam analyzer 430 can process an overlay parameter (an indication of the positioning accuracy of the layer with respect to a previous layer on the substrate or the positioning accuracy of the first layer with respect to marks on the substrate), a focus parameter, and/or a critical dimension parameter (e.g., line width and its variations) of the depicted image in the layer.
  • Other parameters are image parameters relating to the quality of the depicted image of the exposed pattern.
  • an array of detectors (not shown) can be connected to beam analyzer 430, and allows the possibility of accurate stack profile detection as discussed below.
  • detector 428 can be an array of detectors.
  • For the detector array, a number of options are possible: a bundle of multimode fibers, discrete PIN detectors per channel, or CCD or CMOS (linear) arrays.
  • the use of a bundle of multimode fibers enables any dissipating elements to be remotely located for stability reasons.
  • Discrete PIN detectors offer a large dynamic range, but each needs a separate pre-amplifier. The number of elements is therefore limited.
  • CCD linear arrays offer many elements that can be read out at high speed and are especially of interest if phase-stepping detection is used.
  • a second beam analyzer 430’ can be configured to receive and determine an optical state of diffracted radiation sub-beam 429, as shown in FIG. 4B.
  • the optical state can be a measure of beam wavelength, polarization, or beam profile.
  • Second beam analyzer 430’ can be identical to beam analyzer 430.
  • second beam analyzer 430’ can be configured to perform one or more of the functions of beam analyzer 430, such as determining a position of stage 422 and correlating the position of stage 422 with the position of the center of symmetry of alignment mark or target 418. As such, the position of alignment mark or target 418 and, consequently, the position of substrate 420, can be accurately known with reference to stage 422.
  • processor 432 receives information from detector 428 and beam analyzer 430.
  • processor 432 can be an overlay calculation processor.
  • the information can comprise a model of the product stack profile constructed by beam analyzer 430.
  • processor 432 can construct a model of the product mark profile using the received information about the product mark.
  • processor 432 constructs a model of the stacked product and overlay mark profile using or incorporating a model of the product mark profile. The stack model is then used to determine the overlay offset and minimizes the spectral effect on the overlay offset measurement.
  • Processor 432 can create a basic correction algorithm based on the information received from detector 428 and beam analyzer 430, including but not limited to the optical state of the illumination beam, the alignment signals, associated position estimates, and the optical state in the pupil, image, and additional planes.
  • the pupil plane is the plane in which the radial position of radiation defines the angle of incidence and the angular position defines the azimuth angle of the radiation.
  • Processor 432 can utilize the basic correction algorithm to characterize the inspection apparatus 400 with reference to wafer marks and/or alignment marks 418.
  • processor 432 can be further configured to determine printed pattern position offset error with respect to the sensor estimate for each mark based on the information received from detector 428 and beam analyzer 430.
  • the information includes but is not limited to the product stack profile, measurements of overlay, critical dimension, and focus of each alignment mark or target 418 on substrate 420.
  • Processor 432 can utilize a clustering algorithm to group the marks into sets of similar constant offset error, and create an alignment error offset correction table based on the information.
  • the clustering algorithm can be based on overlay measurement, the position estimates, and additional optical stack process information associated with each set of offset errors.
  • the overlay is calculated for a number of different marks, for example, overlay targets having a positive and a negative bias around a programmed overlay offset.
  • the target that measures the smallest overlay is taken as reference (as it is measured with the best accuracy). From this measured small overlay, and the known programmed overlay of its corresponding target, the overlay error can be deduced. Table 1 illustrates how this can be performed; a worked numeric sketch of this bookkeeping also follows this list.
  • the smallest measured overlay in the example shown is -1 nm. However this is in relation to a target with a programmed overlay of -30 nm. The process may have introduced an overlay error of 29 nm.
  • the smallest value can be taken to be the reference point and, relative to this, the offset can be calculated between the measured overlay and that expected due to the programmed overlay. This offset determines the overlay error for each mark or the sets of marks with similar offsets. Therefore, in the Table 1 example, the smallest measured overlay was -1 nm, at the target position with a programmed overlay of -30 nm. The difference between the expected and measured overlay at the other targets is compared to this reference. A table such as Table 1 can also be obtained from marks and target 418 under different illumination settings; the illumination setting that results in the smallest overlay error, and its corresponding calibration factor, can be determined and selected. Following this, processor 432 can group marks into sets of similar overlay error. The criteria for grouping marks can be adjusted based on different process controls, for example, different error tolerances for different processes.
  • processor 432 can confirm that all or most members of the group have similar offset errors, and apply an individual offset correction from the clustering algorithm to each mark, based on its additional optical stack metrology. Processor 432 can determine corrections for each mark and feed the corrections back to lithographic apparatus 100 or 100’ for correcting errors in the overlay, for example, by feeding corrections into the inspection apparatus 400.
  • enumerative adjectives (e.g., “first,” “second,” “third,” or the like) can be used to distinguish elements without specifying a particular order, hierarchy, or quantity; for example, “first wavelength” and “second wavelength” can be used in a manner analogous to “i-th wavelength” and “j-th wavelength” to distinguish two wavelengths.
  • an element in a drawing is not limited to any particular enumerative adjective.
  • detector 428, beam analyzer 430, and/or beam analyzer 430’ can comprise an image-based detector (e.g., a camera).
  • a camera can comprise multiple pixels to resolve an image (e.g., a charge-coupled device (CCD) camera).
  • Commercially available cameras can typically be optimized for the human viewing experience (e.g., red-green-blue (RGB) color sensitivity). To achieve RGB sensitivity, commercially available cameras can implement color filters at each pixel.
  • lock-in detection can use the principles of lock-in amplifiers to provide sensitive detection and selective filtering of weak or noisy signals and can improve SNR.
  • Lock-in amplifier techniques can provide improved accuracy, faster detection times, and reduced noise when performing optical measurements such as alignment position sensing, multi-angle scatterometry, or the like.
  • Lock-in detection can employ homodyne (single frequency) detection, heterodyne (multifrequency) detection, and other well-known variants and optimizations. For simplicity of discussion, one frequency per lock-in channel will be used to explain aspects disclosed herein (e.g., one modulation frequency per channel), but it should be understood that aspects of the disclosure are envisaged with other well-known lock-in detection features.
  • single channel lock-in detection can work by detecting a signal with an arbitrary number of frequency components.
  • the lock-in detector can be given a specific frequency to look for.
  • the lock-in detector can then filter out all frequency components except the component that has the specified modulation frequency fa (signal of interest).
  • the graph in FIG. 5A shows a composite signal 502 that can be received at a lock-in detector, according to some aspects.
  • FIGS. 5A, 5B, 5C, and 5D have vertical axes that represent an amplitude of a signal and horizontal axes that represent time.
  • additional constraints for modulation frequencies can be further defined so as to enhance a performance of detection system 804.
  • the total measurement time can comprise a first time period and a second time period.
  • the first time period can be the time at the beginning of the measurement, during which the inspection apparatus is in a steady state (e.g., illumination is on, modulation is operating, target is within field of view of the inspection apparatus).
  • the second time period can be the time during which the signal from the target is analyzed (e.g., t_meas).
  • the channel separation (in frequency) can be an exact multiple of the inverse of the sum of the first and second time periods.
  • pre-generated table 1302 can be implemented along with a circular-shift register 1314.
  • Circular-shift register 1314 can allow table 1302 to be repeated when t_meas is set to a time period that is longer than one cycle of a cosine/sine table.
  • a condition can be imposed such that summation operation 1310 be performed over an exact multiple of the corresponding modulation period.
  • a condition can be that summation operation 1310 is performed at exact multiples of each modulation period.
  • the use of pre-generated table 1302 can be more efficient than extrapolating or pre-loading additional elements that extend pre-generated table 1302; a minimal illustrative sketch of this table-based, multi-channel demodulation follows this list.
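The bullets above describe multi-channel lock-in detection built from a pre-generated cosine/sine table, a circular-shift register, summation over exact multiples of each modulation period, and channel frequencies placed on a grid tied to the measurement time. The following minimal sketch (Python/NumPy) illustrates that scheme for a single pixel trace; the sample rate, grid spacing, channel frequencies, amplitudes, phases, and noise level are illustrative assumptions, not values from the disclosure, and the modulo indexing stands in for circular-shift register 1314 while the dot products stand in for summation operation 1310.

```python
# Sketch: per-pixel, multi-channel lock-in demodulation with a pre-generated
# cosine/sine table and circular (modulo) indexing. All values are assumed
# and illustrative; not taken from the disclosure.
import numpy as np

fs = 10_000.0                      # pixel read-out sample rate [Hz] (assumed)
df = 10.0                          # modulation-frequency grid spacing [Hz] (assumed)
t_meas = 0.2                       # analysis window [s], an exact multiple of 1/df
n = int(round(fs * t_meas))        # samples analysed per pixel
t = np.arange(n) / fs

# One modulation frequency per illumination parameter (channel); all are
# exact multiples of df, so an integer number of periods fits in t_meas.
freqs = np.array([200.0, 300.0, 450.0])

# Simulated single-pixel measurement signal: a sum of modulated components
# plus read-out noise.
true_amp = np.array([1.0, 0.5, 0.2])
true_phase = np.array([0.3, -1.0, 2.0])          # [rad]
rng = np.random.default_rng(0)
pixel = sum(a * np.cos(2 * np.pi * f * t + p)
            for a, f, p in zip(true_amp, freqs, true_phase))
pixel = pixel + 0.05 * rng.standard_normal(n)

# Pre-generated cosine/sine tables spanning one cycle of the grid spacing df.
# Modulo indexing plays the role of the circular-shift register: the table is
# simply re-read when t_meas is longer than one table cycle.
table_len = int(round(fs / df))
table_time = np.arange(table_len) / fs
cos_table = np.cos(2 * np.pi * np.outer(freqs, table_time))
sin_table = np.sin(2 * np.pi * np.outer(freqs, table_time))
wrap = np.arange(n) % table_len

for ch, f in enumerate(freqs):
    cos_ref = cos_table[ch, wrap]
    sin_ref = sin_table[ch, wrap]
    # Summing over an exact multiple of every modulation period makes the
    # references orthogonal, so the other channels average out to ~zero.
    i_comp = 2.0 * np.dot(pixel, cos_ref) / n    # in-phase component
    q_comp = 2.0 * np.dot(pixel, sin_ref) / n    # quadrature component
    amp = np.hypot(i_comp, q_comp)
    phase = np.arctan2(-q_comp, i_comp)
    print(f"{f:5.0f} Hz channel: amplitude ~ {amp:.3f}, phase ~ {phase:+.3f} rad")
```

With these assumed inputs the recovered amplitudes come out close to 1.0, 0.5, and 0.2 and the phases close to +0.3, -1.0, and +2.0 rad, illustrating how one measurement window yields a phase and amplitude per modulation frequency.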
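Separately, for the reference-target bookkeeping described in the overlay bullets above (the smallest measured overlay taken as the reference, the overlay error deduced from its known programmed overlay, and offsets computed for the remaining targets), here is a short worked sketch. Table 1 is not reproduced in this text, so the programmed and measured values below are illustrative assumptions chosen only to reproduce the -1 nm measured / -30 nm programmed / 29 nm error example quoted above.

```python
# Worked sketch of deducing the process overlay error from biased overlay
# targets. Values are illustrative assumptions, not data from the disclosure.
programmed = [-30.0, -10.0, 10.0, 30.0]   # programmed overlay per target [nm]
measured = [-1.0, 19.5, 39.0, 58.5]       # measured overlay per target [nm]

# The target measuring the smallest overlay is taken as the reference, since
# it is measured with the best accuracy.
ref = min(range(len(measured)), key=lambda i: abs(measured[i]))
overlay_error = measured[ref] - programmed[ref]
print(f"reference target: programmed {programmed[ref]:+.0f} nm, "
      f"measured {measured[ref]:+.0f} nm -> overlay error {overlay_error:+.0f} nm")

# Relative to this reference, compute each target's offset between the
# measured overlay and the overlay expected from its programmed bias; marks
# with similar offsets can then be grouped by the clustering step.
for p, m in zip(programmed, measured):
    offset = m - (p + overlay_error)
    print(f"programmed {p:+6.1f} nm: measured {m:+6.1f} nm, offset {offset:+5.1f} nm")
```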

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)

Abstract

A metrology system can include an illumination system, a camera, and an analyzer system. The illumination system transmits illumination toward a target. The illumination has a plurality of illumination parameters associated with a corresponding plurality of modulation frequencies. The camera receives scattered illumination from the target and generates, per pixel of the camera, a measurement signal encoded with signatures of the plurality of modulation frequencies. The analyzer system, per pixel of the camera, demodulates the measurement signal based on the plurality of modulation frequencies and outputs a phase, an amplitude, or the phase and amplitude of demodulated components of the measurement signal corresponding to the modulation frequencies.

Description

MULTICHANNEL LOCK-IN CAMERA FOR MULTI-PARAMETER SENSING IN LITHOGRAPHIC PROCESSES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority of US application 63/477,929, which was filed on 30 December 2022, and US application 63/509,432, which was filed on 21 June 2023, both of which are incorporated herein in their entireties by reference.
FIELD
[0002] The present disclosure relates to inspection sensors, for example, alignment and scatterometer sensors used in connection with lithographic processes.
BACKGROUND
[0003] A lithographic apparatus is a machine that applies a desired pattern onto a substrate, usually onto a target portion of the substrate. A lithographic apparatus can be used, for example, in the manufacture of integrated circuits (ICs). In that instance, a patterning device, which can be a mask or a reticle, can be used to generate a circuit pattern to be formed on an individual layer of the IC. This pattern can be transferred onto a target portion (e.g., comprising part of, one, or several dies) on a substrate (e.g., a silicon wafer). Transfer of the pattern is typically via imaging onto a layer of radiation-sensitive material (photoresist or simply “resist”) provided on the substrate. In general, a single substrate will contain a network of adjacent target portions that are successively patterned. Known lithographic apparatuses include so-called steppers, in which each target portion is irradiated by exposing an entire pattern onto the target portion at one time, and so-called scanners, in which each target portion is irradiated by scanning the pattern through a radiation beam in a given direction (the “scanning” direction) while synchronously scanning the target portions parallel or anti-parallel to this scanning direction. It is also possible to transfer the pattern from the patterning device to the substrate by imprinting the pattern onto the substrate.
[0004] During lithographic operation, different processing steps can require different layers to be sequentially formed on the substrate. Accordingly, it can be necessary to position the substrate relative to prior patterns formed thereon with a high degree of accuracy. Generally, alignment marks are placed on the substrate to be aligned and are located with reference to a second object. A lithographic apparatus can use an alignment apparatus for detecting positions of the alignment marks and for aligning the substrate using the alignment marks to ensure accurate exposure from a mask. Misalignment between the alignment marks at two different layers is measured as overlay error.
[0005] In order to monitor the lithographic process, parameters of the patterned substrate are measured. Parameters can include, for example, the overlay error between successive layers formed in or on the patterned substrate and critical linewidth of developed photosensitive resist. This measurement can be performed on a product substrate and/or on a dedicated metrology target. There are various techniques for making measurements of the microscopic structures formed in lithographic processes, including the use of scanning electron microscopes and various specialized tools. A fast and non-invasive form of a specialized inspection tool is a scatterometer in which a beam of radiation is directed onto a target on the surface of the substrate and properties of the scattered or reflected beam are measured. By comparing the properties of the beam before and after it has been reflected or scattered by the substrate, the properties of the substrate can be determined. This can be done, for example, by comparing the reflected beam with data stored in a library of known measurements associated with known substrate properties. Spectroscopic scatterometers direct a broadband radiation beam onto the substrate and measure the spectrum (intensity as a function of wavelength) of the radiation scattered into a particular narrow angular range. By contrast, angularly resolved scatterometers use a monochromatic radiation beam and measure the intensity of the scattered radiation as a function of angle.
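Paragraph [0005] mentions determining substrate properties by comparing the reflected beam with data stored in a library of known measurements. As a minimal, hedged illustration of that idea only (the disclosure does not specify the matching method), a measured spectrum can be compared against pre-computed library spectra and the closest entry selected; the single-parameter library, wavelengths, and numbers below are invented purely for illustration.

```python
# Minimal sketch of library-based reconstruction, assuming the measured
# quantity is a reflectance spectrum sampled at a handful of wavelengths.
# Library entries and the "critical dimension" parameter are illustrative.
import numpy as np

library_params = np.array([38.0, 40.0, 42.0, 44.0])      # known CD values [nm]
library_spectra = np.array([                              # simulated spectra
    [0.52, 0.48, 0.41, 0.37],
    [0.55, 0.50, 0.43, 0.38],
    [0.58, 0.52, 0.45, 0.40],
    [0.61, 0.55, 0.47, 0.41],
])

measured_spectrum = np.array([0.57, 0.51, 0.44, 0.40])    # measured on the target

# Nearest-match search: pick the library entry minimising the squared residual.
residuals = np.sum((library_spectra - measured_spectrum) ** 2, axis=1)
best = int(np.argmin(residuals))
print(f"best-matching parameter: {library_params[best]} nm "
      f"(residual {residuals[best]:.4f})")
```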
[0006] Such optical scatterometers can be used to measure parameters, such as critical dimensions of developed photosensitive resist or overlay error (OV) between two layers formed in or on the patterned substrate. Properties of the substrate can be determined by comparing the properties of an illumination beam before and after the beam has been reflected or scattered by the substrate.
[0007] A lithographic system can output only a finite number of fabricated devices in a given timeframe. There is demand for faster lithographic fabrication, which in turn drives advances in faster inspection techniques. Optical inspection of a target on a wafer can be performed using a plurality of photon wavelengths. A given wavelength can provide information about the target that may not be readily apparent with another wavelength. Using multiple parameters during inspection, such as multiple wavelengths, can come with a time cost, thereby slowing lithographic fabrication speeds.
SUMMARY
[0008] Accordingly, it is desirable to improve multi-parameter inspection techniques so as to increase fabrication speed and throughput.
[0009] In some aspects, a metrology system can comprise an illumination system, a camera, and an analyzer system. The illumination system is configured to transmit illumination toward a target. The illumination has a plurality of illumination parameters associated with a corresponding plurality of modulation frequencies. The camera is configured to receive a scattered illumination from the target. The camera is further configured to generate, per pixel of the camera, a measurement signal encoded with signatures of the plurality of modulation frequencies. The analyzer system is configured to, per pixel of the camera, demodulate the measurement signal based on the plurality of modulation frequencies. The analyzer system is further configured to output a phase, an amplitude, or the phase and amplitude of demodulated components of the measurement signal corresponding to the modulation frequencies.
[0010] In some aspects, a lithographic apparatus comprises an illumination source, a projection system, and a metrology system. The illumination source is configured to illuminate a pattern of a patterning device. The projection system is configured to project an image of the pattern onto a substrate. The metrology system can comprise an illumination system, a camera, and an analyzer system. The illumination system is configured to transmit illumination toward a target. The illumination has a plurality of illumination parameters associated with a corresponding plurality of modulation frequencies. The camera is configured to receive a scattered illumination from the target. The camera is further configured to generate, per pixel of the camera, a measurement signal encoded with signatures of the plurality of modulation frequencies. The analyzer system is configured to, per pixel of the camera, demodulate the measurement signal based on the plurality of modulation frequencies. The analyzer system is further configured to output a phase, an amplitude, or the phase and amplitude of demodulated components of the measurement signal corresponding to the modulation frequencies.
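As a rough illustration of the summary above (one modulation frequency per illumination parameter, a per-pixel measurement signal carrying all of their signatures, and per-pixel demodulation into one value per channel), the following sketch simulates a tiny camera. The frame rate, frequencies, pixel responses, and noise are assumptions for illustration only; a real analyzer system would additionally recover phase and handle synchronization with the modulators.

```python
# Minimal system-level sketch: several assumed illumination parameters are
# modulated at distinct frequencies, every camera pixel integrates their sum,
# and demodulating each pixel's time trace recovers one image per parameter
# from a single acquisition. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
frame_rate = 2_000.0                        # camera frames per second (assumed)
n_frames = 400                              # 0.2 s acquisition
t = np.arange(n_frames) / frame_rate
freqs = np.array([100.0, 150.0, 250.0])     # one modulation frequency per parameter

h, w = 4, 4                                 # tiny pixel array for the example
response = rng.uniform(0.1, 1.0, size=(len(freqs), h, w))   # per-pixel response

# Per-pixel measurement signal: sum over channels of response * (1 + cos(2*pi*f*t)),
# i.e., a non-negative intensity carrying the signature of every channel.
frames = np.zeros((n_frames, h, w))
for k, f in enumerate(freqs):
    frames += response[k] * (1.0 + np.cos(2 * np.pi * f * t))[:, None, None]
frames += 0.01 * rng.standard_normal(frames.shape)          # read-out noise

# Per-pixel demodulation at each known modulation frequency (amplitude only).
recovered = np.empty_like(response)
for k, f in enumerate(freqs):
    i_comp = 2.0 * np.tensordot(np.cos(2 * np.pi * f * t), frames, axes=1) / n_frames
    q_comp = 2.0 * np.tensordot(np.sin(2 * np.pi * f * t), frames, axes=1) / n_frames
    recovered[k] = np.hypot(i_comp, q_comp)                  # one image per channel

print("max reconstruction error per channel:",
      np.max(np.abs(recovered - response), axis=(1, 2)))
```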
[0011] Aspects of the present disclosure are described in detail below with reference to the accompanying drawings. It is noted that the present disclosure is not limited to the specific aspects described herein. Such aspects are presented herein for illustrative purposes only. Additional aspects will be apparent to those skilled in the relevant art(s) based on the teachings contained herein.
BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES
[0012] The accompanying drawings, which are incorporated herein and form part of the specification, illustrate the present disclosure and, together with the description, further serve to explain the principles of the present disclosure and to enable those skilled in the relevant art(s) to make and use aspects described herein.
[0013] FIG. 1A shows a reflective lithographic apparatus, according to some aspects.
[0014] FIG. 1B shows a transmissive lithographic apparatus, according to some aspects.
[0015] FIG. 2 shows more details of a reflective lithographic apparatus, according to some aspects.
[0016] FIG. 3 shows a lithographic cell, according to some aspects.
[0017] FIGS. 4A and 4B show inspection apparatuses, according to some aspects.
[0018] FIGS. 5A, 5B, 5C, and 5D show signals that can be received at a lock-in detector, according to some aspects.
[0019] FIG. 6 shows an inspection apparatus, according to some aspects.
[0020] FIG. 7 shows a flow diagram of a detector, according to some aspects.
[0021] FIGS. 8 and 9 show flow diagrams of detection systems, according to some aspects.
[0022] FIG. 10 shows a detector, according to some aspects.
[0023] FIGS. 11A and 11B show a pupil plane through which beams of illumination are propagated, according to some aspects.
[0024] FIG. 12 shows a computer system, according to some aspects.
[0025] FIGS. 13 and 14 show flow diagrams of operations that can be implemented with detection systems, according to some aspects.
[0026] The features of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. Additionally, generally, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears. Unless otherwise indicated, the drawings provided throughout the disclosure should not be interpreted as to-scale drawings.
DETAILED DESCRIPTION
[0027] The aspects described herein, and references in the specification to “one aspect,” “an aspect,” “an exemplary aspect,” “an example aspect,” etc., indicate that the aspects described can include a particular feature, structure, or characteristic, but every aspect may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same aspect. Further, when a particular feature, structure, or characteristic is described in connection with an aspect, it is understood that it is within the knowledge of those skilled in the art to effect such feature, structure, or characteristic in connection with other aspects whether or not explicitly described.
[0028] Spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “on,” “upper” and the like, can be used herein for ease of description to describe one element or feature’s relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus can be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein can likewise be interpreted accordingly.
[0029] The terms “about,” “approximately,” or the like can be used herein to indicate the value of a given quantity that can vary based on a particular technology. Based on the particular technology, the terms “about,” “approximately,” or the like can indicate a value of a given quantity that varies within, for example, 10-30% of the value (e.g., ±10%, ±20%, or ±30% of the value).
[0030] Aspects of the present disclosure can be implemented in hardware, firmware, software, or any combination thereof. Aspects of the disclosure can also be implemented as instructions stored on a computer-readable medium, which can be read and executed by one or more processors. A machine-readable medium can include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium can include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others. Furthermore, firmware, software, routines, and/or instructions can be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc. The term “machine-readable medium” can be interchangeable with similar terms, for example, “computer program product,” “computer-readable medium,” “non-transitory computer-readable medium,” or the like. The term “non-transitory” can be used herein to characterize one or more forms of computer readable media except for a transitory, propagating signal.
[0031] Before describing such aspects in more detail, however, it is instructive to present an example environment in which aspects of the present disclosure can be implemented.
[0032] Example Lithographic Systems
[0033] FIGS. 1A and 1B show a lithographic apparatus 100 and a lithographic apparatus 100’, respectively, in which aspects of the present disclosure can be implemented. Lithographic apparatus 100 and lithographic apparatus 100’ each include the following: an illumination system (illuminator) IL configured to condition a radiation beam B (for example, deep ultra violet or extreme ultra violet radiation); a support structure (for example, a mask table) MT configured to support a patterning device (for example, a mask, a reticle, or a dynamic patterning device) MA and connected to a first positioner PM configured to accurately position the patterning device MA; and, a substrate table (for example, a wafer table) WT configured to hold a substrate (for example, a resist coated wafer) W and connected to a second positioner PW configured to accurately position the substrate W. Lithographic apparatus 100 and 100’ also have a projection system PS configured to project a pattern imparted to the radiation beam B by patterning device MA onto a target portion (for example, comprising one or more dies) C of the substrate W. In lithographic apparatus 100, the patterning device MA and the projection system PS are reflective. In lithographic apparatus 100’, the patterning device MA and the projection system PS are transmissive.
[0034] The illumination system IL can include various types of optical components, such as refractive, reflective, catadioptric, magnetic, electromagnetic, electrostatic, or other types of optical components, or any combination thereof, for directing, shaping, or controlling the radiation beam B.
[0035] The support structure MT holds the patterning device MA in a manner that depends on the orientation of the patterning device MA with respect to a reference frame, the design of at least one of the lithographic apparatus 100 and 100’, and other conditions, such as whether or not the patterning device MA is held in a vacuum environment. The support structure MT can use mechanical, vacuum, electrostatic, or other clamping techniques to hold the patterning device MA. The support structure MT can be a frame or a table, for example, which can be fixed or movable. By using sensors, the support structure MT can ensure that the patterning device MA is at a desired position, for example, with respect to the projection system PS.
[0036] The term “patterning device” MA should be broadly interpreted as referring to any device that can be used to impart a radiation beam B with a pattern in its cross-section, such as to create a pattern in the target portion C of the substrate W. The pattern imparted to the radiation beam B can correspond to a particular functional layer in a device being created in the target portion C to form an integrated circuit.
[0037] The patterning device MA can be transmissive (as in lithographic apparatus 100’ of FIG. 1B) or reflective (as in lithographic apparatus 100 of FIG. 1A). Examples of patterning devices MA include reticles, masks, programmable mirror arrays, or programmable LCD panels. Masks are well known in lithography, and include mask types such as binary, alternating phase shift, or attenuated phase shift, as well as various hybrid mask types. An example of a programmable mirror array employs a matrix arrangement of small mirrors, each of which can be individually tilted so as to reflect an incoming radiation beam in different directions. The tilted mirrors impart a pattern to the radiation beam B, which is reflected by the mirror matrix.
[0038] The term “projection system” PS can encompass any type of projection system, including refractive, reflective, catadioptric, magnetic, electromagnetic and electrostatic optical systems, or any combination thereof, as appropriate for the exposure radiation being used, or for other factors, such as the use of an immersion liquid on the substrate W or the use of a vacuum. A vacuum environment can be used for EUV or electron beam radiation since other gases can absorb too much radiation or electrons. A vacuum environment can therefore be provided to the whole beam path with the aid of a vacuum wall and vacuum pumps.
[0039] Lithographic apparatus 100 and/or lithographic apparatus 100’ can be of a type having two (dual stage) or more substrate tables WT (and/or two or more mask tables). In such “multiple stage” machines, the additional substrate tables WT can be used in parallel, or preparatory steps can be carried out on one or more tables while one or more other substrate tables WT are being used for exposure. In some situations, the additional table may not be a substrate table WT.
[0040] The lithographic apparatus can also be of a type wherein at least a portion of the substrate can be covered by a liquid having a relatively high refractive index, e.g., water, so as to fill a space between the projection system and the substrate. An immersion liquid can also be applied to other spaces in the lithographic apparatus, for example, between the mask and the projection system. Immersion techniques are well known in the art for increasing the numerical aperture of projection systems. The term “immersion” as used herein does not mean that a structure, such as a substrate, must be submerged in liquid. For example, a liquid can be located between the projection system and the substrate during exposure.
[0041] Referring to FIGS. 1A and 1B, the illuminator IL receives a radiation beam from a radiation source SO. The source SO and the lithographic apparatus 100, 100’ can be separate physical entities, for example, when the source SO is an excimer laser. In such cases, the source SO is not considered to form part of the lithographic apparatus 100 or 100’, and the radiation beam B passes from the source SO to the illuminator IL with the aid of a beam delivery system BD (in FIG. 1B) including, for example, suitable directing mirrors and/or a beam expander. In other cases, the source SO can be an integral part of the lithographic apparatus 100, 100’, for example, when the source SO is a mercury lamp. A radiation system can comprise the source SO, the illuminator IL, and/or the beam delivery system BD.
[0042] The illuminator IL can include an adjuster AD (in FIG. 1B) for adjusting the angular intensity distribution of the radiation beam. Generally, at least the outer and/or inner radial extent (commonly referred to as “σ-outer” and “σ-inner,” respectively) of the intensity distribution in a pupil plane of the illuminator can be adjusted. In addition, the illuminator IL can comprise various other components (in FIG. 1B), such as an integrator IN and a condenser CO. The illuminator IL can be used to condition the radiation beam B to have a desired uniformity and intensity distribution in its cross section.
[0043] Referring to FIG. 1A, the radiation beam B is incident on the patterning device (for example, mask) MA, which is held on the support structure (for example, mask table) MT, and is patterned by the patterning device MA. In lithographic apparatus 100, the radiation beam B is reflected from the patterning device (for example, mask) MA. After being reflected from the patterning device (for example, mask) MA, the radiation beam B passes through the projection system PS, which focuses the radiation beam B onto a target portion C of the substrate W. With the aid of the second positioner PW and position sensor IF2 (for example, an interferometric device, linear encoder, or capacitive sensor), the substrate table WT can be moved accurately (for example, so as to position different target portions C in the path of the radiation beam B). Similarly, the first positioner PM and another position sensor IF1 can be used to accurately position the patterning device (for example, mask) MA with respect to the path of the radiation beam B. Patterning device (for example, mask) MA and substrate W can be aligned using mask alignment marks M1, M2 and substrate alignment marks P1, P2.
[0044] Referring to FIG. 1B, the radiation beam B is incident on the patterning device (for example, mask MA), which is held on the support structure (for example, mask table MT), and is patterned by the patterning device. Having traversed the mask MA, the radiation beam B passes through the projection system PS, which focuses the beam onto a target portion C of the substrate W. The projection system has a pupil conjugate PPU to an illumination system pupil IPU. Portions of radiation emanate from the intensity distribution at the illumination system pupil IPU, traverse a mask pattern without being affected by diffraction at the mask pattern, and create an image of the intensity distribution at the illumination system pupil IPU.
[0045] The projection system PS projects an image of the mask pattern MP, where the image is formed by diffracted beams produced from the mask pattern MP by radiation from the intensity distribution, onto a photoresist layer coated on the substrate W. For example, the mask pattern MP can include an array of lines and spaces. Diffraction of radiation at the array, other than zeroth-order diffraction, generates diverted diffracted beams with a change of direction perpendicular to the lines. Undiffracted beams (i.e., so-called zeroth order diffracted beams) traverse the pattern without any change in propagation direction. The zeroth order diffracted beams traverse an upper lens or upper lens group of the projection system PS, upstream of the pupil conjugate PPU of the projection system PS, to reach the pupil conjugate PPU. The portion of the intensity distribution in the plane of the pupil conjugate PPU and associated with the zeroth order diffracted beams is an image of the intensity distribution in the illumination system pupil IPU of the illumination system IL. The aperture device PD, for example, is disposed at or substantially at a plane that includes the pupil conjugate PPU of the projection system PS.
[0046] The projection system PS is arranged to capture (e.g., using a lens or lens group L) the zeroth order diffracted beams, first order diffracted beams, and/or higher order diffracted beams (not shown). In some aspects, dipole illumination for imaging line patterns extending in a direction perpendicular to a line can be used to utilize the resolution enhancement effect of dipole illumination. For example, first-order diffracted beams interfere with corresponding zeroth-order diffracted beams at the level of the wafer W to create an image of the line pattern MP at the highest possible resolution and process window (i.e., usable depth of focus in combination with tolerable exposure dose deviations). In some aspects, astigmatism aberration can be reduced by providing radiation poles (not shown) in opposite quadrants of the illumination system pupil IPU. Further, in some aspects, astigmatism aberration can be reduced by blocking the zeroth order beams in the pupil conjugate PPU of the projection system associated with radiation poles in opposite quadrants. This is described in more detail in US 7,511,799 B2, issued Mar. 31, 2009, which is incorporated by reference herein in its entirety.
[0047] With the aid of the second positioner PW and position sensor IFD (for example, an interferometric device, linear encoder, or capacitive sensor), the substrate table WT can be moved accurately (for example, so as to position different target portions C in the path of the radiation beam B). Similarly, the first positioner PM and another position sensor (not shown in FIG. 1B) can be used to accurately position the mask MA with respect to the path of the radiation beam B (for example, after mechanical retrieval from a mask library or during a scan).
[0048] In general, movement of the mask table MT can be realized with the aid of a long-stroke module (coarse positioning) and a short-stroke module (fine positioning), which form part of the first positioner PM. Similarly, movement of the substrate table WT can be realized using a long-stroke module and a short-stroke module, which form part of the second positioner PW. In the case of a stepper (as opposed to a scanner), the mask table MT can be connected to a short-stroke actuator or can be fixed. Mask MA and substrate W can be aligned using mask alignment marks M1, M2, and substrate alignment marks P1, P2. Although the substrate alignment marks (as illustrated) occupy dedicated target portions, they can be located in spaces between target portions (known as scribe-lane alignment marks). Similarly, in situations in which more than one die is provided on the mask MA, the mask alignment marks can be located between the dies.
[0049] Mask table MT and patterning device MA can be in a vacuum chamber V, where an in-vacuum robot IVR can be used to move patterning devices such as a mask in and out of the vacuum chamber. Alternatively, when mask table MT and patterning device MA are outside of the vacuum chamber, an out-of-vacuum robot can be used for various transportation operations, similar to the in-vacuum robot IVR. Both the in-vacuum and out-of-vacuum robots can be calibrated for a smooth transfer of any payload (e.g., mask) to a fixed kinematic mount of a transfer station.
[0050] The lithographic apparatus 100 and 100’ can be used in at least one of the following modes: [0051] 1. In step mode, the support structure (for example, mask table) MT and the substrate table WT are kept essentially stationary, while an entire pattern imparted to the radiation beam B is projected onto a target portion C at one time (i.e., a single static exposure). The substrate table WT is then shifted in the X and/or Y direction so that a different target portion C can be exposed.
[0052] 2. In scan mode, the support structure (for example, mask table) MT and the substrate table WT are scanned synchronously while a pattern imparted to the radiation beam B is projected onto a target portion C (i.e., a single dynamic exposure). The velocity and direction of the substrate table WT relative to the support structure (for example, mask table) MT can be determined by the (de-)magnification and image reversal characteristics of the projection system PS.
[0053] 3. In another mode, the support structure (for example, mask table) MT is kept substantially stationary holding a programmable patterning device, and the substrate table WT is moved or scanned while a pattern imparted to the radiation beam B is projected onto a target portion C. A pulsed radiation source SO can be employed and the programmable patterning device is updated as needed after each movement of the substrate table WT or in between successive radiation pulses during a scan. This mode of operation can be readily applied to maskless lithography that utilizes a programmable patterning device, such as a programmable mirror array.
[0054] Combinations and/or variations on the described modes of use or entirely different modes of use can also be employed.
[0055] In a further aspect, lithographic apparatus 100 includes an extreme ultraviolet (EUV) source, which is configured to generate a beam of EUV radiation for EUV lithography. In general, the EUV source is configured in a radiation system, and a corresponding illumination system is configured to condition the EUV radiation beam of the EUV source.
[0056] FIG. 2 shows the lithographic apparatus 100 in more detail, including the source collector apparatus SO, the illumination system IL, and the projection system PS. The source collector apparatus SO is constructed and arranged such that a vacuum environment can be maintained in an enclosing structure 220 of the source collector apparatus SO. An EUV radiation emitting plasma 210 can be formed by a discharge produced plasma source. In some aspects, a plasma of excited tin (Sn) (e.g., excited via a laser) is provided to produce EUV radiation.
[0057] The radiation emitted by the EUV radiation emitting plasma 210 is passed from a source chamber 211 into a collector chamber 212 via an optional gas barrier or contaminant trap 230 (in some cases also referred to as contaminant barrier or foil trap), which is positioned in or behind an opening in source chamber 211. The contaminant trap 230 can include a channel structure. Contamination trap 230 can also include a gas barrier or a combination of a gas barrier and a channel structure. As further indicated herein, the contaminant trap or contaminant barrier 230 at least includes a channel structure.
[0058] The collector chamber 212 can include a radiation collector CO, which can be a so-called grazing incidence collector. Radiation collector CO has an upstream radiation collector side 251 and a downstream radiation collector side 252. Radiation that traverses collector CO can be reflected off a grating spectral filter 240 to be focused in a virtual source point INTF. The virtual source point INTF is commonly referred to as the intermediate focus, and the source collector apparatus is arranged such that the intermediate focus INTF is located at or near an opening 219 in the enclosing structure 220. The virtual source point INTF is an image of the EUV radiation emitting plasma 210. Grating spectral filter 240 is used in particular for suppressing infra-red (IR) radiation.
[0059] Subsequently the radiation traverses the illumination system IL, which can include a faceted field mirror device 222 and a faceted pupil mirror device 224 arranged to provide a desired angular distribution of the radiation beam 221, at the patterning device MA, as well as a desired uniformity of radiation intensity at the patterning device MA. Upon reflection of the beam of radiation 221 at the patterning device MA, held by the support structure MT, a patterned beam 226 is formed and the patterned beam 226 is imaged by the projection system PS via reflective elements 228, 229 onto a substrate W held by the wafer stage or substrate table WT.
[0060] More elements than shown can generally be present in illumination optics unit IL and projection system PS. The grating spectral filter 240 can optionally be present, depending upon the type of lithographic apparatus. Further, there can be more mirrors present than those shown in FIG. 2; for example, there can be one to six additional reflective elements present in the projection system PS beyond those shown in FIG. 2.
[0061] Collector optic CO, as illustrated in FIG. 2, is depicted as a nested collector with grazing incidence reflectors 253, 254, and 255, just as an example of a collector (or collector mirror). The grazing incidence reflectors 253, 254, and 255 are disposed axially symmetric around an optical axis O and a collector optic CO of this type is preferably used in combination with a discharge produced plasma source, often called a DPP source.
[0062] Example Lithographic Cell
[0063] FIG. 3 shows a lithographic cell 300, also sometimes referred to as a lithocell or cluster, according to some aspects. Lithographic apparatus 100 or 100’ can form part of lithographic cell 300. Lithographic cell 300 can also include one or more apparatuses to perform pre- and post-exposure processes on a substrate. Conventionally these include spin coaters SC to deposit resist layers, developers DE to develop exposed resist, chill plates CH, and bake plates BK. A substrate handler, or robot, RO picks up substrates from input/output ports I/O1, I/O2, moves them between the different process apparatuses, and delivers them to the loading bay LB of the lithographic apparatus 100 or 100’. These devices, which are often collectively referred to as the track, are under the control of a track control unit TCU, which is itself controlled by a supervisory control system SCS, which also controls the lithographic apparatus via lithography control unit LACU. Thus, the different apparatuses can be operated to maximize throughput and processing efficiency.
[0064] Example Inspection Apparatus
[0065] In order to control the lithographic process to place device features accurately on the substrate, alignment marks are generally provided on the substrate, and the lithographic apparatus includes one or more inspection apparatuses for accurate positioning of marks on a substrate. These alignment apparatuses are effectively position measuring apparatuses. Different types of marks and different types of alignment apparatuses and/or systems are known from different times and different manufacturers. A type of system widely used in current lithographic apparatus is based on a self-referencing interferometer as described in U.S. Patent No. 6,961,116 (den Boef et al.). Generally marks are measured separately to obtain X- and Y-positions. A combined X- and Y-measurement can be performed using the techniques described in U.S. Publication No. 2009/195768 A (Bijnen et al.), however. The full contents of both of these disclosures are incorporated herein by reference.
[0066] FIG. 4A shows a cross-sectional view of an inspection apparatus 400 that can be implemented as a part of lithographic apparatus 100 or 100’, according to some aspects. In some aspects, inspection apparatus 400 can be configured to align a substrate (e.g., substrate W) with respect to a patterning device (e.g., patterning device MA). Inspection apparatus 400 can be further configured to detect positions of alignment marks on the substrate and to align the substrate with respect to the patterning device or other components of lithographic apparatus 100 or 100’ using the detected positions of the alignment marks. Such alignment of the substrate can ensure accurate exposure of one or more patterns on the substrate.
[0067] The terms “inspection apparatus,” “metrology system,” or the like can be used herein to refer to, e.g., a device used for measuring a property of a structure (e.g., overlay sensor, critical dimension sensor, or the like), a device or system used in a lithographic apparatus to inspect an alignment of a wafer (e.g., alignment sensor), or the like.
[0068] In some aspects, inspection apparatus 400 can include an illumination system 412, a beam splitter 414, an interferometer 426, a detector 428, a beam analyzer 430, and an overlay calculation processor 432. Illumination system 412 can be configured to provide an electromagnetic narrow band radiation beam 413 having one or more passbands. In an example, the one or more passbands can be within a spectrum of wavelengths between about 500 nm and about 900 nm. In another example, the one or more passbands can be discrete narrow passbands within a spectrum of wavelengths between about 500 nm and about 900 nm. Illumination system 412 can be further configured to provide one or more passbands having substantially constant center wavelength (CWL) values over a long period of time (e.g., over a lifetime of illumination system 412). Such configuration of illumination system 412 can help to prevent the shift of the actual CWL values from the desired CWL values, as discussed above, in current alignment systems. And, as a result, the use of constant CWL values can improve long-term stability and accuracy of alignment systems (e.g., inspection apparatus 400) compared to the current alignment apparatuses.
[0069] In some aspects, beam splitter 414 can be configured to receive radiation beam 413 and split radiation beam 413 into at least two radiation sub-beams. For example, radiation beam 413 can be split into radiation sub-beams 415 and 417, as shown in FIG. 4A. Beam splitter 414 can be further configured to direct radiation sub-beam 415 onto a substrate 420 placed on a stage 422. In one example, the stage 422 is movable along direction 424. Radiation sub-beam 415 can be configured to illuminate an alignment mark or a target 418 located on substrate 420. Alignment mark or target 418 can be coated with a radiation sensitive film. In some aspects, alignment mark or target 418 can have one hundred and eighty degrees (i.e., 180°) symmetry. That is, when alignment mark or target 418 is rotated 180° about an axis of symmetry perpendicular to a plane of alignment mark or target 418, rotated alignment mark or target 418 can be substantially identical to an unrotated alignment mark or target 418. The target 418 on substrate 420 can be (a) a resist layer grating comprising bars that are formed of solid resist lines, or (b) a product layer grating, or (c) a composite grating stack in an overlay target structure comprising a resist grating overlaid or interleaved on a product layer grating. The bars can alternatively be etched into the substrate. This pattern is sensitive to chromatic aberrations in the lithographic projection apparatus (particularly the projection system PL) and to illumination symmetry, and the presence of such aberrations will manifest itself in a variation in the printed grating. One in-line method used in device manufacturing for measurements of line width, pitch, and critical dimension makes use of a technique known as “scatterometry”. Methods of scatterometry are described in Raymond et al., “Multiparameter Grating Metrology Using Optical Scatterometry”, J. Vac. Sci. Tech. B, Vol. 15, no. 2, pp. 361-368 (1997) and Niu et al., “Specular Spectroscopic Scatterometry in DUV Lithography”, SPIE, Vol. 3677 (1999), which are both incorporated by reference herein in their entireties. In scatterometry, light is reflected by periodic structures in the target, and the resulting reflection spectrum at a given angle is detected. The structure giving rise to the reflection spectrum is reconstructed, e.g., using Rigorous Coupled-Wave Analysis (RCWA) or by comparison to a library of patterns derived by simulation. Accordingly, the scatterometry data of the printed gratings is used to reconstruct the gratings. The parameters of the grating, such as line widths and shapes, can be input to the reconstruction process, performed by processing unit PU, from knowledge of the printing step and/or other scatterometry processes.
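For illustration of the library-comparison approach to scatterometry reconstruction mentioned above, the following minimal Python sketch selects the simulated spectrum that best matches a measured reflection spectrum and returns its grating parameters. The function name, array shapes, and least-squares matching criterion are illustrative assumptions and are not taken from the disclosure.

```python
import numpy as np

def reconstruct_grating(measured_spectrum, library_spectra, library_params):
    """Pick the library entry whose simulated reflection spectrum best matches
    the measured spectrum (least squares) and return its grating parameters.

    measured_spectrum : (W,) array of reflectivity vs. wavelength
    library_spectra   : (K, W) array of simulated spectra (e.g., from RCWA)
    library_params    : length-K list of parameter sets used in the simulations
    """
    residuals = np.sum((library_spectra - measured_spectrum) ** 2, axis=1)
    best = int(np.argmin(residuals))
    return library_params[best], residuals[best]
```

In practice, RCWA-based regression or more refined search strategies can replace the simple nearest-match shown here.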
[0070] In some aspects, beam splitter 414 can be further configured to receive diffraction radiation beam 419 and split diffraction radiation beam 419 into at least two radiation sub-beams, according to an aspect. Diffraction radiation beam 419 can be split into diffraction radiation sub-beams 429 and 439, as shown in FIG. 4A.
[0071] It should be noted that even though beam splitter 414 is shown to direct radiation sub-beam 415 towards alignment mark or target 418 and to direct diffracted radiation sub-beam 429 towards interferometer 426, the disclosure is not so limiting. Other optical arrangements can be used to obtain a similar result of illuminating alignment mark or target 418 on substrate 420 and detecting an image of alignment mark or target 418.
[0072] As illustrated in FIG. 4A, interferometer 426 can be configured to receive radiation sub-beam 417 and diffracted radiation sub-beam 429 through beam splitter 414. In an example aspect, diffracted radiation sub-beam 429 can be at least a portion of radiation sub-beam 415 that can be reflected from alignment mark or target 418. In an example of this aspect, interferometer 426 comprises any appropriate set of optical elements, for example, a combination of prisms that can be configured to form two images of alignment mark or target 418 based on the received diffracted radiation sub-beam 429. It should be appreciated that a good quality image need not be formed. It can be enough to have the features of alignment mark 418 resolved. Interferometer 426 can be further configured to rotate one of the two images by 180° with respect to the other image and recombine the rotated and unrotated images interferometrically.
[0073] In some aspects, detector 428 can be configured to receive the recombined image via interferometer signal 427 and detect interference as a result of the recombined image when alignment axis 421 of inspection apparatus 400 passes through a center of symmetry (not shown) of alignment mark or target 418. Such interference can be due to alignment mark or target 418 being 180° symmetrical, and the recombined image interfering constructively or destructively, according to an example aspect. Based on the detected interference, detector 428 can be further configured to determine a position of the center of symmetry of alignment mark or target 418 and consequently, detect a position of substrate 420. According to an example, alignment axis 421 can be aligned with an optical beam perpendicular to substrate 420 and passing through a center of image rotation interferometer 426. Detector 428 can be further configured to estimate the positions of alignment mark or target 418 by implementing sensor characteristics and interacting with wafer mark process variations.
[0074] In a further aspect, detector 428 determines the position of the center of symmetry of alignment mark or target 418 by performing one or more of the following measurements:
[0075] 1. measuring position variations for various wavelengths (position shift between colors);
[0076] 2. measuring position variations for various orders (position shift between diffraction orders); and
[0077] 3. measuring position variations for various polarizations (position shift between polarizations). [0078] This data can be obtained using any type of alignment sensor, for example, a SMASH (SMart Alignment Sensor Hybrid) sensor, as described in U.S. Patent No. 6,961,116, which employs a self-referencing interferometer with a single detector and four different wavelengths, and extracts the alignment signal in software, or Athena (Advanced Technology using High order ENhancement of Alignment), as described in U.S. Patent No. 6,297,876, which directs each of seven diffraction orders to a dedicated detector, which are both incorporated by reference herein in their entireties.
[0079] In some aspects, beam analyzer 430 can be configured to receive and determine an optical state of diffracted radiation sub-beam 439. The optical state can be a measure of beam wavelength, polarization, or beam profile. Beam analyzer 430 can be further configured to determine a position of stage 422 and correlate the position of stage 422 with the position of the center of symmetry of alignment mark or target 418. As such, the position of alignment mark or target 418 and, consequently, the position of substrate 420 can be accurately known with reference to stage 422. Alternatively, beam analyzer 430 can be configured to determine a position of inspection apparatus 400 or any other reference element such that the center of symmetry of alignment mark or target 418 can be known with reference to inspection apparatus 400 or any other reference element. Beam analyzer 430 can be a point or an imaging polarimeter with some form of wavelength-band selectivity. In some aspects, beam analyzer 430 can be directly integrated into inspection apparatus 400, or connected via fiber optics of several types: polarization preserving single mode, multimode, or imaging, according to other aspects. [0080] In some aspects, beam analyzer 430 can be further configured to determine the overlay data between two patterns on substrate 420. One of these patterns can be a reference pattern on a reference layer. The other pattern can be an exposed pattern on an exposed layer. The reference layer can be an etched layer already present on substrate 420. The reference layer can be generated by a reference pattern exposed on the substrate by lithographic apparatus 100 and/or 100’. The exposed layer can be a resist layer exposed adjacent to the reference layer. The exposed layer can be generated by an exposure pattern exposed on substrate 420 by lithographic apparatus 100 or 100’. The exposed pattern on substrate 420 can correspond to a movement of substrate 420 by stage 422. In some aspects, the measured overlay data can also indicate an offset between the reference pattern and the exposure pattern. The measured overlay data can be used as calibration data to calibrate the exposure pattern exposed by lithographic apparatus 100 or 100’, such that after the calibration, the offset between the exposed layer and the reference layer can be minimized.
[0081] In some aspects, beam analyzer 430 can be further configured to determine a model of the product stack profile of substrate 420, and can be configured to measure overlay, critical dimension, and focus of target 418 in a single measurement. The product stack profile contains information on the stacked product such as alignment mark, target 418, or substrate 420, and can include mark process variation-induced optical signature metrology that is a function of illumination variation. The product stack profile can also include product grating profile, mark stack profile, and mark asymmetry information. An example of beam analyzer 430 is Yieldstar™, manufactured by ASML, Veldhoven, The Netherlands, as described in U.S. Patent No. 8,706,442, which is incorporated by reference herein in its entirety. Beam analyzer 430 can be further configured to process information related to a particular property of an exposed pattern in that layer. For example, beam analyzer 430 can process an overlay parameter (an indication of the positioning accuracy of the layer with respect to a previous layer on the substrate or the positioning accuracy of the first layer with respect to marks on the substrate), a focus parameter, and/or a critical dimension parameter (e.g., line width and its variations) of the depicted image in the layer. Other parameters are image parameters relating to the quality of the depicted image of the exposed pattern. [0082] In some aspects, an array of detectors (not shown) can be connected to beam analyzer 430, and allows the possibility of accurate stack profile detection as discussed below. For example, detector 428 can be an array of detectors. For the detector array, a number of options are possible: a bundle of multimode fibers, discrete pin detectors per channel, or CCD or CMOS (linear) arrays. The use of a bundle of multimode fibers enables any dissipating elements to be remotely located for stability reasons. Discrete PIN detectors offer a large dynamic range but each needs separate pre-amps. The number of elements is therefore limited. CCD linear arrays offer many elements that can be read out at high speed and are especially of interest if phase-stepping detection is used.
[0083] In some aspects, a second beam analyzer 430’ can be configured to receive and determine an optical state of diffracted radiation sub-beam 429, as shown in FIG. 4B. The optical state can be a measure of beam wavelength, polarization, or beam profile. Second beam analyzer 430’ can be identical to beam analyzer 430. Alternatively, second beam analyzer 430’ can be configured to perform one or more of the functions of beam analyzer 430, such as determining a position of stage 422 and correlating the position of stage 422 with the position of the center of symmetry of alignment mark or target 418. As such, the position of alignment mark or target 418 and, consequently, the position of substrate 420, can be accurately known with reference to stage 422. Second beam analyzer 430’ can also be configured to determine a position of inspection apparatus 400, or any other reference element, such that the center of symmetry of alignment mark or target 418 can be known with reference to inspection apparatus 400, or any other reference element. Second beam analyzer 430’ can be further configured to determine the overlay data between two patterns and a model of the product stack profile of substrate 420. Second beam analyzer 430’ can also be configured to measure overlay, critical dimension, and focus of target 418 in a single measurement.
[0084] In some aspects, second beam analyzer 430’ can be directly integrated into inspection apparatus 400, or it can be connected via fiber optics of several types: polarization preserving single mode, multimode, or imaging, according to other aspects. Alternatively, second beam analyzer 430’ and beam analyzer 430 can be combined to form a single analyzer (not shown) configured to receive and determine the optical states of both diffracted radiation sub-beams 429 and 439.
[0085] In some aspects, processor 432 receives information from detector 428 and beam analyzer 430. For example, processor 432 can be an overlay calculation processor. The information can comprise a model of the product stack profile constructed by beam analyzer 430. Alternatively, processor 432 can construct a model of the product mark profile using the received information about the product mark. In either case, processor 432 constructs a model of the stacked product and overlay mark profile using or incorporating a model of the product mark profile. The stack model is then used to determine the overlay offset and minimizes the spectral effect on the overlay offset measurement. Processor 432 can create a basic correction algorithm based on the information received from detector 428 and beam analyzer 430, including but not limited to the optical state of the illumination beam, the alignment signals, associated position estimates, and the optical state in the pupil, image, and additional planes. The pupil plane is the plane in which the radial position of radiation defines the angle of incidence and the angular position defines the azimuth angle of the radiation. Processor 432 can utilize the basic correction algorithm to characterize the inspection apparatus 400 with reference to wafer marks and/or alignment marks 418.
[0086] In some aspects, processor 432 can be further configured to determine printed pattern position offset error with respect to the sensor estimate for each mark based on the information received from detector 428 and beam analyzer 430. The information includes but is not limited to the product stack profile, measurements of overlay, critical dimension, and focus of each alignment mark or target 418 on substrate 420. Processor 432 can utilize a clustering algorithm to group the marks into sets of similar constant offset error, and create an alignment error offset correction table based on the information. The clustering algorithm can be based on overlay measurement, the position estimates, and additional optical stack process information associated with each set of offset errors. The overlay is calculated for a number of different marks, for example, overlay targets having a positive and a negative bias around a programmed overlay offset. The target that measures the smallest overlay is taken as reference (as it is measured with the best accuracy). From this measured small overlay, and the known programmed overlay of its corresponding target, the overlay error can be deduced. Table 1 illustrates how this can be performed. The smallest measured overlay in the example shown is -1 nm. However, this is in relation to a target with a programmed overlay of -30 nm. The process may have introduced an overlay error of 29 nm.
Table 1 (reproduced as an image in the original publication) lists, for each overlay target, the programmed overlay and the measured overlay from which the overlay error is deduced.
[0087] The smallest value can be taken to be the reference point and, relative to this, the offset can be calculated between measured overlay and that expected due to the programmed overlay. This offset determines the overlay error for each mark or the sets of marks with similar offsets. Therefore, in the Table 1 example, the smallest measured overlay was -1 nm, at the target position with programmed overlay of -30 nm. The difference between the expected and measured overlay at the other targets is compared to this reference. A table such as Table 1 can also be obtained from marks and target 418 under different illumination settings; the illumination setting that results in the smallest overlay error, and its corresponding calibration factor, can be determined and selected. Following this, processor 432 can group marks into sets of similar overlay error. The criteria for grouping marks can be adjusted based on different process controls, for example, different error tolerances for different processes.
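A minimal numerical sketch of the reference-and-offset logic described above is shown below. The (-30 nm programmed, -1 nm measured) pair follows the example in the text; the remaining values and the dictionary-based representation are illustrative assumptions only.

```python
# Illustrative sketch of the Table 1 logic; only the (-30 nm, -1 nm) pair comes from the text.
targets = [
    {"programmed_nm": -30, "measured_nm": -1},   # pair discussed in the example above
    {"programmed_nm": -20, "measured_nm": 10},   # hypothetical additional targets
    {"programmed_nm": 30, "measured_nm": 62},
]

# The target with the smallest measured overlay is taken as the reference.
reference = min(targets, key=lambda t: abs(t["measured_nm"]))
process_error_nm = reference["measured_nm"] - reference["programmed_nm"]   # -1 - (-30) = 29 nm

# Offset of every target relative to the overlay expected from its programmed bias.
for t in targets:
    expected_nm = t["programmed_nm"] + process_error_nm
    t["offset_nm"] = t["measured_nm"] - expected_nm
```

Marks with similar offset values can then be grouped, mirroring the clustering step described above.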
[0088] In some aspects, processor 432 can confirm that all or most members of the group have similar offset errors, and apply an individual offset correction from the clustering algorithm to each mark, based on its additional optical stack metrology. Processor 432 can determine corrections for each mark and feed the corrections back to lithographic apparatus 100 or 100’ for correcting errors in the overlay, for example, by feeding corrections into the inspection apparatus 400.
[0089] Example Multi-Channel Lock-In Camera for Inspection Apparatuses
[0090] Market demands have intensified the need for faster lithographic fabrication of electronic chips (e.g., integrated circuits). However, to ensure that electronic chip devices are accurately printed, inspection apparatuses like those described above can be used to verify that device fabrication meets fabrication tolerances.
[0091] The term “throughput” is commonly understood as the amount of material or items passing through a system or process. In some aspects, the term “throughput” can be used to characterize a rate of lithographic fabrication. For example, throughput can refer to a rate at which lithographic fabrication is completed on wafers, a rate at which a wafer clears a particular fabrication step and moves to the next step, or the like. Throughput can be a performance marker of a lithographic apparatus. It is desirable for lithographic systems to output as many products as possible in as little time as possible. Lithographic fabrication can comprise several complex processes. Each part of the process can involve tradeoffs that balance quality (e.g., sub-nanometer accuracy, high yield) and drawbacks (e.g., slower fabrication, cost). Even small lithographic errors in circuit printing can lead to non-conforming device behavior (i.e., faulty devices). To improve pattern-transfer accuracy, lithography can include inspection of printed marks on a substrate. The inspection can be used to ascertain a conformity of a printed pattern on a substrate or to align a substrate in order to properly receive a new pattern. However, the added time of the inspection process can adversely impact throughput.
[0092] In some aspects, optical inspection of a target on a wafer can be performed using a plurality of colors (or wavelengths) of illumination. A given wavelength can provide information about the target that may not be readily apparent with another wavelength. As used herein, concepts directed to “multiple wavelengths,” “multiple photon frequencies,” “multiple parameter values,” or the like, can be used to characterize narrowband values in the pertinent property or parameter. In a non-limiting example of a wavelength parameter, a first wavelength can be characterized as comprising a narrowband of wavelengths centered at a first central wavelength. A second wavelength can similarly be characterized as comprising a narrowband of wavelengths centered at a second central wavelength. A characterization of the first wavelength as being different from the second wavelength can be interpreted as the first central wavelength being different from the second central wavelength.
[0093] In some aspects, enumerative adjectives (e.g., “first,” “second,” “third,” or the like) can be used to distinguish elements that share a likeness, but without establishing an order, hierarchy, or quantity (unless otherwise noted). For example, the terms “first wavelength” and “second wavelength” can be used in a manner analogous to “ith wavelength” and “jth wavelength” to distinguish two wavelengths without specifying a particular order, hierarchy, or quantity. Furthermore, an element in a drawing is not limited to any particular enumerative adjective.
[0094] In some aspects, detector 428, beam analyzer 430, and/or beam analyzer 430’ (FIGS. 4A and 4B) can comprise an image-based detector (e.g., a camera). A camera can comprise multiple pixels to resolve an image (e.g., a charge-coupled device (CCD) camera). Commercially available cameras can typically be optimized for the human viewing experience (e.g., red-green-blue (RGB) color sensitivity). To achieve RGB sensitivity, commercially available cameras can implement color filters at each pixel. Specifically, a given pixel of the camera can be sensitive to a specific color (e.g., one pixel has a red filter, a next pixel has a green filter, a pixel after the next has a blue filter, and the pattern is iterated to all pixels; other arrangements are possible).
[0095] In some aspects, color cameras as described above can present challenges to an inspection sensor intended to be used for lithographic processes. For example, a commercial camera can be inadequate for lithography inspection since such inspections are performed with more than three wavelengths. Lithographic inspection can also rely on wavelengths outside of the visible RGB range, which typical cameras are not built for. Lithography inspection is also concerned with signal to noise ratio (SNR).
[0096] Furthermore, the number of photons collected at a detector becomes an increasingly important factor as inspection times are made shorter in order to increase lithographic throughput. A drawback of color cameras with pixel filters can be that a fraction of the total pixels can be limited to one specific color and will not respond to photons of a different color. In other words, a pixel that responds to green wavelengths rejects photons that have a non-green wavelength (the rejected photons are wasted). As a workaround, using a monochromatic camera can remedy the deficiencies of the pixel color filters by allowing the camera to receive all photons regardless of their wavelength. However, this means that the illumination may be sourced via sequential wavelength stepping in order to allow discrimination of the color signals from one another (e.g., first use far-infrared, then near-infrared, then red, then green, and so on). But this may have the effect of increasing inspection times, which undesirably reduces throughput.
[0097] Aspects disclosed herein allow the use of a monochromatic camera to detect multiple wavelengths simultaneously. Moreover, aspects disclosed herein are not limited to detection of multiple wavelengths, but can apply to simultaneous detection of multiple settings of an adjustable parameter, multiple values of a parameter that can have more than one value, or permutations of settings or values of two or more parameters (e.g., twelve wavelengths, ten wavelengths, two polarizations, five wavelengths at one polarization and five wavelengths at another polarization, four angles of incidence, or the like). For simplicity of discussion, aspects will be described with respect to wavelength and one pixel, but it should be appreciated that a wavelength is merely one possible parameter and the one pixel is among many pixels of a camera.
[0098] It is instructive to first present some general aspects of lock-in detection techniques before describing lock-in implementation in image-based inspection.
[0099] In some aspects, lock-in detection can use the principles of lock-in amplifiers to provide sensitive detection and selective filtering of weak or noisy signals and can improve SNR. Lock-in amplifier techniques can provide improved accuracy, faster detection times, and reduced noise when performing optical measurements such as alignment position sensing, multi-angle scatterometry, or the like. Lock-in detection can employ homodyne (single frequency) detection, heterodyne (multifrequency) detection, and other well-known variants and optimizations. For simplicity of discussion, one frequency per lock-in channel will be used to explain aspects disclosed herein (e.g., one modulation frequency per channel), but it should be understood that aspects of the disclosure are envisaged with other well-known lock-in detection features.
[0100] In some aspects, single channel lock-in detection can work by detecting a signal with an arbitrary number of frequency components. The lock-in detector can be given a specific frequency to look for. The lock-in detector can then filter out all frequency components except the component that has the specified modulation frequency fi (signal of interest). The graph in FIG. 5A shows a composite signal 502 that can be received at a lock-in detector, according to some aspects.
[0101] It is noted that the graphs in FIGS. 5A, 5B, 5C, and 5D have vertical axes that represent an amplitude of a signal and horizontal axes that represent time.
[0102] In some aspects, composite signal 502 can have multiple frequency components (in the non-limiting example of FIG. 5A, there are three frequency components f1, f2, and f3, as well as noise; noise usually covers a range of frequencies, but, for simplicity, fnoise will represent the range of noise frequencies). The noise component is visible in composite signal 502 as the random, jagged spikes in the signal. FIG. 5B shows a signal 504 corresponding to the f1-component. FIG. 5C shows a signal 506 corresponding to the f2-component. FIG. 5D shows a signal 508 corresponding to the f3-component.
[0103] A desirable aspect of lock-in detection is that the detection technique is able to lock onto a desired fi-component of composite signal 502 while suppressing the non-fi components. For example, if the lock-in detector is configured to detect signals with frequency f1 (signal 504), the lock-in detector can effectively disregard the f2, f3, and fnoise components of composite signal 502 and lock onto signal 504 buried within. Hence, the amplitude I1 and phase φ1 of the f1-component can be extracted from composite signal 502. Similarly, the lock-in detector can be configured to lock onto the f2-component (returns amplitude I2 and phase φ2), the f3-component (returns amplitude I3 and phase φ3), or an arbitrary fi-component.
[0104] Extending this concept to multi-channel lock-in detection, each channel can be configured to be sensitive to a distinct modulation frequency (f1, f2, f3, ..., fn). When receiving a signal with a mix of frequencies, each detection channel can lock onto a corresponding frequency component of the signal while rejecting non-corresponding frequency components. This allows extraction of amplitude and phase of each frequency component of the received signal.
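The following minimal Python sketch illustrates the multi-channel lock-in principle described above on a synthetic composite signal. The sampling rate, modulation frequencies, amplitudes, and phases are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

fs = 1.0e6                       # sampling rate in Hz (assumed much higher than the modulation frequencies)
t = np.arange(0, 0.01, 1 / fs)   # 10 ms record
freqs = [1.0e3, 2.3e3, 4.1e3]    # channel modulation frequencies f1, f2, f3 (illustrative)
amps = [1.0, 0.5, 0.8]
phases = [0.2, 1.1, -0.7]

# Composite signal: three modulated components plus broadband noise.
sig = sum(a * np.cos(2 * np.pi * f * t + p) for a, f, p in zip(amps, freqs, phases))
sig = sig + 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Each lock-in channel correlates the composite signal with its own reference frequency.
for f in freqs:
    c = 2 * np.mean(sig * np.cos(2 * np.pi * f * t))   # in-phase (cosine) component
    s = 2 * np.mean(sig * np.sin(2 * np.pi * f * t))   # quadrature (sine) component
    print(f"f = {f:.0f} Hz: amplitude = {np.hypot(c, s):.2f}, phase = {np.arctan2(-s, c):.2f} rad")
```

Each channel recovers approximately its own amplitude and phase, while the other components and the noise average toward zero over the integration period.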
[0105] FIG. 6 shows an inspection apparatus 600, according to some aspects. In some aspects, features of inspection apparatus 600 can be implemented in inspection apparatus 400 (FIGS. 4A and/or 4B) so as to allow simultaneous multi-channel detection while using the structures described in reference to FIGS. 4A and/or 4B.
[0106] In some aspects, inspection apparatus 600 can comprise an illumination system 602 (or source branch, illumination source branch, illumination branch, or the like), a detection system 604 (or detection branch, or the like), and an optical system 606.
[0107] In some aspects, illumination system 602 can comprise an illumination source 608, modulator 610, and a combiner 612. Illumination source 608 can comprise source elements 608-1 through 608-n (e.g., first source element, second source element,..., nth source element). Modulator 610 can comprise modulator elements 610-1 through 610-n (e.g., first modulator element, second modulator element,..., nth modulator element).
[0108] In some aspects, detection system 604 can comprise a detector 614 and analyzer 616 (it is understood that a camera can have multiples of analyzer 616 (e.g., an analyzer for each pixel), though a strict one-to-one correspondence is not required; see FIG. 9). Analyzer 616 can comprise analyzer elements 616-1 through 616-n (e.g., first analyzer element, second analyzer element,..., nth analyzer element). Analyzer elements 616-1 through 616-n can operate in the digital domain. It should be appreciated that parts of optical system 606 can belong to illumination system 602, detection system 604, or both. For example, optical system 606 can comprise an objective that collects scattered illumination from target 618 disposed on substrate 620. Optical system 606 can comprise beam splitter 414 (FIGS. 4A and 4B) to direct illumination from illumination system 602 toward target 618 and to direct scattered illumination from target 618 to detector 614.
[0109] In some aspects, illumination system 602 and detection system 604 can work together to provide lock-in detection capabilities. Inspection apparatus 600 can also comprise a reference system 622. Reference system 622 can act as a master clock and provide timing information (e.g., master frequency, modulation frequencies, tick counts, or the like) to illumination system 602 and detection system 604. Modulation frequencies can be based on the master frequency (e.g., subharmonics of the master frequency). In one example, timing information can be provided in the form of a periodic signal (e.g., a step function of a given frequency).
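As a small illustration of deriving channel modulation frequencies from the master frequency, the sketch below uses integer subharmonics; the specific master frequency and dividers are assumptions, not values from the disclosure.

```python
# Derive channel modulation frequencies as integer subharmonics of the master frequency.
f_master = 1.0e6                    # master frequency in Hz (illustrative)
dividers = [10, 12, 14, 16]         # one integer divider per channel (illustrative)
modulation_frequencies = [f_master / d for d in dividers]
# -> [100000.0, 83333.33..., 71428.57..., 62500.0] Hz, one frequency per modulator element
```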
[0110] In some aspects, source elements 608-1 through 608-n can generate illumination with photon wavelengths λ1 through λn (e.g., first wavelength, second wavelength,..., nth wavelength), respectively. Source elements 608-1 through 608-n can be coupled to modulator elements 610-1 through 610-n, respectively. Based on the master frequency, modulator elements 610-1 through 610-n can modulate the illumination with wavelengths λ1 through λn at frequencies f1 through fn (e.g., first frequency, second frequency,..., nth frequency), respectively. The logic applies to all couplings of source and modulator elements, down to modulator element 610-n, which can modulate the illumination with wavelength λn at a frequency fn. Each of modulator elements 610-1 through 610-n (and therefore each of frequencies f1 through fn) can define channels (e.g., channels 1 through n, hence multichannel).
[0111] As alluded to above, the modulation is not limited to wavelengths, but can be applied to other parameters or combinations of parameters of illumination. For example, a 1st parameter can be associated with a first polarization of illumination (and/or a wavelength, and/or an angle of incidence), a 2nd parameter can be associated with a second polarization of the illumination (and/or wavelength and/or an angle of incidence), or the like.
[0112] In some aspects, combiner 612 can combine the differently parametrized illumination from source elements 608-1 through 608-n to generate a beam of illumination 624 that comprises a mix of illumination parameters (different wavelengths, polarizations, angles of incidence, or the like). Optical system 606 can direct beam of illumination 624 toward target 618. Target 618 can scatter the photons of beam of illumination 624. The scattered illumination can be collected by optical system 606 and directed to detector 614 as scattered illumination 626. A desirable feature is that the parametrizations of beam of illumination 624 need not be sequential in time. All the different parameters of beam of illumination 624 can be overlapping in time (e.g., simultaneous), a feature that also applies to scattered illumination 626. The lock-in functionality at the detection branch is capable of demodulating the measurement signal so that illumination with different parameters can be distinguished. Demodulation can be defined as the process by which a modulating signal is extracted from its carrier signal.
[0113] In some aspects, detector 614 can comprise a camera. The camera can receive timing information from reference system 622 (e.g., a master periodic reference signal having a master frequency). It is desirable to have a camera with a very high sampling rate such that the true shape of the detected composite signal can be faithfully recreated or well-approximated.
[0114] As mentioned earlier, the discussion will focus on a single pixel of detector 614, but it should be understood that the other pixels can work in the same manner. The pixels of detector 614 can have a monochrome response. That is, the pixels respond to each photon received regardless of parametrization (e.g., no photons are rejected based on color, as opposed to the concept of a color camera with color filters). Each pixel can be, for example, a quanta image sensor (QIS), which is a photon-counting image sensor. Other types of pixelated sensors are also envisaged. Each pixel can generate a measurement signal 628 based on the number of photons received. In some aspects, as photons of scattered illumination 626 are received at a pixel of detector 614, the resulting measurement signal 628 from the pixel can be a composite signal. An effect of encoding beam of illumination 624 with the modulation frequencies f1 through fn is that the composite measurement signal 628 also carries the encoding of the modulation frequencies f1 through fn. Therefore, the composite measurement signal 628 comprises information of the different parametrizations imposed by the illumination branch as well as the effects from the interaction with target 618 (e.g., parameters can be multiple wavelengths, multiple wavelengths at one polarization, multiple wavelengths at another polarization, multiple angles of incidence, or the like). A non-limiting example of a composite signal (with three parameters) is shown in composite signal 502 (FIG. 5A).
[0115] In some aspects, analyzer elements 616-1 through 616-n can be used to demodulate measurement signal 628 into different channels 1 through n. Each channel can be responsible for outputting the respective amplitude and phase of each frequency component (that is, I1 and φ1, I2 and φ2,..., and In and φn). Each pixel of detector 614 can be considered as having n channels. The multichannel feature is a desirable feature, particularly in view of limitations of commercially available lock-in cameras, which are limited to locking onto one modulation frequency (i.e., single channel). Detector 614 and analyzer 616 together function as a multi-channel lock-in camera. When the information from all pixels of the multi-channel lock-in camera is compiled, the result is a hyper-parametrized image (e.g., a hyperspectral image) that shows the intensities (amplitudes In) and/or phases φn for all of the n parameter settings (e.g., wavelength, polarization, angle of incidence, or the like). In one non-limiting example of a four-wavelength measurement, the output of the multi-channel lock-in camera can be used to generate 4 images (one for each wavelength) for the intensities I1, I2, I3, and I4 from each pixel and/or 4 images for the phases φ1, φ2, φ3, and φ4 from each pixel.
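The following Python sketch illustrates how per-pixel, per-channel lock-in outputs could be arranged into one intensity image and one phase image per parameter setting. The array shapes, names, and random placeholder data are assumptions, and the amplitude/phase convention depends on how the reference waveforms are defined.

```python
import numpy as np

def assemble_images(in_phase, quadrature):
    """Arrange per-pixel lock-in outputs into per-channel intensity and phase images.

    in_phase, quadrature: arrays of shape (n_channels, height, width) holding the
    cosine and sine lock-in outputs of every pixel for every channel.
    """
    intensity_images = np.hypot(in_phase, quadrature)   # I_1 ... I_n per pixel
    phase_images = np.arctan2(quadrature, in_phase)     # phi_1 ... phi_n per pixel
    return intensity_images, phase_images

# Four-wavelength example: 4 intensity images and 4 phase images from one acquisition.
a = np.random.rand(4, 512, 512)   # placeholder in-phase outputs
b = np.random.rand(4, 512, 512)   # placeholder quadrature outputs
intensities, phases = assemble_images(a, b)
```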
[0116] In some aspects, no photons are intentionally rejected by detector 614, thereby allowing for efficient use of the full intensity provided by the illumination source (as opposed to intentional rejection of colors by the color filters of a color camera). Furthermore, since illumination with different parameters can be overlapped in time, the non-sequential aspect of the measurement technique can allow for much faster inspection of lithographed target(s) 618 and with high SNR owing to the feature of not rejecting photons.
[0117] FIG. 7 shows a flow diagram of a detector 714, according to some aspects. In some aspects, detector 714 can comprise structures and functions similar to detector 614 described in reference to FIG. 6. Therefore, unless otherwise noted, descriptions of elements of FIG. 6 can also apply to corresponding elements of FIG. 7 (e.g., reference numbers sharing the two right-most numeric digits) and will not be rigorously reintroduced. Such elements in FIG. 7 can include scattered illumination 726 and measurement signal 728 — structures and functions can be inferred from descriptions of similar elements in FIG. 6.
[0118] In some aspects, detector 714 can be a camera (e.g., a QIS camera). A QIS camera can have some desirable properties (e.g., high readout speed and low added noise). Detector 714 can comprise a pixel 730, a sample clock 732, a comparator 734, and a counter 736. Pixel 730, sample clock 732, and comparator 734 can operate in analog domain 751 and digital domain 753 (i.e., these components can serve as the transition between analog and digital). Counter 736 can operate in the digital domain.
[0119] In some aspects, pixel 730 can receive scattered illumination 726 (e.g., from target 618 (FIG. 6)). Every pixel of detector 714, including pixel 730, can be sampled at a very high frequency rate (e.g., kHz-MHz ranges). The sampling rate can be determined by sample clock 732. Furthermore, the sampling rate can be set to a frequency that is an integer multiple of modulation frequencies f1 through fn. The relationship between the sampling frequency and the modulation frequencies can be such that the Nyquist criterion is satisfied so as to avoid signal distortion (e.g., the highest useable frequency is less than half of the sampling frequency). The criterion can be relaxed if the sampling frequency is much higher than the modulation frequencies. The sampling frequency can be set to coincide with a master clock 755 (e.g., provided by reference system 622 (FIG. 6)).
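A minimal check of the two constraints just described (integer-multiple relationship and Nyquist margin) might look as follows; the numerical values in the usage comments are illustrative assumptions.

```python
def sampling_rate_ok(f_sample, modulation_frequencies, tol=1e-9):
    """Return True if f_sample is an integer multiple of every modulation frequency
    and every modulation frequency is below half of f_sample (Nyquist)."""
    for f_mod in modulation_frequencies:
        ratio = f_sample / f_mod
        if abs(ratio - round(ratio)) > tol:   # integer-multiple condition
            return False
        if f_mod >= 0.5 * f_sample:           # Nyquist condition
            return False
    return True

# e.g., sampling_rate_ok(1.0e6, [100e3, 50e3, 20e3]) -> True
#       sampling_rate_ok(1.0e6, [500e3])             -> False (violates Nyquist)
```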
[0120] In some aspects, the intrinsic gain of pixel 730 can be high to reduce input referred read noise. The high gain/low noise design allows for single photo-electron resolution. Comparator 734 can receive the analog voltage (or current) signal from pixel 730 and binarize the analog signal (hence, comparators disclosed herein can also be referred to as analog-to-digital converters, and a grouping of converters can be part of an analog-to-digital converter system). The result is a train of digital pulses (pulse train 738) over an integration period. The pulses can be counted by counter 736. The digital counts can be used to estimate the photon-arrival rate and, therefore, the intensity at the pixel via digital processing. Counter 736 can output measurement signal 728. Measurement signal 728 (e.g., pixel output) can be processed and analyzed to extract amplitude and/or phases of the different channels corresponding to modulation frequencies f1 through fn.
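The binarize-and-count chain described in this paragraph can be sketched as follows; the threshold value, sample grouping, and function name are illustrative assumptions.

```python
import numpy as np

def binarize_and_count(analog_samples, threshold, samples_per_period):
    """Threshold analog pixel samples into a binary pulse train (comparator role)
    and sum the pulses within each sampling period (counter role)."""
    pulse_train = (np.asarray(analog_samples) > threshold).astype(np.uint8)
    n_periods = pulse_train.size // samples_per_period
    trimmed = pulse_train[: n_periods * samples_per_period]
    counts = trimmed.reshape(n_periods, samples_per_period).sum(axis=1)
    return pulse_train, counts

# counts approximates the photon-arrival rate per period, i.e., the pixel intensity samples.
```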
[0121] FIG. 8 shows a flow diagram of a detection system 804, according to some aspects. In some aspects, detection system 804 can comprise structures and functions similar to the detection system and detector described in reference to FIGS. 6 and 7. Therefore, unless otherwise noted, descriptions of elements of FIGS. 6 and 7 can also apply to corresponding elements of FIG. 8 (e.g., reference numbers sharing the two right-most numeric digits) and will not be rigorously reintroduced. Such elements in FIG. 8 can include scattered illumination 826, measurement signal 828, pixel 830, sample clock 832, comparator 834, pulse train 838, analog domain 851, digital domain 853, master clock 855, and analyzer 816 — structures and functions can be inferred from descriptions of similar elements in FIGS. 6 and 7. [0122] In some aspects, measurement signal 828 can be received at analyzer 816. During a sampling period, the pulses in pulse train 838 can be integrated using counter 736 (FIG. 7) (e.g., measurement signal 828 can comprise integrated pulses over sampling periods). In the absence of a dedicated counter (e.g., counter 736 (FIG. 7)), measurement signal 828 can comprise pulse train 838. Or in an alternative description, analyzer 816 can be considered a counter since analyzer 816 can receive binarized pulses (counts) as input.
[0123] In some aspects, finite-time lock-in detection of the intensity samples (pulses) can be performed at multiple modulation frequencies f1 through fn by leveraging analyzer elements of analyzer 816 (e.g., analyzer elements 616-1 through 616-n (FIG. 6)). Analyzer 816 can implement pre-generated cosine and sine tables 857 that correspond to frequencies f1 through fn. The frequency information can be determined based on the master frequency that is used to modulate the illumination at illumination system 602 (FIG. 6) (e.g., by obtaining timing information from reference system 622). Analyzer 816 can combine (e.g., element-wise multiplication, multiplication followed by summation, or the like) the pre-calculated sine and cosine tables with the data in measurement signal 828.
[0124] In some aspects, for discrete-time sampling, the expressions for an and bn can be given by:
$$a_n = \frac{2}{P}\sum_{i=1}^{N_P} s(i)\,\cos\!\left(\frac{2\pi n i}{N_P}\right)\Delta t, \qquad b_n = \frac{2}{P}\sum_{i=1}^{N_P} s(i)\,\sin\!\left(\frac{2\pi n i}{N_P}\right)\Delta t$$
[0125] Here, s(i) is the sampled signal, n is the harmonic referred to the fundamental frequency, P is the time period over which the analysis is performed (e.g., tmeas as used further below), NP is the number of sampled points within one period, and Δt is the time interval between samples. In some aspects, a per-element multiplication of the time-sampled signal s(i) with the pre-calculated elements of the cosine and sine tables can be performed. Then, a summation over the per-element multiplied values over one measurement period can also be performed. The time-sampled signal s(i) can be interpreted as the total sum of detected photons, divided by the detection time interval Δt. The division by Δt represents an intensity normalization step. The normalization step can be performed at a later stage or even omitted altogether to simplify processing, for example, if only relative intensities are of interest.
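A minimal Python sketch of these discrete-time expressions is given below, assuming a single modulation channel and a uniformly sampled test signal; it performs the per-element multiplication with pre-generated cosine and sine tables followed by summation over one analysis period. The sampling rate, modulation frequency, and signal values are hypothetical.

```python
import numpy as np

def lockin_coefficients(s, n, dt):
    """Return (a_n, b_n) for harmonic n of a signal s sampled at interval dt."""
    N = len(s)                                 # N_P: sampled points in the analysis period
    P = N * dt                                 # analysis period (e.g., t_meas)
    i = np.arange(N)
    cos_table = np.cos(2 * np.pi * n * i / N)  # pre-generated cosine table for harmonic n
    sin_table = np.sin(2 * np.pi * n * i / N)  # pre-generated sine table for harmonic n
    a_n = (2.0 / P) * np.sum(s * cos_table) * dt   # per-element multiply, then sum
    b_n = (2.0 / P) * np.sum(s * sin_table) * dt
    return a_n, b_n

if __name__ == "__main__":
    dt = 1e-6                                  # 1 MHz sampling interval (hypothetical)
    t = np.arange(10_000) * dt                 # 10 ms analysis period
    f1 = 2_000.0                               # one modulation channel at 2 kHz (hypothetical)
    s = 3.0 + 1.5 * np.cos(2 * np.pi * f1 * t + 0.3)
    n = int(round(f1 * len(t) * dt))           # harmonic index of f1 within the period (= 20)
    a, b = lockin_coefficients(s, n, dt)
    print(a, b)                                # expect a ~ 1.5*cos(0.3), b ~ -1.5*sin(0.3)
```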
[0126] In some aspects, analyzer 816 can determine cosine factors a1 through an and sine factors b1 through bn using the above-noted calculations. The sine and cosine factors can then be used to determine intensities (amplitudes I1 through In) and/or phases φ1 through φn for all of the parameter settings 1 through n. The cosine factors a1 through an and sine factors b1 through bn can be referred to as the representation of the amplitude and phase in Cartesian form (that is, in a non-limiting example, amplitude and phase can be represented in Cartesian form as the coefficients an and bn of the Fourier series in cosine-sine form). It is to be appreciated that cosine-only or sine-only tables can be used to simplify analysis. The cosine-only (or sine-only) table implementation can be used for boundary conditions that work for discrete cosine transforms (e.g., if the carrier phase is not shifted). Analyzer 816 can be followed by a further analyzer 817. Analyzer 817 can perform operations on the output of analyzer 816 to further refine the measurement data. For example, analyzer 817 can perform integration (summation), averaging, filtering, or the like. It is to be appreciated that analyzers 816 and 817 can be separate, as shown, or a single device (e.g., a single computer, processor system, or the like).
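To illustrate how the Cartesian coefficients translate into per-channel intensities and phases, the sketch below (not from the disclosure; all channel frequencies, amplitudes, and phases are assumptions) demodulates several modulation channels from a single simulated pixel signal and converts each (an, bn) pair into an amplitude In and phase φn.

```python
import numpy as np

dt = 1e-6                                      # sampling interval (hypothetical)
N = 10_000                                     # 10 ms analysis window (t_meas)
t = np.arange(N) * dt
P = N * dt

# Hypothetical channels: frequency -> (amplitude, phase)
channels = {2_000.0: (1.5, 0.3), 3_000.0: (0.8, -1.0), 5_000.0: (2.2, 2.0)}

# One pixel signal carrying all channels simultaneously, plus a DC offset
s = 4.0 + sum(A * np.cos(2 * np.pi * f * t + phi) for f, (A, phi) in channels.items())

i = np.arange(N)
for f in channels:
    n = int(round(f * P))                      # harmonic index of channel f within t_meas
    a = (2.0 / P) * np.sum(s * np.cos(2 * np.pi * n * i / N)) * dt
    b = (2.0 / P) * np.sum(s * np.sin(2 * np.pi * n * i / N)) * dt
    I_n = np.hypot(a, b)                       # amplitude (Cartesian to polar)
    phi_n = np.arctan2(-b, a)                  # phase
    print(f"f = {f/1e3:.1f} kHz: I ~ {I_n:.3f}, phi ~ {phi_n:.3f} rad")
```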
[0127] In some aspects, finite-time detection can be performed by specifying a measurement sampling time tmeas (i.e., an inverse of a measurement sampling rate, which can be different from the sampling rate limit of the camera). The measurement sampling rate can be selected by, for example, configuring detection system 804 to use a rate that divides the frequency of master clock 855 (e.g., a measurement sampling period that is an integer multiple of the period of the master clock). The measurement sampling rate can be configured to meet the Nyquist criterion for the highest frequency among the frequencies f1 through fn.
[0128] In some aspects, the frequencies f1 through fn can form at least some of the Fourier components associated with or derived from the measurement integration time tmeas. That is, the period of each modulation frequency can fit an exact number of times in tmeas. Frequencies f1 through fn can be evenly spaced in the frequency domain. To prevent channel cross-talk, frequencies f1 through fn can be chosen such that no one frequency is a harmonic of another frequency. The measurement sampling rate fmeas can be an integer multiple of the frequency separation between at least two of the frequencies f1 through fn. If longer integrations are desired (longer tmeas), the technique disclosed herein allows for increasing the integration time in multiples of the base (lowest) tmeas allowed by the hardware, though it should be noted that the frequency spacing will be affected accordingly. In the situation that a combination of f1 through fn fulfills the condition of all having an integer number of periods within tmeas, then using two times, three times, or more times this time period can still fulfill the condition. In contrast, increasing the measurement time by, for example, two times can allow the frequency spacing to be halved, but there is no requirement to do so.
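The following sketch (hypothetical frequencies and tmeas) expresses the frequency-plan constraints of this paragraph as a simple check: each channel should fit an integer number of periods in tmeas, and no channel should be a harmonic of another.

```python
# Sketch: basic frequency-plan checks for a set of modulation channels.
def check_frequency_plan(freqs_hz, t_meas_s, tol=1e-9):
    issues = []
    for f in freqs_hz:
        cycles = f * t_meas_s
        if abs(cycles - round(cycles)) > tol:
            issues.append(f"{f} Hz does not fit an integer number of periods in t_meas")
    for f in freqs_hz:
        for g in freqs_hz:
            if g > f and abs(g / f - round(g / f)) < tol:
                issues.append(f"{g} Hz is a harmonic of {f} Hz (channel cross-talk risk)")
    return issues or ["plan OK"]

if __name__ == "__main__":
    t_meas = 1e-3                               # 1 ms integration (hypothetical)
    good = [7000.0, 9000.0, 11000.0, 13000.0]   # evenly spaced, mutually non-harmonic
    bad = [5000.0, 10000.0, 15000.0]            # 10 kHz and 15 kHz are harmonics of 5 kHz
    print(check_frequency_plan(good, t_meas))
    print(check_frequency_plan(bad, t_meas))
```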
[0129] In some aspects, additional constraints for the modulation frequencies can be further defined so as to enhance a performance of detection system 804. The total measurement time can comprise a first time period and a second time period. The first time period can be the time at the beginning of the measurement, during which the inspection apparatus is in a steady state (e.g., illumination is on, modulation is operating, target is within the field of view of the inspection apparatus). The second time period can be the time during which the signal from the target is analyzed (e.g., tmeas). Then, the channel separation (in frequency) can be an exact multiple of the inverse of the sum of the first and second time periods.
[0130] In some aspects, the number of operations per pixel can be substantially greater than in the basic QIS flow shown in FIG. 7. Therefore, it is desirable to perform the operations digitally in order to leverage the constant advancements in computing power.
[0131] While FIG. 8 shows a single pixel treatment of lock-in detection, it is also envisaged that multiple pixels can have combined analysis streams (e.g., multiplexing and demultiplexing).
[0132] FIG. 9 shows a flow diagram of a detection system 904, according to some aspects. In some aspects, detection system 904 can comprise structures and functions similar to the detection system and detector described in reference to FIGS. 6-8. Therefore, unless otherwise noted, descriptions of elements of FIGS. 6-8 can also apply to corresponding elements of FIG. 9 (e.g., reference numbers sharing the two right-most numeric digits) and will not be rigorously reintroduced. Such elements in FIG. 9 can include scattered illumination 926, measurement signal 928, pixels 930-1 through 930-m, sample clock 932, comparator 934, analog domain 951, digital domain 953, master clock 955, and analyzers 916 and 917 — structures and functions can be inferred from descriptions of similar elements in FIGS. 6-8.
[0133] In some aspects, detection system 904 can also comprise a pixel read combiner 940 (e.g., a multiplexer) and a demultiplexer 942. Pixel read combiner 940 can combine the analog signals generated by pixels 930-1 through 930-m resulting from receipt of scattered illumination 926. The illumination incident on each one of pixels 930-1 through 930-m can have n parameters (i.e., n modulation frequencies associated with photon wavelengths, polarization, angles, or the like). Pixel read combiner 940 and demultiplexer 942 can interface with sample clock 932 such that measurement signal 928 can comprise demultiplexed pulse trains. The demultiplexed pulse trains can be discriminated based on their association with respective ones of pixels 930-1 through 930-m. Analyzer 916 can be used in a finite-time lock-in detection of the data stream originating from multiple pixels (as opposed to a single pixel as was shown in FIG. 8). Alternatively, measurement signal 928 can be separated based on the demultiplexing and sent to respective ones of a plurality of analyzers. A plurality of analyzers can be grouped together in an analyzer system.
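As a simplified illustration of pixel read combining and demultiplexing (the interleaving scheme and stream sizes are assumptions, not details from the disclosure), the sketch below interleaves samples from m pixels into one stream and then recovers the per-pixel streams.

```python
import numpy as np

m = 4                                    # number of pixels combined (hypothetical)
n_samples = 12                           # samples per pixel in this toy example
pixel_streams = [np.arange(n_samples) + 100 * k for k in range(m)]

# Pixel read combiner (multiplexer): interleave the pixel samples one by one
combined = np.empty(m * n_samples, dtype=int)
for k, stream in enumerate(pixel_streams):
    combined[k::m] = stream

# Demultiplexer: recover the per-pixel streams from the interleaved signal
recovered = [combined[k::m] for k in range(m)]
assert all(np.array_equal(r, s) for r, s in zip(recovered, pixel_streams))
print("demultiplexed pixel 2:", recovered[2])
```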
[0134] FIG. 10 shows a detector 1014, according to some aspects. In some aspects, detector 1014 can comprise structures and functions similar to the detection system and detector described in reference to FIGS. 6-9. Therefore, unless otherwise noted, descriptions of elements of FIGS. 6-9 can also apply to corresponding elements of FIG. 10 (e.g., reference numbers sharing the two right-most numeric digits) and will not be rigorously reintroduced.
[0135] In some aspects, detector 1014 can be an integrated QIS camera having a stack of layers. Detector 1014 can comprise a pixel layer 1044, a mixed-signal IC layer 1046, and a logic layer 1048 (e.g., first, second, and third layers, respectively). Pixel layer 1044 can receive illumination for subsequent conversion to a digital signal. Mixed-signal IC layer 1046 can provide conversion of analog signals generated at the pixel layer to digital signals (e.g., the above-mentioned comparators can be part of mixed-signal IC layer 1046). Components like the above-mentioned analyzers can be part of logic layer 1048. Logic layer 1048 can provide the digital processing for finite-time lock-in detection.
[0136] FIGS. 11A and 11B show a pupil plane 1150 through which beams of illumination 1124 are propagated, according to some aspects. In some aspects, elements of FIGS. 11A and 11B can be similar to some elements described in reference to FIGS. 6-10. Therefore, unless otherwise noted, descriptions of elements of FIGS. 6-10 can also apply to corresponding elements of FIGS. 11A and 11B (e.g., reference numbers sharing the two right-most numeric digits) and will not be rigorously reintroduced. Such elements in FIGS. 11A and 11B can include beams of illumination 1124, target 1118, and substrate 1120 — structures and functions can be inferred from descriptions of similar elements in FIGS. 6-8.
[0137] In some aspects, beams of illumination 1124 can comprise two or more beams of illumination, for example, beams 1 through k (in this non-limiting example, k is 12). FIG. 11A shows the head-on view of pupil plane 1150 with the optical axis at the center, as well as the disposition of beams 1 through k. For clarity, only beams 1, 2, 7, and 8 are shown in FIG. 11B. Beam 1 can be diametrically opposite to beam 7. Beam 2 can be diametrically opposite to beam 8. The setup shown in FIGS. 11A and 11B is useful for performing angle-resolved scatterometry. Different pieces of information about target 1118 can be obtained by inspecting with different angles of incidence. As shown, beams 2 and 8 can have an angle of incidence α on target 1118 and beams 1 and 7 can have an angle of incidence β on target 1118. An optical element (e.g., a lens) can be disposed at, or proximal to, pupil plane 1150 so as to cause beams 1 through k to converge at target 1118.
[0138] In the interest of shortening measurement time (and thereby increasing throughput), in some aspects, it is desirable for the “on” periods of beams 1 through k to overlap in time (e.g., simultaneity). The illumination scattered from target 1118 can then be directed to a detector. The challenge is then to parse the received radiation so as to discern which parts of the detected radiation map to which of the sourced beams 1 through k. This is where the lock-in camera techniques described herein can be of use. Each of beams 1 through k can be assigned to a channel. That is, each beam can be modulated at a given frequency 1 through n. In the simplest case, n can be equal to k. In aspects where each beam has more than one parameter (e.g., wavelength and/or polarization), more modulation frequencies can be introduced (e.g., two wavelengths for each of twelve beams can implement twenty-four modulation frequencies).
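A small, purely illustrative channel-assignment sketch follows; the wavelengths, base frequency, and spacing are placeholders and are not taken from the disclosure. It maps each (beam, wavelength) combination of the twelve-beam example to its own modulation frequency, yielding twenty-four channels.

```python
# Sketch: assign one modulation frequency per (beam, wavelength) combination.
beams = list(range(1, 13))                     # beams 1..k with k = 12
wavelengths_nm = [532, 633]                    # two wavelengths per beam (assumed)
f0, df = 10_000.0, 100.0                       # base frequency and spacing (assumed; kept
                                               # below 2*f0 so no channel is a harmonic)

channel_map = {}
for i, beam in enumerate(beams):
    for j, wl in enumerate(wavelengths_nm):
        channel = i * len(wavelengths_nm) + j
        channel_map[(beam, wl)] = f0 + channel * df

print(len(channel_map), "channels")            # 24
print("beam 7 @ 633 nm ->", channel_map[(7, 633)], "Hz")
```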
[0139] FIG. 12 shows a computer system 1200, according to some aspects. Various aspects and components therein can be implemented, for example, using computer system 1200 or any other well-known computer systems.
[0140] In some aspects, computer system 1200 can comprise one or more processors (also called central processing units, or CPUs), such as a processor 1204. Processor 1204 can be connected to a communication infrastructure or bus 1206.
[0141] In some aspects, one or more processors 1204 can each be a graphics processing unit (GPU). In some aspects, a GPU is a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU can have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc.
[0142] In some aspects, computer system 1200 can further comprise user input/output device(s) 1203, such as monitors, keyboards, pointing devices, etc., that communicate with communication infrastructure 1206 through user input/output interface(s) 1202. Computer system 1200 can further comprise a main or primary memory 1208, such as random access memory (RAM). Main memory 1208 can comprise one or more levels of cache. Main memory 1208 has stored therein control logic (i.e., computer software) and/or data.
[0143] In some aspects, computer system 1200 can further comprise one or more secondary storage devices or memory 1210. Secondary memory 1210 can comprise, for example, a hard disk drive 1212 and/or a removable storage device or drive 1214. Removable storage drive 1214 can be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, a tape backup device, and/or any other storage device/drive. Removable storage drive 1214 can interact with a removable storage unit 1218. Removable storage unit 1218 can comprise a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit 1218 can be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive 1214 reads from and/or writes to removable storage unit 1218 in a well-known manner.
[0144] In some aspects, secondary memory 1210 can comprise other means, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system 1200. Such means, instrumentalities or other approaches can comprise, for example, a removable storage unit 1222 and an interface 1220. Examples of the removable storage unit 1222 and the interface 1220 can comprise a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface.
[0145] In some aspects, computer system 1200 can further comprise a communication or network interface 1224. Communication interface 1224 enables computer system 1200 to communicate and interact with any combination of remote devices, remote networks, remote entities, etc. (individually and collectively referenced by reference number 1228). For example, communication interface 1224 can allow computer system 1200 to communicate with remote devices 1228 over communications path 1226, which can be wired and/or wireless, and which can comprise any combination of LANs, WANs, the Internet, etc. Control logic and/or data can be transmitted to and from computer system 1200 via communications path 1226.
[0146] In some aspects, it is envisaged that lock-in functions can be implemented in a variety of ways. For example, tmeas can be set to a minimum possible value such that there is just enough aggregation of the optical signal to discern a useful SNR. In a further example, tmeas can correspond to one or more cycles of a modulation signal (e.g., 2π radians). For efficiency, the pre-generated cosine and/or sine tables described above can cover some finite range of time (e.g., the minimum possible value of tmeas as described in the non-limiting example above). However, when measuring for longer periods (e.g., aggregating for longer than the minimum time so as to improve SNR), the limited pre-generated cosine and/or sine tables should be extended by a corresponding amount. This can lead to additional hardware real estate for the analysis, which can increase cost and complexity.
[0147] In some aspects, the pre-generated tables and use thereof can be implemented in a manner that simplifies digital calculation.
[0148] FIG. 13 shows a flow diagram 1300 of operations performed in connection with detection systems disclosed herein, according to some aspects. In some aspects, flow diagram 1300 can be implemented via any one of analyzers 616, 816, 817, 916, and/or 917 (FIGS. 6, 8, and 9) or a combination thereof. A pre-generated table 1302 (e.g., a cosine table, a sine table, or the like) can comprise discrete elements (the element number is tracked via the index j). The values of pre-generated table 1302 are denoted by p1, p2, p3, ..., pj. Any of the detection systems disclosed herein can generate a measurement signal 1304 based on receipt of scattered illumination from a target. Measurement signal 1304 can be discretized (e.g., in digital form), the elements of which are considered in the context of the index j (e.g., mj, mj+1, mj+2, and so on) that was used for describing pre-generated table 1302. The element mj can be the first element of measurement signal 1304. In some aspects, other elements can precede the element mj.
[0149] In some aspects, the analysis of measurement signal 1304 can be performed by combining measurement signal 1304 and pre-generated table 1302 (e.g., via multiplication operation 1306). For example, the multiplication pj×mj can be performed. The result is a quantity vj. Since pre-generated table 1302 can correspond to a periodic table (e.g., a cosine table with a periodicity of j elements), the next multiplication can be p1×mj+1 to generate quantity vj+1. This operation can be performed multiple times for corresponding discrete elements (e.g., to generate vj+2 and so on), which can be denoted as quantities 1308. Quantities 1308 can be aggregated (e.g., via summation operation 1310) over an integration period (e.g., tmeas). The aggregation operation can be accompanied by a normalization operation based on the number of elements that were summed over (e.g., to extract a correct value for the amplitude of modulation). The aggregated result can be output 1312, which can be one of the coefficients an and bn that allow for the determination of phase or amplitude (output coefficients are also illustrated in FIGS. 8 and 9).
[0150] In some aspects, to increase analysis efficiency, pre-generated table 1302 can be implemented along with a circular-shift register 1314. Circular register 1314 can allow table 1302 to be repeated when tmeas is set to a time period that is longer than one cycle of a cosine/sine table. To facilitate the use of shift register 1314, a condition can be imposed such that summation operation 1310 be performed over an exact multiple of the corresponding modulation period. In some aspects, when summation operation 1310 is performed in each modulation channel, a condition can be that summation operation 1310 is performed at exact multiples of each modulation period. The use of pre-generated table 1302 can be more efficient than extrapolating or pre-loading additional elements that extend pre-generated table 1302.
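The streaming sketch below (hypothetical rates and signal) follows the flow of paragraphs [0148]-[0150]: each incoming sample is multiplied by the matching entry of a short pre-generated cosine table, a circular index stands in for circular-shift register 1314 so the table is reused across multiple modulation periods, and the products are summed and normalized over tmeas.

```python
import numpy as np

fs = 1e6                                  # sampling rate (assumed)
f_mod = 10_000.0                          # one modulation channel (assumed)
table_len = int(fs / f_mod)               # one full period of the table: 100 entries
cos_table = np.cos(2 * np.pi * np.arange(table_len) / table_len)

n_periods = 50                            # t_meas spans 50 table periods
n_samples = n_periods * table_len
t = np.arange(n_samples) / fs
signal = 2.0 + 0.7 * np.cos(2 * np.pi * f_mod * t + 0.5)   # measurement signal m_j

acc = 0.0
for j, m_j in enumerate(signal):
    p = cos_table[j % table_len]          # circular reuse of the short table
    acc += p * m_j                        # v_j = p * m_j, aggregated by summation

a = 2.0 * acc / n_samples                 # normalization over the number of samples
print("recovered cosine coefficient a ~", round(a, 3))   # about 0.7*cos(0.5)
```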
[0151] In some aspects, the process can be reset and restarted for the next integration period. The process can be iterated for a number of different pre-generated tables that correspond to different modulation frequencies such that information can be extracted from the different modulation channels. One or more of the operations of flow diagram 1300 can be performed using a processor of the camera or an external processor (e.g., a CPU or GPU of a personal computer).
[0152] FIG. 14 shows a flow diagram 1400 of operations performed in connection with detection systems disclosed herein, according to some aspects. In some aspects, flow diagram 1400 can have some features in common with flow diagram 1300 (FIG. 13). Unless otherwise noted, descriptions of elements of FIG. 13 can also apply to FIG. 14. Elements appearing in FIG. 14 that correspond to elements in FIG. 13 can have like reference numbers (e.g., reference numbers sharing the two rightmost numeric digits). Examples of such elements in FIG. 14 can include, for example, pre-generated table 1402, measurement signal 1404, multiplication operation 1406, quantity 1408, summation operation 1410, and output 1412.
[0153] In some aspects, a storage register 1416 can be implemented in addition to one or more processes already described in reference to FIG. 13. Using storage register 1416, quantity 1408 can be stored in memory (e.g., RAM, cache, non-volatile memory, or the like). The values of quantity 1408 stored via storage register 1416 can then be subtracted (e.g., via subtraction operation 1418) from the sum output by summation operation 1410. The subtraction can be lined up in time such that a total measurement time is accounted for. The total measurement time can comprise a first time period and a second time period, as described above. The delay can shift a time window of aggregation (e.g., a moving time window for summation operation 1410). This implementation can allow the coefficients an and bn (output 1412) to be output in a continuous manner. This form of continuous output can be regarded as a form of “finite impulse response” filter. A finite impulse response (FIR) filter can be thought of as a filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely or for long periods.
[0154] In some aspects, the operations in flow diagrams 1400 and 1300 (FIG. 13) can be performed at a speed that is consistent with the sampling rate of the detection system. However, this can lead to a prohibitively large data output volume. The problem of large data output can be mitigated by applying a decimation operation 1420 to output 1412. Continuous output can also be achieved in flow diagram 1300 (FIG. 13) by implementing a suitable input from an external timing mechanism.
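The sketch below (assumed window length, rates, and signal) illustrates the moving-window variant of FIG. 14 together with decimation: each product is stored, the running sum drops the oldest stored product once the window is full so the coefficient is output continuously, and the continuous output is then decimated to reduce the data rate.

```python
import numpy as np
from collections import deque

fs = 1e6                                       # sampling rate (assumed)
f_mod = 10_000.0                               # modulation channel (assumed)
window = 200                                   # sliding window: 2 modulation periods (assumed)
t = np.arange(2000) / fs
signal = 1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t)

cos_table = np.cos(2 * np.pi * f_mod * t)      # reference for the same channel
store = deque()                                # storage register for products v_j
running = 0.0
outputs = []
for v in signal * cos_table:
    store.append(v)                            # keep each product for later subtraction
    running += v
    if len(store) > window:
        running -= store.popleft()             # subtract the stored quantity (FIR-like window)
        outputs.append(2.0 * running / window) # continuous coefficient output a(t)

decimated = outputs[::100]                     # decimation to reduce the output data rate
print("continuous a ~", round(outputs[-1], 3), "| decimated samples:", len(decimated))
```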
[0155] In some aspects, a non-transitory, tangible apparatus or article of manufacture comprising a non-transitory, tangible computer useable or readable medium having control logic (software) stored thereon is also referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system 1200, main memory 1208, secondary memory 1210, and removable storage units 1218 and 1222, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system 1200), causes such data processing devices to operate as described herein.
[0156] Based on the teachings contained in this disclosure, it will be apparent to those skilled in the relevant art(s) how to make and use aspects of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown in FIG. 12. In particular, aspects described herein can operate with software, hardware, and/or operating system implementations other than those described herein.
[0157] The embodiments may further be described using the following clauses: 1. A metrology system comprising: an illumination system configured to transmit illumination toward a target, the illumination having a plurality of illumination parameters associated with a corresponding plurality of modulation frequencies; a camera configured to receive a scattered illumination from the target and to generate, per pixel of the camera, a measurement signal encoded with signatures of the plurality of modulation frequencies; and an analyzer system configured to, per pixel of the camera, demodulate the measurement signal based on the plurality of modulation frequencies and to output a phase, an amplitude, or the phase and amplitude of demodulated components of the measurement signal corresponding to the modulation frequencies.
2. The metrology system of clause 1, wherein the illumination system mixes different illumination parameters from a set of parameters.
3. The metrology system of clause 2, wherein the set of parameters comprises one or more wavelengths, one or more polarizations, and one or more angles of incidence at the target.
4. The metrology system of clause 1, wherein: the illumination system is further configured to apply the plurality of illumination parameters simultaneously to the transmitted illumination; and the analyzer system is further configured to perform the demodulating simultaneously for the plurality of illumination parameters.
5. The metrology system of clause 1, further comprising: a multiplexer configured to combine measurement signals from a group of pixels of the camera; and a demultiplexer configured to demultiplex the combined measurement signals, wherein demodulating of the measurement signal is performed using the demultiplexed measurement signals.
6. The metrology system of clause 1, wherein each pixel of the camera is sensitive to a plurality of wavelengths in parallel.
7. The metrology system according to clause 1, wherein a structure of the camera is tiered and comprises a pixel layer, an analog-to-digital layer, and a logic layer.
8. The metrology system of clause 1, further comprising a time reference system configured to provide a timing basis for each of the modulation frequencies.
9. The metrology system of clause 1, further comprising an analog-to-digital converter system configured to receive the measurement signals from pixels of the camera in an analog state and to output the measurement signals in a digital state.
10. The metrology system of clause 1, wherein: the demodulating of the measurement signal is characterized by a measurement sampling time tmeas; and tmeas is an integer multiple of periods of the modulation frequencies.
11. The metrology system of clause 1, wherein: the demodulating of the measurement signal is characterized by a measurement sampling rate fmeas; and fmeas is an integer multiple of a frequency separation between at least two of the modulation frequencies.
12. The metrology system of clause 1, wherein the analyzer system is further configured to perform the demodulating of the measurement signal by combining data in the measurement signal with at least a cosine table, at least a sine table, one or more sine-only tables, or one or more cosine-only tables.
13. The metrology system of clause 12, wherein the combining of the data is performed via a multiplication operation.
14. The metrology system of clause 12, wherein the analyzer system is further configured to perform the demodulating of the measurement signal using a shift register on the at least a cosine table, the at least a sine table, the one or more sine-only tables, or the one or more cosine-only tables.
15. The metrology system of clause 12, wherein: an output of the combining of the data is a plurality of discrete quantities; and the analyzer system is further configured to aggregate the discrete quantities.
16. The metrology system of clause 15, wherein the aggregating of the discrete quantities is performed over a measurement sampling time tmeas.
17. The metrology system of clause 15, wherein the aggregating of the discrete quantities is performed over a moving time window.
18. The metrology system of clause 17, wherein the outputting of the phase, the amplitude, or the phase and amplitude of demodulated components is performed continuously based on the moving time window.
19. The metrology system of clause 18, wherein the analyzer system is further configured to decimate continuous outputs based on the moving time window to reduce a data output rate per pixel of the camera.
20. A lithographic apparatus comprising: an illumination source configured to illuminate a pattern of a patterning device; a projection system configured to project an image of the pattern onto a substrate; and a metrology system comprising: an illumination system further configured to transmit illumination toward a target, the illumination having a plurality of illumination parameters associated with a corresponding plurality of modulation frequencies; a camera configured to receive a scattered illumination from the target and to generate, per pixel of the camera, a measurement signal encoded with signatures of the plurality of modulation frequencies; and an analyzer system configured to, per pixel of the camera, demodulate the measurement signal based on the plurality of modulation frequencies and to output a phase, an amplitude, or the phase and amplitude of demodulated components of the measurement signal corresponding to the modulation frequencies.
21. The lithographic apparatus of clause 20, wherein the illumination system mixes different illumination parameters from a set of parameters.
22. The lithographic apparatus of clause 21, wherein the set of parameters comprises one or more wavelengths, one or more polarizations, and one or more angles of incidence at the target.
23. The lithographic apparatus of clause 20, wherein: the illumination system is further configured to apply the plurality of illumination parameters simultaneously to the transmitted illumination; and the analyzer system is further configured to perform the demodulating simultaneously for the plurality of illumination parameters.
24. The lithographic apparatus of clause 20, further comprising: a multiplexer configured to combine measurement signals from a group of pixels of the camera; and a demultiplexer configured to demultiplex the combined measurement signals, wherein the demodulating of the measurement signal is performed using the demultiplexed measurement signals.
25. The lithographic apparatus of clause 20, wherein each pixel of the camera is sensitive to a plurality of wavelengths in parallel.
26. The lithographic apparatus of clause 20, wherein a structure of the camera is tiered and comprises a pixel layer, an analog-to-digital layer, and a logic layer.
27. The lithographic apparatus of clause 20, further comprising a time reference system configured to provide a timing basis for each of the modulation frequencies.
28. The lithographic apparatus of clause 20, further comprising an analog-to-digital converter system configured to receive measurement signals from pixels of the camera in analog form and to output the measurement signals in digital form.
29. The lithographic apparatus of clause 20, wherein: the demodulating of the measurement signal is characterized by a measurement sampling time tmeas; and tmeas is an integer multiple of periods of the modulation frequencies.
30. The lithographic apparatus of clause 20, wherein: the demodulating of the measurement signal is characterized by a measurement sampling rate fmeas; and fmeas is an integer multiple of a frequency separation between at least two of the modulation frequencies.
31. The lithographic apparatus of clause 20, wherein the analyzer system is further configured to perform the demodulating of the measurement signal by combining data in the measurement signal with at least a cosine table, at least a sine table, one or more sine-only tables, or one or more cosine-only tables.
32. The lithographic apparatus of clause 31, wherein the combining of the data is performed via a multiplication operation.
33. The lithographic apparatus of clause 31, wherein the analyzer system is further configured to perform the demodulating of the measurement signal using a shift register on the at least a cosine table, the at least a sine table, the one or more sine-only tables, or the one or more cosine-only tables.
34. The lithographic apparatus of clause 31, wherein: an output of the combining of the data is a plurality of discrete quantities; and the analyzer system is further configured to aggregate the discrete quantities.
35. The lithographic apparatus of clause 34, wherein the aggregating of the discrete quantities is performed over a measurement sampling time tmeas.
36. The lithographic apparatus of clause 34, wherein the aggregating of the discrete quantities is performed over a moving time window.
37. The lithographic apparatus of clause 36, wherein the outputting of the phase, the amplitude, or the phase and amplitude of demodulated components is performed continuously based on the moving time window.
38. The lithographic apparatus of clause 37, wherein the analyzer system is further configured to decimate continuous outputs based on the moving time window to reduce a data output rate per pixel of the camera.
[0158] The terms “radiation,” “beam,” “light,” “illumination,” or the like can be used herein to refer to one or more types of electromagnetic radiation, for example, ultraviolet (UV) radiation (for example, having a wavelength λ of 365, 248, 193, 157 or 126 nm), extreme ultraviolet (EUV or soft X-ray) radiation (for example, having a wavelength in the range of 5-100 nm such as, for example, 13.5 nm), or hard X-ray working at less than 5 nm, as well as particle beams, such as ion beams or electron beams. Generally, radiation having wavelengths between about 400 to about 700 nm is considered visible radiation; radiation having wavelengths between about 780-3000 nm (or larger) is considered IR radiation. UV refers to radiation with wavelengths of approximately 100-400 nm. Within lithography, the term “UV” also applies to the wavelengths that can be produced by a mercury discharge lamp: G-line 436 nm; H-line 405 nm; and/or, I-line 365 nm. Vacuum UV, or VUV (i.e., UV absorbed by gas), refers to radiation having a wavelength of approximately 100-200 nm. Deep UV (DUV) generally refers to radiation having wavelengths ranging from 126 nm to 428 nm, and in some aspects, an excimer laser can generate DUV radiation used within a lithographic apparatus. It should be appreciated that radiation having a wavelength in the range of, for example, 5-20 nm relates to radiation with a certain wavelength band, of which at least part is in the range of 5-20 nm.
[0159] Although some aspects of the present disclosure are described in the context of lithographic apparatuses in the manufacture of ICs, it should be understood that lithographic apparatuses described herein can be used in other applications, for example, in the manufacture of integrated optical systems, guidance and detection patterns for magnetic domain memories, flat-panel displays, LCDs, thin-film magnetic heads, etc. Those skilled in the art will appreciate that, in the context of such alternative applications, any use of the terms “wafer” or “die” herein can be considered as specific examples of the more general terms “substrate” or “target portion”, respectively. A substrate can be processed before or after exposure in, for example, a track unit (a tool that typically applies a layer of resist to a substrate and develops the exposed resist) and/or a metrology unit. Where applicable, aspects disclosed herein can be applied to such and other substrate processing tools. Furthermore, a substrate can be processed more than once, for example in order to create a multi-layer IC, so that the term substrate used herein can also refer to a substrate that already contains multiple processed layers.
[0160] Furthermore, although some aspects of the present disclosure are described in the context of optical lithography, it should be understood that aspects of the present disclosure are not limited to optical lithography. For example, in imprint lithography, a topography in a patterning device defines the pattern created on a substrate. The topography of the patterning device can be pressed into a layer of resist supplied to the substrate whereupon the resist is cured by applying electromagnetic radiation, heat, pressure or a combination thereof. The patterning device is moved out of the resist leaving a pattern in it after the resist is cured.
[0161] It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by those skilled in relevant art(s) in light of the teachings herein.
[0162] The present disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. The foregoing description of specific aspects will so fully reveal the general nature of the present disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific aspects, without undue experimentation and without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed aspects, based on the teaching and guidance presented herein.
[0163] It is to be understood that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections can set forth one or more, but not necessarily all, aspects of the present disclosure as contemplated by the inventor(s), and thus, are not intended to limit the present disclosure and the appended claims in any way. The breadth and scope of the protected subject matter should not be limited by any of the above-described aspects, but should be defined in accordance with the following claims and their equivalents.

Claims

1. A metrology system comprising: an illumination system configured to transmit illumination toward a target, the illumination having a plurality of illumination parameters associated with a corresponding plurality of modulation frequencies; a camera configured to receive a scattered illumination from the target and to generate, per pixel of the camera, a measurement signal encoded with signatures of the plurality of modulation frequencies; and an analyzer system configured to, per pixel of the camera, demodulate the measurement signal based on the plurality of modulation frequencies and to output a phase, an amplitude, or the phase and amplitude of demodulated components of the measurement signal corresponding to the modulation frequencies.
2. The metrology system of claim 1, wherein the illumination system mixes different illumination parameters from a set of parameters.
3. The metrology system of claim 2, wherein the set of parameters comprises one or more wavelengths, one or more polarizations, and one or more angles of incidence at the target.
4. The metrology system of claim 1, wherein: the illumination system is further configured to apply the plurality of illumination parameters simultaneously to the transmitted illumination; and the analyzer system is further configured to perform the demodulating simultaneously for the plurality of illumination parameters.
5. The metrology system of claim 1, further comprising: a multiplexer configured to combine measurement signals from a group of pixels of the camera; and a demultiplexer configured to demultiplex the combined measurement signals, wherein demodulating of the measurement signal is performed using the demultiplexed measurement signals.
6. The metrology system of claim 1, wherein each pixel of the camera is sensitive to a plurality of wavelengths in parallel.
7. The metrology system according to claim 1, wherein a structure of the camera is tiered and comprises a pixel layer, an analog-to-digital layer, and a logic layer.
8. The metrology system of claim 1, further comprising a time reference system configured to provide a timing basis for each of the modulation frequencies.
9. The metrology system of claim 1, further comprising an analog-to-digital converter system configured to receive the measurement signals from pixels of the camera in an analog state and to output the measurement signals in a digital state.
10. The metrology system of claim 1, wherein: the demodulating of the measurement signal is characterized by a measurement sampling time tmeas; and tmeas is an integer multiple of periods of the modulation frequencies.
11. The metrology system of claim 1, wherein: the demodulating of the measurement signal is characterized by a measurement sampling rate fmeas; and fmeas is an integer multiple of a frequency separation between at least two of the modulation frequencies.
12. The metrology system of claim 1, wherein the analyzer system is further configured to perform the demodulating of the measurement signal by combining data in the measurement signal with at least a cosine table, at least a sine table, one or more sine-only tables, or one or more cosine-only tables.
13. The metrology system of claim 12, wherein the combining of the data is performed via a multiplication operation.
14. The metrology system of claim 12, wherein the analyzer system is further configured to perform the demodulating of the measurement signal using a shift register on the at least a cosine table, the at least a sine table, the one or more sine-only tables, or the one or more cosine-only tables.
15. The metrology system of claim 12, wherein: an output of the combining of the data is a plurality of discrete quantities; and the analyzer system is further configured to aggregate the discrete quantities.
PCT/EP2023/084624 2022-12-30 2023-12-06 Multichannel lock-in camera for multi-parameter sensing in lithographic processes WO2024141235A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263477929P 2022-12-30 2022-12-30
US63/477,929 2022-12-30
US202363509432P 2023-06-21 2023-06-21
US63/509,432 2023-06-21

Publications (1)

Publication Number Publication Date
WO2024141235A1 true WO2024141235A1 (en) 2024-07-04

Family

ID=89223171

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/084624 WO2024141235A1 (en) 2022-12-30 2023-12-06 Multichannel lock-in camera for multi-parameter sensing in lithographic processes

Country Status (1)

Country Link
WO (1) WO2024141235A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6297876B1 (en) 1997-03-07 2001-10-02 Asm Lithography B.V. Lithographic projection apparatus with an alignment system for aligning substrate on mask
US6961116B2 (en) 2002-06-11 2005-11-01 Asml Netherlands B.V. Lithographic apparatus, device manufacturing method, and device manufactured thereby
US7511799B2 (en) 2006-01-27 2009-03-31 Asml Netherlands B.V. Lithographic projection apparatus and a device manufacturing method
US20090195768A1 (en) 2008-02-01 2009-08-06 Asml Netherlands B.V. Alignment Mark and a Method of Aligning a Substrate Comprising Such an Alignment Mark
US8706442B2 (en) 2008-07-14 2014-04-22 Asml Netherlands B.V. Alignment system, lithographic system and method
WO2016050453A1 (en) * 2014-10-03 2016-04-07 Asml Netherlands B.V. Focus monitoring arrangement and inspection apparatus including such an arragnement
WO2016192865A1 (en) * 2015-06-05 2016-12-08 Asml Netherlands B.V. Alignment system
WO2021110416A1 (en) * 2019-12-05 2021-06-10 Asml Holding N.V. Overlay measurement system using lock-in amplifier technique

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"MULTICHANNEL LOCK-IN CAMERA FOR MULTI-PARAMETER SENSING IN LITHOGRAPHIC PROCESSES", vol. 712, no. 95, 21 July 2023 (2023-07-21), XP007151598, ISSN: 0374-4353, Retrieved from the Internet <URL:https://www.researchdisclosure.com/database/RD712095> [retrieved on 20230721] *
NIU ET AL.: "Specular Spectroscopic Scatterometry in DUV Lithography", SPIE, vol. 3677, 1999, XP000981735, DOI: 10.1117/12.350802
RAYMOND ET AL.: "Multiparameter Grating Metrology Using Optical Scatterometry", J. VAC. SCI. TECH. B, vol. 15, no. 2, 1997, pages 361 - 368, XP000729016, DOI: 10.1116/1.589320

Similar Documents

Publication Publication Date Title
US11994808B2 (en) Lithographic apparatus, metrology systems, phased array illumination sources and methods thereof
NL2009001A (en) Methods and patterning devices for measuring phase aberration.
WO2020239516A1 (en) Self-referencing interferometer and dual self-referencing interferometer devices
US20180164699A1 (en) Measurement System, Lithographic System, and Method Of Measuring a Target
US20230213868A1 (en) Lithographic apparatus, metrology systems, illumination switches and methods thereof
US20240036485A1 (en) Lithographic apparatus, metrology systems, and methods thereof
US20240241453A1 (en) Metrology systems, temporal and spatial coherence scrambler and methods thereof
US20230273531A1 (en) Spectrometric metrology systems based on multimode interference and lithographic apparatus
US11789368B2 (en) Lithographic apparatus, metrology system, and illumination systems with structured illumination
US20240077308A1 (en) Systems and methods for measuring intensity in a lithographic alignment apparatus
US20230341785A1 (en) Lithographic apparatus, metrology systems, and methods thereof
US20230058714A1 (en) Lithographic apparatus, metrology systems, illumination sources and methods thereof
US11971665B2 (en) Wafer alignment using form birefringence of targets or product
WO2024141235A1 (en) Multichannel lock-in camera for multi-parameter sensing in lithographic processes
US20230213871A1 (en) Lithographic apparatus, multi-wavelength phase-modulated scanning metrology system and method
US20240094641A1 (en) Intensity order difference based metrology system, lithographic apparatus, and methods thereof
WO2024052061A1 (en) Measuring contrast and critical dimension using an alignment sensor
US20230324817A1 (en) Lithographic apparatus, metrology system, and intensity imbalance measurement for error correction
WO2023198444A1 (en) Metrology apparatus with configurable printed optical routing for parallel optical detection
WO2023285138A1 (en) Metrology systems with phased arrays for contaminant detection and microscopy
WO2022258275A1 (en) Integrated optical alignment sensors
WO2024141215A1 (en) Metrology system based on multimode optical fiber imaging and lithographic apparatus
WO2024022839A1 (en) Metrology system using multiple radiation spots
WO2023072880A1 (en) Inspection apparatus, polarization-maintaining rotatable beam displacer, and method
WO2024141216A1 (en) Lithographic apparatus and inspection system for measuring wafer deformation