WO2019145278A1 - Methods and systems for determining pre-scan characteristics


Info

Publication number: WO2019145278A1
Authority: WO (WIPO, PCT)
Application number: PCT/EP2019/051461
Prior art keywords: feature, sem, parameter, image, change
Other languages: English (en)
Inventors: Yu Cao, Thomas I. Wallow, Chen Zhang, Jen-Shiang Wang
Original assignee: ASML Netherlands B.V.
Application filed by ASML Netherlands B.V.
Publication of WO2019145278A1 (published in French)


Classifications

    • G — PHYSICS
    • G03 — PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F — PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F 7/00 — Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F 7/70 — Microphotolithographic exposure; Apparatus therefor
    • G03F 7/70483 — Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
    • G03F 7/70605 — Workpiece metrology
    • G03F 7/70616 — Monitoring the printed patterns
    • G03F 7/70625 — Dimensions, e.g. line width, critical dimension [CD], profile, sidewall angle or edge roughness

Definitions

  • the present description relates to methods of, and apparatuses for, determining information associated with a feature on an object prior to that object being scanned by a scanning electron microscope (“SEM”).
  • a lithographic apparatus is a machine that applies a desired pattern onto a substrate, usually onto a target portion of the substrate.
  • a lithographic apparatus can be used, for example, in the manufacture of integrated circuits (ICs).
  • a patterning device which is alternatively referred to as a mask or a reticle, may be used to generate a circuit pattern to be formed on an individual layer of the IC.
  • This pattern can be transferred onto a target portion (e.g., including part of, one, or several dies) on a substrate (e.g., a silicon wafer). Transfer of the pattern is typically via imaging onto a layer of radiation-sensitive material (resist) provided on the substrate.
  • a single substrate will contain a network of adjacent target portions that are successively patterned.
  • lithographic apparatus include so-called steppers, in which each target portion is irradiated by exposing an entire pattern onto the target portion at one time, and so-called scanners, in which each target portion is irradiated by scanning the pattern through a radiation beam in a given direction (the "scanning" direction) while synchronously scanning the substrate parallel or anti-parallel to this direction. It is also possible to transfer the pattern from the patterning device to the substrate by imprinting the pattern onto the substrate.
  • Manufacturing devices, such as semiconductor devices, typically involves processing a substrate (e.g., a semiconductor wafer) using a number of fabrication processes to form various features and multiple layers of the devices. Such layers and features are typically manufactured and processed using, e.g., deposition, lithography, etch, chemical-mechanical polishing, and ion implantation. Multiple devices may be fabricated on a plurality of dies on a substrate and then separated into individual devices. This device manufacturing process may be considered a patterning process.
  • a patterning process involves a patterning step, such as optical and/or nanoimprint lithography using a lithographic apparatus, to provide a pattern on a substrate and typically, but optionally, involves one or more related pattern processing steps, such as resist development by a development apparatus, baking of the substrate using a bake tool, etching using the pattern using an etch apparatus, etc. Further, one or more metrology processes are typically involved in the patterning process.
  • Metrology processes are used at various steps during a patterning process to monitor and control the process.
  • metrology processes are used to measure one or more characteristics of a substrate, such as a relative location (e.g., registration, overlay, alignment, etc.) or parameter (e.g., line width, critical dimension (CD), thickness, etc.) of features formed on the substrate during the patterning process, such that, for example, the performance of the patterning process can be determined from the one or more characteristics. If the one or more characteristics are unacceptable (e.g., out of a predetermined range for the characteristic(s)), the measurements of the one or more characteristics may be used to alter one or more parameters of the patterning process such that further substrates manufactured by the patterning process have an acceptable characteristic(s).
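The feedback loop described above — measure a characteristic, compare it to its allowed range, and adjust a process parameter for subsequent substrates — can be sketched as follows. This is an illustrative sketch only: the function name, the linear dose-to-CD sensitivity, and the numeric values are assumptions, not part of the disclosure.

```python
def adjust_process(measured_cd_nm, target_cd_nm, tolerance_nm, dose):
    """If the measured critical dimension (CD) falls outside the allowed
    band around the target, nudge the exposure dose for subsequent
    substrates; otherwise leave the process unchanged."""
    error = measured_cd_nm - target_cd_nm
    if abs(error) <= tolerance_nm:
        return dose                          # within spec: no correction
    sensitivity_nm_per_dose = 0.5            # assumed linear CD/dose sensitivity
    return dose - error / sensitivity_nm_per_dose
```

In practice the relation between a process parameter and a measured characteristic is modeled and calibrated rather than assumed linear, but the control structure is the same.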
  • errors may be introduced in other parts of the patterning process, such as in etch, development, bake, etc., and similarly can be characterized in terms of, e.g., overlay errors, CD errors, etc.
  • the errors may directly cause a problem in terms of the function of the device, including failure of the device to function or one or more electrical problems of the functioning device.
  • SEMs have high resolving power and are capable of resolving features of the order of 30 nm or less, 20 nm or less, 10 nm or less, or 5 nm or less. SEM images of semiconductor devices are often used in the semiconductor fab to observe what is happening at the device level.
  • the measurement information (such as extracted from SEM images of device structures) can be used for process modeling, existing model calibration (including recalibration), defect detection, estimation, characterization or classification, yield estimation, process control or monitoring, etc.
  • a method including: obtaining image data representing a plurality of scanning electron microscope (“SEM”) images, each SEM image comprising a representation of a feature, and each SEM image being associated with a respective scan of the feature by an SEM; determining, for each SEM image, a parameter associated with each of a plurality of gauge positions along the feature; determining a change in the parameter from each SEM image to a subsequent SEM image of the plurality of SEM images; determining, for each gauge position, a rate of change of the parameter based on a difference in a location of the parameter between at least two of the plurality of SEM images; and generating feature data representing a reconstruction of the feature prior to the SEM being applied to the feature by extrapolating, for the parameter associated with each gauge position, an original location of the parameter based on the rate of change of the parameter.
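The claimed reconstruction can be sketched numerically: track a parameter (here, an edge location) at each gauge position across successive SEM frames, fit its rate of change, and extrapolate back to before any electron dose was applied. This is a minimal sketch assuming the per-frame change is approximately linear; the function name and array layout are illustrative, not taken from the disclosure.

```python
import numpy as np

def reconstruct_pre_scan_positions(edge_positions):
    """Estimate pre-scan edge locations at each gauge position.

    edge_positions: sequence of shape (n_frames, n_gauges), the measured
    edge location (e.g., in nm) at each gauge in each successive SEM
    frame, frame 1 being the first scan applied to the feature.
    Returns the location extrapolated to "frame 0", i.e., before the
    SEM was applied.
    """
    x = np.asarray(edge_positions, dtype=float)
    n_frames, n_gauges = x.shape
    t = np.arange(1, n_frames + 1)          # frame indices 1..n
    original = np.empty(n_gauges)
    for g in range(n_gauges):
        # least-squares rate of change of the edge location per frame
        rate, intercept = np.polyfit(t, x[:, g], 1)
        # the intercept is the value at t = 0: before any electron dose
        original[g] = intercept
    return original
```

For a feature shrinking 2 nm per scan from an original 100 nm edge location, frames measuring 98, 96, 94 nm extrapolate back to 100 nm.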
  • a computing device including memory and at least one processor.
  • the at least one processor may be operable to: obtain image data representing a plurality of scanning electron microscope (“SEM”) images, each SEM image comprising a representation of a feature, and each SEM image being associated with a respective scan of the feature by an SEM; determine, for each SEM image, a parameter associated with each of a plurality of gauge positions along the feature; determine a change in the parameter from each SEM image to a subsequent SEM image of the plurality of SEM images; determine, for each gauge position, a rate of change of the parameter based on a difference in a location of the parameter between at least two of the plurality of SEM images; and generate feature data representing a reconstruction of the feature prior to the SEM being applied to the feature by extrapolating, for the parameter associated with each gauge position, an original location of the parameter based on the rate of change of the parameter.
  • a non-transitory computer readable medium including instructions that, when executed by at least one processor of a machine, cause the machine to: obtain image data representing a plurality of scanning electron microscope (“SEM”) images, each SEM image comprising a representation of a feature, and each SEM image being associated with a respective scan of the feature by an SEM; determine, for each SEM image, a parameter associated with each of a plurality of gauge positions along the feature; determine a change in the parameter from each SEM image to a subsequent SEM image of the plurality of SEM images; determine, for each gauge position, a rate of change of the parameter based on a difference in a location of the parameter between at least two of the plurality of SEM images; and generate feature data representing a reconstruction of the feature prior to the SEM being applied to the feature by extrapolating, for the parameter associated with each gauge position, an original location of the parameter based on the rate of change of the parameter.
  • FIG. 1 is an illustrative diagram of an exemplary lithographic apparatus, in accordance with various embodiments;
  • FIG. 2 is an illustrative diagram of an exemplary lithographic cell or cluster, in accordance with various embodiments;
  • FIG. 3 is an illustrative diagram of an exemplary scanning electron microscope (“SEM”), in accordance with various embodiments;
  • FIG. 4 is an illustrative diagram of an exemplary electron beam inspection apparatus, in accordance with various embodiments;
  • FIG. 5 is an illustrative flowchart of an exemplary process for modeling and/or simulating at least part of a patterning process, in accordance with various embodiments;
  • FIG. 6 is an illustrative flowchart of an exemplary process for calibrating a model, in accordance with various embodiments;
  • FIG. 7A is an illustrative diagram of an application of an SEM to a feature and the results thereof, in accordance with various embodiments;
  • FIG. 7B is an illustrative diagram of a plurality of images of an SEM being applied to a feature in succession, in accordance with various embodiments;
  • FIG. 7C is an illustrative flowchart of an exemplary process for generating image data representing images of a feature, in accordance with various embodiments;
  • FIG. 8A is an illustrative diagram of an exemplary pattern analyzed in accordance with various embodiments;
  • FIGS. 8B and 8C are illustrative diagrams of exemplary images of a feature having gauge lines applied thereto for determining a distance between edge points along the gauge lines prior to and after application of an SEM, respectively, in accordance with various embodiments;
  • FIG. 9 is an illustrative flowchart of an exemplary process for determining original geometric parameters of a feature, in accordance with various embodiments;
  • FIG. 10 is an illustrative diagram of an exemplary part of a pattern analyzed based on intensity values, in accordance with various embodiments;
  • FIG. 11 is an illustrative diagram of an exemplary technique for determining a parameter, such as an edge, of a feature using intensity information, in accordance with various embodiments;
  • FIG. 12 is an illustrative diagram of various rates of changes along a contour describing a feature, in accordance with various embodiments;
  • FIG. 13 is an illustrative diagram of an exemplary process for extrapolating original parameters of a feature based on a determined contour function, in accordance with various embodiments; and
  • FIG. 14 schematically depicts a computer system that may implement embodiments of this disclosure.
  • FIG. 1 is an illustrative diagram of an exemplary lithographic apparatus, in accordance with various embodiments.
  • the apparatus may include:
  • an illumination system (illuminator) IL configured to condition a radiation beam B;
  • a support structure (e.g., a mask table) MT constructed to support a patterning device (e.g., a mask) MA and connected to a first positioner PM configured to accurately position the patterning device MA in accordance with certain parameters;
  • a substrate table (e.g., a wafer table) WTa constructed to hold a substrate (e.g., a resist-coated wafer) W and connected to a second positioner PW configured to accurately position the substrate W in accordance with certain parameters; and
  • a projection system (e.g., a refractive projection lens system) PS configured to project a pattern imparted to the radiation beam B by patterning device MA onto a target portion C (e.g., including one or more dies) of the substrate W.
  • the illumination system may include various types of optical components, such as refractive, reflective, magnetic, electromagnetic, electrostatic or other types of optical components, or any combination thereof, for directing, shaping, or controlling radiation.
  • the patterning device support structure holds the patterning device in a manner that depends on the orientation of the patterning device, the design of the lithographic apparatus, and other conditions, such as, for example, whether or not the patterning device is held in a vacuum environment.
  • the patterning device support structure can use mechanical, vacuum, electrostatic or other clamping techniques to hold the patterning device.
  • the patterning device support structure may be a frame or a table, for example, which may be fixed or movable as required.
  • the patterning device support structure may ensure that the patterning device is at a desired position, for example, with respect to the projection system. Any use of the terms "reticle" or "mask" herein may be considered synonymous with the more general term "patterning device."
  • patterning device used herein should be broadly interpreted as referring to any device that can be used to impart a radiation beam with a pattern in its cross-section such as to create a pattern in a target portion of the substrate. It should be noted that the pattern imparted to the radiation beam may not exactly correspond to the desired pattern in the target portion of the substrate, for example if the pattern includes phase-shifting features or so-called assist features. Generally, the pattern imparted to the radiation beam will correspond to a particular functional layer in a device being created in the target portion, such as an integrated circuit.
  • the patterning device may be transmissive or reflective.
  • Examples of patterning devices include masks, programmable mirror arrays, and programmable LCD panels.
  • Masks are well known in lithography, and may include mask types such as binary, alternating phase-shift, and attenuated phase-shift, as well as various hybrid mask types.
  • An example of a programmable mirror array employs a matrix arrangement of small mirrors, each of which can be individually tilted to reflect an incoming radiation beam in different directions. The tilted mirrors may impart a pattern in a radiation beam, which is reflected by the mirror matrix.
  • projection system used herein should be broadly interpreted as encompassing any type of projection system, including, but not limited to, refractive, reflective, catadioptric, magnetic, electromagnetic and electrostatic optical systems, or any combination thereof, as appropriate for the exposure radiation being used, or for other factors such as the use of an immersion liquid or the use of a vacuum. Any use of the term "projection lens" herein may be considered as synonymous with the more general term "projection system."
  • the apparatus is of a transmissive type (e.g., employing a transmissive mask).
  • the apparatus may be of a reflective type (e.g., employing a programmable mirror array of a type as referred to above, or employing a reflective mask).
  • the lithographic apparatus may be of a type having two (dual stage) or more tables (e.g., two or more substrate tables, two or more patterning device support structures, or a substrate table and metrology table).
  • the additional tables may be used in parallel, or preparatory steps may be carried out on one or more tables while one or more other tables are being used for pattern transfer.
  • the lithographic apparatus may also be of a type where at least a portion of the substrate may be covered by a liquid having a relatively high refractive index (e.g., water), to fill a space between the projection system and the substrate.
  • An immersion liquid may also be applied to other spaces in the lithographic apparatus, for example, between the mask and the projection system. Immersion techniques are well known in the art for increasing the numerical aperture of projection systems.
  • immersion as used herein does not mean that a structure, such as a substrate, must be submerged in liquid, but rather only means that liquid is located between the projection system and the substrate during exposure.
  • the illuminator IL may receive a radiation beam from a radiation source SO.
  • the source and the lithographic apparatus may be separate entities, for example when the source is an excimer laser. In such cases, the source is not considered to form part of the lithographic apparatus and the radiation beam is passed from the source SO to the illuminator IL with the aid of a beam delivery system BD including, for example, suitable directing mirrors and/or a beam expander. In other cases, however, the source may be an integral part of the lithographic apparatus, for example when the source is a mercury lamp.
  • the source SO and the illuminator IL, together with the beam delivery system BD if required, may be referred to as a radiation system.
  • the illuminator IL may include an adjuster AD for adjusting the angular intensity distribution of the radiation beam. Generally, at least the outer and/or inner radial extent (commonly referred to as s-outer and s-inner, respectively) of the intensity distribution in a pupil plane of the illuminator can be adjusted.
  • the illuminator IL may include various other components, such as an integrator IN and a condenser CO. The illuminator IL may be used to condition the radiation beam such that the beam may have a desired uniformity and intensity distribution in its cross section.
  • the radiation beam B is incident on the patterning device (e.g., mask) MA, which is held on the patterning device support (e.g., mask table MT), and is patterned by the patterning device. Having traversed the patterning device (e.g., mask) MA, the radiation beam B passes through the projection system PS, which focuses the beam onto a target portion C of the substrate W.
  • with the aid of the second positioner PW and position sensor IF (e.g., an interferometric device, linear encoder, 2-D encoder, or capacitive sensor), the substrate table WTa can be moved accurately, e.g., to position different target portions C in the path of the radiation beam B.
  • the first positioner PM and another position sensor can be used to accurately position the patterning device (e.g., mask) MA with respect to the path of the radiation beam B, e.g., after mechanical retrieval from a mask library, or during a scan.
  • movement of the patterning device support (e.g., mask table) MT may be realized with the aid of a long-stroke module (coarse positioning) and a short-stroke module (fine positioning), which form part of the first positioner PM.
  • movement of the substrate table WTa may be realized using a long-stroke module and a short-stroke module, which may form part of the second positioner PW.
  • the patterning device support (e.g., mask table) MT may be connected to a short-stroke actuator only, or may be fixed.
  • Patterning device (e.g., mask) MA and substrate W may be aligned using mask alignment marks M1, M2 and substrate alignment marks P1, P2.
  • although the substrate alignment marks as illustrated occupy dedicated target portions, they may be located in spaces between target portions (these are known as scribe-lane alignment marks).
  • the mask alignment marks may be located between the dies.
  • Small alignment markers may also be included within dies, in amongst the device features, in which case it may be desirable that the markers be as small as possible and not require any different patterning or other process conditions than adjacent features.
  • the depicted apparatus could be used in at least one of the following modes:
  • in step mode, the patterning device support (e.g., mask table) MT and the substrate table WTa are kept essentially stationary, while an entire pattern imparted to the radiation beam is projected onto a target portion C at one time (e.g., a single static exposure).
  • the substrate table WTa may then be shifted in the X and/or Y direction so that a different target portion C can be exposed.
  • the maximum size of the exposure field limits the size of the target portion C imaged in a single static exposure.
  • in scan mode, the patterning device support (e.g., mask table) MT and the substrate table WTa are scanned synchronously while a pattern imparted to the radiation beam is projected onto a target portion C (e.g., a single dynamic exposure).
  • the velocity and direction of the substrate table WTa relative to the patterning device support (e.g., mask table) MT may be determined by the (de-)magnification and image reversal characteristics of the projection system PS.
  • the maximum size of the exposure field limits the width (in the non-scanning direction) of the target portion in a single dynamic exposure, whereas the length of the scanning motion determines the height (in the scanning direction) of the target portion.
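The scan-mode geometry above can be made concrete with a small sketch: the exposure field caps the target portion's width, the scan length sets its height, and the (de-)magnification fixes how far the reticle side must travel. The function name, default demagnification, and numbers are illustrative assumptions, not values from the disclosure.

```python
def target_portion_size(field_width_mm, scan_length_mm, demag=0.25):
    """Dimensions of the target portion for one dynamic (scan) exposure.

    Width (non-scanning direction) is limited by the exposure field;
    height (scanning direction) equals the wafer-side scan length.
    The reticle-side travel is the wafer-side scan length divided by
    the demagnification factor (assumed 4x reduction here).
    """
    width = field_width_mm                   # capped by the exposure field
    height = scan_length_mm                  # set by the scanning motion
    reticle_travel = scan_length_mm / demag  # travel needed on the mask side
    return width, height, reticle_travel
```

For example, a 26 mm field width with a 33 mm scan at 4x reduction yields a 26 mm by 33 mm target portion and 132 mm of reticle-side travel.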
  • in another mode, the patterning device support (e.g., mask table) MT is kept essentially stationary holding a programmable patterning device, and the substrate table WTa is moved or scanned while a pattern imparted to the radiation beam is projected onto a target portion C.
  • a pulsed radiation source is employed and the programmable patterning device is updated as required after each movement of the substrate table WTa or in between successive radiation pulses during a scan.
  • This mode of operation can be readily applied to maskless lithography that utilizes a programmable patterning device, such as a programmable mirror array of a type as referred to above.
  • Lithographic apparatus LA is of a so-called dual stage type, which has two tables WTa, WTb (e.g., two substrate tables), and two stations - an exposure station and a measurement station - between which the tables can be exchanged. For example, while a substrate on one table is being exposed at the exposure station, another substrate can be loaded onto the other substrate table at the measurement station and various preparatory steps carried out.
  • the preparatory steps may include mapping the surface control of the substrate using a level sensor LS and measuring the position of alignment markers on the substrate using an alignment sensor AS, both sensors being supported by a reference frame RF.
  • a second position sensor may be provided to enable the positions of the table to be tracked at both stations.
  • another table without a substrate waits at the measurement station (where optionally measurement activity may occur).
  • This other table has one or more measurement devices and may optionally have other tools (e.g., cleaning apparatus).
  • the table without a substrate moves to the exposure station to perform, e.g., measurements, and the table with the substrate moves to a location (e.g., the measurement station) where the substrate is unloaded and another substrate is loaded.
  • FIG. 2 is an illustrative diagram of an exemplary lithographic cell or cluster, in accordance with various embodiments.
  • a lithographic apparatus LA may form part of a lithographic cell LC, also sometimes referred to as a lithocell or lithocluster, which also may include, in some embodiments, an apparatus to perform one or more pre- and post-pattern transfer processes on a substrate.
  • these include one or more spin coaters SC to deposit a resist layer, one or more developers DE to develop patterned resist, one or more chill plates CH, and one or more bake plates BK.
  • a substrate handler, or robot, RO may pick up a substrate from input/output ports I/O1, I/O2, move it between the different process devices, and deliver it to the loading bay LB of the lithographic apparatus.
  • These devices, which are often collectively referred to as the track, are under the control of a track control unit TCU, which is itself controlled by the supervisory control system SCS, which also controls the lithographic apparatus via lithographic control unit LACU.
  • the different apparatus may be operated to maximize throughput and processing efficiency.
  • to help ensure that the substrate that is processed (e.g., exposed) by the patterning process is processed correctly and consistently, it is desirable to inspect a processed substrate to measure one or more properties, such as overlay error between subsequent layers, line thickness, critical dimension (CD), etc. If an error is detected, an adjustment may be made to the patterning process, e.g., in terms of changing a design of, or changing a tool for designing, the patterning process, controlling an executing patterning process, etc.
  • An inspection apparatus can be used for such measurement.
  • An inspection apparatus is used to determine one or more properties of a substrate, and in particular, how one or more properties of different substrates or different layers of the same substrate vary from layer to layer and/or across a substrate and/or across different substrates, e.g., from substrate to substrate.
  • the inspection apparatus may be integrated into the lithographic apparatus LA or the lithocell LC or may be a stand-alone device.
  • An inspection apparatus to determine one or more properties of a substrate can take various different forms.
  • the inspection apparatus may use photon electromagnetic radiation to illuminate the substrate and detect radiation redirected by the substrate; such inspection apparatuses may be referred to as bright-field inspection apparatuses.
  • a bright-field inspection apparatus may use radiation with a wavelength in, for example, the range of 150-900 nm.
  • the inspection apparatus may be image-based, i.e., taking an image of the substrate, and/or diffraction-based, i.e., measuring intensity of diffracted radiation.
  • the inspection apparatus may inspect product features (e.g., features of an integrated circuit to be formed using the substrate or features of a mask) and/or inspect specific measurement targets (e.g., overlay targets, focus/dose targets, CD gauge patterns, etc.).
  • Inspection of, e.g., semiconductor wafers is often done with optics-based sub-resolution tools (bright-field inspection).
  • however, certain features to be measured are too small to be effectively measured using bright-field inspection; for example, bright-field inspection of defects in features of a semiconductor device can be challenging.
  • moreover, features that are being made using patterning processes (e.g., semiconductor features made using lithography) are becoming smaller, and the density of features is also increasing.
  • An example inspection technique is electron beam inspection.
  • Electron beam inspection involves focusing a beam of electrons on a small spot on the substrate to be inspected.
  • An image is formed by providing relative movement between the beam and the substrate (hereinafter referred to as scanning the electron beam) over the area of the substrate inspected and collecting secondary and/or backscattered electrons with an electron detector.
  • the image data is then processed to, for example, identify defects.
  • the inspection apparatus may be an electron beam inspection apparatus (e.g., the same as or similar to a scanning electron microscope (SEM)) that yields an image of a structure (e.g., some or all the structure of a device, such as an integrated circuit) exposed or transferred on the substrate.
  • FIG. 3 is an illustrative diagram of an exemplary scanning electron microscope (“SEM”), in accordance with various embodiments.
  • FIG. 3 schematically depicts an embodiment of an electron beam inspection apparatus 200. In some embodiments, a primary electron beam 202 emitted from an electron source 201 is converged by condenser lens 203 and then may pass through a beam deflector 204, an E x B deflector 205, and an objective lens 206 to irradiate a substrate 100 on a substrate table 101 at a focus.
  • a two-dimensional electron beam image can be obtained by detecting the electrons generated from the sample in synchronization with, e.g., two dimensional scanning of the electron beam by beam deflector 204 or with repetitive scanning of electron beam 202 by beam deflector 204 in an X or Y direction, together with continuous movement of the substrate 100 by the substrate table 101 in the other of the X or Y direction.
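The synchronization described above amounts to assembling a 2-D image from a 1-D stream of detector samples: the beam is deflected along one axis within each line while the other axis advances once per line. A minimal sketch of that assembly step (the function name and strict-size check are illustrative assumptions):

```python
import numpy as np

def assemble_sem_image(samples, width, height):
    """Form a 2-D image from a 1-D stream of detector samples, assuming
    detection is synchronized with a raster scan: one row of `width`
    samples per X sweep, `height` rows as Y advances line by line."""
    s = np.asarray(samples, dtype=float)
    if s.size != width * height:
        raise ValueError("sample count does not match image dimensions")
    # row-major reshape: consecutive samples fill each scan line in turn
    return s.reshape(height, width)
```

Real acquisition additionally corrects for stage drift and averages repeated frames, but the core mapping from a synchronized sample stream to pixels is this reshape.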
  • the electron beam inspection apparatus has a field of view for the electron beam defined by the angular range into which the electron beam can be provided by the electron beam inspection apparatus (e.g., the angular range through which the deflector 204 can provide the electron beam 202).
  • the spatial extent of the field of view is the spatial extent to which the angular range of the electron beam can impinge on a surface (wherein the surface can be stationary or can move with respect to the field).
  • a signal detected by secondary electron detector 207 is converted to a digital signal by an analog/digital (A/D) converter 208, and the digital signal is sent to an image processing system 300.
  • the image processing system 300 may have memory 303 to store all or part of digital images for processing by a processing unit 304.
  • the processing unit 304 (e.g., specially designed hardware, a combination of hardware and software, or a computer readable medium comprising software) is configured to convert or process the digital images into datasets representative of the digital images.
  • the processing unit 304 is configured or programmed to cause execution of a method described herein.
  • image processing system 300 may have a storage medium 301 configured to store the digital images and corresponding datasets in a reference database.
  • a display device 302 may be connected with the image processing system 300, so that an operator can conduct necessary operation of the equipment with the help of a graphical user interface.
  • Processing unit 304 may correspond to one or more processors, which may include any suitable processing circuitry capable of controlling operations and functionality of one or more components/modules of system 300, as well as facilitating communications between various components within system 300 and/or with one or more other systems/components.
  • processing unit 304 may include a central processing unit (“CPU”), a graphic processing unit (“GPU”), one or more microprocessors, a digital signal processor, or any other type of processor, or any combination thereof.
  • processing unit 304 may be performed by one or more hardware logic components including, but not limited to, field-programmable gate arrays (“FPGA”), application specific integrated circuits (“ASICs”), application-specific standard products (“ASSPs”), system-on-chip systems (“SOCs”), and/or complex programmable logic devices (“CPLDs”).
  • processing unit 304 may include local memory, which may store program systems, program data, and/or one or more operating systems. Furthermore, processing unit 304 may run an operating system (“OS”) for one or more components of system 300, and/or one or more firmware applications, media applications, and/or applications resident thereon.
  • processing unit 304 may run a local client script for reading and rendering content received from one or more websites. For example, processing unit 304 may run a local JavaScript client for rendering HTML or XHTML content received from a particular URL.
  • Memory 303 may include one or more types of storage mediums such as any volatile or non-volatile memory, or any removable or non-removable memory implemented in any suitable manner to store data for system 300. For example, information may be stored using computer-readable instructions, data structures, and/or program systems.
  • Various types of storage/memory may include, but are not limited to, hard drives, solid state drives, flash memory, permanent memory (e.g., ROM), electronically erasable programmable read-only memory (“EEPROM”), CD-ROM, digital versatile disk (“DVD”) or other optical storage medium, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other storage type, or any combination thereof.
  • memory 303 may be implemented as computer-readable storage media (“CRSM”), which may be any available physical media accessible by processing unit 304 to execute one or more instructions stored within memory 303.
  • one or more applications may be run by processing unit 304, and may be stored in memory 303.
  • FIG. 4 is an illustrative diagram of an exemplary electron beam inspection apparatus, in accordance with various embodiments.
  • the system may be used to inspect a sample 90 (such as a substrate) on a sample stage 89 and may include, amongst other aspects, a charged particle beam generator 81, a condenser lens module 82, a probe forming objective lens module 83, a charged particle beam deflection module 84, a secondary charged particle detector module 85, and an image forming module 86.
  • Charged particle beam generator 81 may be configured to generate a primary charged particle beam 91.
  • Condenser lens module 82 may condense the generated primary charged particle beam 91.
  • Probe forming objective lens module 83 may focus the condensed primary charged particle beam into a charged particle beam probe 92.
  • Charged particle beam deflection module 84 scans the formed charged particle beam probe 92 across the surface of an area of interest on sample 90 secured on sample stage 89.
  • charged particle beam generator 81, condenser lens module 82 and probe forming objective lens module 83, or their equivalent designs, alternatives or any combination thereof, together may form a charged particle beam probe generator, which generates scanning charged particle beam probe 92.
  • Secondary charged particle detector module 85 may be configured to detect secondary charged particles 93 emitted from the sample surface (maybe also along with other reflected or scattered charged particles from the sample surface) upon being bombarded by charged particle beam probe 92 to generate a secondary charged particle detection signal 94.
  • Image forming module 86 may be, e.g., a computing device.
  • secondary charged particle detector module 85 and image forming module 86, or their equivalent designs, alternatives or any combination thereof, together may form an image forming apparatus which forms a scanned image from detected secondary charged particles emitted from sample 90 being bombarded by charged particle beam probe 92.
  • a monitoring module 87 may be coupled to image forming module 86 of the image forming apparatus to monitor, control, etc., the patterning process and/or derive a parameter for patterning process design, control, monitoring, etc., using the scanned image of sample 90 received from image forming module 86. Therefore, in an embodiment, monitoring module 87 may be configured or programmed to cause execution of a method described herein. In an embodiment, monitoring module 87 may correspond to a computing device. In an embodiment, monitoring module 87 may correspond to a computer program to provide functionality herein and encoded on a computer readable medium forming, or disposed within, monitoring module 87.
  • the electron current in the system of FIG. 4 is significantly larger compared to, e.g., a CD SEM such as depicted in FIG. 3, such that the probe spot is large enough so that the inspection speed can be fast.
  • the resolution may not be as high as compared to a CD SEM because of the large probe spot.
  • the SEM images may be processed to extract contours that describe the edges of objects, representing device structures, also referred to as features, in the image. These contours are then typically quantified via parameters, such as CD, at user-defined cut-lines. Thus, typically, the images of device structures are compared and quantified via metrics, such as an edge-to-edge distance (CD) measured on extracted contours or simple pixel differences between images.
  • contour extraction may be performed, for example, using a tool such as Hitachi CD-SEM, available from Hitachi High Technologies America, Inc. of Schaumberg, IL.
  • one or more mathematical models can be provided that describe one or more steps and/or apparatuses of the patterning process, including typically the pattern transfer step.
  • a simulation of the patterning process can be performed using one or more mathematical models to simulate how the patterning process forms a patterned substrate using a measured or design pattern provided by a patterning device.
  • FIG. 5 is an illustrative flowchart of an exemplary process for modeling and/or simulating at least part of a patterning process, in accordance with various embodiments.
  • models may represent different patterning processes and need not include all the models described herein.
  • Process 500 may begin, for example, at step 502.
  • a source model is provided that represents optical characteristics of the illumination of a patterning device.
  • the optical characteristics may include, but are not limited to, a radiation intensity distribution, a bandwidth, and/or a phase distribution.
  • the source model can represent the optical characteristics of the illumination that include, but are not limited to, numerical aperture settings, illumination sigma (σ) settings as well as any particular illumination shape (e.g., off-axis radiation shape such as annular, quadrupole, dipole, etc.), where σ, also referred to herein by the word “sigma,” corresponds to an outer radial extent of the illuminator.
  • Process 500 may also include a step 504, where a projection optics model is provided that represents optical characteristics (including changes to the radiation intensity distribution and/or the phase distribution caused by the projection optics) of the projection optics.
  • the projection optics model in some embodiments, may represent the optical characteristics of the projection optics, including aberration, distortion, one or more refractive indexes, one or more physical sizes, one or more physical dimensions, etc.
  • Patterning device model module 120 captures how the design features are laid out in the pattern of the patterning device, and may include a representation of detailed physical properties of the patterning device, as described, for example, in U.S. Patent No. 7,587,704, which is incorporated herein by reference in its entirety.
  • the objective of the simulation is to accurately predict, for example, parameters such as edge placements and CDs, which can then be compared against the device design.
  • the device design is generally defined as the pre-optical proximity correction (“OPC”) patterning device layout, and will be provided in a standardized digital file format such as GDSII or OASIS. A more detailed description of OPC may be found below.
  • Process 500 may also include a step 506, where a design layout model may be provided that represents optical characteristics (including changes to the radiation intensity distribution and/or the phase distribution caused by a given design layout) of a design layout (e.g., a device design layout corresponding to a feature of an integrated circuit, a memory, an electronic device, etc.), which is the representation of an arrangement of features on or formed by the patterning device.
  • the design layout model can represent one or more physical properties of a physical patterning device, as described, for example, in U.S. Patent No. 7,587,704, which is incorporated by reference in its entirety. Since the patterning device used in the lithographic projection apparatus can be changed, it is desirable to separate the optical properties of the patterning device from the optical properties of the rest of the lithographic projection apparatus including at least the illumination and the projection optics.
  • an aerial image may be simulated from the source model of step 502, the projection optics model of step 504, and the design layout model of step 506.
  • An aerial image (“AI”) is the radiation intensity distribution at substrate level.
  • Optical properties of the lithographic projection apparatus (e.g., properties of the illumination, the patterning device, and the projection optics) dictate the aerial image.
  • a resist layer on a substrate is exposed by the aerial image and the aerial image is transferred to the resist layer as a latent “resist image” (“RI”) therein.
  • the resist image (“RI”) can be defined as a spatial distribution of solubility of the resist in the resist layer.
  • at step 510, a simulation from the aerial image of step 508 may be performed, and at step 512, a resist image can be simulated from the simulation at step 510.
  • the resist model can be used to calculate the resist image from the aerial image, an example of which can be found in U.S. Patent Application Publication No. US 2009-0157360, the disclosure of which is hereby incorporated by reference in its entirety.
  • the resist model typically describes the effects of chemical processes which occur during resist exposure, post exposure bake (“PEB”) and development, in order to predict, for example, contours of resist features formed on the substrate, and so is typically related only to such properties of the resist layer (e.g., effects of chemical processes which occur during exposure, post-exposure bake and development).
  • the optical properties of the resist layer (e.g., refractive index, film thickness, propagation and polarization effects) may be captured as part of the optical model.
  • connection between the optical model and the resist model is a simulated aerial image intensity within the resist layer, which arises from the projection of radiation onto the substrate, refraction at the resist interface, and multiple reflections in the resist film stack.
  • the radiation intensity distribution (aerial image intensity) may then be turned into a latent “resist image” by absorption of incident energy, which is further modified by diffusion processes and various loading effects.
  • Efficient simulation methods that are fast enough for full-chip applications approximate the realistic 3-dimensional intensity distribution in the resist stack by a 2-dimensional aerial (and resist) image.
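As an illustrative, non-limiting sketch of extracting a CD from such a simulated image, consider a constant-threshold resist model applied to a 1-dimensional cut through the aerial image intensity. The constant threshold and the function name are simplifying assumptions for demonstration, not the patent's resist model:

```python
# Hedged sketch: derive a CD from a 1-D aerial-image intensity profile by
# finding where the intensity crosses a (hypothetical) resist threshold,
# with linear interpolation between pixels for sub-pixel edge placement.

def threshold_cd(intensity, threshold, pixel_nm):
    """Return the distance (in nm) between the first and last threshold
    crossings of a 1-D intensity profile, or None if fewer than two
    crossings are found."""
    crossings = []
    for i in range(len(intensity) - 1):
        a, b = intensity[i], intensity[i + 1]
        if (a - threshold) * (b - threshold) < 0:  # sign change -> crossing
            frac = (threshold - a) / (b - a)       # linear interpolation
            crossings.append((i + frac) * pixel_nm)
    if len(crossings) < 2:
        return None
    return crossings[-1] - crossings[0]

# A dark line feature: intensity dips below the 0.3 threshold.
profile = [1.0, 0.9, 0.5, 0.1, 0.1, 0.5, 0.9, 1.0]
cd = threshold_cd(profile, threshold=0.3, pixel_nm=5.0)
```

The two crossings here land half a pixel inside the dip on each side, giving a 10 nm line at a 5 nm pixel size.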
  • the resist image can be used as an input to a post-pattern transfer process model module at step 514.
  • the post-pattern transfer process model of step 514 may define a performance of one or more post-resist development processes (e.g., etch, development, etc.).
  • Simulation of the patterning process can, for example, predict contours, CDs, edge placement (e.g., edge placement error), etc., in the resist and/or etched image.
  • the objective of the simulation is to accurately predict, for example, edge placement, and/or aerial image intensity slope, and/or CD, etc. of the printed pattern.
  • These values can be compared against an intended design to, for example, correct the patterning process, identify where a defect is predicted to occur, etc.
  • the intended design is generally defined as a pre-OPC design layout that can be provided in a standardized digital file format such as GDSII or OASIS, however persons of ordinary skill in the art will recognize that other file formats may also be employed.
  • the model formulation describes most, if not all, of the known physics and chemistry of the overall process, and each of the model parameters desirably corresponds to a distinct physical and/or chemical effect.
  • the model formulation thus sets an upper bound on how well the model can be used to simulate the overall manufacturing process.
  • An application of the one or more models described herein is in sophisticated fine-tuning steps of the patterning process, such as fine-tuning steps applied to the illumination, projection system, and/or patterning device design.
  • fine-tuning steps applied to the illumination, projection system, and/or patterning device design include, for example, but are not limited to, optimization of numerical aperture, optimization of coherence settings, customized illumination schemes, use of phase shifting features in or on a patterning device, optical proximity correction (“OPC”) in the patterning device layout, placement of sub-resolution assist features in the patterning device layout or other methods generally defined as "resolution enhancement techniques" (RET).
  • optical proximity correction addresses the fact that the final size and placement of a printed feature on the substrate will not simply be a function of the size and placement of the corresponding feature on the patterning device.
  • the position of a particular edge of a given feature will be influenced to a certain extent by the presence or absence of other adjacent features.
  • these proximity effects arise from coupling of radiation from more than one feature.
  • proximity effects arise from diffusion and other chemical effects during post exposure bake (“PEB”), resist development, and etching that generally follow lithographic exposure.
  • proximity effects should be predicted utilizing sophisticated numerical models, and corrections or pre-distortions are applied to the design of the patterning device before successful manufacturing of devices becomes possible. These modifications may include shifting or biasing of edge positions or line widths and/or application of one or more assist features that are not intended to print themselves, but will affect the properties of an associated primary feature.
  • model-based patterning process design requires good process models and considerable computational resources, given the many millions of features typically present in a chip design.
  • model-based design is generally not an exact science, but an iterative process that does not always resolve all possible weaknesses of a device design. Therefore, post-OPC designs (e.g., patterning device layouts after application of all pattern modifications by OPC and any other RETs) should be verified by design inspection (e.g., intensive full-chip simulation using calibrated numerical process models), in order to reduce the possibility of design flaws being built into the manufacturing of a patterning device.
  • model parameters may be inaccurate from, for example and without limitation, measurement and reading errors, and/or there may be other imperfections in the system.
  • With precise calibration of the model parameters, extremely accurate simulations can be performed. So, since computational patterning process evaluation should involve robust models that describe the patterning process precisely, a calibration procedure for such models should be used to achieve models that are valid, robust and accurate across the applicable process window.
  • the inspection apparatus may be a SEM that yields an image of one or more structures (e.g., one or more test (or calibration) patterns, or one or more patterns corresponding to some or all of the structures of a device) exposed or transferred on the substrate.
  • calibration is done by printing a certain number of 1-dimensional and/or 2-dimensional gauge patterns on a substrate (e.g., the gauge patterns may be specially designated measurement patterns or may be device parts of a design device pattern as printed on the substrate) and performing measurements on the printed patterns. More specifically, those 1-dimensional gauge patterns are line-space patterns with varying pitch and CD, and the 2-dimensional gauge patterns typically include line-ends, contacts, and/or SRAM (Static Random Access Memory) patterns. These patterns are then imaged onto a substrate and resulting substrate CDs or contact hole (also known as a via or through-chip via) energy are measured. The original gauge patterns and their substrate measurements are then used jointly to determine the model parameters, which reduce or minimize the difference between model predictions and substrate measurements.
  • the one or more gauge or calibration patterns may not correspond to structures in a device. Instead, the one or more gauge or calibration patterns may possess enough similarities with one or more patterns in the device to allow for an accurate prediction of the one or more device patterns.
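The joint determination of model parameters that minimizes the difference between model predictions and substrate measurements can be sketched as a least-squares fit. The two-parameter linear correction (scale, bias) below is a deliberate simplification for illustration; real OPC models have many physically meaningful parameters:

```python
# Illustrative sketch: jointly fit (scale, bias) minimizing the squared
# difference between predicted and measured gauge CDs, via the closed-form
# least-squares normal equations for a line.

def fit_scale_bias(predicted, measured):
    """Find (scale, bias) minimizing sum((scale*p + bias - m)^2)."""
    n = len(predicted)
    sp = sum(predicted)
    sm = sum(measured)
    spp = sum(p * p for p in predicted)
    spm = sum(p * m for p, m in zip(predicted, measured))
    scale = (n * spm - sp * sm) / (n * spp - sp * sp)
    bias = (sm - scale * sp) / n
    return scale, bias

# Synthetic gauges whose measurements are exactly 1.1*prediction + 2 nm;
# the fit should recover those parameters.
pred = [30.0, 45.0, 60.0, 90.0]
meas = [1.1 * p + 2.0 for p in pred]
scale, bias = fit_scale_bias(pred, meas)
```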
  • FIG. 6 is an illustrative flowchart of an exemplary process for calibrating a model, in accordance with various embodiments.
  • The process, in a non-limiting embodiment, may begin at step 602.
  • a design layout which can include gauge and optionally other test patterns, which can be in a standard format such as GDSII or OASIS, may be provided.
  • the design layout may be used to generate a patterning device layout, which can be in a standard format such as GDSII or OASIS and which may include OPC or other RET features.
  • two separate paths are taken - one for simulation and one for measurement.
  • a model may be provided at step 606.
  • the patterning device layout and the model from step 606 are used to create a simulated resist image.
  • the model provided at step 606, may correspond to a model of the patterning process for use in computational lithography, and the calibration process aims to make the model as accurate as possible so that computational lithography results are likewise accurate.
  • the simulated resist image may be used to determine predicted critical dimensions (CDs), etc.
  • the patterning device layout of step 604 may be used with, or to form, a physical mask (e.g., a reticle), which is then imaged onto a substrate at step 612.
  • the imaging uses the set-up of the patterning process (e.g., NA, focus, dose, illumination source, etc., for optical lithography).
  • measurements are then performed on the actual patterned substrate, which yields measured CDs, contours, etc.
  • a comparison is made between the measurements obtained at step 614 and the predictions from step 610. If the comparison determines that the predictions match the measurements within a predetermined error threshold, the model is considered to be successfully calibrated at step 618. If not, changes are made to the model of step 606, and steps 608, 610, and 616 are repeated until the predictions generated using the model of step 606 match the measurements within the predetermined error threshold.
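A minimal sketch of this calibrate-until-converged loop follows. The one-parameter "model" and the damped update rule are hypothetical stand-ins for a full computational-lithography model; only the loop structure (simulate, compare against the measurement, stop within the error threshold) reflects the process described above:

```python
# Hedged sketch: adjust a single model parameter until the simulated CD
# matches the measured CD within a predetermined error threshold,
# re-simulating on each pass.

def calibrate(simulate, measured_cd, threshold_nm, p0, max_iters=50):
    """Return (parameter, converged). `simulate` maps the parameter to a
    predicted CD; the loop stops once |prediction - measurement| falls
    within threshold_nm."""
    p = p0
    for _ in range(max_iters):
        error = simulate(p) - measured_cd
        if abs(error) <= threshold_nm:
            return p, True        # successfully calibrated
        p -= 0.5 * error          # damped correction to the parameter
    return p, False

# Hypothetical model: predicted CD is the parameter plus a fixed 3 nm offset,
# so the calibrated parameter should approach 45 for a 48 nm measurement.
param, ok = calibrate(lambda p: p + 3.0, measured_cd=48.0,
                      threshold_nm=0.01, p0=40.0)
```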
  • the model comprises an OPC model. While the description hereafter will focus on an OPC model as an embodiment, the model may be other than or in addition to an OPC model.
  • values of a geometric parameter are extracted from an image (e.g., an image generated using an electron beam such as a SEM image) of a formed pattern on a substrate for model calibration or for other purposes.
  • a gauge pattern can be used.
  • FIG. 7A is an illustrative diagram of an application of an SEM to a feature and the results thereof, in accordance with various embodiments.
  • a scan of a feature which may also be referred to as a structure, may be performed.
  • the processing of scanning a feature of a sample, for instance, is described in greater detail above with reference to FIGS. 3 and 4.
  • various scans are performed to determine information associated with the feature or features of the sample (e.g., a substrate). This information may be used to calibrate an optical proximity model (“OPM”); thereby assisting in predicting how a pattern on a mask will be imaged onto a substrate.
  • Each scan may be somewhat destructive.
  • Each pass of the electron beam (e.g., beam 91) over a sample to observe a feature thereon may cause some damage to the feature.
  • with each scan, more information about the physical properties of that feature can be obtained.
  • multiple scans are performed (e.g., 5-24), and while the first scan will generally reflect the feature’s original geometry best, the amount of information about the feature is limited with only a single scan.
  • a beam is incident on a portion of a feature.
  • the feature may correspond to a raised structure, such as a protrusion extending upwards (e.g., positive z-direction) from a sample.
  • the feature may correspond to a lowered structure, such as a trench (e.g., a depression in a sample).
  • the incident beam may widen the structure or shrink the structure with each scan.
  • a beam incident on a slope of a feature may cause a portion of that feature to erode.
  • Each scan, therefore, may cause more of the slope to erode. Thus, in order to know the description of the feature prior to the first scan, an understanding of the way in which the slope of the feature changes from scan to scan is needed. This information may then be employed to extrapolate back to the original geometric description of the feature prior to the scans being applied to the feature via, for example, the SEM.
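This extrapolation back to the pre-scan geometry can be sketched as a least-squares fit of measured CD against cumulative dose, evaluated at zero dose. The linear CD-versus-dose model is an assumption for illustration; real erosion behavior may be more complex:

```python
# Hedged sketch: fit a line cd = a*dose + b over the CDs measured in
# successive frames and return b, the CD estimate before any SEM damage.

def extrapolate_to_zero_dose(doses, cds):
    """Ordinary least-squares line fit; returns the zero-dose intercept."""
    n = len(doses)
    sx, sy = sum(doses), sum(cds)
    sxx = sum(d * d for d in doses)
    sxy = sum(d * c for d, c in zip(doses, cds))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return b

# Six frames at 11 e/nm2 each; the trench widens ~0.4 nm per frame,
# so the undamaged CD should extrapolate to 20 nm.
doses = [11 * k for k in range(1, 7)]          # cumulative dose per frame
cds = [20.0 + 0.4 * k for k in range(1, 7)]    # CD measured in each frame
original_cd = extrapolate_to_zero_dose(doses, cds)
```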
  • FIG. 7B is an illustrative diagram of a plurality of images of an SEM being applied to a feature in succession, in accordance with various embodiments.
  • FIG. 7B includes, in the illustrative embodiment, six images of six scans F1-F6 examining one feature on a sample.
  • Each scan of the feature may be employed by image forming module 86 of FIG. 4 to generate a scanned image (e.g., data representing the scanned image) of the feature.
  • Each scanned image, which may also be referred to herein as a “frame,” may represent the approximate shape of the feature as detected by the SEM when the feature was scanned by the SEM.
  • each frame may correspond to a certain amount of dose applied to the sample by the successive scans.
  • frame F1 may correspond to an image of a first scan, having a dose of 11 e/nm2 (e.g., 11 electrons per square nanometer of area).
  • Frame F2 may correspond to an image of a second scan, where another dose of 11 e/nm2 is applied to the sample by the scan.
  • a total dose of 22 e/nm2 is applied to the sample after the second scan.
  • Persons of ordinary skill in the art will recognize that any suitable dose may be applied, and the use of 11 e/nm2 is merely exemplary.
  • each successive scan may cause the feature to slightly change.
  • frame F1 and frame F6 may both include a representation of the feature; however, the scanned feature may be substantially wider (e.g., in the x-direction) in frame F6 as compared to frame F1.
  • the feature may correspond to a trench, for example. Therefore, each scan may erode a portion of a wall of the trench, thereby gradually widening the trench.
  • frame F6 may encompass more detail as to the shape/contour of the feature, the original shape/contour of the feature has been modified due to the various scans that have occurred.
  • FIG. 7C is an illustrative flowchart of an exemplary process for generating image data representing images of a feature, in accordance with various embodiments.
  • Process 750 may begin at step 702.
  • a detection signal representing scattering intensity associated with an SEM beam applied to a feature located on a sample may be received.
  • secondary charged particles 93 may be detected by secondary charged particle detector module 85 after scattering off of a feature of sample 90, in response to charged particle beam probe 92 incident on sample 90. Secondary charged particles 93 thereby generate a secondary charged particle detection signal 94, which may be received by image forming module 86.
  • an image of the feature (and/or additional features or additional regions of sample 90) may be generated.
  • image forming module 86 may generate a scanned image representing the detected feature.
  • a determination as to whether enough feature information has been obtained may occur. For example, a single scan being performed may produce a resolution of the feature that is too low for use in calibration and/or feature extraction. If, at step 706, it is determined that more information is needed, then process 750 may return to step 702, where additional scans are performed such that additional scanned images may be obtained. However, if at step 706, it is determined that enough feature information has been obtained, then process 750 may proceed to step 708.
  • first image data representing the one or more scanned images of the first feature may be generated.
  • image forming module 86 may generate image data representing the scanned images, and may store those images in memory (e.g., memory 303), or may provide the image data to one or more scanned images data stores.
  • FIG. 8A is an illustrative diagram of an exemplary pattern analyzed in accordance with various embodiments.
  • FIG. 8A illustrates an example image of a generally elliptical pattern 820 that is produced on a substrate from, for example, a nominally rectangular design layout 800 (e.g., as designed to be produced at the substrate). While the boundary of generally elliptical pattern 820 is depicted as a contour, it need not be a contour; rather, the boundary can be the pixel data that represents the edge of pattern 820. For instance, this latter scenario may correspond to a contour not (yet) having been extracted.
  • Further, the pattern in FIG. 8A may correspond to an elliptical shape that protrudes from the substrate, where an interior of pattern 820 is “higher” (extends in the positive z-direction) than points immediately outside the boundary.
  • pattern 820 need not be a protrusion but, alternatively, could be a trench type structure; in which case, the interior of pattern 820 is “below” (e.g., extends in the negative z-direction) the region immediately exterior of the boundary of pattern 820. If pattern 820 were a trench, for example, nominally rectangular design layout 800 may be smaller and generally in the interior of pattern 820.
  • gauges may be, in some embodiments, specified and evaluated.
  • gauges are the evaluation locations on the pattern to determine values of a geometric parameter such as CD, EP, etc.
  • the values of the gauges can be used for various purposes in design, control, and the like, of a patterning process, an apparatus of the patterning process or a tool used with design, control, etc., of a patterning process.
  • the values of gauges are used for calibration of, for example, an OPC model. Therefore, in that case, the calibration of the OPC model is effectively aiming to create a model that minimizes an error associated with the gauges. While an embodiment of the determination of gauge values for model calibration is described here specifically, it will be appreciated that the determination of gauge values can be used for various purposes.
  • an example gauge is illustrated as imaginary line 870 that is superimposed on the boundary (e.g., contour) of a shape of the pattern (e.g., feature) that is being measured.
  • Gauge 870 may be used to evaluate a parameter (e.g., CD) of the feature.
  • Gauge 870 is sometimes referred to as a cutline and thus may facilitate measurement of a distance across a selected “cut” on the pattern.
  • the cutline is typically aligned in the x- and/or y-direction; however, in some cases the cutline may be at a certain angle (e.g., 850 at angle θ).
  • Another example parameter may correspond to an evaluation point (“EP”) 860.
  • EP 860 in one embodiment, does not necessarily require another corresponding point on a line like a cutline.
  • An EP 860, for example, may normally be collected from a substrate pattern contour (i.e., the pattern image is processed to create a contour and then the edge position at the EP is extracted from the contour at the desired EP).
  • the gauges are positioned at specific spots in a pattern layout and essentially represent the points at the boundary of the pattern. Desirably, a number of gauges are selected to be representative of the shape of the pattern. However, the number of gauges may be limited by, for example, throughput concerns and/or diminishing returns. For instance, while more gauges will generally provide greater accuracy, the additional gauges may not provide much more accuracy relative to the higher cost in measurement time. Indeed, thousands of different measurements and/or shapes may be made for any given OPC model. There are a variety of shapes present on any substrate that may be measured. Ideally, all of the shapes should be measured well if they are to report values corresponding to what the actual OPC model would use as information corresponding to the gauge positions.
  • evaluation of the EP gauges typically involves extraction of a contour of the pattern from the image of the pattern (e.g., using techniques of contour extraction known in the art), and determining which contour is able to represent the substrate pattern.
  • contour extraction can introduce artifacts and/or errors since the algorithms to extract the contour may not perfectly determine the contour around the entirety of the shape of the pattern. Such artifacts and/or errors can impact a model’s accuracy.
  • the substrate pattern’s image quality should be high, and therefore more images (e.g., frames) of the feature can be captured and averaged together to obtain the higher quality image from which the contour is extracted.
  • the contour may typically be constructed with many more location points around the pattern boundary than what is required for EP sampling per pattern. However, even with those extra available location points, often interpolation is still needed to provide desired EP locations because the contour location points may not correspond to all the desired EP locations. A more detailed description of how to efficiently and effectively determine EP locations is described in commonly-assigned U.S. Patent Application No. 62/515,921, the disclosure of which is incorporated herein by reference in its entirety.
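The interpolation step mentioned above can be sketched as follows. The contour representation (a list of points ordered along the boundary) and the function name are illustrative assumptions, not the referenced application's method:

```python
# Hedged sketch: linearly interpolate between extracted contour points to
# obtain an edge position at a desired EP location that falls between
# available contour samples.

def interpolate_ep(contour, x_target):
    """contour: list of (x, y) points ordered along the boundary.
    Return y at x_target by linear interpolation on the enclosing
    segment, or None if no segment spans x_target."""
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        if min(x0, x1) <= x_target <= max(x0, x1) and x0 != x1:
            t = (x_target - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return None  # desired EP lies outside the available contour points

# Contour sampled every 2 nm; the desired EP at x = 3 is not a sample point,
# so its edge position is interpolated from the neighboring points.
edge_y = interpolate_ep([(0, 10.0), (2, 10.4), (4, 10.8)], x_target=3)
```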
  • FIGS. 8B and 8C are illustrative diagrams of exemplary images of a feature having gauge lines applied thereto for determining a distance between edge points along the gauge lines prior to and after application of an SEM, respectively, in accordance with various embodiments.
  • an example contour 802 is described representing a contour of a scanned image, such as one of frames F1-F6 of FIG. 5B.
  • contour 802 is shown as being substantially elliptical, persons of ordinary skill in the art will recognize that this is merely exemplary, and the particular contour used may depend on the particular feature under examination.
  • a number of imaginary lines or gauges 804a-804e may be superimposed over contour 802.
  • gauges 804a-e may be superimposed at various intervals along the x-direction of contour 802.
  • a spacing between each gauge may be specified so as to obtain a maximum number of gauges for contour 802. For example, as a distance between gauges 804a-e, collectively gauges 804, approaches zero, the number of gauges 804 may approach infinity.
  • intersection points 806a and 806b of each gauge 804 and contour 802 may be determined. While only two intersection points are shown within FIG. 8B, different features may produce different contours, and thus different numbers of intersection points may be obtained.
  • a position of intersection points 806a and 806b may be recorded as an approximate location of the boundary of the feature within a particular frame.
  • the position of the intersection points may, in some embodiments, correspond to a physical location - such as a physical location about the sample being scanned, however additionally or alternatively, the position may correspond to a pixel location within the particular frame.
  • measures reflective of the distance between one intersection point and another intersection point along a same gauge may be determined.
  • a measure indicating a distance between the intersection points may be determined.
  • in this notation, the gauge and frame terms have been dropped for simplicity.
  • the measures, therefore, for each gauge of gauges 804a-e may be determined to give an approximate shape of contour 802 at all positions for a given frame.
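The gauge-and-intersection procedure above can be sketched for an idealized elliptical contour (a hypothetical stand-in for contour 802): each gauge is a vertical line at a fixed x, and the measure is the distance between its two intersection points with the contour. The function name and ellipse parameterization are illustrative assumptions:

```python
import math

def gauge_measures(a, b, gauge_xs):
    """For an elliptical contour x^2/a^2 + y^2/b^2 = 1, intersect each
    vertical gauge line x = g with the contour and return, per gauge,
    the distance between the two intersection points (cf. 806a/806b)."""
    measures = {}
    for g in gauge_xs:
        t = 1.0 - (g / a) ** 2
        if t < 0:            # gauge misses the contour entirely
            measures[g] = None
            continue
        y = b * math.sqrt(t)        # intersections at (g, +y) and (g, -y)
        measures[g] = 2.0 * y       # measure = distance between them
    return measures

# Five gauges across a 2a-wide, 2b-tall elliptical contour (cf. 804a-804e)
m = gauge_measures(a=10.0, b=4.0, gauge_xs=[-8, -4, 0, 4, 8])
```

Repeating this for each frame's contour yields the per-gauge, per-frame measures M used below.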
  • FIG. 8C illustrates an example embodiment of a subsequent frame.
  • FIG. 8B represented a first frame (e.g., after a first feature is scanned)
  • FIG. 8C would correspond to a second frame (e.g., after the first feature has been scanned again).
  • the dimensions of the feature may have changed due to the damage incurred by the walls of the feature from the SEM.
  • contour 852 is described representing a contour of a subsequent scanned image (e.g., a subsequent one of frames F1-F6).
  • contour 802 of FIG. 8B is shown here in dashed lines to further exemplify the change in apparent shape of the feature in response to application of an SEM.
  • a similar process as that described by FIG. 8B may be employed here.
  • a plurality of gauges 804a-e may be superimposed on contour 852.
  • a same number of gauges used for a previous scanned image may be used in a current scanned image.
  • a same relative position (e.g., relative along the x- and y-directions) may be employed.
  • intersection points 856a and 856b of each gauge 804 and contour 852 may be determined.
  • a position of intersection points 856a and 856b may be recorded as an approximate location of the boundary of the feature within a particular frame.
  • for example, an intersection point may be denoted IP = IP(X1, Y1, G-804a, F2), indicating coordinates (X1, Y1) for gauge 804a in frame F2.
  • the same process may be repeated for each gauge, so as to obtain data representing the various intersection points of a given frame. This may allow for an approximation of a size of the feature to be determined, where the size of the feature corresponds to the size of the feature after the SEM is applied to the sample for that particular frame.
  • measures reflective of the distance between one intersection point and another intersection point along a same gauge may be determined for each subsequent frame.
  • a measure indicating a distance between the intersection points may be determined.
  • the measures, therefore, for each gauge of gauges 804a-e may be determined to give an approximate shape of contour 852 at all positions for a given frame.
  • the rate of change may be equivalent to ΔM/ΔI (the change in measure M per image I).
  • the rate of change may be represented by a function.
  • the rate of change may be represented by an exponential function, however persons of ordinary skill in the art will recognize that additional and/or alternative functions may be used.
  • an example function that may describe the CD of a feature from frame to frame may be represented by Equation 1 :
  • CD_n = CD_{n−1} + A·e^(−γ·n) − A    (Equation 1)
  • CD_n corresponds to the CD measure of the n-th frame (or dose)
  • CD_{n−1} corresponds to the CD measure of the (n−1)-th frame
  • A corresponds to an amount of erosion caused by the SEM at the n-th frame
  • γ corresponds to the rate of change (e.g., inverse of the time constant)
  • n corresponds to a frame number.
  • Equation 1 describes CD for the n-th frame
  • parameters of a feature may be determined using a similar formula to that of Equation 1.
  • a parameter, such as EP, may be determined using a formula substantially similar to Equation 1, albeit having different values for A and/or γ.
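Equation 1 can be exercised directly as a recursion. This is a minimal sketch; the starting value CD_0 and the values of A and γ are illustrative assumptions, not from the disclosure:

```python
import math

def cd_series(cd0, A, gamma, frames):
    """Iterate Equation 1: CD_n = CD_{n-1} + A*exp(-gamma*n) - A,
    starting from the pre-scan value CD_0."""
    cds = [cd0]
    for n in range(1, frames + 1):
        cds.append(cds[-1] + A * math.exp(-gamma * n) - A)
    return cds

# Hypothetical parameters: 6 frames of SEM exposure
series = cd_series(cd0=50.0, A=0.8, gamma=0.5, frames=6)
```

Note that the per-frame change implied by Equation 1 is A·(e^(−γ·n) − 1), which settles toward a steady loss of A per frame as n grows — consistent with erosion accumulating with each scan.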
  • FIG. 9 is an illustrative flowchart of an exemplary process for determining original geometric parameters of a feature, in accordance with various embodiments.
  • Process 900 may, in a non-limiting embodiment, begin at step 902.
  • a first parameter for a first gauge of each frame may be obtained.
  • the first parameter may correspond to a first measure, such as CD or EP.
  • image data representing a plurality of images of an SEM applied to a first feature of a sample may be obtained.
  • Each image may include a representation of a feature (or features) located on a substrate being scanned by the SEM.
  • multiple successive scans by the SEM may be applied to the substrate, each causing a portion of the feature(s) to erode away.
  • each scan also provides information regarding a shape/size/description of the feature.
  • a parameter associated with each of a plurality of gauge positions along the feature may be determined, in some embodiments, for each image. For instance, a contour (e.g., contour 802) describing the feature may be generated. A number of gauges (e.g., gauges 804a-e) may be superimposed on the contour. Intersection points of the gauges and the contour may be determined. As an illustrative example, a measure between a first intersection point 806a and a second intersection point 806b may be determined for each frame of the plurality of scanned images.
  • a difference between the first parameter for each frame may be determined. For example, if gauge 804a of frame F1 has a measure M1, and gauge 804a of frame F2 has a measure M2, then the difference from frame F1 to frame F2 would be |M2 − M1|. Similarly, the difference between measure M_n of frame Fn and measure M_p of frame Fp may be |M_p − M_n|. Therefore, the differences between the measures of a particular gauge across each frame may be determined. In other words, the parameter associated with each gauge position for each image may be determined. As a result, data indicating a parameter (e.g., CD, EP) for each gauge along the contour, for each SEM image, may be determined.
  • a rate of change of the first parameter may be determined.
  • the rate of change may indicate how much the measure increases or decreases from one frame to the next (e.g., subsequent) frame.
  • each frame will involve a constant dose. Where dose is constant, frame count, time, and dose may be used essentially interchangeably in applying the algorithm.
  • a function representing the rate of change of the first parameter may be determined.
  • a function may be fit to the rate of change.
  • a model may be applied to the rate of change, and a least-squares fit may be employed to obtain an accurate function representing the rate of change.
  • the function may be exponential, however additional and/or alternative functions may also be used.
  • a determination may be made as to whether more gauges remain to be analyzed. For example, a plurality of gauges may be superimposed on a contour (e.g., gauges 804a-e). If, at step 910, it is determined that additional gauges are available, then process 900 may return to step 902 such that steps 902-908 may repeat for a next gauge. However, if at step 910 it is determined that no more gauges are available, then process 900 may proceed to step 912.
  • an original contour for a pre-scanned feature may be determined based on the function.
  • the function may represent how a feature changes after each application of an SEM. Using this function, an extrapolation of a shape/size of the first feature prior to any beams being applied thereto may be performed.
  • the original contour representing the pre-scanned feature may then be used for OPC calibration and/or OPC modeling.
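Steps 902-912 can be sketched end to end for a single gauge: fit Equation 1's parameters (A, γ) to the observed frame-to-frame differences by a least-squares grid search, then run the recursion backwards to extrapolate the pre-scan value. The synthetic frame data, grid ranges, and function name are assumptions for illustration:

```python
import math

def fit_and_extrapolate(observed):
    """observed[i] is the measured CD of frame i+1 (frame 1 = first scan).
    Grid-search (A, gamma) minimizing squared error against the model
    frame-to-frame change dCD_n = A*exp(-gamma*n) - A, then invert
    Equation 1 at n=1 to recover the pre-scan value CD_0."""
    diffs = [observed[i] - observed[i - 1] for i in range(1, len(observed))]
    best = None
    for A in (a / 100 for a in range(1, 301)):           # A in (0, 3]
        for gamma in (g / 100 for g in range(1, 201)):   # gamma in (0, 2]
            err = sum((d - (A * math.exp(-gamma * n) - A)) ** 2
                      for n, d in zip(range(2, len(observed) + 1), diffs))
            if best is None or err < best[0]:
                best = (err, A, gamma)
    _, A, gamma = best
    cd0 = observed[0] - (A * math.exp(-gamma) - A)  # invert Equation 1 at n=1
    return cd0, A, gamma

# Synthetic per-frame measures generated from Equation 1 with CD_0 = 50.0
frames = [49.68522, 49.17953, 48.55803, 47.86630, 47.13197, 46.37180]
cd0, A, gamma = fit_and_extrapolate(frames)  # cd0 comes out near 50.0
```

A coarse grid search is used here only to keep the sketch dependency-free; in practice a nonlinear least-squares solver would be used, as the text's mention of a least-squares fit suggests.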
  • FIG. 10 is an illustrative diagram of an exemplary part of a pattern analyzed based on intensity values, in accordance with various embodiments.
  • the measured boundary 1020 of the pattern is shown along with the applicable portion of the target polygon (which is shown for reference in this example and need not be "drawn" on the image).
  • the spatial bearing information used is the location of EP 1040 (i.e., the location of an EP on the simulated contour) along with a spatial bearing angle, which is represented in this example by imaginary line 1050. While only a single location of EP 1040 is marked here for convenience, it is apparent that a plurality of other locations of EPs 1040 at various points along the boundary 1020 are also present. Further, the lines do not precisely pass through the EPs 1040 in this example merely so that the EPs 1040 can be seen. In practice, the spatial bearing angles will pass through the respective EPs 1040.
  • the intercept of the imaginary line 1050 with the boundary of the measured pattern image 1020 identifies the location of EP 1060 on the pattern image 1020 at that intersection.
  • the pixel data along the imaginary line 1050 can be processed to identify the location of the edge / boundary of the pattern, which was determined in this case as being the location where EP 1060 is marked.
  • This data processing can be done using an algorithm such as, for example, an analysis of the gradient of the pixel data and then the location of the EP 1060 being identified where the gradient data meets or crosses some threshold (e.g., greater than or equal to a threshold) associated with that gradient data.
  • this analysis can be performed with respect to all the EPs 1040 (using their respective spatial bearing information) along the boundary of the pattern to obtain the values of the locations of EPs 1060 along the perimeter of the pattern 1020.
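The gradient-threshold analysis of the pixel data along imaginary line 1050 can be sketched as follows; the profile values, threshold, and function name are hypothetical:

```python
def locate_edge(pixels, threshold):
    """Scan pixel intensities sampled along an imaginary line and return
    the first index whose gradient magnitude meets or exceeds the
    threshold (the EP location), or None if no edge is found."""
    for i in range(1, len(pixels) - 1):
        # central-difference gradient of the 1-D intensity profile
        gradient = (pixels[i + 1] - pixels[i - 1]) / 2.0
        if abs(gradient) >= threshold:
            return i
    return None

# Hypothetical profile: dark interior, a bright edge ramp, bright exterior
profile = [10, 10, 11, 12, 60, 120, 125, 124]
edge_index = locate_edge(profile, threshold=20.0)  # → 3
```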
  • the locations of EPs 1040, the imaginary lines 1050 and/or the locations of EPs 1060 need not be "drawn" on the image. Rather, the image data can be mathematically processed to achieve the same effect.
  • a distance between the location of an EP 1040 and its associated location of EP 1060 can be determined.
  • the distance is the Euclidean distance between the locations of EP 1040 and EP 1060.
  • this distance represents an edge placement error (EPE).
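A minimal sketch of the EPE computation, assuming EP 1040 and EP 1060 are each given as (x, y) coordinates (the coordinates below are hypothetical):

```python
import math

def edge_placement_error(ep_sim, ep_meas):
    """Euclidean distance between an EP on the simulated contour (1040)
    and its associated EP on the measured pattern image (1060)."""
    return math.hypot(ep_meas[0] - ep_sim[0], ep_meas[1] - ep_sim[1])

epe = edge_placement_error((12.0, 8.0), (15.0, 4.0))  # → 5.0
```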
  • an average determination of an evaluation point may be employed.
  • FIG. 11 is an illustrative diagram of an exemplary technique for determining a parameter, such as an edge, of a feature using intensity information, in accordance with various embodiments.
  • when a beam, such as an SEM beam, is applied to a surface, particles will scatter off that surface.
  • depending on the topology of the surface, the scattering angle will vary.
  • the detection unit, such as secondary charged particle detection module 85 of FIG. 5, may detect the intensity of the scattered particles (e.g., secondary charged particles 93).
  • the intensity variation, therefore, is related to the scattering angle, and thus to the surface.
  • a first portion 1112 of a feature may be substantially horizontal.
  • scattering particles may be detected by a particle detector with a constant intensity value. That intensity, when processed by an image forming component (e.g., image forming module 86 of FIG. 4), may be represented by a first coloring 1102.
  • first coloring 1102 may be a first gray color, or a first gray intensity.
  • the intensity of the scattered particles may vary, indicative of the change from a constant surface associated with first portion 1112, to a sloping surface of second portion 1114. Therefore, when processed, that may be represented by a second coloring 1104.
  • the second coloring, in some embodiments, may be different than first coloring 1102. For example, if the intensity is less, then second coloring 1104 may be "lighter" than first coloring 1102. Similarly, when the incident beam hits portion 1116 of the feature, which may also be substantially constant, the scattering intensity may again change. In some embodiments, when processed, the intensity of the scattering particles off of portion 1116 may be represented by third coloring 1106. Third coloring 1106 may be substantially similar to first coloring 1102, indicating that they are both associated with a similar surface type (e.g., flat/constant).
  • a determination of where first portion 1112 becomes second portion 1114, and similarly where second portion 1114 becomes third portion 1116, may be made based on the changes in coloring associated with those portions. For example, a threshold may be used to determine at what point the coloring changes from first coloring 1102 to second coloring 1104. When the intensity (e.g., scattering intensity) of first coloring 1102 changes by more than (e.g., by greater than or equal to) the threshold, then that may indicate that an edge of second portion 1114 has been identified. Similarly, when the intensity of second coloring 1104 changes by more than the threshold, then this may indicate that another edge of second portion 1114 has been identified.
  • These edge positions may be employed in a similar manner as the gauges described above with reference to FIGS. 8-10 to determine measures for the features, and therefore functions representing rates of changes of the measures at various positions along the contour.
  • a first derivative, as well as a second derivative, of the rate of change of a scattering intensity may be determined.
  • the first derivative of the scattering intensity changing from positive to negative, or vice versa may indicate a location of an edge of the feature.
  • the second derivative may then be employed to determine whether the location where the first derivative changes from positive to negative, or vice versa, corresponds to a local maximum or a local minimum.
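The first/second-derivative test can be sketched with discrete differences; the intensity profile and function name below are hypothetical:

```python
def intensity_extrema(intensity):
    """Find locations where the first derivative of the scattering
    intensity changes sign, then use the second derivative to label each
    as a local maximum or minimum (candidate edge locations)."""
    d1 = [intensity[i + 1] - intensity[i] for i in range(len(intensity) - 1)]
    extrema = []
    for i in range(1, len(d1)):
        if d1[i - 1] * d1[i] < 0:  # first derivative changes sign at i
            # discrete second derivative at i: negative => local maximum
            d2 = intensity[i + 1] - 2 * intensity[i] + intensity[i - 1]
            extrema.append((i, "max" if d2 < 0 else "min"))
    return extrema

# Hypothetical profile with a bright peak and a dark trough
profile = [10, 30, 80, 30, 10, 5, 12, 20]
extrema = intensity_extrema(profile)  # → [(2, 'max'), (5, 'min')]
```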
  • FIG. 12 is an illustrative diagram of various rates of changes along a contour describing a feature, in accordance with various embodiments.
  • the rates of changes R1-R12 may vary all along the contour.
  • each rate of change may be used to determine a function representing that rate of change.
  • a function similar to that of Equation 1 may be employed to describe each rate of change R1-R12.
  • the rate of change may indicate a change of a position of a point along the contour from frame to frame.
  • the functions may be combined to determine a contour function that represents a rate of change of the entire contour from frame to frame.
  • the contour function not only encompasses information about how a particular point along the contour changes, but how the entire contour changes from frame to frame.
  • an original shape/size of the feature may be extrapolated.
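One way to sketch the extrapolation of the original shape/size: assume (purely as an illustration, not from the disclosure) that every contour point erodes radially toward the contour centroid with the same Equation-1 parameters, so the erosion accumulated over frames 1..n can be undone per point:

```python
import math

def extrapolate_contour(points, A, gamma, n):
    """Given contour points observed at frame n, undo the accumulated
    per-frame erosion (Equation 1 applied radially, with assumed shared
    parameters A and gamma) to estimate the pre-scan contour."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    # total radial erosion accumulated over frames 1..n
    total = sum(A - A * math.exp(-gamma * k) for k in range(1, n + 1))
    restored = []
    for x, y in points:
        r = math.hypot(x - cx, y - cy)
        if r == 0:  # point coincides with centroid; nothing to push out
            restored.append((x, y))
            continue
        scale = (r + total) / r  # push the point back outward radially
        restored.append((cx + (x - cx) * scale, cy + (y - cy) * scale))
    return restored

# Hypothetical contour observed after 4 frames of SEM exposure
observed = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
original = extrapolate_contour(observed, A=0.05, gamma=0.5, n=4)
```

In practice each point would carry its own fitted (A, γ) from its rate of change R1-R12; a single shared pair is used here only to keep the sketch short.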
  • FIG. 13 is an illustrative diagram of an exemplary process for extrapolating original geometric parameters of a feature based on a determined contour function, in accordance with various embodiments.
  • Process 1300 may, in some embodiments, begin at step 1302.
  • a set of functions representing rate of change at each point along a contour may be determined.
  • the contour may represent a scanned image of a feature.
  • the set of functions may be determined using one or more techniques such as, but not limited to, gauges, coloring intensity, and the like.
  • the functions may correspond to various points along the contour such that a rate of change of the entire contour is available.
  • the rate of change, for instance, may indicate how the contour changes in shape/size from one frame to another as an SEM is applied to the feature.
  • the rate of change of a single point along the contour may be represented by an exponential function.
  • a contour function representing a shape of the feature may be determined based on the set of functions.
  • original feature parameters (e.g., CD or EP) may be extrapolated based on the contour function.
  • the contour function may reflect how the shape/size of the feature changes with each application of the SEM. Therefore, using this information, the original shape/size of the feature prior to the SEM being applied for the very first scan may be obtained.
  • FIG. 14 schematically depicts a computer system that may implement embodiments of this disclosure.
  • a computer system 1400 is shown.
  • the computer system 1400 includes a bus 1402 or other communication mechanism for communicating information, and a processor 1404 (or multiple processors) coupled with bus 1402 for processing information.
  • Computer system 1400 also includes a main memory 1406, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 1402 for storing information and instructions to be executed by processor 1404.
  • Main memory 1406 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1404.
  • Computer system 1400 further includes a read only memory (ROM) 1408 or other static storage device coupled to bus 1402 for storing static information and instructions for processor 1404.
  • a storage device 1410 such as a magnetic disk or optical disk, is provided and coupled to bus 1402 for storing information and instructions.
  • Computer system 1400 may be coupled via bus 1402 to a display 1412, such as a cathode ray tube (CRT) or flat panel or touch panel display for displaying information to a computer user.
  • An input device 1414 is coupled to bus 1402 for communicating information and command selections to processor 1404.
  • Another type of user input device is cursor control 1416, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to processor 1404 and for controlling cursor movement on display 1412.
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • a touch panel (screen) display may also be used as an input device.
  • the computer system 1400 may be suitable to function as a processing unit herein in response to processor 1404 executing one or more sequences of one or more instructions contained in main memory 1406. Such instructions may be read into main memory 1406 from another computer-readable medium, such as storage device 1410. Execution of the sequences of instructions contained in main memory 1406 causes processor 1404 to perform a process described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in main memory 1406. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • Non-volatile media include, for example, optical or magnetic disks, such as storage device 1410.
  • Volatile media include dynamic memory, such as main memory 1406.
  • Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise bus 1402. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications.
  • Computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
  • Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 1404 for execution.
  • the instructions may initially be borne on a magnetic disk of a remote computer.
  • the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
  • a modem local to computer system 1400 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal.
  • An infrared detector coupled to bus 1402 can receive the data carried in the infrared signal and place the data on bus 1402.
  • Bus 1402 carries the data to main memory 1406, from which processor 1404 retrieves and executes the instructions.
  • Computer system 1400 may also include a communication interface 1418 coupled to bus 1402.
  • Communication interface 1418 provides a two-way data communication coupling to a network link 1420 that is connected to a local network 1422.
  • communication interface 1418 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line.
  • communication interface 1418 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • communication interface 1418 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • Network link 1420 typically provides data communication through one or more networks to other data devices.
  • network link 1420 may provide a connection through local network 1422 to a host computer 1424 or to data equipment operated by an Internet Service Provider (ISP) 1426.
  • ISP 1426 in turn provides data communication services through the worldwide packet data communication network, now commonly referred to as the "Internet" 1428.
  • Internet 1428 uses electrical, electromagnetic or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link 1420 and through communication interface 1418, which carry the digital data to and from computer system 1400, are exemplary forms of carrier waves transporting the information.
  • Computer system 1400 can send messages and receive data, including program code, through the network(s), network link 1420, and communication interface 1418.
  • a server 1430 might transmit a requested code for an application program through Internet 1428, ISP 1426, local network 1422 and communication interface 1418.
  • one such downloaded application provides for a method as disclosed herein, for example.
  • the received code may be executed by processor 1404 as it is received, and/or stored in storage device 1410, or other non-volatile storage for later execution. In this manner, computer system 1400 may obtain application code in the form of a carrier wave.
  • a method comprising: obtaining image data representing a plurality of scanning electron microscope ("SEM") images, each SEM image comprising a representation of a feature, and each SEM image being associated with a respective scan of the feature by an SEM; determining, for each SEM image, a parameter associated with each of a plurality of gauge positions along the feature; determining a change in the parameter from each SEM image to a subsequent SEM image of the plurality of SEM images; determining, for each gauge position, a rate of the change for the parameter based on a difference in a location of the parameter between at least two of the plurality of SEM images; and generating feature data representing a reconstruction of the feature prior to the SEM being applied to the feature by extrapolating, for the parameter associated with each gauge position, an original location of the parameter associated with each gauge position based on the rate of change of the parameter.
  • determining the parameter comprises: determining one of an edge point (“EP”) and a critical dimension (“CD”) associated with each of the plurality of gauge positions.
  • determining the edge point comprises: applying a threshold scattering intensity to the scattering intensity values of each SEM image; and selecting the edge point by determining a location along each gauge position comprising a scattering intensity value that is greater than or equal to the threshold scattering intensity.
  • each SEM image comprises a greyscale image
  • applying the threshold scattering intensity comprises: applying a greyscale threshold, wherein the edge is selected based on a change of a greyscale intensity along a gauge position being greater than or equal to the greyscale threshold.
  • determining the rate of the change comprises: determining a function reflective of a change of the location of the parameter from a first SEM image of the plurality of SEM images to a second SEM image of the plurality of SEM images.
  • determining the rate of change further comprises: determining displacements of the edge point for each gauge position for each of the plurality of SEM images; and determining, based on the displacements, the rate of the change.
  • a computing device comprising: memory; and at least one processor operable to: obtain image data representing a plurality of scanning electron microscope ("SEM") images, each SEM image comprising a representation of a feature, and each SEM image being associated with a respective scan of the feature by an SEM; determine, for each SEM image, a parameter associated with each of a plurality of gauge positions along the feature; determine a change in the parameter from each SEM image to a subsequent SEM image of the plurality of SEM images; determine, for each gauge position, a rate of the change for the parameter based on a difference in a location of the parameter between at least two of the plurality of SEM images; and generate feature data representing a reconstruction of the feature prior to the SEM being applied to the feature by extrapolating, for the parameter associated with each gauge position, an original location of the parameter associated with each gauge position based on the rate of change of the parameter.
  • the representation comprises scattering intensity values
  • the edge point being determined comprises the at least one processor being operable to: apply a threshold scattering intensity to the scattering intensity values of each SEM image; and select the edge point by determining a location along each gauge position comprising a scattering intensity value that is greater than or equal to the threshold scattering intensity.
  • each SEM image comprises a greyscale image
  • the threshold scattering intensity being applied comprises the at least one processor being operable to: apply a greyscale threshold, wherein the edge is selected based on a change of a greyscale intensity along a gauge position being greater than or equal to the greyscale threshold.
  • the rate of the change being determined comprises the at least one processor being operable to: determine a function reflective of a change of the location of the parameter from a first SEM image of the plurality of SEM images to a second SEM image of the plurality of SEM images.
  • the at least one processor is further operable to: generate a contour function representing characteristics of a pattern representative of the feature, the contour function comprising the rate of the change for each gauge position such that an entire mapping of the feature is available, and wherein generating the feature data comprises generating the feature data based further on the contour function.
  • a non-transitory computer readable medium comprising instructions that, when executed by at least one processor of a machine, cause the machine to: obtain image data representing a plurality of scanning electron microscope ("SEM") images, each SEM image comprising a representation of a feature, and each SEM image being associated with a respective scan of the feature by an SEM; determine, for each SEM image, a parameter associated with each of a plurality of gauge positions along the feature; determine a change in the parameter from each SEM image to a subsequent SEM image of the plurality of SEM images; determine, for each gauge position, a rate of the change for the parameter based on a difference in a location of the parameter between at least two of the plurality of SEM images; and generate feature data representing a reconstruction of the feature prior to the SEM being applied to the feature by extrapolating, for the parameter associated with each gauge position, an original location of the parameter associated with each gauge position based on the rate of change of the parameter.
  • An embodiment may include a computer program containing one or more sequences of machine-readable instructions that enable practice of a method as described herein.
  • This computer program may be included, for example, with or within the apparatus of any of Figures 1-13.
  • an embodiment can be implemented by the provision of updated computer program products for causing a processor of the apparatus to perform a method as described herein.
  • An embodiment of the invention may take the form of a computer program containing one or more sequences of machine-readable instructions to cause execution of a method as disclosed herein, or a data storage medium (e.g., semiconductor memory, magnetic or optical disk) having such a computer program stored therein.
  • the machine-readable instructions may be embodied in two or more computer programs.
  • the two or more computer programs may be stored on one or more different memories and/or data storage media.
  • Any controllers described herein may each or in combination be operable when the one or more computer programs are read by one or more computer processors located within at least one component of the lithographic apparatus.
  • the controllers may each or in combination have any suitable configuration for receiving, processing, and sending signals.
  • One or more processors are configured to communicate with the at least one of the controllers.
  • each controller may include one or more processors for executing the computer programs that include machine-readable instructions for the methods described above.
  • the controllers may include a data storage medium for storing such computer programs, and/or hardware to receive such a medium. Accordingly, the controller(s) may operate according to the machine-readable instructions of one or more computer programs.
  • a topography in a patterning device defines the pattern created on a substrate.
  • the topography of the patterning device may be pressed into a layer of resist supplied to the substrate whereupon the resist is cured by applying electromagnetic radiation, heat, pressure or a combination thereof.
  • after the resist is cured, the patterning device is moved out of the resist, leaving a pattern in it.
  • the substrate referred to herein may be processed, before or after exposure, in, for example, a track (a tool that typically applies a layer of resist to a substrate and develops the exposed resist), a metrology tool and/or an inspection tool. Where applicable, the disclosure herein may be applied to such and other substrate processing tools. Further, the substrate may be processed more than once, for example in order to create a multi-layer IC, so that the term substrate used herein may also refer to a substrate that already contains multiple processed layers.
  • UV radiation (e.g., having a wavelength of or about 365, 355, 248, 193, 157 or 126 nm)
  • EUV radiation (e.g., having a wavelength in the range of 5-20 nm)
  • particle beams such as ion beams or electron beams.
  • projection optics should be broadly interpreted as encompassing various types of optical systems, including refractive optics, reflective optics, apertures and catadioptric optics, for example.
  • the term “projection optics” may also include components operating according to any of these design types for directing, shaping or controlling the projection beam of radiation, collectively or singularly.
  • the term “projection optics” may include any optical component in the lithographic projection apparatus, no matter where the optical component is located on an optical path of the lithographic projection apparatus.
  • Projection optics may include optical components for shaping, adjusting and/or projecting radiation from the source before the radiation passes the patterning device, and/or optical components for shaping, adjusting and/or projecting the radiation after the radiation passes the patterning device.
  • the projection optics generally exclude the source and the patterning device.
  • a figure of merit of the system or process can be represented as a cost function.
  • the optimization process boils down to a process of finding a set of parameters (design variables) of the system or process that optimizes (e.g., minimizes or maximizes) the cost function.
  • the cost function can have any suitable form depending on the goal of the optimization.
  • the cost function can be the weighted root mean square (RMS) of deviations of certain characteristics (evaluation points) of the system or process with respect to the intended values (e.g., ideal values) of these characteristics; the cost function can also be the maximum of these deviations (i.e., worst deviation).
  • the term “evaluation points” should be interpreted broadly to include any characteristics of the system or process.
  • the design variables of the system or process can be confined to finite ranges and/or be interdependent due to practicalities of implementations of the system or process.
  • the constraints are often associated with physical properties and characteristics of the hardware such as tunable ranges, and/or patterning device manufacturability design rules, and the evaluation points can include physical points on a resist image or pattern on a substrate, as well as non-physical characteristics such as dose and focus.
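As an illustration of such a cost function, here is a small sketch (with hypothetical helper names, not the patent's implementation) of both forms mentioned above: the weighted RMS of deviations of evaluation points from their intended values, and the worst (maximum) deviation.

```python
import numpy as np

def weighted_rms_cost(values, targets, weights):
    """Weighted RMS of deviations of evaluation points from intended values."""
    values, targets, weights = map(np.asarray, (values, targets, weights))
    deviations = values - targets
    return float(np.sqrt(np.sum(weights * deviations ** 2) / np.sum(weights)))

def worst_deviation_cost(values, targets):
    """Alternative cost: the maximum absolute deviation (worst case)."""
    return float(np.max(np.abs(np.asarray(values) - np.asarray(targets))))
```

Either function can serve as the figure of merit that the optimization then minimizes over the design variables.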
  • the terms “optimizing” and “optimization” as used herein refer to or mean adjusting a patterning process apparatus, one or more steps of a patterning process, etc., such that results and/or processes of patterning have more desirable characteristics, such as higher accuracy of transfer of a design layout on a substrate, a larger process window, etc.
  • the terms “optimizing” and “optimization” as used herein refer to or mean a process that identifies one or more values for one or more parameters that provide an improvement, e.g., a local optimum, in at least one relevant metric, compared to an initial set of one or more values for those one or more parameters. “Optimum” and other related terms should be construed accordingly. In an embodiment, optimization steps can be applied iteratively to provide further improvements in one or more metrics.
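The iterative-optimization notion above can be illustrated with a deliberately simple sketch: a bounded coordinate-descent loop (a hypothetical stand-in, not the patent's optimizer) that repeatedly nudges design variables within their tunable ranges and keeps only moves that lower the cost function.

```python
def iterative_optimize(cost, x0, bounds, step=0.1, iters=100):
    """Minimize `cost` over design variables confined to finite ranges.

    cost: callable taking a list of variable values and returning a number.
    x0: initial variable values; bounds: list of (lo, hi) per variable.
    """
    x = list(x0)
    best = cost(x)
    for _ in range(iters):
        improved = False
        for i, (lo, hi) in enumerate(bounds):
            for delta in (step, -step):
                trial = x[:]
                # Clamp each move to the variable's tunable range (constraint).
                trial[i] = min(hi, max(lo, trial[i] + delta))
                c = cost(trial)
                if c < best:           # keep only improving moves
                    x, best, improved = trial, c, True
        if not improved:
            step /= 2                  # refine once no coarse move helps
            if step < 1e-6:
                break
    return x, best
```

Each pass through the loop is one "optimization step"; applying the steps iteratively drives the cost toward a local optimum within the constrained ranges.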
  • illustrated components are depicted as discrete functional blocks, but embodiments are not limited to systems in which the functionality described herein is organized as illustrated.
  • the functionality provided by each of the components may be provided by software or hardware modules that are differently organized than is presently depicted, for example such software or hardware may be intermingled, conjoined, replicated, broken up, distributed (e.g., within a data center or geographically), or otherwise differently organized.
  • the functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine-readable medium.
  • third party content delivery networks may host some or all of the information conveyed over networks, in which case, to the extent information (e.g., content) is said to be supplied or otherwise provided, the information may be provided by sending instructions to retrieve that information from a content delivery network.
  • the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must).
  • the words “include”, “including”, and “includes” and the like mean including, but not limited to.
  • the singular forms “a,” “an,” and “the” include plural referents unless the content explicitly indicates otherwise.
  • reference to “an” element or “a” element includes a combination of two or more elements, notwithstanding use of other terms and phrases for one or more elements, such as “one or more.”
  • the term “or” is, unless indicated otherwise, non-exclusive, i.e., encompassing both “and” and “or.”
  • Terms describing conditional relationships, e.g., “in response to X, Y,” “upon X, Y,” “if X, Y,” “when X, Y,” and the like, encompass causal relationships in which the antecedent is a necessary causal condition, the antecedent is a sufficient causal condition, or the antecedent is a contributory causal condition of the consequent, e.g., “state X occurs upon condition Y obtaining” is generic to “X occurs solely upon Y” and “X occurs upon Y and Z.”
  • Such conditional relationships are not limited to consequences that instantly follow the antecedent obtaining, as some consequences may be delayed, and in conditional statements, antecedents are connected to their consequents, e.g., the antecedent is relevant to the likelihood of the consequent occurring.
  • Statements in which a plurality of attributes or functions are mapped to a plurality of objects encompass both all such attributes or functions being mapped to all such objects and subsets of the attributes or functions being mapped to subsets of the objects (e.g., both all processors each performing steps A-D, and a case in which processor 1 performs step A, processor 2 performs step B and part of step C, and processor 3 performs part of step C and step D), unless otherwise indicated.
  • statements that one value or action is“based on” another condition or value encompass both instances in which the condition or value is the sole factor and instances in which the condition or value is one factor among a plurality of factors.
  • statements that “each” instance of some collection has some property should not be read to exclude cases where some otherwise identical or similar members of a larger collection do not have the property, i.e., each does not necessarily mean each and every.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)

Abstract

Systems, methods, and programming for determining pre-scan features are described. In one embodiment, image data representing a plurality of scanning electron microscope (SEM) images may be obtained, each comprising a representation of a feature and each associated with a respective scan of the feature by an SEM. For each image, a parameter associated with each of a plurality of gauge positions along the feature may be determined. A change in the parameter from each SEM image to a subsequent SEM image may be determined. For each gauge position, a rate of change for the parameter may be determined based on a difference in a location of the parameter between at least two of the plurality of SEM images. Feature data representing a reconstruction of the feature prior to application of the SEM may be generated by extrapolating an original location of the parameter based on the rate of change of the parameter.
PCT/EP2019/051461 2018-01-26 2019-01-22 Methods and systems for determining pre-scan features WO2019145278A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862622612P 2018-01-26 2018-01-26
US62/622,612 2018-01-26

Publications (1)

Publication Number Publication Date
WO2019145278A1 true WO2019145278A1 (fr) 2019-08-01

Family

ID=65234549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/051461 WO2019145278A1 (fr) 2019-01-22 Methods and systems for determining pre-scan features

Country Status (2)

Country Link
TW (1) TW201935312A (fr)
WO (1) WO2019145278A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI799024B (zh) * 2021-12-22 2023-04-11 技嘉科技股份有限公司 Control system and method for automatic measurement of signals
TWI805181B (zh) * 2022-01-10 2023-06-11 由田新技股份有限公司 Compensation method, compensation system and deep learning system for an etching process

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6392229B1 (en) * 1999-01-12 2002-05-21 Applied Materials, Inc. AFM-based lithography metrology tool
US20040051040A1 (en) * 2001-08-29 2004-03-18 Osamu Nasu Method for measuring dimensions of sample and scanning electron microscope
US20050247876A1 (en) * 2001-08-29 2005-11-10 Hitachi High-Technologies Corporation Sample dimension measuring method and scanning electron microscope
US20080179517A1 (en) * 2002-05-20 2008-07-31 Hitachi High-Technologies Corporation Sample dimension measuring method and scanning electron microscope
US20090157360A1 (en) 2007-12-05 2009-06-18 Jun Ye Methods and system for lithography process window simulation
US7587704B2 (en) 2005-09-09 2009-09-08 Brion Technologies, Inc. System and method for mask verification using an individual mask error model
US20090263024A1 (en) * 2008-04-17 2009-10-22 Hitachi High-Technologies Corporation Apparatus for data analysis
US20140206112A1 (en) * 2013-01-18 2014-07-24 Sematech, Inc. Method for reducing charge in critical dimension-scanning electron microscope metrology
US9646804B2 (en) * 2013-08-21 2017-05-09 Commissariat à l'énergie atomique et aux énergies alternatives Method for calibration of a CD-SEM characterisation technique


Also Published As

Publication number Publication date
TW201935312A (zh) 2019-09-01

Similar Documents

Publication Publication Date Title
US11243473B2 (en) Measurement method and apparatus
US11875966B2 (en) Method and apparatus for inspection
TWI620004B Method and system for pattern correction, and related computer program product
TWI782317B Method for improving a process model of a patterning process and method for improving an optical proximity correction model of a patterning process
TW202113924A Semiconductor device geometry methods and systems
CN108139686B Indirect determination of processing parameters
US11953823B2 Measurement method and apparatus
TWI749355B Method for correcting metrology data of a patterning process and related computer program product
TWI660235B Method of determining a parameter of a patterned substrate and non-transitory computer program product
WO2020094385A1 Prediction of a deviation from specification based on a spatial characteristic of process variability
TWI796056B Machine-learning-based image generation of post-development or post-etch images
TW202028873A Method for determining candidate patterns from a set of patterns of a patterning process
WO2019145278A1 Methods and systems for determining pre-scan features
WO2023104504A1 Surrounding pattern and process aware metrology
TW202314769A Inspection data screening system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19701813; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19701813; Country of ref document: EP; Kind code of ref document: A1)