KR101556430B1 - Interferometric defect detection and classification - Google Patents

Interferometric defect detection and classification

Info

Publication number
KR101556430B1
Authority
KR
South Korea
Prior art keywords
sample
system
phase
defect
signal
Prior art date
Application number
KR1020117000031A
Other languages
Korean (ko)
Other versions
KR20110031306A (en)
Inventor
Hwan J. Jeong (환 제이. 정)
Original Assignee
Hwan J. Jeong (환 제이. 정)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US13072908P priority Critical
Priority to US61/130,729 priority
Priority to US13561608P priority
Priority to US61/135,616 priority
Priority to US12/190,144 (US7864334B2)
Priority to US12/190,144 priority
Priority to US61/189,508 priority
Priority to US61/189,509 priority
Priority to US18950808P priority
Priority to US18951008P priority
Priority to US18950908P priority
Priority to US61/189,510 priority
Priority to US61/210,513 priority
Priority to US21051309P priority
Application filed by Hwan J. Jeong (환 제이. 정)
Publication of KR20110031306A
Application granted granted Critical
Publication of KR101556430B1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using infra-red, visible or ultra-violet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/9501 Semiconductor wafers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using infra-red, visible or ultra-violet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/41 Refractivity; Phase-affecting properties, e.g. optical path length
    • G01N21/45 Refractivity; Phase-affecting properties, e.g. optical path length using interferometric methods; using Schlieren methods

Abstract

A system and method for defect detection and classification using common-path interferometric imaging is described. An illumination source generates coherent light and directs it toward the sample. An optical imaging system collects the light reflected or transmitted by the sample, which contains a scattered component that is diffracted by the sample and a specular component that is not. A variable phase control system adjusts the relative phase between the scattered and specular components, thereby changing how they interfere at the image plane. The resulting signal is compared with a reference signal for the same location on the sample, and a difference exceeding a threshold is flagged as a possible defect. The process is repeated with different relative phase shifts, and each defect location and difference signal is stored in memory. These data are used to compute the amplitude and phase of each defect signal.

Description

{INTERFEROMETRIC DEFECT DETECTION AND CLASSIFICATION}

This application claims the benefit of U.S. Patent Application No. 12/190,144, filed on August 12, 2008; U.S. Provisional Patent Application No. 61/130,729, filed on June 3, 2008; U.S. Provisional Patent Application No. 61/135,616; U.S. Provisional Patent Application No. 61/189,508, filed on August 20, 2008; U.S. Provisional Patent Application No. 61/189,509, filed on August 20, 2008; U.S. Provisional Patent Application No. 61/189,510, filed on August 20, 2008; and U.S. Provisional Patent Application No. 61/210,513, filed on March 19, 2009, all of which are incorporated herein by reference.

The present invention relates generally to common-path interferometry. More particularly, the present invention relates to high-resolution common-path interferometric imaging for use in detecting defects in microlithographic devices, such as semiconductor devices and integrated circuits, and defects in photolithographic reticles.

Optical defect detection has been one of the key technologies limiting the ability to manufacture ever smaller transistors; to date it has provided a combination of performance and throughput that other technologies, such as electron-beam microscopy, cannot match. However, as the geometries used in IC chips continue to shrink, it is becoming difficult to detect defects reliably. The design rules of future generations of IC chips are so small that there is a real possibility that no existing optical defect detection technology will work. Therefore, in order to extend the life of optical inspection into future fabrication generations, an overall improvement of optical defect detection techniques is desired.

Currently used optical defect detection systems include bright field systems and dark field systems. Unlike a bright field system, a dark field system excludes the undiffracted illumination beam from the image. However, existing dark field and bright field defect detection systems have limits that make reliable defect detection difficult, especially as design rules continue to shrink. Separate-path interferometric techniques have also been proposed, in which a beam splitter generates two beams, a probe beam and a reference beam, which travel to the image sensor through different paths or subsystems. For example, separate-path systems designed for defect detection are disclosed in U.S. Patent Nos. 7,061,625, 7,095,507, 7,209,239, and 7,259,869. These patents, other patents mentioned herein, and all non-patent documents identified herein are incorporated by reference. Another separate-path system, designed for high-resolution surface profiling, is the Linnik interferometer (M. Françon, "Optical Interferometry", Academic Press, New York and London, 1966, p. 289). Separate-path interferometric systems can in principle amplify the defect signal, or measure the amplitude and phase of the defect signal. However, these systems are not only complex and costly but also have serious drawbacks: photon noise and sample pattern noise are excessive, and the systems are unstable because of the two different paths taken by the probe and reference beams. Small environmental disturbances, such as floor vibration, acoustic disturbance, and temperature gradients, can easily destabilize them. As a result, it is difficult to use this kind of separate-path interferometric system in an industrial environment.

Conventional phase-contrast microscopes provide a fixed phase shift, typically π/2 or −π/2, to the specular component. Such systems typically use an extended light source such as an arc lamp or a halogen lamp. Although generally suitable for observing biological samples, conventional phase-contrast microscopes are generally not suitable for detecting the wide variety of defects present in semiconductor wafers and/or reticles.

U.S. Patent No. 7,295,303 discloses an approach similar to a phase-contrast microscope, which is likewise not suitable for detecting the wide variety of defects present in semiconductor wafers and/or reticles.

U.S. Patent No. 7,365,858 and U.S. Published Application No. 2005/0105097A1 disclose a system for imaging biological samples. Two operating modes are disclosed, a "phase mode" and an "amplitude mode". The purpose of the amplitude mode is to obtain a high-contrast raw image; in the phase mode, the technique extracts only phase information. The disclosed system performs liquid-crystal spatial light modulation at a pupil conjugate through the use of an additional lens group and a beam splitter, at the cost of a loss of illumination power.

U.S. Patent No. 6,674,522 and U.S. Patent Application Publication No. 2008/0226157A1 disclose defect detection systems and methods for lithographic masks. They use defocus or the Zernike point spread function to detect defects. The method is complicated, requires a large amount of computational resources, and is not suitable for the detection of small defects.

A system and method for common-path interferometric imaging is provided. According to some embodiments, a common-path interferometric imaging system for detecting and classifying defects in a sample comprises: an illumination source for generating light whose wavelength may range from the EUV (13.5 nm) to the far infrared (around 10 microns); an optical imaging system for collecting a portion of the light from the sample, comprising a scattered component that is diffracted by the sample and a specular component that is reflected or transmitted by the sample without significant diffraction; a variable phase control system for adjusting the relative phase between the scattered component and the specular component; a sensing system for measuring the intensity of the combined scattered and specular components; and a processing system for determining whether a given point on the sample includes a defect.

A precise positioning system allows the intensity signal from each point on the sample to be precisely located and compared by computer with the reference signal for that point. If the difference exceeds preset positive or negative thresholds, the position on the sample is recorded and reported as a possible defect location, along with the levels of the sample and reference signals corresponding to that position.

This process may be repeated with a different phase shift setting, because under certain conditions a defect may be missed at a given phase shift. A second scan with a different phase shift is likely to catch any defects missed during the first scan, but two scans do not provide the additional information needed to characterize the defects precisely. A third scan with a third phase shift, however, provides sufficient data to characterize both the phase and the amplitude of each defect; these data, together with each defect's location relative to the circuit elements, are useful for grouping similar defects that are expected to have similar effects on product yield.

The reference signal to which the signal from the sample is compared may be generated by computer from the pattern that is assumed to be on the sample if no defects were present. Alternatively, if multiple copies of the pattern are available and either some are known to be defect-free or the defects are known to be randomly distributed, the reference signal may be generated by the same or a similar common-path interferometric imaging system, using the same phase shift and wavelength, by scanning one or more corresponding locations on adjacent dies or elsewhere on the same wafer.

Other objects, features and advantages of the present invention will become more apparent from the following detailed description when read in conjunction with the accompanying drawings.

Figure 1 illustrates an embodiment of an interferometric defect detection system according to some embodiments.
Figures 2A and 2B illustrate embodiments of a phase controller and attenuator in accordance with some embodiments.
Figure 3 illustrates an embodiment of an interferometric defect detection system according to some embodiments.
Figures 4A and 4B illustrate embodiments in which the optical path length is varied according to some embodiments.
Figure 5 illustrates an embodiment of a movable mirror used to vary the optical path length according to some embodiments.
Figure 6 illustrates an embodiment of an interferometric defect detection system using a movable-mirror phase controller in accordance with some embodiments.
Figures 7A-7C illustrate embodiments of a compensation plate having a Fourier filter strip for use in an interferometric defect detection system in accordance with some embodiments.
Figure 8 illustrates an embodiment of an arrangement of a folding prism for the illumination light according to some embodiments.
Figure 9 illustrates a phase controller combined with a polarization rotator according to some embodiments.
Figure 10 illustrates an embodiment of a polarization controller according to some embodiments.
Figure 11 illustrates an embodiment of a continuously variable attenuator using polarization in accordance with some embodiments.
Figure 12 illustrates an exemplary implementation of a system using the attenuator type shown in Figure 11.
Figures 13A-13C show details of a system in the vicinity of a pupil or aperture stop according to some embodiments.
Figure 14 illustrates an embodiment of an attenuator having λ/2 and λ/4 plates in accordance with some embodiments.
Figure 15 illustrates an embodiment of an interferometric defect detection system with high-incidence-angle illumination in accordance with some embodiments.
Figure 16 illustrates an embodiment of an interferometric defect detection system with high-incidence-angle illumination and a variable attenuator in accordance with some embodiments.
Figure 17 illustrates an embodiment of an interferometric defect detection system with low-flare high-incidence-angle illumination in accordance with some embodiments.
Figure 18 illustrates an embodiment of an interferometric defect detection system with low-flare high-incidence-angle illumination and a variable attenuator in accordance with some embodiments.
Figure 19 illustrates an embodiment of an interferometric defect detection system with azimuthally rotatable high-incidence-angle illumination in accordance with some embodiments.
Figure 20 illustrates an embodiment of an interferometric defect detection system with azimuthally rotatable high-incidence-angle illumination and a variable attenuator for the specular component in accordance with some embodiments.
Figure 21 illustrates an embodiment of an interferometric defect detection system with azimuthally rotatable high-incidence-angle illumination in accordance with some embodiments.
Figure 22 illustrates an embodiment of an interferometric defect detection system with azimuthally rotatable high-incidence-angle illumination and a variable attenuator for the specular component in accordance with some embodiments.
Figure 23 illustrates an embodiment of an interferometric defect detection system with illumination through a transmissive sample in accordance with some embodiments.
Figure 24 illustrates an embodiment of a sample inspection system combining reflection and transmission modes in accordance with some embodiments.
Figures 25-27 illustrate embodiments of various waveplates for use in operating a detection system in multiple wavelength modes, in accordance with some embodiments.
Figure 28 illustrates an exemplary system configuration for two wavelengths in accordance with some embodiments.
Figure 29 illustrates an embodiment of an interferometric defect detection system with low-incidence-angle illumination and an extended light source in accordance with some embodiments.
Figure 30 illustrates an embodiment of an interferometric defect detection system with high-incidence-angle illumination and an extended light source according to some embodiments.
Figure 31 illustrates an embodiment of an interferometric defect detection system with high-incidence-angle illumination, an extended light source, and phase control in the path of the scattered light in accordance with some embodiments.
Figures 32A and 32B show the shapes of the defects used for numerical simulation.
Figures 33 to 35B are graphs showing the results of the numerical simulations.
Figure 36 shows the simulated enhanced contrast of the image of a 40 nm defect obtained by attenuating the intensity of the specular component by 96%.
Figure 37 shows the simulated enhanced contrast of the image of a 20 nm defect obtained by attenuating the intensity of the specular component by 99.9%.
Figure 38 shows the simulated signal intensity and phase of a 20 nm defect.
Figure 39 illustrates the simulated phases of the defect signals from a 20 nm particle and a 20 nm void.
Figure 40 shows the spatial frequency bandwidth of a defect signal component.
Figure 41 illustrates an embodiment of a system configuration for reducing the number of sample scans.
Figures 42A-42C compare the size of the interference term and the size of the dark field term for different defect sizes and sample reflectivities.
Figures 43A and 43B illustrate a design embodiment of a catadioptric imaging system.
Figures 44A-44F illustrate a coherent uniform illuminator design.
Figures 45A-45F illustrate an autofocus system design.
Figures 46A-46E illustrate serrated apertures and their performance.

A detailed description of the body of work of the present invention is provided below. Although various embodiments are described, it should be appreciated that the present invention is not limited to any one embodiment and includes various alternatives, modifications, and equivalents, as well as combinations of features from different embodiments. In addition, while various specific details are set forth in the following description to provide a thorough understanding, some embodiments may be practiced without some or all of these details. Also, for clarity, technical material known in the art has not been described in detail in order to avoid unnecessarily obscuring the present invention. The terms "reticle" and "mask" are used interchangeably and refer to a patterned object used as a master to create other patterned objects.

An optical field can be described by a complex amplitude. A complex amplitude may conveniently be expressed in either a Cartesian or a polar coordinate system: it has real and imaginary parts in a Cartesian coordinate system, and amplitude and phase in a polar coordinate system. Thus, the three expressions "complex amplitude", "real and imaginary parts", and "amplitude and phase" are equivalent, and they are treated as interchangeable herein.

In addition, the word "light" is used as shorthand for electromagnetic radiation having a very wide range of possible wavelengths, as described below. Also, in practice, the specular component of the reflection is "substantially specular", meaning that it contains not only specularly reflected light but possibly also a small amount of scattered light.

Ⅰ. Defect signal equation

Starting from first principles, when a light beam with a narrow temporal frequency bandwidth strikes a sample such as a wafer, most of the light is absorbed or specularly reflected (undiffracted), while small portions are scattered (diffracted) by the circuit pattern and by any defect. The light beam can be decomposed into several electric field components. Each field component of the ray is defined as follows.

b ≡ |b| exp(iφ_b); the complex amplitude of the specular component. φ_b is the phase of the specular component, which can be set to zero without compromising the generality of the signal equation.

a ≡ |a| exp(i(φ_a + φ_b)) ≡ (a_x + i·a_y) exp(iφ_b); the complex amplitude of the portion of the light scattered by the circuit pattern whose polarization is parallel to that of b. φ_a is the phase of a relative to the phase of b, and a_x and a_y are the real and imaginary components of a when the real axis is oriented along the direction of b.

s ≡ |s| exp(i(φ_s + φ_b)) ≡ (s_x + i·s_y) exp(iφ_b); the complex amplitude of the portion of the light scattered by the defect whose polarization is parallel to that of b. φ_s is the phase of s relative to the phase of b, and s_x and s_y are the real and imaginary components of s when the real axis is oriented along the direction of b.


q_a ≡ |q_a| exp(i(φ_qa + φ_b)); the complex amplitude of the portion of the light scattered by the circuit pattern whose polarization is orthogonal to that of b.

q_s ≡ |q_s| exp(i(φ_qs + φ_b)); the complex amplitude of the portion of the light scattered by the defect whose polarization is orthogonal to that of b.

g ≡ |g| exp(i(φ_g + φ_b)); the complex amplitude of any stray light present. Stray light is undesirable non-image-forming light caused by unwanted reflections from lens surfaces and mechanical parts.

The light intensity detected by the image sensor can then be expressed as follows. Note that, in imaging, light within a narrow temporal frequency bandwidth can be treated as light of a single temporal frequency with the same intensity; this is intuitively reasonable as well as easily verified mathematically.

The light intensity (I) detected by a detector element in the image plane is the squared magnitude of the sum of the co-polarized field components, plus the squared magnitude of the cross-polarized components, plus the stray light intensity:

I = |b + a + s|² + |q_a + q_s|² + |g|²                                      (1a)
  = |b|² + |a + s|² + |q_a + q_s|² + |g|² + b*(a + s) + b(a + s)*           (1b)
  = |b|² + |a + s|² + |q_a + q_s|² + |g|² + 2|b|(a_x + s_x)                 (1c)

In the above formula, b * , a * , and s * are complex conjugates of b, a, and s, respectively.
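The expansion of the detected intensity can be checked numerically. The following Python sketch uses arbitrary, hypothetical component values; it simply verifies that the squared magnitude of the co-polarized sum b + a + s, plus the cross-polarized part |q_a + q_s|², plus the stray light |g|², equals the expanded sum of dark field and interference terms.

```python
import numpy as np

# Hypothetical field components (phase of b set to zero, so b is real).
b = 5.0 + 0.0j                    # specular component
a = 0.4 * np.exp(1j * 0.9)        # light scattered by the circuit pattern
s = 0.1 * np.exp(1j * 1.7)        # light scattered by the defect
qa = 0.2 * np.exp(1j * 0.3)       # cross-polarized pattern scatter
qs = 0.05 * np.exp(1j * 2.1)      # cross-polarized defect scatter
g2 = 0.01                         # stray light intensity |g|^2

# Compact form, as in equation (1a).
I = abs(b + a + s)**2 + abs(qa + qs)**2 + g2

# Expanded form: |b|^2, dark field terms, and the interference terms
# b*(a+s) + b(a+s)* = 2*Re(conj(b)*(a+s)).
I_expanded = (abs(b)**2 + abs(a)**2 + abs(s)**2
              + 2*np.real(a*np.conj(s))
              + abs(qa)**2 + abs(qs)**2 + 2*np.real(qa*np.conj(qs))
              + g2
              + 2*np.real(np.conj(b)*(a + s)))
```

The two expressions agree to numerical precision for any choice of component values.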

The specular component |b|² is written separately in equation (1b) because it can be physically separated from the other components in the pupil plane. It should be appreciated that each complex amplitude is a function of position on the sample, and that only the relative phases between the different components matter. Thus, the absolute phase φ_b of the specular component plays no role and can be set to zero without loss of generality. Note that once φ_b is set to zero, the complex amplitude of the specular component defines the direction of the real axis of the complex plane coordinate system used.

It is assumed that the optical path length difference of the stray light with respect to the specular component is larger than the coherence length of the illumination light. The stray light is therefore added incoherently, without regard to its relative phase, in equation (1).

Equation (1c) indicates that the image contains not only the defect signal but also many other unwanted components. In order to find the defect, the components other than the defect signal need to be removed to the extent possible. This is typically done by die-to-die subtraction, for example subtracting the image of an adjacent die from the image of the current die. Generally, in order to assign the defect signal correctly, at least two die-to-die subtractions are needed, for example [(current die image) − (left die image)] and [(current die image) − (right die image)]. Defects appearing in both subtracted images belong to the current die, while defects appearing in only one of the two subtracted images belong to the corresponding adjacent die. Thus, by comparing the two subtracted images, it is possible to tell unambiguously which defect belongs to which die. For memory-area inspection, cell-to-cell rather than die-to-die image subtraction is performed in order to minimize noise from the wafer pattern. This method works well because the chance of having defects at the same location on two different dies is negligibly small. The difference in image intensity after the die-to-die subtraction can be expressed as follows.

ΔI = |b + a + s|² − |b + a|² + |q_a + q_s|² − |q_a|²                          (2a)
   = |s|² + (as* + a*s) + |q_s|² + (q_a q_s* + q_a* q_s) + (bs* + b*s)        (2b)
   = |s|² + (as* + a*s) + |q_s|² + (q_a q_s* + q_a* q_s) + 2|b||s| cos(φ_s)   (2c)
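The double die-to-die subtraction and its bookkeeping can be sketched in a few lines of Python. The images, threshold, and helper name below are illustrative assumptions, not from the patent; the point is only the attribution rule: a location that exceeds the threshold in both difference images belongs to the current die, while one that appears in only one difference image belongs to the corresponding neighboring die.

```python
import numpy as np

def attribute_defects(current, left, right, threshold):
    """Hypothetical helper for double die-to-die subtraction."""
    d_left = current - left      # (current die image) - (left die image)
    d_right = current - right    # (current die image) - (right die image)
    in_left_diff = np.abs(d_left) > threshold
    in_right_diff = np.abs(d_right) > threshold
    current_die = in_left_diff & in_right_diff    # present in both: current die
    neighbor_die = in_left_diff ^ in_right_diff   # present in only one: neighbor
    return current_die, neighbor_die

# Toy 1-D "images": a defect at index 2 of the current die,
# and a defect at index 5 of the left die only.
current = np.array([1.0, 1.0, 3.0, 1.0, 1.0, 1.0])
left    = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 3.0])
right   = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
cur, nbr = attribute_defects(current, left, right, threshold=1.0)
```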

Equation (2c) is the general defect signal equation. It should be appreciated that the definition of a defect here includes not only defects of interest but also features of little interest. A prominent example of the latter is sample pattern noise. Sample pattern noise is not actually noise, but it is treated as a defect in this terminology; that is, the defect signal s includes the sample pattern noise as well as the defect signal of interest. Equation (2c) shows that the difference between the two signals, with and without a defect, is a mixture of different signal components. The first four terms constitute the dark field part of the signal, because they remain even if the specular component is filtered out (this part is sometimes referred to herein as the "dark field"). Dark field systems detect this part of the signal. Note that the dark field portion of equation (1b), |a + s|² + |q_a + q_s|² + |g|², is always positive. This, however, is not what matters; it is the difference signal of equation (2c) that is used to find defects. The dark field portion of the defect signal, i.e., the first four terms in equation (2c), is a combination of positive and negative terms whose values depend not only on the defect itself but also on the circuit pattern around the defect. The dark field portion of the defect signal can therefore be positive, negative, or zero, depending on the surrounding circuit pattern. This implies that a dark field system cannot detect defects in a consistent manner.

Further, as the defect size becomes smaller than the wavelength, the amplitude of the dark field signal becomes so small that it is easily overwhelmed by noise. The final term in the signal equation is the interference term (sometimes referred to herein as the "interference portion"); that is, the final term arises from interference between the defect signal amplitude and the specular component. The sign and magnitude of the interference term depend not only on the intensity of the specular component but also on the relative phase between the defect signal amplitude and the specular component. If the phase difference between the defect signal and the specular component is ±90°, the defect signal is not detected.

Current bright field systems detect the dark field and interference parts simultaneously, without controlling the relative phase between the defect signal amplitude and the specular component. In this case, not only is the defect signal low, but the dark field and interference terms may add to or cancel each other depending on the nature of the defect itself and the circuit pattern around it. This means that current bright field systems cannot provide consistent defect detection performance.

Thus, current dark field and bright field systems are at a serious disadvantage. Further signal analysis shows that a bright field system is inevitably blind to some types of defects. This will be shown in the following section describing the high-sensitivity mode.

It should be appreciated that although the solutions described herein can be explained, at least theoretically, in connection with the signal equation (2c), the theoretical explanation assumes an idealized environment and does not limit the actual operating characteristics of the embodiments described herein. The signal equation shows the importance of controlling the relative phase between the defect signal amplitude and the specular component for consistent performance. By controlling the relative phase, the magnitude and sign of the interference term can be controlled. For example, if the relative phase is set to zero, the interference term takes its positive maximum value; if the relative phase is set to 180°, the interference term takes its minimum (negative maximum) value. Thus, control of the relative phase between the specular component and the scattered component can be used to maximize the magnitude of the interference term, and can also be used to change its sign. In the present description, "maximization" refers to increasing a parameter, not necessarily to reaching its actual maximum value, and "minimization" refers to decreasing a parameter, not necessarily to reaching its actual minimum value.
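This phase control can be illustrated numerically. The sketch below uses hypothetical values and assumes the phase controller shifts the specular component by θ, so that the interference term of equation (2c) becomes 2|b||s|cos(φ_s − θ): choosing θ = φ_s maximizes the term, θ = φ_s + π flips its sign, and θ = φ_s ± π/2 nulls it.

```python
import numpy as np

def interference_term(b_mag, s_mag, phi_s, theta):
    # Interference term of equation (2c) with a phase controller
    # shifting the specular component by theta (assumed model).
    return 2.0 * b_mag * s_mag * np.cos(phi_s - theta)

# Hypothetical magnitudes and defect phase.
b_mag, s_mag, phi_s = 10.0, 0.1, 0.7

t_max  = interference_term(b_mag, s_mag, phi_s, phi_s)            # +2|b||s|
t_min  = interference_term(b_mag, s_mag, phi_s, phi_s + np.pi)    # -2|b||s|
t_null = interference_term(b_mag, s_mag, phi_s, phi_s + np.pi/2)  # ~0
```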

Thanks to this ability to change the sign of the interference term by changing the relative phase shift, it is always possible to match the sign of the interference term with that of the dark field part. When the signs of the interference term and the dark field part are the same, they add constructively. Maximizing the defect signal through control of the relative phase between the defect signal amplitude and the specular component results in consistent system performance. Another important feature revealed by equation (2c) is the possibility of determining the amplitude and phase of the defect signal by scanning the sample multiple times with a different relative phase value for each sample scan.

Determination of the amplitude and phase of the defect signal facilitates more precise defect classification as well as high defect detection sensitivity. For example, the defect size can be estimated from the amplitude information, and the defect type can be inferred from the phase information. The optical signal amplitude of a defect does not directly provide the physical size of the defect; rather, it provides only the "optical size" of the defect. The relationship between physical size and optical size can be complicated, making it difficult to estimate the physical size of a defect precisely from the optical signal amplitude alone. However, a general relationship between physical size and optical size can be established through experiments or simulations, and the physical size of the defect can then be estimated reasonably well from that relationship. If additional data, such as defect composition data or reticle pattern data, are also used, an even more precise characterization of the defect is possible.
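One way to see why three scans suffice: if the difference signal at a given pixel is modeled as ΔI_k = d + 2|b||s|cos(φ_s − θ_k), where d is the phase-independent dark field part and θ_k is the phase controller setting for scan k, then the three unknowns (d, the interference amplitude 2|b||s|, and the phase φ_s) can be recovered from three measurements by solving a linear system. The following Python sketch works under that assumed model; it is not the patent's actual algorithm.

```python
import numpy as np

# Three phase controller settings, one per scan (assumed values).
thetas = np.array([0.0, 2*np.pi/3, 4*np.pi/3])

def recover_defect(dI, thetas):
    # dI_k = d + c*cos(phi - theta_k)
    #      = d + (c*cos(phi))*cos(theta_k) + (c*sin(phi))*sin(theta_k),
    # which is linear in the unknowns [d, c*cos(phi), c*sin(phi)].
    A = np.column_stack([np.ones_like(thetas), np.cos(thetas), np.sin(thetas)])
    d, C, S = np.linalg.solve(A, dI)
    c = np.hypot(C, S)        # interference amplitude c = 2|b||s|
    phi = np.arctan2(S, C)    # defect phase relative to the specular beam
    return d, c, phi

# Simulate three measurements for a known defect and check the recovery.
d_true, c_true, phi_true = 0.3, 2.0, 1.1
dI = d_true + c_true * np.cos(phi_true - thetas)
d, c, phi = recover_defect(dI, thetas)
```

With more than three scans, the same linear model can be solved by least squares for noise robustness.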

More precise characterization of defects allows a more accurate determination of whether repair is required. This possibility is explained below in connection with the catch-all mode. Precise defect classification is typically as important as reliable defect detection, since it can save time in the defect review process, which is one of the more expensive processes in semiconductor manufacturing.

The relative phase can be controlled by controlling either the phase of the specular component or the phase of the scattered component. However, since the étendue of the specular component is much smaller than the étendue of the scattered component, it is usually easier to control the phase of the specular component. Control of the relative phase between the scattered component and the specular component is one of the key features of the interferometric defect detection and classification techniques described herein. Its significance will be demonstrated by the examples that follow.

The signal equation reveals another significant fact: the interference term,

bs* + b*s = 2|b||s| cos(φ_s),

is the defect signal amplitude amplified by the specular component (b). That is, even if the original defect signal is small, it can be amplified by a large amount, because the specular component is usually very intense. This amplification process also turns out to be noiseless (see, for example, Philip C. D. Hobbs, "Building Electro-Optical Systems: Making It All Work", John Wiley & Sons, Inc., 2000, pages 30-32 and page 123). This type of amplification is referred to as "noiseless parametric amplification", where |b| is the amplification parameter. The basic reason the amplification is noiseless is as follows: the magnitude of the interference term and the photon noise are both proportional to |b|, so the signal-to-noise ratio, the ratio between the two quantities, is independent of |b|. The factor of '2' in the interference term arises from the fact that there are actually two signal amplifiers working coherently together: one amplifier is represented by bs*, and the other by b*s. The two work either constructively or destructively with each other, depending on the relative phase between the defect signal and the specular component.

In order to maximize the amplification of the defect signal, the two amplifiers need to operate constructively, which is achieved by controlling the relative phase between the defect signal and the specular component. The mutual construction is maximized when the relative phase is 0° or 180°, and complete mutual destruction occurs when the relative phase is ±90°. Note from equation (1b) that there is only one dominant source of photon noise, represented by |b|². This means that the specular component can amplify the signal about twice as much as it amplifies the photon noise.

Thus, if the dynamic range of the image sensor is wide enough, the mirror component can increase the signal-to-noise ratio of the signal up to two times the inherent signal-to-noise ratio of the signal itself. The price paid for the factor of '2' is that the relative phase between the dispersed component and the mirror component must be controlled to maximize the amplification. Thus, an increase in the signal-to-noise ratio requires phase control. Phase control requires knowledge of the relative phase, which adds more information to the signal. Thus, the increase in the signal-to-noise ratio does not violate the law of conservation of information.
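These relationships can be checked with a short numerical sketch. All quantities and values below are illustrative assumptions, not figures taken from the tables in this document; the sketch only demonstrates that the interference term's signal-to-noise ratio exceeds the unamplified ratio while staying below twice the inherent ratio.

```python
import math

def dark_field_snr(s2, det_noise):
    """SNR without specular amplification: the defect intensity s2
    (photoelectrons) against its own photon noise plus detector noise."""
    return s2 / math.sqrt(s2 + det_noise ** 2)

def interferometric_snr(s2, b2, det_noise):
    """SNR of the interference term 2|b||s| at the optimum relative phase.
    The mirror (specular) intensity b2 dominates the photon noise."""
    signal = 2.0 * math.sqrt(b2 * s2)
    noise = math.sqrt(b2 + s2 + det_noise ** 2)
    return signal / noise

# Hypothetical numbers: 100-electron defect signal, 10-electron detector noise
inherent = math.sqrt(100)                       # inherent SNR = |s| = 10
weak = dark_field_snr(100, 10)                  # ~7.1, limited by detector noise
amplified = interferometric_snr(100, 1e6, 10)   # approaches 2 x inherent SNR
```

As the mirror intensity b2 grows, the detector noise drops out of the denominator and the amplified SNR approaches, but never reaches, twice the inherent SNR, in line with the discussion above.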

The inherent signal-to-noise ratio is the ratio between the signal and the signal-noise, i.e., the noise contained in the signal itself. Signal-noise is also called intrinsic noise. The dynamic range of a detector is the ratio between the maximum signal range of the detector and the least detectable signal, which is typically assumed to be the noise level of the detector. The dynamic range is typically expressed as the maximum number of gray levels the detector can provide, i.e., the maximum signal range divided by the noise level.
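As a toy illustration of this definition (the sensor numbers are hypothetical, not taken from any table herein):

```python
# Hypothetical image sensor parameters (illustrative only)
full_well = 200_000    # electrons: maximum signal range of the detector
noise_floor = 100      # electrons rms: least detectable signal (detector noise level)

# Dynamic range expressed as the maximum number of gray levels
gray_levels = full_well // noise_floor
```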

No electronic amplifier, including the cleanest ones such as the dynode chain in a photomultiplier tube, can increase the signal-to-noise ratio; electronic amplifiers can only reduce it. Noiseless amplification by the mirror component is special in that it can actually increase the signal-to-noise ratio. It is the best amplifier known. It is the most suitable amplifier for weak signals, such as the signals from very small defects, and is superior in performance to all electronic amplifiers.

The systems and methods described herein fully utilize the power of noiseless amplification by the mirror component to reliably detect very small defects. The interferometric detection used here is a version of homodyne detection, in which the two interfering beams have the same temporal frequency.

Note that the mirror component is a double-edged sword. If it is used as an amplifier by properly controlling its phase, its benefit can be very large. If it is not, it can be harmful in that it merely remains a major source of photon noise. This additional noise means that in some cases the inspection can perform worse than a dark-field inspection system. This is one of the reasons why existing bright-field systems do not perform consistently. One of the main ideas described herein is to use the mirror component in the most advantageous way.

The embodiments shown in the following tables illustrate the power of noiseless amplification. The embodiments were chosen to represent the real world of future high-end defect detection. In the embodiments, the relative phase between the mirror component and the dispersed component is set to 0° or 180° to maximize the noiseless amplification. A typical high-end image sensor such as a CCD or TDI CCD (time delay and integration CCD) is assumed. Detector noise is assumed to be additive and independent of the signal level. Since light intensity is ultimately the number of electrons generated at the detector, light intensity is expressed here in units of electrons generated by the detector, not photons in the light beam.

In the embodiment shown in Table 1, the defect signal is very weak compared with the detector noise, but still quite strong compared with its inherent noise. Table 1 below shows that a defect signal that is undetectably weak for a conventional defect detection system can become an easily detectable signal through the large noiseless amplification provided by a strong mirror component and a wide image sensor dynamic range. In this embodiment, the signal-to-noise ratio was increased from 0.25 to 12.0 by the noiseless amplification process.

Figure 112011000096113-pct00005

Table 2 below shows how an extremely faint signal from a very small defect can be made detectable by the large noiseless amplification provided by a strong mirror component and a wide image sensor dynamic range. In this case, note that the signal is weak even compared with its inherent noise. Nevertheless, the signal-to-noise ratio was increased from 0.005 to a sizable 1.69 by the noiseless amplification process. This represents quite reliable detectability, even for a single-photon signal.

Figure 112011000096113-pct00006

In both cases, the signal-to-noise ratio of the amplified signal is greater than the inherent signal-to-noise ratio of the signal itself. This is one of the surprising powers of the technique described herein which, to the inventor's knowledge, was not previously recognized or expected. The amplified signal-to-noise ratio is still smaller than twice the inherent signal-to-noise ratio because the amplification is finite. The tables above show the importance of noiseless amplification of the signal by the mirror component for the detection of the small and very small defects of the future. Noiseless amplification allows reliable detection of very weak defect signals with a noisy image sensor, as long as the signal's inherent signal-to-noise ratio is high. Without noiseless amplification of the defect signal, it is impossible to detect such very small defects.

In the real world, especially in high-speed applications such as high-throughput defect detection, if the defect signal is as weak as the example shown in Table 2, it is not easy to find the defect even with a large amount of noiseless amplification of the signal. It should be recognized that read noise often becomes the major noise component in high-speed applications. However, the relative advantages of the systems and methods described herein over existing technologies, such as bright-field or dark-field technology, are maintained. In both embodiments, the noiseless amplification greatly increased the signal-to-noise ratio. Basically, a large noiseless amplification makes the detector noise drop out of the equation; only the inherent signal-to-noise ratio remains relevant. The inherent signal-to-noise ratio is the ratio between the signal and the signal-noise contained in the signal itself. It is shown in the discussion of the limitations of dark-field mode below that a large amount of noiseless signal amplification by the mirror component can be achieved even in embodiments with low sample reflectivity.

In signal amplification, the quality of the first-stage amplifier is the most important. The mirror component provides the possibility of noiseless first-stage signal amplification. The systems and methods described herein can take advantage of this by controlling the amplitude of the mirror component and also by controlling the relative phase between the defect signal amplitude and the mirror component. By realizing such noiseless amplification of the signal, a high signal-to-noise ratio can be achieved with the described technique even if the original signal is weak. A high signal-to-noise ratio means high sensitivity and a low false-detection rate in defect detection. Noiseless amplification of the defect signal using the mirror component is one of the key features of the interferometric defect detection and classification technique described herein. Generally, the higher the noiseless amplification, the better the signal-to-noise ratio.

High noiseless amplification is obtained from a strong mirror component. Therefore, a strong, unattenuated mirror component is generally preferred. This is the opposite of a conventional microscope, in which the mirror component is blocked or severely attenuated to improve the contrast of the biomedical image. In the systems and methods described herein, the mirror component is attenuated only when the dynamic range of the image sensor is too limited for the application.

The phase controller may also be used for deamplification of unwanted defect signals. A good example is wafer pattern noise, which is not actually noise but an unwanted defect signal. In most defect detection applications, it is desirable to suppress wafer pattern noise. If suppression of wafer pattern noise is more important than amplification of the defect signals of interest, the phase controller can be set to minimize the wafer pattern noise rather than to maximize the defect signals of interest. A more detailed discussion of pattern noise is provided below. The terms "sample pattern noise", "wafer pattern noise", "sample noise", and "wafer noise" are used interchangeably to refer to the same kind of noise.

Another important fact revealed by examining the signal equations is that the spatial frequency bandwidth of the interference term is different from the bandwidth of the dark-field term. The spatial frequency bandwidth of the interference term is generally smaller than the bandwidth of the dark-field term (see page 40, for example). Intuitively, though not precisely, a defect image formed by the interference term is spatially wider than a defect image formed by the dark-field term. This can be advantageous because it leads to high throughput: the smaller bandwidth allows coarser sampling of the sample image, which in turn allows a wider field of view of the imaging system with the same-sized image sensor. High throughput can typically be achieved with a wide field of view. The bandwidth of the dark-field term is fixed as long as the numerical aperture of the imaging system is fixed, and does not depend on the ray angle of the mirror component. However, the bandwidth of the interference term depends not only on the numerical aperture of the imaging system but also on the ray angle of the mirror component.

The spatial frequency bandwidth of the interference term can therefore be minimized by minimizing the ray angle of the mirror component. The ray angle of the mirror component is minimized when the illumination light is normal or nearly normal to the sample surface. Thus, when only the interference term is used, or when the interference term is dominant, normal or near-normal illumination of the sample can be chosen for high throughput. Normal or near-normal illumination provides the added advantage of making the polarization more uniform across the pupil compared with large incidence angles. More uniform polarization across the pupil leads to a stronger interference term. Another important fact to observe is that if the defect is much smaller than the wavelength, the spatial shape of the interference term is the shape of the amplitude point spread function (APSF) of the imaging system. Even if the spatial frequency of the mirror component is not zero, the shape of the interference term does not change; the effect is merely to give the interference term a nonzero carrier frequency.

If the mirror component comprises a single ray, the interference term can be expressed as the multiplication of the amplitude point spread function (APSF) by a carrier frequency term. That is, the carrier frequency term is not an essential element and can be treated separately. If the carrier frequency term is handled separately, there is no difference between the extracted image shape of a very small defect and the APSF. This allows rapid numerical deconvolution of the defect image with the finite-width sampling function attributable to the detector array.
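A minimal numerical sketch of this separation, using an idealized 1-D sinc-shaped APSF and an assumed carrier frequency (both hypothetical choices, not values from this document), shows that removing the carrier term recovers the APSF shape exactly:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)   # image coordinate, in units of wavelength/NA
apsf = np.sinc(x)                  # idealized 1-D amplitude point spread function
f_c = 0.3                          # assumed carrier frequency from the specular ray tilt

# Interference term of a tiny defect: APSF multiplied by a carrier frequency term
interference = apsf * np.exp(2j * np.pi * f_c * x)

# Handling the carrier term separately recovers the pure APSF shape
recovered = interference * np.exp(-2j * np.pi * f_c * x)
```

Because the carrier is a pure phase factor, multiplying by its conjugate removes it without altering the amplitude profile, which is what permits deconvolving the image against the APSF alone.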

The width of the sampling function is the width of the light-sensitive area in each pixel of the image sensor. High sensitivity or a high dynamic range generally requires a wide light-sensitive area. Thus, the finite size of the detectors in the array acts to reduce the maximum signal amplitude to some extent, and deconvolution counteracts this effect. Numerical deconvolution is the equivalent of image magnification; thus, replacing some optical image magnification with numerical deconvolution reduces the cost of the optical system. This issue is addressed in detail in the section on spatial frequency bandwidth below.

Sometimes it is useful to control the penetration depth of the illumination light into the sample surface. For example, if the defects that need to be detected are located on or close to the sample surface, shallow penetration of the illumination light is preferred in order to detect the defects more easily. If the defects that need to be detected are at the bottom of deep trenches, deep penetration of the illumination light is preferred. The penetration depth of the illumination light cannot be controlled arbitrarily. However, if the printed patterns around the defect run predominantly in one direction, the penetration depth can be controlled to some extent by controlling the polarization of the illumination light. For example, if the polarization direction of the illumination light is set parallel to the direction of the printed patterns on the sample, the illumination light penetrates the least.

If the polarization direction of the illumination light is set perpendicular to the direction of the printed patterns on the sample, the illumination light penetrates the deepest. This method of controlling the penetration depth of the illumination light can be useful for defect detection, because a high percentage of printed patterns have a single dominant edge direction.

Even when the polarization of the illumination light is directed parallel to the direction of the printed pattern, the penetration of the illumination light may sometimes be too deep. In this case, illumination at a large angle of incidence can be considered. Note that the angle of incidence is defined as the angle between the ray and the surface normal (not the surface itself).

Illumination at a large angle of incidence may lead to reduced throughput, because it requires a finer sampling grid to detect the signal accurately. This in turn means a higher magnification ratio, or a smaller field of view, for the same detector size. However, high-angle illumination can also have beneficial effects. Combined with s-polarized light, high-angle illumination can reduce the penetration of the illumination light into the sample surface more effectively than small-angle illumination. Very high-angle incidence is called "grazing incidence".
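The throughput penalty can be seen from the Nyquist sampling requirement. In a simple scalar model (an assumption for illustration, not a derivation from this document), the highest spatial frequency of the interference term is (NA + sin θ_inc)/λ, so the allowed sampling period on the sample shrinks as the incidence angle grows:

```python
import math

def max_sampling_period_nm(wavelength_nm, NA, inc_angle_deg):
    """Nyquist-limited sampling period for the interference term.

    Assumes a simple scalar model in which the highest spatial
    frequency is (NA + sin(theta_inc)) / wavelength.
    """
    f_max = (NA + math.sin(math.radians(inc_angle_deg))) / wavelength_nm
    return 1.0 / (2.0 * f_max)

# Hypothetical parameters: 266 nm illumination, NA = 0.9
normal = max_sampling_period_nm(266.0, 0.9, 0.0)    # near-normal illumination
grazing = max_sampling_period_nm(266.0, 0.9, 60.0)  # high-angle illumination
```

Under these assumed numbers the high-angle case requires roughly half the sampling period of the normal-incidence case, i.e., a finer grid and therefore a smaller field of view for the same detector.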

Reducing the penetration of the illumination light into the wafer surface can reduce so-called "wafer pattern noise". Wafer pattern noise arises when the printed pattern on the wafer varies slightly from die to die due to variations in the manufacturing process across the wafer. There are two types of wafer pattern noise. One is called axial or longitudinal wafer pattern noise, and the other is called lateral wafer pattern noise. High-angle illumination can reduce longitudinal wafer pattern noise. Lateral wafer pattern noise can be reduced by good Fourier filtering and by aperture edge softening and obscuration. An effective and practical method of aperture edge softening and obscuration is described in the section on dispersed apertures below.

Strictly speaking, wafer pattern noise is not actually noise at all. Rather, it is a kind of defect signal that we do not care about. The benefit of reduced illumination penetration can be significant if the surface contour of the wafer is very flat, or if the direction of the wafer pattern edges is parallel to the s-polarization direction of the illumination light. However, if the wafer has many pattern edges that are not parallel to the s-polarization direction of the illumination light, for example many x-direction edges as well as y-direction edges, the benefit is not significant.

Implementation of high-angle incidence illumination can be very expensive. Therefore, the cost-benefit tradeoff must be carefully analyzed before deciding to use high-angle incidence illumination.

Penetration depth control is not the only reason for controlling the polarization of the illumination light. The interaction of polarized light with a defect and its surrounding patterns is generally complex and requires experimental measurements and/or numerical modeling to predict. High-angle illumination and polarization control are discussed further in the section on high-incidence-angle illumination below.

Ⅱ. System configuration

An interferometric defect detection system according to embodiments can be configured in many different ways. Many embodiments share a common optical path and a provision for controlling the relative phase between the defect signal and the mirror component. The general system configurations are presented in this section. Specific design examples and subsystem examples are presented in other sections below.

1. Embodiment of System Configuration

FIG. 1 illustrates an embodiment of an interferometric defect detection system 100. A light beam 118 is generated by an illumination source 112, which in one embodiment is a coherent source such as a laser. Any wavelength may be used as long as the basic components of the interferometric imaging system can be provided for it. Examples of wavelengths that may be used include ultraviolet, deep ultraviolet, vacuum ultraviolet, extreme ultraviolet, visible, infrared, far infrared, and the like.

In FIG. 1, the beam 118 is directed toward the surface of the sample 110 and illuminates the sample surface as shown. The beam 118 covers the field of view of the imaging system on the surface of the sample 110. The sample 110 may be a wafer, a reticle, or another sample to be inspected. The sample 110 disperses (or diffracts) a portion of the illumination beam and specularly reflects the remainder.

A high-resolution optical imaging system, including a front-end lens system 116 and a rear-end lens system 114, is arranged to collect the dispersed and mirror components of the light and direct them to the image sensor 140. Aberrations of the imaging system can cause the relative phase between the mirror component and the dispersed component to vary from one scattered ray to another. This type of phase variation can degrade system performance. Thus, the imaging system should be substantially diffraction-limited, i.e., have only a small amount of wavefront aberration. It should be appreciated that although ray-optics terminology is used here, similar wave-optics (diffraction) descriptions apply, and those skilled in the art understand the equivalence between ray-optic and wave-optic descriptions of optical phenomena.

The design and manufacture of such imaging systems are well known in the art. The front-end lens system is designed to be telecentric on the sample side in order to achieve uniform performance across the field. The telecentricity does not need to be perfect; a substantial amount of telecentricity error, such as a few degrees, is usually tolerable. The rear-end lens system 114 does not need to be telecentric.

For most defect detection applications, the image of the sample needs to be magnified, typically by 100x or more. The magnification is generally achieved by making the focal length of the rear-end lens system 114 longer than that of the front-end lens system 116. In order to achieve high performance, the focus of the imaging system needs to be maintained accurately during the sample scan. Accurate maintenance of the imaging system focus generally requires a servo-controlled autofocus system. An embodiment of a servo-controlled focus system is presented in the autofocus system section below.

It should be appreciated that various types of image sensors may be used in the system 100. Two-dimensional image sensors such as CCDs and time delay and integration CCDs (TDI CCDs) have been found suitable for many applications. As used herein, the term "image sensor" refers to the entire image sensing hardware system, not just the light receiver. For example, in this embodiment the image sensor 140 is associated with a controller 142, which will be described in detail below.

High sensitivity and a high dynamic range are preferred for the image sensor. In order to detect small signals, high noiseless amplification of the signal is generally desirable. However, high noiseless amplification of the signal requires a wide dynamic range of the image sensor. Thus, the dynamic range of the image sensor or sensor system becomes an important issue when extremely small defects need to be detected.

The exemplary embodiment of the system 100 shown in FIG. 1 includes a controller 142, such as a computer or similar machine, adapted (e.g., via instructions such as software embodied in a computer-readable or machine-readable medium) to control the operation of the various components of the system. The controller 142 is configured to control the operation of the system 100 and includes a processing unit ("processor") 152 that is coupled to the sensor system 140, receives and processes the raw digital electronic signals from the sensor system as described in detail below, and forms processed image signals. In an exemplary embodiment, the processor 152 processes the raw signal to determine whether a defect exists as well as to characterize the defect, as described below, for example by comparing it to another signal (such as a digital image of an adjacent die or an ideal die stored in the memory 154). As used herein, the term "electronic or electrical signal" includes both analog and digital representations of physical quantities and other information.

The controller 142 receives the electronic signal from the sensor system 140 and processes the signal to characterize or classify the defects in the sample. As described, the controller 142 includes a processor 152, which may be any processor or device capable of executing a series of software instructions, such as a general-purpose or special-purpose microprocessor, a finite state machine, a controller, a computer, a central processing unit (CPU), a graphics processing unit (GPU), a field-programmable gate array (FPGA), or a digital signal processor, although the invention is not so limited.

A memory unit ("memory") 154 is operatively coupled to the processor 152. The term "memory", as used herein, is intended to encompass all types of memory, such as RAM, ROM, EPROM, PROM, EEPROM, disk, floppy disk, hard disk, CD-ROM, DVD, and the like; the present invention is not limited thereto. In an exemplary embodiment, the controller 142 includes a port or drive 156 adapted to receive a removable processor-readable medium 158, such as a CD-ROM, DVD, memory stick, or similar storage medium.

The defect detection and classification methods described herein may, in various embodiments, be implemented in a machine-readable medium (e.g., the memory 154) comprising machine-readable instructions (e.g., computer programs and/or software) for causing the controller 142 to perform the methods and to control the operation of the system 100. In an exemplary embodiment, the computer programs run on the processor 152 out of the memory 154, and may be transferred to main memory from persistent storage via the disk drive or port 156 when stored on the removable media 158, via a network connection or modem connection when stored outside the controller 142, or via another type of computer- or machine-readable medium from which they can be read and utilized.

The computer programs and/or software modules may comprise multiple modules or objects in order to perform the various methods of the present invention and to control the operation and function of the various components of the system 100. The type of computer programming language used for the code may vary from procedural code languages to object-oriented languages. The files or objects need not have a one-to-one correspondence to the modules or methods described, depending on the desires of the programmer. The methods and apparatus may also comprise combinations of software, hardware, and firmware. Firmware may be downloaded into the processor 152 for implementing the various exemplary embodiments of the present invention.

The controller 142 optionally includes a display unit 146 that can be used to display information using a wide variety of alphanumeric and graphical representations. For example, the display unit 146 is useful for displaying raw or processed signals. The controller 142 also optionally includes a data-entry device 148, such as a keyboard, that allows a user of the system 100 to input information into the controller 142 and to manually control the operation of the system 100.

In an exemplary embodiment, the controller 142 is operatively connected to, or is a part of, the sensor system 140. In another exemplary embodiment, the controller 142 is operatively connected to a sample positioning system 150 for positioning the sample, and to an actuator 144 for adjusting the phase using the phase controller and attenuator 122. The controller is shown only in system 100 for the sake of brevity, but it may be included in all of the exemplary embodiments described herein.

As shown in FIG. 1, the dispersed component 128 and the mirror component 124 pass through the same optical system. Thus, this embodiment is a common-path interferometer. This feature is advantageous for the stability of system performance, because a common-mode disturbance affects the optical paths of the two interfering components by the same amount, thereby preserving the relative phase between the dispersed and mirror components.

In some embodiments, the phase controller and attenuator 122 is installed in the path of the mirror component 124. The mirror component passes through the phase controller 122, and its relative phase can be adjusted to maximize the defect detection sensitivity or to determine the phase and amplitude of each defect signal. The dispersed light beam 128 passes through a compensation plate 130 that compensates for the otherwise large path length difference between the mirror component and the dispersed component. The axial position of the compensation plate is flexible, because the optical path length of the light beam does not depend on the axial position of the compensation plate. That is, although most of the figures show the compensation plate in the same plane as the phase controller in order to emphasize the fact that the compensation plate compensates for the long optical path length through the phase controller, the compensation plate does not need to be placed in the same plane as the phase controller. The compensation plate may be positioned substantially above or below the phase controller. This axial flexibility of the compensation plate eases the mechanical design around it.

Phase control is a key feature that can be used to dramatically improve the defect detection capability, and it will be described in detail below. According to some embodiments, especially when the dynamic range of the image sensor is too small for the application, it may be desirable to attenuate the mirror component 124 to improve the image contrast, either by adding a pinhole stop in its path or by adding a reflective coating to one of the surfaces of the phase controller component. The reflected portion of the mirror component 124 is shown as beam 126 in FIG. 1. The phase controller and attenuator are located in the main pupil plane, or at the aperture stop, which avoids the light loss and complexity of the additional pupil relay system, beam splitters, and other components that would otherwise be required.

Many different types of light sources can be used as the source 112. Bright sources are preferred for many applications, because a bright source allows clean spatial separation of the mirror component from the dispersed component in the pupil conjugate plane of the optical imaging system. A bright source also makes Fourier filtering very effective, thanks to the small footprint of the mirror component in the pupil plane. Clean separation of the mirror component from the dispersed component and effective Fourier filtering are important for the optimal performance of the systems and methods described herein. In general, the brighter the source, the better. The brightest source currently available is the laser. Thus, lasers are the preferred sources for many applications.

The sample may be illuminated by the laser in a coherent or incoherent fashion. However, incoherent illumination with a laser not only requires a costly speckle buster, compared with coherent illumination, but also has the serious drawback of making Fourier filtering ineffective. Thus, coherent illumination by a laser source is preferred. A method of achieving uniform illumination intensity over the entire field is presented in the section on the coherent uniform illuminator below.

Many different types of lasers are suitable as the illumination source. For example, the laser may be a continuous-wave laser or a pulsed laser, such as a mode-locked or Q-switched laser. The laser may have multiple temporal modes or a finite temporal bandwidth. However, a single spatial mode is generally preferred for coherent illumination. Other sources, such as arc lamps and light-emitting diodes (LEDs), may also be used. However, with these extended sources it is difficult to separate the mirror component from the dispersed component, because even in the pupil plane some of the dispersed component can overlap the mirror component. This makes precise control of the relative phase between the dispersed component and the mirror component difficult, and imprecise phase control results in poor performance. It is also difficult to implement an effective Fourier filter with an extended source, because of the fairly large footprint of the mirror component in the pupil plane.

Using a laser as the light source can create hot spots that damage some lens components. This problem can be alleviated by careful lens design and the use of durable lens materials such as fused silica, calcium fluoride, and lithium fluoride.

The phase controller 122 should be placed at or close to the pupil or a pupil conjugate of the optical imaging system, so that the mirror component can be cleanly separated spatially from the dispersed component and uniform performance can be achieved over the entire image field. If the optical system is simple, relaying of the aperture stop is not required; the phase controller 122 is placed at or close to the aperture stop plane of the imaging system of FIG. 1. In many applications it is preferred to position the phase controller at the actual aperture stop plane of the optical imaging system, because a pupil relay is not only bulky and costly but can also degrade image quality and energy efficiency. When a laser is used as the light source 112 and the sample 110 is coherently illuminated, the size of the mirror component in the pupil conjugate plane is typically as small as 1 mm, so the phase controller can be very compact and need not interfere with other system components.

The ability to position the phase controller at or near the aperture stop plane of the optical imaging system is a practical advantage in many applications, even if that area is narrow or crowded with other components. This advantage is particularly valuable in existing or future defect detection system designs, because adding more optical elements to relay the aperture stop to a less crowded area is difficult and costly. In some other embodiments, if the area around the aperture stop is too narrow or crowded to accommodate the phase controller, the aperture stop plane may be relayed to a less crowded area by designing in a high-quality pupil relay system. However, this approach has undesirable side effects: designing an appropriate pupil relay for such a high-etendue DUV optical system is both difficult and costly.

2. Phase controller

Figures 2a and 2b show an embodiment of a phase controller and attenuator. The phase controller is used to change the relative phase between the dispersed component and the mirror component of the light from the sample. It should be appreciated that the absolute phase is generally not of interest; rather, the quantity of interest is the relative phase between the dispersed component and the mirror component. Thus, the phase controller can be installed in the path of either the mirror component or the dispersed component.

While most of the figures herein depict a phase controller disposed in the path of the mirror component, in some embodiments the phase controller is installed in the path of the dispersed component. There are various ways to change the phase of a beam of light. One technique is to vary the optical path length of the beam, which can easily be done by varying the thickness of the optical material through which the beam passes. This type of phase controller can be manufactured in a variety of ways. One method is to overlap two wedge-shaped glass plates as shown in FIG. 2A. Phase controller 122 uses an upper glass wedge 222 and a lower glass wedge 220. The incident light beam 124 is directed into the lower wedge 220, and at least a portion of it passes through the upper wedge 222 as light beam 212. Moving one of the wedge plates in the direction indicated by arrow 250 changes the optical path length of the transmitted beam. For example, the upper wedge 222 may be moved to the right to increase the path length and to the left to reduce it.
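The relation between wedge travel and phase shift can be sketched numerically. The following is an illustrative sketch only, not taken from the patent; the wedge angle, refractive index, and wavelength are assumed values:

```python
import math

def wedge_phase_shift(displacement_m, wedge_angle_rad, n_glass, wavelength_m):
    """Phase shift from sliding one wedge by displacement_m along the wedge slope.

    Sliding the wedge changes the glass thickness traversed by the beam by
    dt = displacement * tan(wedge_angle); the optical path length changes by
    (n - 1) * dt, since glass of index n replaces air over that thickness.
    """
    dt = displacement_m * math.tan(wedge_angle_rad)   # thickness change
    opd = (n_glass - 1.0) * dt                        # optical path difference
    return 2.0 * math.pi * opd / wavelength_m         # phase shift in radians

# Assumed illustrative numbers: 2-degree wedge, n = 1.50, 266 nm light
phase = wedge_phase_shift(10e-6, math.radians(2.0), 1.50, 266e-9)
```

The phase shift is linear in the wedge displacement, which is what makes this geometry convenient for a continuously variable phase controller.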

The air gap between the upper and lower wedges causes the mirror component beam to deviate slightly from its desired path. This can tilt the wavefront of the mirror component at the image plane. A tilted wavefront can cause performance variations across the field, particularly in the high-sensitivity operating mode, which will be described below. However, this problem is easily solved: the mirror component beam can be returned to its desired path by slightly tilting the entire phase controller block in a direction that counters the beam deviation. The required amount of tilt can be determined by measuring the wavefront tilt of the mirror component at the image plane; the wavefront tilt appears as a linear phase variation of the mirror component across the field. Thus, the wavefront tilt can be measured during the phase controller calibration process, which is described in the next section. A couple of iterations of phase controller tilting are expected in order to return the beam accurately to its desired path.

The phase controller needs to be calibrated before use. Calibration can be performed purely mechanically by precisely measuring the dimensions and positions of the optical parts of the controller. However, a better way is to perform the calibration optically, which can be done without difficulty. For example, the phase controller can be calibrated using a step-phase object such as a phase mask consisting of a two-dimensional array of islands, each having a small path-length difference relative to its surroundings. The image of a step-phase object exhibits a contrast inversion around the phase-step region when the phase of the mirror component passes through the 90° point, and the image contrast reaches its extremes at mirror-component phases of 0° and 180°. Using this phenomenon together with the mechanical characteristics of the phase controller, the phase controller can be calibrated precisely. Other patterns, such as small pits, small islands, narrow valleys, and narrow mesas, can also be used for calibration. This calibration process provides a phase reference, or zero-phase-shift point.
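The contrast behavior exploited by this calibration can be illustrated with a simple two-beam interference model. This is an illustrative sketch, not the patent's actual procedure; b and s are assumed amplitudes of the mirror and dispersed components, and theta denotes the relative phase between them set by the phase controller:

```python
import numpy as np

b, s = 1.0, 0.05   # mirror and dispersed amplitudes (assumed, illustrative)

def step_contrast(theta_deg):
    """Contrast of the phase-step region relative to its surroundings.

    The step region receives both components; the surroundings receive
    only the mirror component. The interference term 2*b*s*cos(theta)
    drives the contrast.
    """
    theta = np.radians(theta_deg)
    i_step = b**2 + s**2 + 2 * b * s * np.cos(theta)
    i_bg = b**2
    return (i_step - i_bg) / i_bg

thetas = np.linspace(0.0, 360.0, 3601)
c = step_contrast(thetas)
# extremes near 0 and 180 degrees; the sign flips as theta passes ~90 degrees
```

Because the interference term dominates for small s, the contrast is largest in magnitude at 0° and 180° and inverts near 90°, which is the behavior the calibration relies on.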

If multiple identical patterns are placed across the field and calibration is performed simultaneously across the field, we can not only achieve a more accurate calibration of the phase controller but also establish the phase reference across the field. The phase reference value would be the same everywhere if the imaging system were perfect. However, real imaging systems are not perfect. Slight variations of the phase reference value across the field are expected due to phase controller tilt, aberrations, field curvature, and so on. The linearly varying part of the phase reference across the field can be eliminated by finely tilting the entire phase controller block. The nonlinear part of the variation is due to imperfections of the imaging system.

The first-order effect of imaging system imperfections is a variation of the phase reference value across the field. Thus, the magnitude of this variation is a good indicator of the quality of the imaging system. Variation of the phase reference across the field is not important in the catch-all operating mode or the dark field operating mode presented in a following section. However, it can matter in the high-sensitivity operating mode presented in a following section, because the performance of the high-sensitivity operating mode can then vary across the field. Therefore, it is important to keep the quality of the imaging system high.

It should be noted that there is another phase, called the Gouy phase, that needs to be calibrated. However, calibrating the Gouy phase is easy once the phase controller itself has been calibrated. This phase is described in the section on the variable pinhole stop below.

In an exemplary embodiment, an attenuator is added to the type of phase controller shown in FIG. 2A by forming a reflective coating on at least one surface of a phase controller component. For example, in FIG. 2A, the reflective coating 224 is disposed on the surface of the lower wedge 220 as shown. According to this embodiment, a portion of the incident beam 124 is reflected by the coating 224 and dumped, as shown. According to some embodiments, the amount of attenuation can be varied stepwise by forming several coatings of different reflectivities in a row and making the component movable.

FIG. 2B illustrates an embodiment of a reflective coating 224 shown along line A-A 'in FIG. 2A. In this embodiment, the coating 224 consists of three different reflective coatings 230, 232, 234 arranged as shown in the direction of arrow 240. By moving the lower wedge 220, different attenuation levels can be achieved.

FIG. 3 shows another embodiment of an interferometric defect detection system 300. In FIG. 3, the phase of the dispersed component, denoted by beam 128, is varied using a glass wedge 324. The coherent light source 112 generates an illumination beam 118 that is reflected toward the surface of the sample 110. The dispersed component of the reflected light is denoted by beam 128, and the mirror component is denoted by beam 124. Moving the upper wedge relative to the lower wedge varies the effective path length, and thus the phase, of the dispersed component. The mirror component 124 passes through a compensation block 326 that compensates for the path length difference between the mirror component and the dispersed component. The front-end lens system 316 and the rear-end lens system 314 collect the light from the sample 110 and focus it on the image sensor 140.

Other methods of varying the optical path length are shown in FIGS. 4A and 4B. In this embodiment, an optically transparent liquid 410 is injected between the electrodes 420 and 422 of a ring capacitor, as shown in FIG. 4A. The thickness of the liquid 410 is varied by varying the voltage across the capacitor electrodes 420 and 422. A liquid crystal, rather than an ordinary liquid, may also be used for the liquid 410; in this case, the optical path length is changed by changing the average orientation of the liquid crystal molecules. FIG. 4B shows a top view of the structure of FIG. 4A along line B-B'. The upper electrode 420 is shown along with the liquid 410.
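The liquid-crystal variant can be quantified with a one-line formula: reorienting the molecules changes the effective refractive index seen by the light, and the phase shift follows from the index change and cell thickness. A minimal sketch with assumed, illustrative numbers (not from the patent):

```python
import math

def lc_phase_shift(delta_n_eff, thickness_m, wavelength_m):
    """Phase shift from reorienting liquid-crystal molecules.

    Changing the average molecular orientation changes the effective
    refractive index by delta_n_eff, so the optical path length changes
    by delta_n_eff * thickness.
    """
    return 2.0 * math.pi * delta_n_eff * thickness_m / wavelength_m

# Assumed values: 5 um cell, effective index swing of 0.2, 633 nm light
shift = lc_phase_shift(0.2, 5e-6, 633e-9)
```

With these assumed numbers the available phase swing exceeds a full wave, which is why thin liquid-crystal cells are practical phase controllers.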

A movable wedge-shaped glass plate or a transparent film strip can also be used as a simple, continuously variable phase controller. However, this type of phase controller inevitably deviates the light beam from its ideal path and consequently degrades system performance.

FIG. 5 shows an embodiment of a movable mirror used to vary the optical path length, according to another embodiment. The system includes a movable member 530 having a reflective surface. The incident mirror component beam 520 is partially reflected from the surface 534 of the member 530. The dispersed light beams 510 and 512 are reflected from the fixed reflecting member 536. Movable-mirror phase controllers are particularly useful for applications that use very short wavelengths of light, such as the vacuum ultraviolet or extreme ultraviolet light expected in future generations of defect detection systems, because it is very difficult to find or develop transmissive optical materials for such wavelengths.

It should be appreciated that the phase control mirror does not always need to be highly reflective. For some applications, especially when the dynamic range of the image sensor is limited, low reflectivity is actually preferred, because attenuation of the mirror component is useful for achieving an appropriate image contrast. For example, bare glass without any coating has been found to provide adequate reflectivity in some cases. In other embodiments, the phase controller may be built with an electro-optic component, especially if a fast response is desired.

FIG. 6 shows an embodiment of an interferometric defect detection system using a movable-mirror phase controller. The incident light beam 618 is directed to the surface of the sample 610, which may be a wafer, reticle, or other sample to be inspected. The dispersed components, denoted by beams 510 and 512, pass through lens system 616 and are reflected from the reflective member 536 before passing through lens system 614, which directs the beams to the image sensor 640. The mirror component beam 520 is reflected from the surface of the movable reflecting member 530, as described with reference to FIG. 5.

Although a continuously variable phase controller is shown in the various embodiments described herein, it should be appreciated that, according to some embodiments, a discretely variable phase controller may be used. For example, if the total number of phase selections is limited to four, one good choice of phase values for the discretely variable phase controller is {0°, 180°, ±90°}. Three discrete phase selections suffice for some applications, such as the catch-all operating mode described below; in this case, a good choice of phase values is {0°, ±120°}. Reducing the number of phase selections to two, for example {0°, 180°} or {90°, -90°}, is not preferred, because the sign of the interference term cannot then be made to match the sign of the defect signal for both amplitude-type defects and phase-type defects.
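The benefit of three phase selections such as {0°, ±120°} can be illustrated with a standard three-step phase-shifting calculation, which recovers both the magnitude and the phase (and hence the sign) of the interference term from three intensity measurements. This is an illustrative sketch with assumed amplitudes, not the patent's algorithm:

```python
import cmath
import math

def simulate_intensity(b, s, theta):
    """Detected intensity when the phase controller shifts the mirror
    component b by theta; s is the complex defect (dispersed) amplitude."""
    return abs(b * cmath.exp(1j * theta) + s) ** 2

def recover_interference(i0, ip, im):
    """Recover the complex interference term 2*b*|s|*exp(i*phi) from
    measurements at controller phases 0, +120, and -120 degrees."""
    re = (2 * i0 - ip - im) / 3.0          # 2*b*|s|*cos(phi)
    imag = (ip - im) / math.sqrt(3.0)      # 2*b*|s|*sin(phi)
    return complex(re, imag)

b = 1.0
s = 0.03 * cmath.exp(1j * math.radians(40))   # hypothetical defect signal
meas = [simulate_intensity(b, s, math.radians(t)) for t in (0.0, 120.0, -120.0)]
term = recover_interference(*meas)
# |term| is close to 2*b*|s|; the phase of term is the defect's phase
```

Because both the real and imaginary parts of the interference term are recovered, amplitude-type and phase-type defects are both captured, which is not possible with only two phase selections.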

A discretely variable phase controller can be implemented in many different ways. One method of fabricating a discretely variable phase controller is to deposit thin films of precise thickness on a substrate or to etch the substrate to precise depths. Although a discretely variable phase controller may have a different physical shape than a continuously variable phase controller, they are not regarded as conceptually different kinds of phase controllers; the discretely variable phase controller is considered a subset of the continuously variable phase controller, since a continuously variable phase controller can also be operated in a discrete fashion.

A single phase controller may be shared by multiple wavelengths or may be used with broadband illumination. However, in this case, precise phase control for all wavelengths is very difficult to achieve.

If the phase of the phase controller can be changed rapidly, the system can be operated in a frequency conversion mode. The frequency conversion mode is a good choice when there is a significant amount of 1/f noise. Rapid changes in the phase of the phase controller can be achieved in many different ways; for example, one of the glass members of the phase controller shown in FIG. 2A can be moved rapidly. If the phase controller is made of an electro-optic material, a very rapid phase change can be achieved by driving the phase controller electronically. Frequency conversion is very difficult to implement in scanning systems, particularly fast scanning systems, but is very easy to implement in non-scanning systems such as static or stepping systems.
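The way rapid phase modulation escapes 1/f noise can be illustrated with a generic lock-in (synchronous detection) sketch: the interference term is moved up to the modulation frequency, where slow drift does not reach, and is then demodulated. This is a toy model with assumed numbers, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f_mod, n = 10000.0, 250.0, 10000   # sample rate, modulation freq, samples
t = np.arange(n) / fs

signal_amp = 0.02                        # weak defect signal (assumed)
phase_mod = np.cos(2 * np.pi * f_mod * t)  # rapid phase toggling reference

# Toy detected intensity: the interference term rides on the modulation,
# on top of a slow (1/f-like) background drift and white detector noise.
drift = 0.5 + 0.1 * t / t[-1]
detected = drift + signal_amp * phase_mod + 0.005 * rng.standard_normal(n)

# Synchronous (lock-in) demodulation at the modulation frequency
recovered = 2.0 * np.mean(detected * phase_mod)
```

The drift contributes almost nothing after demodulation because it is uncorrelated with the modulation reference, so the weak signal amplitude is recovered despite being far below the background.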

3. Fourier filtering

Blocking unwanted light at the pupil conjugate or aperture stop is called Fourier filtering, because the light amplitude distribution at the pupil plane or aperture stop is the Fourier transform of the light amplitude distribution at the object plane. Fourier filtering is a desirable feature in many applications because it can effectively reduce the amount of light diffracted by Manhattan patterns on a mask or wafer from reaching the detection array. This reduces not only photon noise but also sample pattern noise. It also makes the intensity of the light more uniform across the field.

More uniform light intensity allows better use of the dynamic range of the image sensor for noise-free signal amplification. Most circuit patterns are formed from edges running in the x- or y-direction, and therefore diffract light along two narrow bands in the pupil corresponding to the y- and x-directions of the circuit pattern. This kind of diffracted light does not carry much information about defects, but it generates photon noise and pattern noise and can saturate the image sensor.

Therefore, it is preferable to filter out this kind of light. FIGS. 7A-7C illustrate an embodiment of a compensation plate carrying opaque Fourier filter strips for use in an interferometric defect detection system with near-normal-incidence illumination. In FIG. 7A, the compensation plate 730 is shown with narrow Fourier filter strip members 750, 752, 754, 756. The scattered light near the mirror beam is blocked by the opaque blocking plate 732, which has an opening of width p just wide enough to pass the mirror beam. In this embodiment, the light diffracted by the x- and y-oriented wafer pattern features lands on the filter strip members 750, 752, 754, 756 at the pupil plane or aperture stop. In this way, this kind of unwanted light is filtered very effectively. All that is needed is a pair of crossed strips of opaque material, such as metal.

It should be appreciated that the Fourier filter blocks not only light diffracted from periodic patterns but also light diffracted from aperiodic patterns, such as long lines or edges oriented perpendicular to the Fourier filter strips. The strip members 750, 752, 754, 756 block most of the unwanted light generated by Manhattan patterns on the mask or wafer while blocking very little of the defect signal light. This type of Fourier filter, which blocks unwanted light in two directions, is called a two-dimensional Fourier filter. Two-dimensional Fourier filtering is more effective than one-dimensional Fourier filtering in blocking unwanted light from a two-dimensional pattern on a sample. This means that a two-dimensional Fourier filter makes the image intensity more uniform across the field than a one-dimensional Fourier filter.

Uniform image intensity is important in many applications because it allows full use of the dynamic range of the image sensor for the amplification of defect signals. Thus, an effective two-dimensional Fourier filter is essential for high, noise-free amplification of weak defect signals, and it improves the useful dynamic range of the image sensor.
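The effect of crossed Fourier filter strips can be illustrated numerically: light diffracted by x- and y-oriented edges concentrates in two narrow frequency bands through the origin of the pupil, and zeroing those bands (while passing the mirror beam at the center) suppresses the pattern while largely preserving a point-like defect. This is an illustrative sketch with a hypothetical pattern and defect, not the patent's optics:

```python
import numpy as np

def fourier_filter_2d(image, strip_halfwidth=1, keep_dc=True):
    """Block the two narrow frequency bands (fx ~ 0 and fy ~ 0) where light
    diffracted by x/y Manhattan edges concentrates; optionally pass the
    central (mirror/specular) component."""
    spec = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = spec.shape
    cy, cx = ny // 2, nx // 2
    dc = spec[cy, cx]
    spec[cy - strip_halfwidth: cy + strip_halfwidth + 1, :] = 0  # one strip
    spec[:, cx - strip_halfwidth: cx + strip_halfwidth + 1] = 0  # crossed strip
    if keep_dc:
        spec[cy, cx] = dc   # opening of width p passes the mirror beam
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spec))) ** 2

# Hypothetical sample: Manhattan-style vertical lines plus a point defect
img = np.zeros((64, 64))
img[:, ::8] = 1.0
img[32, 37] += 1.0
filtered = fourier_filter_2d(img)
```

The line pattern's diffraction orders all fall inside the blocked bands, so the pattern is flattened, while the defect, whose light spreads over the whole pupil, survives almost intact.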

The width of the Fourier filter strips does not need to be uniform; it can be varied across the pupil to block unwanted light more effectively. Unwanted light is generally stronger near the mirror component at the pupil plane, so the Fourier filter strips generally need to be tapered to optimize their performance. Fourier filter strips that are wide in the middle and taper to narrow ends are generally more effective at blocking unwanted light while minimizing the blocking of defect signal light.

The positions of the strips do not need to be changed as long as the illumination beams 718 and 780 are kept in the same positions. Thus, the Fourier filter does not need any driving mechanism and can be installed permanently.

It should be appreciated that the Fourier filter can serve a dual function. The Fourier filter strips can be used as an aperture stop for the mirror component by extending their inner ends into the area through which the mirror component passes. If the aperture stop needs to be variable, the Fourier filter strips should be manufactured so that they can move along their lengths. Mechanical friction between the moving Fourier filter strips and the fixed compensation plate can easily be avoided by forming a gap between the Fourier filter strips and the compensation plate. A fairly large gap between the Fourier filter strips and the compensation plate does not affect the performance of the imaging system, because the axial position of an opaque strip does not affect the optical path length of any rays.

Thus, two-dimensional Fourier filtering is not only simple and easy to implement but also minimizes the impact on the signal light. FIG. 7A shows an upper glass wedge 722 and a lower glass wedge 720. FIG. 7B illustrates a cross-sectional view of the arrangement of FIG. 7A along line C-C', in accordance with some embodiments. The compensation plate 730 is shown having an opening in which the upper glass wedge 722 and the lower glass wedge 720 are disposed. The upper surface of the lower glass wedge 720 has a variable-reflectivity surface, as shown and as described with respect to FIGS. 2A and 2B.

FIG. 7C illustrates a cross-sectional view of the arrangement of FIG. 7A along line D-D', in accordance with some embodiments. The compensation plate 730 is shown having an opening in which the upper glass wedge 722 and the lower glass wedge 720, with its reflective coating 724, are disposed. Relative motion between the upper and lower glass wedges is produced by an elongated arm 726 and an actuator 770 connected to the upper glass wedge 722. Note that the center of the illumination input prism 780 and the small pupil stop of diameter p for the mirror beam are arranged diagonally opposite one another in FIGS. 7 and 8.

It should be appreciated that, in most of the figures, the compensation plate and phase controller are disposed on or close to the same plane to emphasize the fact that the compensation plate compensates the optical path length relative to the phase controller. However, this is not necessary, because the axial position of the compensation plate is very flexible, as described above. This flexibility can relieve mechanical conflicts or crowding around the Fourier filter and the phase controller.

According to another embodiment, a Fourier plane blocker may be added to remove pattern diffraction other than that originating from Manhattan patterns on the sample, if desired. This type of special Fourier filter is typically custom designed and can be implemented in many different ways. For example, additional metal strips may be introduced at the pupil plane. Another way is to insert a glass plate or pellicle with a printed pattern at the pupil plane. This kind of flexibility allows near perfect filtering of noise-generating light for almost any kind of wafer or mask pattern. This is another advantageous feature of the systems and methods described herein.

Too much Fourier filtering can be detrimental, because the Fourier filter blocks defect signal light as well as noise-generating light. Blocking signal light impacts the final defect signal in two ways: it not only reduces the total amount of signal light but also blurs the image of the defect through diffraction. There is typically an optimal amount of Fourier filtering that depends on the pattern on the wafer. Thus, the desired amount of Fourier filtering depends on the particular application and can be determined without undue experimentation by one skilled in the art.

The Fourier filter does not always have to be made of an opaque material such as a metal strip. It may be made of a translucent material or even a completely transparent material such as a dielectric film. This type of Fourier filter can be very effective in increasing the visibility of the signal or of certain patterns or features. For some applications, such as observing complex patterns or features, a sophisticated Fourier filter can be used to increase the image contrast.

A Fourier filter made of an absorbing material such as metal can get hot during operation, especially since strong light sources are generally used in industrial applications. A hot Fourier filter can heat the surrounding air, and this can cause optical as well as mechanical problems, because heated air can distort the wavefront of the signal light. However, this kind of heat problem can be solved or mitigated by flowing a gas with high thermal conductivity, such as helium, around the Fourier filter. Helium is particularly suitable because its refractive index is very low and is therefore not very sensitive to changes in its density.

4. Variable pinhole stop

The systems and methods described herein can operate with or without a fixed pinhole stop in the path of the mirror component. However, in many applications it has been found that a variable pinhole stop in the path of the mirror component can improve system performance.

Most of the figures illustrating the phase controller, i.e., FIGS. 2A, 7B, 7C, 9, 10, 11, 13B, 13C and 14, show a small stop for the mirror beam on top of the phase controller. The ideal position for the mirror beam stop is the pupil plane, because system performance may vary across the field if the pinhole stop is located away from the pupil plane. For a normal-incidence design, the primary pupil plane is the back focal plane of the front-end lens system.

The term "mirror component" cannot be defined precisely, because there is no clear boundary between the mirror component and the dispersed component. The mirror component must have a finite size and therefore always contains at least a very small amount of dispersed (or diffracted) components. Thus, the mirror component actually refers to a combination of undispersed (undiffracted) light and low-angle dispersed light. The "mirror component" as used herein is allowed to contain a small amount of low-angle dispersed components.

Since the mirror component contains a small amount of low-angle dispersed light, the character of the mirror component can be changed by varying the amount of low-angle dispersed light it contains. Changing the size of the mirror stop is one of the simplest ways to change the amount of dispersed light included in the mirror beam: a large mirror stop admits more dispersed light into the mirror beam, and vice versa. What is important is that the stop size is directly related to the spatial uniformity of the mirror component at the image plane. A large stop reduces the spatial uniformity of the mirror component at the image plane because it passes more dispersed light, and vice versa. In other words, a small mirror stop levels out the local variations of image intensity, and a large mirror stop preserves them.

More precisely, a small mirror stop spatially equalizes the local variations of the complex amplitude of the mirror component at the image plane, and vice versa. That is, the mirror stop spatially equalizes not only the intensity or amplitude but also the phase variation of the mirror component across the field. Mathematically speaking, the complex amplitude of the mirror component at the image plane is the convolution of the sample reflectance function with the diffraction pattern of the mirror stop at the image plane; a smaller stop produces a wider diffraction pattern and therefore more spatial averaging.
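This convolution relationship can be illustrated in one dimension: the mirror stop acts as a low-pass filter on the sample reflectance, with a cutoff that grows with the stop diameter. The sketch below uses arbitrary units and an assumed sinusoidal reflectance variation; it is an illustration, not the patent's model:

```python
import numpy as np

def mirror_component(reflectance, stop_cutoff):
    """Image-plane mirror component: low-pass filter the sample reflectance
    with a spatial-frequency cutoff proportional to the mirror-stop size
    (1-D sketch; filtering in the frequency domain is equivalent to
    convolving with the stop's diffraction pattern)."""
    n = reflectance.size
    spec = np.fft.fftshift(np.fft.fft(reflectance))
    freq = np.fft.fftshift(np.fft.fftfreq(n))
    spec[np.abs(freq) > stop_cutoff] = 0   # stop passes only low angles
    return np.abs(np.fft.ifft(np.fft.ifftshift(spec)))

x = np.arange(512)
reflectance = 1.0 + 0.2 * np.sin(2 * np.pi * x / 16)  # local variation
small = mirror_component(reflectance, 0.01)   # small stop
large = mirror_component(reflectance, 0.20)   # large stop
# the small stop yields a spatially uniform mirror component;
# the large stop preserves the local variation
```

The small stop removes the reflectance variation entirely (only the mean survives), while the large stop passes it through, matching the uniformity behavior described above.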

Thus, by changing the mirror stop size, we can change not only the total amount of the mirror component that reaches the image sensor but also the spatial uniformity of the mirror component at the image plane. The diameter of the variable mirror stop is shown as p in FIGS. 7B and 7C. This modification of the mirror component can be used to improve defect detection capability. The mirror stop can also be used to attenuate the mirror component, because a small stop transmits less of the mirror component. Another way of attenuating the mirror component is described in the amplitude attenuation section below.

If the dynamic range of the image sensor is not sufficient, the defect signal is characterized poorly by the limited number of available gray levels, even if the entire dynamic range of the detector is fully utilized through the noise-free amplification of the signal. In this case, partial attenuation of the mirror component is required in order to achieve an appropriate contrast in the raw image. Adequate attenuation of the mirror component can easily be achieved by adjusting the size of the mirror stop. Attenuating the mirror component with the mirror stop has the beneficial side effect of making the mirror component more uniform across the field.
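Why attenuating the mirror component raises the raw-image contrast can be checked with simple arithmetic: for a weak in-phase defect amplitude s against a mirror amplitude b attenuated by a factor t, the contrast grows roughly as 2s/(t·b). The numbers below are assumed for illustration:

```python
def image_contrast(b, s, t):
    """Raw-image contrast of a defect when the mirror amplitude b is
    attenuated by amplitude factor t; s is the weak, in-phase defect
    amplitude (illustrative scalar model)."""
    i_defect = (t * b + s) ** 2   # defect pixel: interference of t*b and s
    i_bg = (t * b) ** 2           # background pixel: attenuated mirror only
    return (i_defect - i_bg) / i_bg

b, s = 1.0, 0.01
weak = image_contrast(b, s, 1.0)    # no attenuation: ~2%
strong = image_contrast(b, s, 0.1)  # 10x amplitude attenuation: ~21%
```

With a 10x amplitude attenuation, the same defect occupies roughly ten times more of the sensor's gray levels, which is the point of partially attenuating the mirror component.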

Another advantageous feature of the mirror stop is that the light it rejects can easily be routed out of the optical system, so it does not form ghost images. As is well known, an attenuator with a reflective coating can produce a ghost image through a second reflection from another surface. However, there are drawbacks. First, the mirror stop must absorb a lot of light energy to attenuate the mirror component adequately, and it can therefore become very hot. This can cause optical as well as mechanical problems, because a hot stop can heat the surrounding air and heated air can distort the wavefront. However, this kind of heat problem can be alleviated by filling the lens cavity with a gas of high thermal conductivity and low refractive index, such as helium. Helium is a good choice because its refractive index is very low and therefore not very sensitive to changes in its density.

The second drawback is a phase shift of the mirror component that varies with the pinhole size. This type of phase change is called the Gouy phase shift. It is an inherent physical phenomenon and therefore cannot easily be avoided. However, this phase shift is static, so it can easily be mapped across the field and compensated. Thus, the phase change of the mirror component associated with the mirror stop size needs to be handled, but it is not a serious issue. In practice, the mirror stop takes the form of a pinhole. The reflective counterpart of the pinhole is a small mirror (pin mirror) that reflects part of the incident light. The choice of the type and shape of the mirror stop depends on the application and the design of the optical system. Transmissive and reflective pinholes share the same optical properties, so all the explanations associated with transmissive mirror stops apply directly to reflective mirror beam stops.

In most of the figures, the mirror beam stop and the Fourier filter are shown as separate components to emphasize their separate functions. In an actual system design, however, it is preferable to combine the two separate parts into one in order to simplify the mechanical design and minimize potential mechanical interference. The two parts can be combined by extending the Fourier filter strips inward or by extending the mirror beam stop or pinhole opening outward. In a combined design, the size of the pinhole stop can be adjusted by moving the Fourier filter strips along their lengths.

5. Actuator

A variable phase controller requires some kind of mechanical or electrical actuator. The most convenient place for the actuator is right next to the phase controller. However, an actuator placed right next to the phase controller would block too much signal light. In some embodiments, the actuator is therefore placed at the periphery of the optical imaging system, which is an attractive choice because it provides more space for the actuator. The disadvantage of this choice is that it requires some mechanism to transfer the actuator motion to the phase controller; the motion transfer mechanism must span the radius of the lens cavity and can block signal light. According to some embodiments, however, this light blocking problem is solved by exploiting the fixed position of the Fourier filter: by running the motion transfer mechanism, such as a movable or rotatable wire, directly above or below a Fourier filter strip, additional blocking of light can be avoided.

As shown in FIGS. 7A and 7C, a motion transfer member 726 is provided that runs along the path of the Fourier filter member 754. The motion transfer member 726 is driven by the actuator 770 and moves the upper wedge-shaped glass member of the variable phase shifting mechanism. Similarly, motion transfer mechanisms for other components, such as variable pinhole stops or wave plates, can be implemented so as to minimize additional light blocking. Sufficient space for the motion transfer mechanisms can be secured easily because the axial position of the compensation plate is very flexible.

6. Obscuration

The phase controller and its actuator inevitably obscure (or block) part of the signal light. This kind of light blockage not only reduces the total amount of signal light that can reach the image sensor but also reduces the resolution of the optical system by diffracting light. This undesirable side effect should be minimized as much as possible. To achieve this, the optical components of the phase controller and its actuator should be made as small as possible, or the actuator should be placed at the periphery of the optical imaging system.

FIG. 8 shows an arrangement of a folding prism for the illumination light according to another embodiment of the present invention. The compensation plate 830 carries Fourier filter strips 850, 852, 854, and 856 in a fashion similar to that shown in FIG. 7A. As shown in FIG. 8, an additional small reduction of the obscuration can be achieved by placing the folding prism 880 for the illumination light beam 818 in line with the Fourier filter strip 850. In addition, softening the edges of the obscurations and of the aperture stop of the optical imaging system can reduce the undesirable side effects of edge diffraction. An effective and practical method of softening the edges of obscurations and apertures is described in the serrated aperture section below.

A beneficial side effect is obtained from the large obscuration caused by the blocking plate 732: the obscuration acts as a guard band in the dark field mode. This large guard band, together with the two-dimensional Fourier filter, makes the dark field mode darker. Because the dark field mode is then characterized by low noise, this obscuration can support higher defect detection sensitivity than a dark field mode with less obscuration.

7. Polarization control of illumination light

Controlling the penetration depth of the illumination light into the sample surface by controlling the polarization direction of the illumination light has already been described. However, penetration depth control is not the only reason to control the polarization of the illumination light: the detection sensitivity for some types of defects depends on the polarization of the illumination light. Therefore, the ability to change the polarization direction of the illumination light can be an important feature. The polarization of the illumination light can be controlled easily and precisely with the arrangements described here, since the etendue of the illumination light beam is small, and existing polarization control devices can be used. If the polarization of the illumination light changes during its passage through the illumination system, the change can be measured and compensated. Polarization other than linear polarization is not needed to maximize defect detection sensitivity unless the defect and its surrounding pattern have a helical structure; this has been found to be the case for semiconductor wafers and reticles. However, if two mutually orthogonal linear polarizations must be provided at the same time, diagonally linearly polarized or circularly polarized light can be used. In this case, though, the defect detection sensitivity can be compromised.

8. Polarization control of collected light

The polarization of the signal light may differ from the polarization of the mirror component. To achieve high defect detection sensitivity, the polarization of the mirror component should be made as close as possible to the polarization of the signal light. Thus, in some embodiments, the polarization of the mirror component is changed in the path between the sample and the detector. This can be done easily and precisely because the etendue of the mirror component is small.

FIG. 9 shows a phase controller combined with a polarization rotator suitable for some embodiments. FIG. 9 shows a bottom wedge-shaped glass plate 920 with a reflective coating 924, a movable top wedge-shaped glass plate 922, and a variable aperture stop 950. A rotatable λ/2 plate 960 is disposed above the variable aperture stop 950. The incident mirror light beam 916 is partially reflected by the coating 924, and a portion of the beam 912 passes through the movable wedge-shaped glass plate 922, the stop 950, and the rotatable λ/2 plate 960. The polarization control capability of the arrangement shown in FIG. 9 is somewhat limited in that it cannot convert the polarization of the incident mirror component into an arbitrary polarization state. However, the arrangement can rotate incident linearly polarized light into any direction. Polarization states other than linear polarization are not required to maximize defect detection sensitivity unless the defect and its surrounding pattern have a helical structure, which has been found to be the case for semiconductor wafers and reticles. Therefore, the simple polarization control device shown in FIG. 9 is suitable for wafer or reticle defect detection.

If more general polarization control is required, the slightly more complex polarization controller shown in FIG. 10 may be used. FIG. 10 shows a lower wedge-shaped glass plate 1020 having a reflective coating 1024, a movable upper wedge-shaped glass plate 1022, and a variable aperture stop 1050. A rotatable λ/2 plate 1060 and a rotatable λ/4 plate 1062 are disposed above the variable aperture stop 1050. The incident mirror light beam 1016 is partially reflected by the coating 1024, and a portion of the beam 1012 is transmitted through the movable wedge-shaped glass plate 1022, the stop 1050, the rotatable λ/2 plate 1060, and the rotatable λ/4 plate 1062. The arrangement shown in FIG. 10 can convert the incident polarization into any type of polarization. The working principle is disclosed in R. M. A. Azzam and N. M. Bashara, "Ellipsometry and Polarized Light", Elsevier Science B.V., 1999, pp. 72-84, which is incorporated herein by reference.
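The action of the rotatable wave plates can be sketched with standard Jones calculus. The following illustrative sketch (textbook Jones-matrix conventions, not values from this disclosure) shows that a λ/2 plate at angle θ rotates linear polarization by 2θ, and that a λ/2 plate followed by a λ/4 plate can produce, for example, circular polarization:

```python
import numpy as np

def rot(theta):
    """Rotation matrix for a Jones-vector basis change."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def waveplate(retardance, theta):
    """Jones matrix of a wave plate with its fast axis at angle theta."""
    J = np.array([[1, 0], [0, np.exp(-1j * retardance)]])
    return rot(theta) @ J @ rot(-theta)

half = lambda th: waveplate(np.pi, th)       # lambda/2 plate
quarter = lambda th: waveplate(np.pi / 2, th)  # lambda/4 plate

# A lambda/2 plate at 22.5 deg rotates x-polarization by 45 deg.
x_pol = np.array([1.0, 0.0])
out = half(np.deg2rad(22.5)) @ x_pol  # linear polarization at 45 deg

# lambda/2 then lambda/4 (fast axis at 0 deg, i.e. 45 deg to the
# rotated polarization) yields circular polarization.
circ = quarter(0.0) @ half(np.deg2rad(22.5)) @ x_pol
```

This illustrates why the λ/2 plate alone (FIG. 9) can only rotate linear polarization, while adding the λ/4 plate (FIG. 10) reaches arbitrary elliptical states.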

The part of the scattered component whose polarization is orthogonal to the polarization of the mirror component does not interfere with the mirror component and thus contributes only to the dark field portion of the image. In some applications, this orthogonally polarized part of the scattered component is filtered out to increase the image contrast or reduce photon noise. This filtering can be achieved by inserting a polarization rotator into the path of the scattered component to make the unwanted polarization component linearly polarized, and then removing the unwanted component with a linear polarizer oriented to pass the desired component.

9. Amplitude attenuation

As described above, the mirror component amplifies the defect signal. The stronger the mirror component, the greater the amplification. Thus, in most cases an unattenuated, strong mirror component is preferred. This is in contrast to conventional microscopy, where the mirror component is blocked or severely attenuated to achieve high contrast in biomedical images. However, too strong a mirror component can saturate the image sensor. Saturation of the image sensor not only clips the defect signal in an undesirable way, but also distorts it. In other words, if the dynamic range is used up by the mirror component, the defect signal cannot span the number of gray levels it needs, no matter how strongly it is amplified by the mirror component. In this case, a slight attenuation of the mirror component, combined with an increase of the illumination intensity to strengthen the scattered component, is sometimes needed.
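The amplification effect can be illustrated with a short numerical sketch (the amplitudes below are arbitrary illustrative values, not values from this disclosure): for a weak defect signal amplitude |s| interfering with a mirror component amplitude |b|, the detected intensity contains the cross term 2|b||s|cos(φ), which can dwarf the pure dark-field term |s|²:

```python
import numpy as np

s = 0.01   # weak defect signal amplitude (arbitrary units)
b = 1.0    # strong mirror (specular) component amplitude
phi = 0.0  # relative phase set by the phase controller

dark_field_term = s ** 2                     # signal without the mirror component
interference_term = 2 * b * s * np.cos(phi)  # cross term, amplified by b

print(round(interference_term / dark_field_term))  # prints 200
```

Here the mirror component boosts the detectable defect signal by two orders of magnitude, which is why saturating the sensor and losing this amplification is undesirable.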

The attenuation of the mirror component using the variable aperture stop to avoid detector saturation has already been described. This section describes other attenuation methods. The simplest method is to absorb the mirror component with a light-absorbing material. However, this simple method is not suitable for wafer or reticle defect detection because the high power of the mirror component may damage the optical attenuator.

The most appropriate method of attenuating the mirror component is to reflect the excess portion of the mirror component away from the sensor plane. This kind of attenuator can easily be constructed by forming a reflective dielectric coating on one of the phase controller components shown in FIGS. 2A and 2B. The amount of attenuation can be varied by forming a number of reflective coatings, each with a different reflectivity, in a row and making them movable as shown in FIG. 2B. This type of attenuator is simple and requires no additional optical components. However, it can generate ghost images because of its highly reflective surface.

Achieving a continuous variation of the attenuation with this kind of simple attenuator is difficult. For increased performance, a continuously variable attenuator may be used. One way of making a continuously variable attenuator is to exploit the polarization properties of light. It is well known that a continuously variable attenuator can be constructed by rotating a polarizer about the axis of a linearly polarized beam, or alternatively by rotating the polarization direction of the beam passing through a fixed polarizer. FIG. 11 shows an embodiment of a continuously variable attenuator using a polarization beam splitter. FIG. 12 shows an exemplary implementation of a system using the type of attenuator shown in FIG. 11. FIGS. 13A-13C illustrate the system near the pupil or aperture stop in greater detail, in accordance with some embodiments.

In FIG. 11, a polarized laser beam 1116 is incident on a polarization beam splitter 1164 that reflects s-polarized light 1126 while transmitting p-polarized light 1110. By controlling the polarization direction of the incident light with the rotatable λ/2 plate 1162, the amount of the mirror component passing through the polarization beam splitter can be controlled in a continuous manner. After passing through the beam splitter 1164, the p-polarized light beam 1110 passes through the movable wedge-shaped glass plate 1122 and the variable aperture stop 1150 as described above. The outer rotatable λ/2 plate 1160 can be used to orient the polarization of the outgoing light in any direction. This attenuation method is well suited for wafer or reticle inspection. However, it is not completely general; it works well only for linear polarization. If more general polarization states need to be handled, additional optical components may be added to the attenuator.
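The continuous control described above follows Malus's law: the λ/2 plate rotates the polarization by twice its own rotation angle, and the polarization beam splitter passes the cos² projection. A minimal sketch (the function name and angles are illustrative assumptions, not from this disclosure):

```python
import numpy as np

def transmitted_fraction(halfwave_angle_deg):
    """Fraction of the mirror component transmitted by the polarization
    beam splitter when the lambda/2 plate is rotated by this angle.
    The plate rotates the polarization by twice its own rotation, and
    the splitter passes the cos^2 projection (Malus's law)."""
    pol_rotation = 2 * np.deg2rad(halfwave_angle_deg)
    return np.cos(pol_rotation) ** 2

# 0 deg -> full transmission; 22.5 deg -> half; 45 deg -> extinction.
for angle in (0, 22.5, 45):
    print(angle, transmitted_fraction(angle))
```

Rotating the plate through only 45° thus sweeps the attenuation continuously from 0% to 100%.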

FIG. 14 shows an embodiment of an attenuator having λ/2 and λ/4 plates that can be used to achieve arbitrary polarization states. Beam 1416 is incident on a fixed polarization beam splitter 1464 that reflects s-polarized light 1426 while transmitting p-polarized light 1410. By controlling the polarization direction of the incident light with the rotatable λ/4 plate 1466 and the rotatable λ/2 plate 1462, the amount of the mirror component passing through the polarization beam splitter can be controlled in a continuous manner. After passing through the fixed polarization beam splitter 1464, the p-polarized light 1410 passes through the movable wedge-shaped glass plate 1422 and the variable aperture stop 1450 as described above. The rotatable λ/2 plate 1460 and the rotatable λ/4 plate 1468 on the output side can be used to put the polarization of the exiting light into any state. By rotating the λ/2 and λ/4 plates, any polarization state of the mirror component can be obtained with the appropriate attenuation.

In FIG. 12, the interferometric defect detection system 1200 includes an illumination source 1212 that generates a coherent beam 1218. The beam 1218 is directed toward the surface of the sample 1210 as shown. The sample 1210 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 1210 is shown as beam 1228 and the mirror component as beam 1224. A high resolution optical system including lens systems 1214 and 1216 collects the scattered and mirror components of the light and directs them to the image sensor 1240. Subsystem 1270 is disposed in the path of the mirror component 1224 and includes a phase controller, a variable attenuator, and one or more polarization rotators as shown and described with respect to FIGS. 11-14. The scattered light beam 1228 passes through the compensation plate 1230, which compensates for the path length difference between the mirror component and the scattered component. The beam dump 1226 receives the portion of the mirror component 1224 removed by the variable attenuator.

FIG. 13A shows the compensation plate 1330 with narrow Fourier filter strips 1350, 1352, 1354, and 1356. The illumination beam 1318 is reflected toward the sample (not shown) by the prism 1380. Subsystem 1370 is arranged as shown and includes a phase controller, a variable attenuator, and one or more polarization rotators as shown and described in FIGS. 11 and 14. FIGS. 13B and 13C show cross-sectional views of the arrangement of FIG. 13A along lines E-E' and F-F'. FIGS. 13B and 13C show the compensation plate 1330 with an opening in which the various components of subsystem 1370 are disposed. The plate-type polarization beam splitter 1364 reflects s-polarized light while transmitting p-polarized light. By controlling the polarization direction of the incident light with the rotatable λ/2 plate 1362, the amount of the mirror component passing through the polarization beam splitter can be controlled in a continuous manner. The p-polarized light passes through the movable wedge-shaped glass plate 1322 and the variable stop 1350. The outer rotatable λ/2 plate 1360 can be used to orient the polarization of the exiting light in any direction.

10. High incidence angle illumination

One source of noise that must be considered is wafer pattern noise, which arises when the printed pattern on the wafer varies slightly from die to die due to variations in the manufacturing process. Wafer pattern noise increases with the penetration depth of the illumination light into the wafer surface. Therefore, it is sometimes desirable to reduce the penetration depth of the illumination light into the wafer surface.

Short wavelength light, such as deep or extreme ultraviolet light, does not penetrate far below the wafer surface because most of the materials used for wafer patterning are opaque at short wavelengths due to their strong absorption. However, long wavelength light, such as visible or near ultraviolet light, can penetrate the wafer surface quite deeply because most materials absorb little light at these wavelengths. The most common way to reduce the penetration of the illumination light into the sample surface is to illuminate the sample at a high angle of incidence with s-polarized light. Note that the angle of incidence is defined as the angle between the ray and the surface normal, not the surface itself. An extremely high angle of incidence is called grazing incidence.
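Why s-polarized, high-incidence illumination reduces penetration can be seen from the Fresnel equations: as the angle of incidence grows, the s-polarization reflectance rises toward unity, so less light enters (and penetrates) the surface. A minimal sketch for a single bare interface (the refractive index and the function are illustrative assumptions; a real wafer would need a thin-film stack model):

```python
import numpy as np

def fresnel_reflectance(n2, theta_i_deg, n1=1.0):
    """Intensity reflectances (Rs, Rp) at a planar interface; n2 may be
    complex to model absorption. Illustrative, not a wafer-stack model."""
    ti = np.deg2rad(theta_i_deg)
    cos_i = np.cos(ti)
    # Snell's law for the (possibly complex) transmitted-angle cosine.
    sin_t = n1 * np.sin(ti) / n2
    cos_t = np.sqrt(1 - sin_t ** 2)
    rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    rp = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return abs(rs) ** 2, abs(rp) ** 2

# Example with an n = 1.5 dielectric: the s-polarized reflectance grows
# rapidly with the incidence angle, unlike the p-polarized reflectance.
for angle in (0, 45, 70, 85):
    rs, rp = fresnel_reflectance(1.5, angle)
    print(angle, round(rs, 3), round(rp, 3))
```

The same trend, combined with absorption in real wafer materials, is what makes grazing-incidence s-polarized illumination effective at suppressing subsurface pattern noise.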

However, this method has a couple of disadvantages. First, it can reduce not only the wafer pattern noise but also the intensity of the defect signal light. Second, it increases the spatial frequency bandwidth of the interference term, shown in equation (2c), in the image plane. The increased spatial frequency bandwidth requires finer sampling of the image to detect the interference term accurately. This can reduce the efficiency of the catch-all mode of operation described in a following section.

Despite these drawbacks, in some applications it is desirable to increase the angle of incidence of the illumination light to reduce wafer pattern noise, especially when the advantages outweigh the disadvantages. The systems and methods described herein are flexible with respect to the angle of incidence of the illumination: they can accommodate high as well as low angles of incidence. FIGS. 15 through 18 show such embodiments.

FIG. 15 shows an embodiment of an interferometric defect detection system with high incidence angle illumination. The interferometric defect detection system 1500 includes an illumination source beam 1518 directed toward the surface of the sample 1510 as shown. The sample 1510 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 1510 is shown as beam 1528 and the mirror component as beam 1524.

A high resolution optical system including lens systems 1514 and 1516 collects the scattered and mirror components of the light and directs them to the image sensor 1540. Subsystem 1570 is disposed in the path of the mirror component 1524 and includes a phase controller and an attenuator as shown and described in FIGS. 2A and 2B. The scattered light beam 1528 passes through the compensation plate 1530 to equalize the path lengths of the mirror and scattered components. The beam dump 1526 accepts the portion of the mirror component 1524 removed by the attenuator.

FIG. 16 shows an embodiment of an interferometric defect detection system having high incidence angle illumination and a variable attenuator. The interferometric defect detection system 1600 includes an illumination source beam 1618 directed toward the surface of the sample 1610 as shown. The sample 1610 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 1610 is shown as beam 1628 and the mirror component as beam 1624. A high resolution optical system including lens systems 1614 and 1616 collects the scattered and mirror components of the light and directs them to the image sensor 1640. Subsystem 1670 is disposed in the path of the mirror component 1624 and includes a phase controller and a variable attenuator as shown and described in FIGS. 9-11. The scattered light beam 1628 passes through the compensation plate 1630 to equalize the path lengths of the mirror and scattered components. The beam dump 1626 accepts the portion of the mirror component 1624 removed by the variable attenuator.

FIG. 17 shows an embodiment of an interferometric defect detection system with low image flare and high incidence angle illumination. Flare is illumination light that is reflected or scattered by the lens surfaces on its way to the sample and ends up on the sensing plane. The interferometric defect detection system 1700 includes an illumination source beam 1718 directed toward the surface of the sample 1710 as shown. The sample 1710 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 1710 is shown as beam 1728 and the mirror component as beam 1724. A high resolution optical system including lens systems 1714 and 1716 collects the scattered and mirror components of the light and directs them to the image sensor 1740. Subsystem 1770 is disposed in the path of the mirror component 1724 and includes a phase controller and an attenuator as shown and described in FIGS. 2A and 2B. The scattered light beam 1728 passes through the compensation plate 1730 to equalize the path lengths of the mirror and scattered components. The beam dump 1726 accepts the portion of the mirror component 1724 removed by the attenuator.

FIG. 18 illustrates an embodiment of an interferometric defect detection system having low image flare, high incidence angle illumination, and a variable attenuator in accordance with some embodiments. The interferometric defect detection system 1800 includes an illumination source beam 1818 directed toward the surface of the sample 1810 as shown. The sample 1810 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 1810 is shown as beam 1828 and the mirror component as beam 1824. A high resolution optical system including lens systems 1814 and 1816 collects the scattered and mirror components of the light and directs them to the image sensor 1840. Subsystem 1870 is disposed in the path of the mirror component 1824 and includes a phase controller and an attenuator as shown and described in FIGS. 9-11 and 14. The scattered light beam 1828 passes through the compensation plate 1830 to equalize the path lengths of the mirror and scattered components. The beam dump 1826 accepts the portion of the mirror component 1824 removed by the variable attenuator.

As shown in FIGS. 15 through 18, illumination at a high angle of incidence can be achieved by moving the beam position toward the edge of the pupil plane/aperture stop or by routing the illumination light to the sample externally. Routing the illumination light externally to the sample can significantly reduce flare and stray light. All of the techniques described above for phase control, amplitude attenuation, and polarization control of the mirror component can still be used.

11. Azimuth rotation of illumination light

The defect detection sensitivity generally depends not only on the angle of incidence but also on the azimuth angle of the illumination light. The azimuth angle is defined as the angle between the pattern on the sample and the projection of the incident beam onto the sample surface. To maximize defect detection sensitivity in some applications, it is desirable to be able to change the illumination azimuth angle so that an optimum angle can be found. An effective way to cover the practical range of azimuth angles is to place a rotatable prism or mirror at a conjugate position of the sample. Such schemes are shown in FIGS. 19 through 22. The configurations of FIGS. 19 and 20 are more flexible because the illumination system and the collection system share only the high power portion of the lens system.

FIG. 19 shows an embodiment of an interferometric defect detection system having azimuthally rotatable high incidence angle illumination. The interferometric defect detection system 1900 includes an illumination source beam 1918 directed toward a rotatable and tiltable surface 1920, such as a mirror or a prism. The reflected beam passes through lens systems 1912 and 1916 and is directed toward the surface of the sample 1910 as shown. The sample 1910 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 1910 is shown as beam 1928 and the mirror component as beam 1924.

A high resolution optical system including lens systems 1914 and 1916 and beam splitter 1972 collects the scattered and mirror components of the light and directs them to the image sensor 1940. Subsystem 1970 is disposed in the path of the mirror component 1924 and includes a phase controller and an attenuator as shown and described in FIGS. 2A and 2B. The scattered light beam 1928 passes through the compensation plate 1930 to equalize the path lengths of the mirror and scattered components. The beam dump 1926 accepts the portion of the mirror component 1924 removed by the attenuator. Subsystem 1970 must move with the rotation of the mirror to follow the beam around the perimeter of the pupil. If a 50/50 beam splitter is used at surface 1972, the optical efficiency of this scheme cannot be greater than 25% because of the losses in transmission and reflection through the beam splitter. Much higher efficiency is possible if a polarizing beam splitter is used at surface 1972 and a quarter-wave plate is used in the illumination path between the beam splitter and the sample.

FIG. 20 illustrates an embodiment of an interferometric defect detection system, suitable for some applications, having azimuthally rotatable high incidence angle illumination and a variable attenuator for the mirror component. The interferometric defect detection system 2000 includes an illumination source beam 2018 directed toward a rotatable and tiltable surface 2020, such as a mirror or a prism. The reflected beam passes through lens systems 2012 and 2016 and is directed toward the surface of the sample 2010 as shown. The sample 2010 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 2010 is shown as beam 2028 and the mirror component as beam 2024. A high resolution optical system including lens systems 2014 and 2016 and beam splitter 2072 collects the scattered and mirror components of the light and directs them to the image sensor 2040. The subsystem 2070 disposed in the path of the mirror component 2024 includes a phase controller and a variable attenuator as shown and described in FIGS. 9-11 and 14. The subsystem 2070 must move with the rotation of the mirror 2020 to follow the beam around the perimeter of the pupil. The scattered light beam 2028 passes through the compensation plate 2030 to equalize the path lengths of the mirror and scattered components. The beam dump 2026 accepts the portion of the mirror component 2024 removed by the variable attenuator.

For some applications, especially large-etendue systems, there is little space available in the middle section of the lens system for a beam splitter. In this case, the beam splitter can be replaced with a beam splitter or mirror placed where more space is available. FIGS. 21 and 22 illustrate possible configurations. FIG. 21 shows an embodiment of an interferometric defect detection system with azimuthally rotatable high incidence angle illumination. The interferometric defect detection system 2100 includes an illumination source beam 2118 directed toward a rotatable and tiltable surface 2120, such as a mirror or a prism. The reflected beam is directed toward the surface of the sample 2110 as shown. The sample 2110 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 2110 is shown as beam 2128 and the mirror component as beam 2124. A high resolution optical system including lens systems 2114 and 2116 and beam splitter 2172 collects the scattered and mirror components of the light and directs them to the image sensor 2140. The subsystem 2170 disposed in the path of the mirror component 2124 includes a phase controller and an attenuator as shown and described in FIGS. 2A and 2B. Subsystem 2170 must move with the rotation of the mirror 2120 to follow the beam around the perimeter of the pupil. The scattered light beam 2128 passes through the compensation plate 2130 to equalize the path lengths of the mirror and scattered components. The beam dump 2126 accepts the portion of the mirror component 2124 removed by the attenuator.

FIG. 22 shows an embodiment of an interferometric defect detection system with azimuthally rotatable high incidence angle illumination and a variable attenuator for the mirror component, suitable for some applications. The interferometric defect detection system 2200 includes an illumination source beam 2218 directed toward a rotatable and tiltable surface 2220, such as a mirror or a prism. The reflected beam is directed toward the surface of the sample 2210. The sample 2210 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 2210 is shown as beam 2228 and the mirror component as beam 2224.

A high resolution optical system including lens systems 2214 and 2216 and beam splitter 2272 collects the scattered and mirror components of the light and directs them to the image sensor 2240. The subsystem 2270 disposed in the path of the mirror component 2224 includes a phase controller and a variable attenuator as shown and described in FIGS. 9-11 and 14. The subsystem 2270 must move with the rotation of the mirror 2220 to follow the beam around the perimeter of the pupil. The scattered light beam 2228 passes through the compensation plate 2230 to equalize the path lengths of the mirror and scattered components. The beam dump 2226 accepts the portion of the mirror component 2224 removed by the variable attenuator.

By rotating a prism or mirror disposed at a conjugate position of the sample, it is possible in principle to rotate the azimuth of the illumination beam through 360°. However, a 360° azimuthal rotation capability of the illumination light is difficult to achieve in practice because of mechanical interference with other mechanical or optical components. In some embodiments, a 180° azimuthal rotation of the illumination light is used. In this case, azimuthal rotation of the illumination light over the full 360° range relative to the sample is achieved by rotating the sample by 180°. The 180° rotation of the sample does not normally cause problems because the patterns on a wafer or reticle are predominantly oriented in the 0°-180° or 90°-270° directions. The azimuthal rotation of the illumination beam can be very effective in increasing defect detection sensitivity when combined with polarization control. The polarization control of the illumination need not be mechanically coupled to the azimuthal rotation of the illumination light; the two controls can therefore be executed independently and without difficulty. It should be appreciated that when the azimuth of the illumination beam is changed, the phase controller in the path of the mirror component must be rotated about the lens axis to follow the illumination beam path.

12. Transmissive configuration

Some samples, such as reticles and biological tissues, are much more transmissive than reflective. To inspect transmissive samples, the system may be configured in a transmission mode.

FIG. 23 shows an embodiment of an interferometric defect detection system designed to pass illumination light through a transmissive sample. The main difference from the previously described embodiments is the illumination path.

The other features are the same. The interferometric defect detection system 2300 includes an illumination source that generates a coherent beam 2318. The beam 2318 is directed toward the transmissive sample 2310 as shown. The sample 2310 may be a wafer, a reticle, or another sample to be inspected. The scattered component from the sample 2310 is shown as beam 2328 and the mirror component as beam 2324.

A high resolution optical system including lens systems 2314 and 2316 collects the scattered and mirror components of the light and directs them to the image sensor 2340. Subsystem 2370 is disposed in the path of the mirror component 2324 and includes a phase controller, a variable attenuator, and one or more polarization controllers as shown and described in FIGS. 2A-2B, 9-11, and 14. The scattered light beam 2328 passes through the compensation plate 2330 to equalize the path lengths of the mirror and scattered components. The beam dump 2326 accepts the portion of the mirror component 2324 removed by the variable attenuator.

Most reticles are both transmissive and reflective. However, they are typically used in a transmission mode. In this case, the reticle's transmission, rather than its reflection, is of final concern. Unlike conventional reticle inspection tools, the systems described here can determine the complex transmission coefficient of a point on the reticle by measuring the intensity of the transmitted light at a number of different phase shifts. Thus, the transmissive configuration described herein can be used very effectively for the inspection of reticles, especially phase-shift reticles, in terms of both performance and cost.
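Recovering a complex transmission coefficient from intensity-only measurements can be sketched with the standard four-bucket phase-shifting algorithm (the function, the unit reference amplitude, and the 90° phase steps below are illustrative assumptions; the disclosure states only that several different phase shifts are used):

```python
import numpy as np

def measure_complex_transmission(t, r=1.0):
    """Recover a complex transmission coefficient t from four intensity
    measurements taken with reference phase shifts of 0, 90, 180, and
    270 degrees. Hypothetical sketch: r is the known, real reference
    (mirror component) amplitude."""
    thetas = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    # Simulated detector intensities, one per phase shift.
    I = np.abs(r * np.exp(1j * thetas) + t) ** 2
    # Four-bucket reconstruction: I0-I2 gives 4r*Re(t), I1-I3 gives 4r*Im(t).
    return ((I[0] - I[2]) + 1j * (I[1] - I[3])) / (4 * r)

t_true = 0.6 * np.exp(1j * 0.8)
t_est = measure_complex_transmission(t_true)
print(np.allclose(t_est, t_true))  # prints True
```

This is why both the magnitude and the phase of the reticle transmission, important for phase-shift reticles, can be inspected from simple intensity images.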

13. Dual mode configuration

Some samples are both reflective and transmissive. A prime example is a reticle. To inspect this type of sample more completely, the system can incorporate reflection and transmission modes simultaneously.

An exemplary configuration of this kind of system is shown in FIG. 24. System 2400 includes a reflective inspection subsystem 2402a and a transmissive inspection subsystem 2402b. The beam 2418 from a single illumination source is directed toward a sample 2410, e.g., a reticle. The reflected and transmitted light beams are detected simultaneously by two separate image sensors 2440a and 2440b. Phase control and attenuation are achieved through the respective subsystems 2470a and 2470b. The working principles described previously apply without change; the control of the relative phase, mirror component amplitude, azimuthal rotation, and polarization described above is performed in the same way.

For reticle inspection, die-to-die subtraction techniques typically cannot be used. In this case, the reference image of a defect-free reticle can be generated from the reticle data used to manufacture the reticle pattern. This is a demanding computational task typically performed by a computer. The image of the actual reticle is then compared with the computer-generated image of the defect-free reticle to find defects. To facilitate rapid inspection, the images of the defect-free reticle must be generated very quickly. Fully coherent illumination sources, such as lasers, minimize the computation required to simulate the reticle image, allowing rapid image generation with minimal computational resources.
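The computational advantage of coherent illumination can be sketched as follows: a coherent image is a single low-pass filtering of the complex mask amplitude followed by a squared magnitude, whereas partially coherent imaging requires summing many such terms. The sketch below (the mask pattern, grid size, and pupil cutoff are illustrative assumptions, not values from this disclosure) shows the single-FFT coherent case:

```python
import numpy as np

def coherent_image(mask, cutoff):
    """Coherent image: low-pass the complex mask amplitude with the
    pupil, then take the squared magnitude. `cutoff` is the pupil radius
    in cycles per pixel (an illustrative parameterization)."""
    n = mask.shape[0]
    F = np.fft.fft2(mask)
    f = np.fft.fftfreq(n)
    fy, fx = np.meshgrid(f, f, indexing="ij")
    pupil = (np.hypot(fx, fy) <= cutoff).astype(float)
    amplitude = np.fft.ifft2(F * pupil)  # one amplitude filtering pass
    return np.abs(amplitude) ** 2        # detected intensity

# A simple binary mask with a single clear line feature.
mask = np.zeros((64, 64))
mask[:, 30:34] = 1.0
image = coherent_image(mask, cutoff=0.25)
```

A partially coherent simulator would repeat this filtering for many mutually incoherent source points (or eigenfunctions of the transmission cross-coefficient) and sum the intensities, which is why coherent laser illumination keeps the reference-image computation small.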

14. Multiple wavelength configurations

Generally, shorter wavelengths provide higher defect detection sensitivity. However, the detection sensitivity for some defects does not follow this general rule. Thus, in some applications, multiple wavelengths may be used to detect a variety of defects more effectively. Multiple wavelengths can be implemented cost-effectively in either a sequential or a simultaneous mode of operation.

Sequential multiple wavelengths

In this configuration, only one image sensor is used, and one wavelength is used at a time to detect defects. The hardware is simple, but operation takes more time than with the configuration for simultaneous multiple wavelength operation. The continuously variable phase controller needs no modification to accommodate different wavelengths, but the wave plates used for amplitude attenuation and polarization control must be changed to handle a different wavelength.

FIGS. 25-27 illustrate some possible means for changing the λ/2 plate. FIG. 25 shows an embodiment of a carousel 2510 that holds two λ/2 plates for two different inspection wavelengths. FIG. 26 shows an embodiment of a carousel 2610 that holds three λ/2 plates for three different wavelengths. FIG. 27 shows an embodiment of a carousel 2710 that holds four λ/2 plates for four different wavelengths. Arrangements similar to those shown in FIGS. 25-27 may be applied to λ/4 plates. When the wavelength is switched, the wave plate is switched accordingly by rotating the wave plate carousel by an appropriate amount. A wave plate needs to be rotated by no more than 90° to cover all possible amplitude attenuations and polarization states. Thus, up to four wave plates for four different wavelengths can be packaged in a single mount as shown in FIG. 27. If the beam size is not very small compared to the area of each wave plate, two or three plates in a single mount, as shown in FIGS. 25 and 26, are more practical.

Simultaneous multiple wavelengths

Multiple wavelengths can be used simultaneously by adding a wavelength splitter and a separate image sensor for each wavelength. FIG. 28 shows an exemplary system configuration for two wavelengths. The system 2800 for inspection of the sample 2810 uses two separate illumination source beams 2818a and 2818b with two different wavelengths. The two wavelengths are combined and separated by a dichroic wavelength splitter 2872. The two wavelengths share the same front end of the collection optics 2816, which is typically the most critical and most expensive part of the whole optical system. By sharing the front end of the collection optics 2816, the system achieves simplicity as well as stability. The inexpensive back end lens components 2812 and 2814 have low optical power and are therefore kept separate, providing maximum flexibility in magnification control and sensor selection. Subsystems 2870a and 2870b are used to control phase and attenuation as shown and described in FIGS. 2A-2B, 9-11, and 14.

Each wavelength also uses its own compensation plate 2830a, 2830b and image sensor 2840a, 2840b. In some embodiments, 266 nm and 532 nm are used. The technologies for producing these two wavelengths are mature, and a single laser system can provide both wavelengths, thereby reducing cost. It should be appreciated that shorter wavelengths, such as 193 nm, vacuum ultraviolet, extreme ultraviolet, etc., can be used to achieve much higher sensitivity; however, shorter wavelengths are harder to handle. In some embodiments, more than two wavelengths are used by adding further wavelength splitters to the rear end optical path.

To eliminate the wavelength splitters and save image sensors, all the phase controllers may be placed next to each other in the same pupil plane. However, this configuration makes the mechanical design more difficult and increases pupil obscuration. Alternatively, the system may be configured so that multiple wavelengths, or broadband illumination, share the same phase controller. This configuration reduces the number of phase controllers but makes precise control of the phase difficult.

15. Extended source

For many applications, a single spatial-mode laser producing a highly coherent beam is a good light source, as described above. However, in some embodiments a light source other than a single-mode laser may be used. For example, an extended source such as an arc lamp may be used, as shown in Figures 29-31. An extended source is defined here as an incoherent source whose etendue is greater than the square of its wavelength.

Figure 29 shows an embodiment of an interferometric defect detection system using an extended source with a low-incidence-angle illumination system. The incident illumination beam 2918 is directed toward the sample 2910 using a beam splitter 2972. The mirror component, denoted as beam 2924, passes through a phase controller and attenuator 2970 similar to the subsystems shown and described in Figures 2a-2b, 9-11 and 14. The scattered component, denoted as beam 2928, passes through compensation plate 2930. The front end optical system 2916 and the rear end optical system 2914 collect light from the sample and direct it toward the image sensor 2940.

Figure 30 shows an embodiment of an interferometric defect detection system with high-incidence-angle illumination using an extended source. The incident illumination beam 3018 is directed toward the sample 3010 using a beam splitter 3072. The mirror component, denoted as beam 3024, passes through a phase controller and attenuator 3070 similar to the subsystems shown and described in Figures 2a-2b, 9-11 and 14. The scattered component, denoted as beam 3028, passes through compensation plate 3030. The front end optical system 3016 and the rear end optical system 3014 collect light from the sample and direct it toward the image sensor 3040.

Figure 31 shows an embodiment of an interferometric defect detection system with high-incidence-angle illumination using an extended source, with the phase controller placed in the path of the scattered light. The incident illumination beam 3118 is directed toward the sample 3110 using a beam splitter 3172. The mirror component, denoted as beam 3124, passes through compensation plate 3130. The scattered component, denoted as beam 3128, passes through a phase controller and attenuator 3170 similar to the subsystems shown and described in Figures 2a-2b, 9-11 and 14. The front end optical system 3116 and the rear end optical system 3114 collect light and direct it toward the image sensor 3140.

An extended source has the advantage that it spreads the light energy uniformly over a large area of the imaging system's lens components. This reduces the possibility of lens damage due to the high power density of the illumination beam or the mirror component. However, extended sources also have disadvantages. For example, it is difficult to separate the mirror component from the scattered component: some of the scattered component inevitably overlaps the mirror component even at the pupil plane. This makes precise control of the relative phase between the scattered and mirror components difficult, and inaccurate phase control usually results in poor performance. Another disadvantage is that the optical efficiency is reduced because of increased pupil obscuration. Also, Fourier filtering of pattern noise is less effective with an extended source, because the blocking strips in the pupil plane generally need a relatively large footprint.

III. Operating modes

The system described herein can be operated in many different ways. A detailed description of various different operating modes will be provided below.

1. High sensitivity mode

This mode targets a particular type of defect, especially defect types that can adversely affect chip production yield. The relative phase between the scattered component and the mirror component is typically set to maximize the defect signal. However, the relative phase can instead be set to minimize the wafer pattern noise or to maximize the signal-to-noise ratio of the defect signal. In most cases, these choices are equivalent.

As already explained, the signal-to-noise ratio can be increased up to twice the intrinsic signal-to-noise ratio through noiseless amplification of the signal by the mirror component. As already shown, noiseless amplification is important for the detection of weak defect signals. If the detailed physical properties of the defect and the surrounding circuit pattern are not known, the desired or ideal relative phase value may be determined experimentally. For example, the catchall mode introduced in the next section can be run on a sample to determine an optimal phase value experimentally. On the other hand, if the physical properties of the defect are known, the optimal phase value for detection can be set based on theory or numerical simulation.
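The factor-of-two limit can be seen in a simple photon-counting sketch (illustrative numbers only; this is not the patent's derivation):

```python
import numpy as np  # kept for consistency with the other sketches

# With N detected photons the shot noise is sqrt(N). A dark field measurement
# of a weak defect (|s|^2 photons) therefore has SNR |s|^2 / |s| = |s|, while
# interference with a strong mirror component b contributes signal 2*b*|s|
# over shot noise sqrt(b^2) = b, giving SNR 2|s|.
s = 3.0            # defect amplitude in sqrt(photon) units (weak: |s|^2 = 9)
b = 1000.0         # strong mirror component amplitude

snr_dark = s**2 / s               # intrinsic dark field SNR = |s|
snr_interference = 2 * b * s / b  # interference-term SNR = 2|s|
print(snr_interference / snr_dark)   # -> 2.0, twice the intrinsic value
```

The ratio is independent of how strong the mirror component is, as long as it dominates the photon budget.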

Equation (2c) shows that Φs, the relative phase between the defect signal amplitude and the mirror component, is the key parameter for maximizing the defect signal. It indicates that the extreme values of the defect signal occur when Φs = 0° or 180°. If Φs = 0°, the interference term is positive; if Φs = 180°, it is negative. As described above, the total defect signal consists of a dark field term and an interference term. Thus, to maximize the total defect signal, the sign of the interference term must be made the same as that of the dark field term. The sign of the dark field term cannot be controlled; it can be positive or negative depending on the physical characteristics of the defect and the surrounding pattern. The phase of the interference term, however, can be controlled to obtain the maximum defect signal.

If the sign of the dark field term is positive, choosing Φs = 0° maximizes the total defect signal. If the sign of the dark field term is negative, choosing Φs = 180° maximizes the total defect signal. In order to clearly illustrate the advantages of the described systems and methods, realistic but simple defects are selected for the numerical simulations below.

Further, as described above, the relative phase can be changed by shifting the phase of either the mirror component or the scattered component. In practice, however, it is much easier to shift the phase of the mirror component, since the mirror component generally has a low etendue. Thus, in all the numerical simulations, the phase of the mirror component is varied to obtain the optimal relative phase value. Although the numerical simulations are limited to certain types of defects, the systems and methods described herein are generally applicable to any type of defect.
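The phase-optimization logic can be sketched numerically. The snippet below uses illustrative amplitudes together with the inherent phase quoted later in this section for the 40 nm defect; it mirrors the structure of equation (2c) with attenuation t = 1, and is not the full imaging simulation:

```python
import numpy as np

# 'b' is the mirror (specular) amplitude; 's' is the complex defect signal,
# here given the -144 deg inherent phase quoted for the 40 nm defect.
b = 1.0
s = 0.05 * np.exp(1j * np.deg2rad(-144.0))

phis = np.deg2rad(np.arange(360))   # phase added by the phase controller, 0..359 deg
# Defect signal = dark field term |s|^2 plus interference term 2|b||s|cos(total phase).
signal = np.abs(s) ** 2 + 2 * b * np.abs(s) * np.cos(np.angle(s) + phis)

best = int(np.rad2deg(phis[np.argmax(signal)]).round())
print(best)   # 144: the controller phase that brings the total relative phase to 0
```

Scanning the controller phase and keeping the extremum is exactly the experimental procedure suggested above when the defect properties are unknown.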

Figures 32a and 32b show the type of defect used for the numerical simulations. The defect is cylindrical, with height or depth equal to its diameter. Figure 32a shows a particle-type defect 3210 with diameter and height "d". Figure 32b shows a void-type defect 3212 with diameter and depth "d". The defect material is assumed to be the same as the sample material. This type of defect is called a phase defect because it induces a phase change, rather than an amplitude change, in the reflected light. Being purely phase objects, such defects lie at one extreme of the full spectrum of possible defect types.

The other extreme is the pure amplitude defect. Amplitude defects have properties opposite to those of phase defects: they have zero height but a reflectivity different from that of the surrounding area. Most real defects are neither pure phase nor pure amplitude defects; they generally differ from their surroundings in both phase and amplitude. In this section only signals from phase defects are simulated, but the equations and computer programs used for the simulations are general and can handle other types of isolated cylindrical defects.

A wavelength of 266 nm was used for the simulations described herein, and the numerical aperture (NA) of the signal collection system was 0.9. The central obscuration due to the phase controller and its mount was assumed to be 0.2 NA.

The equations used for image formation were derived from the scalar theory of diffraction. Scalar equations are less accurate than vector equations. However, they are accurate enough to compare the performance of the systems and methods described herein with that of the prior art. They also provide quite accurate estimates of signal strength and signal shape for defects smaller than a quarter wavelength, which are the main concern here. In addition, scalar equations generally allow more direct physical insight than vector equations, and are therefore better suited to explaining the important concepts underlying the systems and methods described herein. The effect of defect height is modeled as an abrupt phase change. This approximation is justified as long as the imaging system collects only the radiative part of the light wave; it is not appropriate for near-field microscopes, which also collect the evanescent part. The derived equations are general enough to handle other types of isolated cylindrical defects. The following notation is used in the equations:

h: defect height

a: relative amplitude of the defect

b: relative amplitude of the surrounding region

ρ1: numerical aperture of the central obscuration of the imaging system

ρ2: numerical aperture of the aperture stop of the imaging system

t: amplitude transmission of the attenuator

Φ: phase (in radians) added to the mirror component by the phase controller

The complex amplitude O(x, y) of the sample reflectivity can be expressed as:

[Equation (3): image 112011000096113-pct00007]

Equation (3) above can be rewritten as follows:

[Equation (4): image 112011000096113-pct00008]

The first angle bracket represents a pure phase object, and the second angle bracket represents a pure amplitude object. Thus, generally speaking, a very small defect can be decomposed into a pure phase defect and a pure amplitude defect.
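This decomposition can be checked numerically. The sketch below (with made-up values for a, b and the defect height) splits the reflectivity difference a·e^{iφ} − b into a phase-only part and an amplitude-only part; it is one algebraically valid grouping consistent with the statement above, not necessarily the exact grouping of equation (4):

```python
import numpy as np

# Illustrative values: relative amplitudes of defect and surroundings, and the
# phase induced by a hypothetical 7 nm defect height at a 266 nm wavelength.
a, b = 0.8, 1.0
phi = 4 * np.pi * 7 / 266

defect_minus_surround = a * np.exp(1j * phi) - b

# Pure phase part: same amplitude b as the surroundings, carrying phase phi.
phase_part = b * (np.exp(1j * phi) - 1)
# Pure amplitude part: the amplitude difference (a - b), at the defect phase.
amplitude_part = (a - b) * np.exp(1j * phi)

assert np.isclose(defect_minus_surround, phase_part + amplitude_part)
print("decomposition holds")
```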

Normally incident illumination is adopted to maintain the circular symmetry of the system; circular symmetry avoids confusion in the signal graphs. Oblique illumination can be modeled as easily as normal illumination. Normally incident illumination with unit intensity is expressed as follows:

Illu(x, y) = 1          (5)

The complex amplitude W(x, y) of the reflected light is expressed as follows:

[Equation (6): image 112011000096113-pct00009]

If the coordinates are expressed in units of wavelength:

[Equation (7): image 112011000096113-pct00010]

The amplitude of the diffraction pattern, Q(α, β), observed at the pupil plane is the Fourier transform of W(x', y'). Therefore, the complex amplitude at the pupil plane is:

[Equation (8): image 112011000096113-pct00011]

The pupil transmission, Pupil(ρ), and the phase control applied to the mirror component are expressed as follows:

[Equations (9)-(10): image 112011000096113-pct00012]

The effect of defocusing the sample at the detector plane can be introduced at the pupil as follows:

[Equation (11): image 112011000096113-pct00013]

Combining the pupil transmission and the defocus effect:

[Equation (12): image 112011000096113-pct00014]

The complex amplitude V(α, β) of the reflected light at the pupil is:

[Equation (13): image 112011000096113-pct00015]

The complex amplitude of the light at the image plane is the inverse Fourier transform of V(α, β):

[Equation (14): image 112011000096113-pct00016]

The light intensity I(x') at the image plane is:

[Equation (15): image 112011000096113-pct00017]

These equations were used for all the defect signal simulations. The value of equation (15) is computed numerically, for example using the Python programming language.
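A compact 1-D sketch of this computation is shown below, under stated assumptions: scalar diffraction, an annular pupil of 0.2-0.9 NA, phase and attenuation applied only to the specular (DC) term, and the defocus factor of equations (11)-(12) omitted. All names and grid values are illustrative; this is not the author's program:

```python
import numpy as np

lam = 266.0            # wavelength, nm
na1, na2 = 0.2, 0.9    # central obscuration and aperture-stop NAs
t = 1.0                # attenuator amplitude transmission for the mirror beam
phi = np.deg2rad(144)  # phase added to the mirror component

n, dx = 4096, 5.0      # number of samples and grid spacing (nm)
x = (np.arange(n) - n // 2) * dx

# Sample: uniform reflectivity with a 40 nm wide phase defect of 40 nm height.
W = np.ones(n, dtype=complex)
W[np.abs(x) < 20] *= np.exp(1j * 4 * np.pi * 40 / lam)

# Pupil coordinate: spatial frequency times wavelength = sin(angle) = NA.
rho = np.fft.fftfreq(n, d=dx) * lam
Q = np.fft.fft(W)                                   # field at the pupil plane
P = ((np.abs(rho) >= na1) & (np.abs(rho) <= na2)).astype(complex)
P[0] = t * np.exp(1j * phi)   # mirror (DC) term: attenuated and phase shifted
I = np.abs(np.fft.ifft(Q * P)) ** 2                 # image intensity, eq. (15)

# The defect shows up as an intensity change against the flat background.
center, background = I[n // 2], I[0]
print(bool(center > background))
```

With the +144° setting the interference term adds constructively at the defect, so the defect appears as a peak above the background, consistent with curve 3312 discussed below.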

Figures 33 to 35b show numerical simulation results obtained with the program. Figure 33 shows the simulation results for a defect 40 nm in diameter. Curve 3310 depicts the simulation result for a conventional bright-field mode system, and curves 3312 and 3314 show the simulated results for the interferometric method described herein using the high-sensitivity mode with phase angles of 144° and -36°, respectively, applied to the mirror component. Curve 3316 shows the simulation result for a conventional dark-field system.

Figure 34 shows the simulation results for a defect 20 nm in diameter. Curve 3410 depicts the simulation result for a conventional bright-field mode system, and curves 3412 and 3414 show the simulated results for the interferometric method described herein using the high-sensitivity mode with phase angles of 117° and -63°, respectively, applied to the mirror component. Curve 3416 shows the simulation result for a conventional dark-field system.

Figure 35a shows the simulation results for a defect 10 nm in diameter. Curve 3510 depicts the simulation result for a conventional bright-field mode system, and curves 3512 and 3514 show the simulated results for the interferometric method described herein using the high-sensitivity mode with phase angles of 104° and -76°, respectively, applied to the mirror component. Curve 3516 shows the simulation result for a conventional dark-field system.

The label "BF" in the figures denotes a conventional system using the bright-field mode, included for comparison. "HS" denotes the high-sensitivity mode. The angle value is the phase angle introduced into the mirror component to obtain one of the two extremes of the defect signal, as described above. The positive angles correspond to Φs = 0° and the negative angles to Φs = 180°.

The angle Φs is not the phase angle introduced into the mirror component. Rather, Φs is the sum of the phase angle introduced into the mirror component and the inherent phase difference between the defect signal and the mirror component. The inherent phase difference is the phase difference a conventional bright-field mode system would have. The inherent phase differences of the simulated defect signals are -144°, -117° and -104° for the 40 nm, 20 nm and 10 nm defects, respectively. These inherent phase differences are very different from 0° or ±180°; this is why the conventional bright-field inspection mode cannot perform well or stably.

The phase controller adds or subtracts an appropriate phase angle to make the total phase difference between the defect and its surroundings 0° or 180°. For the simulated defect signals, the phase controller adds 144°, 117° and 104° to the inherent signals from the 40 nm, 20 nm and 10 nm defects to make the total phase difference 0°. Alternatively, the phase controller subtracts 36°, 63° and 76° from the inherent signals of the 40 nm, 20 nm and 10 nm defects to make the total phase difference -180°.

The labels in Figures 33 to 35b indicate the phase angle added by the phase controller, not the total phase including the inherent phase difference. "BF" means that no phase angle is added (or subtracted); therefore "BF" is equivalent to "HS: 0°", the high-sensitivity mode with no phase added to the mirror component. Note also that, in each figure, the difference between the two phase angles corresponding to the two extreme defect signals is 180°. "DF" in the figures denotes a dark-field system.

Several important facts can be drawn from the simulated results. First, the intensity of the dark-field signal decreases very rapidly as the defect becomes smaller than a quarter of the wavelength. If the defect signal happens to interfere constructively with light scattered from the surrounding pattern, the dark-field signal can be higher than shown in the figures; but this kind of interference cannot be controlled and depends entirely on luck. Thus, for defects smaller than a quarter wavelength, the dark-field defect signal is generally expected to be too weak to be detected reliably. A significant portion of the critical defects on semiconductor wafers in the near future is predicted to be much smaller than a quarter of the wavelength. In fact, line widths are expected to approach a quarter of the 193 nm wavelength as shortened by the refractive index of the immersion fluid. Therefore, the future of current dark-field inspection technology looks dim.

Second, the phase change required on the mirror component to make the relative phase between the defect signal and the mirror component 0° or 180° is not necessarily ±90°, even though the defects used in the simulations are phase objects. In fact, the phase change required on the mirror component for the maximum defect signal depends on the size of the phase object. This is an important difference between this inspection technique and a phase-contrast microscope, in which a fixed ±90° phase is added to the mirror component for maximum image contrast. This simple example shows that continuous variation of the relative phase between the defect signal and the mirror component is desirable for reliable defect detection. Simulating signals from more general defects would show even more clearly the value of a continuously variable phase controller.
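The size dependence admits a back-of-envelope check (a sketch, not the full simulation): if the scattered amplitude of a small phase step of height h is taken to be proportional to e^{iθ} − 1 with θ = 4πh/λ, its phase leads the mirror component by 90° + θ/2, so the controller phase needed to bring the total relative phase to 0° is 90° + θ/2:

```python
import numpy as np

# For h = 40, 20 and 10 nm at 266 nm, theta/2 = 2*pi*h/lam in angular terms,
# so the required controller phase is 90 deg + theta/2.
lam = 266.0
required = [round(90 + np.rad2deg(2 * np.pi * h / lam)) for h in (40, 20, 10)]
print(required)   # [144, 117, 104], matching the values quoted for Figs. 33-35a
```

The agreement with the simulated values also shows why a fixed ±90° phase plate, as in a phase-contrast microscope, is only optimal in the limit of vanishing defect height.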

For example, if the signal from a pure amplitude defect were simulated, the optimal phase value for the phase controller would be 0° or 180°. These phase values are very different from those found in the pure-phase-defect examples. In practice, the phase controller should be able to provide any phase shift value in order to detect all kinds of defects reliably. Thus, a continuously variable phase controller is not merely desirable; it is necessary if defects are to be detected reliably. The systems and methods described herein employ phase controllers that can vary the relative phase in a continuous manner.

Third, the defect signal is significantly amplified relative to the conventional bright-field signal by appropriately changing the relative phase. Moreover, the signal amplification is more significant when the defect is small. An operational advantage of running at the maximum defect signal is improved signal stability, because the first-order sensitivity of the signal to external perturbations is zero when the signal intensity is at an extremum. Thus, the systems and methods described herein can provide higher defect detection sensitivity with good stability.

The phase controller may also be used to reduce the amplification of unwanted defect signals. A good example is wafer pattern noise, which is actually not noise but an unwanted defect signal. In most defect detection situations it is desirable to suppress wafer pattern noise. If suppressing the wafer pattern noise is more important than amplifying the defect signal of interest, the phase controller can be set to minimize the wafer pattern noise rather than to maximize the defect signal of interest.

In Figures 33 to 35a the bright-field signals are still fairly large. The important point, however, is that this is true only for the defect types used in the simulations. The bright-field signal can be much smaller for some types of real defects. To understand this issue, the defect signal can be written more explicitly. The defect signal s is the difference between the raw signal from the defect and the mirror component [see equation (3)]. Therefore, the defect signal amplitude at the position of the defect is:

[Equations (16)-(17): image 112011000096113-pct00018]

where Rd is the reflectivity of the defect, and

h is the height of the defect

From equation (17), the following condition can be derived:

[Equation (18): image 112011000096113-pct00019]

where Rsur is the reflectivity of the surrounding area.

When equation (18) is satisfied, the defect signal is purely imaginary; that is, the phase difference Φs between the defect signal and the mirror component is ±π/2. In this case, the interference term in equation (2c) becomes zero and does not contribute to the bright-field signal. As a result, the bright-field signal reduces to the very weak dark-field signal for small defects. This shows that a bright-field system is inherently blind to some types of defects.

A good example is a small, highly reflective particle on top of a silicon wafer. Such a particle can satisfy equation (18) because its reflectivity is larger than that of silicon. If the particle satisfies equation (18) exactly, a bright-field system will have difficulty finding it. Figure 35b clearly illustrates the problem. The defect size is 10 nm, but its reflectivity is set 26% higher than that of the surrounding area so that equation (18) is satisfied.
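The 26% figure can be checked directly. Assuming, consistent with the discussion above, that equation (18) expresses the vanishing of the real part of the defect signal in equation (17), the condition is sqrt(Rd)·cos(4πh/λ) = sqrt(Rsur):

```python
import numpy as np

# Required reflectivity ratio R_d / R_sur = 1 / cos^2(4*pi*h/lam) for a
# 10 nm high particle at a 266 nm wavelength.
lam, h = 266.0, 10.0                            # wavelength and height, nm
ratio = 1.0 / np.cos(4 * np.pi * h / lam) ** 2  # required R_d / R_sur
print(round(100 * (ratio - 1)))                 # -> 26 (percent excess reflectivity)
```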

Under these conditions, the bright-field signal 3521 and the dark-field signal 3522 are essentially zero. However, the signal is fully recovered by controlling the relative phase between the scattered component and the mirror component: a 90° relative phase shift produces signal 3523, and a -90° relative phase shift produces signal 3524. This example illustrates the power of the interferometric defect detection systems and methods described herein.

It may seem counterintuitive that a bright-field system can be blind to a highly reflective defect, but there is a reason, which can be understood at least qualitatively by considering two extreme situations. Intuition tells us that if the reflectivity of the defect is much lower than that of the surrounding region, the bright-field signal has a negative sign, i.e., a dip in its profile. Intuition also tells us that if the reflectivity of the defect is much higher than that of the surrounding region, the bright-field signal has a positive sign, i.e., a peak in its profile. It follows that the bright-field signal must pass through zero at some intermediate defect reflectivity. Thus, a bright-field system has a fatal blind spot for some types of defects. If the defect is relatively large, the chance of satisfying equation (18) is small.

Thus, a bright-field system is less likely to be blind to larger defects. However, if the defect is much smaller than a quarter wavelength, the chance of the defect satisfying equation (18) increases significantly. Defect sizes are shrinking very quickly, so it is predicted that bright-field systems will not be able to reliably detect the rapidly shrinking defects associated with future technologies. The systems and methods described herein control the relative phase between the defect signal and the mirror component; in this example, when the phase controller shifts the phase of the mirror component by ±π/2, the interference term fully restores the signal.

In Figure 33, signal curve 3312 has a slightly larger absolute amplitude than signal curve 3314. This is because the dark field term and the interference term in equation (2c) carry the same sign and add constructively for curve 3312, whereas for curve 3314 they carry opposite signs and add destructively. Thus, in this particular example, curve 3312 is a better choice for defect detection than curve 3314, although the difference between the two choices is small here. The high-sensitivity operating mode allows selecting the optimal signal curve for any particular type of defect.

Because of diffraction from the sharp edges of the imaging system's aperture, the defect signal generally changes sign as the measurement point moves toward the outer periphery of the signal, as shown in Figures 33-35b. Thus, if the signal is to be spatially integrated to maximize the total signal, it is important to convert all parts of the signal to positive values before integration. The noise is spatially uniform, since the main noise source is photon noise from the mirror component, which is itself spatially uniform. Consequently, the signal has the largest signal-to-noise ratio at its center or peak, and a low signal-to-noise ratio at its periphery.

It is therefore desirable for the sign-conversion process to weight the high signal-to-noise parts of the signal heavily and the low signal-to-noise parts lightly. Both squaring the signal and taking its absolute value convert all parts of the signal to positive values. However, squaring automatically puts more weight on the higher-quality parts of the signal, whereas taking the absolute value weights all parts equally. Squaring is thus a better conversion process than taking the absolute value, although it requires more computation time. If computational resources are limited in a real system, some compromise between performance and speed is needed.
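The weighting argument can be illustrated with a toy signal whose side lobes change sign, such as a sinc profile (an assumed shape, chosen only for illustration):

```python
import numpy as np

x = np.linspace(-4, 4, 801)
signal = np.sinc(x)              # central peak with sign-changing side lobes

abs_version = np.abs(signal)     # equal weight everywhere
sq_version = signal ** 2         # weights the high-SNR center more heavily

# Fraction of the integrated result contributed by the central lobe |x| < 1:
center = np.abs(x) < 1
frac_abs = abs_version[center].sum() / abs_version.sum()
frac_sq = sq_version[center].sum() / sq_version.sum()
print(bool(frac_sq > frac_abs))  # squaring concentrates weight at the center
```

Squaring suppresses the noisy periphery relative to the high-quality central peak, which is exactly the weighting behavior argued for above.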

Contrast enhancement

As described above, a strong mirror component provides high noiseless amplification of the defect signal. High noiseless amplification of the defect signal leads to high defect contrast in the subtracted image, which in turn leads to a more sensitive and stable defect detection system. Therefore, a strong mirror component is generally preferred. Note that a stronger mirror component enhances the contrast of the subtracted image but reduces the contrast of the raw image. The contrast that matters for defect detection is that of the subtracted image, not of the raw image before subtraction. This criterion is contrary to that of conventional microscopes and their derivatives, including phase-contrast types, which strive to increase the contrast of the raw image. However, a mirror component that is too strong can saturate the image sensor if its dynamic range is not large enough, which distorts the defect signal in undesirable ways and leaves an insufficient number of gray levels for the signal. Therefore, when the dynamic range of the image sensor is a limitation, the contrast of the raw sample image needs to be increased, and the mirror component reduced, in order to avoid distortion of the defect signal.

If the defects or wafer patterns are much smaller than the wavelength, considerable attenuation of the mirror component is useful for obtaining adequately high image contrast. Numerical simulations confirm the effectiveness of this method of contrast enhancement.
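The mechanism can be sketched with a single-point model (illustrative numbers, not the simulation behind Figures 36 and 37):

```python
import numpy as np

# Attenuating the mirror amplitude b raises the relative modulation produced
# by a weak defect signal s in the raw image.
b = 1.0
s = 0.02 * np.exp(1j * np.pi)     # weak defect signal, 180 deg out of phase

def contrast(t):
    # t^2 is the intensity transmission of the attenuator in the mirror path
    peak = np.abs(t * b + s) ** 2
    background = (t * b) ** 2
    return abs(peak - background) / (peak + background)

low = contrast(1.0)                   # no attenuation
high = contrast(np.sqrt(1 - 0.96))    # 96% intensity attenuation, as in Fig. 36
print(bool(high > low))
```

The interference term scales as t while the background scales as t², so reducing t always increases the raw-image contrast, at the cost of light energy as discussed below.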

Figure 36 shows the image of a 40 nm defect whose contrast is enhanced by attenuating the intensity of the mirror component by 96%. Curve 3610 shows the result after attenuating the mirror component, and curve 3612 shows the result before attenuation.

Figure 37 shows the enhanced contrast of the image of a 20 nm defect, achieved by attenuating the intensity of the mirror component by 99.9%. Curve 3710 shows the result after attenuating the mirror component, and curve 3712 shows the result before attenuation. Note that the amounts of attenuation used in these simulations are excessive; they are neither recommended nor practical in many cases, but are used here to demonstrate the contrast-enhancing capability of the technique.

As expected, smaller defects require stronger attenuation of the mirror component to achieve the same image contrast. Defects and circuit pattern dimensions on the wafer will continue to shrink steadily, and achieving high dynamic range in an image sensor can be difficult and expensive. Therefore, strong attenuation of the mirror component will be needed to cope with future small defects. This is why, in many embodiments, an attenuator is placed in the path of the mirror component.

One of the disadvantages of this type of contrast enhancement is the large loss of light energy. To compensate for the energy loss caused by attenuating the mirror component, more light can be supplied to the illumination path, or the detector signal can be integrated for a longer time. For many applications, neither of these options is desirable, because an intense illumination beam can damage the sample and a long detector integration time reduces throughput. Thus, contrast enhancement should be used with care, taking its undesirable side effects into account. Using a large illuminated area on the sample together with a proportionally large detector array can reduce the likelihood of sample damage from intense illumination while preserving throughput, but this generally requires a more expensive system design.

Note that even though the mirror component is severely attenuated in the simulations in order to show the contrast enhancement clearly, in most real cases such extreme contrast enhancement is not required, thanks to the wide dynamic range of the image sensors used in current defect detection systems. Moderate contrast enhancement is not only acceptable in practice; it is also desirable in view of the competing requirements of signal amplification, efficient use of light energy, and system throughput.

An important conclusion can be drawn from the shapes of the defect images 3610 in Figure 36 and 3710 in Figure 37. The shape of the defect image indicates that the interference term still dominates despite the large attenuation of the mirror component; even at 99.9% attenuation, the interference term remains dominant. The interference term arises from the noiseless amplification of the signal by the mirror component, and a low sample reflectivity has almost the same effect as a high attenuation of the mirror component. Thus, the dominance of the interference term in spite of the very high attenuation means that noiseless signal amplification by the mirror component works efficiently even for samples of extremely low reflectivity. This implies that all the systems and methods described herein that rely on noiseless signal amplification by the mirror component work well on virtually any type of sample. In fact, the smaller the defect, the more effective the noiseless amplification of the defect signal by the mirror component becomes. A more detailed supporting example is given in the section entitled "Limitations of the dark field mode" below.

Selection of polarization

As mentioned above, in most cases the signal-to-noise ratio of the defect signal depends on the polarization states of the illumination and of the collected light. It is therefore important to select the correct polarizations for the defects of interest. The selection can be made by intuition, theoretical modeling, numerical simulation, or experiment. However, it is generally impractical to test all the different polarization combinations, because there are so many of them. As long as neither the defect nor its neighboring patterns have a helical structure, the polarization choices can be limited to combinations of linear polarizations.

2. Catch All Mode

Defects can change not only the amplitude but also the phase of the scattered light, and different types of defects affect the amplitude and phase differently. Thus, if both the amplitude and the phase of the scattered light are measured, not only are more defects captured, but more information about each defect is also obtained. The catchall mode relies on determining both the amplitude and the phase of the defect signal. Since the defect signal is completely determined by its amplitude and phase, if the noise is sufficiently low, the catchall mode can in principle catch virtually all kinds of defects in a single run.

Defects can be classified more accurately when both their amplitude and phase information are available. For example, the phase information can determine whether a defect is particle-like or void-like, mesa-like or valley-like, and the size of the defect can be estimated from the amplitude information. An example is described in the three-scan method section below.

If additional data, such as the sample substrate and pattern materials and the surrounding pattern geometry, are also used, an even more precise defect classification is possible.

A more precise defect classification typically saves a significant amount of time in the costly defect review process, since defect review generally requires the use of expensive but slow electron microscopes. In addition, the information collected in the catchall mode of operation can be very useful for the proper setup of the other operating modes. Using the catchall mode to set up the other operating modes not only saves setup time but also enables fast, automatic setup.

The catchall mode may even be used to set up the catchall mode itself. For example, different numbers of sample scans, each with a different set of phase shifts, can be run multiple times with different polarizations. The results can then be compared with one another to determine the optimal number of sample scans and the optimal polarization settings for the catchall mode itself. The catchall mode is therefore a powerful mode. A single run of the catchall mode requires multiple sample scans, which reduces throughput. However, the reduction is not as severe as it may appear, because a single run can capture all the different types of defects and no sample loading/unloading is needed between the multiple scans. In addition, the throughput reduction is well compensated by the increased efficiency of the defect review process. Thus, the catchall mode is expected to be a popular operating mode in spite of its lower throughput.

Three-scan method

Equation (2c) shows that the interference term contains the amplitude of the defect signal and the cosine of its relative phase. In order to determine both the amplitude and the relative phase of the defect signal, at least three scans of the sample need to be used. Two scans are not enough, because the total dark field is another unknown. The phase of the mirror component needs to be set differently for each scan. This can be achieved with a calibrated phase controller. The calibration method for the phase controller has been described in a previous section.

Since the initial phase value of the mirror component is unimportant, any phase setting of the mirror component can be used for the first scan. For example, if the phase value of the mirror component for the first scan of the sample is φb and the additional phase shifts for the second and third scans are θ1 and θ2, then the mirror components for the first through third scans are expressed as follows.

Figure 112011000096113-pct00020

The image intensities for the three sample scans are then expressed as follows.

Figure 112011000096113-pct00021

The die-to-die (or cell-to-cell) subtracted intensities are as follows.

Figure 112011000096113-pct00022

The die-to-die subtracted intensities retain the necessary amplitude and phase information of the defect signal. Thus, these subtracted intensities need to be stored for the whole wafer. This may seem to require an unrealistic amount of memory space. In reality, however, it does not require much memory, because the data are nonzero only in the areas around defects, and defects are actually quite rare. Only values that are nonzero, or larger than a set threshold, need to be stored; values of zero, or below the threshold, do not need to be stored.
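The sparse storage scheme described above can be sketched briefly. The following is a minimal illustration, not part of the patent: it assumes the subtracted-intensity frame is held in a NumPy array, thresholds on magnitude (subtracted intensities can be negative), and keeps coordinate/value records for the rare above-threshold pixels. All names and numerical values are hypothetical.

```python
import numpy as np

def sparse_store(delta_i, threshold):
    """Keep only die-to-die subtracted intensities whose magnitude
    exceeds the threshold.

    Returns a list of (row, col, value) records; everything else is
    implicitly zero, so the full-wafer data stay small because
    defects are rare.
    """
    rows, cols = np.nonzero(np.abs(delta_i) > threshold)
    return [(int(r), int(c), float(delta_i[r, c])) for r, c in zip(rows, cols)]

# Example: a mostly-zero subtracted image with one defect-like blip.
img = np.zeros((1000, 1000))
img[417, 502] = 3.7                      # hypothetical defect pixel
records = sparse_store(img, threshold=0.5)
print(records)                           # [(417, 502, 3.7)]
```

A million-pixel frame thus collapses to a handful of records per defect, which is what makes whole-wafer storage of the subtracted intensities practical.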

If θ1 and θ2 are nonzero and θ1 ≠ θ2, we can determine the complex amplitude (or, equivalently, the amplitude and phase) of the defect signal from equations (25) through (27). The real and imaginary parts of the complex amplitude of the amplified defect signal are:

Figure 112011000096113-pct00023

The total dark field is then given by:

Figure 112011000096113-pct00024

If θ1 = -θ2 = θ ≠ 0, then equations (28), (29), and (30) reduce to the following equations.

Figure 112011000096113-pct00025

There are a number of good choices for the values of θ1 and θ2. However, θ1 = -θ2 = 2π/3 is an appealing choice because of the resulting simplicity of the signal intensity equation, shown in equation (38). Other choices, such as θ1 = -θ2 = π/3, or θ1 = π/3 and θ2 = 2π/3, also work, but their signal intensity expressions are not as simple and symmetric as equation (38). If θ1 = -θ2 = 2π/3, equations (32), (33), and (34) reduce further to the following equations.

Figure 112011000096113-pct00026

In this case, the amplified defect signal intensity (I s ) has the following simple expression.

Figure 112011000096113-pct00027

Is is a raw signal intensity. Its magnitude depends not only on the intensity of the illumination light but also on the intensity of the mirror component. Thus, in order to make the defect signal more consistent, Is should be normalized against the intensities of the illumination beam and the mirror component.
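As a concrete numerical illustration of the θ1 = -θ2 = 2π/3 reconstruction, the sketch below assumes that each die-to-die subtracted intensity has the form ΔIn = D + 2|b||s|cos(φs - θn), with D the dark-field term. Since the patent's equations (35) through (39) are reproduced here only as images, the constant factors in this model are our assumption; the structure (sum, difference of cosines) is standard three-step phase-shifting arithmetic.

```python
import math

def three_scan(dI1, dI2, dI3):
    """Recover the dark term D, the signal quadratures, the amplified
    signal intensity, and the phase from three die-to-die subtracted
    intensities taken with mirror phase shifts 0, +2*pi/3, -2*pi/3."""
    D  = (dI1 + dI2 + dI3) / 3.0
    re = (2.0 * dI1 - dI2 - dI3) / 3.0       # 2|b||s|cos(phi_s)
    im = (dI2 - dI3) / math.sqrt(3.0)        # 2|b||s|sin(phi_s)
    I_s   = re**2 + im**2                    # amplified signal intensity
    phi_s = math.atan2(im, re)               # defect phase vs. mirror
    return D, re, im, I_s, phi_s

# Synthesize measurements from a known defect and verify the recovery.
b_s, phi, dark = 0.8, 1.1, 0.05              # |b||s|, phase, dark term
dIs = [dark + 2 * b_s * math.cos(phi - th)
       for th in (0.0, 2 * math.pi / 3, -2 * math.pi / 3)]
D, re, im, I_s, phi_s = three_scan(*dIs)
```

Under this model, the recovered Is equals (2|b||s|)², the square of the amplified signal amplitude, and φs is recovered independently of the dark term D.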

The illumination can be made very uniform across the field of view, but the intensity of the mirror component can vary considerably over the field of view. A clean measurement of the intensity variation of the mirror component is difficult. Fortunately, approximate values are good enough for normalization purposes. The local intensity of the mirror component can in most cases be approximated by the local average of the total intensity. Therefore, the amplified defect signal intensity (Is) can be properly normalized as follows.

Figure 112011000096113-pct00028

Where I ill is the intensity of the illumination in the sample plane.

I local is the local average of the total light intensity in the image plane.

I′s is the normalized intensity of the amplified defect signal. Iill normalizes |s|2, and Ilocal normalizes |b|2. A defect is generally detected by comparing the peak value of I′s with a preset value called the threshold. More sophisticated defect detection algorithms can be used to improve overall performance.

For example, I′s can be spatially integrated and the integrated value, rather than the peak value, compared with a preset threshold. As another method, a numerical deconvolution of the defect image with respect to the finite width of the detector elements can be applied. A rapid deconvolution method is described in the section entitled "Spatial frequency bandwidth". The normalized intensity of the amplified defect signal not only reveals the presence of a defect but also provides crucial information about the size of the defect.

The optical signal does not directly provide physical size information for the defect; rather, it provides only the "optical size" of the defect. The relationship between physical size and optical size can be complicated, so it is difficult to estimate the physical size of a defect accurately from the optical signal alone. However, a general relationship between the physical and optical sizes of defects can be established through experiment or simulation, and the physical size of a defect can then be estimated approximately from that general relationship. If additional data, such as defect composition data, reticle pattern data, and so on, are also used, a more precise characterization of the defect is possible.

The phase (φs) of the defect signal relative to the mirror component is:

Figure 112011000096113-pct00029

The more meaningful phase value is the difference between φs and the reference phase value described in the "Phase controller" section. Thus, if the value of the reference phase is not zero, the reference phase value should be subtracted from φs. The phase information provides additional important information for a more accurate defect classification. For example, the phase information immediately tells us whether a defect is particle-like, void-like, mesa-like, or valley-like. Accurate and reliable defect classification is as important as reliable defect detection. Existing techniques rely on partial amplitude information only for defect classification, which results in a highly unreliable classification. The systems and methods described herein permit the use of both amplitude and phase information for defect classification. The use of both quantities allows a more accurate and reliable defect classification.

If additional information is used, such as defect composition data, reticle pattern data, and so on, an even more accurate defect classification is possible. A more accurate and reliable defect classification capability is one of the important features of the systems and methods described herein. The defect phase information can also be used to set the phase controller properly for the high-sensitivity operating mode.

So-called nuisance defects, such as wafer pattern noise, false defects, and the like, are also real signals. The catchall mode can be used very effectively to study or characterize these kinds of defects so that they can be discriminated against most effectively.

FIG. 38 shows an example of the signal intensity and phase for a 20 nm defect. Curve 3810 is the signal intensity and curve 3812 is the corresponding phase. In order to detect such a defect, generally only the peak value of the signal intensity is needed.

Equation (37) can be normalized to the illumination intensity and used to evaluate the intensity of the darkfield signal, which will determine whether the darkfield mode of operation can be used to reliably detect defects.

Equations (35) through (39) are especially useful in real systems, because they do not take much computation time to evaluate and they are least sensitive to noise thanks to the even division of the phase angles. By choosing θ1 = -θ2 = 2π/3 and using these equations, the three-scan method can completely determine the complex amplitude of the defect signal in a very efficient manner.

These expressions allow pixel-by-pixel parallel processing. Therefore, real-time computation can be realized without difficulty by using massively parallel computation techniques. For example, by using a large number of graphics processing units (GPUs) and supporting chipsets, a powerful massively parallel computer can be built inexpensively with existing technologies.

The amplified defect signal intensity, equation (38) or (39), rather than the real part of the defect signal alone, is the true indicator of the presence of a defect. By comparing it with a preset threshold, we can tell whether a defect is large enough to be of interest. If the defect is of interest, we can characterize it by calculating the complex amplitude of the signal using equations (35) and (36). This provides decisive information about what kind of defect it is.

For example, FIG. 39 shows the phase of the defect signal from a 20 nm particle and a 20 nm void. Curve 3910 shows the phase for the 20 nm void and curve 3912 shows the phase for the 20 nm particle. It can be seen from Figure 39 that particles and voids give phase angles of opposite sign to the complex amplitude of the defect signal. Therefore, even if the amplitudes of the defect signals are the same, we can tell which is a particle-type defect and which is a void-type defect.

If the defect size is comparable to or larger than the resolution of the collection optics, the complex amplitude of the defect signal can be deconvolved with the complex-amplitude point spread function of the imaging optics to obtain a more detailed picture of the defect. This capability helps to further refine the defect classification. A more precise defect classification leads to considerable time savings in defect review, which is generally very expensive and slow because it requires the use of electron microscopes. Therefore, the throughput loss due to multiple sample scans is well compensated by the increased efficiency of the defect review process.

Another important feature is that equation (38) or (39), which represents the amplified defect signal intensity, does not depend on the phase value of the defect signal. This means that the catchall mode can potentially catch any kind of defect surrounded by any kind of pattern; this is why the catchall mode is so powerful. Conventional technologies cannot support the catchall mode because they cannot measure both the real and imaginary parts of the complex amplitude of the defect signal; they can measure only the real part. In that case the signal intensity depends strongly on the relative phase between the defect signal and its surrounding pattern. Thus, conventional technologies cannot find all the different kinds of defects; rather, they may miss a significant number of them.

Two-scan method

As described above, at least three sample scans are generally needed to completely determine the complex amplitude of the defect signal. However, if the dark-field part of the total signal is negligible compared with the interference part, two sample scans suffice to determine the complex amplitude of the defect signal. This can be seen from equations (25) and (26): if we ignore the dark-field part in those equations and set θ1 = ±π/2, they give:

Figure 112011000096113-pct00030

The amplified defect signal intensity (I s ) is as follows.

Figure 112011000096113-pct00031

The normalized amplified defect signal intensity (I s ') is as follows.

Figure 112011000096113-pct00032

If the image sensor has a large dynamic range, the interference part of the total signal can be boosted by a large amount. In this case, the dark-field part of the total signal becomes relatively small, so the two-scan method can be used to speed up the catchall operating mode.

Four-scan method

A simple choice for the four phase values of the mirror component is {0, π, π/2, -π/2}. If the sample is scanned four times with the phase of the mirror component set to one of these values for each scan, the image intensities are expressed as follows.

Figure 112011000096113-pct00033

The die-to-die subtracted intensities are as follows.

Figure 112011000096113-pct00034

The real and imaginary parts of the complex amplitude of the amplified defect signal are:

Figure 112011000096113-pct00035

In this case, the amplified defect signal intensity Is has the following simple expression.

Figure 112011000096113-pct00036

The normalized amplified defect signal strength is as follows.

Figure 112011000096113-pct00037

The phase (Φs) of the defect signal relative to the mirror component is:

Figure 112011000096113-pct00038

This four-scan method provides simple equations. Its main drawback, however, is that the relative phase angle between the defect signal and the mirror component can be as large as 45°. Note that the maximum magnitude of this relative phase angle for the three-scan method is only 30°. This makes the four-scan method less sensitive to some defects than the three-scan method. Phase values different from {0, π, π/2, -π/2} may be chosen to achieve a sensitivity better than that of the three-scan method. Possible alternative choices are {0, π/4, π/2, 3π/4}, {±π/8, ±3π/8}, and so on. These other choices, however, require the use of a regression method to determine the defect signal, and they make the analytical expression of the defect signal more complicated (see the next subsection for a general expression of the defect signal). Another drawback of the four-scan method is that its throughput is lower than that of the three-scan method, because it requires an extra sample scan.
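The 30° versus 45° comparison can be checked numerically. The brute-force sketch below (the function name is ours) treats two mirror-phase settings that differ by π as equivalent, since a π shift merely flips the sign of the interference term:

```python
import math

def worst_case_offset_deg(settings, steps=10000):
    """Worst-case angular distance from a defect phase to the nearest
    mirror-phase setting, treating settings pi apart as equivalent
    (a pi shift only flips the sign of the interference term)."""
    def dist(phi, theta):
        d = abs(phi - theta) % math.pi
        return min(d, math.pi - d)

    worst = 0.0
    for k in range(steps):
        phi = math.pi * k / steps           # scan defect phases over [0, pi)
        nearest = min(dist(phi, th) for th in settings)
        worst = max(worst, nearest)
    return math.degrees(worst)

print(round(worst_case_offset_deg([0, 2 * math.pi / 3, -2 * math.pi / 3])))   # 30
print(round(worst_case_offset_deg([0, math.pi, math.pi / 2, -math.pi / 2])))  # 45
```

The three settings {0, ±2π/3} fold into phases every 60°, giving a 30° worst case, while {0, π, ±π/2} fold into phases every 90°, giving 45°, as stated above.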

Higher-order scan methods

More independent image data leads to a better signal-to-noise ratio. Thus, to increase the signal-to-noise ratio, the sample can be scanned more than four times with a different phase setting of the mirror component for each scan. In this case, the amount of data is greater than the amount needed to determine the complex amplitude of the defect signal uniquely, so a regression method is employed to determine the defect signal. There are many different regression methods, with known advantages and disadvantages. The most popular is least-squares regression. It is a good choice when the noise is random, and it allows an analytical solution in the present case. An analytical solution is important because it can save a great deal of computation time. Other regression methods may be more appropriate when the noise is not random, but they generally do not allow analytical solutions. Therefore, least-squares regression is presented here.

Assuming that the sample is scanned N times with a different phase setting for each scan, the theoretical die-to-die subtracted image intensity (ΔIn(0)) for the nth scan is expressed as:

Figure 112011000096113-pct00039

In least-squares regression, the error function is expressed as follows.

Figure 112011000096113-pct00040

where ΔIn is the measured die-to-die subtracted image intensity for the nth scan, and

ΔIn(0) is the theoretical die-to-die subtracted image intensity for the nth scan.

We must find the values of D, sx, and sy that minimize the error function. The gradient of the error function with respect to D, sx, and sy is zero at the minimum. Therefore, the solution satisfies the following three equations.

Figure 112011000096113-pct00041

From equation (62)

Figure 112011000096113-pct00042

Substituting Eq. (65) into Eq. (63) and Eq. (64) gives the following.

Figure 112011000096113-pct00043

From equations (66) and (67)

Figure 112011000096113-pct00044

Equations (73) and (74) are the general best-fit solution for the complex amplitude of the amplified defect signal. Substituting equations (73) and (74) into equation (65) yields the following.

Figure 112011000096113-pct00045

The signal intensity and phase can be calculated quickly and used for defect detection and classification in the manner described previously. Equation (75) can be normalized against the illumination intensity and used to evaluate the intensity of the dark-field signal. By evaluating the intensity of the dark-field signal, we can tell whether the dark-field mode of operation can be used to detect the defect.

In general, if N ≥ 4, we can estimate the integrity of the measurement data by calculating the amount of residual error after the regression. The residual error can be calculated quickly by substituting equations (73), (74), and (75) into equation (61) and summing the terms in the equation. By comparing the residual error with a preset value, we can tell the soundness of the measurements. The measurement of residual error is especially helpful for system troubleshooting; it is generally the first step in the troubleshooting process.

Equations (73) through (75) reduce to equations (28) through (30) when N = 3.

If the phase setting is chosen to satisfy the following conditions,

Figure 112011000096113-pct00046

[As an example, the condition can be met by selecting all the θn with a uniform angular interval between them.]

we then obtain

Figure 112011000096113-pct00047

As a result, in this case

Figure 112011000096113-pct00048

From equations (78) and (79),

Figure 112011000096113-pct00049

It is easy to verify that equations (78) through (81) reduce to equations (28) through (30) when N = 3, θ0 = 0, and θ1 = -θ2 = 2π/3. They also reduce to equations (53) through (56) when N = 4, θ0 = 0, θ1 = π, and θ2 = -θ3 = π/2.
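Under the uniform-spacing condition, the closed-form solution can be sketched as follows. This assumes the same subtracted-intensity model as before, ΔIn = D + a·cosθn + b·sinθn, where a and b are the quadratures of the amplified defect signal; the orthogonality sums guaranteed by the condition (the sums of cos, sin, and cos·sin over the settings vanish) are what make the simple averages below correct.

```python
import math

def n_scan(dIs, thetas):
    """Closed-form least-squares recovery of the dark-field term D and
    the signal quadratures (a, b) for uniformly spaced phase settings."""
    N = len(dIs)
    D = sum(dIs) / N
    a = (2.0 / N) * sum(dI * math.cos(t) for dI, t in zip(dIs, thetas))
    b = (2.0 / N) * sum(dI * math.sin(t) for dI, t in zip(dIs, thetas))
    return D, a, b

# Check that N = 3 and N = 8 uniformly spaced scans both recover the
# same synthetic defect (amplitude 0.8, phase 1.1, dark term 0.05).
amp, phi, dark = 0.8, 1.1, 0.05
for N in (3, 8):
    thetas = [2.0 * math.pi * n / N for n in range(N)]
    dIs = [dark + amp * math.cos(phi - t) for t in thetas]
    D, a, b = n_scan(dIs, thetas)
    # a = amp*cos(phi) and b = amp*sin(phi), up to floating-point error
```

More scans simply average more data into the same three estimates, which is how the extra scans improve the signal-to-noise ratio without extra algebra.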

As shown, the regression for the catchall mode can be performed analytically. Therefore, operation in the catchall mode does not require excessive computation time even if the sample is scanned more than three times to obtain a more reliable defect signal. Of course, more scans mean a lower throughput. However, if the signal-to-noise ratio is low, or a high signal-to-noise ratio is required, more sample scans can be of considerable benefit. For example, a precise study of a defect signal can benefit from a defect signal with a high signal-to-noise ratio, which can easily be obtained by operating the catchall mode with a large number of sample scans.

If N is large, the relative phase is changed rapidly, and the measurement data can be collected rapidly, the system can be operated in a heterodyne mode. The heterodyne mode suffers less from 1/f noise, so it can generally provide cleaner measurement data. The heterodyne method is easy to implement in static or stepping systems, but it is generally difficult to implement in scanning systems, especially fast-scanning systems.

Contrast enhancement

If the dynamic range of the image sensor is likely to be saturated, the contrast of the image needs to be increased in the catchall mode to preserve signal integrity. In this case, the same contrast-enhancement techniques described in the high-sensitivity mode section can be used.

Polarization diversity

As described above, the intensity of the defect signal can depend on the polarization states of the illumination and the scattered light. Thus, if the defects of interest comprise different kinds of defects whose signal intensities depend on polarization differently, images need to be collected with multiple different polarization states in order to capture all the different kinds of defects. This is called polarization diversity. In principle, coping with polarization diversity requires a large number of scans with different combinations of phase shifts and polarization settings. In practice this is not feasible, and a good compromise is needed to balance throughput against the possibility of missing one or two kinds of small defects. A basic understanding of the optical physics helps in coping with polarization diversity. For example, as long as the defect and its neighboring patterns do not have a helical structure, the polarization combinations used can be limited to linear polarization combinations.

Spatial frequency bandwidth

The maximum spatial frequency of the complex amplitude distribution of the optical signal collected by the collection lens is NA/λ, where NA is the numerical aperture of the collection lens and λ is the wavelength. However, the maximum spatial frequency of the intensity distribution is 2NA/λ, because the intensity is the absolute square of the complex amplitude. If we examine equation (1) in more detail, however, we find that only the dark-field term actually has a maximum spatial frequency of 2NA/λ; the maximum spatial frequency of the interference term is only about NA/λ. This is because the maximum spatial frequency of the mirror component can be made very small by illuminating the sample from a nearly normal direction. This is illustrated in FIG. 40, which compares the spatial frequency bandwidth of the defect signal component under near-normal-incidence illumination with the dark-field spatial frequency bandwidth. The maximum spatial frequency for the high-sensitivity mode and the dark-field mode is 2NA/λ, because the measured images contain the dark-field term. The catchall mode, however, discards the dark-field term during signal processing and uses only the interference term. Thus, the maximum spatial frequency for the catchall mode is not 2NA/λ but NA/λ. This has an important practical consequence. The Nyquist-Shannon sampling theorem states that the spatial frequency of the image sampling should be at least twice the maximum spatial frequency of the image in order to pick up all the information in the image and avoid signal aliasing. The Nyquist-Shannon sampling theorem applies to an image sensor because an image sensor is a kind of sampling device.

This means that if the same image sensor is used for all modes, the image magnification for the catchall mode does not need to be as high as that for the high-sensitivity mode or the dark-field mode in order to pick up all the information about the defect and avoid signal aliasing. Consequently, the image sensor can cover a larger field of view on the sample plane in the catchall mode. A larger field of view means a higher throughput. Thus, in principle, the throughput loss of the catchall mode caused by the multiple sample scans can be significantly compensated by the increase in the field of view.
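The throughput argument can be made concrete with the Nyquist pitch. The sketch below uses hypothetical numbers (266 nm illumination, NA = 0.9; all names are ours): with a maximum spatial frequency of NA/λ instead of 2NA/λ, the allowed sample-plane pixel pitch doubles, so each sensor pixel can cover four times the sample area.

```python
def nyquist_pitch_um(wavelength_um, NA, max_freq_factor):
    """Maximum sample-plane sampling pitch allowed by Nyquist-Shannon:
    pitch <= 1 / (2 * f_max), with f_max = max_freq_factor * NA / lambda."""
    f_max = max_freq_factor * NA / wavelength_um   # cycles per micron
    return 1.0 / (2.0 * f_max)

# Hypothetical system: 266 nm illumination, NA = 0.9.
wl, na = 0.266, 0.9
p_highsens = nyquist_pitch_um(wl, na, 2.0)   # high-sensitivity / dark-field
p_catchall = nyquist_pitch_um(wl, na, 1.0)   # catchall (interference term only)
print(round(p_catchall / p_highsens, 2))     # 2.0 -> 4x sample area per pixel
```

Doubling the pitch in both directions is what translates into the larger field of view, and hence the higher throughput, claimed above.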

If the dark-field signal is small or negligible compared with the interference signal, the magnification of the imaging system can be reduced for the high-sensitivity operating mode as well, increasing throughput without affecting performance. As the defect size decreases, the dark-field signal becomes smaller and less important; in future generations it will become extremely small or negligible. Thus, future generations of interferometric defect detection systems may use the same image magnification for the high-sensitivity mode and the catchall mode. Furthermore, in future generations of interferometric defect detection systems, the dark-field mode can be operated at the same image magnification as the other operating modes, thanks to the low intensity of that signal component. If the illumination path is fixed, the image magnification does not need to be changed. This suggests that a single fixed image magnification can be used for all operating modes in future generations of interferometric defect detection systems. A single fixed image magnification will not only make the imaging system stable but also simplify its operation, while reducing the manufacturing cost of the system.

It should be recognized that the Nyquist-Shannon sampling theorem assumes a delta function as the sampling function. However, no real sampling function can be a delta function; actual sampling functions must have a finite width, otherwise they could not detect any signal. An image sensor is a kind of spatial sampling device, and the width of its sampling function is the width of the light-sensitive area in each pixel of the image sensor. A high sensitivity or a high dynamic range generally requires a wide light-sensitive area. Thus, the Nyquist-Shannon sampling theorem applies to real systems only with appropriate modifications. However, the general conclusions presented here still hold.

The standard way to eliminate the effect of the finite width of the sampling function is to deconvolve the image with the sampling function. This is equivalent to inverse Fourier filtering, in which the Fourier transform of the image is multiplied by the inverse of the Fourier transform of the sampling function. However, deconvolution generally requires substantial computational resources. This is especially true for high-speed defect detection.

In order for the deconvolution to be practical, the processing must be simplified considerably so that it can be executed quickly. Such simplification is very limited for arbitrary images. However, considerable simplification of the deconvolution process is possible for the subtracted images of very small defects whose size is much smaller than the wavelength. This is because, in the subtracted image of a very small defect, the interference term dominates, and the shape of the interference term is the same as the shape of the amplitude point spread function (APSF) of the imaging system, which is fixed as long as the numerical aperture of the imaging system is fixed.

Figures 33 through 37 confirm this fact. Even if the spatial frequency of the mirror component is not zero, the shape of the interference term is unchanged; the only effect is to give the interference term a nonzero carrier frequency.

If the mirror component consists of a single plane wave, the interference term can be expressed as the APSF multiplied by a carrier-frequency term. That is, the carrier-frequency term can be factored out and handled separately. If the carrier-frequency term is handled separately, the subtracted image of a very small defect differs from the APSF only in its overall strength. In this case, because only one kind of signal shape needs to be handled, the deconvolution process reduces to a point-by-point rescaling of the signal function. The rescaling function can easily be generated by taking the ratio between the ideal APSF, which is unaffected by the finite width of the sampling function, and the actual APSF, which is affected by the finite width of the sampling function.

The deconvolution process is then a simple point-by-point multiplication of the defect image by the rescaling function. This is a very fast process on modern computers. Thus, in this case the deconvolution can be performed extremely quickly for very small defects. The noise is not amplified or affected statistically by the deconvolution, as long as the noise is distributed uniformly in the spatial frequency domain. The deconvolution makes the image appear as if it were sampled by an array of delta functions, known as a comb function, with the same pitch as the detector array. With this kind of data, a function corresponding to the ideal signal shape can be fitted precisely and the signal can be shifted finely, so that the subtraction of the reference signal gives a result that is almost zero when no defect exists. If deconvolution of the entire signal turns out to be computationally impractical in a given system embodiment, the deconvolution technique can be applied selectively, only to weak or borderline defect signals, to improve the accuracy of the detection process. Thus, the rapid deconvolution method presented here can be a key element in the design of a low-cost, very stable, high-performance, high-throughput defect detection system.
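The point-by-point rescaling can be sketched in one dimension. Everything here is illustrative: a Gaussian stands in for the true APSF (a real APSF is an Airy-like pattern; the Gaussian merely keeps this toy example free of divide-by-zero at APSF nulls), and the pixel width, pitch, and signal strength are hypothetical.

```python
import numpy as np

def box_average(f, centers, width, sub=101):
    """Average f over a pixel of the given width centered at each sample
    point: a model of the finite light-sensitive area of the detector."""
    offs = np.linspace(-width / 2, width / 2, sub)
    return np.array([f(c + offs).mean() for c in centers])

# Toy APSF: a Gaussian stand-in for the amplitude point spread function.
apsf = lambda x: np.exp(-x**2 / (2 * 0.6**2))

centers = np.arange(-3.0, 3.01, 0.5)         # detector pixel centers
pixel_w = 0.5                                # light-sensitive width
ideal   = apsf(centers)                      # delta-function sampling
actual  = box_average(apsf, centers, pixel_w)

rescale = ideal / actual                     # computed once per system

# "Measured" image of a very small defect = pixel-averaged APSF times an
# unknown signal strength; rescaling restores delta-sampled values.
strength = 0.37
measured  = strength * actual
recovered = measured * rescale
print(np.allclose(recovered, strength * ideal))   # True
```

Because the rescaling function depends only on the fixed APSF and pixel geometry, it is precomputed once, and the per-image cost is a single elementwise multiplication, which is what makes the method fast.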

Reduced number of sample scans

One way to increase the efficiency is to reduce the number of sample scans. The number of sample scans can be reduced by dividing the original light beam into multiple beams and installing a phase controller in each beam path.

FIG. 41 shows an exemplary system 4100. The illumination beam 4118 is introduced into the imaging system near the pupil plane and is folded by a small prism so as to strike the sample at nearly normal incidence. The mirror component 4124 and the scattered component 4128 from the sample 4110 are split into two beams by a beam splitter 4172 located near the pupil plane between the high-NA lens assembly 4116 and the low-NA lens assembly 4114. After the split, a phase controller 4112 and a compensation plate 4130 are installed in each beam path. Each phase controller sets the relative phase between the scattered component and the mirror component to one of the predetermined values. Two separate image sensors 4140 measure the intensities of the two separate images simultaneously. Therefore, one sample scan can produce two sets of image data simultaneously. As a result, the total number of sample scans can be halved in this exemplary system. A further reduction in the number of sample scans can easily be achieved by splitting each of the two beams again with additional beam splitters.

Additional phase controllers and image sensors need to be installed in each of the additional beam paths. Each phase controller sets the relative phase between the scattered component and the mirror component to one of the predetermined values. The multiple separate image sensors measure the intensities of the multiple separate images simultaneously. Thus, one sample scan can produce multiple sets of image data simultaneously, and the total number of sample scans can be reduced accordingly. Cascaded beam splitting can be repeated as many times as needed, as long as the physical space allows it. This method can also be applied to the high-sensitivity operating mode when the target defects comprise several different kinds of defects, each of which requires a different phase setting for optimal detection. In this case, each phase controller is set to the optimal phase value for the best detection of a different kind of defect. The net effect is the simultaneous operation of multiple high-sensitivity modes. This kind of scan-number reduction can also be applied to polarization-diverse measurements by making the beam splitters polarization-sensitive. However, this kind of scan-number reduction has its own drawbacks: it reduces the signal intensity and increases the complexity and cost of the optical system. If the signal intensity becomes too low, the scan speed must be reduced to raise the signal to an acceptable level, and reducing the scan speed cancels part of the throughput gain achieved by reducing the number of scans.

3. Dark Field Mode

The dark field mode is realized by completely blocking the mirror component. The two-dimensional Fourier filtering available in such a scheme makes the dark field mode very quiet (that is, very low in noise). It will typically have much less photon noise than the dark field modes of existing instruments that use line illumination, which allows only one-dimensional Fourier filtering. However, as described above, even with two-dimensional Fourier filtering, the dark field mode is not a good choice for the detection of tiny defects whose size is smaller than λ/4. On the other hand, the dark field mode is a good choice for the fast detection of large defects, because it produces a sufficiently strong signal for a large variety of defect types and typically requires only a single scan of the sample. If the strength of the dark field signal needs to be known in advance, the catch-all mode can first be run on the sample.

Another good use of the dark field mode is finding the best focus for the image sensor. This is because the dark field mode blocks the mirror component, which carries no focus information but can affect the image through interference with the scattered component during the focusing process. Since the dark field mode has no mirror component, it does not require a high dynamic range in the image sensor, unlike the other operating modes. The more important features of an image sensor system for the dark field mode are high sensitivity and fine pixels.

Limitations of the dark field mode

The dark field mode is easy to operate because it does not require operation of the phase controller. In addition, a variety of defects can be captured with one sample scan. Therefore, the dark field mode is generally the first choice when the signal is sufficiently strong, or when the noiseless amplification of the signal by the mirror component is insignificant because the mirror component is weak. However, as described above, the dark field mode has serious limitations in finding very small defects because it lacks the noiseless signal amplification capability.

The limitations of the dark field mode need to be understood more clearly in order to avoid ineffective attempts to use it. To better understand these limitations, the signal from an isolated defect was simulated and then separated into its dark field and interference parts. A wavelength of 266 nm and an imaging system numerical aperture of 0.9 were assumed. The central obscuration was assumed to be 0.2 NA. The phase controller was adjusted to maximize the interference term.

FIG. 42a shows the dark field portion 4210 and the interference portion 4220 of the defect signal from an 80 nm isolated defect on a sample surface of only 1% reflectivity. The reflectivity of the defect itself is assumed to be 100% in all the simulated cases. FIG. 42a shows that the interference portion of the signal is larger than the dark field portion, even though the defect is relatively large and the reflectivity of the sample surface is very low. FIG. 42b shows the dark field portion 4230 and the interference portion 4240 of the defect signal from a 40 nm isolated defect on a sample surface with only 0.1% reflectivity; that is, the reflectivity of the surrounding area is 1/1000 of the defect reflectivity. This shows that even if the size of the defect is smaller than a quarter wavelength, the interference portion of the defect signal is larger than the dark field portion, even at an extremely low sample reflectivity.

FIG. 42c shows the dark field portion 4260 and the interference portion 4250 of the defect signal from a 20 nm isolated defect on a sample of only 0.1% reflectivity. In this case, the dark field portion is considerably smaller than the interference portion. If the reflectivity of the sample is larger, the interference portion becomes even more dominant. Thus, in almost all real situations, the interference term dominates, for virtually all samples. That is, the techniques of phase control and noiseless amplification described herein work well for all the different types of wafers and reticles likely to be encountered in practice. This is another important advantage of the systems and methods described herein. The dark field mode proves useful only when the size of the defect is larger than a quarter wavelength. However, most critical defects in the future are expected to be much smaller than a quarter wavelength. Also, the dark field mode cannot classify defects accurately, and is therefore not expected to remain a popular operating mode in the future.
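The scaling behind these observations can be illustrated with a simple scalar toy model. This is an illustration only, not the rigorous simulation behind FIGS. 42a-42c; the diffraction-spot size and the amplitude scaling of the scattered light are assumptions of the sketch. The dark field part of the signal scales as the square of the scattered amplitude, while the interference part scales only linearly, so the interference part dominates for small defects:

```python
import math

def defect_signal_parts(defect_diameter_nm, sample_reflectivity,
                        wavelength_nm=266.0, na=0.9):
    # Diffraction-limited spot diameter of the imaging system (assumed form).
    spot_diameter = wavelength_nm / (2.0 * na)
    # Scattered amplitude of a sub-resolution defect of 100% reflectivity,
    # assumed to scale with its area relative to the spot (arbitrary units).
    s = (defect_diameter_nm / spot_diameter) ** 2
    # Mirror (specular) amplitude scales with the square root of the
    # surrounding surface reflectivity.
    b = math.sqrt(sample_reflectivity)
    dark_field = s * s            # |s|^2: quadratic in the scattered amplitude
    interference = 2.0 * b * s    # 2|b||s|: phase controller set to maximize
    return dark_field, interference

# 20 nm defect on a 0.1%-reflectivity surface: interference part dominates.
df_small, it_small = defect_signal_parts(20.0, 0.001)
# 200 nm defect on a 1%-reflectivity surface: dark field part dominates.
df_big, it_big = defect_signal_parts(200.0, 0.01)
```

In this model the crossover point between the two regimes depends on the assumed amplitude scaling, but the trend matches the text: as the defect shrinks, the interference term takes over.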

Most actual defects are not isolated from other features. Thus, conclusions reached by simulating the signal from an isolated defect should not be taken as the last word. However, the isolated-defect case represents an average over many different kinds of cases, so the conclusion is at least approximately correct. A similar conclusion can be reached for transmissive samples, because a transmissive sample is mathematically very similar to a reflective sample.

IV. Design Examples of Imaging Systems

A high-quality imaging system is one of the most important and most expensive components of most optics-based inspection systems. As described above, the systems and methods described herein can be used with a variety of imaging systems, including dioptric, catoptric, and catadioptric systems. Dioptric and catoptric designs are well known for this type of application; numerous books, patents, and other literature cover them comprehensively.

Catadioptric designs are not as well known, but they can offer very high performance. Two design embodiments of high performance catadioptric imaging systems are provided herein. The designs are based on U.S. Patent No. 5,031,976. The first design embodiment is shown in Figure 43A. The design prescription is shown below.

Figure 112011000096113-pct00050

Figure 112011000096113-pct00051

This design is for single wavelength applications. A wavelength of 266 nm was chosen for the exemplary design. All lenses and the two catadioptric components 4313 and 4311 are made of fused silica in the exemplary design. The refractive index of fused silica is assumed to be 1.499684 at the 266 nm wavelength. However, other lens materials such as calcium fluoride, lithium fluoride, and the like may also be used.

The lens component 4311 is a plano-convex lens having a reflective coating on the flat side, which faces a sample 4310 spaced 1.5 mm away. The central portion of the reflective coating is removed so that light from the sample can pass through the lens. After passing through the lens 4311, the image beam passes through another lens element 4312, is reflected by the coating on the surface 4314 of the mirror element 4313, passes again through the lens element 4312, and strikes the flat side of the element 4311 carrying the reflective coating. After this second reflection, the light exits the element 4311, passes through the element 4312 a third time, and passes through the central aperture of the reflective coating on the surface 4314 to an intermediate focus near the rear of the element 4313. All the other lens elements of the optical train are refractive and simply re-image the intermediate image onto the detector array farther to the left.

The illumination is introduced so as to pass through the compensation plate 4315, using the scheme shown previously. Another way of introducing a nearly normally incident illumination beam 4316 is through a small second off-axis aperture in the reflective coating on the surface 4314 of the lens/mirror element 4313. In this case, the mirror component from the sample 4310 is reflected by the opposite side of the surface 4314 and follows a path very similar to that of the component scattered from the sample to the detector plane. This illumination method produces less flare because the illumination beam passes through a small number of optical components.

Not all lens elements need to be made of the same material. For example, the lenses exposed to high laser intensity can be made of a laser-damage-resistant material such as calcium fluoride, and the remainder can be made of fused silica. All lens surfaces are spherical. No aspheric surface is needed to improve performance or to reduce the number of lens components, although aspheric surfaces may be used.

No lens surface has an extreme radius of curvature. All these lens features lead to reasonable manufacturing tolerances. Thus, the lens system shown in Figure 43A can be manufactured without extreme difficulty. The numerical aperture of the design is 0.9. The field of view is very large, with a 1.0 mm field diameter. The magnification is chosen to be 200x, but it can easily be changed without affecting the quality or performance of the system. The design Strehl ratio is 0.996 or higher over the entire field. The diameter of the aperture stop is 47 mm. The compensation plate 4315 is placed close to the lens pupil and carries the phase controller and the Fourier filter blocking strips in interferometric imaging applications. The clear aperture diameter of the compensation plate is approximately equal to the 47 mm diameter of the aperture stop. This is sufficient to install the phase controller in the middle without causing excessive central obscuration. In addition, the design has very low field curvature and distortion. The drawback of the design is the small working distance of 1.5 mm in the embodiment design. The design therefore does not typically work for applications such as reticle inspection, which require a large working distance because of the pellicle protection. However, the design is well suited to other applications, such as wafer inspection, which do not require a large working distance.

Figure 43b shows another catadioptric design embodiment. The design prescription is as follows.

Figure 112011000096113-pct00052

Figure 112011000096113-pct00053

Figure 112011000096113-pct00054

Figure 112011000096113-pct00055

Figure 112011000096113-pct00056

This design has a portion between the sample surface 4331 and the intermediate image 4332 similar to the previous design, but with a dichroic wavelength splitter 4333 that divides the beam into two legs near the pupil; one leg 4334 is for 266 nm and the other leg 4335 is for 532 nm. Each leg has its own compensation plate 4336 and phase controller (not shown). The refractive index of fused silica is 1.499684 at the 266 nm wavelength and 1.460705 at the 532 nm wavelength. The refractive index of BK7 glass is assumed to be 1.519473 at the 532 nm wavelength. The design has characteristics similar to the single wavelength design, and the lens system can be manufactured without extreme difficulty. The numerical aperture and field of view are the same as in the previous design, and the physical size is similar. However, it was designed for two-wavelength applications. The wavelengths are chosen to be 266 nm and 532 nm, but different wavelengths can be chosen for the same design type. It has a wavelength splitter and two separate phase controllers, carried on the respective compensation plates, to handle the two wavelengths independently.

As can be seen from the prescription, the front end lens system is shared by the two wavelengths. The rear end lens systems are completely separate, to maximize design flexibility. The design Strehl ratio is at least 0.996 for the 266 nm wavelength and at least 0.985 for the 532 nm wavelength over the entire field. The field curvature and distortion are also very low. The design can easily be modified to accommodate more wavelengths by inserting more wavelength splitters into the rear end lens system. This design embodiment can also be applied to the defect detection systems described herein.

Ⅴ. Subsystems

The systems and methods described herein do not rely on any particular illumination or autofocus subsystem; they can accommodate almost any subsystem. However, optimizing the whole inspection instrument in terms of performance and cost requires not only an excellent imaging system design but also well-matched illumination and autofocus system designs.

Another simple but important part is an aperture stop that suppresses diffraction. In the following sections, a new illumination system and a new autofocus system are presented first, and then a new method of manufacturing low-diffraction apertures is presented together with a complete theory. The subsystems presented are particularly suitable for interferometric inspection systems; however, they can also be used effectively in other optical instruments.

1. Coherent uniform illuminator

In some applications, such as interferometry, optical filtering, and the like, partially coherent or fully coherent illumination rather than incoherent illumination is preferred. For most such applications, uniform illumination over the object plane with a top hat beam profile is preferred or required. However, the tools used to achieve good uniformity with incoherent beams, such as lens arrays and light pipes, do not work with coherent illumination sources, and the beams output from coherent sources such as lasers have a Gaussian rather than a top hat intensity profile; achieving uniform illumination with a coherent source therefore requires a more sophisticated approach. There are well-known, energy-efficient ways to convert a Gaussian beam profile into a top hat beam profile. According to some embodiments, another method is provided for converting a Gaussian beam profile into a top hat beam profile.

The most straightforward way to convert a Gaussian beam profile into a top hat beam profile is to partially absorb the high intensity portion of the beam with an absorbing material. However, this approach is energy inefficient and tends to damage the absorbing material if the input beam is intense or consists of short pulses. A more energy-efficient and less damage-prone way to convert a Gaussian beam profile into a top hat beam profile is to redistribute the light energy within the beam. This can be done using a pair of separated lenses (or lens groups).

Figure 44A illustrates this method. The first lens 4401 intentionally introduces an appropriate amount of spherical aberration into the input beam 4402, which has the Gaussian shape shown by curve 4407. The spherical aberration from the first lens redistributes the energy of the beam as it propagates through free space. By adjusting the shape and amount of spherical aberration and the propagation distance, the Gaussian beam can be converted into a uniform, top-hat-shaped beam. The second lens 4403 is needed because the spherical aberration not only redistributes the light energy but also introduces wavefront distortion. The second lens corrects the wavefront distortion introduced by the first lens, so that the energy distribution at the focal plane 4405 is as shown by curve 4406. Thus, the two lenses together can convert a Gaussian beam into a top hat beam without distorting the wavefront.

This method is very energy efficient and can handle high power beams. However, it also has a disadvantage: it typically requires an additional image relay system 4404, as shown in Figure 44A. An image relay system is used because the beam profile converter leaves only a limited working space close to the desired uniform illumination field. Consequently, the top hat output beam profile from the beam profile converter is relayed to the illumination field 4408 using the imaging system. Otherwise, the top hat beam profile could change considerably if the beam had to propagate a long distance from its ideal focal conjugate. Note that the light distribution at the relayed image plane, shown by curve 4409, is the same as that at the focal plane 4405, shown by curve 4406.

A relay system typically requires at least two lenses separated from each other, because the relay system must not only relay the top hat beam profile but also preserve the flat wavefront of the light field. It is sometimes very difficult to find space for a relay system, and it frequently creates mechanical interference problems. These problems become even more serious if the relay system needs to be a zoom system. The embodiments described herein alleviate these problems.

Figure 44B illustrates the operation of the present invention in accordance with some embodiments. Briefly, the Gaussian input beam profile 4420 is converted into a profile 4421 shaped to form the envelope of a sinc function. At the plane 4424, the beam is incident on a phase plate 4425 having grooves positioned where the sinc function is negative, so as to produce a 180 degree phase change in the transmitted beam. Continued propagation of the beam through free space transforms it into a top hat intensity profile 4423 at the sample plane 4426.

Diffraction theory tells us that the far-field diffraction pattern of a sinc function beam has the form of a top hat. Although the described embodiment uses a beam profile converter 4427 as in the prior art, the beam profile converter does not convert the input beam profile into a top hat profile. The converted beam profile 4421 at the plane 4424 is actually no more uniform than the input beam profile 4420; the profile of the converted beam 4421 looks somewhat like the envelope of a sinc function. The beam profile converter 4427 converts the input beam into the desired profile without introducing wavefront distortion: it introduces an appropriate amount of spherical aberration with the first lens 4428 (or lens group) and compensates the wavefront distortion introduced by lens 4428 with the second lens 4429.

This embodiment uses another optical component, called a "phase stepper", disposed behind the beam profile converter. The phase stepper may be fabricated by forming unevenly spaced grooves with a rectangular profile on a glass substrate, as shown in Figure 44B. Precise grooves on glass substrates can be manufactured in a variety of ways; for example, they can be produced by patterning the grooves with a lithographic technique followed by precision etching or deposition of glass material. The phase stepper changes the phase of selected portions of the incident wavefront in a discontinuous fashion. The amount of phase step required is about 180 degrees.

After being phase-stepped, the resulting non-uniform beam 4422 looks somewhat like a sinc function and is allowed to propagate a long distance through free space. While the beam propagates through free space, its profile changes to a top hat shape. The minimum propagation distance required for the beam to become a top hat is:

Minimum propagation distance = 2D²/λ (82)

where D is the diameter of the beam at the phase stepper and λ is the wavelength.

(See "Introduction to Fourier Optics, Third Edition", Joseph W. Goodman, Roberts & Company, Englewood, Colorado, 2005, page 75.) There is a relationship between the size of the initial beam at the start of the free-space propagation and the size of the top hat beam at the illumination field. This relationship is well known and can be found in the same reference. The beam at the illumination field 4426 is not completely uniform but includes ringing, as shown in Figure 44B. This is because the beam profile at the initial plane of propagation 4424 is an imperfect sinc function and has a finite size. This imperfection could be fixed by adding a carefully designed absorber to the phase stepper, but such an absorber would be damaged by high input beam power. By omitting the absorber, this embodiment trades some amount of residual intensity non-uniformity for the ability to handle high power.
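Equation (82) can be evaluated directly. The numbers below are only an illustration of the scaling, and show why the required free-space path quickly becomes impractical for beams more than a couple of millimeters wide:

```python
def min_propagation_distance(beam_diameter, wavelength):
    """Equation (82): minimum free-space propagation distance, after the
    phase stepper, for the beam to evolve into a top hat profile.
    Both arguments and the result are in meters."""
    return 2.0 * beam_diameter ** 2 / wavelength

# A 2 mm diameter beam at the 266 nm wavelength already needs about 30 m.
distance = min_propagation_distance(2e-3, 266e-9)
```

Because the distance grows with the square of the beam diameter, larger beams call for the transform-lens alternative rather than plain free-space propagation.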

Most applications tolerate some amount of intensity non-uniformity. Thus, the described embodiments are still valuable for many applications, including optical inspection. As noted above, an important feature of the described embodiments is that there is no need for an image relay system, which could cause severe mechanical collisions with other parts or subsystems. This feature is very helpful in designing a real system.

Figure 44c shows a configuration according to another embodiment. It has a transform lens 4430 that converts the sinc function beam 4422 into a top hat beam at its focal plane 4426. The function of the transform lens in this design is therefore the same as that of the long free-space propagation path in the previous design: both free-space propagation and the transform lens perform a Fourier transform of the input beam profile. The size of the top hat beam depends on the size of the input beam to the transform lens and on its focal length; it is inversely proportional to the size of the input beam and proportional to the focal length of the transform lens.

By choosing the input beam size and/or the focal length of the transform lens appropriately, the size of the top hat beam at the illumination field can be adjusted. The transform lens is a more costly alternative to free-space propagation, used when the available space is too limited to meet the distance requirement of equation (82). If the transform lens needs a focal length longer than the available physical path length, a telephoto lens can be used as the transform lens. If a focal length shorter than the physical path length is preferred, a reverse telephoto lens can be used.
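The stated proportionalities follow from the Fourier-transform property of a lens: the focal-plane coordinate corresponding to spatial frequency ν is x = λfν. A minimal sketch (the proportionality constant depends on the exact sinc-like input profile and is set to 1 here as an assumption):

```python
def top_hat_width(wavelength, focal_length, input_beam_width):
    """Approximate top hat width at the focal plane of the transform lens:
    proportional to the wavelength and focal length, inversely proportional
    to the input beam width. The unit proportionality constant is an
    assumption of this sketch. All quantities in meters."""
    return wavelength * focal_length / input_beam_width
```

Doubling the focal length doubles the top hat width, while doubling the input beam width halves it, which is exactly the adjustment described above.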

In the embodiment of Figure 44c, a lens or lenses lie in the beam propagation path. However, the transform lens is simpler in shape and more flexible than the image relay lens required in the conventional system. Thus, the embodiment retains its advantages over the prior art even though it places a lens in the beam propagation path.

For many practical applications, a somewhat higher intensity at the beam edges is preferred. This type of beam is called here a "super-uniform beam" or "super top hat beam". Figure 44d shows an example of such a beam profile 4460. The described technique is well suited to the generation of super-uniform beams: they can easily be generated by forming the beam profile on the input side of the phase stepper into the envelope of the Fourier transform of the targeted super-uniform beam profile. Indeed, the described techniques are flexible and can be used to generate a wide variety of other beam profiles, such as beam profiles with multiple humps.

Figure 44E shows the result of an attempt to achieve a top hat profile without using a beam profile converter. The input Gaussian beam 4440 passes through a phase stepper 4425 that changes the phase without changing the intensity profile, as shown by curve 4441. Curve 4442, the final result at the illumination field 4426, is better than the Gaussian profile, but not as good as the result obtained with the profile converter. This system is simple because it does not require a beam profile converter. However, the beam at the illumination field is less uniform and/or less energy efficient than those shown in Figures 44b and 44c.

Up to now, one-dimensional uniform illumination has been considered. However, the extension to a two-dimensional distribution is straightforward, because the Gaussian beam profile of the input beam is separable into x and y factors. In these embodiments, the x- and y-directions are completely separate and can be treated independently. Therefore, these embodiments can be applied to obtain not only one-dimensional but also two-dimensional illumination distributions.
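The separability argument can be checked numerically: a two-dimensional Gaussian is the outer product of two one-dimensional Gaussians, so the x and y beam-shaping optics can be designed independently. A small sketch:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 101)
gx = np.exp(-x ** 2)          # 1-D Gaussian factor along x
gy = np.exp(-x ** 2)          # 1-D Gaussian factor along y
g2 = np.outer(gy, gx)         # 2-D Gaussian profile as an outer product

# Separability: the 2-D profile equals the product of the 1-D factors,
# so each axis can be shaped by its own converter/stepper pair.
assert np.allclose(g2, np.exp(-(x[None, :] ** 2 + x[:, None] ** 2)))
```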

Some applications require simultaneous illumination of multiple fields of view. For example, there are systems with multiple image sensors that are spatially separated. Simultaneous illumination of multiple fields of view can be achieved easily; Fig. 44F shows an example. The multiple-field illumination is achieved by inserting a grating in front of or behind the phase stepper 4425. The grating diffracts the coherent incident beam into multiple diffraction orders, and each diffraction order illuminates one field of view.

Figure 44f shows only two separate illuminated fields of view in order to clarify the operating principle. More than two illumination fields can easily be achieved by inserting a grating that generates more diffraction orders or by inserting multiple diffraction gratings. The positions of the illumination fields can be controlled by appropriately selecting the pitch and orientation of the grating(s). In Figure 44f, the orientation of the grating is set to be the same as that of the phase stepper so that the operating principle is clearly visible, but this is not required. The grating orientation can be set in any direction to place the illumination field at a desired position.
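The dependence of the illumination field positions on the grating pitch can be sketched with the grating equation. Normal incidence and a focusing element of focal length f are assumptions of this illustration:

```python
import math

def diffraction_order_positions(wavelength, pitch, focal_length, orders):
    """Lateral position of each propagating diffraction order at the
    illumination plane, from the grating equation sin(theta_m) = m*lambda/pitch
    (normal incidence assumed). All lengths in meters."""
    positions = {}
    for m in orders:
        s = m * wavelength / pitch
        if abs(s) < 1.0:  # keep only propagating orders
            positions[m] = focal_length * math.tan(math.asin(s))
    return positions

# Two symmetric fields from the +1 and -1 orders of a 10 um pitch grating:
pos = diffraction_order_positions(266e-9, 10e-6, 0.1, (-1, 0, 1))
```

A finer pitch pushes the fields farther apart, which is how the field positions are controlled by the pitch selection described above.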

High energy efficiency and good uniformity across the fields of view can also be achieved by designing appropriate grating groove profiles. For example, the depth and shape of the grooves can be adjusted to achieve correspondingly uniform illumination in each field of view. Very high energy efficiency can also be achieved by blazing the grating groove profile.

Thus, energy-efficient, uniform, coherent illumination is provided for multiple as well as single fields of view. The important features of the coherent uniform illuminator of the present invention are summarized below.

1. It can produce a top hat illumination profile without using a relay lens system; a transform lens or transform lens system may be used instead. A transform lens or lens system is simpler and more flexible than a relay lens system.

2. It can generate other beam profiles, such as a super-uniform beam profile.

3. It provides more flexibility in illumination system design.

4. Single or multiple field illumination can be easily obtained.

2. Auto focus system

Most high-resolution imaging systems require an autofocus system as a subsystem, and an interferometric defect detection system is no exception. In principle, an interferometric defect detection system could be operated without an autofocus system if the environment were quiet and the sample stage extremely precise. However, this ideal condition is rarely available in the real world. Thus, in general, it is desirable to have an autofocus system to ensure stable performance of the overall system.

The autofocus system is generally an important subsystem, and its performance is often critical to the performance of the overall system. However, performance is not the only requirement: the autofocus system must fit in the available space, and its cost should be reasonable. Embodiments of the present invention address these issues.

There are many different autofocus systems; however, they can be classified into two types, the off-the-lens type and the through-the-lens type. Off-the-lens autofocus systems have their own advantages. However, most high-precision imaging systems require a through-the-lens autofocus system, because this type is insensitive to environmental perturbations such as temperature changes, atmospheric pressure fluctuations, and the like.

In most prior art, high-precision through-the-lens autofocus systems use incoherent light sources such as LEDs and arc lamps, which are significantly less bright than lasers. The use of a less bright light source forces the prior art through-the-lens autofocus system to use a large etendue in order to deliver sufficient light to the focus signal detector. The large etendue makes the autofocus system physically large and expensive, as well as sensitive to drift and misalignment. According to some embodiments of the present invention, a laser is used as the light source. This change of light source not only provides a stronger focus signal but also simplifies the entire autofocus system. Other unique features are provided as well.

A single channel configuration according to one embodiment is shown in FIG. 45a, which illustrates a focus system arranged with respect to a compensation plate 130 and a group of high NA and low NA imaging lenses 116, 114. The focus system uses a single spatial-mode laser 4501 as the light source. Semiconductor lasers are good candidates. However, laser beams are generally quite unstable in their position and pointing direction. Because of this inherent instability, it is desirable not to couple the laser directly into the autofocus optical system; an unstable laser beam can cause errors in the focus signal.

According to some embodiments, the laser is therefore not directly coupled into the autofocus optical system. Instead, the laser beam is first passed through a long single-mode optical fiber 4502. The single-mode fiber is preferably at least a foot long, so that the cladding modes, which typically arise from imperfect coupling of the laser light into the fiber, die out. The single-mode optical fiber is a passive device that stabilizes the beam position and pointing direction by converting the original instabilities of the light source into output intensity changes, which can easily be calibrated out. A change in beam position or pointing direction changes the coupling efficiency of the laser beam into the single-mode fiber, and a change in coupling efficiency at the input end appears as a change in intensity at the output end.

The use of a single-mode fiber as a beam stabilizer is an important feature according to some embodiments. The output end of the fiber is imaged onto the sample plane 110 and onto a position sensitive detector (PSD) surface 4511. Since the autofocus light beam is focused obliquely onto the sample surface by the lens 4503, a defocus of the sample surface causes a lateral movement of the laser beam on the PSD surface 4511. A small tilt of the sample, on the other hand, moves the beam across the aperture of the imaging lens 4504 but does not change its position on the position sensitive detector 4511. Thus, the system measures the sample focus position but not the sample tilt. Therefore, by reading the beam position on the PSD, the amount of focus change of the sample can be determined. A computer or controller connected to the PSD reads the PSD output and processes it to determine the focus error. If the focus error is larger than a predetermined value, the computer or controller sends an appropriate focus correction signal to the focus actuator 4518 to perform the correction. The focus error detection and correction can be operated in an open or closed loop. PSDs are readily available and provide a variety of choices.
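The relation between the PSD reading and the focus error can be sketched with simple geometry. The factors below are illustrative assumptions, not taken from the patent: a sample height change dz displaces the obliquely reflected beam by 2·dz·sin(θ) at the sample, where θ is the incidence angle from normal, and the optics image this displacement onto the PSD with magnification M:

```python
import math

def focus_error_from_psd(psd_shift, magnification, incidence_angle_deg):
    """Convert a lateral PSD beam shift (meters) into a sample focus
    error (meters), inverting shift = 2 * M * dz * sin(theta).
    The geometry factor is an assumption of this sketch."""
    theta = math.radians(incidence_angle_deg)
    return psd_shift / (2.0 * magnification * math.sin(theta))
```

A controller loop would compare this value against a tolerance and drive the focus actuator when the tolerance is exceeded.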

The described embodiment preferably does not use a beam splitter to couple the autofocus beam into or out of the imaging system. Instead, a small prism (or mirror) 4505 is used. This optical coupling method has the following advantages over a beam splitter.

1. Simple optical connection.

2. Space occupancy is small.

3. Less chance of mechanical collision with other components.

4. Only the mirror (specular) component is collected; scattered light is rejected. Note that a portion of the autofocus light may be scattered by the sample. If this variable scattered light reached the focus sensor, it could cause focus errors.

5. Maintain small size of auto focus system.

6. The aberrations of the autofocus optics can be made very small because the beam diameter is small.

Thus, many of the described embodiments not only promise good performance but are also cost effective.

The performance of an autofocus system depends considerably on the choice of polarization. S-polarized light, whose electric field is parallel to the sample surface, shows smaller variations of reflectivity and phase shift than p-polarized light over most sample types. This means that s-polarized light can provide more consistent performance than p-polarized light. According to an optional embodiment, s-polarization is used, as shown in Figs. 45A to 45C. The s-polarized light is represented by arrays of circular dots in the beam paths. There are several ways of ensuring that only s-polarized light is picked up from the source. One approach is simply to install a polarizer in the beam path. Another approach is to use a polarization-preserving single-mode fiber between the source laser and the entrance to the autofocus optical system. A polarization-preserving fiber transmits only one polarization while rapidly attenuating the other. By orienting the fiber core in the correct direction, the polarization-preserving fiber can be made to transmit only s-polarized light. If the light from the laser source is polarized, the polarization-preserving single-mode fiber can provide significantly higher energy efficiency than other types of fiber.

A common problem with most autofocus systems is that there is a time delay between focus error detection and its correction, due to the delay in the focus signal processor and the slow response of the focus-error correction system. This is one of the main focus error sources in high-speed scanning systems, where samples are scanned quickly under the imaging system. In this case, to reduce the focus error, the focus error must be detected before the sample is imaged, and the focus error correction signal must be fed forward to the focus-error correction system.

To detect the focus error earlier, the autofocus beam must strike the sample surface ahead of the imaging point in the sample scan direction. This makes it necessary to move the autofocus beam position laterally on the sample surface to accommodate changes in scan speed and direction. The autofocus beam position on the sample surface can easily be moved laterally by moving the output end of the fiber in the lateral direction. This method works because the output end of the fiber is imaged onto the sample surface, as described previously.
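The required lead distance follows from simple kinematics: the spot must be placed ahead by at least the distance the sample travels during the total detect-and-correct latency. The following minimal Python sketch illustrates this; the function name and all numbers are invented for illustration and are not taken from the patent.

```python
def autofocus_lead_distance(scan_speed_mm_s: float,
                            sensing_delay_s: float,
                            actuator_delay_s: float) -> float:
    """Required forward lead of the autofocus spot, in millimeters.

    The spot must lead the imaging point by at least the distance the
    sample travels during the total detect-and-correct latency.
    """
    total_latency = sensing_delay_s + actuator_delay_s
    return scan_speed_mm_s * total_latency

# Illustrative numbers: 100 mm/s scan, 2 ms signal processing delay,
# 3 ms focus-actuator response time.
lead = autofocus_lead_distance(100.0, 0.002, 0.003)
```

Because the scan speed and direction can change, the lateral spot position must be adjustable by at least this amount, which is what the fiber-end or tiltable-plate movement described above provides.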

If the lateral movement needs to be precisely controlled, a tiltable glass plate 4512 can be used as shown in Figure 45A. The beam can be moved laterally by tilting the glass plate. If the input beam is moved, the output beam shifts by the corresponding amount. The relative position between the beam and the PSD is maintained by introducing a tiltable glass plate in front of the PSD, or by simply shifting the position of the PSD 4511.

The single-channel autofocus system shown in Figure 45A is generally not very stable because it is sensitive to mechanical instability and temperature variations. One way to reduce this kind of problem is to set up multiple channels in a symmetrical way. A multi-channel autofocus system constructed in a symmetrical manner is insensitive to common-mode mechanical motion. Figure 45B shows an example of a multi-channel autofocus system. The figure shows two channels configured in a symmetrical manner. The beam position on the sample plane is moved by tilting the glass plate 4512. The PSD is moved by a linear motion mechanism 4511.

Figure 45C shows another example of a two-channel autofocus system. In this case, the input and output beams are moved by tilting the glass plate, so the PSD need not be moved. This structure uses fewer components, but it makes beam alignment more difficult because the two channels are coupled through the shared tiltable glass plate.

In Figures 45B and 45C, if the beam path of one channel overlaps the beam path of the other channel, a beam splitter 4513 is used to direct the return beam to the PSD. However, this brings back the problems associated with beam splitters. One problem is the loss of light energy. The use of a non-polarizing beam splitter sacrifices at least 75% of the available light energy. This energy loss is acceptable for most samples, but not for samples of very low reflectivity.

Another problem is that part of the return beam from one channel re-enters the source laser of the other channel. That is, the channels interfere with each other at their sources. This interference destabilizes the source lasers and can cause focus errors. To keep the source lasers stable, they must be optically isolated from each other. There are two solutions to this problem. One solution is to align the beam paths of the two channels so that they do not overlap each other, as shown in Figure 45D. In this arrangement, the return beam can still strike the core of the optical fiber, but whenever this happens, the direction of the return beam deviates too far from the acceptance angle of the single-mode fiber, so the return beam is not coupled into the fiber.

The other solution is to use polarizing beam splitters rather than non-polarizing ones and to place a Faraday rotator 4514 in the beam path, as shown in Figures 45B and 45C. The polarizing beam splitter transmits p-polarized light and reflects s-polarized light. Thus, the laser beam passing through the polarizing beam splitter becomes fully linearly polarized. The Faraday rotator is preferably designed to rotate the incident linear polarization by 45 degrees. Each beam passes through the Faraday rotator twice: once on its entrance path and once on its return path.

Thus, the linear polarization of the laser beam is rotated by 90 degrees by the two passes through the Faraday rotator. That is, the original p-polarized light that passed through the beam splitter on the incident path arrives at the beam splitter on the return path as s-polarized light. The beam splitter on the return path therefore reflects the entire beam toward the PSD and does not transmit the returning laser beam toward the source laser. In this way the Faraday rotator isolates the source lasers from one another. If the beam splitter 4513 and the position-sensitive detector 4511 are properly rotated about the beam axis, the laser beam can be 100% s-polarized when it arrives at the sample. Thus, this method simultaneously achieves high energy efficiency, no channel-to-channel interference, and s-polarization at the sample surface.
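The polarization bookkeeping of the double-pass Faraday rotator can be checked with elementary Jones calculus. The following sketch (illustrative only; the patent contains no code) represents the 45-degree Faraday rotation as a 2-D rotation matrix and verifies that a p-polarized input returns s-polarized after the double pass.

```python
import math

def rotation(theta):
    """Jones matrix of a polarization rotation by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(m, v):
    """Apply a 2x2 Jones matrix to a Jones vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# p-polarized beam transmitted by the polarizing beam splitter.
p_in = [1.0, 0.0]

# Faraday rotation is non-reciprocal: the polarization is rotated by
# +45 degrees on the way in AND +45 degrees again on the way back
# (unlike a wave plate, whose retardation unwinds on the return pass),
# giving 90 degrees in total.
faraday = rotation(math.radians(45))
out = apply(faraday, apply(faraday, p_in))
# `out` is now s-polarized (all energy in the second Jones component),
# so the polarizing beam splitter reflects the whole return beam toward
# the PSD instead of transmitting it back into the source laser.
```

This also shows why a quarter-wave plate, which is reciprocal, cannot deliver s-polarization at the sample while still isolating the laser, as noted below.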

The use of quarter-wave plates instead of Faraday rotators also achieves high energy efficiency and channel-to-channel non-interference. However, s-polarization cannot then be maintained at the sample surface. Therefore, in many embodiments the Faraday rotator is preferred over the quarter-wave plate.

Figure 45E shows a top view of a two-channel autofocus system. The autofocus channels are rotated about the sample normal so that laser light diffracted by the sample avoids entering the outgoing beam paths. This method is generally very effective at avoiding diffracted light from the sample, because the diffracted light is highly localized in the x- and y-directions at the pupil plane.

In Fig. 45E, the two channels are disposed close to each other. If there is mechanical drift or creep, the two closely spaced channels will move or creep in the same direction. The focus signal extracted from the multiple channels is made insensitive to this kind of common-mode motion of the channels.

Figure 45F shows another example of a multi-channel structure. In this case, the beam paths of the two channels cross at the focal point on the sample but are otherwise completely separated. This structure requires more parts, but it is energy-efficient and does not require a Faraday rotator. In addition, alignment of the beam paths is easier in this structure because the two channels are not coupled at all. As the example shows, this embodiment is not only preferred but also simple and flexible in its physical alignment.

Key features of the new autofocus system are summarized as follows.

1. Through-the-lens structure.

2. Laser as light source.

3. Single beam (small etendue).

4. Stable source by using single mode fiber.

5. S-polarization at the sample surface.

6. No inter-channel interference.

7. Use of a small prism or mirror to couple the beams into and out of the imaging system (no beam splitter/combiner for this purpose).

8. Good rejection of diffracted light.

9. Ability to move the laser spot laterally in the sample plane for feed-forward focus error correction.

10. Symmetrical dual or multiple channels insensitive to ambient disturbances.

The advantages of the new autofocus system are summarized as follows.

1. Simple system.

2. High light throughput.

3. High efficiency.

4. Reliable performance.

5. Insensitivity to wafer pattern.

6. Little impact on the imaging path.

3. Serrated Aperture

Most optical systems require at least one aperture that defines the numerical aperture. Most apertures are made of a thin metal plate with a sizable hole in the middle. This kind of aperture is easy to produce, but its sharp edge produces long-range diffraction in the image plane, causing long-range interference between different parts of the image. Long-range interference is one of the major contributors to wafer pattern noise.

In order to reduce this undesirable effect, the aperture edge is preferably softened. That is, the transition between the 100%-transmission area and the no-transmission area should be gradual, not abrupt. A gradual transition can be achieved in many different ways. Serrating the aperture edge is chosen here because, if done correctly, it has many advantages over other methods. One advantage is that the teeth can be manufactured easily: they can be machined directly into a thin metal plate, or they can be produced by etching using conventional semiconductor fabrication technology.

One of the most straightforward ways of making a gradual-transition aperture is to add a gradually varying absorbing coating near the edge of the aperture. This method is well known in the art. However, although simple in theory, it is difficult in practice to produce a suitable gradually varying absorbing coating, and the coating can exhibit undesirable side effects, in particular an unwanted phase change. Another prior-art approach is to use a negative power lens made of an absorbing material. Its effect is very similar to that of a gradual coating, but it has the same kind of undesirable side effects.

U.S. Patent No. 6,259,055 describes a serrated aperture. However, it does not provide any diffraction formula that can be used to design the serrated aperture properly, and its qualitative description of the diffraction properties of the serrated aperture is not accurate. In accordance with an embodiment of the present invention, a rigorous diffraction formula is developed for the serrated aperture, along with a description of how to use it to design the serration correctly.

A schematic diagram of the serrated aperture is shown in Figure 46A. The teeth of aperture 4606 have a periodic structure 4608 of constant pitch rather than a random pitch. Teeth of non-periodic or random structure are not considered in the present invention because their diffraction patterns do not have a desirable shape. Even with a desirable periodic structure, a large amount of diffraction is unavoidable because of the abrupt transmission change at the sawtooth edges.

However, the diffraction pattern from periodic teeth decomposes into discrete orders. The lowest order, the zero order, arises from the azimuthal average of the transmission field and is consequently unaffected by the sharp edges of the sawtooth. This means that the diffraction pattern of the zero order is identical to the pattern from a true gradual aperture. Therefore, the zero order is the diffraction pattern we want to obtain from the serrated aperture.

A true gradual aperture produces only the zero diffraction order. The serrated aperture, however, produces not only a zero order but also higher diffraction orders. These higher orders are undesirable. To make a serrated aperture work like a true gradual aperture, the inventors had to ensure that only the zero order passes to the image sensor and that all higher diffraction orders miss it. In the case of linear-periodic sawteeth, it is easy to steer all higher diffraction orders away from the image sensor. (See U.S. Patent No. 7,397,557.) In the case of circular-periodic sawteeth, however, it is not so easy to determine where all the higher diffraction orders go. The inventors therefore developed a rigorous diffraction formula to predict where all the higher orders go.

The following symbols are used in all of the following equations.

(ρ, φ): polar coordinates in the aperture plane

(r, θ): polar coordinates in the image plane

Figure 112011000096113-pct00057

Jk(r): the kth-order Bessel function of the first kind

N: total number of teeth

ℱ: Fourier transform operator

The far-field diffraction pattern produced by an object is the Fourier transform of the transmission pattern of the object. However, in order to apply the Fourier transform to the diffraction calculation for the serrated aperture, the coordinates (ρ, r) must be scaled appropriately. There are two lengths that can be used as scaling units: the wavelength and the focal length of the optical system. Because (ρ, r) is a Fourier-transform variable pair, if one of the two is scaled by the wavelength, the other should be scaled by the focal length. The most common convention is that ρ is scaled by the focal length and r is scaled by the wavelength.

If ρ is expressed in units of the focal length, it is equal to the image-space direction cosine of the ray passing through the pupil at radius ρ from the center. The maximum value of ρ expressed in focal-length units is called the numerical aperture (NA) of the optical system. In other words, the image-space position expressed in wavelength units and the ray direction cosine constitute a convenient Fourier-transform variable pair. It is also possible to work with the opposite scaling convention, in which ρ is scaled by the wavelength and r by the focal length. In this case, r is equal to the aperture-space direction cosine of the ray arriving at r in the image plane. The two conventions are equivalent.

The diffraction formula derived below uses such an appropriately scaled coordinate system. The diffraction formula does not change if the coordinate scaling is switched between the two conventions. Thus, one can freely switch between the two scaling conventions without worrying about changing the diffraction formula. Switching the scaling convention is substantially the same as changing the interpretation of the coordinate variables, and this change of interpretation can provide considerable intuition about the diffraction formula.

The diffraction formula will be derived only for coherent normal illumination. This is because diffraction in the incoherent case is just the intensity sum of multiple coherent cases, and the diffraction formula for oblique illumination can be derived directly from that for normal illumination using the shift theorem of the Fourier transform. (Reference: Joseph W. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts & Company, Englewood, CO, 2005, p. 8.) The serration can have a variety of different tooth shapes, and the details of the diffraction pattern depend on the tooth shape. Figure 46A shows a serration with linear teeth as an example. However, the properties of each diffraction order that are of most interest do not depend on the shape of the teeth but only on the tooth pitch.

The amplitude transmission P(ρ, φ) of the serrated aperture can be expressed as follows.

Figure 112011000096113-pct00058

Here, w(ρ) is the opening width between two adjacent teeth.

To obtain the diffraction pattern, the Fourier transform of equation (83) is needed. If P(ρ, φ) had the separated-variable form P(ρ, φ) = f(ρ)·g(φ), the Fourier transform could easily be carried out using a weighted Hankel transform. (See, for example, Goodman, Introduction to Fourier Optics.) Unfortunately, the form of P(ρ, φ) in equation (83) is not a separated-variable form. However, it can still be Fourier transformed after a few extra steps, and there are two ways to do this. One way is to represent the sum of the N rectangle functions as a weighted sum of exp(jmφ) functions, where m is an integer, following the process given in Exercise 2-7 of the reference. The other way is to convert the sum of the N rectangle functions into an integral of separated-variable functions and use the weighted Hankel transform. Only the second method is shown here, but both methods are exact and produce the same result.

The sum of the N rectangle functions can easily be converted into an integral of separated-variable functions using a delta function and a dummy variable ρ'. That is:

Figure 112011000096113-pct00059

Then, P(ρ, φ) can be written in the following form.

Figure 112011000096113-pct00060

Now, the Fourier transform is applied to each part of P(ρ, φ). The Fourier transform of the first part can be obtained using the Fourier-Bessel transform.

Figure 112011000096113-pct00061

The Fourier transform of the second part can be obtained using a weighted Hankel transform.

Figure 112011000096113-pct00062

Figure 112011000096113-pct00063

Figure 112011000096113-pct00064

The transform of the rectangle function in equation (89) can be carried out as follows.

Figure 112011000096113-pct00065

Now, Ck can be expressed as:

Figure 112011000096113-pct00066

The Hankel transform of the delta function is as follows.

Figure 112011000096113-pct00067

Now, equation (87) can be expressed as follows.

Figure 112011000096113-pct00068

The Fourier transform of P(ρ, φ) can now be expressed as follows (the dummy integration variable ρ' has been renamed ρ):

Figure 112011000096113-pct00069

Equation (95) shows that the total diffraction consists of discrete diffraction orders. Taking the zero order out of the second term gives:

Figure 112011000096113-pct00070

Using the relation sin(−x) = −sin(x) to combine each +m diffraction order with the corresponding −m diffraction order into a single term gives:

Figure 112011000096113-pct00071

Equation (97) is the final result of the derivation of the diffraction formula. Unfortunately, it still contains a one-dimensional integral that must be carried out numerically. However, numerical one-dimensional integration can be done much more accurately and quickly than the numerical two-dimensional integration that a two-dimensional Fourier transform requires.
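Since equations (97)-(98) are reproduced above only as images, the following Python sketch illustrates the kind of one-dimensional radial integration involved, assuming a zero-order Fourier-Bessel (Hankel) form F(r) = 2π ∫ f(ρ) J0(2πρr) ρ dρ, with ρ in direction-cosine units and r in wavelength units as in the scaling convention described earlier. The function names and quadrature choices are illustrative assumptions, not the patent's implementation.

```python
import math

def bessel_j0(x):
    """J0(x) via the integral representation
    J0(x) = (1/pi) * integral_0^pi cos(x*sin(t)) dt,
    evaluated with the trapezoidal rule (very accurate here because the
    integrand's odd derivatives vanish at both endpoints)."""
    n = 200
    h = math.pi / n
    total = 0.5 * (math.cos(0.0) + math.cos(x * math.sin(math.pi)))
    for k in range(1, n):
        total += math.cos(x * math.sin(k * h))
    return total * h / math.pi

def hankel0(f, rho_max, r, n=400):
    """Zero-order Hankel (Fourier-Bessel) transform by Simpson's rule:
    F(r) = 2*pi * integral_0^rho_max f(rho)*J0(2*pi*rho*r)*rho d(rho)."""
    if n % 2:
        n += 1
    h = rho_max / n
    s = 0.0
    for i in range(n + 1):
        rho = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * f(rho) * bessel_j0(2 * math.pi * rho * r) * rho
    return 2 * math.pi * s * h / 3

# Sanity check: for a clear aperture (f = 1) with rho_max = NA, the peak
# amplitude F(0) equals the aperture "area" pi*NA**2.
na = 0.9
peak = hankel0(lambda rho: 1.0, na, 0.0)
```

A one-dimensional sum of a few hundred terms like this replaces what would otherwise be a full two-dimensional numerical Fourier transform over the aperture plane.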

The first two terms in equation (97) constitute the zero diffraction order, which is the part we want from the serrated aperture. Writing the zero diffraction order separately:

Figure 112011000096113-pct00072

The last term of equation (97) contains all the higher diffraction orders, which should be kept off the image sensor. However, it is not necessary to pay attention to all the non-zero diffraction orders, because the first diffraction order is the strongest and lies closest to the zero order in the image plane; all the other higher orders are weaker than the first order and land farther from the zero order. Therefore, to make the serrated aperture work, it suffices to check only the first diffraction order and place it outside the image sensor. Extracting the first diffraction order from the last term of equation (97):

Figure 112011000096113-pct00073

The first-order term has its maximum intensity along the directions where cos(Nφ) = ±1. Therefore, the amplitude along the directions of maximum intensity is as follows.

Figure 112011000096113-pct00074

To make the serrated aperture work, it is sufficient to place the first diffraction order outside the image sensor.

Both the sharp edges of apertures and the sharp edges of arbitrary obscurations can produce long-range diffraction effects. The same serration technique used for the aperture can be applied to an obscuration to reduce the long-range diffraction effect it causes. By Babinet's principle, the diffraction formula for a serrated obscuration is the same as for the serrated aperture except for a reversal of the amplitude sign. (Reference: Max Born and Emil Wolf, Principles of Optics, Cambridge University Press, 1999.) Thus, a new derivation of the diffraction formula for obscurations is not required.

The analytical diffraction formula applies generally to teeth of arbitrary shape. However, to obtain numerical values from the formula and see the behavior of the diffraction pattern, a specific tooth shape must be selected and specified explicitly by the function w(ρ). Teeth with the linear shape shown in Fig. 46A are easy to design and easy to manufacture. Therefore, linear teeth are selected for the numerical evaluation of the diffraction pattern. For linear sawteeth, the function w(ρ) takes the form

Figure 112011000096113-pct00075

The diffraction pattern from serrated apertures with other tooth shapes can easily be calculated in the same way as for linear teeth by changing the function w(ρ) in the diffraction formula appropriately. Under the conditions described above, only the zero and first diffraction orders need to be examined in order to design a suitable serrated aperture. Therefore, only the zero and first orders are considered here.
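As a hedged illustration of why the zero order behaves like a gradual aperture: the zero order sees only the azimuthally averaged transmission, which for linear teeth falls off smoothly across the serrated annulus. Since equation (101) appears above only as an image, the linear open-fraction profile assumed below is an illustration consistent with the surrounding text, not the patent's exact formula.

```python
# Assumed effective (azimuthally averaged) amplitude transmission seen
# by the ZERO diffraction order of a serrated aperture with linear
# teeth: fully open inside rho1, fully blocked beyond rho2, and a
# linearly decreasing open fraction in between.

def zero_order_transmission(rho, rho1, rho2):
    """Azimuthally averaged amplitude transmission at pupil radius rho."""
    if rho <= rho1:
        return 1.0
    if rho >= rho2:
        return 0.0
    return (rho2 - rho) / (rho2 - rho1)

# The serration thus acts on the zero order like a true gradual
# ("soft-edged") aperture, e.g. with rho1 = 0.8*NA and rho2 = 0.9*NA:
na = 0.9
vals = [zero_order_transmission(r, 0.8 * na, na) for r in (0.5, 0.76, 0.9)]
```

This smooth profile is what suppresses the long-range diffraction tails shown in Figs. 46B and 46C.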

Figs. 46B and 46C show the radial distribution of the intensity of the zero diffraction order in the image plane. Equations (98) and (101) were used for the numerical calculations. The values were normalized by the peak amplitude of the diffraction pattern of the non-serrated aperture. That peak amplitude is located at the center of the diffraction pattern (r = 0) and its value is:

Figure 112011000096113-pct00076

The diffraction intensity of the non-serrated aperture is included for comparison. Both serrated apertures have the same maximum aperture, NA = 0.9, but their tooth lengths are different. In Fig. 46B, curve 4612 represents the image-plane intensity of the zero diffraction order of the serrated aperture (ρ1 = 0.8 NA, ρ2 = 0.9 NA), and curve 4610 that of the non-serrated aperture. In Fig. 46C, curve 4614 represents the image-plane intensity of the zero diffraction order of the serrated aperture (ρ1 = 0.7 NA, ρ2 = 0.9 NA), and curve 4616 that of the non-serrated aperture. The conclusions from Figs. 46B and 46C are as follows.

(1) The serrated aperture produces much less long-range diffraction in the image than the non-serrated aperture. This is exactly what the inventors wanted from the serrated aperture.

(2) The longer the sawteeth, the smaller the long-range diffraction in the image.

(3) The serration has little effect on the shape of the image near its center.

(4) The longer the sawteeth, the smaller the peak intensity of the image.

The sawteeth reduce the long-range diffraction amplitude, but they also reduce the peak height of the zero diffraction order because they inevitably reduce the effective aperture area. This is an undesirable side effect of the serrated aperture, and it arises in gradual apertures in general. Therefore, a good trade-off between the two effects must be made when designing the serration.
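This trade-off can be quantified, since the zero-order peak amplitude is proportional to the effective (azimuthally averaged) open area of the aperture. The sketch below again assumes the linear open-fraction profile (equation (101) appears above only as an image), and compares the two serrations of Figs. 46B and 46C; the function name and numbers are illustrative.

```python
import math

def peak_amplitude_ratio(rho1, rho2, na, steps=100000):
    """Zero-order peak amplitude of a serrated aperture relative to a
    clear aperture of the same NA (ratio of effective open areas)."""
    # Fully open disc of radius rho1.
    area = math.pi * rho1 ** 2
    # Serrated annulus rho1..rho2, weighted by the assumed linear open
    # fraction, integrated with the midpoint rule.
    h = (rho2 - rho1) / steps
    for i in range(steps):
        rho = rho1 + (i + 0.5) * h
        frac = (rho2 - rho) / (rho2 - rho1)
        area += 2 * math.pi * frac * rho * h
    return area / (math.pi * na ** 2)

# Longer teeth (smaller rho1) suppress long-range diffraction better
# but cost more peak amplitude:
r_short = peak_amplitude_ratio(0.8 * 0.9, 0.9, 0.9)  # as in Fig. 46B
r_long = peak_amplitude_ratio(0.7 * 0.9, 0.9, 0.9)   # as in Fig. 46C
```

The longer-tooth design pays for its cleaner image tail with a lower peak, which is the trade-off described above.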

As described above, to make the serrated aperture work, preferably only the zero diffraction order should reach the image sensor, with all higher diffraction orders missing it. Note that the second and all higher diffraction orders land farther from the zero order than the first order does. That is, if the first order misses the image sensor, all higher orders automatically miss it as well. Therefore, the inventors paid attention only to the first order.

The inventors have found from the diffraction theory of periodic structures that the smaller the tooth pitch, the farther the first order lands from the zero order. If the sawteeth can be manufactured with a sufficiently fine pitch, the first diffraction order can be placed far enough away from the zero order. However, the teeth here are not on a straight edge but on the edge of a round aperture, and in this case the diffraction theory of linear periodic structures cannot be applied directly.

Even if the majority of the first-order light lands far enough away from the zero order, a small amount of first-order light may still lie between the zero order and the main part of the first-order light. This can be a serious problem if that intervening first-order light is not negligible. It does not seem possible to evaluate the intensity of this kind of stray light in a simple way; therefore, numerical calculation is adopted in the present invention.

Equations (100) and (101) were used for the numerical calculation of the first-diffraction-order intensity distribution. The same normalization factor as in the zero-order case, equation (102), was used to normalize the intensity.

In Fig. 46D, curve 4618 represents the radial distribution of the light in the first diffraction order for N = 1000. It indicates the following facts.

(1) The first-diffraction-order light is spread over a wide area. This is very different from the case of teeth on a straight edge.

(2) There is practically no light between the zero order and the main part of the first order. This is an important feature of the serrated aperture; it is what allows the serrated aperture to work. The inventors can place the image sensor inside the zone where no first-order light exists. If the image sensor is too large to fit inside this zone, the zone can be enlarged by increasing the number of teeth, because the radius of the zone is approximately proportional to the number of teeth.

Fig. 46E shows the radial distribution of the first-order light for different numbers of teeth around the aperture circumference. Curve 4601 corresponds to 10 teeth around the aperture circumference, curve 4602 to 100 teeth, curve 4603 to 1000 teeth, and curve 4604 to 10,000 teeth. This analysis indicates the following facts.

(1) The radius of the no-first-order-light zone is generally proportional to the number of teeth. This is especially true for N larger than 1000.

(2) The serrated aperture does not work well if the number of teeth is less than about 100, because the no-first-order-light zone shrinks rapidly as the number of teeth is reduced.

(3) For N larger than 1000, the radius of the no-first-order-light zone is approximately

Figure 112011000096113-pct00077

This value is consistent with the inventors' intuition based on diffraction by periodic structures. Equation (103) can be used to determine the number of teeth, or the equivalent tooth pitch, needed to place the first and all higher diffraction orders outside the image sensor.
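Combining equations (105)-(107), the first order lands a distance of about f·θ1 = Nλ/(2π·NA) from the zero order in the image plane, so the minimum tooth count for a given sensor size can be estimated as follows. This Python sketch uses the small-angle, large-N regime stated in the text; the function names, the safety margin, and the example numbers are illustrative assumptions.

```python
import math

def min_tooth_count(sensor_half_size_m, wavelength_m, na, margin=1.5):
    """Minimum N so the first diffraction order lands beyond the sensor.

    From theta1 = lambda/p (eq. 107), p = 2*pi*R/N (eq. 106) and
    R = f*NA (eq. 105), the first order lands about
    f*theta1 = N*lambda/(2*pi*NA) from the zero order in the image
    plane, so N must scale with the sensor size.
    """
    n = 2 * math.pi * na * sensor_half_size_m * margin / wavelength_m
    return math.ceil(n)

def tooth_pitch(aperture_radius_m, n_teeth):
    """Physical tooth pitch p = 2*pi*R/N (eq. 106)."""
    return 2 * math.pi * aperture_radius_m / n_teeth

# Illustrative numbers (not from the patent): 10 mm sensor half-size,
# 266 nm light, NA = 0.9, 50% safety margin.
n = min_tooth_count(10e-3, 266e-9, 0.9, margin=1.5)
```

Such estimates also make clear why the text requires N of at least several hundred: for any practically sized sensor, the no-first-order-light zone must be many thousands of wavelengths across.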

From equation (103), the diffraction angle (more precisely, the direction cosine) θ1 of the first diffraction order can be calculated as follows.

Figure 112011000096113-pct00078

where f is the focal length of the lens system disposed between the aperture and the image plane; and

The physical radius of the aperture, r, is given by

Figure 112011000096113-pct00079

And, the physical pitch of the teeth, p, is expressed as

Figure 112011000096113-pct00080

The following equation follows from equations (103) to (106).

Figure 112011000096113-pct00081

Equation (107) is identical to the formula for the diffraction angle of teeth on a linear edge, which in turn is the same as the expression for a periodic structure such as a grating. This means that if the pitch of the teeth is much smaller than the radius of curvature of the edge, the curvature of the edge can be ignored and the teeth on a curved edge can be treated like teeth on a straight edge.

This fact can also be seen intuitively, because a short section of a curve can be regarded as a straight line. This means that the edge-serration technique can be applied to an edge of arbitrary shape, provided the edge has no sharp corners and the pitch of the teeth is significantly smaller than the radius of curvature of the edge. For example, consider an aperture with an irregular shape. In this case, the curvature of the aperture edge changes along the edge. However, if the aperture has no sharp corners, the tooth pitch can be obtained by satisfying equation (107).

The tooth pitch need not be the same everywhere. If the pitch changes slowly along the edge, the serration technique described in the present invention can be expected to work at least to some extent.

The advantages of the serrated aperture are summarized as follows.

1. Less long-range diffraction: the tail of the diffraction-limited image is reduced in the region away from the central core of the image.

2. By carefully selecting the tooth pitch, the first and higher diffraction orders can be kept away from the image sensor.

3. No phase change introduced.

4. Easy to manufacture.

VI. Applications of Interferometric Defect Detection and Classification

The above-described embodiments are very suitable for high-resolution optical inspection or measurement that can benefit from the determination of both the amplitude and the phase of the optical signal. A partial list of applications: defect detection and classification on bare wafers; crystal defect detection; defect review; and detection and classification of reticle defects, including defects in reticles with phase-shifting components.

Many advantages of the various embodiments have been described, including the following: high defect signal; high defect-detection sensitivity; low false detection rate; low sample pattern noise; the ability to capture different types of defects simultaneously; the ability to distinguish between voids and particles, or mesas and valleys; very accurate and reliable defect classification; improved detection consistency; very effective utilization of the dynamic range of the image sensor for the defect signal amplitude, with improved illumination uniformity across the field of view; quick set-up of operating modes; use of a mode-locked laser, at lower cost than a CW laser; elimination of the need for speckle busting, which lowers cost; the ability to use flood illumination, which reduces the chance of wafer damage; the ability to use coherent illumination, which leads to well-defined diffraction orders and enables clean Fourier filtering; a simple system structure that lowers cost; removal of pupil or aperture-stop relays, which lowers cost and reduces energy loss; and efficient use of energy.

Although the present invention has been described in some detail for purposes of clarity, it will be understood that the processes and apparatus described herein can be implemented in many alternative and different ways within the scope of the present invention. Accordingly, the above-described embodiments are illustrative and are not intended to limit the present invention. The present invention includes the changes and modifications that fall within the spirit of the appended claims.

110: sample 114, 116: lens systems
118: beam 122: phase controller
140: sensor system 142: controller
146: display unit 154: memory unit
220: lower glass wedge 222: upper glass wedge
410: liquid crystal 420, 422: electrodes
530: movable member

Claims (57)

1. A common-path interferometric imaging system for detecting defects in a sample, comprising:
    an illumination source (112) for generating light (118) and directing it to the sample (110);
    an optical imaging system (100) having an object plane and an image plane, configured to position the sample in the object plane and to collect a scattered component (128) and a specular component (124) of the light from the sample;
    a variable phase control system (122, 130) operatively disposed with respect to the optical imaging system and configured to adjust a relative phase between the scattered component and the specular component; and
    a sensing system (140) disposed in the image plane and adapted to sense at least a portion of the combination of the scattered and specular components after the relative phase is adjusted, and to generate an electronic signal therefrom;
    wherein the scattered component and the specular component have a relative phase.
  2. The system according to claim 1, further comprising a sample positioning system (150) configured to position the sample relative to the sensing system and to provide sample position information corresponding to the electronic signal.
  3. The system of claim 1, further comprising a signal processor (152), operatively connected to or integrated with the sensing system, configured to receive the electronic signal from the sensing system and to compare it with a reference signal to determine the presence or absence of a defect.
  4. The system according to claim 1, wherein the variable phase control system is disposed at or near the aperture stop of the optical imaging system, or at or near a conjugate plane of the aperture stop.
  5. The system of claim 1, wherein the variable phase control system is configured to adjust the relative phase by adjusting the phase of the specular component.
  6. The system according to claim 1, further comprising a variable attenuation system (224) disposed in the path of the specular component between the sample and the sensing system.
  7. The system according to claim 6, wherein the amplitude of the specular component is attenuated to increase the signal modulation at the sensing system.
  8. The system of claim 1, further comprising one or more polarizing optical components (1060, 1062) selected and arranged to control the polarization of the light incident on the sample, the polarization of the specular component from the sample reaching the sensing system, and the polarization of the scattered component from the sample reaching the sensing system.
  9. The common-path interferometric imaging system of claim 1, wherein three or more comparisons of the same sample point, made with different relative phase shifts, are used to derive the amplitude and phase of a defect on the sample.
  10. The system of claim 1, wherein the illumination source (2818a, 2818b) generates light at multiple wavelengths and the system comprises one or more wavelength splitters (2872) for simultaneous multiple-wavelength operation, each disposed in the optical path of the specular and scattered components and selectively transmitting a portion of the wavelengths while reflecting the remaining wavelengths,
    wherein the variable phase control and sensing systems comprise a system (2870a, 2870b) for each wavelength, arranged so that the relative phase of the specular and scattered sample components can be adjusted for each wavelength, and so that the signals from the specular and scattered components for each wavelength can be compared simultaneously by a signal processor with a reference signal of the same wavelength and position.
  11. The system of claim 1, further comprising a Fourier-plane filtering system (730) for selectively blocking light at or near the aperture stop of the optical imaging system, or at or near a conjugate plane of the aperture stop.
  12. The system of claim 1, wherein a portion of the light directed toward the sample is reflected from the surface of the sample and a portion is transmitted through the sample, generating two complementary sets of specular and scattered components; the system further comprises a second imaging system (2402b), a compensation plate, a second variable phase control system (2470b), and a second sensing system (2440b); and the signals corresponding to the specular and scattered components of the transmitted and reflected beams are compared by a signal processor (152) with computer-generated reflected and transmitted reference signals corresponding to the same position on the sample.
  13. The system of claim 9, further comprising a processor (152) programmed to accept the derived amplitude and phase of a defect and to classify the defect using similarities and differences between amplitude and phase characteristics.
  14. The system of claim 1, wherein the optical imaging system comprises one or more beam splitters (2872), multiple phase controller systems (2870a, 2870b), and multiple sensing systems (2840a, 2840b) that enable the simultaneous generation of a plurality of different signals, each comprising a combination of the specular and scattered components associated with a point on the sample.
  15. The system of claim 1, further comprising, at a pupil plane, an aperture plate (4606) having serrated edges separating opaque and transmitting regions, each serration configured to diffract the first and higher diffraction orders away from the sensing system, wherein the serrations are spaced apart by a distance less than the wavelength of the light multiplied by the focal length of the lens between the imaging system and the sensing system, divided by the maximum field radius of the sensor.
  16. A method of detecting defects in a sample using common-path interferometric imaging, the method comprising:
    directing light toward the sample;
    collecting, with an optical imaging system, a scattered component of the light that is predominantly scattered by the sample and a specular component of the light that is predominantly reflected or transmitted by the sample;
    adjusting the relative phase of the scattered component and the specular component using a variable phase control system;
    sensing at least a portion of the combined scattered and specular components after the phase adjustment and generating a first electronic signal representing them; and
    comparing the first electronic signal with a reference signal corresponding to the same sample position to determine whether a defect is present at that sample position.
  17. The method of claim 16, further comprising changing the intensity of the specular component reaching the image plane.
  18. The method of claim 16, further comprising changing at least one of: the polarization of the illumination incident on the sample, the polarization of the specular component from the sample reaching the sensing plane, and the polarization of the scattered component from the sample reaching the sensing plane.
  19. The method of claim 16, wherein three or more comparisons of the same sample point, made with different relative phase shifts, are used to derive the amplitude and phase of a defect.
  20. The method of claim 19, further comprising classifying one or more defects based at least in part on similarities and differences between their amplitude and phase characteristics.
  21. delete
  22. delete
  23. delete
  24. delete
  25. delete
  26. delete
  27. delete
  28. delete
  29. delete
  30. delete
  31. delete
  32. delete
  33. delete
  34. delete
  35. delete
  36. delete
  37. delete
  38. delete
  39. delete
  40. delete
  41. delete
  42. delete
  43. delete
  44. delete
  45. delete
  46. delete
  47. delete
  48. delete
  49. delete
  50. delete
  51. delete
  52. delete
  53. delete
  54. delete
  55. delete
  56. delete
  57. delete
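Claims 9 and 19 above rely on three or more measurements of the same sample point at different relative phase shifts to derive a defect's amplitude and phase. The claims do not give the arithmetic; the sketch below uses the standard three-step phase-shifting algorithm, with shifts of 0°, 120°, and 240° assumed as one plausible instance, to show how amplitude and phase fall out of three intensity measurements.

```python
import numpy as np

def three_step_phase_shift(i1, i2, i3):
    """Recover the interference phase and modulation amplitude from three
    intensity measurements I_k = A + B*cos(phi + theta_k), taken at
    relative phase shifts theta_k = 0, 120, and 240 degrees (an assumed
    choice; the patent only requires three or more distinct shifts).

    A is the background intensity, B is the interference modulation
    (proportional to the defect amplitude), and phi is the relative
    phase between the scattered and specular components.
    """
    s = np.sqrt(3.0) * (i3 - i2)      # equals 3*B*sin(phi)
    c = 2.0 * i1 - i2 - i3            # equals 3*B*cos(phi)
    phi = np.arctan2(s, c)            # relative phase, radians
    b = np.sqrt(s**2 + c**2) / 3.0    # modulation amplitude B
    a = (i1 + i2 + i3) / 3.0          # background intensity A
    return phi, b, a
```

In the inspection context of the claims, these quantities would be computed per pixel from three phase-shifted images and then compared against a reference to flag and classify defects; the function here only shows the single-point arithmetic.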
KR1020117000031A 2008-06-03 2009-06-02 Interferometric defect detection and classfication KR101556430B1 (en)

Priority Applications (14)

Application Number Priority Date Filing Date Title
US13072908P true 2008-06-03 2008-06-03
US61/130,729 2008-06-03
US13561608P true 2008-07-22 2008-07-22
US61/135,616 2008-07-22
US12/190,144 US7864334B2 (en) 2008-06-03 2008-08-12 Interferometric defect detection
US12/190,144 2008-08-12
US18950808P true 2008-08-20 2008-08-20
US18951008P true 2008-08-20 2008-08-20
US18950908P true 2008-08-20 2008-08-20
US61/189,510 2008-08-20
US61/189,509 2008-08-20
US61/189,508 2008-08-20
US21051309P true 2009-03-19 2009-03-19
US61/210,513 2009-03-19

Publications (2)

Publication Number Publication Date
KR20110031306A KR20110031306A (en) 2011-03-25
KR101556430B1 true KR101556430B1 (en) 2015-10-01

Family

ID=41398482

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020117000031A KR101556430B1 (en) 2008-06-03 2009-06-02 Interferometric defect detection and classfication

Country Status (5)

Country Link
EP (1) EP2286175A4 (en)
JP (1) JP5444334B2 (en)
KR (1) KR101556430B1 (en)
CN (1) CN102089616B (en)
WO (1) WO2009149103A1 (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI421469B (en) * 2010-03-10 2014-01-01 Ind Tech Res Inst Surface measure device, surface measure method thereof and correction method thereof
EP2384692A1 (en) * 2010-05-07 2011-11-09 Rowiak GmbH Method and device for interferometry
EP2738602B1 (en) 2011-07-25 2019-10-09 Citizen Watch Co., Ltd. Optical device, projector, production method, and production support device
CN102435547B (en) * 2011-09-15 2014-02-05 上海华力微电子有限公司 Sensitive photoresist tolerance degree detection method and wafer defect detection method
CN103018202B (en) * 2011-09-22 2014-10-01 中国科学院微电子研究所 Integrated circuit defect optical detection method and device
US8964088B2 2011-09-28 2015-02-24 Semiconductor Components Industries, Llc Time-delay-and-integrate image sensors having variable integration times
WO2013172103A1 (en) * 2012-05-16 2013-11-21 株式会社 日立ハイテクノロジーズ Inspection device
JP6025419B2 (en) 2012-06-27 2016-11-16 株式会社ニューフレアテクノロジー Inspection method and inspection apparatus
KR101354729B1 (en) * 2012-08-23 2014-01-27 앰코 테크놀로지 코리아 주식회사 Misalignment value measuring method of semiconductor device and semiconductor device adapted the same
JP6001383B2 (en) * 2012-08-28 2016-10-05 株式会社日立ハイテクノロジーズ Defect inspection method and apparatus using the same
JP5993691B2 (en) 2012-09-28 2016-09-14 株式会社日立ハイテクノロジーズ Defect inspection apparatus and defect inspection method
JP5946751B2 (en) * 2012-11-08 2016-07-06 株式会社日立ハイテクノロジーズ Defect detection method and apparatus, and defect observation method and apparatus
KR101336946B1 (en) 2012-11-27 2013-12-04 한국기초과학지원연구원 Failure analysis appratus and method using measurement of heat generation distribution
JP5786270B2 (en) * 2013-03-06 2015-09-30 株式会社東京精密 Two-color interference measuring device
CN104180765B (en) * 2013-05-28 2017-03-15 甘志银 Method and device for real-time measurement of substrate warpage in chemical vapor deposition equipment
US9189705B2 (en) * 2013-08-08 2015-11-17 JSMSW Technology LLC Phase-controlled model-based overlay measurement systems and methods
JP6433268B2 (en) * 2014-03-31 2018-12-05 国立大学法人 東京大学 Inspection system and inspection method
WO2016011024A1 (en) * 2014-07-14 2016-01-21 Zygo Corporation Interferometric encoders using spectral analysis
JP5843179B1 (en) * 2014-09-19 2016-01-13 レーザーテック株式会社 Inspection apparatus and wavefront aberration correction method
CN104297744B (en) * 2014-10-16 2016-12-07 西安理工大学 Polarization calibration and compensation device for polarization lidar, and calibration and compensation method
WO2016084239A1 (en) * 2014-11-28 2016-06-02 株式会社東京精密 Two-color interference measurement device
TWI574334B (en) * 2015-03-17 2017-03-11 陳勇吉 Method for wafer detection
CN104729712B (en) * 2015-03-30 2017-08-04 中国资源卫星应用中心 Data preprocessing method for a spaceborne atmospheric-sounding Fourier transform spectrometer
JP2017129500A (en) * 2016-01-21 2017-07-27 株式会社ブイ・テクノロジー Phase shift amount measurement device
KR20180125173A (en) * 2016-04-13 2018-11-22 케이엘에이-텐코 코포레이션 Fault classification system and method based on electrical design intent
WO2018219639A1 (en) * 2017-06-02 2018-12-06 Asml Netherlands B.V. Metrology apparatus
CN107504919B (en) * 2017-09-14 2019-08-16 深圳大学 Wrapped phase three-dimension digital imaging method and device based on phase mapping
CN108195849A (en) * 2018-01-23 2018-06-22 南京理工大学 Phase defect detection system and method based on a short-coherence dynamic Twyman-Green interferometer

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7428057B2 (en) 2005-01-20 2008-09-23 Zygo Corporation Interferometer for determining characteristics of an object surface, including processing and calibration
US7864334B2 (en) 2008-06-03 2011-01-04 Jzw Llc Interferometric defect detection

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6501551B1 (en) * 1991-04-29 2002-12-31 Massachusetts Institute Of Technology Fiber optic imaging endoscope interferometer with at least one faraday rotator
JP2005003689A (en) * 1994-10-07 2005-01-06 Renesas Technology Corp Method and apparatus for inspecting defect in pattern on object to be inspected
US6288780B1 (en) * 1995-06-06 2001-09-11 Kla-Tencor Technologies Corp. High throughput brightfield/darkfield wafer inspection system using advanced optical techniques
JPH09281051A (en) * 1996-04-17 1997-10-31 Nikon Corp Inspection apparatus
US6259055B1 (en) * 1998-10-26 2001-07-10 Lsp Technologies, Inc. Apodizers for laser peening systems
JP3881125B2 (en) * 1999-02-17 2007-02-14 レーザーテック株式会社 Level difference measuring apparatus and etching monitor apparatus and etching method using the level difference measuring apparatus
EP1271604A4 (en) * 2001-01-10 2005-05-25 Ebara Corp Inspection apparatus and inspection method with electron beam, and device manufacturing method comprising the inspection apparatus
US7209239B2 (en) * 2002-10-02 2007-04-24 Kla-Tencor Technologies Corporation System and method for coherent optical inspection
US7317531B2 (en) * 2002-12-05 2008-01-08 Kla-Tencor Technologies Corporation Apparatus and methods for detecting overlay errors using scatterometry
JP4220287B2 (en) * 2003-03-31 2009-02-04 株式会社トプコン Pattern defect inspection system
US7138629B2 (en) * 2003-04-22 2006-11-21 Ebara Corporation Testing apparatus using charged particles and device manufacturing method using the testing apparatus
US7295303B1 (en) * 2004-03-25 2007-11-13 Kla-Tencor Technologies Corporation Methods and apparatus for inspecting a sample
WO2006105259A2 (en) * 2004-07-30 2006-10-05 Novalux, Inc. System and method for driving semiconductor laser sources for displays

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7428057B2 (en) 2005-01-20 2008-09-23 Zygo Corporation Interferometer for determining characteristics of an object surface, including processing and calibration
US7446882B2 (en) 2005-01-20 2008-11-04 Zygo Corporation Interferometer for determining characteristics of an object surface
US7616323B2 (en) 2005-01-20 2009-11-10 Zygo Corporation Interferometer with multiple modes of operation for determining characteristics of an object surface
US7864334B2 (en) 2008-06-03 2011-01-04 Jzw Llc Interferometric defect detection

Also Published As

Publication number Publication date
WO2009149103A1 (en) 2009-12-10
CN102089616A (en) 2011-06-08
EP2286175A1 (en) 2011-02-23
KR20110031306A (en) 2011-03-25
JP5444334B2 (en) 2014-03-19
EP2286175A4 (en) 2017-04-12
CN102089616B (en) 2013-03-13
JP2011523711A (en) 2011-08-18

Similar Documents

Publication Publication Date Title
US10274370B2 (en) Inspection apparatus and method
US9310290B2 (en) Multiple angles of incidence semiconductor metrology systems and methods
JP6377218B2 (en) Measuring system and measuring method
TWI659204B (en) Spectroscopic beam profile metrology
JP5529806B2 (en) Method and system for inspection of specimens using different inspection parameters
US7345825B2 (en) Beam delivery system for laser dark-field illumination in a catadioptric optical system
US7299147B2 (en) Systems for managing production information
US8559014B2 (en) High-resolution, common-path interferometric imaging systems and methods
KR101039103B1 (en) Inspection apparatus, lithographic system provided with the inspection apparatus and a method for inspecting a sample
US7324273B2 (en) Confocal self-interference microscopy from which side lobe has been removed
US7973921B2 (en) Dynamic illumination in optical inspection systems
TWI294518B (en) Scattermeter and method for measuring a property of a substrate
US9176048B2 (en) Normal incidence broadband spectroscopic polarimeter and optical measurement system
JP4944184B2 (en) EUV mask inspection system
KR101113602B1 (en) System for detection of wafer defects
KR100989377B1 (en) A scatterometer, a lithographic apparatus and a focus analysis method
TWI352878B (en) Lithographic device, and method
US6674522B2 (en) Efficient phase defect detection system and method
TWI564539B (en) Optical system, method for illumination control in the same and non-transitory computer-readable medium
US7502101B2 (en) Apparatus and method for enhanced critical dimension scatterometry
JP4953932B2 (en) Method and apparatus for characterization of angle resolved spectrograph lithography
TWI428582B (en) Interferometry apparatus, and interferometry method for determining characteristics of an object surface
US8467048B2 (en) Pattern defect inspection apparatus and method
JP3878107B2 (en) Defect inspection method and apparatus
EP2462486B1 (en) Object inspection systems and methods

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
LAPS Lapse due to unpaid annual fee