WO2020136697A1 - Defect inspection device - Google Patents

Defect inspection device

Info

Publication number
WO2020136697A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
photoelectric conversion
light
illumination
Prior art date
Application number
PCT/JP2018/047448
Other languages
English (en)
Japanese (ja)
Inventor
英司 有馬
本田 敏文
雄太 浦野
松本 俊一
Original Assignee
株式会社日立ハイテク
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立ハイテク filed Critical 株式会社日立ハイテク
Priority to PCT/JP2018/047448 priority Critical patent/WO2020136697A1/fr
Publication of WO2020136697A1 publication Critical patent/WO2020136697A1/fr

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 - Systems specially adapted for particular applications
    • G01N 21/88 - Investigating the presence of flaws or contamination
    • G01N 21/95 - Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N 21/956 - Inspecting patterns on the surface of objects

Definitions

  • the present invention relates to a defect inspection device.
  • Defect inspection used in the manufacturing process of semiconductors and the like is required to detect minute defects and to measure the dimensions of the detected defects with high accuracy.
  • It is also required to inspect the sample non-destructively, that is, without degrading it, and to obtain substantially constant inspection results regarding the number, position, size, and type of the detected defects when the same sample is inspected repeatedly.
  • Furthermore, it is required to inspect a large number of samples within a fixed time.
  • Patent Documents 1 and 2 describe defect inspection used in the manufacturing process of semiconductors and the like.
  • US Pat. No. 6,096,837 describes a configuration in which the full collection NA of a collection subsystem is split into different segments and the scattered light collected in the different segments is directed to separate detectors.
  • Patent Document 2 describes a configuration in which a large number of detection systems having smaller apertures are arranged with respect to the full focusing NA.
  • The size of the image formed on the sensor surface is greatly affected by the position and focal length of the lens array.
  • When the magnifications of the divided images formed on the sensor surface differ, image blur occurs when the divided images are integrated, and the detection sensitivity decreases.
  • Patent Documents 1 and 2 mention neither this problem of image blur upon integration of the divided images nor a solution to it.
  • The object of the present invention is therefore to prevent the detection sensitivity of the defect inspection apparatus from decreasing due to image blur even when the divided images are integrated.
  • In order to achieve this object, a defect inspection apparatus according to the present invention includes an illumination unit that irradiates a sample with light emitted from a light source, a detection unit that detects scattered light generated from the sample, a photoelectric conversion unit that converts the scattered light detected by the detection unit into an electric signal, and a signal processing unit that processes the electric signal converted by the photoelectric conversion unit to detect a defect in the sample, wherein the detection unit has an image forming unit that forms a plurality of images, obtained by dividing the aperture, on the photoelectric conversion unit at a magnification set for each image.
  • According to this defect inspection apparatus, it is possible to prevent deterioration of the detection sensitivity due to image blur even when the divided images are integrated.
  • FIG. 1 is an overall schematic configuration diagram of the defect inspection apparatus of Example 1.
  • A figure showing an example of the illumination intensity distribution shape.
  • A diagram showing a mechanism for observing a divided image of Example 2 with a two-dimensional camera.
  • A diagram showing an example of a GUI for calibrating the size of a divided image according to the second embodiment.
  • A figure showing an example of the arrangement of the detection unit.
  • A block diagram of a control unit that calibrates the size and image position of a divided image acquired by the two-dimensional camera of the second embodiment.
  • The defect inspection apparatus includes an illumination unit 101, a detection unit 102, a photoelectric conversion unit 103, a stage 104, a signal processing unit 105, a control unit 53, a display unit 54, and an input unit 55.
  • The stage 104 is configured such that the sample W can be placed on it and, by means of actuators, the sample W can be moved in the direction perpendicular to its surface, rotated within the plane of its surface, and moved in the direction parallel to its surface.
  • The illumination unit 101 includes, as appropriate, a laser light source 2, an attenuator 3, an emitted light adjusting unit 4, a beam expander 5, a polarization control unit 6, and an illumination intensity distribution control unit 7.
  • The laser light beam emitted from the laser light source 2 is adjusted to a desired beam intensity by the attenuator 3, adjusted to a desired beam position and traveling direction by the emission light adjusting unit 4, adjusted to a desired beam diameter by the beam expander 5, adjusted to a desired polarization state by the polarization control unit 6, adjusted to a desired intensity distribution by the illumination intensity distribution control unit 7, and then illuminates the inspection target region of the sample W.
  • the incident angle of the illumination light with respect to the sample surface is determined by the position and angle of the reflection mirror of the emission light adjustment unit 4 arranged in the optical path of the illumination unit 101.
  • the incident angle of the illumination light is set to an angle suitable for detecting a minute defect.
  • The larger the illumination incident angle, that is, the smaller the illumination elevation angle (the angle between the sample surface and the illumination optical axis), the weaker the scattered light (called haze) from the minute irregularities on the sample surface, which acts as noise against the scattered light from minute foreign matter on the sample surface; a large incident angle is therefore suitable for detecting minute defects.
  • Accordingly, when the scattered light from the minute irregularities on the sample surface hinders the detection of minute defects, the incident angle of the illumination light is preferably set to 75 degrees or more (an elevation angle of 15 degrees or less).
  • Otherwise, the incident angle of the illumination light is preferably set to 60 degrees or more and 75 degrees or less (an elevation angle of 15 degrees or more and 30 degrees or less). The two cases are summarized in the sketch below.
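  • As a rough illustration of this angle selection, the following minimal sketch (not part of the patent; the function name and the single haze flag are assumptions) returns the incident-angle range for the two cases given above.

```python
def select_incident_angle_deg(haze_limits_detection: bool) -> tuple[float, float]:
    """Return an incident-angle range in degrees following the two cases in the text.

    haze_limits_detection: True when scattered light from minute surface
    irregularities (haze) is the dominant obstacle to detecting minute defects.
    """
    if haze_limits_detection:
        # Grazing incidence: incident angle >= 75 deg (elevation <= 15 deg).
        return (75.0, 90.0)
    # Otherwise: incident angle between 60 and 75 deg (elevation 15-30 deg).
    return (60.0, 75.0)


if __name__ == "__main__":
    print(select_incident_angle_deg(True))   # (75.0, 90.0)
    print(select_incident_angle_deg(False))  # (60.0, 75.0)
```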
  • In that case, the polarization control unit 6 of the illumination unit 101 sets the polarization of the illumination to P-polarization, which increases the scattered light from defects on the sample surface compared with other polarizations.
  • When the scattered light from the minute irregularities on the sample surface hinders the detection of minute defects, the polarization of the illumination is set to S-polarization, which decreases the scattered light from the minute irregularities.
  • When the illumination optical path is switched (for example, by the mirror 21), the illumination light is emitted onto the sample surface from a substantially perpendicular direction (vertical illumination).
  • In this case, the illumination intensity distribution on the sample surface is controlled by the illumination intensity distribution control unit 7 in the same manner as for oblique incidence illumination.
  • A beam splitter is inserted at the same position as the mirror 21.
  • In some cases, vertical illumination that is incident substantially perpendicularly on the sample surface is suitable.
  • In order to detect minute defects near the sample surface, the laser light source 2 used is one that oscillates an ultraviolet or vacuum-ultraviolet laser beam with a short wavelength (355 nm or less), which penetrates little into the sample, at a high output of 2 W or more.
  • The outgoing beam diameter is about 1 mm.
  • To detect defects inside the sample, a laser that oscillates a visible or infrared laser beam, whose wavelength easily penetrates into the sample, is used instead.
  • The attenuator 3 appropriately includes a first polarizing plate, a half-wave plate rotatable about the optical axis of the illumination light, and a second polarizing plate.
  • The light that has entered the attenuator 3 is converted into linearly polarized light by the first polarizing plate, its polarization direction is rotated to an arbitrary direction according to the slow-axis azimuth angle of the half-wave plate, and the light then passes through the second polarizing plate.
  • By controlling the azimuth angle of the half-wave plate, the light intensity is attenuated at an arbitrary ratio.
  • The first polarizing plate is not always necessary.
  • As the attenuator 3, one in which the relationship between the input signal and the extinction ratio has been calibrated in advance is used.
  • As the attenuator 3, it is also possible to use an ND filter having a gradation density distribution, or to switch among a plurality of ND filters having different densities.
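  • With the half-wave plate between the two polarizers, rotating the plate by an angle θ rotates the linear polarization by 2θ, so the transmitted fraction follows Malus's law, cos²(2θ). The sketch below (not from the patent; it ignores component losses) inverts this relation to find the plate azimuth for a requested attenuation.

```python
import math

def transmission(theta_deg: float) -> float:
    """Transmitted intensity fraction for a half-wave-plate azimuth theta (degrees)."""
    return math.cos(math.radians(2.0 * theta_deg)) ** 2

def azimuth_for_attenuation(target_fraction: float) -> float:
    """Half-wave-plate azimuth (degrees) that yields the requested transmitted fraction."""
    if not 0.0 < target_fraction <= 1.0:
        raise ValueError("target_fraction must be in (0, 1]")
    return 0.5 * math.degrees(math.acos(math.sqrt(target_fraction)))

if __name__ == "__main__":
    theta = azimuth_for_attenuation(0.10)  # attenuate the beam to 10 %
    print(f"azimuth = {theta:.2f} deg, check T = {transmission(theta):.3f}")
```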
  • the outgoing light adjusting unit 4 includes a plurality of reflecting mirrors.
  • a three-dimensional Cartesian coordinate system (XYZ coordinates) is tentatively defined, and it is assumed that the incident light on the reflecting mirror travels in the +X direction.
  • The first reflecting mirror is installed so as to deflect the incident light in the +Y direction (incidence and reflection in the XY plane), and the second reflecting mirror is installed so as to deflect the light reflected by the first mirror in the +Z direction (incidence and reflection in the YZ plane).
  • the position and traveling direction (angle) of the light emitted from the emission adjusting unit 4 are adjusted by parallel movement and tilt angle adjustment of each reflection mirror.
  • the first reflecting mirror's incident/reflecting surface (XY plane) and the second reflecting mirror's incident/reflecting surface (YZ plane) are arranged so as to be orthogonal to each other. Thereby, the position and angle adjustment in the XZ plane and the position and angle adjustment in the YZ plane of the light emitted from the emission adjustment unit 4 (traveling in the +Z direction) can be performed independently.
  • the beam expander 5 has two or more lens groups and has a function of expanding the diameter of the incident parallel light flux.
  • a Galileo type beam expander including a combination of a concave lens and a convex lens is used.
  • the beam expander 5 is installed on a translation stage having two or more axes, and its position can be adjusted so that its center coincides with a predetermined beam position. Further, a tilt angle adjusting function of the entire beam expander 5 is provided so that the optical axis of the beam expander 5 and a predetermined beam optical axis coincide with each other. By adjusting the distance between the lenses, it is possible to control the enlargement ratio of the luminous flux diameter (zoom mechanism).
  • By adjusting the lens interval, the diameter of the light beam can be expanded and collimation (conversion into quasi-parallel light) can be performed simultaneously.
  • the collimation of the light flux may be performed by installing a collimator lens upstream of the beam expander 5 independently of the beam expander 5.
  • the expansion factor of the beam diameter by the beam expander 5 is about 5 to 10 times, and the beam having a beam diameter of 1 mm emitted from the light source is expanded to about 5 mm to 10 mm.
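  • For reference, the expansion ratio of a Galilean expander is approximately the ratio of the focal lengths of its two lenses; the sketch below (the focal lengths are illustrative assumptions, not values from the patent) reproduces the 5x to 10x range quoted above for a 1 mm input beam.

```python
def galilean_expansion(f_concave_mm: float, f_convex_mm: float) -> float:
    """Approximate beam-diameter expansion ratio of an afocal Galilean expander."""
    return abs(f_convex_mm) / abs(f_concave_mm)

def expanded_diameter(d_in_mm: float, f_concave_mm: float, f_convex_mm: float) -> float:
    """Output beam diameter after expansion."""
    return d_in_mm * galilean_expansion(f_concave_mm, f_convex_mm)

if __name__ == "__main__":
    # -20 mm concave and 150 mm convex lenses give 7.5x, i.e. a 1 mm beam becomes 7.5 mm.
    print(expanded_diameter(1.0, -20.0, 150.0))
```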
  • the polarization controller 6 is composed of a half-wave plate and a quarter-wave plate, and controls the polarization state of illumination light to an arbitrary polarization state.
  • the beam monitor 22 measures the states of the light incident on the beam expander 5 and the light incident on the illumination intensity distribution control unit 7.
  • FIGS. 2 to 6 are schematic diagrams showing the positional relationship between the illumination optical axis 120 guided to the sample surface from the illumination unit 101 and the illumination intensity distribution shape.
  • the configuration of the illumination unit 101 in FIGS. 2 to 6 shows a part of the configuration of the illumination unit 101, and the emission light adjustment unit 4, the mirror 21, the beam monitor 22 and the like are omitted.
  • Fig. 2 shows a schematic diagram of the cross section of the incident surface of grazing incidence illumination (the surface including the illumination optical axis and the sample surface normal).
  • the grazing incidence illumination is inclined with respect to the sample surface within the incidence plane.
  • the illumination unit 101 produces a substantially uniform illumination intensity distribution in the incident plane.
  • The length of the portion where the illumination intensity is uniform is about 100 µm to 4 mm so that a wide area can be inspected per unit time.
  • Fig. 3 shows a schematic diagram of a cross section of a plane that includes the sample surface normal and is perpendicular to the incidence plane of the oblique incidence illumination.
  • In this plane, the illumination intensity distribution on the sample surface is one in which the intensity at the periphery is weaker than at the center; more specifically, it is similar to a Gaussian distribution reflecting the intensity distribution of the light incident on the illumination intensity distribution control unit 7, or to a first-order Bessel function of the first kind or a sinc function reflecting the aperture shape of the illumination intensity distribution control unit 7.
  • In order to reduce the haze generated from the sample surface, the length of the illumination intensity distribution within this plane is made shorter than the length of the portion of uniform intensity within the incidence plane, and is about 2.5 µm to 20 µm.
  • the illumination intensity distribution controller 7 includes optical elements such as an aspherical lens, a diffractive optical element, a cylindrical lens array, and a light pipe described later. The optical element forming the illumination intensity distribution control unit 7 is installed perpendicularly to the illumination optical axis, as shown in FIGS.
  • the illumination intensity distribution control unit 7 includes an optical element that acts on the phase distribution and intensity distribution of incident light.
  • a diffractive optical element 71 (DOE: Diffractive Optical Element) is used as an optical element forming the illumination intensity distribution control unit 7 (FIG. 7).
  • the diffractive optical element 71 is formed by forming a fine undulation shape having a size equal to or less than the wavelength of light on the surface of a substrate made of a material that transmits incident light.
  • fused quartz is used for ultraviolet light.
  • a lithographic method is used for forming the fine relief shape.
  • The optical element provided in the illumination intensity distribution control unit 7 is provided with a translation adjusting mechanism of two or more axes and a rotation adjusting mechanism of two or more axes so that its position and angle relative to the optical axis of the incident light can be adjusted. Further, a focus adjusting mechanism for moving it in the optical axis direction is provided.
  • an aspherical lens, a combination of a cylindrical lens array and a cylindrical lens, or a combination of a light pipe and an imaging lens may be used.
  • Alternatively, the illumination intensity distribution control unit 7 is formed of a plurality of lenses including a spherical lens and a cylindrical lens, and the beam expander 5 forms an elliptical beam that is long in one direction.
  • a part or all of the spherical lens or the cylindrical lens included in the illumination intensity distribution control unit 7 is installed parallel to the sample surface, so that it is long in one direction on the sample surface and has a narrow width in the direction perpendicular thereto. An illumination intensity distribution is formed.
  • the variation of the illumination intensity distribution on the sample surface due to the variation of the state of the light entering the illumination intensity distribution control unit 7 is small, and the stability of the illumination intensity distribution is high. Further, compared with the case where a diffractive optical element, a microlens array, or the like is used for the illumination intensity distribution controller 7, the light transmittance is high and the efficiency is good.
  • the state of illumination light in the illumination unit 101 is measured by the beam monitor 22.
  • the beam monitor 22 measures and outputs the position and angle (traveling direction) of the illumination light that has passed through the emission light adjusting unit 4, or the position and the wavefront of the illumination light that enters the illumination intensity distribution control unit 7.
  • the position measurement of the illumination light is performed by measuring the position of the center of gravity of the light intensity of the illumination light.
  • For this measurement, an optical position sensor (Position Sensitive Detector) or an image sensor such as a CCD sensor or a CMOS sensor is used.
  • the angle measurement of the illumination light is performed by an optical position sensor or an image sensor installed at a position farther from the light source than the position measuring means or at a condensing position by a collimator lens.
  • the illumination light position and the illumination light angle detected by the sensor are input to the control unit 53 and displayed on the display unit 54.
  • When the illumination light position or angle deviates from the predetermined value, the emission light adjusting unit 4 is adjusted so as to return it to the predetermined position.
  • the wavefront measurement of the illumination light is performed to measure the parallelism of the light incident on the illumination intensity control unit 7.
  • For adjusting the wavefront, a spatial light phase modulator, which is a type of spatial light modulator (SLM: Spatial Light Modulator), is used.
  • By inserting an appropriate phase difference at each position of the light flux cross section so that the wavefront becomes flat, the illumination light can be brought close to quasi-parallel light. By the above wavefront accuracy measuring and adjusting means, the wavefront accuracy of the light incident on the illumination intensity distribution control unit 7 (the deviation from a predetermined wavefront, such as the design value or the initial state) is suppressed to λ/10 rms or less.
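  • The λ/10 rms criterion can be checked numerically from a sampled wavefront map. A minimal sketch (the array shape, units, and threshold handling are assumptions, not part of the patent):

```python
import numpy as np

def rms_wavefront_error_waves(wavefront: np.ndarray, wavelength: float) -> float:
    """RMS wavefront deviation in waves (piston term removed); wavefront and
    wavelength must use the same length unit."""
    w = wavefront - np.mean(wavefront)
    return float(np.sqrt(np.mean(w ** 2))) / wavelength

if __name__ == "__main__":
    wavelength_nm = 355.0
    rng = np.random.default_rng(0)
    wf_nm = 20.0 * rng.standard_normal((64, 64))   # synthetic wavefront map in nm
    err = rms_wavefront_error_waves(wf_nm, wavelength_nm)
    print(f"{err:.3f} waves rms ->", "OK" if err <= 0.1 else "apply SLM correction")
```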
  • the illumination intensity distribution monitor 24 measures the illumination intensity distribution on the sample surface adjusted by the illumination intensity distribution control unit 7. As shown in FIG. 1, even when vertical illumination is used, the illumination intensity distribution monitor 24 similarly measures the illumination intensity distribution on the sample surface adjusted by the illumination intensity distribution control unit 7.
  • the illumination intensity distribution monitor 24 forms an image of the sample surface on an image sensor such as a CCD sensor or a CMOS sensor through a lens and detects it as an image.
  • The image of the illumination intensity distribution detected by the illumination intensity distribution monitor 24 is processed by the control unit 53, and the barycentric position of the intensity, the maximum intensity, the position of the maximum intensity, and the width and length of the illumination intensity distribution (the width and length of the region whose intensity is equal to or greater than a predetermined intensity, or equal to or greater than a predetermined ratio of the maximum intensity) are calculated and displayed on the display unit 54 together with the contour shape of the illumination intensity distribution, its sectional waveform, and the like.
  • the displacement of the height of the sample surface causes the displacement of the position of the illumination intensity distribution and the disturbance of the illumination intensity distribution due to defocusing.
  • Therefore, the height of the sample surface is measured, and when it deviates, the deviation is corrected by the illumination intensity distribution control unit 7 or by the Z-axis height adjustment of the stage 104.
  • the illuminance distribution shape (illumination spot 20) formed on the sample surface by the illumination unit 101 and the sample scanning method will be described with reference to FIGS. 8 and 9.
  • the stage 104 includes a translation stage, a rotation stage, and a Z stage (not shown) for adjusting the height of the sample surface.
  • the illumination spot 20 has an illumination intensity distribution that is long in one direction as described above, the direction is S2, and the direction substantially orthogonal to S2 is S1.
  • The rotary motion of the rotation stage scans the illumination spot in the circumferential direction S1 of a circle about the rotation axis of the stage, and the translational motion of the translation stage scans it in the translation direction S2.
  • By shifting in the scanning direction S2 by a distance equal to or less than the longitudinal length of the illumination spot 20 per rotation, the illumination spot draws a spiral locus T on the sample W and scans the entire surface of the sample W.
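  • The spiral locus T can be pictured as a trajectory whose radius shrinks by at most the spot length per rotation. A minimal sketch generating sample points of such a trajectory (parameter names and values are illustrative assumptions):

```python
import math

def spiral_scan_points(r_start_mm: float, pitch_mm: float, points_per_rev: int = 360):
    """Yield (x, y) positions of the illumination spot on the sample.

    pitch_mm is the advance along S2 per rotation; it must not exceed the
    longitudinal length of the illumination spot so that no area is skipped.
    """
    r, angle = r_start_mm, 0.0
    step = 2.0 * math.pi / points_per_rev
    while r > 0.0:
        yield (r * math.cos(angle), r * math.sin(angle))
        angle += step
        r -= pitch_mm / points_per_rev  # spiral inward a little at every step

if __name__ == "__main__":
    pts = list(spiral_scan_points(r_start_mm=150.0, pitch_mm=1.0, points_per_rev=8))
    print(len(pts), "points, first:", pts[0], "last:", pts[-1])
```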
  • a plurality of detecting units 102 are arranged so as to detect scattered light emitted from the illumination spot 20 in a plurality of directions. An example of the arrangement of the detection unit 102 with respect to the sample W and the illumination spot 20 will be described with reference to FIGS.
  • FIG. 10 shows a side view of the arrangement of the detection unit 102.
  • the angle formed by the detection direction of the detection unit 102 (the central direction of the detection aperture) with respect to the normal line of the sample W is defined as the detection zenith angle.
  • the detection unit 102 is configured by appropriately using a high angle detection unit 102h having a detected zenith angle of 45 degrees or less and a low angle detection unit 102l having a detected zenith angle of 45 degrees or more.
  • the high-angle detector 102h and the low-angle detector 102l each include a plurality of detectors so as to cover scattered light scattered in multiple directions at each detected zenith angle.
  • FIG. 11 shows a plan view of the arrangement of the low angle detection unit 102l.
  • The low-angle detection unit 102l includes, as appropriate, a low-angle front detection unit 102lf, a low-angle side detection unit 102ls, a low-angle rear detection unit 102lb, and, at positions symmetrical to these with respect to the illumination incidence plane, a low-angle front detection unit 102lf', a low-angle side detection unit 102ls', and a low-angle rear detection unit 102lb'.
  • The low-angle front detection unit 102lf is installed at a detection azimuth angle of 0 degrees or more and 60 degrees or less, the low-angle side detection unit 102ls at 60 degrees or more and 120 degrees or less, and the low-angle rear detection unit 102lb at 120 degrees or more and 180 degrees or less.
  • FIG. 12 shows a plan view of the arrangement of the high angle detection unit 102h.
  • The high-angle detection unit 102h includes a high-angle front detection unit 102hf, a high-angle side detection unit 102hs, a high-angle rear detection unit 102hb, and a high-angle side detection unit 102hs' at a position symmetrical to the high-angle side detection unit 102hs with respect to the illumination incidence plane.
  • The high-angle front detection unit 102hf is installed so that its detection azimuth angle is 0 degrees or more and 45 degrees or less, and the high-angle rear detection unit 102hb is installed at a detection azimuth angle of 135 degrees or more and 180 degrees or less.
  • the case where there are four high angle detection units 102h and six low angle detection units 102l is shown here, but the number is not limited to this, and the number and position of the detection units may be changed as appropriate.
  • FIG. 13 shows an example of a specific configuration diagram of the detection unit 102 having the image formation unit 102-A1.
  • the scattered light generated from the illumination spot 20 is condensed by the objective lens 1021, and the polarization direction is controlled by the polarization control filter 1022.
  • the polarization control filter 1022 for example, a half-wave plate whose rotation angle can be controlled by a drive mechanism such as a motor is applied.
  • the detection NA of the objective lens 1021 is preferably 0.3 or more.
  • the lower end of the objective lens is cut out as necessary so that the lower end of the objective lens 1021 does not interfere with the sample surface W.
  • the imaging lens 1023 forms an image of the illumination spot 20 at the position of the aperture 1024.
  • the aperture 1024 is an aperture set so that only the light in the region detected by the photoelectric conversion unit 103 in the image formed by the beam spot 20 is transmitted.
  • the aperture 1024 passes only the central portion of the Gaussian distribution where the light intensity is strong in the S2 direction, and blocks a weak light intensity region at the beam end.
  • In the S1 direction, the aperture is set to about the same size as the image formed by the illumination spot 20, which suppresses disturbances such as scattering from the air through which the illumination passes.
  • The condenser lens 1025 is provided to collect the light from the image formed at the aperture 1024 again.
  • the polarization beam splitter 1026 splits the light whose polarization direction has been converted by the polarization control filter 1022, according to the polarization direction.
  • the diffuser 1027 absorbs light in the polarization direction that is not used for detection by the photoelectric conversion unit 103.
  • The lens array 1028 forms on the photoelectric conversion unit 103 as many images of the illumination spot 20 as there are lenses in the array.
  • Each lens of the lens array 1028 is a cylindrical lens, and two or more cylindrical lenses are arranged in the curvature direction of the cylindrical lens in a plane perpendicular to the optical axis of the condenser lens 1025.
  • the combination of the half-wave plate 1022 and the polarization beam splitter 1026 causes the photoelectric conversion unit 103 to detect only the light of a specific polarization direction among the lights condensed by the objective lens 1021.
  • the polarization control filter 1022 may be a wire grid polarization plate having a transmittance of 80% or more, and only the light of a desired polarization direction can be extracted without using the polarization beam splitter 1026 and the diffuser 1027.
  • Another configuration of the image forming unit 102-A1 of FIG. 13 is shown in FIG. 34A.
  • In FIG. 13, a plurality of images is formed on the photoelectric conversion unit 103 by the single lens array 1028, whereas in FIG. 34A the images are formed using three lens arrays 1028a, 1028b, and 1028c composed of cylindrical lenses.
  • The lens arrays 1028a and 1028b are lens arrays for magnification adjustment, and the lens array 1028c is a lens array for image formation.
  • magnification here means an optical magnification, which can be obtained from the spread of the intensity distribution imaged on the photoelectric conversion units 1031 to 1034 and the peak position in FIG. 14B described later. Since the optical magnification varies depending on the focal length of the lens, the magnification can be set for each image formed on the photoelectric conversion unit 103 by the lens array 1028a and the lens array 1028b.
  • the lens array 1028a and the lens array 1028b are Kepler-type magnification adjusting mechanisms.
  • FIGS. 34B and 34C show intensity profiles of the image of a sphere of minute size; it can be seen that the imaging positions 10424a to 10424c and 10426a to 10426c coincide.
  • the Keplerian type is used here, the present invention is not limited to this, and another adjusting mechanism such as a Galileo type magnification adjusting mechanism may be used.
  • The angle formed by a light beam incident on the objective lens 1021 and the optical axis is denoted θ1, and the angle formed by the sample W and an axis perpendicular to the optical axis is denoted θ2.
  • The principal ray at angle θ1 passes through the center of one of the lenses forming the lens array 1028, which is located at the position where the pupil of the objective lens 1021 is relayed.
  • Denoting the corresponding angle after the lens array by θ3, it is represented by the following Formula 1.
  • The image formed at the positions 10421 to 10423 on the light-receiving surface of the photoelectric conversion unit 103 has a size proportional to sin θ3(i), calculated from the direction θ1(i) of the principal ray incident on the lens i of the lens array 1028 that forms the image.
  • Intensity profiles of the image of a sphere of minute size placed on the sample W are shown in FIGS. 31 to 33.
  • FIG. 31 shows the profiles of the images formed at 10421, FIG. 32 those at 10422, and FIG. 33 those at 10423.
  • 10421a to 10421c correspond to 1041a to 1041c, respectively, and 10422a to 10422c and 10423a to 10423c are likewise intensity profiles of images corresponding to 1041a to 1041c. Since the intensity profiles shown in FIGS. 31 to 33 are formed by different lenses of the lens array 1028, θ1(i) differs among them, and therefore sin θ3(i), which is proportional to the magnification, changes. As the numerical aperture of the detection unit 102 increases, the variation of θ1 within one lens increases, and the change in magnification increases accordingly.
  • When the images formed in this way are projected onto the photoelectric conversion unit 103 described with reference to FIG. 16, whose pixels in the pixel blocks 1031 to 1034 are connected to signal lines (for example 1035-a) at a constant pitch, this change in magnification reduces the resolution. Therefore, the magnification of the individual cylindrical lenses 1028a1 to 1028aN and 1028b1 to 1028bN forming the lens arrays 1028a and 1028b shown in FIG. 34A is set inversely proportional to sin θ3(i), which makes it possible to correct the change in magnification.
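  • In other words, each lens of the magnification-adjusting arrays is given a relative magnification proportional to 1/sin θ3(i), so that every divided image arrives at its pixel block with the same size. A minimal numerical sketch of that normalization (the angles are made-up examples, not values from the patent):

```python
import math

def magnification_corrections(theta3_deg: list[float]) -> list[float]:
    """Per-lens relative magnification, proportional to 1/sin(theta3(i)),
    normalized so that the first lens has magnification 1."""
    inverse = [1.0 / math.sin(math.radians(t)) for t in theta3_deg]
    return [v / inverse[0] for v in inverse]

if __name__ == "__main__":
    theta3 = [30.0, 40.0, 50.0, 60.0]  # assumed chief-ray angles for four lenses
    for i, m in enumerate(magnification_corrections(theta3)):
        print(f"lens {i}: relative magnification {m:.3f}")
```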
  • FIG. 14A shows a schematic view of the illumination spot 20 on the sample W. Further, FIG. 14B shows correspondence with image formation from the lens array 1028 to the photoelectric conversion unit 103.
  • The illumination spot 20 extends long in the S2 direction in FIG. 14A. W0 indicates a defect to be detected.
  • the objective lens 1021 is placed in a direction in which its optical axis is not orthogonal to the S2 direction.
  • the photoelectric conversion unit 103 divides this illumination spot into Wa to Wd and detects it. Although the number of divisions is four here, the number of divisions is not limited to this number, and the present invention can be embodied with an arbitrary number of divisions.
  • the scattered light from the defect W0 to be detected is condensed by the objective lens 1021 and guided to the photoelectric conversion unit 103.
  • the lens array 1028 is a cylindrical lens that forms an image only in one direction. Pixel blocks 1031, 1032, 1033, and 1034 corresponding to the number of lens arrays 1028 are formed in the photoelectric conversion unit 103.
  • the aperture 1024 shields a region where the amount of light is weak and which is not subjected to photoelectric conversion, so that the pixel blocks 1031 to 1034 can be formed close to each other.
  • the lens array 1028 is placed at the position where the pupil of the objective lens is relayed. Since an image is formed for each of the divided pupil regions, the image formed by the lens array 1028 has a narrowed aperture, and the depth of focus is expanded. As a result, it becomes possible to detect the image formation from the direction not orthogonal to S2.
  • the condenser lens 1025 has a large numerical aperture and is usually the same as the numerical aperture of the objective lens 1021.
  • a condenser lens with a large numerical aperture collects light scattered in various directions, which results in a shallow depth of focus.
  • Since S2, the longitudinal direction of the illumination, and the optical axis of the objective lens 1021 are arranged so as not to intersect at right angles, the optical distance differs between the center and the edge of the visual field, and defocus would occur in the image formed on the photoelectric conversion unit 103.
  • Therefore, the lens array 1028 is placed at the pupil position of the condenser lens 1025, in other words at the relayed pupil position of the objective lens 1021, in other words at the rear focal position of the condenser lens 1025.
  • The condenser lens 1025 is set to have a size equivalent to the pupil diameter so that, ideally, all the light incident within the aperture diameter of the objective lens 1021 can be imaged.
  • At the position of the lens array 1028, rays having similar directions of incidence on the condenser lens 1025 are bunched together. Placing the lens array 1028 at this position is therefore equivalent to reducing the numerical aperture, and the depth of focus can be increased. In this way, the aperture is divided so that the numerical aperture of each image becomes small, the corresponding images are formed on the photoelectric conversion surface, and images free of defocus are obtained so that minute defects can be resolved.
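  • As a rough order-of-magnitude picture of this effect (not the patent's Equation 9), each lens of the array sees only a fraction of the full aperture in the division direction, and the classical estimate DOF ≈ λ/NA² then predicts a correspondingly larger depth of focus. The NA values below are assumptions for illustration.

```python
def depth_of_focus_um(wavelength_um: float, na: float) -> float:
    """Classical depth-of-focus estimate, DOF ~ wavelength / NA**2."""
    return wavelength_um / (na ** 2)

if __name__ == "__main__":
    wavelength = 0.355     # um, matching the UV source mentioned in the text
    na_full = 0.6          # assumed full detection NA (the text requires >= 0.3)
    n_segments = 4         # pupil divided into four by the lens array
    na_segment = na_full / n_segments
    print("full-aperture DOF:", round(depth_of_focus_um(wavelength, na_full), 2), "um")
    print("per-segment DOF  :", round(depth_of_focus_um(wavelength, na_segment), 2), "um")
```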
  • Reference numerals 1031a to 1031d denote pixel groups formed in the pixel block 1031, and they receive the images of the light from the sections Wa to Wd of the illumination spot, respectively.
  • Reference numerals 1031a1 to 1031aN are pixels belonging to 1031a, and each pixel outputs a predetermined current when photons are incident. The outputs of the pixels belonging to the same pixel group are electrically connected, and one pixel group outputs the sum of the current outputs of the pixels belonging to it.
  • The pixel blocks 1032 to 1034 also produce outputs corresponding to Wa to Wd.
  • The outputs corresponding to the same section from different pixel groups are electrically connected, and the photoelectric conversion unit 103 thus outputs a signal corresponding to the number of photons detected from each of the sections Wa to Wd.
  • the detection system of FIG. 13 is arranged so that the long axis direction of the image formed by the illumination spot 20 in the photoelectric conversion unit 103 and the direction of S2′ match.
  • S1 and S2 are defined as shown in FIG. 8
  • a vector in the length direction of the illumination spot is expressed as in Equation 2.
  • Equation 3 (See FIG. 15).
  • the two-dimensional plane excluding the optical axis of the objective lens 1021 is divided into two, a vector having a component in the Z direction and a vector not having it (see Formulas 5 and 6).
  • S2' in FIG. 13 is set in the direction rotated from the vector having no Z-direction component represented by Formula 6 by the angle represented by Formula 7.
  • S1' is set so as to be orthogonal to this.
  • the lens array 1028 and the photoelectric conversion unit 103 are arranged.
  • The difference Δd in the optical distance between the visual field center and the visual field end is expressed by the following Expression 8.
  • the depth of focus DOF of the image of each lens array 1028 is expressed by the following equation 9.
  • the resolvable interval in the S2 direction is expressed by the following formula 10 based on the size of the Airy disk.
  • M is set so as to satisfy the following expression 11.
  • The internal circuit of the photoelectric conversion unit 103 will be described with reference to FIG. 16. In FIG. 14, a photoelectric conversion unit that outputs signals corresponding to the four sections Wa to Wd was described; FIG. 16 shows an example in which this is expanded to eight sections.
  • Eight pixel groups are formed in each of the pixel blocks 1031 to 1034.
  • Pixel groups 1031a to 1031h are formed in the pixel block 1031, and similar pixel groups are formed in each of the pixel blocks 1032 to 1034.
  • Reference numeral 1031a5 denotes the fifth pixel of 1031a, and an avalanche photodiode operating in the Geiger mode is connected to the signal line 1035-1a via the quenching resistor 1031a5q.
  • all the pixels belonging to the pixel group 1031a are connected to the signal line 1035-1a, and when photons are incident on the pixel, a current flows through the signal line 1035-1a.
  • the pixel of the pixel group 1032a is connected to the signal line 1035-2a.
  • all the pixel groups are provided with the signal lines to which the pixels belonging to the pixel group are electrically connected.
  • Since the pixel groups 1031a, 1032a, ..., 1034a detect scattered light from the same position on the sample W, their signal lines are connected to the signal line 1035-a by the connections 1036-1a to 1036-4a; this signal is brought out through the pad 1036-a and transmitted to the signal processing unit 105.
  • the pixels belonging to 1031b to 1034b are connected to the signal line 1035-b, connected by the pad 1036-b, and transmitted to the signal processing unit 105.
  • The equivalent circuit of FIG. 16 is shown next.
  • The N pixels 1031a1, 1031a2, ..., 1031aN belonging to the pixel group 1031a in the pixel block 1031 each consist of an avalanche photodiode and a quenching resistor connected to it.
  • The reverse voltage VR is applied to all the avalanche photodiodes formed in the photoelectric conversion unit 103, so that they operate in the Geiger mode.
  • When a photon is incident, a current flows through the avalanche photodiode, but the reverse bias voltage is lowered by the paired quenching resistor and the current is cut off again; in this way, a constant current pulse flows each time a photon is incident.
  • the N pixels 1034a1 to 1034aN belonging to the pixel group 1034a in the pixel block 1034 are also Geiger mode avalanche photodiodes and a quenching resistor coupled thereto. All the pixels belonging to the pixel groups 1031a and 1034a correspond to the reflected or scattered light from the region Wa of the sample W. All of these signals are electrically coupled and connected to the current/voltage converter 103a. The current-voltage converter 103a outputs the signal 500-a converted into a voltage.
  • the pixels belonging to the pixel group 1031b of the pixel block 1031, 1031b1 to 1031bN, and the pixels belonging to the pixel group 1034b of the pixel block 1034, 1034b1 to 1034bN correspond to the light from the sample surface Wb, and these outputs Are all electrically coupled and connected to the current-voltage converter 103b.
  • 103b outputs the voltage signal 500-b. In this way, signals corresponding to all the areas obtained by dividing the illumination spot 20 are output.
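  • Electrically, each region of the illumination spot is therefore read out as the sum of the Geiger-mode pixel currents of every pixel group assigned to that region, converted to a voltage. A schematic sketch of that summation (the pulse charge, gate time, and transimpedance values are illustrative assumptions):

```python
def region_voltages(photon_counts: dict[str, list[int]],
                    charge_per_photon_c: float = 1.6e-13,
                    gate_time_s: float = 1e-7,
                    transimpedance_ohm: float = 1e4) -> dict[str, float]:
    """Voltage output per illumination-spot region.

    photon_counts maps a region name (e.g. 'Wa') to the photon counts seen by
    the pixel groups of the different pixel blocks viewing that region; their
    currents are summed before current-to-voltage conversion.
    """
    voltages = {}
    for region, counts in photon_counts.items():
        current_a = sum(counts) * charge_per_photon_c / gate_time_s
        voltages[region] = current_a * transimpedance_ohm
    return voltages

if __name__ == "__main__":
    counts = {"Wa": [12, 11, 13, 12], "Wb": [3, 4, 2, 3],
              "Wc": [0, 1, 0, 0], "Wd": [7, 6, 8, 7]}   # made-up counts
    print(region_voltages(counts))
```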
  • FIG. 18 shows the data processing unit 105 when the illumination spot 20 is divided into the sections Wa to Wh.
  • the block 105-lf is a block for processing the signals 500a-lf to 500h-lf obtained by photoelectrically converting the light detected by the low-angle front detector 102-lf.
  • the block 105-hb is a block for processing the signals 500a-hb to 500h-hb obtained by photoelectrically converting the light detected by the high-angle rear detection unit 102-hb.
  • A similar block is provided for processing the output signals of each of the other detection units.
  • The outputs of the high-frequency pass filters 1051a to 1051h are accumulated in the signal synthesizing unit 1053 over a plurality of rotations of the rotary stage, and the signals obtained at the same position on the sample W are added together and output as an array stream signal 1055-lf.
  • Similarly, the signal combining unit 1054 adds together the outputs of the low-frequency pass filters 1052a to 1052h acquired at the same position and outputs them as an array stream signal 1056-lf.
  • The block 105-hb performs the same calculation as the block 105-lf and outputs the array stream signal 1055-hb synthesized from the outputs of its high-frequency pass filters and the array stream signal 1056-hb synthesized from the outputs of its low-frequency pass filters.
  • the defect detection unit 1057 performs threshold processing after linearly adding the signals output from the plurality of photoelectric conversion units and subjected to the high frequency pass filter.
  • the low frequency signal integration unit 1058 integrates the low frequency pass filtered signals. The output of the low frequency signal integration unit 1058 is input to the defect detection unit 1057 and is used when determining the threshold value. It is estimated that the noise typically increases in proportion to the square root of the output of the low frequency signal integration unit 1058.
  • Therefore, a threshold value proportional to the square root of the output of the low-frequency signal integration unit 1058 is set, and a signal in the defect detection unit 1057 that exceeds this threshold is extracted as a defect.
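  • This is the shot-noise assumption made explicit: the noise scales with the square root of the haze level, so the detection threshold does too. A minimal sketch (the coefficient k is a tuning parameter and an assumption, not a value from the patent):

```python
import numpy as np

def detect_defects(high_freq_signal: np.ndarray,
                   low_freq_integrated: np.ndarray,
                   k: float = 5.0) -> np.ndarray:
    """Indices where the high-frequency (defect) signal exceeds a threshold
    proportional to the square root of the low-frequency haze signal."""
    threshold = k * np.sqrt(np.maximum(low_freq_integrated, 0.0))
    return np.flatnonzero(high_freq_signal > threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    haze = np.full(1000, 100.0)                    # slowly varying haze level
    signal = rng.poisson(10, 1000).astype(float)   # background photon counts
    signal[500] += 120.0                           # one injected defect
    print(detect_defects(signal, haze))            # -> [500]
```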
  • The defect detected by the defect detection unit 1057 is output to the control unit 53 together with its signal strength and its detection coordinates on the sample W.
  • the signal intensity detected by the low-frequency signal integration unit 1058 is also transmitted to the control unit 53 as roughness information of the sample surface and output to the display unit 54 or the like for the user who operates the apparatus.
  • the size of the image formed on the sensor surface is greatly affected by the position and focal length of the lens array 1028.
  • When the magnifications of the divided images formed on the sensor surface differ, that is, when the spread of the intensity distributions imaged on the pixel blocks 1031 to 1034 in FIG. 16 differs or when the positions at which they are imaged differ, image blur occurs when the divided images are integrated and the detection sensitivity decreases.
  • Therefore, the control unit 53 of FIG. 1 has a magnification calculation unit 532 that calculates the magnification of the image formed on the photoelectric conversion unit 103, and a calculation processing unit 533 that obtains, based on the magnification calculated by the magnification calculation unit 532, a control amount for changing the image formation state of the image.
  • the image formation state control unit 10212 changes the image formation state of the image formed on the photoelectric conversion unit 103 based on the control amount obtained by the calculation processing unit 533.
  • An example of the image forming unit 102-A1 is shown in FIG. 19A.
  • An aperture 1029 smaller than the lens array 1028 is installed; it is made of a material such as metal that can block a part of the light incident on the lens array 1028.
  • In FIG. 19A, light is incident only on the uppermost lens of the lens array, and the light directed to the other lenses is blocked by the aperture.
  • FIGS. 19B and 19C show examples of the aperture.
  • In FIG. 19B, there is an outer frame 1029a, and the metal plate 1029-1 can be moved inside the outer frame 1029a by an electrically controlled motor. By moving the metal plate 1029-1 in the arrow direction as shown in FIG. 19B, the images formed by the individual lenses of the lens array 1028 can be observed independently.
  • The aperture 1029 is inserted into the optical path at the time of adjustment, and is completely retracted from the optical path at the time of inspection so that all the divided images are detected.
  • FIG. 19C shows a method in which the metal plate 1029-2 slides in the direction of the arrow from the side in the outer frame 1029b that is about twice the size of the lens array 1028.
  • the metal plate 1029-2 is slid from the side to block part of the light incident on the lens array 1028.
  • the image formation of each lens of the lens array 1028 can be observed independently.
  • the image forming unit 102-A1 has an image selection mechanism that selects a part of images from a plurality of images formed by dividing the aperture.
  • the magnification calculator 532 calculates the magnification of a part of the images selected by the image selection mechanism.
  • the calculation processing unit 533 obtains a control amount for changing the image formation state of some images based on the magnification of some images calculated by the magnification calculation unit 532.
  • the image selection mechanism is configured by an aperture 1029 that selects a part of the plurality of images by blocking a part of the light that is incident on the front side of the photoelectric conversion unit 103.
  • In another configuration, a changeover switch 1037 is attached to the photoelectric conversion unit 103 as shown in FIG. 20.
  • The operator can electrically switch each of the changeover switches 1037 ON and OFF from a GUI described later, so that only the signals from a selected part of the sensors are detected.
  • In the state shown in FIG. 20, the signal of the sensor 1031 is detected, but the signals of the sensor 1032, the sensor 1033, and the sensor 1034 are not detected.
  • Unlike the aperture 1029, the mechanism of FIG. 20 is not affected by light leaking past the aperture and is therefore more accurate. Furthermore, since the switching is electrical and faster than mechanically moving the aperture 1029, the time required for measurement can be shortened.
  • the image selection mechanism is configured by a changeover switch 1037 that electrically selects ON/OFF to select the partial image from the plurality of images.
  • FIG. 21 shows a mechanism for controlling the external atmospheric pressure of the lens array 1028.
  • the lens array 1028 is inserted into the sealed space 10210-a.
  • the surface through which light enters and exits is made of synthetic quartz or the like so that light is transmitted.
  • An atmospheric pressure sensor 10210-b is attached inside the closed space to measure the atmospheric pressure. While referring to the measured atmospheric pressure data in the signal processing unit 105, the atmospheric pressure in the closed space is controlled using the control box 10210-c.
  • Because the focal length of the lenses depends on the ambient atmospheric pressure, this mechanism constitutes an image formation state control unit 10212 that changes the state of the image on the sensor surface.
  • the imaging unit 102-A1 has an atmospheric pressure adjusting mechanism that controls the atmospheric pressure of the closed space (space 10210-a) including the lens array 1028. Then, the imaging state control unit 10212 changes the imaging state by controlling the atmospheric pressure of the closed space (space 10210-a) by the atmospheric pressure adjusting mechanism.
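  • A schematic sketch of the pressure feedback implied here: the control box drives the pressure in the sealed lens-array volume toward a setpoint that compensates the focus shift. The controller structure, gain, and interface below are assumptions for illustration, not the patent's implementation.

```python
class PressureFocusController:
    """Simple proportional controller for the sealed-volume pressure around the
    lens array, used to trim the focal length of the imaging path."""

    def __init__(self, setpoint_hpa: float, gain: float = 0.5):
        self.setpoint_hpa = setpoint_hpa
        self.gain = gain

    def command(self, measured_hpa: float) -> float:
        """Pressure correction (hPa) to request from the control box."""
        return self.gain * (self.setpoint_hpa - measured_hpa)

if __name__ == "__main__":
    ctrl = PressureFocusController(setpoint_hpa=1013.25)
    pressure = 1008.0                       # reading from the pressure sensor
    for _ in range(5):                      # a few control iterations
        pressure += ctrl.command(pressure)  # the control box adjusts the volume
        print(round(pressure, 2))
```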
  • In FIG. 22, micrometers 10211a, 10211b, and 10211c are attached to the lens arrays 1028a, 1028b, and 1028c, respectively.
  • By moving the micrometers, the positions of the lens arrays 1028a, 1028b, and 1028c are changed in the optical axis direction. This also constitutes an image formation state control unit 10212 that changes the state of the image on the sensor surface.
  • the image forming unit 102-A1 includes the plurality of lens arrays 1028a, 1028b, and 1028c. Then, the image formation state control unit 10212 moves at least one of the plurality of lens arrays 1028a, 1028b, 1028c in the optical axis direction of the detection unit 102 to change the image formation state.
  • the image forming unit 102-A1 has micrometers 10211a, 10211b, 10211c arranged in each of the plurality of lens arrays 1028a, 1028b, 1028c.
  • the imaging state control unit 10212 moves at least one of the plurality of lens arrays 1028a, 1028b, 1028c in the optical axis direction by moving the micrometers 10211a, 10211b, 10211c to change the imaging state.
  • FIG. 23 shows another embodiment of FIGS. 21 and 22.
  • the micrometer 10211d is moved to move the photoelectric conversion unit 103 in the optical axis direction. This constitutes the image formation state control unit 10212 that changes the image state of the sensor surface.
  • the image formation state control unit 10212 changes the image formation state by moving the photoelectric conversion unit 103 in the optical axis direction of the detection unit 102.
  • the image forming unit 102-A1 includes a micrometer 10211d arranged in the photoelectric conversion unit 103. Then, the image formation state control unit 10212 moves the micrometer 10211d to move the photoelectric conversion unit 103 in the optical axis direction to change the image formation state.
  • the image state control unit 10212 may change the image forming state by moving the sample W in the direction perpendicular to the surface.
  • FIGS. 24A to 24C show a GUI, displayed on the display unit 54, for observing a selected part of the divided images; this is an example in which a light beam divided into four is imaged on four sensors.
  • The image formation state of the sensor 1 is shown in the observation result 541-1.
  • The sensor shown in the observation result 541-1 is selected by the selection button 542-1; the sensor 1, displayed in gray, is the selected one.
  • The sensor to be observed is then switched by the mechanism shown in FIGS. 19A to 19C or FIG. 20, and the image formation state of the sensor 2 is shown in the observation result 541-2. In this way a part of the divided images is observed at a time, and the image of each sensor is stored in the memory 531 in the control unit 53 shown in FIG. 25.
  • The monitor 54-3 displays an integrated image of the images acquired by the sensors; compared with the calibration value, the measured size of the integrated image is larger, which shows that the integrated image is blurred.
  • The magnification is obtained from the image size on each sensor surface by the magnification calculation unit 532 in the control unit 53 shown in FIG. 25, and the amount of deviation from the calibration value specified by the operator is measured by the calculation processing unit 533.
  • the image on the sensor surface is changed by the image formation state control unit 10212 in any of FIGS. 21, 22, and 23, and the image on each sensor surface is detected again.
  • FIGS. 24D and 24E show the observation results of the images formed on the sensor 1 and the sensor 2 after the image on the sensor surface has been changed.
  • the size of the integrated image of the sensor 1, the sensor 2, the sensor 3, and the sensor 4 is approximately equal to the calibration value.
  • the magnification calculation unit 532 obtains the magnification from the size of each image, and if the deviation amount from the designated calibration value is smaller than the allowable value, the wafer inspection is started.
  • one image selected by the image selection mechanism is displayed on the display unit 54 of FIG. 1 (see FIGS. 24A, 24B, 24D, and 24E).
  • the display unit 54 also displays an integrated image of all the images selected by the image selection mechanism (see FIGS. 24C and 24F).
  • FIG. 26 shows a flowchart for starting measurement with equal magnifications for the divided images.
  • The size of the detected image is compared with the reference size (S262). When the difference from the reference value is smaller than the allowable value (S263), the measurement is started (S265); otherwise, the image formation state is changed and the image is detected again.
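  • A compact sketch of this calibration loop (the step numbers S262, S263, and S265 follow the text; the measurement and adjustment callables, the gain, and the iteration limit are placeholders, not the patent's API):

```python
from typing import Callable

def calibrate_and_start(detect_image_size: Callable[[], float],
                        adjust_imaging_state: Callable[[float], None],
                        reference_size: float,
                        tolerance: float,
                        max_iterations: int = 20) -> bool:
    """Detect the divided-image size, compare it with the reference (S262, S263),
    adjust the imaging state while the deviation is too large, and report when
    the measurement can be started (S265)."""
    for _ in range(max_iterations):
        deviation = detect_image_size() - reference_size   # S262
        if abs(deviation) < tolerance:                     # S263
            return True                                    # S265: start measurement
        adjust_imaging_state(deviation)                    # change imaging state, retry
    return False

if __name__ == "__main__":
    state = {"size": 1.30}
    ok = calibrate_and_start(
        detect_image_size=lambda: state["size"],
        adjust_imaging_state=lambda dev: state.update(size=state["size"] - 0.5 * dev),
        reference_size=1.00,
        tolerance=0.02,
    )
    print("start measurement:", ok)
```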
  • Next, the defect inspection apparatus of the second embodiment will be described. Since its basic structure is the same as that of the first embodiment, the common parts are not described again.
  • FIG. 27 shows an example of the image forming unit 102-A1 of this embodiment.
  • A polarization beam splitter 10213 is inserted as an optical-path branching element between the lens array 1028 and the photoelectric conversion unit 103-1, and the two-dimensional camera 103-2 is arranged at a position conjugate with the photoelectric conversion unit 103-1.
  • Although the polarization beam splitter 10213 is used in this embodiment, a removable mirror 10214 that splits off the light, as shown in FIGS. 35A and 35B, can also be used.
  • a CMOS camera or a CCD camera is used as the two-dimensional camera.
  • the pixel size of the two-dimensional camera 103-2 is smaller than the size of the image, and the size of the light-receiving surface of the two-dimensional camera 103-2 is a size that allows observation of all divided images.
  • the image formed at the position of the photoelectric conversion unit 103-1 can be observed with high resolution, and the position and size of the image can be measured with high accuracy.
  • the detected divided image 544 is displayed on the two-dimensional camera image display unit 543 in the display unit 54 from the two-dimensional camera 103-2 via the control unit 53.
  • Thus, the image state can be changed while observing the image on the sensor surface, and the magnification and image-forming position of each divided image, as well as their deviation from the ideal state, can be obtained.
  • In other words, the image forming unit 102-A1 has the polarization beam splitter 10213 that splits off a part of the light before it is incident on the photoelectric conversion unit 103-1, the two-dimensional camera 103-2 on which the light split off by the polarization beam splitter 10213 is incident, and the two-dimensional camera image display unit 543 that displays at least one of the plurality of images captured by the two-dimensional camera 103-2.
  • FIGS. 28A to 28F show GUIs, displayed on the display unit 54, with which an operator observes the image state from a two-dimensional camera image; this is an example in which an image divided into four is detected by the two-dimensional camera 103-2.
  • The monitor 54-7 and the monitor 54-8 display the line profiles along the lines 546-1 and 546-2 drawn across the divided images 544 in the two-dimensional camera image display unit 543, and the calibration values are displayed on the observation results 545-1 and 545-2.
  • the magnification calculation unit 532 and the image position calculation unit 534 provided in the control unit 53 shown in FIG. 36 can measure the size and position of each divided image and can measure the difference from the calibration value.
  • The divided-image integration processing unit 535 provided in the control unit 53 shown in FIG. 36 calculates the integrated image of the individual images and displays the line profile of the integrated image, from which it can be seen how much the image sizes differ.
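  • A minimal sketch of how a size and position can be read from such a line profile, using the full width at half maximum and the centroid (a generic measurement for illustration, not necessarily what the magnification calculation unit 532 implements):

```python
import numpy as np

def profile_width_and_center(profile: np.ndarray) -> tuple[float, float]:
    """Return (FWHM in pixels, centroid in pixels) of a 1-D line profile."""
    p = profile - profile.min()
    half = 0.5 * p.max()
    above = np.flatnonzero(p >= half)
    fwhm = float(above[-1] - above[0] + 1)
    centroid = float(np.sum(np.arange(p.size) * p) / np.sum(p))
    return fwhm, centroid

if __name__ == "__main__":
    x = np.arange(200)
    divided = np.exp(-0.5 * ((x - 100) / 5.0) ** 2)   # one well-adjusted divided image
    shifted = np.exp(-0.5 * ((x - 102) / 8.0) ** 2)   # a misadjusted divided image
    integrated = divided + shifted                    # line profile of the integrated image
    for name, prof in (("divided", divided), ("integrated", integrated)):
        w, c = profile_width_and_center(prof)
        print(f"{name}: FWHM = {w:.1f} px, center = {c:.1f} px")
```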
  • the calibration button By pressing the calibration button, the amount of deviation from the calibration value specified by the operator is measured. Then, as in the first embodiment, when the deviation amount is larger than the allowable value, the image of the sensor surface is changed and the image of each sensor surface is detected again. If the deviation amount is smaller than the allowable value, the wafer inspection is started (see FIG. 26).
  • FIGS. 28D to 28F show the observation results of the image state of the sensor surface after the image change.
  • in this way, the sizes and positions of the divided images can be made equal to each other, blurring of the integrated image can be prevented, and deterioration of the detection sensitivity can be avoided.
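
As an informal illustration of the measurement-start flow of FIG. 26, the following Python sketch compares each detected divided-image size with the reference size and starts measurement only when every difference is within the allowable value. It is a sketch only: the helper names detect_divided_image_sizes(), adjust_sensor_surface_image() and start_measurement(), and the numerical values, are assumptions standing in for the actual device interfaces rather than part of the disclosure.

    # Sketch of the flow of FIG. 26 (S262: compare with reference, S263: tolerance
    # check, S265: start measurement); all names and values are illustrative.
    REFERENCE_SIZE_UM = 10.0   # assumed reference size of a divided image
    ALLOWABLE_DIFF_UM = 0.5    # assumed allowable difference

    def within_tolerance(detected_sizes_um):
        """S262/S263: every detected divided-image size must be close to the reference."""
        return all(abs(size - REFERENCE_SIZE_UM) <= ALLOWABLE_DIFF_UM
                   for size in detected_sizes_um)

    def measurement_start_flow(detect_divided_image_sizes,
                               adjust_sensor_surface_image,
                               start_measurement):
        sizes = detect_divided_image_sizes()      # detect the images on the sensor surface
        while not within_tolerance(sizes):        # S263: difference exceeds the allowable value
            adjust_sensor_surface_image()         # change the image on the sensor surface
            sizes = detect_divided_image_sizes()  # detect again after the change
        start_measurement()                       # S265: start measurement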
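
The two conditions noted above for the two-dimensional camera 103-2 (a pixel smaller than the image, and a light-receiving surface covering all divided images) can be expressed as a simple check. The sketch below is illustrative only; the function name, the 2 x 2 split and all numerical values are assumptions, not values from the disclosure.

    # Sketch of the two geometric conditions on the two-dimensional camera 103-2;
    # all numbers are illustrative assumptions.
    def camera_suitable(pixel_pitch_um, sensor_w_um, sensor_h_um,
                        image_size_um, n_images_x, n_images_y):
        # condition 1: the camera pixel is smaller than the image to be observed
        resolves_image = pixel_pitch_um < image_size_um
        # condition 2: the light-receiving surface is large enough to observe
        # all of the divided images at once
        covers_all = (sensor_w_um >= n_images_x * image_size_um and
                      sensor_h_um >= n_images_y * image_size_um)
        return resolves_image and covers_all

    # example: an image divided into four (2 x 2) observed with a 5.5 um pixel camera
    print(camera_suitable(pixel_pitch_um=5.5, sensor_w_um=11264.0, sensor_h_um=11264.0,
                          image_size_um=100.0, n_images_x=2, n_images_y=2))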
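
The size and position of a divided image can be read from a line profile such as those shown on the monitors 54-7 and 54-8. The sketch below uses a half-maximum threshold for illustration; the threshold choice and the function names are assumptions rather than the method specified in the disclosure.

    import numpy as np

    # Sketch: estimate the position and size of a divided image from a line profile
    # (for example along line 546-1) and compare the size with a calibration value.
    def profile_position_and_size(profile, pixel_pitch_um):
        profile = np.asarray(profile, dtype=float)
        if profile.size == 0:
            return None, 0.0
        threshold = profile.max() / 2.0                # assumed half-maximum criterion
        above = np.flatnonzero(profile >= threshold)   # pixels belonging to the image
        left, right = int(above[0]), int(above[-1])
        center_um = 0.5 * (left + right) * pixel_pitch_um   # image position on the sensor
        width_um = (right - left + 1) * pixel_pitch_um      # image size on the sensor
        return center_um, width_um

    def deviation_from_calibration(width_um, calibration_width_um):
        return width_um - calibration_width_um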
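
Finally, the handling triggered by the calibration button can be summarised as follows. In this sketch, observe_divided_image_widths(), adjust_sensor_surface_image() and start_wafer_inspection() are hypothetical placeholders for the device interfaces, and the allowable value is an assumed number.

    # Sketch of the calibration-button handling in the second embodiment: measure the
    # deviation of every divided image from the operator-specified calibration value,
    # then either change the sensor-surface image or start the wafer inspection.
    ALLOWABLE_DEVIATION_UM = 0.5   # assumed allowable value

    def on_calibration_button(observe_divided_image_widths,
                              adjust_sensor_surface_image,
                              start_wafer_inspection,
                              calibration_width_um):
        widths_um = observe_divided_image_widths()   # widths measured from the 2D camera image
        deviations = [w - calibration_width_um for w in widths_um]
        if any(abs(d) > ALLOWABLE_DEVIATION_UM for d in deviations):
            adjust_sensor_surface_image(deviations)  # change the image, then detect again
        else:
            start_wafer_inspection()                 # see FIG. 26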

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The present invention comprises: an image forming unit for forming a plurality of images, obtained by dividing an aperture, on a photoelectric conversion unit at a magnification set for each image; and a signal processing unit for combining the plurality of images formed on the photoelectric conversion unit to detect defects in a sample.
PCT/JP2018/047448 2018-12-25 2018-12-25 Dispositif d'inspection de défaut WO2020136697A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/047448 WO2020136697A1 (fr) 2018-12-25 2018-12-25 Dispositif d'inspection de défaut

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/047448 WO2020136697A1 (fr) 2018-12-25 2018-12-25 Dispositif d'inspection de défaut

Publications (1)

Publication Number Publication Date
WO2020136697A1 true WO2020136697A1 (fr) 2020-07-02

Family

ID=71129241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/047448 WO2020136697A1 (fr) 2018-12-25 2018-12-25 Dispositif d'inspection de défaut

Country Status (1)

Country Link
WO (1) WO2020136697A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022162881A1 (fr) * 2021-01-29 2022-08-04 株式会社日立ハイテク Dispositif d'inspection de défauts

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5919345A (ja) * 1982-07-23 1984-01-31 Hitachi Ltd 認識装置
JPS6232613A (ja) * 1985-08-05 1987-02-12 Canon Inc 投影露光装置
JPH0232523A (ja) * 1988-07-22 1990-02-02 Mitsubishi Electric Corp 露光制御方法
JP2003114195A (ja) * 2001-10-04 2003-04-18 Dainippon Screen Mfg Co Ltd 画像取得装置
US20080074749A1 (en) * 2006-09-07 2008-03-27 Lizotte Todd E Apparatus and methods for the inspection of microvias in printed circuit boards
JP2009283633A (ja) * 2008-05-21 2009-12-03 Hitachi High-Technologies Corp 表面検査装置及び表面検査方法
JP2012117898A (ja) * 2010-11-30 2012-06-21 Hitachi High-Technologies Corp 欠陥検査装置、欠陥情報取得装置及び欠陥検査方法
JP2014163681A (ja) * 2013-02-21 2014-09-08 Toppan Printing Co Ltd 周期性パターンのムラ検査方法及びムラ検査装置
JP2014209068A (ja) * 2013-04-16 2014-11-06 インスペック株式会社 パターン検査装置
JP2016035466A (ja) * 2015-09-24 2016-03-17 株式会社日立ハイテクノロジーズ 欠陥検査方法、微弱光検出方法および微弱光検出器
US20180059552A1 (en) * 2016-08-23 2018-03-01 Asml Netherlands B.V. Metrology Apparatus for Measuring a Structure Formed on a Substrate by a Lithographic Process, Lithographic System, and Method of Measuring a Structure Formed on a Substrate by a Lithographic Process
JP2018510320A (ja) * 2014-12-09 2018-04-12 ビーエーエスエフ ソシエタス・ヨーロピアBasf Se 光学検出器
WO2018216277A1 (fr) * 2017-05-22 2018-11-29 株式会社日立ハイテクノロジーズ Dispositif d'inspection de défauts et procédé d'inspection de défauts

Similar Documents

Publication Publication Date Title
US11143598B2 (en) Defect inspection apparatus and defect inspection method
KR101478476B1 (ko) 결함 검사 방법, 미약광 검출 방법 및 미약광 검출기
US8922764B2 (en) Defect inspection method and defect inspection apparatus
JP5773939B2 (ja) 欠陥検査装置および欠陥検査方法
JP5487196B2 (ja) 小さな反射屈折対物レンズを用いる分割視野検査システム
JP5355922B2 (ja) 欠陥検査装置
JP2018519524A (ja) 半導体ウェハ上の高さを測定するための方法および装置
US11366069B2 (en) Simultaneous multi-directional laser wafer inspection
WO2013077125A1 (fr) Procédé d'inspection des défauts et dispositif correspondant
TW201604609A (zh) 自動聚焦系統
TW201932828A (zh) 用於晶圓檢測之系統
JP2004264287A (ja) サンプリング不足の画像を再構築するためにディザリングを用いることによって基板表面内の欠陥を識別する方法および装置
WO2013168557A1 (fr) Procédé de contrôle de défaut et dispositif de contrôle de défaut
JP5815798B2 (ja) 欠陥検査方法および欠陥検査装置
US20220291140A1 (en) Defect inspection device and defect inspection method
JP6117305B2 (ja) 欠陥検査方法、微弱光検出方法および微弱光検出器
JP2011027662A (ja) 欠陥検査装置およびその方法
US7767982B2 (en) Optical auto focusing system and method for electron beam inspection tool
WO2020136697A1 (fr) Dispositif d'inspection de défaut
US11356594B1 (en) Tilted slit confocal system configured for automated focus detection and tracking
WO2021250771A1 (fr) Dispositif d'inspection de défaut
JPH10221270A (ja) 異物検査装置
JP5668113B2 (ja) 欠陥検査装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
  Ref document number: 18944131; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
  Ref country code: DE
122 Ep: pct application non-entry in european phase
  Ref document number: 18944131; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
  Ref country code: JP