WO2001008205A1 - Exposure method, exposure system, light source, and manufacturing method and device - Google Patents

Exposure method, exposure system, light source, and manufacturing method and device

Info

Publication number
WO2001008205A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
exposure
substrate
sensor
energy beam
Prior art date
Application number
PCT/JP2000/004892
Other languages
English (en)
Japanese (ja)
Inventor
Yutaka Hamamura
Tatsushi Nomura
Hitoshi Takeuchi
Kenji Nishi
Kazumasa Hiramatsu
Original Assignee
Nikon Corporation
Priority date
Filing date
Publication date
Application filed by Nikon Corporation filed Critical Nikon Corporation
Priority to AU60224/00A priority Critical patent/AU6022400A/en
Publication of WO2001008205A1 publication Critical patent/WO2001008205A1/fr

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F7/00Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F7/70Microphotolithographic exposure; Apparatus therefor
    • G03F7/70483Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
    • G03F7/7055Exposure light control in all parts of the microlithographic apparatus, e.g. pulse length control or light interruption
    • G03F7/70558Dose control, i.e. achievement of a desired dose

Definitions

  • The present invention relates to an exposure apparatus, an exposure method, a light source apparatus, and a device manufacturing method. More specifically, the present invention relates to an exposure apparatus and an exposure method used in a photolithographic process for manufacturing semiconductor elements, liquid crystal display elements, and the like, to a light source apparatus suitable as a light source of the exposure apparatus, and to a device manufacturing method including a step of performing exposure using the exposure apparatus and the exposure method.
  • In such photolithographic processes, projection exposure apparatuses such as step-and-repeat type reduction projection exposure apparatuses (so-called steppers) and step-and-scan type scanning exposure apparatuses (so-called scanning steppers), which are an improvement on the stepper, are mainly used.
  • The resolution R of the projection optical system is given by R = k·λ/N.A., where λ is the wavelength of the exposure light, N.A. is the numerical aperture of the projection optical system, and k is a process-dependent constant.
  • Exposure apparatuses with a projection-optical-system numerical aperture of 0.6 or more have been put to practical use with a KrF excimer laser as the exposure light source, and exposure with a device rule (practical minimum line width) of 0.25 μm has been realized.
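The resolution relation above can be checked numerically. A minimal sketch follows; the process factor k = 0.5 and the example values are illustrative assumptions, not figures from the patent:

```python
# Rayleigh resolution criterion: R = k * wavelength / NA.
# k (process-dependent factor) here is an illustrative assumption.
def resolution_nm(wavelength_nm: float, na: float, k: float = 0.5) -> float:
    """Minimum resolvable line width in nanometers."""
    return k * wavelength_nm / na

# KrF excimer laser (248 nm) with NA = 0.6 gives a line width on the
# order of the 0.25-um device rule mentioned in the text; moving to an
# ArF laser (193 nm) shortens the achievable line width further.
krf = resolution_nm(248.0, 0.6)
arf = resolution_nm(193.0, 0.6)
```

This shows why the text turns to shorter-wavelength sources (ArF, F2): with NA already near its practical limit, shrinking λ is the remaining lever on R.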
  • an argon fluoride (ArF) excimer laser
  • a fluorine (F2) laser
  • Conventionally, exposure control has been performed as follows. The amount of exposure light irradiating the reticle is measured in advance, in front of the projection optical system, by a light amount monitor (called an integrator sensor) arranged in the illumination optical system, while behind the projection optical system the light quantity of the exposure light transmitted through the reticle and the projection optical system is measured by a light amount monitor on the wafer stage, for example an illuminometer, and the output ratio between the integrator sensor and the illuminometer is obtained in advance.
  • During exposure, the illuminance at the wafer surface is estimated from the output value of the integrator sensor using this output ratio, and the exposure amount is feedback-controlled so that the illuminance at the image surface becomes a desired value.
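The calibrate-then-estimate scheme described above can be sketched as follows. The class and method names, and the simple proportional model, are illustrative assumptions; the actual apparatus uses calibrated hardware:

```python
# Sketch of indirect image-plane illuminance estimation: the integrator
# sensor in the illumination system is calibrated once against an
# illuminometer on the wafer stage, and the stored output ratio is then
# used to estimate wafer-surface illuminance during exposure.
class DoseEstimator:
    def calibrate(self, integrator_out: float, illuminometer_out: float) -> None:
        # Output ratio measured in advance (illuminometer / integrator).
        self.ratio = illuminometer_out / integrator_out

    def wafer_illuminance(self, integrator_out: float) -> float:
        # During exposure only the integrator sensor is read; the wafer
        # illuminance is estimated through the stored ratio.
        return integrator_out * self.ratio

est = DoseEstimator()
est.calibrate(integrator_out=2.0, illuminometer_out=1.5)  # ratio 0.75
```

The scheme works only while both sensors keep their sensitivity, which is exactly why the durability of the photodiodes matters in what follows.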
  • Since the output of the light source fluctuates, the output fluctuation also needs to be detected using a light amount monitor (energy monitor) disposed inside the light source.
  • A photodiode (PD) using a Si semiconductor material is typically used as the optical sensor (light receiving element) for the above various measurements.
  • However, a photodiode using a Si semiconductor material cannot obtain sufficient durability against such short-wavelength laser light, and the resulting use of sensors with degraded sensitivity sometimes deteriorated the exposure accuracy.
  • A first object of the present invention is to provide an exposure apparatus that maintains high exposure accuracy over a long period of time without frequent replacement of optical sensors.
  • a second object of the present invention is to provide an exposure method capable of transferring a pattern onto a substrate with high line width accuracy.
  • a third object of the present invention is to provide a device manufacturing method capable of improving the productivity of a microdevice having a higher degree of integration. According to a first aspect of the present invention, there is provided an exposure apparatus for transferring an image of a pattern formed on a mask onto a substrate,
  • An optical sensor for measuring the intensity of at least a part of the energy beam, wherein the output of the optical sensor is used for controlling exposure conditions;
  • The optical sensor includes a light receiving unit for receiving the energy beam and electrodes connected to the light receiving unit, wherein the light receiving unit is formed of an MN-based material, M being at least one element selected from the group consisting of In, Ga, and Al, and N representing nitrogen.
  • The MN-based materials include GaN, AlN, InN, InGaN, InAlN, GaAlN, and InGaAlN. They may be used alone or in combination, for example, as a multilayer film.
  • the MN-based material forms a multilayer film
  • each layer is formed of the MN-based material, and the material forming each layer may be the same or different.
  • the multilayer film may have a p-type GaN layer and an n-type GaN layer, and may have an n-type GaN layer and an i-type GaN layer. Is also good.
  • these multilayer films may include a buffer layer formed of, for example, GaN.
  • These MN-based materials generally have very high melting points of 1200 °C or higher and are stable materials with a hardness close to that of diamond.
  • The MN-based material may be doped with a small amount of an impurity element that functions as a donor or acceptor to form a semiconductor, for example, trace amounts of Si, Ge, Sn, Sb, Mg, Zn, or Cd.
  • In GaxAl1-xN, x preferably satisfies 0 ≤ x ≤ 1.
  • The multilayer film may comprise InGaAlN.
  • The multilayer film can be grown on a substrate; a GaN single crystal substrate or a sapphire substrate is preferably used from the viewpoint of epitaxially growing crystals of the above MN-based material.
  • It has been found that the Si-based photodiodes conventionally used for measurement and control of exposure light intensity, focus, alignment of masks and substrates, and the like can degrade significantly in a short time when exposed to short-wavelength, high-energy laser beams such as KrF excimer laser light with a wavelength of 248 nm and ArF excimer laser light with a wavelength of 193 nm.
  • Such deterioration, together with contamination around the light receiving surface, significantly alters the recombination rate of carriers in the photodiode; for example, the majority of carriers generated near the surface disappear rapidly due to surface recombination and do not contribute sufficiently to the photocurrent.
  • The present inventors have found that a light receiving element having a light receiving portion formed of a plurality of layers of the above MN-based compound semiconductor has sufficient sensitivity to, and excellent durability against, short-wavelength light of 200 nm or less emitted by the ArF excimer laser light source used in the latest exposure apparatuses, the F2 laser light source attracting attention as a light source for next-generation exposure apparatuses, laser plasma light sources, SOR sources, and the like.
  • The present inventors fabricated a photodiode having a light-receiving surface made of GaN and, as shown in FIG. 19, irradiated both this photodiode and a Si photodiode with pulsed laser light from an ArF excimer laser having an oscillation wavelength of 193 nm at 140 nJ/cm2/pulse.
  • The optical sensor formed from the above MN-based compound semiconductor thus has high sensitivity to an energy beam having a wavelength of 200 nm or less and high light resistance. As the wavelength of the energy beam becomes shorter, the photon energy (hν) increases, and in conventional Si-based PDs the degradation and sensitivity fluctuations increase accordingly.
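The wavelength-to-photon-energy relation behind that statement is E = hc/λ, which can be evaluated directly (the function name is illustrative; the constants are standard CODATA values):

```python
# Photon energy E = h*c / wavelength: shorter wavelength means higher
# photon energy, which is why deep-UV light degrades Si photodiodes.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

# ArF (193 nm) photons carry about 6.4 eV, versus about 5.0 eV for
# KrF (248 nm) -- well above the ~1.1 eV band gap of Si.
```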
  • The light source may comprise a short-wavelength source such as an ArF laser, F2 laser, Kr2 laser, or Ar2 laser.
  • The optical sensors used in the exposure apparatus of the present invention include, for example, a beam monitor for observing the light intensity of the laser light source, an integrator sensor for obtaining the illuminance on the substrate, and a reflection monitor for obtaining the reflected light from the substrate.
  • There is provided an exposure apparatus for illuminating a mask (R) with an energy beam and transferring a pattern formed on the mask onto a substrate (W), the apparatus comprising a beam source for outputting the energy beam and a first optical sensor (16c) including a light receiving element having a plurality of electrodes for extracting a photocurrent to the outside.
  • The first optical sensor can detect the intensity, center wavelength, spectral half-width, and the like of the energy beam with high accuracy and stability. Deterioration of measurement reproducibility and deterioration over time due to sensitivity loss of the first optical sensor are suppressed, and unnecessary output fluctuations of the first optical sensor are reduced, so that exposure amount control errors arising from them can be suppressed. Therefore, the exposure accuracy can be maintained at a high level over a long period of time without frequently replacing the first optical sensor.
  • When the beam source is a pulsed light source such as a laser, the energy variation σ per pulse becomes smaller, and the minimum number of pulses N required to achieve the irradiation energy error δ allowed during exposure can be made smaller.
  • As a result, the throughput can be improved by increasing the scanning speed (scan speed).
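The link between per-pulse variation and minimum pulse count can be made concrete under a simple statistical model (an illustrative assumption, not the patent's own derivation): if pulses are independent with relative energy variation σ, the relative error of the integrated dose after N pulses scales as σ/√N.

```python
import math

# For an allowed integrated-dose error delta, sigma / sqrt(N) <= delta
# gives the minimum pulse count N >= (sigma / delta)**2.
# (Independent-pulse averaging model is an illustrative assumption.)
def min_pulses(sigma: float, delta: float) -> int:
    return math.ceil((sigma / delta) ** 2)

# Halving the per-pulse variation quarters the required pulse count,
# which is what allows a faster scan speed at the same dose accuracy.
```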
  • The second optical sensor may be used for detecting a displacement of the optical axis between a light-guiding optical system, which guides the energy beam output from the light source to the illumination optical system, and the illumination optical system, or may be used inside the light source.
  • the second optical sensor may be an integrator sensor used for estimating illuminance on an image plane.
  • Alternatively, the second optical sensor may be configured to constantly monitor the energy beam. In the former case, deterioration of measurement reproducibility and deterioration over time due to sensitivity loss of the integrator sensor can be suppressed.
  • In the latter case, the state of the energy beam can be monitored with high accuracy regardless of whether exposure is in progress.
  • When the output of the integrator sensor is used as a reference for other sensors, for example after calibration using a reference illuminometer, the exposure amount matching accuracy with other exposure apparatuses (other units) can be maintained over a long period of time.
  • The maintenance interval for this calibration can be lengthened, improving the MTBF (mean time between failures), and this also contributes to improvement of the MTTR (mean time to repair).
  • the integrator sensor is used for estimating the illuminance of the image plane, the exposure amount is controlled such that the integrated exposure amount on the substrate becomes the target exposure amount based on the output of the integrator sensor.
  • The energy beam emitted from the mask (R) can thus be applied to the substrate (W) with improved exposure amount control accuracy and, consequently, improved line width accuracy of the pattern formed on the substrate.
  • The third optical sensor can measure, for example, the transmittance of the mask in a state where the return light from the substrate side can be ignored.
  • By calculating the ratio between the output of the integrator sensor and the output of the third optical sensor, it becomes possible to detect the reflectance (or transmittance) of the mask with high accuracy by a predetermined calculation.
  • The reflectance of the substrate is calculated based on the output of the integrator sensor (46) and the output of the third optical sensor (47).
  • Image quality adjusting devices (74a to 74c, 78, 50) are provided for adjusting the image quality.
  • With this arrangement, the energy beam from the light source is detected with high precision by the integrator sensor, and the energy beam that returns from the substrate side after passing through the mask and the projection optical system is detected with high accuracy by the third optical sensor; the arithmetic unit then calculates the reflectance of the substrate with high accuracy based on the output of the integrator sensor and the output of the third optical sensor.
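The ratio-based reflectance calculation above can be sketched in a few lines. The function name and the calibration constant are illustrative assumptions; in practice the constant would be determined with a reference surface of known reflectance:

```python
# Sketch of substrate-reflectance estimation from the two sensor
# outputs described above.  k_cal absorbs optics transmittance and
# sensor gains (determined at calibration time; illustrative here).
def substrate_reflectance(integrator_out: float,
                          third_sensor_out: float,
                          k_cal: float) -> float:
    # Return light picked up by the third sensor is proportional to
    # (incident light) * (substrate reflectance), so dividing by the
    # integrator output cancels light-source fluctuations.
    return k_cal * third_sensor_out / integrator_out
```

Taking the ratio rather than the raw third-sensor output is the key design point: pulse-to-pulse energy variation of the source affects both readings equally and drops out.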
  • The apparatus may further comprise: a projection optical system (PL) that projects the energy beam emitted from the mask onto the substrate; a substrate stage (58) that holds the substrate and moves at least two-dimensionally; a fourth optical sensor (59B) for receiving the energy beam applied to at least a part of a predetermined illumination field; and a transmittance measuring device that uses the fourth optical sensor to measure, at predetermined intervals, the transmittance of the optical system including the projection optical system (PL).
  • the exposure amount may be controlled in further consideration of the change in the transmittance measured by the transmittance measuring device.
  • Here, the transmittance of the optical system is a concept that includes, for example, the reflectance when the projection optical system is an all-reflection optical system; that is, it indicates the ratio of the light emitted from the optical system to the light incident on it.
  • The transmittance of the optical system is measured by the transmittance measuring device at predetermined intervals, for example every time exposure of a predetermined number of substrates is completed, and the exposure control device controls the exposure amount in consideration of the transmittance fluctuation measured by the transmittance measuring device; it is therefore possible to control the exposure amount with higher accuracy and, in turn, to perform exposure with higher precision.
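The periodic transmittance check can be sketched as a ratio-of-ratios: the image-plane sensor reading divided by the integrator-sensor reading is compared against the same ratio recorded at calibration time. Class and method names, and the ratio model itself, are illustrative assumptions:

```python
# Sketch of periodic transmittance tracking for the optical system
# between the integrator sensor and the image plane.
class TransmittanceTracker:
    def __init__(self, ref_image_out: float, ref_integrator_out: float):
        # Ratio recorded at calibration (fresh optics).
        self.ref_ratio = ref_image_out / ref_integrator_out

    def relative_transmittance(self, image_out: float,
                               integrator_out: float) -> float:
        # 1.0 means no change since calibration; < 1.0 means the optics
        # (projection system included) now transmit less light, so the
        # dose controller should compensate accordingly.
        return (image_out / integrator_out) / self.ref_ratio
```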
  • the fourth optical sensor (59B) includes a light receiving unit formed of an MN-based material that receives the energy beam, and a reverse bias applied to the light receiving unit to output a photocurrent to the outside.
  • A mask (R) is illuminated with an energy beam and a pattern formed on the mask is transferred onto a substrate (W).
  • a light-receiving element formed of an MN-based material for receiving the energy beam applied to the light-receiving part; and a plurality of electrodes for applying a reverse bias to the light-receiving part to extract a photocurrent to the outside.
  • An exposure apparatus including the fifth optical sensor (59) is thereby provided.
  • the energy beam can be accurately detected on the image plane by the fifth optical sensor.
  • The apparatus may further include a projection optical system (PL) for projecting the energy beam emitted from the mask (R) onto the substrate (W).
  • the fifth optical sensor may be a sensor that receives light from a mark disposed on the object plane side of the projection optical system on the image plane side of the projection optical system.
  • Based on the measurement values of the fifth optical sensor, the projection position of the mask pattern serving as a reference for mask alignment or baseline measurement can be obtained, or the imaging characteristics of the projection optical system can be determined based on the projection position of the mark image or the contrast of the image light flux.
  • The fifth optical sensor may be a sensor used for measuring the transmittance of an optical system including the projection optical system.
  • the transmittance fluctuation of the optical system caused by the irradiation of the energy beam having high energy can be detected with high accuracy and good stability.
  • The fifth optical sensor may be an irradiation amount monitor having a light receiving unit with an area capable of receiving, at one time, the energy beam irradiated onto the entire illumination field.
  • In such a case, a good imaging state can be maintained by correcting, based on the measurement values of the irradiation amount monitor, the irradiation-induced variation of the imaging characteristics of the projection optical system. In addition, even when the illumination conditions are changed, the irradiation amount monitor can detect the energy beam passing through the projection optical system with high accuracy, so that the basic data of the correction calculation can also be updated.
  • The fifth optical sensor may be a sensor that is detachably mounted on the substrate stage (58), receives the interference light between the energy beam applied to at least a part of the illumination field and a light beam emitted from a predetermined pinhole, and is used to measure the imaging characteristics of the projection optical system. In such a case, the imaging characteristics of the projection optical system can be detected with high accuracy, so that the imaging characteristics can be adjusted well, for example, when assembling the apparatus, when starting up after transport, or when recovering from an emergency such as a power failure.
  • The adjustment (correction) of the imaging characteristics of the projection optical system can be performed entirely manually, but an imaging characteristic adjustment device may automatically adjust the imaging characteristics of the projection optical system based on the measurement values of the fifth optical sensor, so that the labor of the adjustment operation is reduced.
  • the fifth optical sensor may be a reference illuminometer (90) detachably mounted on the substrate stage.
  • The reference illuminometer may be used for calibration of the exposure amount on the substrate between a plurality of exposure apparatuses. In such a case, calibration for matching the exposure amount on the substrate between units (illuminance matching) can be performed with high accuracy.
  • the fifth optical sensor is provided in a predetermined illumination field.
  • an exposure apparatus for illuminating a mask (R) with an energy beam and transferring a pattern formed on the mask onto a substrate (W) via a projection optical system (PL).
  • An image of a measurement pattern formed on the mask and an opening formed in a light receiving surface on the substrate stage are relatively scanned, and the energy beam from the light source transmitted through the opening is received by the sixth optical sensor.
  • Information for determining the positional relationship between the substrate and the mask or the image plane of the projection optical system, with up to six degrees of freedom, can thus be detected with high accuracy. For example, if the above relative scanning is performed on a plurality of measurement patterns on the mask in the XY two-dimensional plane, aerial images of the respective measurement patterns are measured based on the output of the sixth optical sensor.
  • From these aerial image measurements, the imaging characteristics in the XY plane, such as the magnification and distortion of the projection optical system (information used as a reference for determining the positional relationship (overlay offset) between the mask and the substrate in the XY plane), can be detected with high accuracy. Also, for example, by measuring the position of the substrate stage in the Z direction orthogonal to the XY plane during the above relative scanning, or by repeating the relative scanning while changing the Z position of the substrate stage, the positional relationship between the mask and the substrate in the Z direction can be determined based on, for example, a change in the contrast of the differentiated signal of the optical sensor output.
  • The leveling offset serving as a reference for determining the relative positional relationship between the mask and the substrate in the θx and θy directions can also be obtained.
  • the shape of the image plane of the projection optical system or the curvature of field can be detected with high accuracy. Therefore, by adjusting the magnification and the like of the projection optical system in accordance with the above detection results, and performing focus leveling control based on the focus offset and the leveling offset, the overlay accuracy of the mask and the substrate can be improved.
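The contrast-based Z search described above can be sketched as follows. The data layout (one sensor trace per Z position) and the peak-to-peak contrast metric are illustrative assumptions, not the patent's actual signal processing:

```python
# Sketch of best-focus search from aerial-image scans: for each Z
# position of the substrate stage, the mark image is scanned across
# the opening and the sensor trace recorded; the Z that maximizes the
# contrast of the differentiated trace is taken as best focus.
def best_focus(z_positions, traces):
    def contrast(trace):
        # Discrete derivative of the light-amount signal; sharper
        # images give steeper edges, hence a larger peak-to-peak.
        diffs = [b - a for a, b in zip(trace, trace[1:])]
        return max(diffs) - min(diffs)

    scores = [contrast(t) for t in traces]
    return z_positions[scores.index(max(scores))]
```

Repeating this search at several XY field points yields the image-plane shape (field curvature) mentioned in the text.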
  • an exposure apparatus for illuminating a mask (R) with an energy beam and transferring a pattern formed on the mask onto a substrate (W) via a projection optical system (PL).
  • The apparatus comprises: a substrate stage (58) that holds the substrate and moves at least two-dimensionally; and an alignment system having a seventh optical sensor that detects a mark pattern existing in a predetermined illumination field on the mask and a predetermined mark pattern existing on the substrate stage corresponding to that mark pattern, wherein the seventh optical sensor includes a light receiving element having a light receiving unit formed of an MN-based material that receives the image light fluxes of the mark patterns, and a plurality of electrodes for applying a reverse bias to the light receiving unit to extract a photocurrent to the outside.
  • Since the seventh optical sensor constituting the alignment system detects the mark pattern existing in a predetermined illumination field on the mask and the predetermined mark pattern existing on the substrate stage corresponding to it, so-called TTR (through-the-reticle) substrate alignment can be performed by detecting an alignment mark pattern on the substrate as the mark pattern, using the mask as a reference.
  • a seventh optical sensor constituting an alignment system is provided on a mask.
  • Here, mask alignment includes detection of the mask position on a mask coordinate system or of the projection position of the mask on a substrate coordinate system, and association between the mask coordinate system and the substrate coordinate system. In this case, even if light having a wavelength of 200 nm or less is used as the energy beam for exposure and light having a wavelength equal or close to the exposure wavelength is used for alignment, highly accurate mark pattern detection becomes possible. Therefore, according to the present invention, the overlay accuracy can be improved.
  • the seventh optical sensor may detect either a one-dimensional image or a two-dimensional image. Also in this case, it is not necessary to frequently exchange the seventh optical sensor.
  • The seventh optical sensor may be an image sensor (104R) that detects a projected image of both mark patterns as a predetermined two-dimensional image.
  • the alignment system may be a mask alignment system (100) for aligning the mask (R).
  • The exposure apparatus according to each aspect may further comprise one or more eighth optical sensors that receive the energy beam, wherein at least one of the eighth optical sensors includes a light receiving element having a light receiving portion formed of an MN-based material that receives the energy beam, and a plurality of electrodes for applying a reverse bias to the light receiving portion to extract a photocurrent to the outside.
  • Such an optical sensor enables highly accurate and stable detection of the energy beam, improving the exposure amount control accuracy and the overlay accuracy (including the synchronization accuracy between the mask and the substrate in a scanning type exposure apparatus).
  • the exposure accuracy can be maintained at high accuracy over a long period of time without frequently replacing the optical sensor.
  • When all of the eighth optical sensors are provided with a light receiving portion formed of an MN-based material for receiving the energy beam and a plurality of electrodes for applying a reverse bias to the light receiving portion and extracting a photocurrent to the outside, the exposure accuracy can be improved to the greatest extent through improvement of the exposure amount control accuracy, the overlay accuracy, or the line width accuracy on the substrate.
  • the substrate stage can control a position and a posture of the substrate in at least five degrees of freedom.
  • Here, the five degrees of freedom mean the overlay control axes (X, Y) and the focus/leveling control axes (Z, θx, θy), excluding the in-plane rotation direction (θz direction) of the substrate.
  • The θz direction can be controlled on the substrate stage side or by driving the mask side.
  • the relative positional relationship between the mask and the substrate in the directions of six degrees of freedom can be set to a desired relationship.
  • the wavelength of the energy beam is 200 nm or less.
  • According to the optical sensor employed in the exposure apparatus of the present invention, for example, ArF excimer laser light with a wavelength of 193 nm and F2 laser light with a wavelength of 157 nm can be detected with high accuracy and good stability.
  • There is provided an exposure method for illuminating a mask with an energy beam and transferring a pattern formed on the mask onto a substrate via a projection optical system, comprising: a first step of receiving the energy beam with a light receiving unit formed of an MN-based material in a state where a reverse bias is applied to the light receiving unit, and detecting information on the intensity of the energy beam based on a photocurrent extracted from the light receiving unit to the outside; and a second step of transferring the pattern of the mask onto the substrate at a predetermined resolution and depth of focus using the obtained information.
  • an energy beam is received in a state where a reverse bias is applied to a light receiving unit formed of an MN-based material, and information on the intensity of the energy beam is detected based on a photocurrent extracted from the light receiving unit to the outside. Thereafter, the pattern of the mask is transferred onto the substrate at a predetermined resolution and depth of focus using the detected information.
  • According to this method, information on the intensity of the energy beam is detected with high accuracy over a long period of time, and the mask pattern is transferred onto the substrate at a predetermined resolution and depth of focus using this information, so that the line width accuracy of the pattern transferred and formed on the substrate can be improved.
  • The information detected in the first step can be used in the second step for at least one of adjustment of the imaging characteristics of the projection optical system, control of the exposure amount, and adjustment of the relative position between the mask (or the imaging surface of the projection optical system) and the substrate.
  • There is provided a light source device used in an exposure apparatus for illuminating a mask with an energy beam and transferring a pattern formed on the mask onto a substrate, the device comprising: a beam source for outputting the energy beam; and an optical sensor (16c) housed in the same housing as the beam source, the optical sensor including a light receiving element having a light receiving portion formed of an MN-based material that receives the energy beam output from the beam source, and a plurality of electrodes for applying a reverse bias to the light receiving portion to extract a photocurrent to the outside.
  • With this light source device, the optical sensor can detect the intensity, center wavelength, spectral half-width, and the like of the energy beam with high accuracy and stability; deterioration of measurement reproducibility and deterioration over time due to sensitivity loss of the optical sensor are suppressed, and unnecessary output fluctuations of the optical sensor are reduced, so that exposure amount control errors arising from them can be suppressed. Therefore, the exposure accuracy can be maintained at a high level over a long period of time without frequently replacing the optical sensor.
  • When the beam source constituting the light source device of the present invention is a pulsed light source and is applied to a scanning exposure apparatus, the energy variation σ per pulse becomes small, and the minimum number of pulse oscillations N required to achieve the irradiation energy error δ allowed during exposure can be reduced, thereby improving the throughput by increasing the scanning speed (scan speed).
  • According to a ninth aspect of the present invention, there is provided a device manufacturing method including a photolithographic step, wherein exposure is performed in the photolithographic step using the exposure apparatus according to each aspect with an energy beam having a wavelength of 200 nm or less.
  • FIG. 1 is a diagram showing a schematic configuration of an exposure apparatus according to one embodiment.
  • FIG. 2 is a block diagram showing the inside of the light source of FIG. 1 together with a main controller.
  • FIG. 3A is a diagram schematically illustrating an example of the configuration of the MN-based semiconductor light receiving element
  • FIG. 3B is a diagram schematically illustrating another example of the configuration of the MN-based semiconductor light receiving element.
  • FIG. 4 is a diagram showing an example of a configuration of an optical sensor including the semiconductor light receiving element of FIG.
  • FIG. 5 is a schematic plan view showing the Z tilt stage.
  • FIG. 6 (A) is a view showing a part near the Z-tilt stage in FIG. 1, including the aerial image measuring instrument.
  • FIG. 6B is an enlarged plan view showing the reflection film portion of FIG. 6A.
  • FIG. 7 is a schematic side view of the condenser lens, reticle, projection optical system, Z tilt stage, XY stage, and the like in FIG. 1 viewed in the + Y direction.
  • FIG. 8 is an enlarged plan view showing a part of the reference mark plate FM and the projected image RP of the reticle R.
  • FIG. 9 is a diagram showing an example of the configuration of the image sensor 104R.
  • FIG. 10 is a schematic plan view showing a sensor head of a reference illuminometer installed at a predetermined position on a Z tilt stage.
  • FIG. 11 is a diagram illustrating a state where the center position of the sensor head of the reference illuminometer is positioned at the center of the projection optical system.
  • FIG. 12 is a conceptual diagram showing the state of simultaneous measurement of illuminance by the reference illuminometer and the integrator sensor.
  • FIG. 13 (A) is a plan view showing an example of a reticle having a measurement mark formed thereon
  • FIG. 13 (B) is a diagram showing a specific configuration of the measurement mark.
  • FIG. 14 is a diagram for explaining a method for detecting a photoelectric conversion mark projected image.
  • FIG. 15 (A) is a diagram showing a waveform of a light amount signal obtained as a result of photoelectrically detecting the projected image of the X mark
  • FIG. 15 (B) is a diagram showing a differential waveform thereof.
  • FIG. 16 is a cross-sectional view of a wavefront aberration measuring instrument installed at a predetermined position on the Z tilt stage.
  • FIG. 17 is a flowchart for explaining an embodiment of the device manufacturing method according to the present invention.
  • FIG. 18 is a flowchart showing the processing in step 204 of FIG.
  • FIG. 19 is a graph showing the durability test results of the Si photodiode and the GaN photodiode used in the present invention.
  • FIG. 20 is a diagram schematically showing another example of the configuration of the MN-based semiconductor light receiving element.
  • BEST MODE FOR CARRYING OUT THE INVENTION Hereinafter, embodiments of the present invention will be described with reference to the drawings.
  • FIG. 1 shows the schematic configuration of an exposure apparatus 10 according to one embodiment.
  • the exposure apparatus 10 is a step-and-scan type scanning exposure apparatus.
  • the exposure apparatus 10 includes an illumination system formed of a light source 16 as a light source device and an illumination optical system 12, a reticle stage RST holding a reticle R as a mask illuminated by exposure light IL from the illumination system, a projection optical system PL for projecting the exposure light IL emitted from the reticle R onto a wafer W as a substrate, an XY stage 14 carrying a Z tilt stage 58 as a substrate stage for holding the wafer W, and a control system for these components.
  • as the light source 16, for example, an ArF excimer laser light source that outputs ultraviolet pulsed light having a wavelength of 193 nm is used.
  • the light source 16 is actually installed separately from a chamber 11 in which the components of the illumination optical system 12 and the exposure apparatus main body, formed of the reticle stage RST, the projection optical system PL, and the XY stage 14, are housed.
  • another pulsed light source, such as an F2 laser light source (output wavelength 157 nm), may be used as the light source.
  • FIG. 2 shows the inside of the light source 16 together with the main controller 50.
  • the light source 16 has a laser resonator 16a, a beam splitter 16b, a beam monitor 16c as a first optical sensor, an energy controller 16d, a high-voltage power supply 16e, and the like.
  • the laser beam LB emitted in pulse form from the laser resonator 16a is incident on the beam splitter 16b, which has a high transmittance and a small reflectance, and the laser beam LB transmitted through the beam splitter 16b is emitted to the outside.
  • the laser beam LB reflected by the beam splitter 16b is incident on the beam monitor 16c, a first optical sensor having a detection unit formed of an MN-based crystal, and the photoelectrically converted signal from the beam monitor 16c is supplied to the energy controller 16d as output ES via a peak hold circuit (not shown).
  • the configuration and the like of the optical sensor constituting the beam monitor 16c are features of the present invention and will be described later in detail.
  • the energy controller 16d feedback-controls the power supply voltage of the high-voltage power supply 16e so that the output ES of the beam monitor 16c takes a value corresponding to the target value of energy per pulse in the control information TS supplied from the main controller 50.
  • the unit of the energy control amount corresponding to the output ES of the beam monitor 16c is [mJ/pulse].
  • the energy controller 16d sets the power supply voltage of the high-voltage power supply 16e based on the control information TS from the main controller 50, whereby the pulse energy of the laser beam LB emitted from the laser resonator 16a is set near a predetermined value.
  • the average value of the energy per pulse of the light source 16 is usually stabilized at a predetermined center energy E0, but the average energy can also be controlled within a predetermined variable range above and below the center energy E0 (for example, about ±10%), and the pulse energy is finely modulated within this variable range.
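The feedback described here, in which the energy controller adjusts the high-voltage power supply so that the monitored energy per pulse ES tracks a target value, can be illustrated with a toy loop. The linear "laser" model and the gain below are assumptions for illustration only and are not part of the disclosure.

```python
def control_pulse_energy(target_mj, n_pulses=50, gain=0.5):
    """Toy feedback loop: adjust a high-voltage setting so the monitored
    energy per pulse converges to the target.  The linear laser model
    (0.02 mJ per volt) is a made-up assumption for illustration."""
    hv = 100.0                    # initial high-voltage setting [V]
    mj_per_volt = 0.02            # assumed laser response
    history = []
    for _ in range(n_pulses):
        es = mj_per_volt * hv     # "measured" energy per pulse [mJ/pulse]
        history.append(es)
        # feedback correction toward the target energy
        hv += gain * (target_mj - es) / mj_per_volt
    return history

trace = control_pulse_energy(10.0)
print(round(trace[-1], 3))        # converges near the 10 mJ/pulse target
```

With a gain below 1 the residual error shrinks geometrically each pulse, which is the qualitative behavior a per-pulse energy servo aims for.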
  • the energy controller 16d also changes the oscillation frequency by controlling the energy supplied to the laser resonator 16a via the high-voltage power supply 16e. Outside the beam splitter 16b, a shutter 16f for shielding the laser beam LB according to control information from the main controller 50 is also arranged.
  • FIG. 3A schematically shows an example of the MN-based semiconductor light receiving element.
  • the MN-based semiconductor light-receiving element 17 shown in FIG. 3 (A) has a crystal substrate 1, a p-type crystal layer (p-contact layer) S1 and an n-type crystal layer (n-contact layer) S2 sequentially laminated on the crystal substrate 1 via a buffer layer 1a to form a light receiving portion S, a (+) electrode (n-type side electrode) Q1 provided on the n-type crystal layer S2 constituting the light receiving portion S, and a (−) electrode (p-type side electrode) Q2 provided on the p-type crystal layer S1. The p-type crystal layer S1 and the n-type crystal layer S2 are grown sequentially on the crystal substrate 1 to form a pn junction structure. A part of the upper surface of the p-type crystal layer S1 is an exposed surface, and the (−) electrode Q2 is disposed on this exposed surface. All the crystal layers constituting the light receiving portion S are formed from MN-based materials.
  • the material of the crystal substrate 1 may be any material on which an MN-based material can be grown, for example, sapphire, quartz, SiC, and the like. Among them, a GaN single crystal substrate, the C-plane or A-plane of sapphire, and a 6H-SiC substrate are preferable, and a C-plane sapphire substrate is particularly preferable. Further, when growing an MN-based crystal layer on the crystal substrate 1, a buffer layer 1a formed of an MN-based material may be interposed, but the buffer layer is not indispensable. As the buffer layer 1a, for example, GaN, AlN, and the like are suitable.
  • the electrodes Q1 and Q2 are both ohmic electrodes, in which the metal-semiconductor contact resistance is almost negligible. Examples of the material of such an ohmic electrode include Al/Ti, Au/Ti, Ti, and the like; these materials may also be combined.
  • the electrode Q1 in particular must be balanced so that the light L to be received can enter the layer S2 while a sufficient area as an electrode is secured. When an opaque electrode formed of the above materials is used, the area occupied by the electrode on the upper surface of the layer S2 is taken into consideration, or alternatively an electrode transparent to the light L to be received is used. In terms of sensitivity, the electrode Q1 should have its occupied area or transmissivity determined such that more than 5% of the light L illuminating the entire upper surface of the layer S2 can enter the layer S2.
  • Examples of the material of the transparent electrode include Au (10 nm) / Ni (5 nm).
  • for example, a buffer layer 1a of GaN is grown on the substrate 1 to a thickness of 100 angstroms, and then an Mg-doped i-type GaN layer with a thickness of 2 μm is grown on the buffer layer 1a at 1,000 °C.
  • the sapphire substrate is used as the substrate 1 because, as is well known, sapphire is extremely stable against heat, has sufficient hardness, and has a compatible crystal system, which makes it optimal as a substrate for growing the nitride compound semiconductors used for light-receiving elements.
  • the buffer layer 1a is made of GaN because giving the buffer layer 1a grown on the sapphire substrate 1 the same composition as the p-contact layer S1 grown on it, as described later, improves the crystallinity of the p-contact layer S1. Therefore, when the p-contact layer S1 is made of AlN, the buffer layer 1a is also preferably made of AlN.
  • next, the substrate 1 is transferred to an annealing device and annealed at 450 °C to convert the Mg-doped GaN layer into low-resistance p-type GaN, whereby a p-contact layer S1 is formed on the buffer layer 1a. Annealing at a temperature of 400 °C or more makes it possible to obtain the p-type layer.
  • an n-contact layer S2 formed of a Si-doped n-type GaAlN layer is grown on the p-contact layer S1 at 1,000 ° C.
  • a crystal made of an MN-based material, such as the GaAlN layer, can be converted to n-type by doping it with donor impurities such as Si, Ge, Sn, Sb, and Cd.
  • then, a mask is formed on the surface of the n-type GaAlN layer S2, and a part of the layer S2 is etched away to expose the p-type GaN layer S1; the positive electrode Q1 is formed on the n-type GaAlN layer S2 and the negative electrode Q2 is formed on the exposed p-type GaN layer S1, using the electrode materials described above.
  • since the MN-based semiconductor light receiving element 17 has excellent resistance to ultraviolet light, it is particularly effective to select ultraviolet light, such as light with a wavelength of 193 nm (ArF excimer laser light) or F2 laser light, as the light to be received.
  • because the MN-based semiconductor light-receiving element 17 of the present embodiment uses an MN-based material and has good linearity, using ultraviolet light of such a short wavelength as the light to be received has the advantage that the frequency of replacement and the like can be reduced compared with conventional PDs. Note that the element need not be limited to the type shown in FIG. 3 (A); for example, the Schottky barrier type MN-based semiconductor light-receiving element shown in FIG. 3 (B) may be used.
  • the MN-based semiconductor light-receiving element 17′ shown in FIG. 3 (B) has a sapphire substrate 1 and a buffer layer 1a on the substrate 1, and a depletion layer is formed at the location indicated by the symbol c.
  • since the MN-based semiconductor light receiving element 17′ is manufactured by the same procedure as the element 17, a detailed description is omitted.
  • the i-type GaN crystal layer S4 can be obtained by not doping impurities.
  • in the illustrated examples, the light L to be received is irradiated from the side opposite to the sapphire substrate 1; however, since the sapphire substrate is transparent to ultraviolet and blue light and transmits it well, the light L may instead be irradiated from the substrate 1 side.
  • the above-described MN-based semiconductor light receiving element 17 (or 17′) is housed in a package, for example, as shown in FIG. 4. The optical sensor 2 shown in FIG. 4 uses a so-called hermetically sealed container, in which the MN-based semiconductor light receiving element 17 is mounted on a stem P1 provided with terminals 3 and sealed with a cap P2.
  • the cap P2 is provided with a window 4 for allowing external light to enter the internal MN-based semiconductor light-receiving element 17, similarly to a known light-receiving element package.
  • a window plate 5 made of fluorite or synthetic quartz, which transmits the light to be received, is mounted inside the window 4, and on the inner surface of the window plate 5, a short-wavelength transmission filter F that transmits only light having a wavelength shorter than 350 nm is disposed. In this way, light other than the light to be received can be cut.
  • when the sensor is used in an exposure apparatus, it is not always necessary to provide the filter F, since the sensor is usually used under conditions where only the light to be received is irradiated.
  • in the present embodiment, the ArF excimer laser beam L1 having a wavelength of 193 nm passes through the window plate 5 and the filter F, and is detected with high sensitivity by the MN-based semiconductor light receiving element 17.
  • the illumination optical system 12 includes a beam shaping optical system 18, an energy rough adjuster 20, a fly-eye lens 22 as an optical integrator (homogenizer), an illumination system aperture stop plate 24, a beam splitter 26, a first relay lens 28A, a second relay lens 28B, a fixed reticle blind 30A, a movable reticle blind 30B, a mirror M for bending the optical path, a condenser lens 32, and the like.
  • the beam shaping optical system 18 is connected to a beam matching unit (or relay optical system), not shown, via a light transmission window 13 provided in the chamber 11 (more precisely, in the lens barrel of the illumination optical system 12 or one end surface of a housing that houses it).
  • the beam shaping optical system 18 shapes the cross-section of the laser beam LB pulse-emitted by the light source 16, using a cylinder lens or a beam expander (both not shown), so that the beam efficiently enters the fly-eye lens 22 provided behind it on the optical path of the laser beam LB.
  • the energy rough adjuster 20 has a rotating plate 34 on which a plurality of ND filters with different transmittances (for example, six ND filters; two of them, 36A and 36D, are shown in FIG. 1) are arranged, and the rotating plate 34 is rotated by a drive motor 38.
  • the drive motor 38 is controlled by a main controller 50 described later. A rotating plate similar to the rotating plate 34 may also be arranged in two stages, so that finer transmittance adjustment can be performed by combining the two sets of ND filters.
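Because the two rotating plates sit in series in the beam, their transmittances multiply, which is why a second stage yields a much finer set of selectable attenuation steps than a single plate. A minimal sketch, with hypothetical filter transmittances (not values from the disclosure):

```python
from itertools import product

# Hypothetical transmittances of the six ND filters on each rotating plate
plate1 = [1.00, 0.80, 0.60, 0.40, 0.20, 0.10]   # coarse steps
plate2 = [1.00, 0.95, 0.90, 0.85, 0.80, 0.75]   # fine steps

# The plates are in series, so combined transmittance is the product;
# two stages give many more distinct attenuation levels than one.
steps = sorted({round(t1 * t2, 4) for t1, t2 in product(plate1, plate2)})
print(len(steps), min(steps), max(steps))
```

The main controller could then pick the combined setting closest to the transmittance required for a given target dose.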
  • the fly-eye lens 22 is arranged on the optical path of the laser beam LB emitted from the energy rough adjuster 20, and forms a secondary light source, i.e. a surface light source composed of a large number of light source images, for illuminating the reticle R with a uniform illuminance distribution.
  • the laser beam emitted from the secondary light source is also referred to as "exposure light IL" in this specification.
  • An illumination system aperture stop plate 24 formed of a disc-shaped member is arranged near the exit surface of the fly-eye lens 22.
  • the illumination system aperture stop plate 24 has, at equal angular intervals, for example, an aperture stop formed of a normal circular aperture, an aperture stop formed of a small circular aperture for reducing the σ value (coherence factor), an annular aperture stop for annular (zonal) illumination, and a modified aperture stop with a plurality of eccentrically arranged apertures for the modified light source method (only two of these types are shown in FIG. 1).
  • the illumination system aperture stop plate 24 is configured to be rotated by a drive device 40 such as a motor controlled by the main control device 50, so that one of the aperture stops is selectively set on the optical path of the exposure light IL.
  • on the optical path of the exposure light IL emitted from the illumination system aperture stop plate 24, a beam splitter 26 having a small reflectance and a large transmittance is arranged, and further behind it on the optical path, a relay optical system formed of the first relay lens 28A and the second relay lens 28B, with the fixed reticle blind 30A and the movable reticle blind 30B interposed between them, is arranged.
  • the fixed reticle blind 30A is arranged on a plane slightly defocused from the plane conjugate to the pattern surface of the reticle R, and has a rectangular opening formed in it that defines the illumination area 42R on the reticle R.
  • near the fixed reticle blind 30A, a movable reticle blind 30B having an opening whose position and width in the scanning direction are variable is arranged; at the start and end of scanning exposure, the illumination area 42R is further limited via the movable reticle blind 30B, so that exposure of unnecessary portions is prevented.
  • the movable reticle blind 30B is also used for setting an illumination area when detecting an aerial image by aerial image measurement described later.
  • on the optical path of the exposure light IL behind the second relay lens 28B constituting the relay optical system, a bending mirror M that reflects the exposure light IL passing through the second relay lens 28B toward the reticle R is placed, and on the optical path of the exposure light IL behind this mirror, the condenser lens 32 is arranged. Further, on the optical path vertically bent by the beam splitter 26 in the illumination optical system 12 and on the other bent optical path, an integrator sensor 46 and a reflected light monitor 47 are arranged as the second and third optical sensors, respectively.
  • in the present embodiment, the optical sensor 2 having the above-described MN-based semiconductor light receiving element 17 is used as the integrator sensor 46 and the reflected light monitor 47.
  • the integrator sensor 46 and the reflected light monitor 47 have high sensitivity in the far-ultraviolet and vacuum-ultraviolet regions, and have a response frequency high enough to detect the pulsed emission of the light source 16.
  • the operation of the illumination system 12 configured as described above will be briefly described.
  • the laser beam LB pulse-emitted from the light source 16 enters the beam shaping optical system 18, where its cross-sectional shape is shaped so that it will efficiently enter the fly-eye lens 22 behind it, and then it enters the energy rough adjuster 20.
  • the laser beam LB transmitted through any one of the ND filters of the energy rough adjuster 20 enters the fly-eye lens 22.
  • a secondary light source is formed on the exit-side focal plane of the fly-eye lens 22 (the pupil plane of the illumination optical system 12).
  • the exposure light IL emitted from the secondary light source passes through one of the aperture stops on the illumination system aperture stop plate 24 and then reaches the beam splitter 26 having a large transmittance and a small reflectance.
  • the exposure light IL transmitted through the beam splitter 26 passes through the first relay lens 28A, the rectangular opening of the fixed reticle blind 30A and the movable reticle blind 30B, and the second relay lens 28B; its optical path is then bent vertically downward by the mirror M, and through the condenser lens 32 it illuminates the rectangular illumination area 42R on the reticle R held on the reticle stage RST with a uniform illuminance distribution.
  • the exposure light IL reflected by the beam splitter 26 is received by the integrator sensor 46 via the condenser lens 44, and the photoelectric conversion signal of the integrator sensor 46 is supplied to the main controller 50 as output DS (digit/pulse) via a peak hold circuit (not shown) and an A/D converter.
  • the correlation coefficient between the output DS of the integrator sensor 46 and the illuminance (exposure dose) of the exposure light IL on the surface of the wafer W is obtained in advance, as described later, and is stored in the memory 51 attached to the main controller 50.
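The stored correlation coefficient is what turns the integrator-sensor output into an estimated dose at the wafer surface. A minimal sketch of such a conversion follows; the class name, the coefficient, and all numbers are invented for illustration and are not taken from the disclosure.

```python
class IntegratorCalibration:
    """Convert the integrator-sensor output DS [digit/pulse] into an
    estimated exposure dose at the wafer surface, using a correlation
    coefficient obtained beforehand (all values here are made up)."""

    def __init__(self, mj_per_digit):
        # corresponds to the coefficient the text says is kept in memory 51
        self.mj_per_digit = mj_per_digit

    def dose_mj_cm2(self, ds_per_pulse, n_pulses, area_cm2):
        energy_mj = ds_per_pulse * self.mj_per_digit * n_pulses
        return energy_mj / area_cm2

cal = IntegratorCalibration(mj_per_digit=0.004)
# 1250 digits/pulse over 40 pulses into a 0.5 cm^2 slit area
dose = cal.dose_mj_cm2(1250, 40, 0.5)
print(dose)
```

Such a conversion lets the main controller track the accumulated dose pulse by pulse without a sensor at the wafer itself.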
  • the reflected light flux that has illuminated the illumination area 42R on the reticle R and been reflected by the pattern surface of the reticle travels back through the condenser lens 32 and the relay optical system in the opposite direction, is reflected by the beam splitter 26, and is received by the reflected light monitor 47 via the condenser lens 48.
  • when the Z tilt stage 58 is located below the projection optical system PL, the exposure light IL transmitted through the pattern surface of the reticle reaches, via the projection optical system PL, the surface of the wafer W (or a reference mark plate described later) and is reflected there; the reflected light flux passes back through the projection optical system PL, the reticle R, the condenser lens 32, and the relay optical system in the opposite direction, is reflected by the beam splitter 26, and is received by the reflected light monitor 47 via the condenser lens 48.
  • an anti-reflection film is formed on the surface of each optical element disposed between the beam splitter 26 and the wafer W, but the exposure light IL slightly reflected at those surfaces is also received by the reflected light monitor 47.
  • the photoelectric conversion signal of the reflected light monitor 47 is supplied to the main controller 50 via a peak hold circuit (not shown) and an A / D converter.
  • in the present embodiment, the reflected light monitor 47 is mainly used for measuring the reflectance of the wafer W and the like, as described later.
  • the reflected light monitor 47 may be used for pre-measurement of the transmittance of the reticle R.
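One plausible way to use the reflected light monitor for reflectance measurement is to ratio its output against the simultaneous integrator-sensor output after calibrating on a surface of known reflectance. The sketch below assumes exactly that scheme; the calibration factor, readings, and function name are invented for illustration and are not stated in the disclosure.

```python
def estimate_reflectance(monitor_out, integrator_out, k_cal):
    """Estimate a surface reflectance from simultaneous outputs of the
    reflected-light monitor and the integrator sensor.  k_cal is a
    hypothetical calibration factor (e.g. obtained with a reference
    surface of known reflectance); fixed optics losses fold into it."""
    return k_cal * monitor_out / integrator_out

# Calibrate on a reference surface taken as 100% reflectance (made-up data):
k = 1.0 / (820 / 1000)                      # monitor 820, integrator 1000
r_wafer = estimate_reflectance(451, 1000, k)
print(round(r_wafer, 2))
```

Ratioing against the integrator sensor cancels pulse-to-pulse energy fluctuation, which is why both sensors would be read for the same pulses.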
  • a reticle R is placed on the reticle stage RST, and is held by suction via a vacuum chuck (not shown).
  • the reticle stage RST can be finely driven in the horizontal plane (XY plane), and can be scanned within a predetermined stroke range in the scanning direction (here, the Y direction, which is the lateral direction in FIG. 1) by a reticle stage drive unit 49.
  • the position and the rotation amount of the reticle stage RST during this scanning are measured by an external laser interferometer 54R via a movable mirror 52R fixed on the reticle stage RST, and the measurement values of the laser interferometer 54R are supplied to the main control device 50.
  • the material used for the reticle R needs to be chosen according to the light source used. That is, when a KrF excimer laser light source or an ArF excimer laser light source is used as the light source, synthetic quartz can be used; when an F2 laser light source is used, the reticle must be made of fluorite, fluorine-doped synthetic quartz, or the like.
  • the projection optical system PL is, for example, a reduction system telecentric on both sides, and includes a plurality of lens elements 70a, 70b, and so on, having a common optical axis in the Z-axis direction.
  • as the projection optical system PL, one having a projection magnification of, for example, 1/4, 1/5, or 1/6 is used. Therefore, as described above, when the illumination area 42R on the reticle R is illuminated by the exposure light IL, the pattern formed on the reticle R is reduced by the projection optical system PL at that projection magnification and projected and transferred onto a slit-like exposure area 42W on the wafer W, whose surface is coated with a resist (photosensitive agent).
  • a plurality of lens elements can be independently moved.
  • the lens element 70a closest to the reticle stage RST is held by a ring-shaped support member 72, which is supported at three points by telescopic drive elements 74a, 74b, and 74c such as piezo elements (the drive element 74c on the far side of the drawing is not shown) and connected to the lens barrel 76.
  • the driving elements 74a, 74b, and 74c allow the three peripheral points of the lens element 70a to be moved independently in the direction of the optical axis AX of the projection optical system PL.
  • that is, the lens element 70a can be translated along the optical axis AX according to the displacements of the driving elements 74a, 74b, and 74c, and can also be tilted arbitrarily with respect to the plane perpendicular to the optical axis AX.
  • the voltage applied to these drive elements 74a, 74b, and 74c is controlled by the imaging characteristic correction controller 78 based on a command from the main control device 50, whereby The displacement amounts of the driving elements 74a, 74b, and 74c are controlled.
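The decomposition implied here, in which three independent actuator strokes yield one translation along the optical axis plus two tilt components, can be written out for an assumed layout of the three drive points at 120° on a circle. The geometry and all numbers below are illustrative assumptions, not taken from the disclosure.

```python
import math

def piston_and_tilt(z1, z2, z3, radius=1.0):
    """Decompose three drive-point displacements (points assumed at 120 deg
    on a circle of the given radius) into a translation along the optical
    axis (piston = mean) and tilt slopes about X and Y.  Three points
    define the fitted plane z = p + a*x + b*y exactly."""
    angles = [90.0, 210.0, 330.0]        # assumed actuator layout [deg]
    pts = [(radius * math.cos(math.radians(t)),
            radius * math.sin(math.radians(t))) for t in angles]
    p = (z1 + z2 + z3) / 3.0             # pure translation along AX
    zs = [z1 - p, z2 - p, z3 - p]
    # With this symmetric layout the x/y moments are orthogonal, so the
    # plane slopes follow directly from the residuals:
    a = sum(z * x for z, (x, y) in zip(zs, pts)) / sum(x * x for x, y in pts)
    b = sum(z * y for z, (x, y) in zip(zs, pts)) / sum(y * y for x, y in pts)
    return p, a, b

p, a, b = piston_and_tilt(1.0, 1.0, 1.0)
print(p, round(a, 9), round(b, 9))       # equal strokes: pure piston, no tilt
```

Driving all three elements equally translates the lens element; unequal strokes add the tilt used for distortion-type corrections.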
  • note that the optical axis AX of the projection optical system PL refers to the common optical axis of the lens element 70b and the other lens elements (not shown) that are fixed to the lens barrel 76.
  • the relationship between the vertical movement of the lens element 70a and the resulting change in magnification (or distortion) is obtained in advance by experiment, and is stored in a memory inside the main controller 50.
  • at the time of correction, the main controller 50 calculates the vertical movement of the lens element 70a from the magnification (or distortion) to be corrected and instructs the imaging characteristic correction controller 78 to drive the drive elements, whereby magnification (or distortion) correction is performed.
  • the relationship between the vertical movement of the lens element 70a and the amount of change in magnification or the like may instead be an optically calculated value.
  • in the present embodiment, the lens element 70a closest to the reticle R is made movable because the element 70a has a greater effect on magnification and distortion characteristics than the other lens elements and is easier to control; however, any lens element that satisfies the same condition may be made movable for adjusting the lens interval instead of the lens element 70a. It should be noted that at least one lens element other than the lens element 70a may also be made movable to adjust other optical properties, for example, field curvature, astigmatism, coma, or spherical aberration.
  • alternatively, a sealed chamber may be provided between specific lens elements near the center of the projection optical system PL in the optical axis direction, and an imaging characteristic correction mechanism that adjusts the magnification of the projection optical system PL by adjusting the gas pressure in the sealed chamber with a pressure adjusting mechanism such as a bellows pump may be provided.
  • further, an aspherical lens may be used as a part of the lens elements constituting the projection optical system PL and rotated; in this case, so-called rhombic distortion can be corrected.
  • a parallel plane plate may be provided in the projection optical system PL, and the imaging characteristic correction mechanism may be configured by a mechanism that tilts or rotates the plate.
  • the XY stage 14 is two-dimensionally driven by a wafer stage drive unit 56 in the Y direction, which is the scanning direction, and the X direction, which is orthogonal to the scanning direction (the direction perpendicular to the plane of FIG. 1).
  • a wafer W is held on a Z tilt stage 58 mounted on the XY stage 14 by vacuum suction or the like via a wafer holder 61 (not shown in FIG. 1, see FIG. 5).
  • the Z tilt stage 58 has the functions of adjusting the position (focus position) of the wafer W in the Z direction by, for example, three actuators (piezo elements or voice coil motors), and of adjusting the tilt angle of the wafer W with respect to the XY plane.
  • the position of the XY stage 14 is measured by an external laser interferometer 54W through a movable mirror 52W fixed on the Z tilt stage 58, and the measurement value of the laser interferometer 54W is supplied to the main controller 50.
  • as shown in FIG. 5, the movable mirror actually consists of an X movable mirror 52Wx having a reflecting surface perpendicular to the X axis and a Y movable mirror 52Wy having a reflecting surface perpendicular to the Y axis. Correspondingly, laser interferometers are provided for X-axis position measurement, Y-axis position measurement, and rotation measurement (including the amounts of yawing, pitching, and rolling); in FIG. 1, these are representatively shown as the movable mirror 52W and the laser interferometer 54W.
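As a side note on how the rotation amount can be derived from interferometer readings: with two parallel measurement beams hitting the same movable mirror a known distance apart, the yaw angle follows from the difference of the two readings. The sketch below assumes that common arrangement; the function name, axis assignment, and numbers are illustrative, not from the disclosure.

```python
import math

def stage_yaw(read_a, read_b, beam_separation):
    """Estimate stage rotation (yaw) from two parallel interferometer
    beams reflecting off the same movable mirror a known distance apart.
    For the tiny angles involved, yaw ~= reading difference / separation."""
    return math.atan2(read_a - read_b, beam_separation)

# Two beams 100 mm apart whose readings differ by 0.5 um (made-up values)
yaw = stage_yaw(0.0005, 0.0, 100.0)      # readings and separation in mm
print(round(yaw * 1e6, 3))               # yaw in microradians
```

The same differencing applies to pitching and rolling with beams separated vertically instead of horizontally.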
  • FIG. 5 is a schematic plan view of the Z tilt stage 58. As shown in FIG.
  • at the first corner of the Z tilt stage 58, at the end in the +Y direction and the end in the X direction, an irradiation amount monitor 59A and an unevenness sensor 59B are arranged side by side in the Y direction.
  • an aerial image measuring instrument 59 C is disposed at a second corner of the Z tilt stage 58 at the + Y direction end and the + X direction end.
  • the irradiation amount monitor 59A has a housing that is rectangular in plan view, extends in the X direction, and is slightly larger than the exposure area 42W; in the central portion of this housing, a slit-like opening 59d having substantially the same shape as the exposure area 42W is formed.
  • This opening 59d is actually formed by removing a part of a light-shielding film formed on the upper surface of a light-receiving glass formed of synthetic quartz or the like forming the ceiling surface of the housing.
  • the irradiation amount monitor 59 A is used for measuring the intensity of the exposure light IL applied to the exposure area 42 W.
  • the unevenness sensor 59B has a housing having a substantially square shape in a plan view, and a pinhole-shaped opening 59e is formed in the center of the housing.
  • the opening 59e is actually formed by removing a part of a light shielding film formed on the upper surface of a light receiving glass formed of synthetic quartz or the like forming the ceiling surface of the housing.
  • the unevenness sensor 59B is also used for measuring the transmittance of the projection optical system PL, as described later.
  • the aerial image measuring instrument 59C has a housing that is substantially square in plan view, and a rectangular opening 59f is formed in a reflection film formed on the upper surface of a light-receiving glass, made of synthetic quartz or the like, that forms the ceiling surface of the housing.
  • FIG. 6A shows an enlarged view of the vicinity of the Z tilt stage 58 in FIG. 1 including the aerial image measuring instrument 59 C.
  • a protruding portion 58a with an open top is provided on the upper surface of one end of the Z tilt stage 58, and the light receiving glass 82 is fitted so as to close the opening of the protruding portion 58a.
  • a reflection film 83 also serving as a light-shielding film is formed on the upper surface of the light-receiving glass 82.
  • the shaded area indicates the reflection surface formed from the reflection film.
  • in the present embodiment, the reflecting surface also serves as a reference reflecting surface at the time of calibration of a focus sensor described later. As shown in FIG. 6 (A), inside the Z tilt stage 58 there are arranged a light receiving optical system, formed of a relay optical system consisting of lenses 84 and 86 together with a bending mirror that bends the optical path of the illumination light flux (image light flux) relayed over a predetermined optical path length by the relay optical system (84, 86), and the optical sensor 2 having the MN-based semiconductor light receiving element 17 described above. In the aerial image measuring instrument 59C, when a projected image of a measurement pattern formed on the reticle R (described later) is detected via the projection optical system PL, the exposure light IL that has passed through the measurement pattern and the projection optical system PL illuminates the light-receiving glass 82, and the exposure light IL transmitted through the opening 59f on the light-receiving glass 82 reaches, via the light receiving optical system, the MN-based semiconductor light-receiving element 17 constituting the optical sensor 2.
  • the MN-based semiconductor light receiving element 17 Upon arrival, the MN-based semiconductor light receiving element 17 performs photoelectric conversion and outputs a light quantity signal P corresponding to the received light quantity to the main controller 50.
• The optical sensor 2 does not necessarily need to be provided inside the Z tilt stage 58; the optical sensor 2 may instead be arranged outside the Z tilt stage 58, with the illumination light flux relayed by the relay optical system guided to it through an optical fiber or the like.
• The irradiation amount monitor 59A and the unevenness sensor 59B have the same configuration as the above-mentioned aerial image measuring instrument 59C except for the shape of the opening (opening pattern), and the light amount signal corresponding to the received light amount of the MN-based semiconductor light receiving element 17 constituting the irradiation amount monitor 59A and the unevenness sensor 59B is likewise supplied to the main controller 50.
  • a fiducial mark plate FM used for performing a reticle alignment described later is provided on the Z tilt stage 58.
• The fiducial mark plate FM has a surface at almost the same height as the surface of the wafer W and, as shown in the plan view of FIG. 5, is provided at part of the third corner at one end of the Z tilt stage 58 in the Y direction. Reference marks, such as a reticle alignment reference mark and a baseline measurement reference mark, are formed on the surface of the fiducial mark plate FM (this will be described later).
  • the exposure apparatus 10 includes a reticle alignment system for performing the reticle alignment.
• FIG. 7 is a schematic side view of the condenser lens 32, the reticle R, the projection optical system PL, the Z tilt stage 58, the XY stage 14, and the like of FIG. 1, viewed in the +Y direction.
  • reticle stage RST is not shown.
  • four pairs of reticle marks each formed of, for example, a cross-shaped two-dimensional mark are formed so as to sandwich the pattern area of the pattern surface of the reticle R in the X direction.
• On the fiducial mark plate FM on the Z tilt stage 58, four pairs of fiducial marks are formed in an array obtained by reducing the array of the four pairs of reticle marks by the projection magnification.
  • FIG. 8 is an enlarged plan view showing a state in which a part of the projected image RP of the reticle R and the fiducial mark plate FM in FIG. 7 are overlapped.
• A first pair of fiducial marks 114A, 114E, a second pair of fiducial marks 114B, 114F, a third pair of fiducial marks 114C, 114G, and a fourth pair of fiducial marks 114D, 114H are formed.
• An image PAP of the pattern area is projected onto the center of the projected image RP of the reticle R, and on both sides of this image PAP, at predetermined intervals in the Y direction, a first pair of reticle mark images 113AP, 113EP through a fourth pair of reticle mark images 113DP, 113HP are projected. Note that FIG. 8 is drawn this way for the sake of simplicity; in a scanning exposure apparatus such as that of the present embodiment, not all reticle mark images are actually projected simultaneously, nor, of course, is the entire pattern area projected at once. In FIG. 7, the reticle marks 113D and 113H corresponding to the pair of reference marks 114D, 114H and to the reticle mark images 113DP, 113HP of FIG. 8 appear.
• At the time of reticle alignment, the reticle stage RST and the XY stage 14 are driven by the main controller 50 via the reticle stage drive unit 49 and the wafer stage drive unit 56, and, as shown in FIG. 8, the relative position between the reticle R and the Z tilt stage 58 is set so that the reference marks 114D, 114H on the fiducial mark plate FM fall within the rectangular exposure area 42W and almost overlap the reticle mark images 113DP and 113HP.
  • a pair of half prisms 101 R, 101 L of the reticle alignment system 100 is provided on the optical path of the illumination light ILR, ILL from the condenser lens 32 toward the reticle R.
• At times other than reticle alignment, the half prisms 101R and 101L are retracted outside the optical path. One illumination light ILR transmitted through the condenser lens 32 passes through the half prism 101R and irradiates the reticle mark 113H on the reticle R, and the illumination light reflected by the reticle mark 113H returns to the half prism 101R. The illumination light ILR that has passed around the reticle mark 113H illuminates the reference mark 114H on the fiducial mark plate FM via the projection optical system PL, and the light reflected from the reference mark 114H
  • the light returns to the half prism 101 R via the projection optical system PL and the reticle R.
• The reflected light from the reticle mark 113H and the reference mark 114H is reflected by the half prism 101R and then passes through the relay lenses 102R and 103R, so that images of the reticle mark 113H and the reference mark 114H are formed on the imaging surface of the two-dimensional image sensor 104R.
• The imaging signal of the image sensor 104R is supplied to the main controller 50, which processes the image signal and calculates the amounts of displacement, in the X and Y directions, of the projected image of the reticle mark 113H with respect to the reference mark 114H.
  • relay lenses 102L, 103L, and an image sensor 104L are provided on the half prism 101L side on which the other illumination light ILL transmitted through the condenser lens 32 is incident.
• The imaging signal of the image sensor 104L is also supplied to the main controller 50, and the main controller 50 uses that image signal to calculate the amounts of displacement, in the X and Y directions, of the projected image of the reticle mark 113D with respect to the reference mark 114D.
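The patent does not specify how the displacement is computed from the two-dimensional image signal; as an illustrative sketch only (function names and the centroid method are assumptions, not from the source), an intensity-weighted centroid comparison could look like this:

```python
def centroid(image):
    """Intensity-weighted centroid (col, row) of a 2-D image given as nested lists."""
    total = sx = sy = 0.0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            total += v
            sx += c * v
            sy += r * v
    return sx / total, sy / total

def mark_displacement(mark_image, ref_image, pixel_pitch):
    """Displacement (dx, dy) of the mark-image centroid relative to the
    reference-mark image, converted to physical units by the pixel pitch."""
    mx, my = centroid(mark_image)
    rx, ry = centroid(ref_image)
    return (mx - rx) * pixel_pitch, (my - ry) * pixel_pitch
```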
• A two-dimensional image sensor having a light receiving section formed from a large number of the aforementioned MN-based semiconductor light receiving elements 17 is used as each of the image sensors 104R and 104L.
• In the image sensor 104R, the MN-based semiconductor light receiving elements 17 are arranged in a matrix, with the horizontal direction of the drawing as the row direction and the vertical direction as the column direction, and wiring is made so that the switch elements 301 corresponding to the MN-based semiconductor light receiving elements 17 arranged in each row are opened and closed simultaneously.
• Each MN-based semiconductor light receiving element 17 is represented by a diode symbol, and the switch elements 301 and 305 are represented by FET symbols.
• FIG. 9 shows an example in which the MN-based semiconductor light receiving elements 17 are arranged in a matrix of 5 rows and 5 columns, but the arrangement of the MN-based semiconductor light receiving elements 17 is arbitrary; the form of the array can be determined according to the desired imaging resolution and imaging range.
• The image sensor 104R further includes a vertical shift register 309 that controls the opening and closing of the switch elements 301, and a horizontal shift register 311 that controls the opening and closing of the switch elements 305.
• The vertical shift register 309 selects from the first row to the last row in order, in synchronization with the supplied vertical clock signal, and closes the switch elements 301 corresponding to the selected row.
• As a result, the signal output terminals of the MN-based semiconductor light receiving elements 17 arranged in the selected row are connected to the vertical signal lines 303 corresponding to the respective elements.
• The horizontal shift register 311 selects from the first column to the last column in order, in synchronization with the supplied horizontal clock signal, closes the switch element 305 corresponding to the selected column, and thereby sequentially connects the signal output terminals of the MN-based semiconductor light receiving elements 17 in the selected row to the image signal output line. In this way, the signal output terminal of each MN-based semiconductor light receiving element 17 is sequentially connected to the image signal output terminal 313, so that the signal charge photoelectrically converted and stored in each MN-based semiconductor light receiving element 17 is guided through the image signal output terminal 313, at the same timing as a signal obtained using a CCD element, to, for example, a low-noise amplifier provided outside.
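The row-by-row, column-by-column readout described above can be sketched as a toy model of the shift-register sequencing (an illustrative simplification, not the actual circuit behavior):

```python
def read_out(pixel_charges):
    """Toy model of the matrix readout: the vertical shift register selects
    rows one at a time (switch elements 301), and for each selected row the
    horizontal shift register connects the columns in order (switch elements
    305), so the stored charges appear at the image signal output terminal
    in raster order."""
    output = []
    for row in pixel_charges:      # vertical shift register: next row
        for charge in row:         # horizontal shift register: next column
            output.append(charge)  # charge routed to the output terminal
    return output
```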
  • the image sensor 104L has the same configuration as the image sensor 104R described above.
• A reticle alignment system 100 is thus configured from the half prisms 101R, 101L, the relay lenses 102R, 103R, the relay lenses 102L, 103L, and the image sensors 104R, 104L.
• The reticle alignment system has a "fine mode" for highly accurate measurement of misalignment, used when the reticle is replaced or when misalignment occurs because the reticle is thermally deformed by irradiation with the illumination light, and a "quick mode" for confirming the position of the reticle before or after wafer exchange.
• The alignment in quick mode is also called "interval alignment".
• In the former fine mode, after the positional displacements of the images of the pair of reticle marks 113D and 113H are measured in the state shown in FIG. 8, the main controller 50 moves the fiducial mark plate FM and the reticle R synchronously in the Y direction at the projection magnification ratio, via the reticle stage drive unit 49 and the wafer stage drive unit 56 respectively, and measures the amounts of misalignment of the reticle mark images 113CP, 113GP to 113AP, 113EP corresponding to the other three pairs of reference marks 114C, 114G to 114A, 114E in FIG. 8. Then, the main controller 50 determines, based on the displacements of these four pairs of reticle marks, the offset, rotation angle, and the like of the displacement of the projected image of the reticle R with respect to the fiducial mark plate FM and the Z tilt stage 58.
• Based on these, the main controller 50 corrects the imaging characteristics of the projection optical system PL via the imaging characteristic correction controller 78 and corrects the scanning direction of the reticle R during scanning exposure.
• The main controller 50 also measures the baseline amount of a wafer-side alignment sensor (not shown) provided on the side surface of the projection optical system PL. That is, as shown in FIG. 8, a reference mark Wm for baseline measurement is formed on the fiducial mark plate FM in a predetermined positional relationship with the reference marks 114D, 114H, and the like. By detecting the reference mark Wm via the wafer-side alignment sensor, the baseline amount of the alignment sensor, that is, the relative positional relationship between the reticle projection position and the alignment sensor, is measured.
  • the exposure apparatus 10 of the present embodiment has a light source 16 whose on / off is controlled by the main controller 50, and a large number of light sources 16 directed toward the image forming plane of the projection optical system PL.
  • Irradiation optical system 60a for irradiating the imaging light flux for forming an image of the pinhole or the slit from the oblique direction to the optical axis AX, and reflection of the imaging light flux on the surface of the wafer W
  • An obliquely incident light type multi-point focal point detection system (focus sensor) comprising a light receiving optical system 60b for receiving a light beam is provided.
• By controlling the inclination, with respect to the optical axis of the reflected light flux, of a parallel plate (not shown) in the light receiving optical system 60b, an offset is given to the focus detection system (60a, 60b) in accordance with focus fluctuation of the projection optical system PL, and calibration is thereby performed. As a result, the image plane of the projection optical system PL and the surface of the wafer W are made to coincide within the range (width) of the depth of focus inside the exposure area 42W described above.
  • the detailed configuration of the multi-point focal position detection system (focus sensor) similar to that of the present embodiment is disclosed in, for example, Japanese Patent Application Laid-Open No. 6-283403.
• The main controller 50 executes auto-focus (automatic focusing) and auto-leveling by controlling the Z position of the Z tilt stage 58 through a drive system (not shown) so that the defocus becomes zero, based on a defocus signal from the light receiving optical system 60b, for example an S-curve signal.
• The reason a parallel plate is provided in the light receiving optical system 60b to give an offset to the focus detection system (60a, 60b) is that the focus changes when, for example, the lens element 70a is moved up and down for magnification correction, and that the projection optical system PL absorbs the exposure light IL, changing its imaging characteristics and the position of the image plane.
• For this purpose, the relationship between the vertical movement amount of the lens element 70a and the focus change amount is obtained in advance by experiment and stored in the memory inside the main controller 50. Note that a calculated value may be used as the relationship between the movement amount of the lens element 70a and the focus change amount.
• The control system is mainly constituted by the main controller 50 serving as the control device in FIG. 1.
• The main controller 50 is configured to include a so-called microcomputer (or workstation) comprising a CPU (central processing unit), ROM (read-only memory), RAM (random-access memory), and the like, and controls, for example, the synchronous scanning of the reticle R and the wafer W, the stepping of the wafer W, and the exposure timing so that the exposure operation is performed properly.
• Specifically, the main controller 50 controls the exposure amount during scanning exposure as described later, detects the projected image (aerial image) of a measurement mark (mark pattern) as described later, calculates the amount of change in the imaging characteristics of the projection optical system PL based on the detection result, and adjusts the imaging characteristics of the projection optical system PL via the imaging characteristic correction controller 78 based on the calculation result.
• For example, for scanning exposure, the main controller 50 moves the wafer W via the XY stage 14 relative to the exposure area 42W in the −Y direction (or +Y direction), and controls the positions and speeds of the reticle stage RST and the XY stage 14, via the reticle stage drive unit 49 and the wafer stage drive unit 56 respectively, based on the measured values of the laser interferometers 54R and 54W.
  • the main controller 50 controls the position of the XY stage 14 via the wafer stage drive unit 56 based on the measurement value of the laser interferometer 54W.
• Such calibration is needed for the purpose of so-called exposure amount matching between exposure apparatuses.
• That is, by calibrating the integrator sensor, which serves as the illuminance reference of each exposure apparatus, using a reference illuminometer common to the multiple exposure apparatuses used on the same device manufacturing line, once the exposure amount is set optimally for a resist of a certain sensitivity on one apparatus, the optimal exposure amount can be set in the same way for a resist of the same sensitivity on another apparatus.
• Next, the reference illuminometer used for calibration will be briefly described. This reference illuminometer is a type of irradiation sensor that detects the intensity of the exposure light IL on the image plane via the projection optical system PL.
• As shown in FIG. 10, the reference illuminometer 90 is separated into a sensor head section 90A and a main body data processing section (not shown), and these are connected by a cable 92. Since the reference illuminometer 90 must also be used for calibrating the integrator sensors of other exposure apparatuses, the sensor head section 90A has a compact structure so as to be easy to carry.
• The main body data processing section (not shown) is online with the control system of the exposure apparatus 10 and is configured to be able to perform data communication of illuminance and other data. Further, since the sensor head section 90A is separated from the main body data processing section as described above, it is easy to install on the Z tilt stage 58 of the exposure apparatus 10.
• The sensor head section 90A is installed at the fourth corner of the Z tilt stage 58, at the +X direction end and −Y direction end (the remaining corner where none of the sensors 59A to 59C in FIG. 5 and the fiducial mark plate FM is provided). For this purpose, a positioning bracket (not shown) is provided at a predetermined position of the fourth corner, and the sensor head section 90A can be fixed to the positioning bracket with a screw or the like. Alternatively, a magnet may be provided on the back surface of the sensor head section 90A so that the sensor head section 90A is attracted and fixed onto the Z tilt stage 58 by the magnetic force of the magnet.
• In the latter case, the sensor head section 90A is positioned at the predetermined position merely by aligning it with the positioning bracket, and is fixed by the magnetic force of the magnet. This not only makes installation of the sensor head section 90A easy, but also allows it to come free immediately if any load is applied, preventing accidents such as damage to the inside of the exposure apparatus 10 caused by the sensor head section 90A being pulled when the cable 92 catches on something.
• As shown in FIG. 11, the XY stage 14 is moved so that the center position of the sensor head section 90A of the reference illuminometer 90 is located at the predetermined center of the projection optical system PL.
  • a circle IF indicates an image field of the projection optical system PL.
• The main controller 50 executes the simultaneous illuminance measurement by the reference illuminometer 90 and the integrator sensor 46.
• Specifically, as schematically shown in FIG. 12, while the position of the sensor head section 90A is sequentially stepped in the XY two-dimensional directions within an area corresponding to the light receiving surface of the integrator sensor 46 inside the exposure area 42W, simultaneous measurement of the illuminance is performed by the reference illuminometer 90 on the stage and the integrator sensor 46 (see FIG. 12).
• The main controller 50 then obtains the average value of the measured values of the reference illuminometer 90 obtained by the m × n measurements. The above simultaneous measurement is performed over the entire illuminance adjustment range, and the average value of the measured values of the reference illuminometer 90 at each illuminance is obtained.
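The averaging of the m × n simultaneous readings, and the conversion coefficient it yields, can be sketched as follows (function and variable names are illustrative, not from the source):

```python
def calibrate_integrator(ref_readings, integrator_readings):
    """Average the m x n simultaneous readings of the reference illuminometer
    and of the integrator sensor taken at the stepped positions, and return
    (average image-plane illuminance, conversion coefficient mapping
    integrator-sensor output to image-plane illuminance)."""
    ref_vals = [v for row in ref_readings for v in row]
    int_vals = [v for row in integrator_readings for v in row]
    ref_avg = sum(ref_vals) / len(ref_vals)
    int_avg = sum(int_vals) / len(int_vals)
    return ref_avg, ref_avg / int_avg
```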
• Details of such a calibration sequence are disclosed in, for example, Japanese Patent Application Laid-Open No. Hei 10-83953. To the extent permitted by the national legislation of the designated or elected countries, that document is incorporated herein by reference. In the present embodiment, the sequence disclosed in the above publication can be used with partial modification.
• As the sensor head section 90A, an optical sensor having the MN-based semiconductor light receiving element 17 described above is used.
  • a method of calculating a change in the imaging characteristics of the projection optical system PL due to the absorption of illumination light will be described.
• First, the method of measuring the irradiation dose Q, which is a prerequisite, will be described.
• The main controller 50 adjusts the aperture stop (not shown) provided at the pupil plane of the projection optical system PL to set the numerical aperture N.A., and selectively sets an aperture stop of the illumination system aperture stop plate 24 on the optical path.
• The main controller 50 drives the XY stage 14 so that the irradiation amount monitor 59A is located directly below the projection optical system PL.
• Next, the main controller 50 causes the light source 16 to oscillate and, while moving the reticle stage RST and the XY stage 14 synchronously under the same conditions as in actual exposure, stores in the memory 51 the dose Q0 measured by the irradiation amount monitor 59A, the output I0 of the integrator sensor 46, and the corresponding synchronous movement position (scanning position). That is, the dose Q0 and the integrator sensor output I0 are stored in the memory 51 as functions of the scanning position of the reticle R.
  • Such preparatory work is executed by the main controller 50 prior to exposure.
• During actual exposure, the irradiation dose Q at that time is calculated, based on the following equation (1), from the dose Q0 and the integrator sensor output I0 stored in accordance with the scanning position of the reticle R and from the output I of the integrator sensor 46 during the exposure, and is used for the illumination light absorption calculation.
• Q = Q0 × (I / I0) … (1)
• Since the output of the integrator sensor 46 is used in this calculation, the irradiation dose can be calculated without error even when the power of the light source 16 fluctuates. In addition, since Q0 and I0 are stored as functions of the scanning position of the reticle R, the irradiation dose can be accurately calculated even when, for example, the reticle pattern is unevenly distributed in the plane.
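Equation (1) can be sketched as follows, applied per stored scanning position (names are illustrative):

```python
def dose_at_scan_position(q0, i0, i):
    """Equation (1): Q = Q0 * (I / I0). Q0 and I0 were stored for this
    scanning position during the preparatory scan; I is the integrator
    sensor output at the same position during actual exposure."""
    return q0 * (i / i0)

def dose_profile(q0_map, i0_map, i_map):
    """Apply equation (1) at every stored scanning position."""
    return {s: dose_at_scan_position(q0_map[s], i0_map[s], i_map[s])
            for s in q0_map}
```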
• In the above description, the output of the irradiation amount monitor 59A is taken under the same illumination conditions as in actual exposure as preparatory work; however, depending on the characteristics of the irradiation amount monitor, its signal may, for example, saturate.
• Next, the measurement of the wafer reflectance will be described. The main controller 50 sets the same exposure conditions (reticle R, reticle blind 30B, illumination conditions) as in actual exposure, and drives the XY stage 14 so that an installed reflector having the known reflectance RH is located directly below the projection optical system PL.
• The main controller 50 then causes the light source 16 to oscillate and, while moving the reticle stage RST and the XY stage 14 synchronously under the same conditions as in actual exposure, stores in the memory 51 the output VH0 of the reflected light monitor 47, the output IH0 of the integrator sensor 46, and the corresponding synchronous movement position (scanning position). That is, the output VH0 of the reflected light monitor 47 and the output IH0 of the integrator sensor 46 are stored in the memory 51 as functions of the scanning position of the reticle R.
• Similarly, the XY stage 14 is driven to move an installed reflector having the known reflectance RL directly below the projection optical system PL, and the outputs of the reflected light monitor 47 and the integrator sensor 46 are stored in the same manner as described above.
• In this way, the wafer reflectance can be accurately calculated even when the power of the light source 16 fluctuates.
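The source's equation (2) for the wafer reflectance Rw is not reproduced in this excerpt; as an illustrative sketch only, a two-point linear calibration between the stored readings for the RH and RL reflectors, with each monitor reading normalized by the simultaneous integrator output to cancel light-source power fluctuation, could look like this (all names hypothetical):

```python
def wafer_reflectance(v, i, vh0, ih0, vl0, il0, rh, rl):
    """Estimate the wafer reflectance Rw from the reflected light monitor
    output V and integrator sensor output I during exposure, by linear
    interpolation between the stored normalized readings for the known
    reflectors RH and RL. (Hypothetical two-point form; the patent's
    equation (2) is not reproduced in this excerpt.)"""
    n = v / i          # normalized reading on the wafer
    nh = vh0 / ih0     # normalized reading on the RH reflector
    nl = vl0 / il0     # normalized reading on the RL reflector
    return rl + (rh - rl) * (n - nl) / (nh - nl)
```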
  • a method of calculating the amount of change in the imaging characteristic due to the absorption of illumination light will be described by taking focus as an example.
• The focus change F_HEAT due to illumination light absorption of the projection optical system PL is calculated from the irradiation dose Q obtained as described above and the wafer reflectance Rw, using a model function expressed by the following equation (3):
• F_HEAT = Σ_{k=1}^{3} C_Fk × (1 + a_F × Rw) × Q × {1 − exp(−t / T_Fk)} … (3)
• where C_Fk is the focus change rate with respect to illumination light absorption, a_F is the wafer reflectance dependence, and T_Fk is the focus change time constant due to illumination light absorption.
• The model function of the above equation (3) takes the form of the sum of three first-order lag systems, with the irradiation dose as input and the focus change as output.
  • the model function may be changed depending on the amount of illumination light absorbed by the projection optical system PL and the required accuracy. For example, if the amount of illumination light absorption is relatively small, the sum of two first-order lag systems may be used, or one first-order lag system may be used. Further, if it takes time for heat conduction from the time when the projection optical system PL absorbs the illumination light to appear as a change in the imaging characteristic, a model function of a waste time system may be employed.
• The wafer reflectance dependence a_F is usually 1, but depending on the type of the projection optical system PL, for example when a glass having a high absorptance is used as the material on the side close to the wafer W, the dependence on the reflectance may be large. In this case, a_F is set to a value greater than 1. Conversely, when a glass having a smaller absorptance is employed on the side closer to the wafer W, a_F is set to a value smaller than 1.
• The focus change time constants due to illumination light absorption, the focus change rates with respect to illumination light absorption, and the wafer reflectance dependence are all determined by experiment, or may be calculated by a highly accurate thermal analysis simulation.
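The sum-of-first-order-lags behavior of equation (3) can be sketched numerically as a discrete-time simulation (function name, parameter packaging, and the update scheme are illustrative assumptions, not the patent's implementation):

```python
import math

def simulate_focus_change(doses, dt, c_f, t_f, a_f, r_w):
    """Discrete-time sketch of equation (3): the focus change F_HEAT is the
    sum of first-order lag terms driven by the irradiation dose Q, each term
    having rate C_Fk, time constant T_Fk, and a common wafer-reflectance
    factor (1 + a_F * Rw). Returns the F_HEAT value after each time step."""
    states = [0.0] * len(c_f)
    history = []
    for q in doses:
        for k in range(len(c_f)):
            target = c_f[k] * (1.0 + a_f * r_w) * q  # steady-state response
            alpha = 1.0 - math.exp(-dt / t_f[k])     # first-order lag update
            states[k] += alpha * (target - states[k])
        history.append(sum(states))
    return history
```

With a smaller absorption, the same function can be called with two or one (c_f, t_f) pair, mirroring the simplification discussed above.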
• In the same manner as for the focus, changes due to illumination light absorption can be calculated for other imaging characteristics, for example magnification, distortion, and the like.
• In the above, a model function of the sum of three first-order lag systems was used; however, for example, a single first-order lag system may be sufficient for calculating the curvature of the image plane.
• Thus, the model function for illumination light absorption may be changed for each imaging characteristic depending on the required accuracy.
• When the model function uses two first-order lag systems, or only one, it has the effect of reducing the calculation time. Next, the aerial image measurement performed at a predetermined time, for example at the time of assembly or start-up of the apparatus, will be described.
• The X mark Mx and the Y mark My are line-and-space (L/S) mark patterns each consisting of five line marks.
• The main controller 50 drives the movable reticle blind 30B and sets the opening of the movable reticle blind 30B so that only the small area including the measurement mark 90-1 portion is illuminated, and positions the reflection film 83 on the −X side of the opening 59f on the light receiving glass 82 directly below the optical axis AX of the projection optical system PL.
• The exposure light IL transmitted through the X mark Mx portion reaches the light receiving glass 82, and the projected image Mx' of the X mark Mx is formed on the reflection film 83a on the −X side of the opening pattern 59f.
  • main controller 50 moves XY stage 14 at a predetermined speed in the X direction via wafer stage drive unit 56.
  • the projected image Mx 'of the X mark Mx gradually overlaps the opening 59f from the right side.
• The main controller 50 differentiates the waveform of the light quantity signal P (actually, digital data captured at a predetermined sampling interval), as shown in FIG. 15A, with respect to the scanning direction, and calculates a differential waveform as shown in FIG. 15B. As is clear from FIG. 15B, when the front edge of the opening 59f in the scanning direction crosses the projected image Mx' of the X mark, the amount of light gradually increases, so the differential waveform appears on the positive side. Conversely, when the rear edge of the opening 59f in the scanning direction crosses the projected image Mx' of the X mark Mx, the amount of light gradually decreases, so the differential waveform appears on the negative side.
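The edge detection from the differentiated signal can be sketched as follows (a simplified model; the actual signal processing in the apparatus may differ):

```python
def edge_positions(samples, pitch):
    """Differentiate the sampled light quantity signal P with respect to the
    scan position: the positive peak marks where the front edge of opening
    59f crosses the projected image (light increasing), and the negative
    peak marks the rear edge (light decreasing). Returns both positions."""
    diff = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]
    i_rise = max(range(len(diff)), key=lambda i: diff[i])
    i_fall = min(range(len(diff)), key=lambda i: diff[i])
    return i_rise * pitch, i_fall * pitch
```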
• The main controller 50 then performs known signal processing, such as a Fourier transform, on the differential waveform shown in FIG. 15B.
• Next, the main controller 50 drives the movable reticle blind 30B to sequentially illuminate the portions of the measurement marks 90-2, 90-3, and 90-4, and detects the projected aerial images of the measurement marks 90-2, 90-3, and 90-4 in the same manner as described above.
• The main controller 50 gives the command value of the lens element drive amount for correcting the magnification change ΔMxi to the imaging characteristic correction controller 78.
• The imaging characteristic correction controller 78 drives the driving elements 74a, 74b, and 74c to correct the above-mentioned magnification change ΔMxi. This completes the measurement of the magnification change and the correction of the magnification in the non-scanning direction.
• The change in magnification in the scanning direction can be easily corrected, for example, by adjusting the scanning speed of at least one of the reticle stage RST and the XY stage 14 during scanning exposure and changing the speed ratio.
• In this way, the projected image of the measurement mark 90-n can be detected directly, so the optical performance of the projection optical system PL, including magnification and distortion, can also be measured.
• Further, if the aerial image is detected while the Z tilt stage 58 is driven in the Z direction so as to change the Z position within a predetermined range, the best focus position (or focus offset) and the depth of focus can be detected based on the contrast (the amplitude of the differential waveform data) of the aerial image. Also, the telecentricity of the projection optical system PL can be detected based on the position of the aerial image in the X direction (or Y direction) at each Z position.
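A minimal sketch of the contrast-based focus search described above (the 0.8 threshold used to bound the depth-of-focus range is an illustrative parameter, not from the source):

```python
def best_focus_and_dof(z_positions, contrasts, threshold=0.8):
    """Best focus = Z position at which the aerial-image contrast (amplitude
    of the differential waveform) peaks; depth of focus = width of the Z
    range over which the contrast stays above `threshold` times the peak."""
    peak = max(contrasts)
    best = z_positions[contrasts.index(peak)]
    in_range = [z for z, c in zip(z_positions, contrasts) if c >= threshold * peak]
    return best, max(in_range) - min(in_range)
```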
• Shot map data defining the exposure order and scanning direction for each shot area is created and stored in the memory 51 (see FIG. 1).
• The output DS of the integrator sensor 46 has been calibrated in advance, as described above, against the output of the reference illuminometer 90 installed on the Z tilt stage 58 at the same height as the image plane (that is, the surface of the wafer W).
• The unit of data processing is the physical quantity mJ/(cm²·pulse).
• Calibration of the integrator sensor 46 means obtaining a conversion coefficient K1 (or a conversion function) for converting the output DS (digit/pulse) of the integrator sensor 46 into the exposure amount on the image plane (mJ/(cm²·pulse)). By using this conversion coefficient K1, the exposure amount (energy) given to the image plane can be indirectly measured from the output DS of the integrator sensor 46.
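The use of K1 can be sketched as follows, accumulating the image-plane exposure over a pulse train (names are illustrative):

```python
def integrated_exposure(ds_per_pulse, k1):
    """Accumulate the image-plane exposure over a pulse train: each
    integrator sensor reading DS (digit/pulse) is converted to
    mJ/(cm^2 * pulse) with the conversion coefficient K1, then summed."""
    return sum(k1 * ds for ds in ds_per_pulse)
```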
• The output ES of the beam monitor 16c is also calibrated against the output DS of the integrator sensor 46 whose calibration has been completed, and the correlation coefficient between the two is likewise obtained in advance and stored in the memory 51.
• Similarly, the output of the reflected light monitor 47 is calibrated against the output of the integrator sensor 46 that has completed the above calibration, and the correlation coefficient K3 between the output of the integrator sensor 46 and the output of the reflected light monitor 47 is assumed to have been obtained in advance and stored in the memory 51.
• The operator inputs, from the input/output device 62 (see FIG. 1) such as a console, the exposure conditions, including the illumination conditions (the numerical aperture N.A. of the projection optical system, the shape of the secondary light source (the type of aperture in the aperture stop plate 24), and the coherence factor σ), the pattern type (contact hole, line and space, etc.), the reticle type (phase-shift reticle, half-tone reticle, etc.), and the minimum line width or exposure tolerance.
• In accordance with these inputs, the main controller 50 sets the aperture stop (not shown) of the projection optical system PL, selects and sets an aperture of the illumination system aperture stop plate 24, selects a neutral density filter of the energy coarse adjuster 20, and sets the target integrated exposure amount according to the resist sensitivity.
• The main controller 50 uses a reticle loader (not shown) to load the reticle R to be exposed onto the reticle stage RST.
  • the reticle alignment is performed using the reticle alignment system 100, and the baseline measurement is performed.
  • the main controller 50 measures the transmittance of the optical system as follows.
• The XY stage 14 is driven via the wafer stage drive unit 56 so that the unevenness sensor 59B is located directly below the projection optical system PL, and a trigger pulse is given to the light source 16 to make it oscillate (emit light). The transmittance is obtained from the ratio between the output of the unevenness sensor 59B and the output of the integrator sensor 46 at this time, multiplied by 100 and by a predetermined coefficient (K4).
• Alternatively, the unevenness sensor 59B may be positioned at each of the above-mentioned plural points, then positioned again at those points in the reverse order, and the average of the two illuminances measured at each point used to determine the illuminance distribution. In this case, since the forward path and the return path reverse the measurement order, the output drift (especially its linear component) of the unevenness sensor 59B, which arises from the heat energy accumulated in the sensor while it is irradiated by the exposure light IL, can be canceled. Furthermore, even if the transmittance of the optical system (including the projection optical system PL and the like) disposed between the beam splitter 26 and the unevenness sensor 59B fluctuates during the series of measurement operations, the reciprocating operation reduces the effect of the transmittance fluctuation and corrects its linear component, so that the measurement accuracy of the illuminance distribution can be improved.
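The drift-cancellation effect of this forward-and-return averaging can be illustrated with a toy numerical model. The point count, illuminance value, and linear drift rate below are illustrative assumptions, not values from the patent; the model shows that a linear drift adds the same offset to every averaged point, so the relative distribution is recovered exactly.

```python
# Sketch of the reciprocating (forward then reverse-order) unevenness
# measurement: averaging the two readings at each point cancels a linear
# output drift of the sensor.

def averaged_illuminance(forward, backward):
    # backward[] has already been re-indexed by point (not by time).
    return [(f + b) / 2.0 for f, b in zip(forward, backward)]

true_val = 100.0            # true illuminance at every point (uniform)
n = 5                       # number of measurement points
# Linear drift: each successive measurement reads 1 unit higher.
forward = [true_val + t for t in range(n)]                # times t = 0..4
backward_by_time = [true_val + n + t for t in range(n)]   # times t = 5..9,
                                                          # points 4..0
backward = list(reversed(backward_by_time))               # re-index by point
avg = averaged_illuminance(forward, backward)
# Every point now carries the same constant offset (104.5 here), so the
# measured *unevenness* (point-to-point variation) is drift-free.
```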
• Next, the main controller 50 instructs a wafer transfer system (not shown) to replace the wafer W. The wafer is exchanged (or, if there is no wafer on the stage, simply loaded) by the wafer transfer system and a wafer transfer mechanism (not shown) on the XY stage 14, and then a series of alignment steps, namely the so-called search alignment and fine alignment (such as EGA), is performed. Since the wafer exchange and wafer alignment are performed in the same manner as in a known exposure apparatus, further detailed description is omitted here.
• Next, in order to give the wafer W the target integrated exposure amount determined according to the exposure conditions and the resist sensitivity, the main controller 50, while monitoring the output of the integrator sensor 46, uses the control information TS to control the oscillation frequency (light emission timing) and emission power of the light source 16, or controls the energy coarse adjuster 20 via the motor 38, thereby adjusting the amount of light irradiated onto the reticle R, that is, the exposure amount. This exposure control is performed based on the transmittance measured above and the exposure control target value. In addition, the integrated exposure amount distribution is made substantially uniform based on the illuminance distribution measured earlier.
• For example, the incident angle of the illumination light entering the fly-eye lens 22 may be changed for each pulse by a vibrating mirror disposed on the incident surface side of the fly-eye lens 22. As a result, the illuminance distribution in the exposure area 42W changes from pulse to pulse, and the integrated exposure amount distribution in the shot area becomes almost uniform.
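The averaging effect of changing the incident angle pulse by pulse can be illustrated numerically: if each pulse carries a small sinusoidal non-uniformity whose phase is shifted from pulse to pulse (a crude stand-in for the vibrating-mirror effect), the per-pulse ripples cancel in the integrated dose. The ripple model, amplitude, and dimensions below are illustrative assumptions, not taken from the patent.

```python
import math

def integrated_dose(num_pulses, positions, ripple_period):
    # Each pulse deposits a unit dose plus a 5% sinusoidal ripple whose
    # phase advances by 2*pi/num_pulses each pulse (shifted illuminance
    # distribution). The per-position doses are accumulated.
    dose = [0.0] * len(positions)
    for p in range(num_pulses):
        phase = 2 * math.pi * p / num_pulses
        for i, x in enumerate(positions):
            dose[i] += 1.0 + 0.05 * math.sin(
                2 * math.pi * x / ripple_period + phase)
    return dose

xs = [0.1 * i for i in range(10)]        # sample positions across the slit
d = integrated_dose(8, xs, ripple_period=0.7)
spread = max(d) - min(d)
# Equally spaced phases sum to zero, so the integrated dose is flat,
# whereas a static ripple would leave a spread of 0.05 * 8 = 0.4.
```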
• The control method of the exposure amount by the main controller 50 is disclosed in, for example, Japanese Patent Application Laid-Open No. 6-132191, and comprehensive exposure amount control, including pulse energy, pulse oscillation frequency, and scanning speed, is disclosed in detail in, for example, Japanese Patent Application Laid-Open No. 10-2703445; these documents are incorporated as part of the text to the extent permitted by the domestic laws of the designated or selected countries. The disclosed contents of the above publications can be used as they are or with some modifications.
• In addition, the main controller 50 controls the illumination system aperture stop plate 24 via the drive unit 40, and further controls the opening and closing operation of the movable reticle blind 30B in synchronization with the operation information of the stage system.
• The main controller 50 then instructs the wafer transfer system (not shown) to replace the wafer W. The wafer is exchanged by the wafer transfer system and a wafer transfer mechanism (not shown) on the XY stage 14, and thereafter search alignment and fine alignment are performed on the replaced wafer in the same manner as described above.
• The irradiation fluctuation of the imaging characteristics (including focus fluctuation) of the projection optical system PL since the start of exposure of the first wafer W is obtained based on the measured values of the integrator sensor 46 and the reflected light monitor 47, and a command value that corrects this irradiation variation is given to the imaging characteristic correction controller 78, while an offset is given to the light receiving optical system 60b.
  • the reticle pattern is transferred to a plurality of shot areas on the wafer W by the step-and-scan method.
• The transmittance of the optical system is then measured in the same manner as described above, and the measurement result is stored in the memory 51; that is, the measured value of the transmittance is updated. Thereafter, the transmittance of the optical system is repeatedly measured, and when the exposure of the Nth wafer W is completed, the series of exposure processing ends. During exposure, the transmittance measured immediately before may be used as a fixed value, or the transmittance change may be sequentially updated by calculation and the calculated value used; which approach to take may be determined according to the transmittance measurement interval or the like.
• In the present embodiment, a transmittance measuring device is configured by the integrator sensor 46, the irradiation amount monitor 59A, and the main controller 50. Also, an imaging characteristic adjustment device is configured by the piezoelectric elements 74a, 74b, and 74c for driving the lens element 70a, the imaging characteristic correction controller 78, and the main controller 50. Further, the main controller 50 also plays the roles of an exposure controller (exposure amount control system) and a stage controller (stage control system); of course, these controllers may be provided separately from the main controller 50.
• When the optical sensor 2 having the MN-based semiconductor light receiving element 17 as described above is used as the beam monitor 16c in the housing of the light source 16, since the carrier layer is present at the surface of the light receiving element 17, the intensity (power) of the pulsed ultraviolet light (ArF excimer laser light) LB oscillated from the laser resonator 16a can be detected stably, with high precision and high sensitivity. Therefore, deterioration of measurement reproducibility and deterioration over time due to poor sensitivity of the beam monitor 16c are suppressed, and the occurrence of energy control errors (exposure amount control errors) due to unnecessary output fluctuation of the beam monitor 16c can be suppressed. In addition, the center wavelength and the half-width of the spectrum of the pulsed ultraviolet light LB may be detected and maintained within a predetermined allowable range.
• Here, let the scanning speed of the wafer W be Vw, let the width (slit width) in the scanning direction of the slit-shaped exposure area 42W on the wafer W be D, and let the oscillation frequency of the laser light source be F. Since the distance the wafer W moves between pulse emissions is Vw/F, the number of pulses of the pulsed exposure light IL irradiated per point on the wafer (the number of exposure pulses) N is expressed by the following equation:

  N = D / (Vw / F) ... (6)
• The above number N of exposure pulses must be equal to or greater than the minimum pulse number Nmin of the exposure light IL to be applied to each point on the wafer W, which is determined based on the known variance Epσ of the pulse energy of the light source 16 so as to obtain the required exposure control reproducibility. Since the number N of exposure pulses is inversely proportional to the scanning speed Vw (assuming that the slit width D and the oscillation frequency F are constant), the minimum number of exposure pulses (minimum number of pulse oscillations) Nmin sets an upper limit on the scanning speed. In the present embodiment, the intensity (power) of the pulsed ultraviolet light LB oscillated from the laser resonator 16a can be detected by the beam monitor 16c with high stability and high accuracy, so the energy variation Epσ is reduced, and the minimum number of exposure pulses Nmin required to achieve the irradiation energy error allowed during exposure (that is, to obtain the required exposure control reproducibility) can be reduced.
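Equation (6) and the Nmin constraint above can be sketched directly in code. The slit width, frequency, and Nmin values below are illustrative assumptions used only to exercise the arithmetic; they are not parameters from the patent.

```python
# Equation (6): each point on the wafer receives N = D / (Vw / F) pulses,
# and the scanning speed must be capped so that N >= Nmin.

def exposure_pulses(slit_width_mm: float, scan_speed_mm_s: float,
                    freq_hz: float) -> float:
    """Number of pulses per point: N = D / (Vw / F) = D * F / Vw."""
    return slit_width_mm / (scan_speed_mm_s / freq_hz)

def max_scan_speed(slit_width_mm: float, freq_hz: float,
                   n_min: int) -> float:
    """Largest Vw that still delivers at least n_min pulses per point."""
    return slit_width_mm * freq_hz / n_min

# Illustrative numbers: D = 8 mm slit, F = 1 kHz, Vw = 100 mm/s.
n = exposure_pulses(slit_width_mm=8.0, scan_speed_mm_s=100.0,
                    freq_hz=1000.0)          # 80 pulses per point
v_max = max_scan_speed(8.0, 1000.0, n_min=40)  # 200 mm/s
```

This makes concrete why a smaller Nmin (enabled by a more stable beam monitor) permits a higher scanning speed and hence higher throughput.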
• Also, the integrator sensor 46 having the above-mentioned optical sensor 2 makes it possible to detect the light intensity of the exposure light IL with high accuracy, high sensitivity, and excellent stability. As a result, measurement errors by the integrator sensor 46 are suppressed, and the image plane illuminance can be estimated with high accuracy over a long period. Also, since the output of the integrator sensor 46 is used for normalization to prevent fluctuations in the measurement values of other sensors caused by fluctuations in the power of the light source 16, measurement errors of those sensors are suppressed as well. In addition, the exposure amount matching accuracy with other exposure apparatuses can be improved, and the maintenance interval for calibration can be lengthened, so that the MTBF (mean time between failures) or MTTR (mean time to repair) can be improved.
• In the present embodiment, the main controller 50 estimates the illuminance of the image plane based on the output of the integrator sensor 46 and controls the exposure so that the integrated exposure amount on the wafer W becomes the target exposure amount. Since the integrator sensor 46 enables high-accuracy, high-sensitivity, and stable detection of the intensity of the exposure light IL, the exposure amount control accuracy, and consequently the accuracy of the pattern line width formed on the wafer W, can be improved. Moreover, the main controller 50 measures the transmittance of the optical system at predetermined intervals using the unevenness sensor 59B and performs the above control taking the measured transmittance fluctuation into account, so that still more accurate exposure amount control, and hence more accurate exposure, can be performed.
• Since the MN-based semiconductor light-receiving element 17 is used as the light-receiving element constituting the unevenness sensor 59B used for measuring the transmittance, the measurement accuracy of the unevenness sensor 59B is maintained even when the exposure operation is continued for a long time. Also, the optical sensor 2 having the MN-based semiconductor light receiving element 17 is used as the irradiation amount monitor 59A, which has an opening 59d with an area capable of receiving at one time the exposure light IL irradiating the entire surface of the illumination field (exposure area 42W); based on its measured value, the irradiation variation of the imaging characteristics of the projection optical system PL is obtained and the imaging characteristics are corrected accordingly, so that a proper imaging state can be maintained. The irradiation variation attributable to the reticle R may also be obtained based on the measurement value of the irradiation amount monitor 59A, and the imaging characteristics of the projection optical system corrected in consideration of that fluctuation. Furthermore, since the irradiation amount monitor 59A can accurately measure the illuminance on the image plane of the exposure light IL passing through the projection optical system PL, it is also possible to correct the basic data used in the irradiation variation calculation of the imaging characteristics.
• In the present embodiment, the imaging characteristics of the projection optical system PL are automatically adjusted by the imaging characteristic adjusting devices (74a to 74c, 78, 50) based on the measured values of the irradiation amount sensor 59 (specifically, the irradiation amount monitor 59A for measuring the illuminance of the image plane, the unevenness sensor 59B used for transmittance measurement, and the aerial image measuring device 59C); the adjustment of the imaging characteristics can thus be partially or fully automated. Further, the imaging characteristic adjustment device adjusts the imaging characteristics (mainly the irradiation variation) of the projection optical system PL by additionally considering the output of the reflected light monitor 47. That is, the reflectance of the wafer W is calculated as described above, the irradiation fluctuation amount of the projection optical system PL is calculated taking the calculation result into account, and this is corrected by the imaging characteristic adjusting device, making it possible to correct more accurately the imaging characteristic fluctuation caused by the irradiation variation of the projection optical system PL. Furthermore, since the MN-based semiconductor light-receiving element 17 is used as the light-receiving element constituting the reflected light monitor 47, the reflected light monitor 47 can continue to detect the light reflected back via the reticle R with stable accuracy even if the exposure operation is continued for a long time.
• In addition, since an optical sensor having the MN-based semiconductor light-receiving element 17 is used as the reference illuminometer 90, which is detachably mounted on the Z tilt stage 58 and used to calibrate the exposure amount on the substrate between a plurality of exposure apparatuses, calibration for matching the amount of exposure light (illuminance matching) on the substrate between the units can be performed with high accuracy.
• Also, since an optical sensor having the MN-based semiconductor light receiving element 17 is used as the illuminance unevenness sensor 59B, which can measure the in-plane unevenness of the amount of exposure light IL in a predetermined illumination field by moving the Z tilt stage 58 two-dimensionally, the illumination unevenness on the substrate surface (image surface), seen via the optical system including the projection optical system PL, can be measured accurately; based on that value, the illumination unevenness can be adjusted and the integrated exposure amount distribution made uniform, thereby improving the accuracy of the pattern line width transferred and formed on the wafer W.
• Also, since the optical sensor 2 having the MN-based semiconductor light receiving element 17 is used in the aerial image measuring device 59C, in which the image of the measurement pattern formed on the reticle R and the opening 59f of the light receiving surface provided on the Z tilt stage 58 are scanned relative to each other and the exposure light IL transmitted through the opening 59f is received by the optical sensor, the aerial image measuring device 59C can detect with high accuracy the information for determining the positional relationship between the reticle R, or the imaging plane of the projection optical system PL, and the wafer W. That is, an aerial image of each measurement pattern is measured based on the output of the optical sensor 2, and from the measurement results of the aerial images, the imaging characteristics of the projection optical system PL in the XY plane direction, such as magnification and distortion (a type of information used as a criterion for determining the positional relationship, i.e., the overlay offset, between the reticle R and the wafer W in the XY plane), can be detected with high accuracy.
• Also, by repeating the above relative scanning while changing the position of the Z tilt stage 58 in the Z direction orthogonal to the XY plane, the focus offset, which is information serving as a reference for determining the positional relationship in the Z direction between the reticle and the wafer, and the focus position, telecentricity, or depth of focus of the projection optical system can be detected with high accuracy based on the change in the contrast of the differential signal of the output of the optical sensor 2. Furthermore, the offset serving as a reference for determining the relative positional relationship between the reticle and the wafer in the θx and θy directions, and the shape of the image plane of the projection optical system or its curvature of field, can also be detected. As a result, the overlay accuracy between the reticle R and the wafer W and the line width control accuracy can be improved.
• Also, since MN-based semiconductor light-receiving elements 17 arranged in a matrix are used as the two-dimensional imaging element constituting the reticle alignment system 100, which detects the relative position between a plurality of reference marks (mark patterns) formed on the reference mark plate FM and the projected images of the reticle marks (mark patterns) formed on the reticle corresponding to the respective reference marks, each mark and its projected image can be detected as a predetermined two-dimensional image with high accuracy. In this case, the overlay offset can be obtained with high accuracy based on the in-plane positional deviation between the projected images, so that the alignment accuracy can be improved.
• In the exposure apparatus 10 of the present embodiment, ArF excimer laser light having a wavelength of 193 nm, or KrF excimer laser light having a wavelength of 248 nm, is used as the exposure light IL, and the MN-based semiconductor light receiving elements 17 that make up each of the above optical sensors make it possible to detect this exposure light IL with high accuracy, high sensitivity, and high stability. Therefore, in the present embodiment, by performing exposure using an energy beam of short wavelength, high-precision exposure can be performed with the improved resolution of the projection optical system PL.
• It is not necessary that all of the optical sensors have the MN-based semiconductor light receiving element 17 or a similar light receiving element; it may be used for only some of them. For example, the MN-based semiconductor light-receiving element 17 or a similar element may be used mainly for the sensor parts exposed to high illuminance and the sensor parts used for calibration, while conventional Si-based semiconductor light-receiving elements are used for the other parts. In such a case, the cost can be reduced.
• As the MN-based semiconductor light receiving element 17, in addition to the structure using the P-type GaN layer and N-type GaAlN layer shown in FIG. 3(A) and the N-type structure shown in FIG. 3(B), a semiconductor light receiving element having the structure shown in FIG. 20 can be employed. In this element, an N-type GaN layer S5 and an i-type GaAlN layer S6 are laminated on a sapphire substrate 1 via a GaN buffer layer 1a, and electrodes Q1 and Q2 are respectively formed on the i-type GaAlN layer S6.
• When an MN-based semiconductor light receiving element is used for each optical sensor, it is desirable to enclose the optical path with a cover of quartz or fluorite, to purge the inside of the cover with an inert gas such as nitrogen, helium, argon, or krypton (that is, to provide a means for flowing the inert gas), and to use a technique for cleaning the sensor surface by so-called light cleaning. The gas supplied to the light beam passing portion may also be passed through a chemical filter.
  • a wavefront aberration measuring device which is a kind of detachable irradiation amount sensor, may be provided on the stage, and the imaging characteristic of the projection optical system may be measured by the wavefront aberration measuring device.
  • FIG. 16 is a cross-sectional view showing an example of the wavefront aberration measuring device 120 installed at a predetermined position on the Z tilt stage.
  • the wavefront aberration measuring device 120 is detachably mounted on the Z tilt stage 58 near the place where the sensor head 90A of the reference illuminometer 90 is installed.
• The wavefront aberration measuring device 120 has a case 122 with an open top surface, an imaging element 124 similar to the imaging element 104R described above in connection with FIG. 9 and fixed to the bottom plate 122a of the case 122, and a light-receiving glass 126 closing the upper open end of the case 122.
• A light-shielding film 128 of a chrome layer or the like is formed on the upper surface of the light-receiving glass 126, and a predetermined opening 130a and a pinhole-shaped opening 130b are each formed in a part of the light-shielding film.
• Inside the case, a bending mirror 132 is provided obliquely at 45° almost directly below the opening 130a, and a half mirror 134 is provided almost directly below the opening 130b; the two mirrors are arranged parallel to each other.
• The imaging element 124 is connected to a data processing unit (not shown) of the apparatus main body via a cable 136, and this data processing unit is configured to enable online communication, such as the exchange of measurement data, with the control system of the exposure apparatus 10.
• Of the light emitted from the projection optical system PL, one light beam (plane wave) LL1 traveling toward the opening 130a passes through the opening 130a and the light-receiving glass 126, is sequentially reflected by the bending mirror 132 and the half mirror 134, and is directed to the imaging element 124. The other light beam LL2 emitted from the projection optical system PL and traveling toward the opening 130b is converted from a plane wave into a spherical wave by passing through the pinhole-shaped opening 130b, passes through the light-receiving glass 126 and the half mirror 134, and is combined with the light beam LL1 on its way to the imaging element 124. As a result, interference fringes due to the interference between the plane wave and the spherical wave are formed on the light receiving surface of the imaging element 124, the image signal of the interference fringes is processed by the data processing unit (not shown) via the cable 136, and the wavefront aberration of the projection optical system is calculated.
• The obtained measurement data of the wavefront aberration is sent from the data processing unit to the control system of the exposure apparatus 10, that is, the main controller 50.
  • the main controller 50 corrects the imaging characteristics of the projection optical system PL via the imaging characteristic correction controller 78 based on the measurement data of the wavefront aberration.
• When the wavefront aberration measuring device 120 is used, since the method of detecting the image of the interference fringes is adopted, the wavefront aberration of the projection optical system can be measured with high accuracy even in cases where it is difficult to accurately detect the imaging characteristics with the aerial image measuring device 59C described above, for example, when assembling the equipment, when starting up after transport, or when recovering from an emergency such as a power failure.
• The exposure apparatus of the above-described embodiment is manufactured by assembling various subsystems, including the respective components listed in the claims of the present application, so as to maintain predetermined mechanical, electrical, and optical accuracies. Before and after this assembly, in order to ensure these various accuracies, the various optical systems are adjusted to achieve optical accuracy, the various mechanical systems to achieve mechanical accuracy, and the various electrical systems to achieve electrical accuracy. The process of assembling the various subsystems into the exposure apparatus includes mechanical connections, wiring connections of electric circuits, and piping connections of pneumatic circuits among the various subsystems. It goes without saying that each subsystem is individually assembled before this process. When the process of assembling the various subsystems into the exposure apparatus is completed, comprehensive adjustments are made to ensure the various accuracies of the exposure apparatus as a whole. It is desirable that the exposure apparatus be manufactured in a clean room where the temperature, cleanliness, and so on are controlled. Further, in each of the above embodiments, the case where the present invention is applied to a step-and-scan type scanning exposure apparatus has been described.
• However, the application range of the present invention is not limited to this; the present invention can also be suitably applied to a static exposure type, step-and-repeat exposure apparatus (such as a stepper), and further to a step-and-stitch exposure apparatus, a mirror projection aligner, and the like.
• In the foregoing embodiments, ArF excimer laser light (wavelength 193 nm) is used as the illumination light IL for exposure; however, the present invention can of course also be suitably applied to an exposure apparatus that uses vacuum ultraviolet light such as F2 laser light (wavelength 157 nm) or Ar2 laser light (wavelength 126 nm), and even to an exposure apparatus using extreme ultraviolet light (EUV light) having a wavelength of 5 to 30 nm.
• In addition, a harmonic may be used that is obtained by amplifying single-wavelength laser light in the infrared or visible range, oscillated from a DFB semiconductor laser or a fiber laser, with a fiber amplifier doped with, for example, erbium (or both erbium and ytterbium), and converting it into ultraviolet light using a nonlinear optical crystal.
• For example, if the oscillation wavelength of the single-wavelength laser is in the range of 1.51 to 1.59 µm, the 8th harmonic, whose generated wavelength is in the range of 189 to 199 nm, or the 10th harmonic, whose generated wavelength is in the range of 151 to 159 nm, is output. In particular, if the oscillation wavelength is in the range of about 1.544 to 1.553 µm, the generated wavelength of the 8th harmonic is in the range of 193 to 194 nm, that is, ultraviolet light of almost the same wavelength as the ArF excimer laser light is obtained; and if the oscillation wavelength is in the range of 1.57 to 1.58 µm, the generated wavelength of the 10th harmonic is in the range of 157 to 158 nm, that is, ultraviolet light of almost the same wavelength as the F2 laser light is obtained.
• Further, if the oscillation wavelength is in the range of 1.03 to 1.12 µm, the 7th harmonic, whose generated wavelength is in the range of 147 to 160 nm, is output; in particular, if the oscillation wavelength is in the range of about 1.099 to 1.106 µm, the generated wavelength of the 7th harmonic is in the range of 157 to 158 nm, that is, ultraviolet light of almost the same wavelength as the F2 laser light is obtained.
• In this case, as the single-wavelength oscillation laser, for example, an ytterbium-doped fiber laser can be used.
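The harmonic-conversion examples above follow from simple arithmetic: the n-th harmonic of a fundamental at wavelength λ has wavelength λ/n. A short sketch (the specific fundamental wavelengths chosen below are illustrative values within the ranges quoted above):

```python
# Arithmetic behind the fiber-laser harmonic examples: the n-th harmonic
# of a fundamental at wavelength lam (in micrometers) is lam / n.

def harmonic_nm(fundamental_um: float, order: int) -> float:
    """Generated wavelength in nm of the given harmonic order."""
    return fundamental_um * 1000.0 / order

# 1.547 um fundamental (within the 1.51-1.59 um range) -> 8th harmonic
# near the 193 nm ArF line:
arf_like = harmonic_nm(1.547, 8)    # about 193.4 nm
# 1.57 um fundamental -> 10th harmonic near the 157 nm F2 line:
f2_like = harmonic_nm(1.57, 10)     # 157.0 nm
```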
  • the projection optical system and the illumination optical system described in the above embodiments are merely examples, and it is a matter of course that the present invention is not limited to these.
• Further, the projection optical system is not limited to a refractive system; a reflection system consisting only of reflective optical elements, or a catadioptric system having both reflective and refractive optical elements, may be used. When a catadioptric system is used as the projection optical system, for example, a catadioptric system having a beam splitter and a concave mirror as reflective optical elements, as disclosed in JP-A-8-17054 and JP-A-10-20195, or a catadioptric system having a concave mirror without using a beam splitter as a reflective optical element, as disclosed in JP-A-8-33469 and JP-A-10-3039, can be used. These documents are incorporated as part of the text.
• Alternatively, a catadioptric system may be used in which, as disclosed in U.S. Pat. No. 5,488,229 and Japanese Patent Application Laid-Open No. 10-1040513, a plurality of refractive optical elements and two mirrors (a primary mirror, which is a concave mirror, and a sub-mirror, which is a back-surface mirror having a reflection surface formed on the side opposite to the entrance surface of a refractive element or a plane-parallel plate) are arranged, and the intermediate image of the reticle pattern formed by the refractive optical elements is re-imaged on the wafer by the primary mirror and the sub-mirror. In this catadioptric system, the primary mirror and the sub-mirror are arranged following the plurality of refractive optical elements, and the illumination light passes through a part of the primary mirror, is reflected in the order of the sub-mirror and then the primary mirror, passes through a part of the sub-mirror, and reaches the wafer.
  • the present invention can be applied to an exposure apparatus used for transferring a device pattern onto a ceramic wafer, an exposure apparatus used for manufacturing an imaging device (such as a CCD), and the like.
• FIG. 17 shows a flowchart of an example of manufacturing devices (semiconductor chips such as ICs and LSIs, liquid crystal panels, CCDs, thin-film magnetic heads, micromachines, etc.). First, in step 201 (design step), the function and performance of the device are designed. Next, in step 202 (mask manufacturing step), a mask on which the designed circuit pattern is formed is manufactured. In step 203 (wafer manufacturing step), a wafer is manufactured from a material such as silicon. In step 204 (wafer processing step), actual circuits and the like are formed on the wafer by lithography and other techniques using the mask and the wafer prepared in the above steps. In step 205 (device assembling step), the device is assembled using the wafer processed in step 204. In step 206 (inspection step), an operation check test, a durability test, and the like of the device manufactured in step 205 are performed. After these steps, the device is completed and shipped.
• In step 211 (oxidation step), the surface of the wafer is oxidized. In step 212 (CVD step), an insulating film is formed on the wafer surface. In step 213 (electrode formation step), electrodes are formed on the wafer. In step 214 (ion implantation step), ions are implanted into the wafer. Steps 211 to 214 constitute the pre-processing steps of each stage of wafer processing, and are selected and executed at each stage according to the required processing. When the pre-processing steps are completed at each stage of the wafer process, the post-processing steps are executed: first, in step 215 (resist formation step), a photosensitive agent (resist) is applied to the wafer.
  • In step 216 (exposure step), the circuit pattern of the mask is transferred onto the wafer by the lithography system (exposure apparatus) and the exposure method described above.
  • In step 217 (development step), the exposed wafer is developed.
  • In step 218 (etching step), the wafer is etched.
  • In step 219 (resist removing step), the resist that is no longer necessary after etching is removed.
  • As described above, the substrate on which the image of the mask pattern has been formed is developed in the developing step to form a step structure on the substrate.
  • Processes such as etching, vapor deposition, and ion implantation are then performed hierarchically on the substrate having this step structure to form a predetermined circuit device. Accordingly, circuit devices formed by exposing line widths at resolutions of 0.25 μm to 0.05 μm can be manufactured with a high yield.
  • According to the exposure apparatus and the exposure method of the present invention, exposure accuracy can be maintained over a long period without frequent replacement of the optical sensor, even when a short-wavelength light source is used. Further, according to the device manufacturing method of the present invention, the productivity of microdevices with a higher degree of integration can be improved.
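The staged wafer-processing flow described above — pre-processing steps 211 to 214 selected per stage, followed by the lithography steps 215 to 219 — can be sketched as a small model. This is an illustrative sketch only; the function and step names below are invented for the example and are not code from the patent.

```python
# Minimal model of one stage of wafer processing as described in the flowchart:
# pre-processing steps are chosen per stage, lithography steps always run in order.

PRE_PROCESSING = {
    "oxidation": "step 211",
    "cvd": "step 212",
    "electrode_formation": "step 213",
    "ion_implantation": "step 214",
}

LITHOGRAPHY = [
    ("step 215", "resist forming"),
    ("step 216", "exposure"),
    ("step 217", "development"),
    ("step 218", "etching"),
    ("step 219", "resist removal"),
]

def process_layer(selected_pre_steps):
    """Run one stage: the pre-processing steps selected for this stage
    (according to the required process), then the full lithography sequence."""
    log = [PRE_PROCESSING[name] for name in selected_pre_steps]
    log.extend(step_no for step_no, _desc in LITHOGRAPHY)
    return log

# Example: a stage that needs oxidation and ion implantation before lithography
print(process_layer(["oxidation", "ion_implantation"]))
```

Repeating `process_layer` once per circuit layer mirrors how the pre-processing and lithography steps are executed hierarchically to build up the device.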

Abstract

An exposure system (10) comprises a beam monitor for monitoring the output of a laser light source (16), an integrator sensor (46) for detecting the illuminance on a substrate (W), a reflection monitor (47) for monitoring reflection from the substrate, and a photosensor (59) for measuring the irradiation on the substrate. The laser light source (16) generates laser light with a short wavelength of 200 nm or less, and the photosensor comprises a metal-nitride photodetector containing indium, gallium, or aluminum. The photosensor has increased durability and enables highly accurate exposure operation over a long period.
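As a rough illustration of how readings from the monitors in the abstract could be combined, the sketch below accumulates an estimated substrate dose from per-pulse beam-monitor energies. The function name, the simple transmittance/reflectance model, and all numbers are assumptions made for this example; none of them are taken from the patent.

```python
# Hypothetical dose bookkeeping for a pulsed exposure system:
# beam monitor -> per-pulse energy at the source,
# integrator sensor -> calibrated illumination-path transmittance,
# reflection monitor -> substrate reflectance (reflected light assumed lost).

def delivered_dose(pulse_energies, transmittance, reflectance):
    """Estimate the accumulated dose on the substrate (mJ/cm^2) from
    per-pulse energies measured at the beam monitor, scaled by the
    illumination-path transmittance and the fraction not reflected."""
    absorbed_fraction = 1.0 - reflectance
    return sum(e * transmittance * absorbed_fraction for e in pulse_energies)

# Example: 100 pulses of 0.5 mJ/cm^2 at the beam monitor, 60% path
# transmittance, 4% substrate reflectance -> about 28.8 mJ/cm^2
dose = delivered_dose([0.5] * 100, transmittance=0.60, reflectance=0.04)
print(round(dose, 3))
```

A real dose controller would also update the transmittance calibration from the integrator sensor during exposure, which is the kind of long-term monitoring the durable photosensor is meant to support.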
PCT/JP2000/004892 1999-07-23 2000-07-21 Exposure method, exposure system, light source, and method of device manufacture WO2001008205A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU60224/00A AU6022400A (en) 1999-07-23 2000-07-21 Exposure method, exposure system, light source, and method of device manufacture

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP20978599 1999-07-23
JP11/209785 1999-07-23

Publications (1)

Publication Number Publication Date
WO2001008205A1 true WO2001008205A1 (fr) 2001-02-01

Family

ID=16578569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2000/004892 WO2001008205A1 (fr) Exposure method, exposure system, light source, and method of device manufacture

Country Status (2)

Country Link
AU (1) AU6022400A (fr)
WO (1) WO2001008205A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6370419A (ja) * 1986-09-11 1988-03-30 Canon Inc 投影露光装置
JPH0332016A (ja) * 1989-06-28 1991-02-12 Canon Inc 照射光量制御装置
JPH05343287A (ja) * 1992-06-11 1993-12-24 Nikon Corp 露光方法
JPH09181350A (ja) * 1995-12-21 1997-07-11 Mitsubishi Cable Ind Ltd 短波長光の検出方法
JP2000101129A (ja) * 1998-09-18 2000-04-07 Mitsubishi Cable Ind Ltd 光検出方法及びそのためのGaN系半導体受光素子

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8120763B2 (en) 2002-12-20 2012-02-21 Carl Zeiss Smt Gmbh Device and method for the optical measurement of an optical system by using an immersion fluid
US8836929B2 (en) 2002-12-20 2014-09-16 Carl Zeiss Smt Gmbh Device and method for the optical measurement of an optical system by using an immersion fluid
US8947637B2 (en) 2003-08-29 2015-02-03 Asml Netherlands B.V. Lithographic apparatus and device manufacturing method
US7907255B2 (en) 2003-08-29 2011-03-15 Asml Netherlands B.V. Lithographic apparatus and device manufacturing method
US8035798B2 (en) 2003-08-29 2011-10-11 Asml Netherlands B.V. Lithographic apparatus and device manufacturing method
US9568841B2 (en) 2003-08-29 2017-02-14 Asml Netherlands B.V. Lithographic apparatus and device manufacturing method
US10025204B2 (en) 2003-08-29 2018-07-17 Asml Netherlands B.V. Lithographic apparatus and device manufacturing method
US10514618B2 (en) 2003-08-29 2019-12-24 Asml Netherlands B.V. Lithographic apparatus and device manufacturing method
US9316919B2 (en) 2003-08-29 2016-04-19 Asml Netherlands B.V. Lithographic apparatus and device manufacturing method
US11003096B2 (en) 2003-08-29 2021-05-11 Asml Netherlands B.V. Lithographic apparatus and device manufacturing method
US8139198B2 (en) 2003-09-29 2012-03-20 Nikon Corporation Exposure apparatus, exposure method, and method for producing device
EP2837969A1 (fr) 2003-09-29 2015-02-18 Nikon Corporation Appareil d'exposition, procédé d'exposition et procédé de production du dispositif
US8749759B2 (en) 2003-09-29 2014-06-10 Nikon Corporation Exposure apparatus, exposure method, and method for producing device
CN101526759A (zh) * 2003-09-29 2009-09-09 株式会社尼康 曝光装置
US8305552B2 (en) 2003-09-29 2012-11-06 Nikon Corporation Exposure apparatus, exposure method, and method for producing device
JP2016189029A (ja) * 2003-09-29 2016-11-04 株式会社ニコン 露光装置、計測方法、露光方法、及びデバイス製造方法
EP3093710A2 (fr) 2003-09-29 2016-11-16 Nikon Corporation Appareil d'exposition, procede d'exposition et procede de production de dispositif
EP3093711A2 (fr) 2003-09-29 2016-11-16 Nikon Corporation Appareil d'exposition, procede d'exposition et procede de production de dispositif
US9513558B2 (en) 2003-09-29 2016-12-06 Nikon Corporation Exposure apparatus, exposure method, and method for producing device
US8039807B2 (en) 2003-09-29 2011-10-18 Nikon Corporation Exposure apparatus, exposure method, and method for producing device
US10025194B2 (en) 2003-09-29 2018-07-17 Nikon Corporation Exposure apparatus, exposure method, and method for producing device
US8629418B2 (en) 2005-02-28 2014-01-14 Asml Netherlands B.V. Lithographic apparatus and sensor therefor
TWI491058B (zh) * 2011-12-15 2015-07-01 Sony Corp 影像拾取面板及影像拾取處理系統
KR20160062007A (ko) * 2013-09-27 2016-06-01 칼 짜이스 에스엠테 게엠베하 특히, 마이크로리소그래픽 투영 노광 장치를 위한 거울
KR102214738B1 (ko) 2013-09-27 2021-02-10 칼 짜이스 에스엠테 게엠베하 특히, 마이크로리소그래픽 투영 노광 장치를 위한 거울
TWI752676B (zh) * 2014-12-09 2022-01-11 美商希瑪有限責任公司 光學源中干擾之補償
WO2021192244A1 (fr) * 2020-03-27 2021-09-30 ギガフォトン株式会社 Procédé de détermination de détérioration de capteur
US11808629B2 (en) 2020-03-27 2023-11-07 Gigaphoton Inc. Sensor degradation evaluation method
JP7402313B2 (ja) 2020-03-27 2023-12-20 ギガフォトン株式会社 センサ劣化判定方法
CN112200848A (zh) * 2020-10-30 2021-01-08 中国科学院自动化研究所 低光照弱对比复杂环境下的深度相机视觉增强方法及系统

Also Published As

Publication number Publication date
AU6022400A (en) 2001-02-13

Similar Documents

Publication Publication Date Title
US7298498B2 (en) Optical property measuring apparatus and optical property measuring method, exposure apparatus and exposure method, and device manufacturing method
US7616290B2 (en) Exposure apparatus and method
EP1347501A1 (fr) Instrument de mesure de l'aberration d'un front d'onde, procede de mesure de l'aberration d'un front d'onde, appareil d'exposition et procede de fabrication d'un microdispositif
US6509956B2 (en) Projection exposure method, manufacturing method for device using same, and projection exposure apparatus
US20090073404A1 (en) Variable slit device, illumination device, exposure apparatus, exposure method, and device manufacturing method
WO2006126522A1 (fr) Procede et appareil d'exposition, et procede de fabrication du dispositif
JPWO2002043123A1 (ja) 露光装置、露光方法及びデバイス製造方法
WO2001008205A1 (fr) Procede d'exposition, systeme d'exposition, source lumineuse, procede et dispositif de fabrication
JP2003151884A (ja) 合焦方法、位置計測方法および露光方法並びにデバイス製造方法
JP2005093948A (ja) 露光装置及びその調整方法、露光方法、並びにデバイス製造方法
WO2005117075A1 (fr) Méthode de correction, méthode de prévision, méthode d'exposition, méthode de correction des reflets, méthode de mesure des reflets, appareil d'exposition et méthode de fabrication du dispositif
WO1999031716A1 (fr) Aligneur, methode d'exposition et procede de fabrication de ce dispositif
US9513460B2 (en) Apparatus and methods for reducing autofocus error
JP2001035782A (ja) 露光装置及び露光方法、光源装置、並びにデバイス製造方法
JP2001143993A (ja) 露光装置及び露光方法、光源装置、並びにデバイス製造方法
JPH11251239A (ja) 照度分布計測方法、露光方法及びデバイス製造方法
WO2002050506A1 (fr) Appareil de mesure de surface d'onde et son utilisation, procede et appareil pour determiner des caracteristiques de mise au point, procede et appareil pour corriger des caracteristiques de mise au point, procede pour gerer des caracteristiques de mise au point, et procede et appareil d'exposition
JP2009141154A (ja) 走査露光装置及びデバイス製造方法
JP2003100613A (ja) 波面収差測定装置及び波面収差測定方法、並びに、露光装置及びデバイスの製造方法
US20070206167A1 (en) Exposure Method and Apparatus, and Device Manufacturing Method
JP2002198299A (ja) 露光装置及びデバイス製造方法
JP2002231611A (ja) 露光装置及びデバイス製造方法
JP2001217180A (ja) アライメント方法、アライメント装置、露光装置、及び露光方法
JP2003100612A (ja) 面位置検出装置、合焦装置の調整方法、面位置検出方法、露光装置及びデバイスの製造方法
JP2011060799A (ja) 露光装置、露光方法、及びデバイス製造方法

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP