JP2009054730A - Moving body driving method and moving body driving system, pattern forming method and device, exposure method and device, and device manufacturing method - Google Patents


Info

Publication number
JP2009054730A
Authority
JP
Japan
Prior art keywords
measurement
moving body
surface
wafer
position
Prior art date
Legal status
Pending
Application number
JP2007219080A
Other languages
Japanese (ja)
Inventor
Yuho Kanatani
有歩 金谷
Original Assignee
Nikon Corp
株式会社ニコン
Priority date
Filing date
Publication date
Application filed by Nikon Corp
Priority to JP2007219080A
Publication of JP2009054730A

Abstract

To compensate for a measurement error caused by a change in the wavelength of the probe beam of a surface position sensor, to measure the position of a moving body in the direction perpendicular to a predetermined moving surface and in the tilt direction using the compensated surface position sensor, and thereby to drive the moving body two-dimensionally with high accuracy.
A surface position sensor configured from the focus sensor FS generates a measurement error when the wavelength of the probe beam LB₁ changes. Therefore, the wavelength λ of the probe beam LB₁ in the atmosphere inside the sensor is measured using the environment sensor WT, and the oscillation wavelength of the light source LD is adjusted according to the measurement result. Alternatively, correction data of the surface position sensor for the wavelength λ is created, and the measurement result of the surface position sensor is corrected using the correction data. Using the surface position sensor in which the measurement error due to the wavelength change has been compensated in this way, the position of the stage in the direction perpendicular to a predetermined moving surface and in the tilt direction is measured, and the stage is driven two-dimensionally with high accuracy.
[Representative drawing] FIG. 25

Description

The present invention relates to a moving body driving method and a moving body driving system, a pattern forming method and apparatus, an exposure method and apparatus, and a device manufacturing method, and more particularly to a moving body driving method and a moving body driving system for driving a moving body substantially along a two-dimensional plane, a pattern forming method using the moving body driving method, a pattern forming apparatus including the moving body driving system, an exposure method using the moving body driving method, an exposure apparatus including the moving body driving system, and a device manufacturing method using the pattern forming method.

Conventionally, in lithography processes for manufacturing electronic devices (microdevices) such as semiconductor elements (integrated circuits and the like) and liquid crystal display elements, step-and-repeat reduction projection exposure apparatuses (so-called steppers) and step-and-scan projection exposure apparatuses (so-called scanning steppers, also called scanners) have mainly been used.

However, the surface of the wafer to be exposed is not necessarily flat, owing to, for example, waviness of the wafer. For this reason, particularly in a scanning exposure apparatus such as a scanner, when a reticle pattern is transferred to a shot area on the wafer by the scanning exposure method, position information (focus information) of the wafer surface in the optical axis direction of the projection optical system is detected at a plurality of detection points set within the exposure area, using, for example, a multi-point focal position detection system (hereinafter also referred to as a "multi-point AF system"). Based on the detection result, so-called focus-leveling control is performed in which the position and tilt in the optical axis direction of the table or stage holding the wafer are controlled so that the wafer surface always coincides with the image plane of the projection optical system (falls within the range of the depth of focus of the image plane) within the exposure area (see, for example, Patent Document 1).

Also, in steppers, scanners, and the like, the wavelength of the exposure light has become shorter year by year with the miniaturization of integrated circuits, and the numerical aperture of the projection optical system has gradually increased (higher NA), thereby improving the resolution. On the other hand, because the depth of focus has become very narrow due to the shorter exposure wavelength and the larger NA of the projection optical system, there has been a risk that the focus margin during the exposure operation becomes insufficient. Therefore, an exposure apparatus using a liquid immersion method has recently attracted attention as a means of substantially shortening the exposure wavelength and substantially increasing (widening) the depth of focus compared with that in air (see Patent Document 2).

However, in an exposure apparatus using this liquid immersion method, or in another exposure apparatus in which the distance (working distance) between the lower end surface of the projection optical system and the wafer is small, it is difficult to arrange the above-described multi-point AF system in the vicinity of the projection optical system. On the other hand, the exposure apparatus is required to achieve high-precision control of the wafer surface position in order to realize high-precision exposure.

In a stepper, scanner, or the like, the position of the stage (table) that holds the substrate to be exposed (for example, a wafer) is generally measured using a high-resolution laser interferometer. However, the optical path length of the measurement beam of the laser interferometer used to measure the stage position is about several hundred millimeters or more, and with the demand for more precise stage position control accompanying the pattern miniaturization driven by higher integration of semiconductor elements, short-term fluctuations of the measurement values caused by air fluctuations arising from temperature changes and temperature gradients in the beam path of the laser interferometer can no longer be ignored.

Therefore, it is conceivable to use, in place of the interferometer, a sensor system that directly measures position information (surface position information) of the table surface in the optical axis direction. However, such a sensor system has various error factors that differ from those of the interferometer.

Patent Document 1: JP-A-6-283403
Patent Document 2: International Publication No. 2004/053955

From a first aspect, the present invention is a moving body driving method for driving a moving body substantially along a two-dimensional plane, in which position information of one surface of the moving body in a direction orthogonal to the two-dimensional plane is measured using a plurality of optical sensor heads, and, based on the measurement information and information on a change in the wavelength of the measurement beam of the sensor heads, the moving body is driven in at least one of the direction orthogonal to the two-dimensional plane and a tilt direction with respect to the two-dimensional plane.

According to this, the moving body can be driven in at least one of the direction orthogonal to the two-dimensional plane and the tilt direction with respect to the two-dimensional plane so as to cancel the position measurement error, in the direction orthogonal to the two-dimensional plane, caused by the change in the wavelength of the measurement beam of each sensor head.
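
To make the idea concrete, here is a minimal Python sketch of one plausible first-order compensation, assuming a head whose raw Z reading scales linearly with the probe-beam wavelength and several heads laid out along X; the function and variable names (correct_z, z_and_tilt, and so on) are illustrative and are not taken from the patent.

```python
def correct_z(raw_z_nm: float,
              measured_wavelength_nm: float,
              nominal_wavelength_nm: float) -> float:
    """First-order wavelength compensation of a surface-position reading.

    Assumes (hypothetically) that the head's raw Z reading scales linearly
    with the probe-beam wavelength, so a fractional wavelength drift causes
    the same fractional error in the reading.
    """
    return raw_z_nm * (measured_wavelength_nm / nominal_wavelength_nm)


def z_and_tilt(corrected_z_nm: list[float],
               head_x_mm: list[float]) -> tuple[float, float]:
    """Least-squares fit of piston (Z) and tilt from several heads along X."""
    n = len(corrected_z_nm)
    mean_x = sum(head_x_mm) / n
    mean_z = sum(corrected_z_nm) / n
    num = sum((x - mean_x) * (z - mean_z)
              for x, z in zip(head_x_mm, corrected_z_nm))
    den = sum((x - mean_x) ** 2 for x in head_x_mm)
    return mean_z, num / den          # piston [nm], tilt [nm per mm]


if __name__ == "__main__":
    raw = [12.0, 15.0, 18.1]                     # raw Z readings [nm] of three heads
    xs = [-100.0, 0.0, 100.0]                    # head positions along X [mm]
    corrected = [correct_z(z, 632.9912, 632.9910) for z in raw]
    print(z_and_tilt(corrected, xs))
```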

From a second aspect, the present invention is a pattern forming method including a step of placing an object on a moving body movable along a moving surface, and a step of driving the moving body by the moving body driving method of the present invention in order to form a pattern on the object.

According to this, since the moving body on which the object is placed is driven with high accuracy by the moving body driving method when the pattern is formed on the object, the pattern can be formed on the object with high accuracy.

From a third aspect, the present invention is a device manufacturing method including a pattern forming step, wherein, in the pattern forming step, a pattern is formed on a substrate using the pattern forming method of the present invention.

From a fourth aspect, the present invention is an exposure method for forming a pattern on an object by irradiation with an energy beam, wherein the moving body on which the object is placed is driven using the moving body driving method of the present invention for relative movement between the energy beam and the object.

According to this, for the relative movement between the energy beam irradiated onto the object and the object, the moving body on which the object is placed is driven with high accuracy using the moving body driving method of the present invention. Accordingly, a pattern can be formed on the object with high accuracy by scanning exposure.

From a fifth aspect, the present invention is a moving body drive system that drives a moving body substantially along a two-dimensional plane, comprising: a surface position measurement system having a plurality of sensor heads which are arranged two-dimensionally in a plane parallel to the two-dimensional plane and which measure position information of one surface of the moving body in a direction orthogonal to the two-dimensional plane; and a control device that measures the position information of the one surface of the moving body in the direction orthogonal to the two-dimensional plane using the plurality of sensor heads of the surface position measurement system and, based on the measurement information and information on a change in the wavelength of the measurement beam of the sensor heads, drives the moving body in at least one of the direction orthogonal to the two-dimensional plane and a tilt direction with respect to the two-dimensional plane.

According to this, the control device can drive the moving body in at least one of the direction orthogonal to the two-dimensional plane and the tilt direction with respect to the two-dimensional plane so as to cancel the position measurement error, in the direction orthogonal to the two-dimensional plane, caused by the change in the wavelength of the measurement beam of each sensor head.

From a sixth aspect, the present invention is a pattern forming apparatus comprising: a moving body on which an object is placed and which can move along a moving surface while holding the object; and the moving body drive system of the present invention, which drives the moving body in order to form a pattern on the object.

According to this, since the moving body holding the object is driven with high accuracy by the moving body drive system for pattern formation on the object, the pattern can be formed on the object with high accuracy.

From a seventh aspect, the present invention is an exposure apparatus for forming a pattern on an object by irradiation with an energy beam, comprising: a patterning device that irradiates the object with the energy beam; and the moving body drive system of the present invention, wherein the moving body on which the object is placed is driven by the moving body drive system for relative movement between the energy beam and the object.

According to this, for the relative movement between the energy beam irradiated onto the object and the object, the moving body on which the object is placed is driven with high accuracy by the moving body drive system of the present invention. Accordingly, a pattern can be formed on the object with high accuracy by scanning exposure.

Hereinafter, an embodiment of the present invention will be described with reference to the drawings.

FIG. 1 schematically shows the configuration of an exposure apparatus 100 according to an embodiment. The exposure apparatus 100 is a step-and-scan scanning exposure apparatus, that is, a so-called scanner. As will be described later, a projection optical system PL is provided in the present embodiment. In the following description, the direction parallel to the optical axis AX of the projection optical system PL is the Z-axis direction, the direction in which the reticle and the wafer are relatively scanned within a plane orthogonal to the Z-axis direction is the Y-axis direction, the direction orthogonal to the Z-axis and the Y-axis is the X-axis direction, and the rotation (tilt) directions about the X-axis, Y-axis, and Z-axis are the θx, θy, and θz directions, respectively.

The exposure apparatus 100 includes an illumination system 10, a reticle stage RST that holds a reticle R illuminated by exposure illumination light (hereinafter referred to as illumination light or exposure light) IL from the illumination system 10, a projection unit PU including a projection optical system PL that projects the illumination light IL emitted from the reticle R onto a wafer W, a stage device 50 having a wafer stage WST and a measurement stage MST, a control system for these, and the like. The wafer W is placed on the wafer stage WST.

The illumination system 10 includes, as disclosed in, for example, JP 2001-313250 A (corresponding to US Patent Application Publication No. 2003/0025890), a light source, an illuminance uniformizing optical system including an optical integrator and the like, and an illumination optical system having a reticle blind and the like (none of which are shown). The illumination system 10 illuminates a slit-shaped illumination area IAR on the reticle R, defined by the reticle blind (masking system), with the illumination light (exposure light) IL at substantially uniform illuminance. Here, as the illumination light IL, for example, ArF excimer laser light (wavelength 193 nm) is used. As the optical integrator, for example, a fly-eye lens, a rod integrator (internal-reflection-type integrator), a diffractive optical element, or the like can be used.

On the reticle stage RST, the reticle R, on whose pattern surface (the lower surface in FIG. 1) a circuit pattern or the like is formed, is fixed by, for example, vacuum suction. The reticle stage RST can be finely driven in the XY plane by a reticle stage drive system 11 (not shown in FIG. 1; see FIG. 6) including, for example, a linear motor, and can also be driven at a designated scanning speed in the scanning direction (the Y-axis direction, which is the left-right direction of the page in FIG. 1).

Position information of the reticle stage RST within its moving plane (including rotation information in the θz direction) is constantly detected by a reticle laser interferometer (hereinafter, "reticle interferometer") 116, via a movable mirror 15 (actually, a Y movable mirror (or retroreflector) having a reflecting surface orthogonal to the Y-axis direction and an X movable mirror having a reflecting surface orthogonal to the X-axis direction are provided), with a resolution of, for example, about 0.25 nm. The measurement values of the reticle interferometer 116 are sent to a main controller 20 (not shown in FIG. 1; see FIG. 6). The main controller 20 calculates the position of the reticle stage RST in the X-axis, Y-axis, and θz directions based on the measurement values of the reticle interferometer 116, and controls the position (and speed) of the reticle stage RST by controlling the reticle stage drive system 11 based on the calculation result. Instead of the movable mirror 15, the end surface of the reticle stage RST may be mirror-finished to form a reflecting surface (corresponding to the reflecting surface of the movable mirror 15). The reticle interferometer 116 may also be capable of measuring position information of the reticle stage RST with respect to at least one of the Z-axis, θx, and θy directions.
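
As an illustration only, the sketch below recovers X, Y, and θz from a hypothetical three-axis layout with two parallel Y measurement axes separated by a known baseline and one X axis; the layout and the names are assumptions, not the actual configuration of the reticle interferometer 116.

```python
import math

def stage_pose(y1_nm: float, y2_nm: float, x_nm: float, baseline_nm: float):
    """Recover X, Y and yaw (θz) from a hypothetical 3-axis interferometer layout.

    y1, y2 : readings of two Y measurement axes separated along X by `baseline`.
    x      : reading of one X measurement axis.
    """
    y = 0.5 * (y1_nm + y2_nm)                         # Y: average of the two Y axes
    theta_z = math.atan2(y1_nm - y2_nm, baseline_nm)  # yaw from their difference
    return x_nm, y, theta_z

# Example: a 1 nm difference over a 100 mm baseline corresponds to a 10 nrad yaw.
print(stage_pose(1_000_000.0, 999_999.0, 500.0, 100_000_000.0))
```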

The projection unit PU is arranged below the reticle stage RST in FIG. 1. The projection unit PU includes a lens barrel 40 and a projection optical system PL having a plurality of optical elements held in the lens barrel 40 in a predetermined positional relationship. As the projection optical system PL, for example, a refractive optical system including a plurality of lenses (lens elements) arranged along an optical axis AX parallel to the Z-axis direction is used. The projection optical system PL is, for example, telecentric on both sides and has a predetermined projection magnification (for example, 1/4, 1/5, or 1/8). Therefore, when the illumination area IAR is illuminated by the illumination light IL from the illumination system 10, the illumination light IL that has passed through the reticle R, whose pattern surface is arranged substantially coincident with the first surface (object plane) of the projection optical system PL, forms, via the projection optical system PL (projection unit PU), a reduced image of the circuit pattern of the reticle R within the illumination area IAR (a reduced image of a part of the circuit pattern) in an area IA (hereinafter also referred to as an exposure area) that is conjugate to the illumination area IAR on the wafer W, which is arranged on the second-surface (image plane) side and whose surface is coated with a resist (photosensitive agent). Then, by synchronously driving the reticle stage RST and the wafer stage WST, the reticle is moved relative to the illumination area IAR (illumination light IL) in the scanning direction (Y-axis direction) and the wafer W is moved relative to the exposure area IA (illumination light IL) in the scanning direction (Y-axis direction), whereby scanning exposure of one shot area (partitioned area) on the wafer W is performed and the reticle pattern is transferred to that shot area. That is, in this embodiment, a pattern is generated on the wafer W by the illumination system 10, the reticle R, and the projection optical system PL, and the pattern is formed on the wafer W by exposure of the photosensitive layer (resist layer) on the wafer W with the illumination light IL.

Although not shown, the projection unit PU is mounted on a lens barrel surface plate supported by three support columns via a vibration isolation mechanism. However, the present invention is not limited to this; for example, as disclosed in International Publication No. 2006/038952, the projection unit PU may be supported by being suspended from a main frame member (not shown) arranged above the projection unit PU or from a base member on which the reticle stage RST is arranged.

In the exposure apparatus 100 according to the present embodiment, since exposure using the liquid immersion method is performed, the aperture on the reticle side becomes larger as the numerical aperture NA of the projection optical system PL substantially increases. Therefore, in order to satisfy the Petzval condition and to avoid enlargement of the projection optical system, a catadioptric system including mirrors and lenses may be adopted as the projection optical system. In addition to the photosensitive layer, for example, a protective film (topcoat film) for protecting the wafer or the photosensitive layer may be formed on the wafer W.

Further, in the exposure apparatus 100 of the present embodiment, in order to perform exposure using the liquid immersion method, a nozzle unit 32 constituting a part of a local liquid immersion device 8 is provided so as to surround the periphery of the lower end portion of the lens barrel 40 that holds the optical element closest to the image plane (wafer W side) among the elements constituting the projection optical system PL, here a lens (hereinafter also referred to as the "tip lens") 191. In the present embodiment, as shown in FIG. 1, the lower end surface of the nozzle unit 32 is set substantially flush with the lower end surface of the tip lens 191. The nozzle unit 32 includes a supply port and a recovery port for the liquid Lq, a lower surface that faces the wafer W and in which the recovery port is provided, and a supply flow path and a recovery flow path connected to a liquid supply pipe 31A and a liquid recovery pipe 31B, respectively. As shown in FIG. 3, the liquid supply pipe 31A and the liquid recovery pipe 31B are inclined by about 45° with respect to the X-axis and Y-axis directions in plan view (as viewed from above), and are arranged symmetrically with respect to a straight line (reference axis) LV that passes through the center of the projection unit PU (the optical axis AX of the projection optical system PL, which in the present embodiment also coincides with the center of the exposure area IA) and is parallel to the Y-axis.

The other end of a supply pipe (not shown), one end of which is connected to a liquid supply device 5 (not shown in FIG. 1; see FIG. 6), is connected to the liquid supply pipe 31A, and the other end of a recovery pipe (not shown), one end of which is connected to a liquid recovery device 6 (not shown in FIG. 1; see FIG. 6), is connected to the liquid recovery pipe 31B.

The liquid supply device 5 includes a tank for storing the liquid, a pressurizing pump, a temperature control device, a valve for controlling the supply and stop of the liquid to the liquid supply pipe 31A, and the like. As the valve, it is desirable to use, for example, a flow rate control valve so that not only the supply and stop of the liquid but also the flow rate can be adjusted. The temperature control device adjusts the temperature of the liquid in the tank to, for example, the same temperature as that inside a chamber (not shown) in which the exposure apparatus is housed. The tank, pressurizing pump, temperature control device, valve, and the like need not all be provided in the exposure apparatus 100; at least some of them can be replaced by facilities of the factory or the like in which the exposure apparatus 100 is installed.

The liquid recovery device 6 includes a tank and a suction pump for recovering the liquid, a valve for controlling the recovery and stop of the liquid via the liquid recovery pipe 31B, and the like. As the valve, it is desirable to use a flow rate control valve, as with the valve of the liquid supply device 5. The tank, suction pump, valve, and the like need not all be provided in the exposure apparatus 100; at least some of them can be replaced by facilities of the factory or the like in which the exposure apparatus 100 is installed.

In this embodiment, pure water that transmits ArF excimer laser light (light with a wavelength of 193 nm) (hereinafter simply referred to as "water" unless otherwise required) is used as the liquid. Pure water has the advantages that it can easily be obtained in large quantities at a semiconductor manufacturing factory or the like and that it does not adversely affect the photoresist on the wafer, the optical lenses, and so on.

The refractive index n of water with respect to ArF excimer laser light is approximately 1.44. In this water, the wavelength of the illumination light IL is shortened to 193 nm × 1/n ≈ 134 nm.
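
The quoted value can be checked with a one-line calculation; the snippet below is just that arithmetic, using the refractive index 1.44 given in the text.

```python
wavelength_vacuum_nm = 193.0   # ArF excimer laser wavelength
n_water = 1.44                 # refractive index of water at 193 nm (from the text)
effective_wavelength_nm = wavelength_vacuum_nm / n_water
print(f"{effective_wavelength_nm:.0f} nm")   # ≈ 134 nm
```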

Each of the liquid supply device 5 and the liquid recovery device 6 includes a controller, and each controller is controlled by the main controller 20 (see FIG. 6). In response to an instruction from the main controller 20, the controller of the liquid supply device 5 opens the valve connected to the liquid supply pipe 31A to a predetermined opening degree and supplies the liquid (water) between the tip lens 191 and the wafer W via the liquid supply pipe 31A, the supply flow path, and the supply port. At this time, in response to an instruction from the main controller 20, the controller of the liquid recovery device 6 opens the valve connected to the liquid recovery pipe 31B to a predetermined opening degree and recovers the liquid (water) from between the tip lens 191 and the wafer W into the liquid recovery device 6 (its liquid tank) via the recovery port, the recovery flow path, and the liquid recovery pipe 31B. At this time, the main controller 20 gives commands to the controller of the liquid supply device 5 and the controller of the liquid recovery device 6 so that the amount of water supplied between the tip lens 191 and the wafer W is always equal to the amount of water recovered. Accordingly, a constant amount of liquid (water) Lq (see FIG. 1) is held between the tip lens 191 and the wafer W. In this case, the liquid (water) Lq held between the tip lens 191 and the wafer W is constantly replaced.
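
The supply/recovery balance can be pictured with the minimal control-loop sketch below; it assumes simple proportional valve adjustments and hypothetical parameter names, and is not the controller actually described here.

```python
def balance_flows(supply_opening: float,
                  recovery_opening: float,
                  measured_supply_ml_s: float,
                  measured_recovery_ml_s: float,
                  target_flow_ml_s: float,
                  gain: float = 0.05) -> tuple[float, float]:
    """One proportional adjustment step that drives both the supply flow and the
    recovery flow toward the same target, so that the amount of liquid held
    between the tip lens and the wafer stays constant."""
    supply_opening += gain * (target_flow_ml_s - measured_supply_ml_s)
    recovery_opening += gain * (target_flow_ml_s - measured_recovery_ml_s)
    clamp = lambda v: max(0.0, min(1.0, v))   # valve openings limited to 0..1
    return clamp(supply_opening), clamp(recovery_opening)

# Example step: recovery is slightly lagging, so its valve is opened a little more.
print(balance_flows(0.50, 0.48, 10.0, 9.6, 10.0))
```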

As is apparent from the above description, in the present embodiment, the local liquid immersion device 8 includes the nozzle unit 32, the liquid supply device 5, the liquid recovery device 6, the liquid supply pipe 31A, the liquid recovery pipe 31B, and the like. A part of the local liquid immersion device 8, for example at least the nozzle unit 32, may be supported by being suspended from the main frame (including the lens barrel surface plate) holding the projection unit PU, or may be provided on a frame member separate from the main frame. Alternatively, when the projection unit PU is suspended and supported as described above, the nozzle unit 32 may be suspended and supported integrally with the projection unit PU; in the present embodiment, however, the nozzle unit 32 is provided on a measurement frame that is suspended and supported independently of the projection unit PU. In this case, the projection unit PU need not be suspended and supported.

Even when the measurement stage MST is positioned below the projection unit PU, water can be filled between a measurement table (described later) and the tip lens 191 in the same manner as described above.

In the above description, one liquid supply pipe (nozzle) and one liquid recovery pipe (nozzle) are provided as an example. However, the present invention is not limited to this, and a configuration having a large number of nozzles, as disclosed in, for example, International Publication No. 99/49504, may be adopted as long as such an arrangement is possible in relation to the surrounding members. In short, any configuration may be used as long as the liquid can be supplied between the lowermost optical member (tip lens) 191 constituting the projection optical system PL and the wafer W. For example, the immersion mechanism disclosed in International Publication No. 2004/053955 or the immersion mechanism disclosed in European Patent Publication No. 1420298 can also be applied to the exposure apparatus of this embodiment.

Returning to FIG. 1, the stage device 50 includes the wafer stage WST and the measurement stage MST arranged above a base board 12, a measurement system 200 (see FIG. 6) that measures position information of these stages WST and MST, and a stage drive system 124 (see FIG. 6) that drives the stages WST and MST. As shown in FIG. 6, the measurement system 200 includes an interferometer system 118, an encoder system 150, and the like. As shown in FIG. 2, the interferometer system 118 includes a Y interferometer 16, X interferometers 126, 127, and 128, and Z interferometers 43A and 43B for measuring the position of the wafer stage WST, and a Y interferometer 18 and an X interferometer 130 for measuring the position of the measurement stage MST. The configuration of the interferometer system will be described in detail later.

Returning to FIG. 1, non-contact bearings (not shown), for example vacuum-preload-type aerostatic bearings (hereinafter referred to as "air pads"), are provided at a plurality of locations on the bottom surfaces of the wafer stage WST and the measurement stage MST. The wafer stage WST and the measurement stage MST are supported in a non-contact manner above the base board 12, with a clearance of about several μm, by the static pressure of pressurized air ejected from these air pads toward the upper surface of the base board 12. The stages WST and MST can be driven independently in the Y-axis direction (the left-right direction of the page in FIG. 1) and the X-axis direction (the direction orthogonal to the page in FIG. 1) by the stage drive system 124 (see FIG. 6) including linear motors.

The wafer stage WST includes a stage main body 91 and a wafer table WTB mounted on the stage main body 91. The wafer table WTB and the stage main body 91 can be driven relative to the base board 12 in directions of six degrees of freedom (X, Y, Z, θx, θy, θz) by a drive system including a linear motor and a Z-leveling mechanism (including a voice coil motor and the like).

On the wafer table WTB, a wafer holder (not shown) that holds the wafer W by vacuum suction or the like is provided. The wafer holder may be formed integrally with the wafer table WTB, but in the present embodiment the wafer holder and the wafer table WTB are configured separately, and the wafer holder is fixed in a recess of the wafer table WTB by, for example, vacuum suction. In addition, a plate (liquid repellent plate) 28 is provided on the upper surface of the wafer table WTB; the plate has a surface (liquid repellent surface) that has been treated to be liquid repellent with respect to the liquid Lq and that is substantially flush with the surface of the wafer W placed on the wafer holder, has a rectangular outer shape (contour), and has, at its center, a circular opening slightly larger than the wafer holder (wafer mounting region). The plate 28 is made of a material with a low coefficient of thermal expansion, such as glass or ceramics (for example, Zerodur (trade name) of Schott, Al₂O₃, or TiC), and a liquid repellent film is formed on its surface from, for example, a fluorine resin material such as Teflon (registered trademark), an acrylic resin material, or a silicone resin material. Further, as shown in the plan view of the wafer table WTB (wafer stage WST) in FIG. 4A, the plate 28 has a first liquid repellent area 28a that surrounds the circular opening and has a rectangular outer shape (contour), and a rectangular frame-shaped (annular) second liquid repellent area 28b arranged around the first liquid repellent area 28a. The first liquid repellent area 28a is formed such that, for example, at least a part of the liquid immersion area 14 that runs off the wafer surface during the exposure operation lies on it, and scales for an encoder system described later are formed on the second liquid repellent area 28b. At least a part of the surface of the plate 28 need not be flush with the wafer surface; that is, it may have a different height. The plate 28 may be a single plate, but in the present embodiment a plurality of plates, for example first and second liquid repellent plates corresponding to the first and second liquid repellent areas 28a and 28b, respectively, are combined. In the present embodiment, since water is used as the liquid Lq as described above, the first and second liquid repellent areas 28a and 28b are hereinafter also referred to as the first and second water repellent plates 28a and 28b, respectively.

In this case, the inner first water repellent plate 28a is irradiated with the exposure light IL, whereas the outer second water repellent plate 28b is hardly irradiated with the exposure light IL. In view of this, in the present embodiment, a first water repellent area is formed on the surface of the first water repellent plate 28a with a water repellent coat that is sufficiently resistant to the exposure light IL (in this case, light in the vacuum ultraviolet region), and a second water repellent area is formed on the surface of the second water repellent plate 28b with a water repellent coat that is less resistant to the exposure light IL than that of the first water repellent area. In general, it is difficult to give a glass plate a water repellent coat that is sufficiently resistant to the exposure light IL (in this case, light in the vacuum ultraviolet region), so it is effective to separate the plate in this way into two parts: the first water repellent plate 28a and the second water repellent plate 28b around it. The first and second water repellent areas may also be formed by applying two types of water repellent coat having different resistances to the exposure light IL to the upper surface of the same plate. Further, the same type of water repellent coat may be used for the first and second water repellent areas; for example, only one water repellent area may be formed on the same plate.

As is clear from FIG. 4A, a rectangular notch is formed at the center, in the X-axis direction, of the +Y-side end portion of the first water repellent plate 28a, and a measurement plate 30 is embedded in the rectangular space surrounded by the notch and the second water repellent plate 28b (inside the notch). A reference mark FM is formed at the center of the measurement plate 30 in its longitudinal direction (on the center line LL of the wafer table WTB), and a pair of aerial image measurement slit patterns (slit-shaped measurement patterns) SL are formed on one side and the other side of the reference mark in the X-axis direction, in an arrangement symmetric with respect to the center of the reference mark. As each aerial image measurement slit pattern SL, for example, an L-shaped slit pattern having sides along the Y-axis and X-axis directions, or two linear slit patterns extending in the X-axis and Y-axis directions, respectively, can be used.

Inside the wafer stage WST below each aerial image measurement slit pattern SL, as shown in FIG. 4B, an L-shaped housing 36 in which an optical system including an objective lens, a mirror, a relay lens, and the like is housed is attached in a partially embedded state, penetrating from the wafer table WTB through a part of the interior of the stage main body 91. Although not shown, the housings 36 are provided as a pair corresponding to the pair of aerial image measurement slit patterns SL.

The optical system inside the housing 36 guides the illumination light IL transmitted through the aerial image measurement slit pattern SL along an L-shaped path and emits it in the −Y direction. In the following description, for convenience, the optical system inside the housing 36 is referred to as a light transmission system 36, using the same reference numeral as the housing 36.

Furthermore, a large number of grating lines are directly formed on the upper surface of the second water repellent plate 28b at a predetermined pitch along each of its four sides. More specifically, Y scales 39Y₁ and 39Y₂ are formed in the regions on one side and the other side of the second water repellent plate 28b in the X-axis direction (the left and right sides in FIG. 4A), respectively. Each of the Y scales 39Y₁ and 39Y₂ is constituted by a reflective grating (for example, a diffraction grating) whose periodic direction is the Y-axis direction, in which grating lines 38 having the X-axis direction as their longitudinal direction are formed at a predetermined pitch along a direction parallel to the Y-axis (the Y-axis direction).

Similarly, X scales 39X₁ and 39X₂ are formed in the regions on one side and the other side of the second water repellent plate 28b in the Y-axis direction (the upper and lower sides in FIG. 4A), respectively, so as to be sandwiched between the Y scales 39Y₁ and 39Y₂. Each of the X scales 39X₁ and 39X₂ is constituted by a reflective grating (for example, a diffraction grating) whose periodic direction is the X-axis direction, in which grating lines 37 having the Y-axis direction as their longitudinal direction are formed at a predetermined pitch along a direction parallel to the X-axis (the X-axis direction). As each of these scales, a scale in which a reflective diffraction grating is formed on the surface of the second water repellent plate 28b by, for example, holography is used. In this case, each scale has a grating consisting of narrow slits, grooves, or the like engraved as graduations at a predetermined interval (pitch). The type of diffraction grating used for each scale is not limited; not only gratings in which grooves or the like are mechanically formed, but also gratings created by, for example, exposing interference fringes onto a photosensitive resin may be used. However, each scale is formed by engraving the graduations of the diffraction grating on, for example, a thin glass plate at a pitch between 138 nm and 4 μm, for example a 1 μm pitch. These scales are covered with the liquid repellent film (water repellent film) described above. In FIG. 4A, for convenience of illustration, the pitch of the grating is shown much wider than the actual pitch. The same applies to the other drawings.

As described above, in this embodiment, since the second water repellent plate 28b itself constitutes the scales, a glass plate with a low coefficient of thermal expansion is used as the second water repellent plate 28b. However, the present invention is not limited to this; a scale member made of a glass plate with a low coefficient of thermal expansion on which a grating is formed may be fixed to the upper surface of the wafer table WTB by, for example, a leaf spring (or vacuum suction) so as to prevent local expansion and contraction. In this case, a water repellent plate having the same water repellent coat over its entire surface may be used in place of the plate 28. Alternatively, the wafer table WTB may be formed of a material with a low coefficient of thermal expansion; in that case, the pair of Y scales and the pair of X scales may be formed directly on the upper surface of the wafer table WTB.

In order to protect the diffraction gratings, it is also effective to cover them with a water-repellent glass plate having a low coefficient of thermal expansion. Here, as the glass plate, a plate having the same thickness as the wafer, for example 1 mm thick, can be used, and it is installed on the upper surface of the wafer table WTB so that the surface of the glass plate is at the same height (surface position) as the wafer surface.

A positioning pattern for determining the relative position between an encoder head, described later, and the scale is provided near the end of each scale. This positioning pattern is composed of, for example, grating lines with different reflectivities. When the encoder head scans the positioning pattern, the intensity of the output signal of the encoder changes. Therefore, a threshold is set in advance, the position at which the intensity of the output signal exceeds the threshold is detected, and the relative position between the encoder head and the scale is set based on the detected position.
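
A minimal Python sketch of this threshold detection is shown below; the sampled signal, the threshold value, and the linear interpolation of the crossing point are illustrative assumptions.

```python
def find_positioning_edge(positions_um, intensities, threshold):
    """Return the scan position at which the encoder output first exceeds the
    threshold while the head crosses the positioning pattern.

    A linear interpolation between the last sample below and the first sample
    above the threshold refines the crossing position.
    """
    for i in range(1, len(intensities)):
        if intensities[i - 1] < threshold <= intensities[i]:
            frac = (threshold - intensities[i - 1]) / (intensities[i] - intensities[i - 1])
            return positions_um[i - 1] + frac * (positions_um[i] - positions_um[i - 1])
    return None   # pattern not found in this scan


# Example: the signal rises as the head passes the higher-reflectivity grating lines.
pos = [0.0, 1.0, 2.0, 3.0, 4.0]
sig = [0.10, 0.12, 0.45, 0.90, 0.95]
print(find_positioning_edge(pos, sig, threshold=0.5))   # ≈ 2.11 µm
```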

The measurement stage MST includes a stage main body 92 driven in the XY plane by a linear motor or the like (not shown), and a measurement table MTB mounted on the stage main body 92. Like the wafer stage WST, the measurement stage MST is configured to be driven relative to the base board 12 in directions of six degrees of freedom (X, Y, Z, θx, θy, θz) by a drive system (not shown).

In FIG. 6, the drive system for the wafer stage WST and the drive system for the measurement stage MST are shown together as the stage drive system 124.

Various measurement members are provided on the measurement table MTB (and the stage main body 92). As these measurement members, for example, as shown in FIGS. 2 and 5A, an illuminance unevenness sensor 94 having a pinhole-shaped light receiving portion that receives the illumination light IL on the image plane of the projection optical system PL, an aerial image measuring device 96 that measures an aerial image (projected image) of a pattern projected by the projection optical system PL, and a Shack-Hartmann-type wavefront aberration measuring device 98 as disclosed in, for example, International Publication No. 2003/065428 are adopted. As the wavefront aberration measuring device 98, for example, the one disclosed in International Publication No. 99/60361 (corresponding European Patent No. 1,079,223) can also be used.

As the illuminance unevenness sensor 94, for example, one having the same configuration as that disclosed in JP-A-57-117238 (corresponding US Pat. No. 4,465,368) can be used. As the aerial image measuring device 96, for example, one having the same configuration as that disclosed in JP-A-2002-14005 (corresponding US Patent Application Publication No. 2002/0041377) can be used. In the present embodiment, three measurement members (94, 96, 98) are provided on the measurement stage MST, but the types and/or number of measurement members are not limited to these. As the measurement members, for example, a transmittance measuring device that measures the transmittance of the projection optical system PL, and/or a measuring device that observes the above-described local liquid immersion device 8, for example the nozzle unit 32 (or the tip lens 191), may be used. Furthermore, a member different from the measurement members, for example a cleaning member that cleans the nozzle unit 32, the tip lens 191, and the like, may be mounted on the measurement stage MST.

In this embodiment, as can be seen from FIG. 5A, the frequently used sensors, such as the illuminance unevenness sensor 94 and the aerial image measuring device 96, are arranged on the center line CL (the Y-axis passing through the center) of the measurement stage MST. For this reason, in this embodiment, measurement using these sensors can be performed by moving the measurement stage MST only in the Y-axis direction, without moving it in the X-axis direction.

In addition to the sensors described above, an illuminance monitor having a light receiving portion with a predetermined area that receives the illumination light IL on the image plane of the projection optical system PL, as disclosed in, for example, JP-A-11-16816 (corresponding US Patent Application Publication No. 2002/0061469), may also be adopted, and it is desirable that this illuminance monitor be arranged on the center line as well.

In the present embodiment, corresponding to the immersion exposure in which the wafer W is exposed with the exposure light (illumination light) IL via the projection optical system PL and the liquid (water) Lq, the illuminance unevenness sensor 94 (and the illuminance monitor), the aerial image measuring device 96, and the wavefront aberration measuring device 98, which are used for measurements employing the illumination light IL, receive the illumination light IL via the projection optical system PL and the water. Further, each sensor may have, for example, only a part of it mounted on the measurement table MTB (and the stage main body 92), or the entire sensor may be arranged on the measurement table MTB (and the stage main body 92).

As shown in FIG. 5B, a frame-shaped attachment member 42 is fixed to the −Y-side end surface of the stage main body 92 of the measurement stage MST. In addition, a pair of light receiving systems 44 is fixed to the −Y-side end surface of the stage main body 92, inside the opening of the attachment member 42 and near the center position in the X-axis direction, so as to face the pair of light transmission systems 36 described above. Each light receiving system 44 includes an optical system such as a relay lens, a light receiving element such as a photomultiplier tube, and a housing that houses these. As can be seen from FIGS. 4B and 5B and the above description, in this embodiment, in a state in which the wafer stage WST and the measurement stage MST approach each other within a predetermined distance in the Y-axis direction (including a contact state), the illumination light IL transmitted through each aerial image measurement slit pattern SL of the measurement plate 30 is guided by each of the light transmission systems 36 described above and received by the light receiving element of each light receiving system 44. That is, the measurement plate 30, the light transmission systems 36, and the light receiving systems 44 constitute an aerial image measuring device 45 (see FIG. 6) similar to that disclosed in the aforementioned JP-A-2002-14005 (corresponding US Patent Application Publication No. 2002/0041377).

On the attachment member 42, a fiducial bar (hereinafter abbreviated as "FD bar") 46 consisting of a rod-shaped member with a rectangular cross section extends in the X-axis direction. The FD bar 46 is kinematically supported on the measurement stage MST by a fully kinematic mount structure.

Since the FD bar 46 serves as a prototype standard (measurement reference), optical glass ceramics with a low coefficient of thermal expansion, for example Zerodur (trade name) manufactured by Schott, is used as its material. The flatness of the upper surface (front surface) of the FD bar 46 is set as high as that of a so-called reference flat plate. In addition, as shown in FIG. 5A, a reference grating (for example, a diffraction grating) 52 whose periodic direction is the Y-axis direction is formed near each of one end and the other end of the FD bar 46 in its longitudinal direction. The pair of reference gratings 52 are formed a predetermined distance L apart, in an arrangement symmetric with respect to the center of the FD bar 46 in the X-axis direction, that is, with respect to the center line CL described above.

In addition, a plurality of reference marks M are formed on the upper surface of the FD bar 46 in the arrangement shown in FIG. 5A. The plurality of reference marks M are formed in three rows with respect to the Y-axis direction at the same pitch, and the rows are formed a predetermined distance apart from one another in the X-axis direction. As each reference mark M, a two-dimensional mark of a size detectable by a primary alignment system and secondary alignment systems, described later, is used. The shape (configuration) of the reference marks M may differ from that of the reference mark FM described above, but in the present embodiment the reference marks M and the reference mark FM have the same configuration, which is also the same as that of the alignment marks of the wafer W. In the present embodiment, the surface of the FD bar 46 and the surface of the measurement table MTB (which may include the measurement members described above) are also covered with a liquid repellent film (water repellent film).

In the exposure apparatus 100 of the present embodiment, although omitted from FIG. 1 to avoid complicating the drawing, actually, as shown in FIG. 3, a primary alignment system AL1 having its detection center at a position a predetermined distance away from the optical axis on the −Y side is arranged on the reference axis LV described above. The primary alignment system AL1 is fixed to the lower surface of the main frame (not shown) via a support member 54. On one side and the other side in the X-axis direction across the primary alignment system AL1, secondary alignment systems AL2₁, AL2₂, AL2₃, and AL2₄, whose detection centers are arranged almost symmetrically with respect to the straight line LV, are provided. That is, the five alignment systems AL1 and AL2₁ to AL2₄ have their detection centers arranged at different positions in the X-axis direction, that is, arranged along the X-axis direction.

Each secondary alignment system AL2ₙ (n = 1 to 4), as representatively shown for the secondary alignment system AL2₄, is fixed to the tip (rotating end) of an arm 56ₙ (n = 1 to 4) that can rotate about a rotation center O within a predetermined angle range, clockwise and counterclockwise in FIG. 3. In the present embodiment, a part of each secondary alignment system AL2ₙ (for example, at least the optical system that irradiates the detection area with the alignment light and guides the light generated from the target mark in the detection area to a light receiving element) is fixed to the arm 56ₙ, and the remaining part is provided on the main frame that holds the projection unit PU. The secondary alignment systems AL2₁, AL2₂, AL2₃, and AL2₄ each have their X position adjusted by rotating about the rotation center O. That is, the detection areas (or detection centers) of the secondary alignment systems AL2₁, AL2₂, AL2₃, and AL2₄ can be moved independently in the X-axis direction. Accordingly, the relative positions of the detection areas of the primary alignment system AL1 and the secondary alignment systems AL2₁, AL2₂, AL2₃, and AL2₄ can be adjusted in the X-axis direction. In the present embodiment, the X position of the secondary alignment systems AL2₁, AL2₂, AL2₃, and AL2₄ is adjusted by rotation of the arms, but this is not limiting; a drive mechanism that reciprocates the secondary alignment systems AL2₁, AL2₂, AL2₃, and AL2₄ in the X-axis direction may be provided. Further, at least one of the secondary alignment systems AL2₁, AL2₂, AL2₃, and AL2₄ may be movable not only in the X-axis direction but also in the Y-axis direction. Since a part of each secondary alignment system AL2ₙ is moved by the arm 56ₙ, position information of the part fixed to the arm 56ₙ can be measured by a sensor (not shown) such as an interferometer or an encoder. This sensor may measure only the position information of the secondary alignment system AL2ₙ in the X-axis direction, or may be capable of measuring position information in other directions, for example the Y-axis direction and/or the rotation directions (including at least one of the θx and θy directions).

On the upper surface of each arm 56ₙ, a vacuum pad 58ₙ (n = 1 to 4) consisting of a differential-exhaust-type air bearing is provided. The arm 56ₙ can be rotated in accordance with an instruction from the main controller 20 by a rotation drive mechanism 60ₙ (n = 1 to 4; not shown in FIG. 3, see FIG. 6) including, for example, a motor. After adjusting the rotation of the arm 56ₙ, the main controller 20 operates each vacuum pad 58ₙ to suction-fix each arm 56ₙ to the main frame (not shown). Thereby, the state after adjustment of the rotation angle of each arm 56ₙ, that is, the desired positional relationship between the primary alignment system AL1 and the four secondary alignment systems AL2₁ to AL2₄, is maintained.

If the portion of the main frame facing the arm 56ₙ is a magnetic body, an electromagnet may be used instead of the vacuum pad 58ₙ.

In the present embodiment, as each of the primary alignment system AL1 and the four secondary alignment systems AL2₁ to AL2₄, an image-processing-type FIA (Field Image Alignment) system is used that, for example, irradiates the target mark with a broadband detection light beam that does not expose the resist on the wafer, captures with an image sensor (CCD or the like) the image of the target mark formed on the light receiving surface by the light reflected from the target mark, together with the image of an index (not shown) (an index pattern on an index plate provided in each alignment system), and outputs their imaging signals. The imaging signal from each of the primary alignment system AL1 and the four secondary alignment systems AL2₁ to AL2₄ is sent to the main controller 20 of FIG. 6 via an alignment signal processing system (not shown).

The alignment systems are not limited to the FIA system; it is of course possible to use, alone or in appropriate combination, an alignment sensor that irradiates the target mark with coherent detection light and detects the scattered light or diffracted light generated from the target mark, or that detects two diffracted light beams generated from the target mark (for example, diffracted light of the same order, or diffracted light diffracted in the same direction) by causing them to interfere. In the present embodiment, the five alignment systems AL1 and AL2₁ to AL2₄ are fixed to the lower surface of the main frame holding the projection unit PU via the support member 54, but the present invention is not limited to this; for example, they may be provided on the measurement frame described above.

  Next, the configuration and the like of interferometer system 118 that measures position information of wafer stage WST and measurement stage MST will be described.

Here, prior to describing the specific configuration of the interferometer system, the measurement principle of an interferometer will be briefly explained. The interferometer projects a measurement beam (measurement light) toward a reflecting surface installed on the measurement object, combines the reflected light with reference light, receives the combined light, and measures the intensity of the combined light, that is, the interference light. Here, the relative phase (phase difference) between the two changes by KΔL owing to the optical path difference ΔL between the reflected light and the reference light. Accordingly, the intensity of the interference light changes in proportion to 1 + a·cos(KΔL). The homodyne detection method is assumed here, so the measurement light and the reference light have the same wavenumber K; the constant a is determined by the intensity ratio between the measurement light and the reference light. The reflecting surface for the reference light is generally provided on the side surface of the projection unit PU (in some cases, inside the interferometer unit), and this reference-light reflecting surface serves as the reference position for length measurement. Accordingly, the optical path difference ΔL reflects the distance from the reference position to the reflecting surface. Therefore, if the number of changes in the intensity of the interference light (the number of fringes) accompanying a change in the distance to the reflecting surface is counted, the displacement of the reflecting surface installed on the measurement object is calculated as the product of the counted value and the measurement unit. Here, the measurement unit is half the wavelength of the measurement light in the case of a single-pass interferometer, and a quarter of the wavelength in the case of a double-pass interferometer.
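
The fringe-counting relation can be written out directly; the short sketch below converts a fringe count into a displacement using the measurement units stated above (λ/2 for single pass, λ/4 for double pass). The wavelength value in the example is arbitrary.

```python
def displacement_from_fringes(fringe_count: float,
                              wavelength_nm: float,
                              double_pass: bool = False) -> float:
    """Displacement of the reflecting surface = fringe count × measurement unit.

    The measurement unit is λ/2 for a single-pass interferometer and λ/4 for a
    double-pass interferometer, as stated in the text.
    """
    unit_nm = wavelength_nm / (4.0 if double_pass else 2.0)
    return fringe_count * unit_nm

# Example: 100 fringes of a 633 nm beam (arbitrary example wavelength), double pass.
print(displacement_from_fringes(100, 633.0, double_pass=True))  # 15825.0 nm
```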

Incidentally, when an interferometer of the heterodyne detection method is employed, the wavenumber K₁ of the measurement light and the wavenumber K₂ of the reference light differ slightly. In this case, when the optical path lengths of the measurement light and the reference light are L₁ and L₂, respectively, the phase difference between the measurement light and the reference light is given by KΔL + ΔK·L₁, and the intensity of the interference light changes in proportion to 1 + a·cos(KΔL + ΔK·L₁), where the optical path difference ΔL = L₁ − L₂, ΔK = K₁ − K₂, and K = K₂. Here, if the optical path length L₂ of the reference light is sufficiently short and the approximation ΔL ≈ L₁ holds, the intensity of the interference light changes in proportion to 1 + a·cos[(K + ΔK)ΔL]. As can be seen, the intensity of the interference light oscillates periodically with the optical path difference ΔL at the reference-light wavelength 2π/K, and the envelope of this periodic oscillation oscillates with the long period 2π/ΔK. Therefore, in the heterodyne detection method, the direction of change of the optical path difference ΔL, that is, the displacement direction of the measurement object, can be known from the long-period beat.
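
The two periods mentioned above can be checked numerically; the sketch below evaluates 1 + a·cos(KΔL + ΔK·L₁) for example wavenumbers and prints the fast period 2π/K and the long beat period 2π/ΔK. All numeric values are arbitrary examples.

```python
import math

# Example (arbitrary) wavelengths for the measurement light and the reference light.
wavelength_meas_nm = 633.000        # hypothetical measurement-light wavelength
wavelength_ref_nm = 633.001         # slightly different reference-light wavelength
K1 = 2 * math.pi / wavelength_meas_nm
K2 = 2 * math.pi / wavelength_ref_nm
K, dK = K2, K1 - K2                 # K = K2 and ΔK = K1 − K2, as in the text
a = 0.8                             # contrast set by the intensity ratio

def intensity(dL_nm: float, L1_nm: float) -> float:
    """Interference intensity ∝ 1 + a·cos(KΔL + ΔK·L1) for the heterodyne case."""
    return 1.0 + a * math.cos(K * dL_nm + dK * L1_nm)

print("fast period 2π/K  =", 2 * math.pi / K, "nm")        # ≈ the reference wavelength
print("beat period 2π/ΔK =", 2 * math.pi / abs(dK), "nm")  # a much longer period
```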

The main error factor of the interferometer is the effect of temperature fluctuation of the atmosphere (air fluctuation) on the beam optical path. Suppose the wavelength λ of the light changes to λ + Δλ due to air fluctuation. Since the wavenumber K = 2π/λ, the change in the phase difference KΔL caused by the minute wavelength change Δλ is obtained as 2πΔLΔλ/λ². For a light wavelength λ = 1 μm and a minute change Δλ = 1 nm, the phase change is 2π × 100 for an optical path difference ΔL = 100 mm. This phase change corresponds to a displacement of 100 times the measurement unit. Thus, when the optical path length is set long, the interferometer is greatly affected by air fluctuation occurring over a short time and is inferior in short-term stability. In such a case, it is desirable to use the encoder or the surface position measurement system having Z heads, which will be described later.
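
The worked numbers in this paragraph can be reproduced with the short calculation below; converting the phase change into an equivalent displacement assumes a single-pass measurement unit of λ/2, and the variable names are illustrative.

```python
import math

wavelength_um = 1.0        # λ = 1 µm, as in the example above
d_wavelength_um = 1e-3     # Δλ = 1 nm
path_diff_um = 100_000.0   # ΔL = 100 mm

# Change of the phase difference KΔL caused by the wavelength change:
phase_change = 2 * math.pi * path_diff_um * d_wavelength_um / wavelength_um ** 2
fringes = phase_change / (2 * math.pi)           # 100 fringes, as stated in the text
apparent_error_um = fringes * wavelength_um / 2  # assuming a single-pass unit of λ/2

print(f"phase change = 2π × {fringes:.0f}")
print(f"apparent displacement error ≈ {apparent_error_um:.0f} µm")
```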

  The −Y end surface and −X end surface of wafer table WTB are mirror-finished to form reflecting surface 17a and reflecting surface 17b shown in FIG. 2. The Y interferometer 16 and the X interferometers 126, 127, and 128 (in FIG. 1, the X interferometers 126 to 128 are not shown; see FIG. 2), which constitute a part of the interferometer system 118 (see FIG. 6), project length measurement beams onto the reflecting surfaces 17a and 17b, respectively, and receive the respective reflected light, thereby measuring the displacement of each reflecting surface from a reference position (for example, a fixed mirror is arranged on the side surface of the projection unit PU and its surface is used as a reference surface), that is, position information of wafer stage WST in the XY plane, and the measured position information is supplied to main controller 20. In the present embodiment, as will be described later, a multi-axis interferometer having a plurality of measurement axes is used as each of these interferometers, except for some of them.

  On the other hand, as shown in FIG. 4B, a movable mirror 41 whose longitudinal direction is the X-axis direction is attached to the side surface of the stage main body 91 on the −Y side via a kinematic support mechanism (not shown). The movable mirror 41 is formed of a member in which a rectangular parallelepiped member and a pair of triangular-prism-like members fixed to one surface (the surface on the −Y side) of the rectangular parallelepiped are integrated. As can be seen from FIG. 2, the movable mirror 41 is designed such that its length in the X-axis direction is longer than the reflecting surface 17a of wafer table WTB by at least the interval between the two Z interferometers described later.

  The surface of the movable mirror 41 on the −Y side is mirror-finished, and three reflecting surfaces 41b, 41a, and 41c are formed as shown in FIG. 4B. The reflecting surface 41a constitutes a part of the end surface on the −Y side of the movable mirror 41, and extends in parallel with the XZ plane and in the X-axis direction. The reflective surface 41b constitutes a surface adjacent to the + Z side of the reflective surface 41a, forms an obtuse angle with respect to the reflective surface 41a, and extends in the X-axis direction. The reflection surface 41c constitutes a surface adjacent to the -Z side of the reflection surface 41a, and is provided symmetrically with the reflection surface 41b with the reflection surface 41a interposed therebetween.

  A pair of Z interferometers 43A and 43B, which constitute a part of the interferometer system 118 (see FIG. 6) and irradiate the movable mirror 41 with length measurement beams, are provided facing the movable mirror 41 (see FIGS. 1 and 2).

  The Z interferometers 43A and 43B are arranged away from the Y interferometer 16 by substantially the same distance on one side and the other side in the X-axis direction, and at positions slightly lower than the Y interferometer 16, as can be seen from FIGS. 1 and 2.

  As shown in FIG. 1, each of the Z interferometers 43A and 43B projects a measurement beam B1 along the Y-axis direction toward the reflecting surface 41b and a measurement beam B2 along the Y-axis direction toward the reflecting surface 41c (see FIG. 4B). In the present embodiment, a fixed mirror 47B having a reflecting surface orthogonal to the measurement beam B1 sequentially reflected by the reflecting surfaces 41b and 41c, and a fixed mirror 47A having a reflecting surface orthogonal to the measurement beam B2 sequentially reflected by the reflecting surfaces 41c and 41b, are each provided extending in the X-axis direction at a position a predetermined distance away from the movable mirror 41 in the −Y direction, without interfering with the measurement beams B1 and B2.

  The fixed mirrors 47A and 47B are supported by, for example, the same support (not shown) provided on a frame (not shown) that supports the projection unit PU.

As shown in FIG. 2 (and FIG. 13), the Y interferometer 16 projects measurement beams B41 and B42 onto the reflecting surface 17a of wafer table WTB along measurement axes in the Y-axis direction that are separated by the same distance to the −X side and the +X side from a straight line (reference axis) LV parallel to the Y axis that passes through the projection center (optical axis AX, see FIG. 1) of the projection optical system PL, and receives the respective reflected light, thereby detecting the position (Y position) of wafer table WTB in the Y-axis direction at the irradiation points of the measurement beams B41 and B42. In FIG. 1, the measurement beams B41 and B42 are representatively shown as the measurement beam B4.

Further, the Y interferometer 16 projects a measurement beam B3 along a measurement axis in the Y-axis direction toward the reflecting surface 41a, at a predetermined interval in the Z-axis direction from the measurement beams B41 and B42, and receives the measurement beam B3 reflected by the reflecting surface 41a, thereby detecting the Y position of the reflecting surface 41a of the movable mirror 41 (that is, of wafer stage WST).

Main controller 20 calculates the Y position of the reflecting surface 17a, that is, of wafer stage WST (more precisely, the displacement ΔY in the Y-axis direction), based on the average value of the measurement values of the measurement axes corresponding to the measurement beams B41 and B42 of the Y interferometer 16. Further, main controller 20 calculates the displacement (yawing amount) Δθz(Y) of wafer stage WST in the rotational direction about the Z axis (the θz direction) based on the difference between the measurement values of the measurement axes corresponding to the measurement beams B41 and B42. Main controller 20 also calculates the displacement (pitching amount) Δθx of wafer stage WST in the θx direction based on the Y positions (displacements ΔY in the Y-axis direction) of the reflecting surface 17a and the reflecting surface 41a.
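
A schematic sketch of the combinations just described is given below: ΔY from the average of the two B4 axes, yaw from their difference, and pitch from the difference between the table-level and mirror-level Y readings. The axis spacings and sign conventions are assumptions for illustration, not values of the embodiment.

```python
import math

def y_stage_attitude(b4_1: float, b4_2: float, b3: float,
                     axis_separation_x: float, z_separation: float):
    """b4_1, b4_2: readings of the two axes on reflecting surface 17a (separated in X);
    b3: reading of the axis on reflecting surface 41a (separated from them in Z).
    axis_separation_x and z_separation are assumed spacings, not embodiment values."""
    dY = 0.5 * (b4_1 + b4_2)                                 # Y displacement from the average
    d_theta_z = math.atan2(b4_2 - b4_1, axis_separation_x)   # yaw from the difference
    d_theta_x = math.atan2(dY - b3, z_separation)            # pitch from the two Y readings at different heights
    return dY, d_theta_z, d_theta_x
```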

Further, as shown in FIGS. 2 and 13, the X interferometer 126 projects measurement beams B51 and B52 onto wafer table WTB along two measurement axes separated by the same distance from a straight line (reference axis) LH in the X-axis direction passing through the optical axis of the projection optical system PL. Main controller 20 calculates the position of wafer stage WST in the X-axis direction (X position; more precisely, the displacement ΔX in the X-axis direction) based on the measurement values of the measurement axes corresponding to the measurement beams B51 and B52. Further, main controller 20 calculates the displacement (yawing amount) Δθz(X) of wafer stage WST in the θz direction from the difference between the measurement values of the measurement axes corresponding to the measurement beams B51 and B52. It should be noted that Δθz(X) obtained from the X interferometer 126 and Δθz(Y) obtained from the Y interferometer 16 are equal to each other, and each represents the displacement (yawing amount) Δθz of wafer stage WST in the θz direction.

  Further, as shown in FIGS. 14 and 15 and the like, a measurement beam B7 from the X interferometer 128 is projected onto the reflecting surface 17b of wafer table WTB along a straight line LUL parallel to the X axis that connects the unloading position UP, where the wafer on wafer table WTB is unloaded, and the loading position LP, where a wafer is loaded onto wafer table WTB. Further, as shown in FIGS. 2 and 15, a measurement beam B6 from the X interferometer 127 is projected onto the reflecting surface 17b of wafer table WTB along a straight line LA that passes through the detection center of the primary alignment system AL1 and is parallel to the X axis.

  Main controller 20 can also obtain the displacement ΔX of wafer stage WST in the X-axis direction from the measurement value of the measurement beam B6 of the X interferometer 127 and from the measurement value of the measurement beam B7 of the X interferometer 128. However, the three X interferometers 126, 127, and 128 are arranged at different positions with respect to the Y-axis direction: the X interferometer 126 is used at the time of exposure shown in FIG. 13, the X interferometer 127 at the time of wafer alignment, and the X interferometer 128 at the time of wafer loading shown in FIG. 14 and at the time of unloading.

  As shown in FIG. 1, the measurement beams B1 and B2 along the Y axis are projected toward the movable mirror 41 from the Z interferometers 43A and 43B described above. These measurement beams B1 and B2 are incident on the reflecting surfaces 41b and 41c of the movable mirror 41 at a predetermined incident angle (denoted θ/2). Then, the measurement beam B1 is sequentially reflected by the reflecting surfaces 41b and 41c and is incident perpendicularly on the reflecting surface of the fixed mirror 47B, while the measurement beam B2 is sequentially reflected by the reflecting surfaces 41c and 41b and is incident perpendicularly on the reflecting surface of the fixed mirror 47A. Then, the measurement beams B2 and B1 reflected by the reflecting surfaces of the fixed mirrors 47A and 47B are again sequentially reflected by the reflecting surfaces 41b and 41c, or again sequentially reflected by the reflecting surfaces 41c and 41b (tracing the optical paths at the time of incidence in reverse), and are received by the Z interferometers 43A and 43B.

  Here, if the displacement of the movable mirror 41 (that is, of wafer stage WST) in the Z-axis direction is ΔZo and the displacement in the Y-axis direction is ΔYo, the optical path length changes ΔL1 and ΔL2 of the measurement beams B1 and B2 are expressed by the following equations (1) and (2).

ΔL1 = ΔYo × (1 + cos θ) + ΔZo × sin θ (1)
ΔL2 = ΔYo × (1 + cos θ) −ΔZo × sin θ (2)
Therefore, ΔZo and ΔYo are obtained by the following equations (3) and (4) from the equations (1) and (2).
ΔZo = (ΔL1−ΔL2) / 2sin θ (3)
ΔYo = (ΔL1 + ΔL2) / {2 (1 + cos θ)} (4)

  The displacements ΔZo and ΔYo are determined by each of the Z interferometers 43A and 43B. Here, let the displacements determined by the Z interferometer 43A be ΔZoR and ΔYoR, let those determined by the Z interferometer 43B be ΔZoL and ΔYoL, and let D be the distance by which the measurement beams B1 and B2 projected by the Z interferometers 43A and 43B are separated in the X-axis direction (see FIG. 2). Under this premise, the displacement (yawing amount) Δθz of the movable mirror 41 (that is, of wafer stage WST) in the θz direction and the displacement (rolling amount) Δθy in the θy direction are obtained by the following equations (5) and (6).

Δθz = tan^−1 {(ΔYoR−ΔYoL) / D} (5)
Δθy = tan^−1 {(ΔZoL−ΔZoR) / D} (6)
Therefore, main controller 20 can calculate the displacements ΔZo, ΔYo, Δθz, and Δθy of wafer stage WST in four degrees of freedom based on the measurement results of the Z interferometers 43A and 43B, using equations (3) to (6) above.
Thus, main controller 20 can obtain the displacement of wafer stage WST in the six degrees of freedom direction (Z, X, Y, θz, θx, θy directions) from the measurement result of interferometer system 118.
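
The following sketch evaluates equations (3) to (6) directly. Combining the per-interferometer results into a single stage ΔZo and ΔYo by simple averaging is an assumption added here for illustration; the text only states that the four displacements are calculated from the measurement results of the Z interferometers 43A and 43B.

```python
import math

def z_interferometer_displacements(dL1: float, dL2: float, theta: float):
    """Equations (3) and (4): per-interferometer Z and Y displacements from the
    optical path length changes dL1, dL2 of beams B1 and B2 (incident angle theta/2)."""
    dZ = (dL1 - dL2) / (2.0 * math.sin(theta))
    dY = (dL1 + dL2) / (2.0 * (1.0 + math.cos(theta)))
    return dZ, dY

def stage_attitude(dZ_R: float, dY_R: float, dZ_L: float, dY_L: float, D: float):
    """Equations (5) and (6): yaw and roll from the two Z interferometers 43A (R) and 43B (L),
    separated by distance D in X. Averaging for the stage dZ and dY is an assumption."""
    d_theta_z = math.atan((dY_R - dY_L) / D)
    d_theta_y = math.atan((dZ_L - dZ_R) / D)
    dZ = 0.5 * (dZ_R + dZ_L)
    dY = 0.5 * (dY_R + dY_L)
    return dZ, dY, d_theta_z, d_theta_y
```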

  In the present embodiment, a single stage that can be driven with six degrees of freedom is adopted as wafer stage WST. Instead of this, a wafer stage WST may be adopted that includes a stage main body 91 that can move freely in the XY plane and a wafer table WTB mounted on the stage main body 91 and capable of being finely driven relative to the stage main body 91 in the Z-axis direction, the θx direction, and the θy direction; alternatively, a wafer stage WST having a so-called coarse/fine movement structure may be adopted, in which wafer table WTB is configured to be finely movable relative to the stage main body 91 in the X-axis direction, the Y-axis direction, and the θz direction. In these cases, however, the interferometer system 118 needs to be configured so that it can measure the position information of wafer table WTB in the six degrees of freedom directions. Similarly, the measurement stage MST may be configured by a stage main body 92 and a measurement table MTB, having three or six degrees of freedom, mounted on the stage main body 92. Further, instead of the reflecting surface 17a and the reflecting surface 17b, a movable mirror composed of a plane mirror may be provided on wafer table WTB.

  However, in the present embodiment, position information (including rotation information in the θz direction) of wafer stage WST in the XY plane is mainly measured by an encoder system described later, and the measurement values of the interferometers 16, 126, and 127 are used supplementarily, for example when correcting (calibrating) long-term fluctuations of the measurement values of the encoder system (for example, due to changes in the scales over time).

  At least a part (for example, the optical system) of the interferometer system 118 may be provided on the main frame that holds the projection unit PU, or may be provided integrally with the projection unit PU that is suspended and supported as described above; in the present embodiment, however, it is provided on the measurement frame described above.

  In the present embodiment, the position information of wafer stage WST is measured using the reflecting surface of the fixed mirror provided on the projection unit PU as a reference plane. However, the position where the reference plane is arranged is not limited to the projection unit PU, and the position information of wafer stage WST does not necessarily have to be measured using a fixed mirror.

  Further, in the present embodiment, the position information of wafer stage WST measured by the interferometer system 118 is not used in the exposure operation and alignment operation described later, but is used mainly for the calibration operation (that is, the calibration of the measurement values) of the encoder system. However, the measurement information of the interferometer system 118 (that is, at least one piece of the position information in the five degrees of freedom directions) may also be used in, for example, the exposure operation and/or the alignment operation. It is also conceivable to use the interferometer system 118 as a backup for the encoder system, which will be described in detail later. In the present embodiment, the encoder system measures position information of wafer stage WST in three degrees of freedom, that is, in the X-axis, Y-axis, and θz directions. Therefore, in the exposure operation and the like, of the measurement information of the interferometer system 118, only the position information in a direction different from the measurement directions (the X-axis, Y-axis, and θz directions) of the position information of wafer stage WST by the encoder system, for example, the θx direction and/or the θy direction, may be used, or, in addition to the position information in those different directions, position information in the same direction as a measurement direction of the encoder system (that is, at least one of the X-axis, Y-axis, and θz directions) may also be used. Further, the interferometer system 118 may be capable of measuring position information of wafer stage WST in the Z-axis direction, and in this case the position information in the Z-axis direction may be used in the exposure operation and the like.

  In addition, the interferometer system 118 (see FIG. 6) includes a Y interferometer 18 and an X interferometer 130 for measuring the two-dimensional position coordinates of the measurement table MTB. Reflecting surfaces 19a and 19b, similar to those of wafer table WTB described above, are also formed on the +Y end surface and the −X end surface of the measurement table MTB (see FIGS. 2 and 5A). The Y interferometer 18 and the X interferometer 130 of the interferometer system 118 (in FIG. 1, the X interferometer 130 is not shown; see FIG. 2) project length measurement beams onto these reflecting surfaces 19a and 19b as shown in FIG. 2 and receive the respective reflected light, thereby measuring the displacement of each reflecting surface from a reference position. Main controller 20 receives the measurement values of the Y interferometer 18 and the X interferometer 130 and calculates the position information of measurement stage MST (for example, including the position information in the X-axis and Y-axis directions and the rotation information in the θz direction).

  Note that a multi-axis interferometer similar to the Y interferometer 16 for the wafer stage WST may be used as the Y interferometer for the measurement table MTB. Further, as the X interferometer of measurement table MTB, a biaxial interferometer similar to X interferometer 126 for wafer stage WST may be used. Further, in order to measure the Z displacement, Y displacement, yawing amount, and rolling amount of the measurement stage MST, it is also possible to introduce an interferometer similar to the Z interferometers 43A and 43B for the wafer stage WST.

  Next, the configuration of an encoder system that measures position information (including rotation information in the θz direction) of wafer stage WST in the XY plane will be described.

  In the exposure apparatus 100 of the present embodiment, as shown in FIG. 3, four head units 62A to 62D of the encoder system are arranged so as to surround the nozzle unit 32 from four directions. These head units 62A to 62D are omitted from FIG. 3 and the like to avoid complicating the drawings, but in actuality they are fixed, in a suspended state, to the main frame holding the projection unit PU described above via support members.

As shown in FIG. 3, the head units 62A and 62C are arranged on the +X side and the −X side of the projection unit PU with the X-axis direction as their longitudinal direction. Each of the head units 62A and 62C includes a plurality of (here, five) Y heads 65i and 64j (i, j = 1 to 5) arranged at an interval WD in the X-axis direction. More specifically, each of the head units 62A and 62C includes a plurality of (here, four) Y heads (641 to 644 or 652 to 655) arranged at the interval WD, except around the projection unit PU, on a straight line (reference axis) LH that passes through the optical axis AX of the projection optical system PL and is parallel to the X axis, and one Y head (645 or 651) arranged around the projection unit PU at a position a predetermined distance away from the reference axis LH in the −Y direction, that is, on the −Y side of the nozzle unit 32. Each of the head units 62A and 62C further includes five Z heads described later.

The head unit 62A constitutes a multi-lens (here, five-lens) Y linear encoder (hereinafter abbreviated as "Y encoder" or "encoder" as appropriate) 70A (see FIG. 6) that measures the position (Y position) of wafer stage WST in the Y-axis direction using the Y scale 39Y1 described above. Similarly, the head unit 62C constitutes a multi-lens (here, five-lens) Y encoder 70C (see FIG. 6) that measures the Y position of wafer stage WST using the Y scale 39Y2 described above. Here, the interval WD in the X-axis direction of the five Y heads (64i or 65j), that is, of their measurement beams, provided in each of the head units 62A and 62C is set somewhat narrower than the width of the Y scales 39Y1 and 39Y2 in the X-axis direction (more precisely, than the length of the grid lines 38).

As shown in FIG. 3, the head unit 62B is arranged on the +Y side of the nozzle unit 32 (projection unit PU) and includes a plurality of, here four, X heads 665 to 668 arranged at the interval WD along the Y-axis direction on the reference axis LV described above. The head unit 62D is arranged on the −Y side of the primary alignment system AL1, on the opposite side of the head unit 62B across the nozzle unit 32 (projection unit PU), and includes a plurality of, here four, X heads 661 to 664 arranged at the interval WD on the reference axis LV.

The head unit 62B constitutes a multi-lens (here, four-lens) X linear encoder (hereinafter abbreviated as "X encoder" or "encoder" as appropriate) 70B (see FIG. 6) that measures the position (X position) of wafer stage WST in the X-axis direction using the X scale 39X1 described above. The head unit 62D constitutes a multi-lens (here, four-lens) X linear encoder 70D (see FIG. 6) that measures the X position of wafer stage WST using the X scale 39X2 described above.

Here, the interval between adjacent X heads 66 (measurement beams) included in the head units 62B and 62D is set narrower than the width of the X scales 39X1 and 39X2 in the Y-axis direction (more precisely, than the length of the grid lines 37). The distance between the X head 66 on the most −Y side of the head unit 62B and the X head 66 on the most +Y side of the head unit 62D is set slightly narrower than the width of wafer table WTB in the Y-axis direction, so that switching between these two X heads (connection, to be described later) is possible by movement of wafer stage WST in the Y-axis direction.

  In the present embodiment, head units 62F and 62E are further provided a predetermined distance away on the −Y side of the head units 62A and 62C, respectively. The head units 62E and 62F are omitted from FIG. 3 and the like to avoid complicating the drawings, but in actuality they are fixed, in a suspended state, to the main frame holding the projection unit PU described above via support members. Note that the head units 62E and 62F and the head units 62A to 62D described above may be suspended and supported integrally with the projection unit PU, for example when the projection unit PU is suspended and supported, or may be provided on the measurement frame described above.

The head unit 62E includes four Y heads 671 to 674 whose positions in the X-axis direction differ from one another. More specifically, the head unit 62E includes three Y heads 671 to 673 arranged on the −X side of the secondary alignment system AL21, on a straight line (reference axis) LA that passes through the detection center of the primary alignment system AL1 and is parallel to the X axis, at substantially the same interval as the interval WD described above, and one Y head 674 arranged at a position on the +Y side of the secondary alignment system AL21 that is a predetermined distance (a distance somewhat shorter than WD) away on the +X side from the innermost (+X side) Y head 673 and a predetermined distance away on the +Y side from the reference axis LA.

The head unit 62F is symmetrical to the head unit 62E with respect to the reference axis LV, and includes four Y heads 681 to 684 arranged symmetrically to the four Y heads 671 to 674 with respect to the reference axis LV. At the time of the alignment operation described later, at least one each of the Y heads 67 and 68 faces the Y scales 39Y2 and 39Y1, respectively, and the Y position (and θz rotation) of wafer stage WST is measured by these Y heads 67 and 68 (that is, by the Y encoders 70C and 70A composed of them).

In the present embodiment, at the time of the baseline measurement of the secondary alignment systems (Sec-BCHK (interval)) described later, the Y heads 673 and 682 adjacent in the X-axis direction to the secondary alignment systems AL21 and AL24 respectively face the pair of reference gratings 52 of the FD bar 46, and the Y position of the FD bar 46 is measured at the position of each reference grating 52 by the Y heads 673 and 682 facing the pair of reference gratings 52. In the following, the encoders composed of the Y heads 673 and 682 facing the pair of reference gratings 52 are referred to as Y linear encoders (hereinafter also abbreviated as "Y encoder" or "encoder" as appropriate) 70E and 70F, respectively (see FIG. 6).

  The six linear encoders 70A to 70F described above measure the position coordinates of wafer stage WST with a resolution of, for example, about 0.1 nm, and supply their measurement values to main controller 20. Main controller 20 controls the position of wafer stage WST in the XY plane based on three of the measurement values of the linear encoders 70A to 70D, and controls the rotation of the FD bar 46 in the θz direction based on the measurement values of the linear encoders 70E and 70F.
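
As a generic illustration (not the controller's actual algorithm) of how three encoder readings yield the in-plane position including rotation, one X head and two Y heads reading scales separated in X can be combined as below; the choice of heads and the separation value are assumptions.

```python
import math

def stage_xy_theta(x_head: float, y_head_1: float, y_head_2: float,
                   y_head_separation_x: float):
    """x_head: reading of an X encoder (e.g. 70B or 70D); y_head_1, y_head_2: readings of
    two Y encoders (e.g. 70A, 70C) on scales separated by y_head_separation_x in X."""
    X = x_head
    Y = 0.5 * (y_head_1 + y_head_2)
    theta_z = math.atan2(y_head_1 - y_head_2, y_head_separation_x)
    return X, Y, theta_z
```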

  In the exposure apparatus 100 of the present embodiment, as shown in FIG. 3, a multipoint focal position detection system (hereinafter abbreviated as "multipoint AF system") of the oblique incidence method, comprising an irradiation system 90a and a light receiving system 90b, is provided, having a configuration similar to that disclosed in, for example, Japanese Patent Laid-Open No. 6-283403 (corresponding US Pat. No. 5,448,332). In the present embodiment, as an example, the irradiation system 90a is arranged on the +Y side of the −X end portion of the head unit 62E described above, and the light receiving system 90b is arranged facing it on the +Y side of the +X end portion of the head unit 62F.

The plurality of detection points of the multipoint AF system (90a, 90b) are arranged at predetermined intervals along the X-axis direction on the surface to be detected. In the present embodiment, they are arranged, for example, in a matrix of one row and M columns (M being the total number of detection points) or two rows and N columns (N being half the total number of detection points). In FIG. 3, the plurality of detection points to which the detection beams are irradiated are not shown individually, but are shown as an elongated detection area (beam area) AF extending in the X-axis direction between the irradiation system 90a and the light receiving system 90b. Since the length of this detection area AF in the X-axis direction is set to be approximately the same as the diameter of the wafer W, position information in the Z-axis direction (surface position information) of substantially the entire surface of the wafer W can be measured by scanning the wafer W only about once in the Y-axis direction. The detection area AF is arranged between the liquid immersion area 14 (exposure area IA) and the detection areas of the alignment systems (AL1, AL21, AL22, AL23, AL24) with respect to the Y-axis direction, so that the multipoint AF system and the alignment systems can perform their detection operations in parallel. The multipoint AF system may be provided on the main frame or the like that holds the projection unit PU, but in the present embodiment it is provided on the measurement frame described above.

  Although the plurality of detection points are arranged in one row and M columns or two rows and N columns, the number of rows and/or the number of columns is not limited to this. However, when the number of rows is two or more, it is preferable that the positions of the detection points in the X-axis direction differ between the rows. Furthermore, although the plurality of detection points are arranged along the X-axis direction, the present invention is not limited to this, and all or some of the plurality of detection points may be arranged at different positions in the Y-axis direction. For example, the plurality of detection points may be arranged along a direction intersecting both the X axis and the Y axis. That is, it is only necessary that the plurality of detection points differ in position at least in the X-axis direction. In the present embodiment, the detection beam is irradiated onto the plurality of detection points, but the detection beam may instead be irradiated onto, for example, the entire detection area AF. Further, the length of the detection area AF in the X-axis direction does not have to be the same as the diameter of the wafer W.

  In the vicinity of the detection points located at both ends of the plurality of detection points of the multipoint AF system (90a, 90b), that is, in the vicinity of both ends of the detection area AF, pairs of surface position sensor heads for Z position measurement (hereinafter abbreviated as "Z heads") 72a, 72b and 72c, 72d are provided in an arrangement symmetrical with respect to the reference axis LV. These Z heads 72a to 72d are fixed to the lower surface of a main frame (not shown). The Z heads 72a to 72d may instead be provided on the measurement frame described above or the like.

  As each of the Z heads 72a to 72d, a sensor head that irradiates wafer table WTB with light from above, receives the reflected light, and measures position information of the surface of wafer table WTB at the irradiation point of that light in the Z-axis direction orthogonal to the XY plane is used; as an example, an optical displacement sensor head configured similarly to an optical pickup used in a CD drive device (a CD-pickup-type sensor head) is used.

Further, the head units 62A and 62C described above respectively include five Z heads 76j and 74i (i, j = 1 to 5) at the same X positions as the five Y heads 65j and 64i (i, j = 1 to 5) they include, but shifted in the Y position. Here, the three outer Z heads 763 to 765 and 741 to 743 belonging to the head units 62A and 62C, respectively, are arranged parallel to the reference axis LH at a predetermined distance from the reference axis LH in the +Y direction. The innermost Z heads 761 and 745 belonging to the head units 62A and 62C, respectively, are arranged on the +Y side of the projection unit PU, and the second innermost Z heads 762 and 744 are arranged on the −Y side of the Y heads 652 and 644, respectively. The five Z heads 76j and 74i (i, j = 1 to 5) belonging to each of the head units 62A and 62C are arranged symmetrically with respect to the reference axis LV. As each of the Z heads 76 and 74, an optical displacement sensor head similar to the Z heads 72a to 72d described above is adopted. The configuration of the Z heads will be described later.

Here, the Z head 743 is on the same straight line parallel to the Y axis as the Z heads 72a and 72b described above. Similarly, the Z head 763 is on the same straight line parallel to the Y axis as the Z heads 72c and 72d described above.

Further, the interval in the direction parallel to the Y axis between the Z head 743 and the Z head 744, and the interval in the direction parallel to the Y axis between the Z head 763 and the Z head 762, are substantially the same as the interval in the direction parallel to the Y axis between the Z heads 72a and 72b (which is the same as the interval in the direction parallel to the Y axis between the Z heads 72c and 72d). Further, the interval in the direction parallel to the Y axis between the Z head 743 and the Z head 745, and the interval in the direction parallel to the Y axis between the Z head 763 and the Z head 761, are slightly shorter than the interval in the direction parallel to the Y axis between the Z heads 72a and 72b.

The Z heads 72a to 72d, the Z heads 741 to 745, and the Z heads 761 to 765 described above are, as shown in FIG. 6, connected to main controller 20 via a signal processing/selection device 170. Main controller 20 selects an arbitrary Z head from among the Z heads 72a to 72d, the Z heads 741 to 745, and the Z heads 761 to 765 via the signal processing/selection device 170 and puts it into an operating state, and receives the surface position information detected by the Z head in the operating state via the signal processing/selection device 170. In the present embodiment, the Z heads 72a to 72d, the Z heads 741 to 745, and the Z heads 761 to 765, together with the signal processing/selection device 170, constitute a surface position measurement system 180 that measures position information of wafer stage WST in the Z-axis direction and in the directions of tilt with respect to the XY plane.

  In FIG. 3, the measurement stage MST is not shown, and a liquid immersion region formed by the water Lq held between the measurement stage MST and the tip lens 191 is indicated by reference numeral 14. In FIG. 3, reference sign UP indicates an unloading position where the wafer is unloaded on wafer table WTB, and reference sign LP indicates a loading position where the wafer is loaded onto wafer table WTB. In the present embodiment, the unload position UP and the loading position LP are set symmetrically with respect to the straight line LV. Note that the unload position UP and the loading position LP may be the same position.

FIG. 6 shows the main configuration of the control system of the exposure apparatus 100. This control system is mainly configured of a main controller 20, composed of a microcomputer (or a workstation), that comprehensively controls the entire apparatus. The memory 34, an external storage device connected to main controller 20, stores correction information for the measurement systems such as the interferometer system 118, the encoder system 150 (encoders 70A to 70F), and the Z heads 72a to 72d, 741 to 745, and 761 to 765. In FIG. 6, various sensors provided on the measurement stage MST, such as the illuminance unevenness sensor 94, the aerial image measuring device 96, and the wavefront aberration measuring device 98 described above, are collectively shown as a sensor group 99.

Next, the configuration and the like of the Z heads 72a to 72d, 741 to 745, and 761 to 765 will be described, taking the Z head 72a shown in FIG. 7 as a representative.

  As shown in FIG. 7, the Z head 72a includes a focus sensor FS, a sensor main body ZH that houses the focus sensor FS, a drive unit (not shown) that drives the sensor main body ZH in the Z-axis direction, a measurement unit ZE that measures the displacement of the sensor main body ZH in the Z-axis direction, and the like.

  As the focus sensor FS, an optical displacement sensor similar to an optical pickup used in a CD drive device or the like is used, which projects a probe beam LB onto the measurement target surface S and receives the reflected light to optically read the displacement of the measurement target surface S. The configuration and the like of the focus sensor will be described later. The output signal of the focus sensor FS is sent to the drive unit (not shown).

  The drive unit (not shown) includes an actuator, for example a voice coil motor; one of the mover and the stator of the voice coil motor is fixed to the sensor main body ZH, and the other is fixed to a part of a housing (not shown) that houses the sensor main body ZH, the measurement unit ZE, and the like. This drive unit drives the sensor main body ZH in the Z-axis direction according to the output signal from the focus sensor FS so as to keep the distance between the sensor main body ZH and the measurement target surface S constant (more precisely, so as to keep the measurement target surface S at the best focus position of the optical system of the focus sensor FS). Thereby, the sensor main body ZH follows the displacement of the measurement target surface S in the Z-axis direction, and the focus lock state is maintained.

  As the measurement unit ZE, in the present embodiment, a diffraction interference type encoder is used as an example. The measurement unit ZE includes a reflective diffraction grating EG, whose periodic direction is the Z-axis direction, provided on a side surface of a support member SM that is fixed to the upper surface of the sensor main body ZH and extends in the Z-axis direction, and an encoder head EH attached to a housing (not shown) so as to face the diffraction grating EG. The encoder head EH projects a probe beam EL onto the diffraction grating EG and receives the reflected/diffracted light from the diffraction grating EG with a light receiving element, thereby reading the displacement of the irradiation point of the probe beam EL from a reference point (for example, the origin), that is, the displacement of the sensor main body ZH in the Z-axis direction.

  In the present embodiment, as described above, in the focus lock state, the sensor main body ZH is displaced in the Z-axis direction so as to keep the distance from the measurement target surface S constant. Accordingly, the encoder head EH of the measuring unit ZE measures the displacement of the sensor body ZH in the Z-axis direction, whereby the surface position (Z position) of the measurement target surface S is measured. The measurement value of the encoder head EH is supplied to the main control device 20 via the signal processing / selection device 170 described above as the measurement value of the Z head 72a.
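
Because the gap between the sensor main body ZH and the measurement target surface S is held constant under focus lock, the surface Z position tracks the encoder reading one-to-one. The following helper expresses that bookkeeping; the origin and offset constants are assumed calibration values, not quantities defined in the text.

```python
def surface_z_position(encoder_reading: float, encoder_origin: float,
                       head_to_surface_offset: float = 0.0) -> float:
    """Under focus lock the gap between sensor body ZH and surface S is constant, so the
    surface Z position follows the encoder-measured Z displacement of the sensor body."""
    return (encoder_reading - encoder_origin) + head_to_surface_offset
```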

As an example, the focus sensor FS includes three parts of an irradiation system FS 1 , an optical system FS 2 , and a light receiving system FS 3 as shown in FIG. 8A.

The irradiation system FS 1 includes a light source LD made of, for example, a laser diode, and a diffraction grating plate (diffractive optical element) ZG disposed on the optical path of laser light emitted from the light source LD. Here, a tunable semiconductor laser (tunable diode laser) may be used as the light source LD.

The optical system FS2 includes, for example, a polarization beam splitter PBS, a collimator lens CL, a quarter-wave plate (λ/4 plate) WP, an objective lens OL, and the like, disposed sequentially on the optical path of the diffracted beam of the laser light generated by the diffraction grating plate ZG, that is, of the probe beam LB1.

The light receiving system FS3 includes, as an example, a cylindrical lens CYL and a quadrant light receiving element ZD disposed sequentially on the return optical path of the reflected beam LB2 of the probe beam LB1 from the measurement target surface S.

According to the focus sensor FS, linearly polarized laser light generated by the light source LD of the irradiation system FS 1 is projected onto the diffraction grating plate ZG, and diffracted light (probe beam) LB 1 is generated by the diffraction grating plate ZG. The central axis (principal ray) of the probe beam LB 1 is parallel to the Z axis and orthogonal to the measurement target surface S.

Then, the probe beam LB1, that is, the light of the polarization component that is P-polarized with respect to the separation surface of the polarization beam splitter PBS, enters the optical system FS2. The probe beam LB1 passes through the polarization beam splitter PBS, is converted into a parallel beam by the collimator lens CL, passes through the λ/4 plate WP to become circularly polarized light, is condensed by the objective lens OL, and is projected onto the measurement target surface S. As a result, reflected light (reflected beam) LB2, which is circularly polarized in the direction opposite to the incident light of the probe beam LB1, is generated at the measurement target surface S. Then, the reflected beam LB2 traces the optical path of the incident light (probe beam LB1) in the reverse direction, passes through the objective lens OL, the λ/4 plate WP, and the collimator lens CL, and travels toward the polarization beam splitter PBS. In this process, the reflected beam LB2 is converted into S-polarized light by passing through the λ/4 plate WP twice. Therefore, the traveling direction of the reflected beam LB2 is bent at the separation surface of the polarization beam splitter PBS, and the beam is sent to the light receiving system FS3.

In the light receiving system FS3, the reflected beam LB2 passes through the cylindrical lens CYL and is projected onto the detection surface of the quadrant light receiving element ZD. Here, the cylindrical lens CYL is a so-called kamaboko-shaped (half-cylinder) lens: as shown in FIG. 8B, its YZ section has a convex shape with the convex part in the Y-axis direction, while, as shown in FIG. 8C, its XY section has a rectangular shape. For this reason, the cross-sectional shape of the reflected beam LB2 transmitted through the cylindrical lens CYL is narrowed asymmetrically in the Z-axis direction and the X-axis direction, and astigmatism occurs.

The quadrant light receiving element ZD receives the reflected beam LB2 on its detection surface. As shown in FIG. 9A, the detection surface of the quadrant light receiving element ZD is square as a whole and is divided by its two diagonal lines into four detection areas a, b, c, and d. The center of the detection surface is denoted OZD.

Here, in the ideal focus state (in-focus state) shown in FIG. 8A, that is, in the state in which the probe beam LB1 is focused on the measurement target surface S0, the cross-sectional shape of the reflected beam LB2 on the detection surface is, as shown in FIG. 9C, a circle centered on the center OZD.

In the so-called front focus state in FIG. 8A, in which the probe beam LB1 is focused on the measurement target surface S1 (that is, a state equivalent to the state in which the measurement target surface S is at the ideal position S0 and the quadrant light receiving element ZD is at the position indicated by reference numeral 1 in FIGS. 8B and 8C), the cross-sectional shape of the reflected beam LB2 on the detection surface becomes a horizontally long ellipse centered on the center OZD, as shown in FIG. 9B.

Further, in the so-called rear focus state in FIG. 8A, in which the probe beam LB1 is focused on the measurement target surface S−1 (that is, a state equivalent to the state in which the measurement target surface S is at the ideal position S0 and the quadrant light receiving element ZD is at the position indicated by reference numeral −1 in FIGS. 8B and 8C), the cross-sectional shape of the reflected beam LB2 on the detection surface becomes a vertically long ellipse centered on the center OZD, as shown in FIG. 9D.

  In an arithmetic circuit (not shown) connected to the quadrant light receiving element ZD, with the intensities of the light received by the four detection areas a, b, c, and d denoted Ia, Ib, Ic, and Id respectively, the focus error I expressed by the following equation (7) is calculated and output to the drive unit (not shown).

I = (Ia + Ic) − (Ib + Id) (7)
In the ideal focus state described above, the areas of the beam cross section in the four detection areas are equal to one another, so I = 0. In the front focus state described above, I < 0 from equation (7), and in the rear focus state, I > 0 from equation (7).
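
Equation (7) translates directly into code as below. The normalized variant is a common practical addition assumed here for illustration and is not part of the described embodiment.

```python
def focus_error(Ia: float, Ib: float, Ic: float, Id: float) -> float:
    """Equation (7): astigmatic focus error from the four quadrant intensities.
    I = 0 in focus, I < 0 in the front focus state, I > 0 in the rear focus state."""
    return (Ia + Ic) - (Ib + Id)

def normalized_focus_error(Ia: float, Ib: float, Ic: float, Id: float) -> float:
    """Assumed variant: normalize by the total intensity so the error signal is
    insensitive to the reflectivity of the measurement target surface."""
    total = Ia + Ib + Ic + Id
    return focus_error(Ia, Ib, Ic, Id) / total if total else 0.0
```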

The drive unit (not shown) receives the focus error I from the light receiving system FS3 of the focus sensor FS and drives the sensor main body ZH housing the focus sensor FS in the Z-axis direction so as to restore I = 0. By this operation of the drive unit, the sensor main body ZH is displaced so as to follow the Z displacement of the measurement target surface S; therefore, the probe beam always focuses on the measurement target surface S, in other words, the distance between the sensor main body ZH and the measurement target surface S is always kept constant (the focus lock state is maintained).

  On the other hand, the drive unit (not shown) can also drive and position the sensor body ZH in the Z-axis direction so that the measurement result of the measurement unit ZE matches the input signal from the outside of the Z head 72a. Therefore, the focal point of the probe beam LB can be positioned at a position different from the surface position of the actual measurement target surface S. By this operation of the drive unit (scale servo control), it is possible to execute a return process when switching the Z head, which will be described later, an avoidance process when an output signal abnormality occurs, and the like.

  In the present embodiment, as described above, an encoder is employed as the measurement unit ZE, and the Z displacement of the diffraction grating EG installed on the sensor main body ZH is read using the encoder head EH. Since the encoder head EH is a relative position sensor that measures the displacement of the measurement target (the diffraction grating EG) from a reference point, the reference point needs to be determined. In the present embodiment, the reference position (for example, the origin) of the Z displacement may be determined by detecting the end position of the diffraction grating EG or, when a positioning pattern is provided on the diffraction grating EG, by detecting that positioning pattern. In either case, a reference surface position of the measurement target surface S can be defined corresponding to the reference position of the diffraction grating EG, and the Z displacement of the measurement target surface S from that reference surface position, that is, its position in the Z-axis direction, can be measured. The reference position (for example, the origin) of the diffraction grating EG (that is, the reference surface position of the measurement target surface S) is always set when the Z head is first started, such as when the exposure apparatus 100 is started. In this case, it is desirable that the reference position be set near the center of the movement range of the sensor main body ZH; therefore, a drive coil for adjusting the focal position of the optical system may be provided and the Z position of the objective lens OL adjusted so that the reference surface position corresponding to this near-center reference position coincides with the focal position of the optical system of the focus sensor FS.

In the Z head 72a, the sensor main body ZH and the measurement unit ZE are both housed in a housing (not shown), and the optical path length of the portion of the probe beam LB1 exposed to the outside of the housing is extremely short, so the influence of air fluctuation is very small. Therefore, a sensor such as the Z head is remarkably superior to, for example, a laser interferometer in measurement stability over the short time scales on which air fluctuation occurs (short-term stability).

The other Z heads are configured and function in the same manner as the Z head 72a described above. Thus, in the present embodiment, as each Z head, an encoder configured to observe a diffraction grating surface such as the Y scales 39Y1 and 39Y2 from above (the +Z direction) is employed. Accordingly, by measuring surface position information at different positions on the upper surface of wafer table WTB with a plurality of Z heads, the position of wafer stage WST in the Z-axis direction, its θy rotation (rolling), and its θx rotation (pitching) can be measured. However, in the present embodiment, since the accuracy of the pitching control of wafer stage WST is not particularly important at the time of exposure, the surface position measurement system composed of the Z heads does not measure pitching, and the system is configured such that one Z head faces each of the Y scales 39Y1 and 39Y2 on wafer table WTB.

  Next, detection (hereinafter referred to as focus mapping) of position information (surface position information) regarding the Z-axis direction of the surface of the wafer W performed by the exposure apparatus 100 of the present embodiment will be described.

In this focus mapping, as shown in FIG. 10A, main controller 20 manages the position of wafer stage WST in the XY plane based on the X head 663 (X linear encoder 70D) facing the X scale 39X2 and the two Y heads 682 and 673 (Y linear encoders 70A and 70C) facing the Y scales 39Y1 and 39Y2, respectively. In the state of FIG. 10A, the straight line (center line) parallel to the Y axis that passes through the center of wafer table WTB (which substantially coincides with the center of the wafer W) coincides with the reference line LV described above.

  In this state, main controller 20 starts scanning of wafer stage WST in the +Y direction. After this scanning starts, and by the time the detection beams of the multipoint AF system (90a, 90b) begin to be applied onto the wafer W as wafer stage WST moves in the +Y direction, the Z heads 72a to 72d and the multipoint AF system (90a, 90b) are both put into operation (turned ON).

  Then, in the state where the Z heads 72a to 72d and the multipoint AF system (90a, 90b) are operating simultaneously, while wafer stage WST proceeds in the +Y direction as shown in FIG. 10B, position information (surface position information) regarding the Z-axis direction of the surface of wafer table WTB (the surface of the plate 28) measured by the Z heads 72a to 72d, and position information (surface position information) regarding the Z-axis direction of the surface of the wafer W at the plurality of detection points detected by the multipoint AF system (90a, 90b), are acquired at a predetermined sampling interval, and the acquired pieces of surface position information and the measurement values of the Y linear encoders 70A and 70C at each sampling time are associated with one another and sequentially stored in a memory (not shown).

  When the detection beams of the multipoint AF system (90a, 90b) cease to be applied to the wafer W, main controller 20 ends the above sampling and converts the surface position information at each detection point of the multipoint AF system (90a, 90b) into data based on the surface position information obtained simultaneously by the Z heads 72a to 72d.

To describe this in more detail, based on the average value of the measurement values of the Z heads 72a and 72b, surface position information is obtained at a predetermined point on the area near the −X side end portion of the plate 28 (the area where the Y scale 39Y2 is formed) (for example, a point corresponding to the midpoint of the respective measurement points of the Z heads 72a and 72b, that is, a point on substantially the same line parallel to the X axis as the array of the plurality of detection points of the multipoint AF system (90a, 90b); this point is hereinafter referred to as the left measurement point P1). Further, based on the average value of the measurement values of the Z heads 72c and 72d, surface position information is obtained at a predetermined point on the area near the +X side end portion of the plate 28 (the area where the Y scale 39Y1 is formed) (for example, a point corresponding to the midpoint of the respective measurement points of the Z heads 72c and 72d, that is, a point on substantially the same line parallel to the X axis as the array of the plurality of detection points of the multipoint AF system (90a, 90b); this point is hereinafter referred to as the right measurement point P2). Then, as shown in FIG. 10C, main controller 20 converts the surface position information at each detection point of the multipoint AF system (90a, 90b) into surface position data z1 to zk based on a straight line connecting the surface position of the left measurement point P1 and the surface position of the right measurement point P2. Main controller 20 performs such conversion on the information captured at all sampling times.

By acquiring the above conversion data in advance in this way, at the time of exposure, for example, main controller 20 measures the surface of wafer table WTB (a point on the area where the Y scale 39Y2 is formed (a point near the left measurement point P1) and a point on the area where the Y scale 39Y1 is formed (a point near the right measurement point P2)) with the Z heads 74i and 76j described above, and calculates the Z position and the θy rotation (rolling) amount θy of wafer stage WST. By performing a predetermined calculation using this Z position, the rolling amount θy, and the θx rotation (pitching) amount θx of wafer stage WST measured by the Y interferometer 16, the Z position (Z0), the rolling amount θy, and the pitching amount θx of the wafer table WTB surface at the center of the exposure area IA (the exposure center) are calculated. Based on these calculation results, a straight line passing through the exposure center and connecting the surface position of the left measurement point P1 and the surface position of the right measurement point P2 is obtained, and by using this straight line and the surface position data z1 to zk, it becomes possible to control the position of the upper surface of the wafer W (focus leveling control) without actually acquiring the surface position information of the wafer W. Therefore, there is no problem even if the multipoint AF system is arranged at a position away from the projection optical system PL, so that the focus mapping of the present embodiment can be suitably applied even to an exposure apparatus with a narrow working distance.
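
A minimal sketch of the two bookkeeping steps described above follows: during mapping, each AF reading is re-expressed relative to the straight line through the Z-head-derived surface positions of P1 and P2; at exposure time, the stored value is added back onto the line re-measured with the Z heads 74i and 76j. The coordinate conventions and function names are assumptions for illustration.

```python
def map_to_table_reference(z_af: float, x_af: float,
                           p1: tuple[float, float], p2: tuple[float, float]) -> float:
    """Focus-mapping conversion: express one AF reading z_af at in-plane position x_af
    relative to the line connecting P1 = (x1, z1) and P2 = (x2, z2), the Z-head-derived
    surface positions of the left and right measurement points."""
    x1, z1 = p1
    x2, z2 = p2
    z_line = z1 + (z2 - z1) * (x_af - x1) / (x2 - x1)  # table surface interpolated at x_af
    return z_af - z_line                                # stored surface position datum z_k

def wafer_surface_during_exposure(z_k: float, z_line_at_point: float) -> float:
    """At exposure time the wafer surface is recovered by adding the stored z_k back onto the
    line re-measured with the Z heads, without re-measuring the wafer itself."""
    return z_line_at_point + z_k
```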

In the above description, the surface position of the left measurement point P1 and the surface position of the right measurement point P2 are calculated based on the average value of the measurement values of the Z heads 72a and 72b and the average value of the measurement values of the Z heads 72c and 72d, respectively. However, the present invention is not limited to this, and the surface position information at each detection point of the multipoint AF system (90a, 90b) may instead be converted into surface position data based on, for example, a straight line connecting the surface positions measured by the Z heads 72a and 72c. In this case, the difference between the measurement value of the Z head 72a and the measurement value of the Z head 72b, and the difference between the measurement value of the Z head 72c and the measurement value of the Z head 72d, acquired at each sampling timing, are obtained. Then, when performing surface position control at the time of exposure or the like, the surface of wafer table WTB is measured with the Z heads 74i and 76j to calculate the Z position and θy rotation of wafer stage WST, and by performing a predetermined calculation using these, the pitching amount θx of wafer stage WST measured by the Y interferometer 16, the surface position data z1 to zk described above, and the above differences, the surface position of the wafer W can be controlled without actually acquiring the surface position information of the wafer surface.

  However, the above description is based on the assumption that there are no irregularities in the X-axis direction on the surface of wafer table WTB. In the following, it is assumed that there are no irregularities in the X-axis direction on the surface of wafer table WTB.

  Next, focus calibration will be described. Focus calibration refers to a process of performing, in a certain reference state, processing for obtaining the relationship between the surface position information at one end and the other end of wafer table WTB in the X-axis direction and the detection result (surface position information) at a representative detection point of the multipoint AF system (90a, 90b) on the surface of the measurement plate 30 (the first half of the focus calibration), and, in a state similar to that reference state, processing for obtaining the surface position information at one end and the other end of wafer table WTB in the X-axis direction corresponding to the best focus position of the projection optical system PL detected using the aerial image measurement device 45 (the latter half of the focus calibration), and then, based on these, obtaining the offset at the representative detection point of the multipoint AF system (90a, 90b), that is, the deviation between the best focus position of the projection optical system PL and the detection origin of the multipoint AF system, and the like.

At the time of focus calibration, as shown in FIG. 11A, main controller 20 manages the position of wafer stage WST in the XY plane based on the X head 662 (X linear encoder 70D) facing the X scale 39X2 and the two Y heads 682 and 673 (Y linear encoders 70A and 70C) facing the Y scales 39Y1 and 39Y2, respectively. In the state of FIG. 11A, the center line of wafer table WTB coincides with the reference line LV. Further, in the state of FIG. 11A, wafer stage WST is at a position in the Y-axis direction where the detection beams from the multipoint AF system (90a, 90b) are irradiated onto the measurement plate 30 described above. Although not shown here, the measurement stage MST is on the +Y side of wafer stage WST, and water is held between the FD bar 46 and wafer table WTB described above and the tip lens 191 of the projection optical system PL (see FIG. 18).

(A) In this state, main controller 20 performs the first half of the focus calibration as follows. That is, while detecting the surface position information at one end and the other end of wafer table WTB in the X-axis direction with the Z heads 72a, 72b, 72c, and 72d located in the vicinity of the detection points at both ends of the detection area of the multipoint AF system (90a, 90b), main controller 20 detects the surface position information of the surface of the measurement plate 30 described above (see FIG. 3) using the multipoint AF system (90a, 90b), with that surface position information as a reference. In this way, the relationship between the measurement values of the Z heads 72a, 72b, 72c, and 72d (the surface position information at one end and the other end of wafer table WTB in the X-axis direction) and the detection result (surface position information) of the multipoint AF system (90a, 90b) at the detection point on the surface of the measurement plate 30 (the detection point located at or near the center of the plurality of detection points), in the state where the center line of wafer table WTB coincides with the reference line LV, is obtained.

(B) Next, main controller 20 moves wafer stage WST in the +Y direction by a predetermined distance and stops wafer stage WST at a position where measurement plate 30 is disposed directly under projection optical system PL. Then, main controller 20 performs the latter half of the focus calibration as follows. That is, as shown in FIG. 11B, while controlling the position (Z position) of measurement plate 30 (wafer stage WST) in the optical axis direction of projection optical system PL with the surface position information measured by Z heads 72a to 72d as a reference, as in the first half of the focus calibration described above, main controller 20 measures an aerial image of a measurement mark formed on reticle R or on a mark plate (not shown) on reticle stage RST using aerial image measurement device 45, by the Z-direction scan measurement disclosed in, for example, International Publication No. 2005/124834, and measures the best focus position of projection optical system PL based on the measurement result. In synchronization with the capture of the output signal from aerial image measurement device 45 during the above Z-direction scan measurement, main controller 20 also captures the measured values of the pair of Z heads 74_3 and 76_3, that is, the surface position information at one end and the other end of wafer table WTB in the X-axis direction. Then, the values of Z heads 74_3 and 76_3 corresponding to the best focus position of projection optical system PL are stored in a memory (not shown). The position (Z position) of measurement plate 30 (wafer stage WST) in the optical axis direction of projection optical system PL is controlled in the latter half of the focus calibration with the surface position information measured by Z heads 72a to 72d as a reference because the latter half of the focus calibration is performed during the focus mapping described above.

  In this case, as shown in FIG. 11B, since liquid immersion region 14 is formed between projection optical system PL and measurement plate 30 (wafer stage WST), the above-described measurement of the aerial image is performed via projection optical system PL and the water. Although not shown in FIG. 11B, the measurement plate 30 and the like of aerial image measurement device 45 are mounted on wafer stage WST and the light receiving elements and the like are mounted on measurement stage MST, so the measurement of the aerial image is performed while wafer stage WST and measurement stage MST are kept in contact (or in proximity) (see FIG. 20).

(C) As a result, based on the relationship, obtained in the first half of the focus calibration in (a) above, between the measured values of Z heads 72a to 72d (the surface position information at one end and the other end of wafer table WTB in the X-axis direction) and the detection result (surface position information) of the multipoint AF system (90a, 90b) on the surface of measurement plate 30, and on the measured values of Z heads 74_3 and 76_3 corresponding to the best focus position of projection optical system PL (that is, the surface position information at one end and the other end of wafer table WTB in the X-axis direction) obtained in the latter half of the focus calibration in (b) above, main controller 20 can obtain the offset at the representative detection point of the multipoint AF system (90a, 90b), that is, the deviation between the best focus position of projection optical system PL and the detection origin of the multipoint AF system. In the present embodiment, this representative detection point is, for example, the detection point at or near the center of the plurality of detection points, but its number and/or position may be arbitrary. In this case, main controller 20 adjusts the detection origin of the multipoint AF system so that the offset at the representative detection point becomes zero. This adjustment may be performed optically, for example by adjusting the angle of a parallel plane plate (not shown) inside light receiving system 90b, or the detection offset may be adjusted electrically. Alternatively, the offset may simply be stored without adjusting the detection origin. Here, the detection origin is adjusted by the optical method described above. This completes the focus calibration of the multipoint AF system (90a, 90b). Note that in the optical adjustment of the detection origin it is difficult to make the offset zero at all of the remaining detection points other than the representative detection point, so it is preferable to store the offsets after the optical adjustment for the remaining detection points.

  Next, offset correction of the detection values among the plurality of light receiving elements (sensors) individually corresponding to the plurality of detection points of the multipoint AF system (90a, 90b) (hereinafter referred to as AF sensor offset correction) will be described.

  In this AF sensor offset correction, as shown in FIG. 12A, main controller 20 irradiates the detection beam of the multipoint AF system (90a, 90b) from irradiation system 90a onto the FD bar 46 provided with a predetermined reference plane, and captures the output signal from light receiving system 90b of the multipoint AF system (90a, 90b), which receives the reflected light from the surface (reference plane) of the FD bar 46.

  In this case, if the surface of the FD bar 46 is set parallel to the XY plane, main controller 20 can perform the AF sensor offset correction by obtaining, based on the output signal acquired as described above, the relationship among the detection values (measurement values) of the plurality of sensors individually corresponding to the plurality of detection points and storing that relationship in a memory, or by electrically adjusting the detection offset of each sensor so that the detection values of all the sensors become, for example, the same value as the detection value of the sensor corresponding to the representative detection point used at the time of the focus calibration described above.

However, in this embodiment, when capturing the output signal from light receiving system 90b of the multipoint AF system (90a, 90b), main controller 20 detects the inclination of the surface of measurement stage MST (and hence of the FD bar 46 integrated with it) using Z heads 74_4, 74_5, 76_1, and 76_2 as shown in FIG. 12A, so it is not always necessary to set the surface of the FD bar 46 parallel to the XY plane. That is, as schematically shown in FIG. 12B, even if the detection values at the respective detection points are as indicated by the arrows in the figure and the line connecting the upper ends of the detection values has irregularities as indicated by the dotted line in the figure, it suffices to adjust each detection value so that the line connecting the upper ends of the detection values becomes the solid line in the figure.
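The adjustment described above amounts to referencing each AF detection value to the reference plane of the FD bar 46 obtained from the Z heads. The following is a minimal Python sketch of such a per-sensor offset calculation; the function name, arguments, and plane model are illustrative assumptions and not the implementation of the present embodiment.

    # Minimal sketch (not the patent's implementation): per-sensor offset correction for a
    # multipoint AF system, assuming the FD-bar reference-plane tilt is known from Z heads.
    from typing import Sequence, List

    def af_sensor_offsets(af_readings: Sequence[float],
                          sensor_x: Sequence[float],
                          bar_height_at_center: float,
                          bar_slope_x: float,
                          center_x: float = 0.0) -> List[float]:
        """Return an additive offset for each AF sensor.

        af_readings          raw detection values of the AF sensors on the FD bar surface
        sensor_x             X coordinate of each detection point
        bar_height_at_center reference-plane height at center_x (from the Z-head measurement)
        bar_slope_x          reference-plane slope, i.e. tan(theta_y) of the FD bar
        """
        offsets = []
        for z, x in zip(af_readings, sensor_x):
            expected = bar_height_at_center - bar_slope_x * (x - center_x)  # plane model
            offsets.append(expected - z)  # value to add so the reading falls on the plane
        return offsets

    # Usage: corrected reading = raw reading + offset for that sensor.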

  Next, a parallel processing operation using wafer stage WST and measurement stage MST in exposure apparatus 100 of the present embodiment will be described with reference to the drawings. During the following operations, main controller 20 controls the opening and closing of the valves of liquid supply device 5 and liquid recovery device 6 of local liquid immersion device 8 as described above, so that the space directly under tip lens 191 of projection optical system PL is always filled with water. However, in the following, in order to make the explanation easy to understand, the explanation regarding the control of liquid supply device 5 and liquid recovery device 6 is omitted. Further, the following description of the operation refers to a number of drawings; in some drawings reference numerals are given to the same members while in others they are omitted. That is, although the reference numerals described differ from drawing to drawing, the drawings show the same configuration regardless of the presence or absence of the reference numerals. The same applies to each drawing used in the description so far.

  FIG. 13 shows a state in which step-and-scan exposure is being performed on wafer W placed on wafer stage WST. This exposure is performed by repeating inter-shot stepping, in which wafer stage WST is moved to the scanning start position (acceleration start position) for exposure of each shot area on wafer W based on the result of the wafer alignment (EGA: Enhanced Global Alignment) performed beforehand, and scanning exposure, in which the pattern formed on reticle R is transferred to each shot area by the scanning exposure method. The exposure is performed in order from the shot area located on the −Y side of wafer W to the shot area located on the +Y side. Note that liquid immersion region 14 is formed between projection unit PU and wafer W.

During the exposure described above, main controller 20 controls the position of wafer stage WST within the XY plane (including rotation in the θz direction) based on the measurement results of a total of three encoders: the two Y encoders 70A and 70C and one of the two X encoders 70B and 70D. Here, the two X encoders 70B and 70D are constituted by the two X heads 66 facing X scales 39X1 and 39X2, respectively, and the two Y encoders 70A and 70C are constituted by Y heads 65 and 64 facing Y scales 39Y1 and 39Y2, respectively. Further, the Z position of wafer stage WST and its rotation (rolling) in the θy direction are controlled based on the measured values of Z heads 74_i and 76_j, belonging to head units 62C and 62A, that face the one end and the other end of the surface of wafer table WTB in the X-axis direction, respectively. The θx rotation (pitching) of wafer stage WST is controlled based on the measurement value of Y interferometer 16. When three or more Z heads including Z heads 74_i and 76_j face the surface of second water repellent plate 28b of wafer table WTB, it is also possible to control the position of wafer stage WST in the Z-axis direction, the θy rotation (rolling), and the θx rotation (pitching) based on the measured values of Z heads 74_i, 76_j and one other Z head. In any case, the control of the position of wafer stage WST in the Z-axis direction, the θy rotation, and the θx rotation (that is, the focus/leveling control of wafer W) is performed based on the result of the focus mapping performed in advance.

At the position of wafer stage WST shown in FIG. 13, X head 66_5 (shown circled in FIG. 13) faces X scale 39X1, but there is no X head 66 facing X scale 39X2. Therefore, main controller 20 executes position (X, Y, θz) control of wafer stage WST using the one X encoder 70B and the two Y encoders 70A and 70C. In this case, when wafer stage WST moves in the −Y direction from the position shown in FIG. 13, X head 66_5 moves off (no longer faces) X scale 39X1, and instead X head 66_4 (shown enclosed in a broken-line circle in FIG. 13) comes to face X scale 39X2. Therefore, main controller 20 switches to stage control using the one X encoder 70D and the two Y encoders 70A and 70C.

When wafer stage WST is located at the position shown in FIG. 13, Z heads 74_3 and 76_3 (shown circled in FIG. 13) face Y scales 39Y2 and 39Y1, respectively. Therefore, main controller 20 performs position (Z, θy) control of wafer stage WST using Z heads 74_3 and 76_3. When wafer stage WST moves in the +X direction from the position shown in FIG. 13, Z heads 74_3 and 76_3 move off the corresponding Y scales, and instead Z heads 74_4 and 76_4 (shown enclosed in broken-line circles in the figure) come to face Y scales 39Y2 and 39Y1, respectively. Therefore, main controller 20 switches to stage control using Z heads 74_4 and 76_4.

  In this way, main controller 20 performs stage control by constantly switching the encoder and Z head to be used according to the position coordinates of wafer stage WST.
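This switching can be thought of as selecting, for each scale, whichever fixed head currently lies within the range over which that scale faces the head unit. The following Python sketch illustrates that selection under an assumed, simplified one-dimensional geometry; the class, function, head names, and coordinate conventions are illustrative and are not part of the embodiment.

    # Minimal sketch (assumed geometry, not the actual controller logic): pick, for each scale,
    # the measurement head whose position currently falls within the scale's facing range.
    from dataclasses import dataclass
    from typing import Optional, Sequence

    @dataclass
    class Head:
        name: str      # e.g. "74_3" (illustrative)
        x: float       # fixed position of the head along the switching axis

    def select_head(heads: Sequence[Head], scale_min: float, scale_max: float,
                    stage_pos: float) -> Optional[Head]:
        """Return the head that faces the scale for the current stage position, if any."""
        for h in heads:
            # The scale moves with the stage, so a head faces it while the head position
            # lies inside the scale extent shifted by the stage position.
            if scale_min + stage_pos <= h.x <= scale_max + stage_pos:
                return h
        return None  # no head faces this scale; fall back to the interferometer system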

  Independently of the position measurement of wafer stage WST using the above-described measuring instrument system, position (X, Y, Z, θx, θy, θz) measurement of wafer stage WST using interferometer system 118 is always performed. Here, the X position and θz rotation (yawing) of wafer stage WST are measured using X interferometer 126, 127, or 128 constituting interferometer system 118, the Y position, θx rotation, and θz rotation are measured using Y interferometer 16, and the Y position, Z position, θy rotation, and θz rotation are measured using Z interferometers 43A and 43B (not shown in FIG. 13, see FIG. 1 or 2). Which of X interferometers 126, 127, and 128 is used depends on the Y position of wafer stage WST; during the exposure, X interferometer 126 is used as shown in FIG. 13. Except for the pitching amount (θx rotation), the measurement results of interferometer system 118 are used for position control of wafer stage WST in an auxiliary manner, at the time of the backup described later, or when measurement by the encoder system cannot be performed.

  When exposure of wafer W is completed, main controller 20 drives wafer stage WST toward unload position UP. At that time, wafer stage WST and measurement stage MST, which are separated from each other during the exposure, come into contact with each other, or come close to each other with a separation of about 300 μm, and shift to a scrum state. Here, the −Y side surface of the FD bar 46 on measurement table MTB and the +Y side surface of wafer table WTB come into contact with or approach each other. With this scrum state maintained, both stages WST and MST move in the −Y direction, so that liquid immersion region 14 formed under projection unit PU moves onto measurement stage MST. FIGS. 14 and 15 show the state after this movement.

When wafer stage WST further moves in the −Y direction and leaves the effective stroke area (the area in which wafer stage WST moves during exposure and wafer alignment), all the X heads and Y heads constituting encoders 70A to 70D, as well as all the Z heads, move off the corresponding scales on wafer table WTB. Stage control based on the measurement results of encoders 70A to 70D and Z heads 74_i and 76_j then becomes impossible, so immediately before that, main controller 20 switches to stage control based on the measurement results of interferometer system 118. Here, of the three X interferometers 126, 127, and 128, X interferometer 128 is used.

  Thereafter, as shown in FIG. 14, wafer stage WST releases the scrum state with measurement stage MST and moves to unload position UP. After the movement, main controller 20 unloads wafer W on wafer table WTB. Then, as shown in FIG. 15, wafer stage WST is driven in the + X direction to move to loading position LP, and the next wafer W is loaded onto wafer table WTB.

In parallel with these operations, main controller 20 executes Sec-BCHK (secondary baseline check), in which the position of the FD bar 46 supported by measurement stage MST in the XY plane is adjusted and the baselines of the four secondary alignment systems AL2_1 to AL2_4 are measured. Sec-BCHK is performed at the interval of every wafer exchange. Here, in order to measure the position (θz rotation) of the FD bar 46 in the XY plane, Y encoders 70E and 70F, constituted by Y heads 67_3 and 68_2 each facing one of a pair of reference gratings 52 on the FD bar 46, are used.

Next, as shown in FIG. 16, main controller 20 drives wafer stage WST to position reference mark FM on measurement plate 30 within the detection field of primary alignment system AL1, and performs the first half of the processing of Pri-BCHK (primary baseline check), which determines the reference position for baseline measurement of alignment systems AL1 and AL2_1 to AL2_4.

At this time, as shown in FIG. 16, two Y heads 68_2 and 67_3 and one X head 66_1 (indicated by circles in the figure) come to face Y scales 39Y1 and 39Y2 and X scale 39X2, respectively. Therefore, main controller 20 switches from interferometer system 118 to stage control using encoder system 150 (encoders 70A, 70C, and 70D). Interferometer system 118 is again used in an auxiliary manner, except for the measurement of θx rotation. Of the three X interferometers 126, 127, and 128, X interferometer 127 is used.

  Next, while managing the position of wafer stage WST based on the measurement values of the three encoders described above, main controller 20 starts moving wafer stage WST in the +Y direction toward the position where the alignment marks attached to the three first alignment shot areas are detected.

  Then, when wafer stage WST reaches the position shown in FIG. 17, main controller 20 stops wafer stage WST. Prior to this, main controller 20 operates (turns on) Z heads 72a to 72d when all or a part of Z heads 72a to 72d face wafer table WTB or at a point before that. Measurement of the Z position and tilt (θy rotation) of wafer stage WST is started.

After stopping wafer stage WST, main controller 20 uses primary alignment system AL1, secondary alignment systems AL2 2 and AL2 3 to detect alignment marks attached to three first alignment shot areas AS almost simultaneously and individually. Then (see the star mark in FIG. 17), the detection results of the three alignment systems AL1, AL2 2 and AL2 3 and the measurement values of the three encoders at the time of detection are associated with each other and stored in a memory (not shown).

  As described above, in the present embodiment, the transition to the contact state (or proximity state) between measurement stage MST and wafer stage WST is completed at the position where the alignment marks in the first alignment shot areas are detected, and from that position main controller 20 starts moving both stages WST and MST in the +Y direction in the contact state (or proximity state) (step movement toward the position where the alignment marks attached to the five second alignment shot areas are detected). Prior to starting the movement of both stages WST and MST in the +Y direction, main controller 20 starts irradiating wafer table WTB with the detection beams of the multipoint AF system (90a, 90b). As a result, the detection area of the multipoint AF system is formed on wafer table WTB.

  When both stages WST and MST reach the position shown in FIG. 18 while both stages WST and MST are moving in the + Y direction, main controller 20 performs the first half of the focus calibration described above. The measured values of the Z heads 72a, 72b, 72c, 72d in a state where the center line of the wafer table WTB coincides with the reference axis LV (surface position information at one end and the other end of the wafer table WTB in the X axis direction) And the detection result (surface position information) on the surface of the measurement plate 30 by the multipoint AF system (90a, 90b). At this time, the liquid immersion region 14 is formed on the upper surface of the FD bar 46.

Then, when both stages WST and MST further move in the +Y direction while maintaining the contact state (or proximity state) and reach the position shown in FIG. 19, the alignment marks attached to the five second alignment shot areas are detected almost simultaneously and individually using the five alignment systems AL1 and AL2_1 to AL2_4 (see the star marks in FIG. 19), and the detection results of the five alignment systems AL1 and AL2_1 to AL2_4 and the measurement values, at the time of detection, of the three encoders measuring the position of wafer stage WST in the XY plane are associated with each other and stored in a memory (not shown). At this time, main controller 20 controls the position of wafer stage WST in the XY plane based on the measured values of X head 66_2 (X linear encoder 70D) facing X scale 39X2 and Y linear encoders 70A and 70C.

  Further, main controller 20 moves both stages WST and MST in the + Y direction in the contact state (or in the proximity state) after the simultaneous detection of the alignment marks attached to the five second alignment shot areas is completed. Simultaneously with the start, as shown in FIG. 19, the above-described focus mapping using the Z heads 72a to 72d and the multipoint AF system (90a, 90b) is started.

Then, when both stages WST and MST reach the position, shown in FIG. 20, where measurement plate 30 is arranged directly under projection optical system PL, main controller 20 performs the latter half of the focus calibration described above while continuing to control the Z position of wafer stage WST (measurement plate 30) based on the surface position information measured by Z heads 72a, 72b, 72c, 72d, without switching the Z heads used for control of the position (Z position) of wafer stage WST in the optical axis direction of projection optical system PL to Z heads 74_i, 76_j.

  Then, based on the results of the first half and the latter half of the focus calibration, main controller 20 obtains the offset at the representative detection point of the multipoint AF system (90a, 90b) in the procedure described above and stores it in internal memory. When reading the mapping information obtained as a result of the focus mapping at the time of exposure, main controller 20 adds this offset to the mapping information.

  In the state of FIG. 20, the focus mapping described above is continued.

When wafer stage WST reaches the position shown in FIG. 21 due to the movement of both stages WST and MST in the +Y direction in the above contact state (or proximity state), main controller 20 stops wafer stage WST at that position, while measurement stage MST continues to move in the +Y direction as it is. Then, main controller 20 detects the alignment marks attached to the five third alignment shot areas almost simultaneously and individually using the five alignment systems AL1 and AL2_1 to AL2_4 (see the star marks in FIG. 21), and the detection results of the five alignment systems AL1 and AL2_1 to AL2_4 and the measurement values of the three encoders at the time of detection are associated with each other and stored in internal memory. Also at this point, the focus mapping continues.

  On the other hand, a predetermined time after wafer stage WST stops, measurement stage MST and wafer stage WST shift from the contact (or proximity) state to a separated state. After shifting to this separated state, when measurement stage MST reaches the exposure start standby position where it waits until exposure starts, main controller 20 stops it at that position.

  Next, main controller 20 starts moving wafer stage WST in the +Y direction toward the position where the alignment marks attached to the three fourth alignment shot areas are detected. At this time, the focus mapping is continued. Meanwhile, measurement stage MST stands by at the exposure start standby position.

Then, when wafer stage WST reaches the position shown in FIG. 22, main controller 20 immediately stops wafer stage WST and detects, almost simultaneously and individually, the alignment marks attached to the three fourth alignment shot areas on wafer W using primary alignment system AL1 and secondary alignment systems AL2_2 and AL2_3 (see the star marks in FIG. 22), and the detection results of the three alignment systems AL1, AL2_2, and AL2_3 and the measurement values, at the time of detection, of three of the four encoders are associated with each other and stored in a memory (not shown). At this time, too, the focus mapping is continued, and measurement stage MST remains on standby at the exposure start standby position. Then, using the detection results of the total of sixteen alignment marks obtained in this way and the corresponding encoder measurement values, main controller 20 performs statistical calculation as disclosed in, for example, Japanese Patent Application Laid-Open No. 61-44429, and calculates the array information (coordinate values) of all the shot areas on wafer W on the coordinate system defined by the measurement axes of the four encoders of the encoder system.
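EGA-type statistical processing is commonly formulated as a least-squares fit of a linear array model to the detected mark positions. The following Python sketch is only an illustration of that general idea with assumed parameter names and model; it is not the procedure of the publication cited above.

    # Minimal EGA-style sketch (illustrative only; not the procedure of the cited publication).
    # Fits a 6-parameter linear model (offsets, scaling, rotation/orthogonality) that maps the
    # designed shot-center coordinates to the measured mark coordinates, by least squares.
    import numpy as np

    def ega_fit(design_xy: np.ndarray, measured_xy: np.ndarray) -> np.ndarray:
        """design_xy, measured_xy: (N, 2) arrays of designed and measured mark positions.
        Returns the 6 model parameters (a, b, c, d, ox, oy) with
            x' = a*x + b*y + ox,   y' = c*x + d*y + oy."""
        n = design_xy.shape[0]
        A = np.zeros((2 * n, 6))
        A[0::2, 0] = design_xy[:, 0]   # a
        A[0::2, 1] = design_xy[:, 1]   # b
        A[0::2, 4] = 1.0               # ox
        A[1::2, 2] = design_xy[:, 0]   # c
        A[1::2, 3] = design_xy[:, 1]   # d
        A[1::2, 5] = 1.0               # oy
        rhs = measured_xy.reshape(-1)  # interleaved [x0, y0, x1, y1, ...]
        params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return params

    def predict_shot_positions(design_xy: np.ndarray, params: np.ndarray) -> np.ndarray:
        a, b, c, d, ox, oy = params
        x, y = design_xy[:, 0], design_xy[:, 1]
        return np.stack([a * x + b * y + ox, c * x + d * y + oy], axis=1)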

  Next, main controller 20 continues the focus mapping while moving wafer stage WST in the +Y direction again. Then, when the detection beams from the multipoint AF system (90a, 90b) move off the surface of wafer W, the focus mapping is ended.

Thereafter, main controller 20 moves wafer stage WST to the scanning start position (exposure start position) for exposure of the first shot on wafer W. During this movement, main controller 20 switches the Z heads used for controlling the Z position and θy rotation of wafer stage WST from Z heads 72a to 72d to Z heads 74_i and 76_j, while maintaining the Z position, θy rotation, and θx rotation of wafer stage WST. Immediately after this switching, main controller 20 performs step-and-scan exposure by immersion exposure based on the result of the wafer alignment (EGA) described above and the latest measurement results of the five alignment systems AL1 and AL2_1 to AL2_4, and sequentially transfers the reticle pattern to the plurality of shot areas on wafer W. Thereafter, the same operation is repeated.

Next, a method for calculating the Z position and the tilt amount of wafer stage WST using the measurement results of the Z heads will be described. At the time of focus calibration and focus mapping, main controller 20 measures the height Z and inclination (rolling) θy of wafer stage WST using the four Z heads 72a to 72d constituting surface position measurement system 180 (see FIG. 6). During exposure, main controller 20 measures the height Z and inclination (rolling) θy of wafer stage WST using two Z heads 74_i and 76_j (i and j are each one of 1 to 5). Each Z head is configured to measure the surface position of the reflection type diffraction grating formed on the corresponding Y scale 39Y1 or 39Y2 by projecting a probe beam onto the grating and receiving its reflected light.

FIG. 24A shows a two-dimensional plane having a height Z0 at the reference point O, a rotation angle (tilt angle) θx around the X axis, and a rotation angle (tilt angle) θy around the Y axis. The height Z at position (X, Y) on this plane is given by the following equation (8).
f(X, Y) = −tanθy·X + tanθx·Y + Z0 …(8)

As shown in FIG. 24B, at the time of exposure, the height Z and rolling θy of wafer table WTB from a movement reference plane (a plane substantially parallel to the XY plane) are measured, at the intersection (reference point) O of the movement reference plane of wafer stage WST with optical axis AX of projection optical system PL, using two Z heads 74_i and 76_j (i and j are each one of 1 to 5). Here, Z heads 74_3 and 76_3 are used as an example. As in the example of FIG. 24A, let the height of wafer table WTB at reference point O be Z0, the inclination (pitching) around the X axis be θx, and the inclination (rolling) around the Y axis be θy. In this case, the measured values ZL and ZR of the surface positions of the corresponding Y scales 39Y1 and 39Y2 (more precisely, of the reflection type diffraction gratings formed on them), obtained by Z head 74_3 located at coordinates (pL, qL) in the XY plane and Z head 76_3 located at coordinates (pR, qR), follow theoretical formulas (9) and (10), similar to formula (8).

ZL = −tanθy·pL + tanθx·qL + Z0 …(9)
ZR = −tanθy·pR + tanθx·qR + Z0 …(10)
Therefore, from theoretical equations (9) and (10), the height Z0 and rolling θy of wafer table WTB at reference point O are expressed by the following equations (11) and (12), using the measured values ZL and ZR of Z heads 74_3 and 76_3.

Z0 = {ZL + ZR − tanθx·(qL + qR)}/2 …(11)
tanθy = {ZL − ZR − tanθx·(qL − qR)}/(pR − pL) …(12)
Even when another combination of Z heads is used, the height Z0 and rolling θy of wafer table WTB at reference point O can be calculated using theoretical equations (11) and (12). However, for the pitching θx, the measurement result of another sensor system (interferometer system 118 in the present embodiment) is used.
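For reference, a minimal Python sketch of equations (11) and (12) is shown below; the function name and example values are illustrative, and the pitching term tanθx is assumed to be supplied by the interferometer system as stated above.

    # Minimal sketch of equations (11) and (12): height Z0 and rolling theta_y of the wafer
    # table at reference point O from two Z-head readings ZL, ZR taken at (pL, qL) and (pR, qR).
    # The pitching tan(theta_x) is supplied by another sensor system (the interferometer here).
    import math

    def table_height_and_rolling(ZL: float, ZR: float,
                                 pL: float, qL: float,
                                 pR: float, qR: float,
                                 tan_theta_x: float) -> tuple[float, float]:
        Z0 = (ZL + ZR - tan_theta_x * (qL + qR)) / 2.0                  # equation (11)
        tan_theta_y = (ZL - ZR - tan_theta_x * (qL - qR)) / (pR - pL)   # equation (12)
        return Z0, tan_theta_y

    # Example (illustrative numbers): symmetric heads at pL = -pR, level table in X.
    Z0, ty = table_height_and_rolling(ZL=0.012, ZR=0.010, pL=-0.2, qL=0.0,
                                      pR=0.2, qR=0.0, tan_theta_x=0.0)
    theta_y = math.atan(ty)  # rolling angle in radians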

At the time of focus calibration and focus mapping, as shown in FIG. 24B, the height Z and rolling θy of wafer table WTB at the center point O′ of the plurality of detection points of the multipoint AF system (90a, 90b) are measured using the four Z heads 72a to 72d. Here, Z heads 72a to 72d are installed at positions (X, Y) = (pa, qa), (pb, qb), (pc, qc), (pd, qd), respectively. As shown in the figure, these positions are set symmetrically with respect to the center point O′ = (Ox′, Oy′), that is, pa = pb, pc = pd, qa = qc, qb = qd, and (pa + pc)/2 = (pb + pd)/2 = Ox′, (qa + qb)/2 = (qc + qd)/2 = Oy′.

From the average (Za + Zb)/2 of the measured values Za, Zb of Z heads 72a, 72b, the height Ze of wafer table WTB at a point e at position (pa = pb, Oy′) is obtained, and from the average (Zc + Zd)/2 of the measured values Zc, Zd of Z heads 72c, 72d, the height Zf of wafer table WTB at a point f at position (pc = pd, Oy′) is obtained. Here, when the height of wafer table WTB at the center point O′ is Z0 and the inclination (rolling) around the Y axis is θy, Ze and Zf follow theoretical formulas (13) and (14), respectively.

Ze {= (Za + Zb)/2} = −tanθy·(pa + pb − 2Ox′)/2 + Z0 …(13)
Zf {= (Zc + Zd)/2} = −tanθy·(pc + pd − 2Ox′)/2 + Z0 …(14)
Therefore, from theoretical formulas (13) and (14), the height Z0 and rolling θy of wafer table WTB at the center point O′ are expressed by the following equations (15) and (16), using the measured values Za to Zd of Z heads 72a to 72d.

Z0 = (Ze + Zf)/2 = (Za + Zb + Zc + Zd)/4 …(15)
tanθy = −2(Ze − Zf)/(pa + pb − pc − pd)
      = −(Za + Zb − Zc − Zd)/(pa + pb − pc − pd) …(16)
However, for the pitching θx, the measurement result of another sensor system (interferometer system 118 in the present embodiment) is used.
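Similarly, equations (15) and (16) can be sketched as follows (illustrative Python, assuming the symmetric head arrangement described above; the function name is not part of the embodiment).

    # Minimal sketch of equations (15) and (16): table height Z0 and rolling at the AF center
    # point O' from the four Z heads 72a-72d, assuming the symmetric head layout described above.
    def table_height_and_rolling_4heads(Za: float, Zb: float, Zc: float, Zd: float,
                                        pa: float, pb: float,
                                        pc: float, pd: float) -> tuple[float, float]:
        Ze = (Za + Zb) / 2.0                                   # height at point e
        Zf = (Zc + Zd) / 2.0                                   # height at point f
        Z0 = (Ze + Zf) / 2.0                                   # equation (15)
        tan_theta_y = -2.0 * (Ze - Zf) / (pa + pb - pc - pd)   # equation (16)
        return Z0, tan_theta_y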

As shown in FIG. 16, immediately after switching from servo control of wafer stage WST by interferometer system 118 to servo control by encoder system 150 (encoders 70A to 70F) and surface position measurement system 180 (Z heads 72a to 72d, 74_1 to 74_5, 76_1 to 76_5), only the two Z heads 72b and 72d face the corresponding Y scales 39Y1 and 39Y2. Therefore, the Z and θy positions of wafer stage WST at center point O′ cannot be calculated from equations (15) and (16). In this case, the following equations (17) and (18) are applied.

Z0 = {Zb + Zd − tanθx·(qb + qd − 2Oy′)}/2 …(17)
tanθy = {Zb − Zd − tanθx·(qb − qd)}/(pd − pb) …(18)
Then, after wafer stage WST moves further in the +Y direction and Z heads 72a and 72c come to face the corresponding Y scales 39Y1 and 39Y2, the above equations (15) and (16) are applied.
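The selection between equations (17), (18) and equations (15), (16), depending on which Z heads currently face their scales, can be sketched as follows; the head names and the data structure are illustrative assumptions, not part of the embodiment.

    # Minimal sketch: choose between the two-head fallback (equations (17), (18)) and the
    # four-head formulas (equations (15), (16)) depending on which Z heads currently face
    # their Y scales. Readings are given as a dict {head_name: (Z, p, q)}; names illustrative.
    def z_and_rolling(readings: dict, tan_theta_x: float, Ox: float, Oy: float):
        if all(h in readings for h in ("72a", "72b", "72c", "72d")):
            (Za, pa, _), (Zb, pb, _) = readings["72a"], readings["72b"]
            (Zc, pc, _), (Zd, pd, _) = readings["72c"], readings["72d"]
            Ze, Zf = (Za + Zb) / 2.0, (Zc + Zd) / 2.0
            return (Ze + Zf) / 2.0, -2.0 * (Ze - Zf) / (pa + pb - pc - pd)    # (15), (16)
        if all(h in readings for h in ("72b", "72d")):
            (Zb, pb, qb), (Zd, pd, qd) = readings["72b"], readings["72d"]
            Z0 = (Zb + Zd - tan_theta_x * (qb + qd - 2.0 * Oy)) / 2.0          # (17)
            return Z0, (Zb - Zd - tan_theta_x * (qb - qd)) / (pd - pb)         # (18)
        raise ValueError("not enough Z heads face the scales; fall back to the interferometer")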

As described above, the scanning exposure of wafer W is performed after the focus is adjusted by finely driving wafer stage WST in the Z-axis direction and the tilt direction in accordance with the unevenness of the surface of wafer W. Therefore, prior to scanning exposure, focus mapping, which measures the unevenness (focus map) of the surface of wafer W, is executed. Here, while moving wafer stage WST in the +Y direction, the unevenness of the surface of wafer W is measured at a predetermined sampling interval (that is, Y interval) using the multipoint AF system (90a, 90b), with the surface position of wafer table WTB (more precisely, of the corresponding Y scales 39Y1, 39Y2) measured using Z heads 72a to 72d as a reference.

More specifically, as shown in FIG. 24B, the surface position Ze of wafer table WTB at point e is obtained from the average of the surface positions Za and Zb of Y scale 39Y2 measured using Z heads 72a and 72b, and the surface position Zf of wafer table WTB at point f is obtained from the average of the surface positions Zc and Zd of Y scale 39Y1 measured using Z heads 72c and 72d. Here, the plurality of detection points of the multipoint AF system and their center O′ are positioned on a straight line ef parallel to the X axis connecting points e and f. Therefore, as shown in FIG. 10C, with the straight line connecting the surface position Ze at point e (P1 in FIG. 10C) and the surface position Zf at point f (P2 in FIG. 10C) of wafer table WTB, expressed by the following equation (19), as a reference, the surface position Z0k of the surface of wafer W at detection point Xk is measured using the multipoint AF system (90a, 90b).

Z(X) = −tanθy·X + Z0 …(19)
Here, Z0 and tanθy are obtained from the above equations (17) and (18) using the measurement results Za to Zd of Z heads 72a to 72d. From the measured surface position Z0k, the unevenness data (focus map) Zk of the surface of wafer W is obtained as in the following equation (20).
Zk = Z0k − Z(Xk) …(20)
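A minimal sketch of the focus-map computation of equations (19) and (20) for one sampling timing is shown below; it assumes that points e and f are symmetric about the center (pe = −pf) and that the AF detection-point X coordinates are measured from that center, which are illustrative assumptions.

    # Minimal sketch of the focus-map construction (equations (19), (20)): at each sampling
    # timing the wafer-surface readings of the multipoint AF system are referenced to the
    # straight line through the table surface positions Ze (point e) and Zf (point f).
    from typing import Sequence, List

    def focus_map_row(af_z: Sequence[float], af_x: Sequence[float],
                      Ze: float, Zf: float, pe: float, pf: float) -> List[float]:
        """af_z: wafer surface positions Z0k at AF detection points with X coordinates af_x.
        Ze, Zf: table surface positions at points e and f located at X = pe and X = pf."""
        tan_theta_y = -(Ze - Zf) / (pe - pf)   # slope of the reference line, per equation (19)
        Z0 = (Ze + Zf) / 2.0                   # line height at X = 0 (assuming pe = -pf)
        row = []
        for z0k, xk in zip(af_z, af_x):
            z_line = -tan_theta_y * xk + Z0    # equation (19)
            row.append(z0k - z_line)           # equation (20): unevenness Zk
        return row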

During exposure, for each shot area, the focus is adjusted by finely driving wafer stage WST in the Z-axis direction and the tilt direction in accordance with the focus map Zk obtained as described above. Here, at the time of exposure, the surface position of wafer table WTB (more precisely, of the corresponding Y scales 39Y2 and 39Y1) is measured using Z heads 74_i and 76_j (i, j = 1 to 5). Therefore, the reference line Z(X) of the focus map Zk is reset, with Z0 and tanθy now obtained from the above equations (11) and (12) using the measurement results ZL and ZR of Z heads 74_i and 76_j (i, j = 1 to 5). By the above procedure, the surface position of the surface of wafer W is converted to Zk + Z(Xk).

Next, among the measurement errors of Z heads 72a to 72d, 74_1 to 74_5, and 76_1 to 76_5 of surface position measurement system 180, the correction of the measurement error caused by a change in the wavelength of the probe beam LB1 of the focus sensor FS will be described.

As shown in FIG. 25, the optical system of the focus sensor FS constituting each Z head is designed according to the design wavelength (reference wavelength) λ0 of the probe beam LB1 emitted from the light source LD. Here, when the wavelength of the probe beam LB1 changes due to instability of the light source LD or a change in the state of the atmosphere inside the focus sensor FS, the focal position deviates from the design position, causing a measurement error. Therefore, it is necessary to compensate for the measurement error caused by the wavelength change of the probe beam LB1 (hereinafter referred to as a wavelength-change-induced error).

Two methods can be considered for compensating the measurement error caused by the wavelength variation of the probe beam LB1. The first method can be used when a wavelength tunable semiconductor laser is used as the light source LD: the oscillation wavelength λ of the light source LD is adjusted so that the wavelength λ on the optical path of the probe beam LB1 matches the reference wavelength λ0. The second method is to correct the measurement result of the Z head, regardless of whether or not the light source LD is a wavelength tunable semiconductor laser.

Either method requires measuring the wavelength λ of the probe beam LB1 in the atmosphere inside and around the sensor main body ZH in which the focus sensor FS is housed (including inside the focus sensor FS). Two methods are conceivable for measuring the wavelength λ. The first is to extract a reference beam from the light source LD independently of the probe beam LB1 and to measure the wavelength λ (the refractive index n of the optical path atmosphere) using a wavelength measuring instrument such as a wavelength tracker (not shown). The second is to install an environment sensor WT inside the focus sensor FS (or inside the sensor main body ZH or around it), as shown in FIG. 25, and to measure the refractive index n of the atmosphere inside the focus sensor FS indirectly. Here, the environment sensor WT includes, for example, an atmospheric pressure sensor, a temperature sensor, a humidity sensor, and the like. Using these measurement results, for example the atmospheric pressure P and temperature T, the refractive index n of the atmosphere is obtained from the following equation (21).

n − 1 = (n0 − 1)(P/P0)/(T/T0) …(21)
Here, the refractive index, pressure, and temperature in the reference state (in which the reference wavelength λ0 is defined) are expressed as n0, P0, and T0, respectively. The reason why humidity is not included among the parameters of equation (21) is that the environmental chamber in which the exposure apparatus main body is housed is normally adjusted to the target temperature with an accuracy of about ±1/100 °C, and under such controlled conditions the wavelength change of the laser beam due to humidity fluctuation can be ignored.
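A minimal Python sketch of equation (21) follows; the reference values n0, P0, T0 are illustrative placeholders, and absolute temperature is assumed for the ratio T/T0.

    # Minimal sketch of equation (21): atmospheric refractive index scaled from a reference
    # state (n0, P0, T0) using the pressure P and temperature T reported by the environment
    # sensor. The default reference values below are illustrative placeholders only.
    def refractive_index_simple(P: float, T: float,
                                n0: float = 1.000271, P0: float = 1013.25,
                                T0: float = 293.15) -> float:
        """P, P0 in hPa; T, T0 in kelvin (equation (21) uses ratios, so units must match)."""
        return 1.0 + (n0 - 1.0) * (P / P0) / (T / T0)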

However, when it is necessary to determine the refractive index n in the vicinity of the optical path of the probe beam LB1 taking the influence of humidity into consideration, it can be obtained from Edlen's empirical formula, given as equation (22) below.

n = 1 + 2.87755×10^−7·P·Func(P, T) − 2.6×10^−9·e^(−0.0057627·T)·FR …(22)
where Func(P, T) = (1 + P·(0.612 − 0.010×10^−6·T))/(1 + 0.0036610·T)
Here, T is the temperature (°C), P is the pressure (hPa), and FR is the relative humidity (%). The wavelength of the probe beam LB1 in the atmosphere inside and around the focus sensor FS is then obtained as λ = λ0(n0/n).
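The following Python sketch evaluates equation (22) with the coefficients exactly as given in the text and then converts the result to the in-atmosphere wavelength λ = λ0(n0/n); it is an illustration, not a validated implementation of Edlen's formula.

    # Minimal sketch of equation (22) as written in the text (coefficients taken as given),
    # followed by the in-atmosphere wavelength lambda = lambda0 * (n0 / n).
    import math

    def refractive_index_edlen(P_hPa: float, T_degC: float, RH_percent: float) -> float:
        func = (1.0 + P_hPa * (0.612 - 0.010e-6 * T_degC)) / (1.0 + 0.0036610 * T_degC)
        return (1.0 + 2.87755e-7 * P_hPa * func
                - 2.6e-9 * math.exp(-0.0057627 * T_degC) * RH_percent)

    def probe_wavelength(lambda0_nm: float, n0: float,
                         P_hPa: float, T_degC: float, RH_percent: float) -> float:
        n = refractive_index_edlen(P_hPa, T_degC, RH_percent)
        return lambda0_nm * (n0 / n)   # lambda = lambda0 * (n0 / n)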

When the first method, adjusting the oscillation wavelength λ of the light source LD, is adopted, main controller 20 adjusts (corrects) the oscillation wavelength of the light source LD of each of Z heads 72a to 72d, 74_1 to 74_5, and 76_1 to 76_5 so that the wavelength λ of the probe beam LB1, obtained as described above or measured using a wavelength measuring instrument, is unified to the reference wavelength λ0. In this first method, since the wavelength change itself is corrected, the wavelength-change-induced error of each Z head becomes zero during actual measurement. An approach in which a temperature sensor is installed in the light source LD and the wavelength change is corrected according to the temperature change is also effective. Of course, a change in any state quantity that affects the oscillation wavelength λ of the light source LD, not only the temperature, may be detected, and the wavelength change may be corrected from that change.

When the second method, correcting the measurement result of the Z head, is adopted, main controller 20 predicts the measurement result of the surface position of the measurement target surface S of the Z head using, for example, the position coordinates (Z, θx, θy) of wafer stage WST obtained with interferometer system 118. The difference O = Z0 − Z between the predicted value Z0 and the actual measurement value Z is measured as a function of the wavelength λ of the probe beam LB1, and correction data O(λ) is created from the result, where the condition O(λ0) = 0 is satisfied. Then, the measurement result Z obtained when the wavelength of the probe beam LB1 is λ is corrected to Z + O(λ). Note that when the dependence of the measured value Z, that is, of the focal position of the probe beam, on the wavelength λ is known, it is not necessary to create the correction data O(λ).
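A minimal sketch of this second method is shown below: correction data O(λ) is built from calibration pairs with O(λ0) = 0 enforced, and applied as Z + O(λ). The use of linear interpolation and the class structure are illustrative assumptions, not the embodiment's implementation.

    # Minimal sketch of the second method: build correction data O(lambda) from calibration
    # pairs (lambda, predicted - measured) with O(lambda0) = 0 enforced, then apply it as
    # Z_corrected = Z + O(lambda). Linear interpolation is an assumption, not the patent's choice.
    from bisect import bisect_left
    from typing import Sequence

    class WavelengthCorrection:
        def __init__(self, lambdas: Sequence[float], offsets: Sequence[float], lambda0: float):
            pts = sorted(zip(list(lambdas) + [lambda0], list(offsets) + [0.0]))  # force O(lambda0)=0
            self.xs = [p[0] for p in pts]
            self.ys = [p[1] for p in pts]

        def offset(self, lam: float) -> float:
            i = bisect_left(self.xs, lam)
            if i <= 0:
                return self.ys[0]
            if i >= len(self.xs):
                return self.ys[-1]
            x0, x1, y0, y1 = self.xs[i - 1], self.xs[i], self.ys[i - 1], self.ys[i]
            if x1 == x0:
                return y1
            return y0 + (y1 - y0) * (lam - x0) / (x1 - x0)

        def correct(self, measured_z: float, lam: float) -> float:
            return measured_z + self.offset(lam)   # Z + O(lambda)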

  As the environment sensor WT described above, a sensor provided for correcting the wavelength change of another sensor system, such as interferometer system 118, may also be used in combination, as long as it detects the state of the atmosphere around wafer stage WST.

When the wavelength-change-induced error of each Z head is corrected by this second method, main controller 20 may, for example at the time of the focus mapping described above, drive wafer stage WST in at least one of the Z-axis direction and the θy direction so that the wavelength-change-induced error component of each Z head of surface position measurement system 180 (in this case Z heads 72a to 72d) is corrected, or the correction may be performed on the measurement values themselves (the wavelength-change-induced error may be treated as a measurement offset). The point is that wafer stage WST ultimately only needs to be driven, during exposure for example, in at least one of the Z-axis direction and the tilt direction (θy direction) with respect to the XY plane so that the measurement error, caused by the wavelength-change-induced error of each Z head, in the positional information of wafer stage WST in the Z-axis direction orthogonal to the XY plane is canceled. In this sense, for Z heads 74_i and 76_j used at the time of exposure, it is desirable to drive wafer stage WST in at least one of the Z-axis direction and the θy direction so that the wavelength-change-induced error is corrected and does not adversely affect the exposure result.

  In the description so far, in order to simplify the explanation, main controller 20 has been described as controlling the various parts of the exposure apparatus, including the stage system, the interferometer system, and the encoder system; however, it goes without saying that at least part of the control performed by main controller 20 may be shared among a plurality of control devices. For example, a stage control device that performs control of the stage system, switching of the heads of the encoder system and the surface position measurement system, and the like may be provided under main controller 20. Further, the control performed by main controller 20 does not necessarily have to be realized by hardware; it may be realized by software, by a computer program that defines the operations of main controller 20 or of the several control devices that share the control as described above.

  As described above in detail, according to exposure apparatus 100 of the present embodiment, main controller 20 measures positional information of the surface of wafer stage WST in the Z-axis direction perpendicular to the XY plane with the plurality of Z heads of surface position measurement system 180, and drives wafer stage WST in at least one of the Z-axis direction and the θy direction based on that measurement information and on information relating to the change in wavelength of the probe beam of the focus sensor FS of each Z head. This makes it possible to drive wafer stage WST in at least one of the Z-axis direction and the θy direction so that the position measurement error of wafer stage WST in the Z-axis direction and the θy direction caused by the wavelength-change-induced error of each Z head is canceled.

  Further, according to exposure apparatus 100 of the present embodiment, by transferring and forming the pattern of reticle R onto each shot area of wafer W placed on wafer stage WST (wafer table WTB), whose position in the Z-axis direction (and θy direction) is controlled with high accuracy as described above, the pattern can be formed on each shot area on wafer W with high accuracy.

  In addition, according to exposure apparatus 100 of the present embodiment, based on the result of the focus mapping performed in advance, focus/leveling control of the wafer during scanning exposure can be performed with high accuracy using the Z heads, without measuring surface position information of the surface of wafer W during exposure, so that a pattern can be formed on wafer W with high accuracy. Furthermore, in the present embodiment, high-resolution exposure can be realized by immersion exposure, so in this respect as well a fine pattern can be accurately transferred onto wafer W.

In the above embodiment, when performing the focus servo described above, the focus sensor FS of each Z head may focus on the surface of the cover glass that protects the diffraction grating surface formed on Y scales 39Y1 and 39Y2, but it is desirable to focus on a surface farther away than the cover glass surface, such as the diffraction grating surface. In this case, even when there is foreign matter (dust) such as particles on the surface of the cover glass, the cover glass surface is a surface defocused by the thickness of the cover glass, so the reflected light from the foreign matter is less likely to be received and to affect the measurement.

In the above embodiment, a surface position measurement system is employed in which a plurality of Z heads are arranged outside (above) wafer stage WST within the operating range of wafer stage WST (the range within which it actually moves during operation of the apparatus) and detect the Z position of the surface (Y scales 39Y1, 39Y2) of wafer table WTB; however, the present invention is not limited to this. For example, instead of surface position measurement system 180, a detection apparatus may be employed in which a plurality of Z heads are arranged on the upper surface of the moving body and a reflecting surface that reflects the probe beams from the Z heads is provided outside the moving body.

  In the above embodiment, the case is illustrated in which the encoder system is configured such that the grating portions (X scales, Y scales) are provided on the wafer table (wafer stage) and the X heads and Y heads are arranged outside the wafer stage so as to face the grating portions; however, the present invention is not limited to this, and an encoder system may be employed in which encoder heads are provided on the moving body and a two-dimensional grating (or two-dimensionally arranged one-dimensional grating portions) is disposed outside the moving body so as to face the encoder heads. In this case, when the Z heads are also arranged on the upper surface of the moving body, the two-dimensional grating (or the two-dimensionally arranged one-dimensional grating portions) may also serve as the reflecting surface that reflects the probe beams from the Z heads.

  In the above embodiment, as shown in FIG. 7, the case has been described in which each Z head includes a sensor main body ZH (first sensor), containing a focus sensor FS, that is driven in the Z-axis direction by a drive unit (not shown), and a measurement unit ZE (second sensor) that measures the displacement of the first sensor (sensor main body ZH) in the Z-axis direction; however, the present invention is not limited to this. That is, in the Z head (sensor head), the first sensor itself does not necessarily have to be movable in the Z-axis direction; it suffices that a movable member constituting part of the first sensor (for example, part of the focus sensor described above) moves in accordance with the movement of the moving body in the Z-axis direction so that the optical positional relationship between the first sensor and the surface of the measurement object (for example, the conjugate relationship with the light receiving surface (detection surface) of the light receiving element in the first sensor) is maintained. In that case, the second sensor measures the displacement of the movable member in its movement direction from a reference position. Of course, when the sensor head is provided on the moving body, the movable member may be moved in accordance with a change in the position of the moving body in the direction perpendicular to the two-dimensional plane so as to maintain the optical positional relationship between the first sensor and its measurement object, for example the above-described two-dimensional grating (or two-dimensionally arranged one-dimensional grating portions). Alternatively, a Z head to which the first sensor is fixed, or a sensor head that includes only the first sensor and does not include the drive unit (not shown) and the second sensor, can be used instead of the Z head. Besides the optical pickup type focus sensor, an optical displacement sensor head capable of measuring the displacement of an object may also be used instead of the Z head. Even when such a head is used, the correction of the position measurement error of the measurement object due to the change in the wavelength of the measurement beam described in the above embodiment is effective.

  In the above embodiment, the lower surface of nozzle unit 32 and the lower end surface of the tip optical element of projection optical system PL are substantially flush with each other; however, the present invention is not limited to this, and, for example, the lower surface of nozzle unit 32 may be arranged nearer to the image plane (that is, the wafer) of projection optical system PL than the exit surface of the tip optical element. That is, local liquid immersion device 8 is not limited to the above-described structure, and configurations such as those described in, for example, European Patent Publication No. 1420298, International Publication No. 2004/055803, International Publication No. 2004/057590, International Publication No. 2005/029559 (corresponding US Patent Publication No. 2006/0231206), International Publication No. 2004/086468 (corresponding US Patent Publication No. 2005/0280791), and JP-A-2004-289126 (corresponding US Patent No. 6,952,253) can also be used. Further, as disclosed in, for example, International Publication No. 2004/019128 (corresponding US Patent Publication No. 2005/0248856), in addition to the optical path on the image plane side of the tip optical element, the optical path on the object plane side of the tip optical element may also be filled with liquid. Furthermore, a thin film having a lyophilic property and/or a dissolution preventing function may be formed on part (including at least the contact surface with the liquid) or all of the surface of the tip optical element. Since quartz has a high affinity for the liquid, it does not require a dissolution preventing film, whereas for fluorite it is preferable to form at least a dissolution preventing film.

  In the above embodiment, pure water (water) is used as the liquid, but the present invention is not limited to this. As the liquid, a safe liquid that is chemically stable and has a high transmittance for the illumination light IL, such as a fluorine-based inert liquid, may be used. As this fluorine-based inert liquid, for example, Fluorinert (trade name of 3M, USA) can be used. This fluorine-based inert liquid is also excellent from the viewpoint of cooling effect. Further, a liquid having a refractive index higher than that of pure water (refractive index about 1.44), for example 1.5 or more, may be used. Examples of such liquids include predetermined liquids having a C−H bond or an O−H bond, such as isopropanol having a refractive index of about 1.50 and glycerol (glycerin) having a refractive index of about 1.61, predetermined liquids (organic solvents) such as hexane, heptane, and decane, and decalin (decahydronaphthalene) having a refractive index of about 1.60. Alternatively, any two or more of these liquids may be mixed, or at least one of these liquids may be added to (mixed with) pure water. Alternatively, the liquid may be pure water to which a base or an acid such as H+, Cs+, K+, Cl−, SO4(2−), or PO4(2−) is added (mixed), or pure water to which fine particles of Al oxide or the like are added (mixed). These liquids can transmit ArF excimer laser light. As the liquid, one that has a small light absorption coefficient and small temperature dependence and is stable with respect to the projection optical system (tip optical member) and/or the photosensitive material (or the protective film (topcoat film) or antireflection film) coated on the wafer surface is preferable. When an F2 laser is used as the light source, Fomblin oil may be selected. Furthermore, as the liquid, one having a higher refractive index for the illumination light IL than pure water, for example a refractive index of about 1.6 to 1.8, may be used. It is also possible to use a supercritical fluid as the liquid. Further, the tip optical element of projection optical system PL may be formed of, for example, quartz (silica) or a single crystal material of a fluoride compound such as calcium fluoride (fluorite), barium fluoride, strontium fluoride, lithium fluoride, or sodium fluoride, or may be formed of a material having a refractive index higher than that of quartz or fluorite (for example, 1.6 or more). As materials having a refractive index of 1.6 or more, for example, sapphire and germanium dioxide disclosed in International Publication No. 2005/059617, and potassium chloride (refractive index about 1.75) disclosed in International Publication No. 2005/059618, can be used.

  In the above embodiment, the recovered liquid may be reused; in this case, it is desirable to provide a filter for removing impurities from the recovered liquid in the liquid recovery device, the recovery pipe, or the like.

  In the above embodiment, the case where the exposure apparatus is a liquid immersion type exposure apparatus has been described; however, the present invention is not limited to this and can also be employed in a dry type exposure apparatus that exposes wafer W without using liquid (water).

  In the above embodiment, the case where the present invention is applied to a scanning exposure apparatus of the step-and-scan method or the like has been described; however, the present invention is not limited to this and may also be applied to a stationary exposure apparatus such as a stepper. The present invention can also be applied to a step-and-stitch reduction projection exposure apparatus that combines a shot area and a shot area, a proximity type exposure apparatus, or a mirror projection aligner. Furthermore, as disclosed in, for example, JP-A-10-163099 and JP-A-10-214783 (corresponding US Patent No. 6,590,634), JP-T-2000-505958 (corresponding US Patent No. 5,969,441), US Patent No. 6,208,407, and the like, the present invention can also be applied to a multi-stage type exposure apparatus having a plurality of wafer stages WST.

  In addition, the projection optical system in the exposure apparatus of the above embodiment may be not only a reduction system but also a unity magnification or enlargement system, and projection optical system PL may be not only a refraction system but also a reflection system or a catadioptric system; the projected image may be either an inverted image or an erect image. Further, although the exposure area IA irradiated with the illumination light IL via projection optical system PL is an on-axis area that includes optical axis AX within the field of projection optical system PL, the exposure area may instead be an off-axis area that does not include optical axis AX, as in the so-called in-line catadioptric system disclosed in, for example, International Publication No. 2004/107011, in which an optical system having a plurality of reflecting surfaces and forming an intermediate image at least once (a reflecting system or a catadioptric system) is provided in a part of the optical system, which has a single optical axis. In addition, the illumination area and exposure area described above are rectangular in shape, but the shape is not limited to this and may be, for example, an arc, a trapezoid, or a parallelogram.

The light source of the exposure apparatus of the above embodiment is not limited to an ArF excimer laser; a pulsed laser light source such as a KrF excimer laser (output wavelength 248 nm), F2 laser (output wavelength 157 nm), Ar2 laser (output wavelength 126 nm), or Kr2 laser (output wavelength 146 nm), or an ultra-high pressure mercury lamp that emits a bright line such as the g-line (wavelength 436 nm) or i-line (wavelength 365 nm), can also be used. A harmonic generator of a YAG laser or the like can also be used. In addition, as disclosed in, for example, International Publication No. 99/46835 (corresponding US Patent No. 7,023,610), a harmonic may be used as vacuum ultraviolet light, obtained by amplifying a single-wavelength laser beam in the infrared or visible region oscillated from a DFB semiconductor laser or a fiber laser with a fiber amplifier doped with, for example, erbium (or both erbium and ytterbium), and converting it into ultraviolet light using a nonlinear optical crystal.

  In the above embodiment, it goes without saying that the illumination light IL of the exposure apparatus is not limited to light having a wavelength of 100 nm or more, and light having a wavelength of less than 100 nm may be used. For example, in recent years, in order to expose patterns of 70 nm or less, development is under way of an EUV exposure apparatus that uses EUV (Extreme Ultraviolet) light in a soft X-ray region (for example, a wavelength region of 5 to 15 nm) generated with an SOR or a plasma laser as a light source, an all-reflection reduction optical system designed under that exposure wavelength (for example, 13.5 nm), and a reflective mask. In this apparatus, since a configuration in which scanning exposure is performed by synchronously scanning the mask and the wafer using arc illumination is conceivable, the present invention can also be suitably applied to such an apparatus. In addition, the present invention can be applied to an exposure apparatus that uses a charged particle beam such as an electron beam or an ion beam.

  In the above embodiment, a light-transmitting mask (reticle) in which a predetermined light-shielding pattern (or a phase pattern or a dimming pattern) is formed on a light-transmitting substrate is used. Instead of this reticle, as disclosed in, for example, U.S. Pat. No. 6,778,257, an electronic mask that forms a transmission pattern, a reflection pattern, or a light emission pattern based on electronic data of the pattern to be exposed (also called a variable shaped mask, an active mask, or an image generator; for example, a DMD (Digital Micro-mirror Device), which is a kind of non-emissive image display element (spatial light modulator)) may be used.

  Further, the present invention can also be applied to an exposure apparatus (lithography system) that forms line-and-space patterns on a wafer by forming interference fringes on the wafer, as disclosed in, for example, International Publication No. 2001/035168.

  Further, the present invention can also be applied to an exposure apparatus that combines the patterns of two reticles on a wafer via a projection optical system and performs double exposure of one shot area on the wafer almost simultaneously by scanning exposure, as disclosed in, for example, Published Japanese Translation of PCT Application No. 2004-51850 (corresponding U.S. Pat. No. 6,611,316).

  Further, the apparatus for forming a pattern on an object is not limited to the exposure apparatus (lithography system) described above, and the present invention can be applied to an apparatus for forming a pattern on an object by, for example, an ink jet method.

  Note that the object on which a pattern is to be formed (the object to be exposed, to which the energy beam is irradiated) in the above embodiment is not limited to a wafer, and may be another object such as a glass plate, a ceramic substrate, a film member, or a mask blank.

  The use of the exposure apparatus is not limited to exposure apparatuses for semiconductor manufacturing. The present invention can also be widely applied to, for example, a liquid crystal exposure apparatus that transfers a liquid crystal display element pattern onto a rectangular glass plate, and to exposure apparatuses for manufacturing organic EL devices, thin-film magnetic heads, image sensors (CCDs and the like), micromachines, DNA chips, and the like. Further, the present invention can also be applied to an exposure apparatus that transfers a circuit pattern onto a glass substrate, a silicon wafer, or the like in order to manufacture not only microdevices such as semiconductor elements but also reticles or masks used in light exposure apparatuses, EUV exposure apparatuses, X-ray exposure apparatuses, electron beam exposure apparatuses, and the like.

  The moving body drive system and the moving body driving method of the present invention are not limited to the exposure apparatus, and can be widely applied to apparatuses including a moving body, such as a stage, that moves within a two-dimensional plane, for example other substrate processing apparatuses (such as a laser repair apparatus and a substrate inspection apparatus), an apparatus for positioning a sample in other precision machines, and a wire bonding apparatus.

  A semiconductor device is manufactured through a step of designing the function and performance of the device, a step of forming a wafer from a silicon material, a step of forming a pattern on the wafer by performing exposure with the exposure apparatus of the above embodiment, a step of developing the wafer on which the pattern has been formed, a step of etching the developed wafer, a device assembly step (including a dicing process, a bonding process, and a packaging process), an inspection step, and the like.
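For orientation only, the sketch below restates the manufacturing flow above as an ordered list of steps; the wording of each step simply mirrors the preceding sentence and carries no additional disclosure.

```python
# Minimal restatement of the device manufacturing flow described above.
DEVICE_MANUFACTURING_STEPS = [
    "design the function and performance of the device",
    "form a wafer from a silicon material",
    "expose the wafer with the exposure apparatus of the embodiment to form a pattern",
    "develop the exposed wafer",
    "etch the developed wafer",
    "assemble the device (dicing, bonding, packaging)",
    "inspect the device",
]

for i, step in enumerate(DEVICE_MANUFACTURING_STEPS, start=1):
    print(f"{i}. {step}")
```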

  As described above, the moving body driving system and the moving body driving method of the present invention are suitable for driving a moving body within a moving surface. The pattern forming apparatus and the pattern forming method of the present invention are suitable for forming a pattern on an object. The exposure method, exposure apparatus, and device manufacturing method of the present invention are suitable for manufacturing micro devices.
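As one way to picture how such a surface position measurement system could compensate for a wavelength change of its probe beams, the sketch below converts environment-sensor readings (temperature, atmospheric pressure, humidity) into an air refractive index with an Edlén-type approximation, rescales the Z-head readings accordingly, and fits the stage Z position and tilt from several heads. The formula coefficients, the head layout, and every function name are illustrative assumptions, not the implementation disclosed in this application.

```python
# Hedged sketch of wavelength-change compensation for a multi-head surface
# position (Z head) measurement system.  Coefficients, head layout and all
# names below are illustrative assumptions, not values from this publication.
import numpy as np


def air_refractive_index(temp_c: float, pressure_kpa: float, humidity_pct: float) -> float:
    """Simplified Edlen-type estimate of the refractive index of air."""
    return (1.0
            + 7.86e-4 * pressure_kpa / (273.0 + temp_c)
            - 1.5e-11 * humidity_pct * (temp_c ** 2 + 160.0))


def compensate_z_readings(raw_z: np.ndarray, n_ref: float, n_now: float) -> np.ndarray:
    """Rescale Z-head readings that were converted to length with the reference
    (calibration) wavelength, to account for the probe-beam wavelength in the
    current atmosphere."""
    return raw_z * (n_ref / n_now)


def fit_stage_attitude(head_xy: np.ndarray, z: np.ndarray):
    """Least-squares fit of z = a*x + b*y + c over several heads facing the
    stage surface; returns (z_offset, slope_x, slope_y)."""
    A = np.column_stack([head_xy[:, 0], head_xy[:, 1], np.ones(len(z))])
    a, b, c = np.linalg.lstsq(A, z, rcond=None)[0]
    return c, a, b


if __name__ == "__main__":
    n_ref = air_refractive_index(23.0, 101.3, 45.0)   # assumed calibration conditions
    n_now = air_refractive_index(23.4, 100.8, 50.0)   # assumed current sensor output
    head_xy = np.array([[-0.1, -0.1], [0.1, -0.1], [0.0, 0.1]])  # assumed head positions [m]
    raw_z = np.array([12.0e-6, 12.4e-6, 11.8e-6])                # example raw readings [m]
    z0, slope_x, slope_y = fit_stage_attitude(head_xy, compensate_z_readings(raw_z, n_ref, n_now))
    print(f"Z = {z0:.3e} m, dz/dx = {slope_x:.3e}, dz/dy = {slope_y:.3e}")
```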

Brief Description of the Drawings

FIG. 1 is a view schematically showing the configuration of an exposure apparatus according to one embodiment.
FIG. 2 is a plan view showing the stage device of FIG. 1.
FIG. 3 is a plan view showing the arrangement of various measuring devices (encoders, alignment systems, a multipoint AF system, Z heads, and the like) provided in the exposure apparatus of FIG. 1.
FIG. 4A is a plan view showing wafer stage WST, and FIG. 4B is a schematic sectional side view showing a part of wafer stage WST.
FIG. 5A is a plan view showing measurement stage MST, and FIG. 5B is a schematic sectional side view showing a part of measurement stage MST.
FIG. 6 is a block diagram schematically showing the configuration of a control system of the exposure apparatus according to the embodiment.
FIG. 7 is a view schematically showing an example of the configuration of a Z head.
FIG. 8A is a view showing an example of the configuration of the focus sensor, and FIGS. 8B and 8C are views for explaining the shape and function of the cylindrical lens in FIG. 8A.
FIG. 9A is a view showing how the detection area of the quadrant light-receiving element is divided, and FIGS. 9B, 9C, and 9D are views showing the cross-sectional shape of the reflected beam LB 2 on the detection surface in a front-focus state, an ideal focus state, and a rear-focus state, respectively.
FIGS. 10A to 10C are views for explaining focus mapping performed in the exposure apparatus according to the embodiment.
FIGS. 11A and 11B are views for explaining focus calibration performed in the exposure apparatus according to the embodiment.
FIGS. 12A and 12B are views for explaining AF sensor offset correction performed in the exposure apparatus according to the embodiment.
FIG. 13 is a view showing the state of the wafer stage while step-and-scan exposure of the wafer on the wafer stage is being performed.
FIG. 14 is a view showing the state of both stages at the time of wafer unloading (when the measurement stage arrives at the position where Sec-BCHK (interval) is performed).
FIG. 15 is a view showing the state of both stages at the time of wafer loading.
FIG. 16 is a view showing the state of both stages at the time of switching from stage servo control by the interferometers to stage servo control by the encoders (when the wafer stage moves to the position where the first-half processing of Pri-BCHK is performed).
FIG. 17 is a view showing the state of the wafer stage and the measurement stage when the alignment marks attached to three first alignment shot areas are detected simultaneously using alignment systems AL1, AL2 2 and AL2 3.
FIG. 18 is a view showing the state of the wafer stage when the first-half processing of focus calibration is performed.
FIG. 19 is a view showing the state of the wafer stage when the alignment marks attached to five second alignment shot areas are measured simultaneously using alignment systems AL1 and AL2 1 to AL2 4.
FIG. 20 is a view showing the state of the wafer stage and the measurement stage when at least one of the latter-half processing of Pri-BCHK and the latter-half processing of focus calibration is performed.
FIG. 21 is a view showing the state of the wafer stage when the alignment marks attached to five third alignment shot areas are detected simultaneously using alignment systems AL1 and AL2 1 to AL2 4.
FIG. 22 is a view showing the state of the wafer stage and the measurement stage when the alignment marks attached to three fourth alignment shot areas are detected simultaneously using alignment systems AL1, AL2 2 and AL2 3.
FIG. 23 is a view showing the state of the wafer stage and the measurement stage when focus mapping is finished.
FIGS. 24A and 24B are views for explaining a method of calculating the Z position and the amount of tilt of wafer stage WST using the measurement results of the Z heads.
FIG. 25 is a view for explaining correction of the wavelength change of the probe beam of the Z head.

Explanation of symbols

20 ... main control unit, 34 ... memory, 39Y 1, 39Y 2 ... Y scales, 50 ... stage device, 72a to 72d ... Z heads, 74 1 to 74 5 ... Z heads, 76 1 to 76 5 ... Z heads, 100 ... exposure apparatus, 118 ... interferometer system, 150 ... encoder system, 180 ... surface position measurement system, WST ... wafer stage, WTB ... wafer table, FS ... focus sensor, ZH ... sensor main body, ZE ... measurement part, RST ... reticle stage, PL ... projection optical system, W ... object.

Claims (15)

  1. A moving body driving method for driving a moving body substantially along a two-dimensional plane,
    the method comprising: measuring position information of one surface of the moving body in a direction orthogonal to the two-dimensional plane using a plurality of optical sensor heads; and driving the moving body, based on the measurement information and information on a change in wavelength of a measurement beam of the sensor heads, in at least one of a direction orthogonal to the two-dimensional plane and a tilt direction with respect to the two-dimensional plane.
  2.   The moving body driving method according to claim 1, wherein a physical quantity related to the wavelength of the measurement beam is measured prior to driving the moving body in at least one of the direction orthogonal to the two-dimensional plane and the tilt direction with respect to the two-dimensional plane.
  3.   The moving body driving method according to claim 2, wherein the physical quantity includes at least one of a temperature, an atmospheric pressure, a humidity, and a refractive index of an optical path of a measurement beam of the sensor head or an ambient atmosphere.
  4.   The moving body driving method according to claim 1, wherein the driving drives the moving body in at least one of the direction orthogonal to the plane and the tilt direction with respect to the plane while compensating for the influence of the change in wavelength on the measurement information.
  5.   5. The moving body driving method according to claim 4, wherein the compensation includes changing a wavelength of a measurement beam of the sensor head.
  6. A pattern forming method comprising:
    a step of placing an object on a moving body movable along a moving surface; and
    a step of driving the moving body by the moving body driving method according to claim 1 to form a pattern on the object.
  7. A device manufacturing method including a pattern forming step,
    wherein, in the pattern forming step, a pattern is formed on a substrate using the pattern forming method according to claim 6.
  8. An exposure method for forming a pattern on an object by irradiation with an energy beam,
    wherein, for relative movement between the energy beam and the object, a moving body on which the object is placed is driven using the moving body driving method according to claim 1.
  9. A moving body drive system for driving a moving body substantially along a two-dimensional plane, comprising:
    a surface position measurement system having a plurality of sensor heads that are two-dimensionally arranged in a plane parallel to the two-dimensional plane and that measure position information of one surface of the moving body in a direction orthogonal to the two-dimensional plane; and
    a control device that measures position information of one surface of the moving body in the direction orthogonal to the two-dimensional plane using the plurality of sensor heads of the surface position measurement system, and that drives the moving body in at least one of a direction orthogonal to the two-dimensional plane and a tilt direction with respect to the two-dimensional plane, based on the measurement information and information on a change in wavelength of a measurement beam of the sensor heads.
  10.   The moving body drive system according to claim 9, further comprising a measurement device that measures a physical quantity related to a wavelength of the measurement beam.
  11.   The moving body drive system according to claim 10, wherein the measurement device measures at least one of a temperature, an atmospheric pressure, a humidity, and a refractive index of the optical path of the measurement beam of the sensor head or of the surrounding atmosphere.
  12.   The moving body drive system according to any one of claims 9 to 11, wherein the control device drives the moving body in at least one of the direction orthogonal to the two-dimensional plane and the tilt direction with respect to the two-dimensional plane while compensating for the influence of the change in wavelength on the measurement information.
  13. The moving body drive system according to any one of claims 9 to 12, wherein the sensor head is capable of changing the wavelength of the measurement beam, and
    the control device changes the wavelength of the measurement beam of the sensor head.
  14. A pattern forming apparatus comprising: a moving body on which an object is placed and which can move along the moving surface while holding the object; and
    the moving body drive system according to any one of claims 9 to 13, which drives the moving body to form a pattern on the object.
  15. An exposure apparatus that forms a pattern on an object by irradiation with an energy beam, comprising:
    a patterning device that irradiates the object with the energy beam; and
    the moving body drive system according to any one of claims 9 to 13,
    wherein, for relative movement between the energy beam and the object, the moving body drive system drives a moving body on which the object is placed.
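Claims 5 and 13 above recite compensating by changing the wavelength of the measurement beam itself. The sketch below illustrates that alternative under the assumption of a wavelength-tunable source: the vacuum wavelength is retuned so that the wavelength in the ambient atmosphere stays at its calibration value. The class, the interface, and the numbers are hypothetical.

```python
# Hedged sketch of the alternative compensation of claims 5 and 13: retuning the
# light source so that the probe-beam wavelength *in air* stays constant.
# The tunable-source interface and all numbers are hypothetical.

def retuned_vacuum_wavelength(lambda_air_target_nm: float, n_air_now: float) -> float:
    """Vacuum wavelength the source must emit so that lambda_air stays at the target."""
    return lambda_air_target_nm * n_air_now


class TunableLaserDiode:
    """Stand-in for a wavelength-tunable source; a real head would expose a
    temperature- or current-tuning interface instead of this simple setter."""
    def __init__(self, vacuum_wavelength_nm: float):
        self.vacuum_wavelength_nm = vacuum_wavelength_nm

    def set_vacuum_wavelength(self, nm: float) -> None:
        self.vacuum_wavelength_nm = nm


if __name__ == "__main__":
    LAMBDA_AIR_CAL_NM = 659.820          # assumed calibration wavelength in air
    n_now = 1.000271                     # assumed value from the environment sensor
    source = TunableLaserDiode(660.0)
    source.set_vacuum_wavelength(retuned_vacuum_wavelength(LAMBDA_AIR_CAL_NM, n_now))
    print(f"new vacuum wavelength: {source.vacuum_wavelength_nm:.4f} nm")
```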
JP2007219080A 2007-08-24 2007-08-24 Moving body driving method and moving body driving system, pattern forming method and device, exposure and device, and device manufacturing method Pending JP2009054730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007219080A JP2009054730A (en) 2007-08-24 2007-08-24 Moving body driving method and moving body driving system, pattern forming method and device, exposure and device, and device manufacturing method

Publications (1)

Publication Number Publication Date
JP2009054730A true JP2009054730A (en) 2009-03-12

Family

ID=40505562

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007219080A Pending JP2009054730A (en) 2007-08-24 2007-08-24 Moving body driving method and moving body driving system, pattern forming method and device, exposure and device, and device manufacturing method

Country Status (1)

Country Link
JP (1) JP2009054730A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07106219A (en) * 1993-09-29 1995-04-21 Sony Corp Aligner, original optical disc aligner and semiconductor aligner
JPH09199403A * 1996-01-14 1997-07-31 Nikon Corp Projection aligner
JP2004301825A (en) * 2002-12-10 2004-10-28 Nikon Corp Surface position detection device, exposure method and method for manufacturing device
JP2006108681A (en) * 2004-10-05 2006-04-20 Asml Netherlands Bv Lithography apparatus and position determination method
JP2006191079A (en) * 2004-12-30 2006-07-20 Asml Netherlands Bv Lithography equipment and device manufacture method
WO2006100076A2 (en) * 2005-03-23 2006-09-28 Asml Netherlands B.V. Lithographic apparatus and device manufacturing method
JP2006317454A (en) * 2005-05-13 2006-11-24 Vistec Semiconductor Systems Gmbh Measurement device and method for determining relative position of positioning table arranged movable in at least one direction
WO2007007549A1 (en) * 2005-07-08 2007-01-18 Nikon Corporation Surface position detection apparatus, exposure apparatus, and exposure method

Similar Documents

Publication Publication Date Title
EP2071612B1 (en) Mobile body drive method and mobile body drive system, pattern formation method and apparatus, exposure method and apparatus, and device manufacturing method
KR101442381B1 (en) Moving body drive method, moving body drive system, pattern formation method, pattern formation device, exposure method, exposure device, and device fabrication method
US10197924B2 (en) Movable body drive method and movable body drive system, pattern formation method and apparatus, exposure method and apparatus, device manufacturing method, and calibration method
JP6508285B2 (en) Exposure apparatus, exposure method, and device manufacturing method
TWI609252B (en) Moving body driving system and moving body driving method, pattern forming apparatus and method, exposure apparatus and method, element manufacturing method, and determination method
US8860925B2 (en) Movable body drive method and movable body drive system, pattern formation method and apparatus, exposure method and apparatus, and device manufacturing method
EP3115844B1 (en) Exposure apparatus, exposure method and device manufacturing method
EP2003681B1 (en) Measuring apparatus, measuring method, pattern forming apparatus, pattern forming method, and device manufacturing method
EP3327507B1 (en) Exposure apparatus, exposure method, and device manufacturing method
KR101477318B1 (en) Movable body drive method and movable body drive system, pattern formation method and apparatus, exposure method and apparatus, position control method and position control system, and device manufacturing method
US8665455B2 (en) Movable body apparatus, pattern formation apparatus and exposure apparatus, and device manufacturing method
JP5787001B2 (en) exposure apparatus, exposure method, and device manufacturing method
KR20100047182A (en) Position measuring system, exposure device, position measuring method, exposure method, device manufacturing method, tool, and measuring method
TWI454852B (en) A moving body system and a moving body driving method, a pattern forming apparatus and a pattern forming method, an exposure apparatus and an exposure method, and an element manufacturing method
TWI525395B (en) Mobile body driving method and moving body driving system, pattern forming method and apparatus, exposure method and apparatus, and component manufacturing method
WO2009084196A1 (en) Moving body driving system, pattern forming apparatus, exposure apparatus, exposure method and device manufacturing method
US8547527B2 (en) Movable body drive method and movable body drive system, pattern formation method and pattern formation apparatus, and device manufacturing method
JP5088588B2 (en) Exposure apparatus, exposure method, and device manufacturing method
US8098362B2 (en) Detection device, movable body apparatus, pattern formation apparatus and pattern formation method, exposure apparatus and exposure method, and device manufacturing method
TWI475336B (en) Mobile body driving method and moving body driving system, pattern forming method and apparatus, exposure method and apparatus, and component manufacturing method
US8422015B2 (en) Movable body apparatus, pattern formation apparatus and exposure apparatus, and device manufacturing method
JP5071894B2 (en) Stage apparatus, pattern forming apparatus, exposure apparatus, stage driving method, exposure method, and device manufacturing method
US8792086B2 (en) Movable body drive method and movable body drive system, and pattern formation method and pattern formation apparatus
TWI539239B (en) Mobile body driving method and moving body driving system, pattern forming method and apparatus, exposure method and apparatus, component manufacturing method, and measuring method
TWI428703B (en) Mobile body driving method and moving body driving system, pattern forming method and apparatus, exposure method and apparatus, and component manufacturing method

Legal Events

Date       Code  Title                                        Description
20100413   A621  Written request for application examination  Free format text: JAPANESE INTERMEDIATE CODE: A621
20110518   A521  Written amendment                            Free format text: JAPANESE INTERMEDIATE CODE: A523
20120426   A977  Report on retrieval                          Free format text: JAPANESE INTERMEDIATE CODE: A971007
20120507   A131  Notification of reasons for refusal          Free format text: JAPANESE INTERMEDIATE CODE: A131
20120614   A521  Written amendment                            Free format text: JAPANESE INTERMEDIATE CODE: A523
20130111   A131  Notification of reasons for refusal          Free format text: JAPANESE INTERMEDIATE CODE: A131
20130312   A521  Written amendment                            Free format text: JAPANESE INTERMEDIATE CODE: A523
20130930   A02   Decision of refusal                          Free format text: JAPANESE INTERMEDIATE CODE: A02