WO2022064707A1 - Analysis system - Google Patents
Analysis system Download PDF Info
- Publication number
- WO2022064707A1 (PCT/JP2020/036694)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sample
- observation
- analysis system
- multilayer structure
- coordinates
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B15/00—Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons
- G01B15/04—Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B15/00—Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J37/00—Discharge tubes with provision for introducing objects or material to be exposed to the discharge, e.g. for the purpose of examination or processing thereof
- H01J37/02—Details
- H01J37/22—Optical or photographic arrangements associated with the tube
- H01J37/222—Image processing arrangements associated with the tube
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B9/00—Measuring instruments characterised by the use of optical techniques
- G01B9/02—Interferometers
- G01B9/0209—Low-coherence interferometers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B9/00—Measuring instruments characterised by the use of optical techniques
- G01B9/04—Measuring microscopes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/22—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material
- G01N23/225—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material using electron or ion
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J37/00—Discharge tubes with provision for introducing objects or material to be exposed to the discharge, e.g. for the purpose of examination or processing thereof
- H01J37/02—Details
- H01J37/244—Detectors; Associated components or circuits therefor
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J37/00—Discharge tubes with provision for introducing objects or material to be exposed to the discharge, e.g. for the purpose of examination or processing thereof
- H01J37/26—Electron or ion microscopes; Electron or ion diffraction tubes
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L22/00—Testing or measuring during manufacture or treatment; Reliability measurements, i.e. testing of parts without further processing to modify the parts as such; Structural arrangements therefor
- H01L22/10—Measuring as part of the manufacturing process
- H01L22/12—Measuring as part of the manufacturing process for structural parameters, e.g. thickness, line width, refractive index, temperature, warp, bond strength, defects, optical inspection, electrical measurement of structural dimensions, metallurgic measurement of diffusions
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B2210/00—Aspects not specifically covered by any group under G01B, e.g. of wheel alignment, caliper-like sensors
- G01B2210/56—Measuring geometric parameters of semiconductor structures, e.g. profile, critical dimensions or trench depth
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2223/00—Investigating materials by wave or particle radiation
- G01N2223/60—Specific applications or type of materials
- G01N2223/611—Specific applications or type of materials patterned objects; electronic devices
- G01N2223/6116—Specific applications or type of materials patterned objects; electronic devices semiconductor wafer
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2223/00—Investigating materials by wave or particle radiation
- G01N2223/60—Specific applications or type of materials
- G01N2223/633—Specific applications or type of materials thickness, density, surface weight (unit area)
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J2237/00—Discharge tubes exposing object to beam, e.g. for analysis treatment, etching, imaging
- H01J2237/20—Positioning, supporting, modifying or maintaining the physical state of objects being observed or treated
- H01J2237/202—Movement
- H01J2237/20207—Tilt
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J2237/00—Discharge tubes exposing object to beam, e.g. for analysis treatment, etching, imaging
- H01J2237/244—Detection characterized by the detecting means
- H01J2237/2448—Secondary particle detectors
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J2237/00—Discharge tubes exposing object to beam, e.g. for analysis treatment, etching, imaging
- H01J2237/245—Detection characterised by the variable being measured
- H01J2237/24571—Measurements of non-electric or non-magnetic variables
- H01J2237/24578—Spatial variables, e.g. position, distance
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J2237/00—Discharge tubes exposing object to beam, e.g. for analysis treatment, etching, imaging
- H01J2237/26—Electron or ion microscopes
- H01J2237/2611—Stereoscopic measurements and/or imaging
Definitions
- the present invention relates to an analysis system, and more particularly to an analysis system capable of acquiring depth information of a multilayer structure contained in a sample.
- Conventionally, pattern depth information is obtained by observing the sample while shaving it away little by little with a focused ion beam (FIB), or by using a sample prepared by mechanical polishing.
- FIB focused ion beam
- In Patent Document 1, a FIB is used to process a sample into a tapered shape, an electron microscope is used to acquire a surface observation image of the formed slope, and a technique is disclosed for calculating the depth of a pattern based on the start position of the downhill slope, the scanning distance of the electron beam, and the tilt angle.
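The depth calculation attributed to Patent Document 1 reduces to simple trigonometry: a point reached a given scan distance from the start of the downhill slope lies at a depth equal to that distance times the tangent of the tilt angle. A minimal sketch (the function and parameter names are ours, not the patent's):

```python
import math

def pattern_depth(scan_distance_um: float, tilt_angle_deg: float) -> float:
    """Depth of a pattern exposed on a tapered slope.

    The slope descends at `tilt_angle_deg` from the horizontal, so a point
    reached after scanning `scan_distance_um` from the start of the downhill
    slope lies at depth = distance * tan(angle).
    """
    return scan_distance_um * math.tan(math.radians(tilt_angle_deg))

# Example: a point 2 um along the scan on a 5-degree taper
depth = pattern_depth(2.0, 5.0)
```

On a shallow taper the tangent factor is small, which is what spreads a thin stack over a wide, easily imaged slope.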
- The means using FIB can evaluate a pattern with high accuracy, but has problems such as a narrow processing area, a long evaluation time, and difficulty in reacquiring data. The means that predicts the tilt angle of the polished surface allows quick evaluation, but because the depth information of the pattern is obtained only by prediction, the accuracy of the evaluation value is low.
- The analysis system in one embodiment performs: (a) a step of irradiating a sample containing a multilayer structure with an electron beam from a first direction to acquire a first captured image of the sample viewed from the first direction; (b) a step of irradiating the sample with the electron beam from a second direction intersecting the first direction to acquire a second captured image of the sample viewed from the second direction; and (c) a step of acquiring the depth information of the multilayer structure using the first captured image, the second captured image, and sample information including the number of layers of the multilayer structure, the thickness of one layer or the thickness of each layer, and the depth at which the first layer of the multilayer structure starts.
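One way to read steps (a)-(c) is as a parallax measurement: a buried feature at depth d appears shifted by roughly d·tan(θ) between the first-direction image and an image taken from a direction tilted by θ. The sketch below inverts that relation and maps the resulting depth onto the known layer stack; this simple model and all names are our assumptions, not the patent's exact formulation:

```python
import math

def depth_from_two_views(shift_um, tilt_deg, first_layer_depth_um,
                         layer_thickness_um, n_layers):
    """Estimate a feature's depth and layer from its apparent shift.

    A feature at depth d below the surface appears displaced by roughly
    d * tan(tilt) between the first-direction image and the image taken
    from a direction tilted by `tilt_deg` (simple parallax model, used
    here only as an illustrative assumption).
    """
    depth = shift_um / math.tan(math.radians(tilt_deg))
    # Map the depth onto the stack: layer 1 starts at first_layer_depth_um,
    # with equal layer thicknesses; clamp to the valid layer range.
    layer = int((depth - first_layer_depth_um) // layer_thickness_um) + 1
    return depth, min(max(layer, 1), n_layers)
```

In practice the shift would be measured between corresponding features in the two captured images, e.g. by template matching.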
- The analysis system in another embodiment performs: (a) a step of irradiating a sample containing a multilayer structure with an electron beam from a first direction to acquire a first captured image of the sample viewed from the first direction; (b) a step of designating an observation range in the first captured image; (c) a step of focusing the electron beam from the first direction, using an objective lens, at a plurality of points of the sample within the designated observation range; (d) a step of acquiring, based on the focusing results of step (c), the distances between the objective lens and the focal position at the plurality of points and creating a WD profile that graphs those distances; and (e) a step of acquiring the depth information of the multilayer structure by collating the WD profile with sample information including the number of layers of the multilayer structure, the thickness of one layer or the thickness of each layer, and the depth at which the first layer starts.
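The collation in step (e) can be pictured as mapping each working distance (WD) measurement to a depth below the surface and then to the layer whose depth interval contains it. A minimal sketch, assuming the shallowest WD in the profile corresponds to the sample surface (the names are illustrative, not the patent's):

```python
def collate_wd_profile(wd_um, first_layer_depth_um, layer_thicknesses_um):
    """Collate a WD profile with known layer information.

    `wd_um` lists the working distance (objective lens to focal position)
    at points along the polished slope; a larger WD means a deeper point.
    Depth below the sample surface is taken as WD minus the smallest WD in
    the profile (an assumption for this sketch). Each point is mapped to
    the layer whose depth interval contains it, or None outside the stack.
    """
    surface_wd = min(wd_um)
    # Cumulative depth of each layer boundary, starting where layer 1 begins.
    boundaries = [first_layer_depth_um]
    for t in layer_thicknesses_um:
        boundaries.append(boundaries[-1] + t)
    result = []
    for wd in wd_um:
        depth = wd - surface_wd
        layer = None
        for i in range(len(layer_thicknesses_um)):
            if boundaries[i] <= depth < boundaries[i + 1]:
                layer = i + 1
                break
        result.append((depth, layer))
    return result
```

A point whose depth falls before the first boundary or past the last one gets no layer, which flags regions of the slope above or below the stack.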
- the depth information of the multi-layer structure can be acquired quickly and with high accuracy.
- FIG. 1 is a schematic diagram showing an example of the charged particle beam apparatus in Embodiment 1.
- A top view of the sample in Embodiment 1.
- A cross-sectional view of the sample in Embodiment 1.
- A top view of the sample in Embodiment 1.
- A cross-sectional view of the sample in Embodiment 1.
- A flowchart of the analysis system in Embodiment 1.
- Schematic diagrams showing the operation screens in Embodiment 1.
- FIG. 6 is a photographed image and a recording table of the pattern analysis in Embodiment 1.
- A flowchart of the analysis system in Embodiment 2.
- A flowchart of the analysis system in Embodiment 3.
- Schematic diagrams showing the operation screens in Embodiment 3.
- A recording table showing a recording example of each piece of information in Embodiment 3.
- A schematic diagram showing an example of the surface shape measuring apparatus in Embodiment 4.
- A flowchart of the analysis system in Embodiment 4.
- A schematic diagram showing the operation screen in Embodiment 4.
- the X direction, the Y direction, and the Z direction described in the present application intersect each other and are orthogonal to each other.
- the Z direction may be described as the vertical direction, the height direction, or the thickness direction of a certain structure.
- Embodiment 1 The analysis system according to the first embodiment will be described below. First, the charged particle beam apparatus 1 constituting a part of the analysis system will be described with reference to FIG. In FIG. 1, for example, a scanning electron microscope (SEM) is exemplified as the charged particle beam device 1.
- SEM scanning electron microscope
- The charged particle beam device 1 shown in FIG. 1 is a device for analyzing (observing and measuring) the sample SAM by irradiating the sample SAM arranged in the sample chamber 7 with the electron beam EB1 from the electron gun 3 provided inside the lens barrel 2.
- the charged particle beam device 1 includes a sample chamber 7 and a lens barrel 2 attached to the sample chamber 7 and constituting an electron beam column.
- The lens barrel 2 has an electron gun 3 capable of emitting the electron beam EB1, a condenser lens 4 for focusing the electron beam EB1, a deflection coil 5 for scanning the electron beam EB1, an objective lens 6 for focusing the electron beam EB1 onto the sample, and the like.
- a sample table (holder) 8 for mounting the sample SAM, a stage 9 for installing the sample table 8, a stage control device 10, a detector 11, and the like are provided inside the sample chamber 7.
- The sample chamber 7 is provided with an introduction/extraction port.
- The sample table 8 on which the sample SAM is mounted is conveyed into the sample chamber 7 via the introduction/extraction port and installed on the stage 9. When taking out the sample SAM, the sample table 8 on which the sample SAM is mounted is conveyed out of the sample chamber 7 via the same port.
- the stage control device 10 is connected to the stage 9 and can displace the position and orientation of the stage 9.
- the displacement of the stage 9 displaces the position and orientation of the sample SAM.
- The stage control device 10 has an XY-axis drive mechanism that can be driven in directions parallel to the mounting surface of the charged particle beam device 1, a Z-axis drive mechanism that can be driven perpendicular to that mounting surface, an R-axis drive mechanism that can be driven in the rotational direction, and a T-axis drive mechanism that can be driven in directions inclined with respect to the XY plane.
- Each of these drive mechanisms is used to bring an arbitrary part of the sample SAM and the sample table 8 installed on the stage 9 under analysis: the portion of the sample SAM to be analyzed is moved to the center of the imaging field of view and can be tilted in an arbitrary direction.
- the detector 11 can detect the secondary electron EM2 emitted from the sample SAM when the sample SAM is irradiated with the electron beam EB1 at the time of analysis of the sample SAM.
- the detector 11 may be provided inside the sample chamber 7 or inside the lens barrel 2.
- the charged particle beam device 1 includes a comprehensive control unit C0, and includes a display device 20 and an operation device 21 electrically connected to the comprehensive control unit C0 inside or outside the charged particle beam device 1.
- the display device 20 is, for example, a display
- the operation device 21 is, for example, a mouse and a keyboard.
- Various types of information are input to the comprehensive control unit C0 or output from the comprehensive control unit C0.
- the comprehensive control unit C0 has a scanning signal control unit C1, a stage control unit C2, and a calculation unit C3, and controls them. Therefore, in the present application, it may be described that the control performed by the scanning signal control unit C1, the stage control unit C2, and the calculation unit C3 is performed by the comprehensive control unit C0. Further, the comprehensive control unit C0 having the scanning signal control unit C1, the stage control unit C2, and the calculation unit C3 may be regarded as one control unit, and the comprehensive control unit C0 may be simply referred to as a “control unit”.
- the scanning signal control unit C1 is electrically connected to the electron gun 3, the condenser lens 4, the deflection coil 5, and the objective lens 6 to control their operations.
- the electron gun 3 receives a control signal from the scanning signal control unit C1 to generate an electron beam EB1, and the electron beam EB1 is irradiated toward the sample SAM.
- Each of the condenser lens 4, the deflection coil 5, and the objective lens 6 receives a control signal from the scanning signal control unit C1 and excites a magnetic field.
- the magnetic field of the condenser lens 4 causes the electron beam EB1 to be focused so as to have an appropriate beam diameter.
- the electron beam EB1 is deflected by the magnetic field of the deflection coil 5 and scanned two-dimensionally on the sample SAM.
- the magnetic field of the objective lens 6 causes the electron beam EB1 to be refocused on the sample SAM.
- the electron beam EB1 can be focused on the sample SAM.
- the stage control unit C2 is electrically connected to the stage control device 10 and has a function of controlling the operation of each drive mechanism of the stage control device 10 and always linking the field of view and the coordinates of the stage 9.
- the calculation unit C3 includes an image acquisition unit C4, an image combination unit C5, an instruction input unit C6, a storage unit C7, and a pattern shape analysis unit C8.
- The image acquisition unit C4 is electrically connected to the detector 11 and controls its operation. The image acquisition unit C4 processes the secondary electrons EM2 detected by the detector 11 as a signal and converts this signal into a captured image (image data). The captured image is output to the display device 20, where the user can confirm it.
- the image combining unit C5 can connect the above-mentioned captured images acquired by the image acquisition unit C4 to create, for example, a wide area image as shown in FIG. 6 described later.
- the wide area image is output to the display device 20, and the user can confirm the wide area image on the display device 20.
- the instruction input unit C6 receives the information input by the user on the display device 20 using the operation device 21.
- the storage unit C7 can store information such as the coordinates of the stage 9 and the acquired captured image (image data). In addition, each information is associated with each other.
- the pattern shape analysis unit C8 has a function of analyzing a plurality of pattern shapes included in the sample SAM.
- the calculation unit C3 uses the information received by the instruction input unit C6 and the information stored in the storage unit C7 to relate to stage coordinates, pattern shape analysis, depth information of the multi-layer structure, and the like, which will be described later. The calculation can be performed.
- FIG. 2Ba is a cross-sectional view of the sample SAM taken along the line A-A of FIG. 2Aa, cut at the observation surface 30.
- FIG. 2Bb is a cross-sectional view of the sample SAM taken along the line B-B of FIG. 2Ab, cut at the observation surface 30.
- The sample SAM in the first embodiment is, for example, a piece cut out from a part of a wafer on which various semiconductor devices are formed. The sample SAM therefore includes a semiconductor substrate, semiconductor elements such as transistors formed on the semiconductor substrate, a highly integrated large-scale integrated circuit (LSI) device composed of a plurality of transistors, a multilayer wiring layer including a plurality of gate electrodes, and interlayer insulating films formed between them.
- LSI large-scale integrated circuit
- the sample SAM has an upper surface TS and a lower surface BS on the opposite side of the upper surface TS.
- a part of the sample SAM is polished by a polishing device such as an ion milling device, a FIB or a dimple grinder.
- FIGS. 2Aa and 2Ba show a sample SAM polished by an ion milling device or a FIB.
- FIGS. 2Ab and 2Bb show a sample SAM polished by an ion milling device or a dimple grinder.
- an observation surface (polished surface) 30 forming an inclined surface is formed on a part of the upper surface TS of the sample SAM by this polishing treatment.
- FIGS. 2Aa and 2Ab also show an enlarged view of a part of the sample SAM including the observation surface 30.
- the sample SAM contains a plurality of patterns 32.
- Each of the plurality of patterns 32 is, for example, a semiconductor device having a columnar structure extending in the Z direction.
- each of the plurality of patterns 32 is, for example, a structure such as an LSI wiring or a transistor having a multilayer structure.
- a plurality of conductor layers such as the multilayer wiring layer are shown as a multilayer structure 31. That is, the sample SAM includes a plurality of conductor layers laminated in the first direction (Z direction), which is the direction from the upper surface TS of the sample SAM toward the lower surface BS of the sample SAM, as the multilayer structure 31. Further, although not shown in detail here, the multilayer structure 31 is formed around a plurality of patterns 32.
- the observation surface 30 is inclined from the upper surface TS of the sample SAM toward the lower surface BS of the sample SAM. More specifically, in cross-sectional view, the observation surface 30 forms an inclined surface that is continuously inclined from the upper surface TS toward the lower surface BS.
- the polishing process by the polishing device is performed so that all the layers of the multilayer structure 31 are polished, and the bottom portion of the observation surface 30 is located deeper than the bottom layer of the multilayer structure 31. Therefore, all the layers of the multilayer structure 31 are exposed on the observation surface 30 and the fractured surface.
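The condition that the bottom of the observation surface lies deeper than the bottom layer can be checked geometrically: the deepest point of a taper of a given length and angle must exceed the total depth of the stack. A minimal sketch of that check (the parameter names are ours, not the patent's):

```python
import math

def all_layers_exposed(taper_length_um, taper_angle_deg,
                       first_layer_depth_um, layer_thicknesses_um):
    """Check whether a polished taper exposes the whole multilayer stack.

    The deepest point of the observation surface is taper_length * tan(angle);
    every layer is exposed on the slope if this exceeds the depth of the
    bottom of the stack (a geometric sketch under idealized assumptions).
    """
    max_depth = taper_length_um * math.tan(math.radians(taper_angle_deg))
    stack_bottom = first_layer_depth_um + sum(layer_thicknesses_um)
    return max_depth > stack_bottom
```

This also shows why shallow tapers need a long polished region: halving the angle roughly doubles the taper length required to reach the same depth.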
- The analysis system includes, as the procedure for measuring the sample SAM, steps performed in the polishing apparatus, steps performed in the sample preparation apparatus, and steps performed in the charged particle beam apparatus 1. Accordingly, not only the charged particle beam device 1 but also the polishing device and the sample preparation device form part of the analysis system.
- step S1 the sample SAM is prepared.
- a sample SAM is prepared by cutting out a part of the wafer using a sample preparation device such as a diamond cutter.
- the cut out sample SAM is transferred from the sample preparation device to the polishing device.
- the polishing device is, for example, an ion milling device, a FIB, a dimple grinder, or the like.
- the observation surface 30 is formed on a part of the upper surface TS by polishing the upper surface TS of the sample SAM using a polishing device.
- the polished sample SAM is transferred from the polishing device to the sample preparation device.
- the sample preparation device is, for example, a FIB or an ion milling device.
- the sample SAM shown in FIG. 2Ba or FIG. 2Bb is produced by cutting the sample SAM on the observation surface 30 by the sample preparation device. Then, the cut sample SAM is mounted on the sample table 8.
- step S2 the sample SAM is installed.
- the sample table 8 on which the sample SAM is mounted is transported from the sample preparation device to the charged particle beam device 1.
- the sample table 8 on which the sample SAM is mounted is installed on the stage 9 so that the upper surface TS of the sample SAM faces the electron gun 3.
- the upper surface TS including the observation surface 30 is arranged perpendicular to the Z direction.
- step S3 the application is started.
- the application is started by the user performing an operation on the display device 20 using the operation device 21.
- the operation screen 40a is displayed on the display device 20 as shown in FIG.
- the operation screen 40a is mainly used for the user to input an instruction to the comprehensive control unit C0 and for the user to obtain each information from the comprehensive control unit C0.
- the user can switch between the display unit 41 for wide area image shooting, the display unit 42 for depth information acquisition, and the display unit 70 for pattern analysis.
- The display unit 41 for wide area image shooting includes a shot image display unit 43, a condition display unit 44, a capture button B1, a reference button B2, a button B3 for adding a position designation tool, and a button B4 for starting wide area image creation.
- the capture button B1 is used when irradiating the sample SAM with the electron beam EB1 and acquiring a captured image.
- the reference button B2 is used when outputting a captured image captured in the past to the captured image display unit 43.
- the button B3 for adding the position designation tool is used when adding the observation range 45 described later.
- the button B4 for starting wide area image creation is used when continuous shooting is performed in order to create a wide area image in step S6 described later.
- the condition display unit 44 displays shooting conditions such as start point coordinates, end point coordinates, magnification, and number of shots. Further, the condition display unit 44 is provided with a button B5 for determining the shooting conditions and a button B6 for setting further details of the shooting conditions.
- step S4 alignment and acquisition of the whole image which is a photographed image seen from the first direction (Z direction) are performed.
- the user performs alignment including focusing of the electron beam EB1 on the sample SAM and changing the magnification.
- the capture button B1 the sample SAM is irradiated with the electron beam EB1 from the first direction (Z direction), and the entire image including the observation surface 30 is acquired.
- the acquired overall image seen from the first direction (Z direction) is output from the comprehensive control unit C0 to the captured image display unit 43.
- the user can grasp the outline of the sample SAM.
- step S5 shooting conditions are set.
- The user drags the mouse (for example, the operation device 21) on the captured image display unit 43, so that the observation range 45 is designated on the whole image including the observation surface 30.
- the comprehensive control unit C0 converts the designated observation range 45 into the position coordinates of the sample SAM, and outputs the start point coordinates and the end point coordinates to the condition display unit 44.
- the user can additionally specify the observation range 45 by clicking the button B3 for adding the position specification tool.
- another observation range 45 that is offset in the Y direction with respect to the initially selected observation range 45 can be added.
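The conversion from a dragged rectangle to start and end point coordinates can be sketched as a pixel-to-stage mapping, assuming the stage coordinate of the image origin and the pixel scale are known from the field-of-view/stage linkage maintained by the stage control unit C2 (these names and units are illustrative, not the patent's):

```python
def drag_to_stage_coords(drag_px, stage_origin_um, um_per_px):
    """Convert a dragged rectangle on the captured image to stage coordinates.

    `drag_px` is ((x0, y0), (x1, y1)) in image pixels; `stage_origin_um` is
    the stage coordinate of the image's top-left corner and `um_per_px` the
    pixel scale, both assumed known from the stage/field-of-view linkage.
    Returns the start point and end point coordinates in micrometres.
    """
    (x0, y0), (x1, y1) = drag_px
    ox, oy = stage_origin_um
    start = (ox + min(x0, x1) * um_per_px, oy + min(y0, y1) * um_per_px)
    end = (ox + max(x0, x1) * um_per_px, oy + max(y0, y1) * um_per_px)
    return start, end
```

Sorting the corners means the same start/end pair is produced no matter which direction the user drags.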
- the comprehensive control unit C0 accepts the input imaging conditions and starts continuous imaging for creating a plurality of imaging images of the sample SAM viewed from the first direction (Z direction).
- step S6 when the user clicks the B4 button, a wide area image which is a photographed image of the sample SAM viewed from the first direction (Z direction) is acquired.
- the electron beam EB1 is irradiated to the upper surface TS (observation surface 30) of the sample SAM installed on the stage 9.
- The secondary electrons EM2 emitted from the sample SAM are detected as a signal.
- the image acquisition unit C4 of the comprehensive control unit C0 acquires a captured image viewed from the first direction (Z direction) based on the detected signal. By performing these operations in order for the observation target within the observation range 45, a plurality of captured images are acquired.
- The plurality of captured images acquired in step S6 are joined by the image combining unit C5 of the comprehensive control unit C0 to create a wide area image viewed from the first direction (Z direction).
- the created wide area image is output to the captured image display unit 43.
- the comprehensive control unit C0 acquires a plurality of captured images and wide area images based on the above signals detected by the detector 11.
- the comprehensive control unit C0 can associate the coordinates of the captured image with the coordinates of the stage 9. Therefore, the user can confirm the coordinate information of the target observation position and the like. These coordinates are stored in the storage unit C7 of the comprehensive control unit C0.
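The joining performed by the image combining unit C5 can be illustrated, in highly simplified form, as concatenating a grid of equally sized tiles into one mosaic; real stitching would also handle tile overlap and stage drift, which this sketch ignores:

```python
def stitch_tiles(tiles):
    """Join a grid of equally sized captured images into one wide area image.

    `tiles` is a 2-D list of tiles arranged row by row; each tile is itself
    a 2-D list of pixel values (a simplified stand-in for the image data
    handled by the image combining unit C5).
    """
    rows = []
    for tile_row in tiles:
        height = len(tile_row[0])  # rows of pixels per tile
        for y in range(height):
            row = []
            for tile in tile_row:
                row.extend(tile[y])  # append this tile's y-th pixel row
            rows.append(row)
    return rows
```

Because the stage coordinates of each tile are stored alongside the image data, any pixel of the mosaic can be traced back to a stage position.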
- step S6 presupposes that a wide area image is created; however, depending on the sample, a wide area image is not always necessary, and in some cases only one or a few locations need to be photographed. In that case, only the start point coordinates are displayed on the condition display unit 44, and no end point coordinates are displayed.
- the following description of wide area image creation includes the case of shooting at one or several places.
- step S7 the reference coordinates are specified.
- the user clicks on the first portion of the multilayer structure 31 exposed on the observation surface 30.
- the first place is, for example, the first layer of the multilayer structure 31.
- the comprehensive control unit C0 designates the first location as the reference coordinates (x1, y1) 46a viewed from the first direction (Z direction) in the wide area image.
- the designated reference coordinates (x1, y1) 46a are stored in the storage unit C7.
- the first layer of the multilayer structure 31 in the present application is the layer closest to the upper surface TS of the sample SAM and corresponds to the uppermost layer of the multilayer structure 31. The same applies wherever "the first layer of the multilayer structure 31" appears in the following description.
- when the multilayer structure can be clearly confirmed in the sample, it is easy to designate a layer suitable as a starting point, for example the first layer of the multilayer structure, as the reference coordinates.
- when the multilayer structure cannot be clearly confirmed, the position of a specific pattern or the position of a structure having another shape can be designated as the reference coordinates (x1, y1) 46a.
- step S8 the sample SAM is tilted.
- a control signal is transmitted from the stage control unit C2 of the comprehensive control unit C0 to the stage control device 10, and the T-axis drive mechanism of the stage control device 10 is driven. That is, the stage control device 10 is controlled so that the fractured surface of the sample SAM faces the electron gun 3 in the second direction intersecting the first direction, and the stage 9 on which the sample SAM is installed is tilted.
- the user can observe the fractured surface of the sample SAM from the second direction (Y direction).
- the stage 9 is tilted 90 degrees
- the first direction (Z direction) is orthogonal to the second direction (Y direction).
- although the first direction is described here as the Z direction and the second direction as the Y direction, the first direction and the second direction are not limited to the Z direction and the Y direction; any two directions that intersect each other may be used.
- the drive range of the T-axis drive mechanism of the stage control device 10 may be less than 90 degrees.
- in that case, the user takes the sample table 8 on which the sample SAM is mounted out of the sample chamber 7, removes the sample SAM from the sample table 8, and remounts the sample SAM on the sample table 8 tilted by 90 degrees.
- the sample table 8 on which the sample SAM is mounted is then returned to the sample chamber 7.
- step S9 alignment and acquisition of a cross-sectional image which is a photographed image of the sample SAM viewed from the second direction (Y direction) are performed by the same method as in step S4.
- the acquired cross-sectional image is output from the comprehensive control unit C0 to the captured image display unit 43.
- step S10 the reference coordinates are linked.
- the user switches from the display unit 41 for wide area image shooting to the display unit 42 for acquiring depth information on the operation screen 40a.
- the display unit 42 for acquiring depth information is provided with a captured image display unit 43, a capture button B1, a reference button B2, and a button B3 for adding a position designation tool, similarly to the display unit 41 for wide area image capture.
- the display unit 42 for acquiring depth information is provided with a movement condition display unit 47 and a layer information display unit 48.
- the movement condition display unit 47 is provided with a button B7 for moving to the reference position, a button B8 for linking with the first direction, and a button B9 for moving to the X coordinate. Further, the layer information display unit 48 can display layer information such as the number of layers, the thickness of one layer, and the depth at which the first layer starts.
- the photographed image display unit 43 displays a cross-sectional image of the sample SAM seen from the second direction (Y direction) acquired in step S9.
- the stage control unit C2 of the comprehensive control unit C0 controls the stage control device 10 so that the stage 9 moves to the X coordinate x1 of the reference coordinates (x1, y1) 46a.
- the X-coordinate position 49 of the stage 9 after movement is displayed on the captured image display unit 43.
- next, the reference Z coordinate z1 in the second direction (Y direction) is designated. That is, the user clicks on the second portion of the multilayer structure 31 that is exposed on the observation surface 30 in the cross-sectional image and corresponds to the coordinate x1 of the reference coordinates (x1, y1) 46a.
- the comprehensive control unit C0 designates the second location as the reference coordinates (x1, z1) 46b in the cross-sectional image, and associates the reference coordinates (x1, y1) 46a with the reference coordinates (x1, z1) 46b.
- the correspondence between the reference coordinates (x1, y1) 46a and the reference coordinates (x1, z1) 46b is stored in the storage unit C7.
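The bookkeeping of step S10 can be illustrated as follows: the top-view reference (x1, y1) and the cross-section reference (x1, z1) are linked through their shared X coordinate. The class below is an illustrative stand-in; the patent stores the correspondence in the storage unit C7 without specifying a data structure.

```python
# Hypothetical sketch of linking wide-area-image and cross-sectional-image
# reference coordinates through their shared X coordinate.

class ReferenceStore:
    def __init__(self):
        self._refs = {}

    def link(self, x, y, z):
        # One record per X coordinate: (x, y) from the wide area image,
        # (x, z) from the cross-sectional image.
        self._refs[x] = {"top_view": (x, y), "cross_section": (x, z)}

    def get(self, x):
        return self._refs[x]

store = ReferenceStore()
store.link(120.0, 45.0, 8.0)  # reference coordinates (x1, y1) and (x1, z1)
```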
- step S11 the observation coordinates (x2, y2) 46c are specified and the stage 9 is moved.
- the comprehensive control unit C0 displays the wide area image (FIG. 6) viewed from the first direction (Z direction) acquired in step S6 on the captured image display unit 43.
- the third place is a place different from the above-mentioned first place, for example, a layer different from the first layer of the multilayer structure 31.
- the comprehensive control unit C0 designates the third location as the observation coordinates (x2, y2) 46c viewed from the first direction (Z direction) in the wide area image.
- the comprehensive control unit C0 causes the movement condition display unit 47 to display the X coordinate x2 of the designated observation coordinates (x2, y2) 46c.
- the stage control unit C2 of the comprehensive control unit C0 controls the stage control device 10 so that the stage 9 moves to the X coordinate x2 of the observation coordinates (x2, y2) 46c.
- FIG. 9 shows the operation screen 40a after the stage 9 has moved. As shown in FIG. 9, the X-coordinate position 49 of the moved stage 9 is displayed on the captured image display unit 43.
- the Z coordinate z2 is designated by the user clicking on the multilayer structure 31 at the position overlapping the X-coordinate position 49 on the captured image display unit 43. That is, the user clicks on the fourth portion of the multilayer structure 31 that is exposed on the observation surface 30 in the cross-sectional image and corresponds to the coordinate x2 of the observation coordinates (x2, y2) 46c, whereby the Z coordinate z2 is designated.
- the comprehensive control unit C0 designates the above-mentioned fourth point as the observation coordinates (x2, z2) 46d in the cross-sectional image.
- the correspondence between the observation coordinates (x2, y2) 46c and the observation coordinates (x2, z2) 46d is stored in the storage unit C7.
- the comprehensive control unit C0 moves the stage 9 toward the designated observation coordinates (x2, z2) 46d.
- Alignment is performed in step S12.
- the user focuses the electron beam EB1 on the sample SAM, changes the magnification, and the like in order to perform detailed observation of the target observation coordinates (x2, z2) 46d.
- step S13 the captured image viewed from the second direction (Y direction) is acquired.
- after the alignment of step S12, when the user clicks the capture button B1, shooting is performed and a captured image viewed from the second direction (Y direction) is acquired.
- the acquired photographed image is stored in the storage unit C7.
- steps S12 and S13 are not essential from the viewpoint of acquiring the depth information of the multilayer structure 31, and may be omitted.
- step S14 the depth information of the multilayer structure 31 is acquired.
- the calculation unit C3 of the comprehensive control unit C0 calculates the distance in the depth direction (Z direction) of the observation coordinates (x2, z2) 46d from the reference coordinates (x1, z1) 46b.
- when the user inputs, to the layer information display unit 48, the information of the sample SAM including the number of layers of the multilayer structure 31, the thickness of one layer of the multilayer structure 31 or the thickness of each layer, and the depth at which the first layer of the multilayer structure 31 starts, the calculation unit C3 of the comprehensive control unit C0 can refer to this information.
- the number of layers of the observation coordinates (x2, z2) 46d from the reference coordinates (x1, z1) 46b is then calculated.
- further, the calculation unit C3 of the comprehensive control unit C0 calculates the depth and the number of layers of the observation coordinates (x2, z2) 46d from the upper surface TS of the sample SAM. That is, it calculates in which layer of the multilayer structure 31 the observation coordinates (x2, z2) 46d are located.
- in this way, the depth information of the multilayer structure 31 is acquired using the wide area image and the cross-sectional image. That is, the depth information of the multilayer structure 31 includes the depth and the number of layers of the observation coordinates (x2, z2) 46d from the reference coordinates (x1, z1) 46b, and the depth and the number of layers of the observation coordinates (x2, z2) 46d from the upper surface TS of the sample SAM. These pieces of information are stored in the storage unit C7.
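The depth and layer-number calculation of step S14 can be sketched as below. The layer-numbering convention (layer 1 = the layer closest to the upper surface TS) follows the description above; the rounding rule and the function names are assumptions for illustration, not the patent's exact method.

```python
# Hypothetical sketch of the calculation unit C3's depth computation.

def depth_from_reference(z_ref, z_obs):
    """Distance in the depth (Z) direction of the observation point
    from the reference point."""
    return z_obs - z_ref

def layer_number(depth_from_surface, first_layer_start, layer_thickness):
    """Which layer of the multilayer structure a given depth falls in
    (0 means above the multilayer structure)."""
    if depth_from_surface < first_layer_start:
        return 0
    return int((depth_from_surface - first_layer_start) // layer_thickness) + 1

# Example: 50 nm thick layers starting 100 nm below the upper surface TS.
d = depth_from_reference(z_ref=0.0, z_obs=225.0)
n = layer_number(d, first_layer_start=100.0, layer_thickness=50.0)
```

Here the reference point is taken at the upper surface TS (z_ref = 0), so d is also the depth from the surface; a depth of 225 nm falls in the third layer under these assumed dimensions.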
- step S15 it is determined whether or not to observe other observation coordinates.
- when no other observation coordinates are to be observed (NO), the next step is step S16.
- when other observation coordinates are to be observed (YES), steps S11 to S14 are repeated.
- the comprehensive control unit C0 can thereby obtain depth information on the order of nanometers at all of the target observation coordinates.
- the coordinates and the depth information of the multilayer structure 31 acquired in steps S1 to S15 are recorded as a recording table as shown in FIG. 10 and stored in the storage unit C7.
- in this way, the wide area image viewed from the first direction (Z direction), the cross-sectional image viewed from the second direction (Y direction), and their respective coordinates are used.
- the depth information of the multilayer structure 31 can thereby be acquired, and the user can directly obtain the depth information of the multilayer structure 31 on the order of nanometers.
- the means using the FIB has problems such as the narrowness of the machined area, time-consuming evaluation, and difficulty in reacquiring data, while the means for predicting the inclination angle of the polished surface has problems such as low accuracy of the depth information.
- analysis with the analysis system in the first embodiment can be performed over a wider area and in a shorter time than with the means using the FIB, and the data can easily be reacquired. Further, the analysis system in the first embodiment can obtain more accurate depth information than the means for predicting the inclination angle of the polished surface. That is, with the analysis system according to the first embodiment, the depth information of the multilayer structure 31 can be acquired quickly and with high accuracy.
- step S16 a plurality of patterns 32 included in the sample SAM are analyzed.
- the user switches from the display unit 42 for acquiring depth information to the display unit 70 for pattern analysis on the operation screen 40a.
- the display unit 70 for pattern analysis is provided with a captured image display unit 43, an image reading setting unit 71, a button B19 for pattern detection, and a button B20 for pattern analysis. Further, the image reading setting unit 71 is provided with a reading button B17 and a reference button B18.
- when the user inputs the number of layers or the depth of the sample SAM and clicks the reading button B17, the comprehensive control unit C0 refers to the depth information of the multilayer structure 31 acquired in step S14.
- an image is then taken at the observation coordinates (x3, y3, z3) 46e corresponding to the input position, and the photographed image is displayed on the captured image display unit 43.
- the user can also click the reference button B18 to select a photographed image acquired in the past.
- the comprehensive control unit C0 detects a plurality of patterns 32 using image recognition technology, assigns numbers to the plurality of patterns 32, and displays the assigned numbers on the captured image display unit 43.
- the pattern shape analysis unit C8 automatically measures the diameters of the plurality of patterns 32 at the observation coordinates (x3, y3, z3) 46e using the image recognition technique. Then, the pattern shape analysis unit C8 acquires pattern shape information such as the major axis diameter, the minor axis diameter, the average diameter, and the roundness for each of the plurality of patterns 32. This pattern shape information is stored in the storage unit C7.
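The pattern shape metrics named above can be illustrated with a minimal sketch. The pattern shape analysis unit C8 works by image recognition; here each pattern is assumed to be already reduced to its area, perimeter, and axis-aligned extents, and the roundness formula (4πA/P², 1.0 for a perfect circle) is a common definition assumed for illustration.

```python
import math

# Hypothetical sketch of the per-pattern shape metrics: major axis diameter,
# minor axis diameter, average diameter, and roundness.

def pattern_shape(area, perimeter, width, height):
    major = max(width, height)        # major axis diameter
    minor = min(width, height)        # minor axis diameter
    average = (major + minor) / 2.0   # average diameter
    # Roundness: 1.0 for a perfect circle, smaller for elongated shapes.
    roundness = 4.0 * math.pi * area / (perimeter ** 2)
    return {"major": major, "minor": minor,
            "average": average, "roundness": roundness}

# A circle of radius 10: area = pi * r**2, perimeter = 2 * pi * r.
m = pattern_shape(math.pi * 100.0, 2.0 * math.pi * 10.0, 20.0, 20.0)
```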
- observation coordinates (x3, y3, z3) 46e described here indicate the coordinates of the center position of the photographed image being observed. Therefore, the calculated number of layers also indicates the number of layers at the center position of the observed image.
- the comprehensive control unit C0 can record the acquired pattern shape information as a recording table, and can output the recording table together with the observed photographed image as shown in FIG. Further, such pattern shape information is associated with other information and recorded in the recording table of FIG.
- in this way, not only the depth information of the multilayer structure 31 but also the pattern shape information of the plurality of patterns 32 included in the sample SAM can be acquired.
- in the second embodiment, the timing of forming the fractured surface of the sample SAM differs from that in the first embodiment: the fractured surface of the sample SAM is formed after the wide area image is acquired from the first direction (Z direction) and the reference coordinates (x1, y1) 46a and the observation coordinates (x2, y2) 46c are designated.
- step S21 the observation surface 30 of the sample SAM is formed by the same method as in step S1.
- the sample SAM is in the state of FIG. 2Aa or FIG. 2Ab, and the fractured surface is not formed. In this state, the sample SAM is mounted on the sample table 8.
- steps S22 to S27 the same work as in steps S2 to S7 is performed.
- the comprehensive control unit C0 acquires a wide area image of the sample SAM seen from the first direction (Z direction), and the reference coordinates (x1, y1) 46a are specified in the wide area image.
- step S28 the observation coordinates (x2, y2) 46c are specified before the reference coordinates (x1, z1) 46b are specified. That is, following the designation of the reference coordinates (x1, y1) 46a shown in FIG. 6, the observation coordinates (x2, y2) 46c shown in FIG. 8 are designated.
- step S29 the sample SAM is taken out.
- the sample SAM is taken out from the sample chamber 7, and then the sample SAM is removed from the sample table 8.
- the sample SAM is then transported to a sample preparation device such as a FIB or ion milling device.
- step S30 a fractured surface is formed.
- the sample SAM shown in FIG. 2Ba or FIG. 2Bb is produced.
- step S31 the sample SAM is mounted on the sample table 8 so that the fractured surface is irradiated with the electron beam EB1.
- the cut sample SAM is mounted on the sample table 8.
- the sample table 8 is transported to the charged particle beam device 1, and the sample table 8 is installed on the stage 9.
- the fractured surface of the sample SAM is arranged perpendicular to the Z direction so as to face the electron gun 3.
- step S32 the application is started by the same method as in step S3 and the like.
- step S33 alignment and acquisition of a cross-sectional image which is a photographed image of the sample SAM viewed from the second direction (Y direction) are performed by the same method as in step S9.
- steps S34 to S38 the same work as in steps S10 to S14 is performed. That is, the comprehensive control unit C0 designates the second portion of the multilayer structure 31, which corresponds to the coordinate x1 of the reference coordinates (x1, y1) 46a in the cross-sectional image viewed from the second direction (Y direction), as the reference coordinates (x1, z1) 46b. Further, the comprehensive control unit C0 designates the fourth portion of the multilayer structure 31, which corresponds to the coordinate x2 of the observation coordinates (x2, y2) 46c in the cross-sectional image viewed from the second direction (Y direction), as the observation coordinates (x2, z2) 46d.
- the calculation unit C3 of the comprehensive control unit C0 calculates the depth and the number of layers of the observation coordinates (x2, z2) 46d from the reference coordinates (x1, z1) 46b, and the observation coordinates from the upper surface TS of the sample SAM ( x2, z2) Calculate the depth of 46d and the number of layers.
- step S39 as in step S15, steps S34 to S38 are repeated until the observation of all the observation coordinates is completed.
- the depth information of the multilayer structure 31 can be acquired quickly and with high accuracy.
- when the sample SAM is cut first, as in the first embodiment, it is unknown whether or not the split cross section is a surface on which the pattern of the multilayer structure 31 can be clearly observed.
- in the second embodiment, since the sample SAM is cut later, it is easier to create a surface on which the pattern of the multilayer structure 31 can be clearly observed.
- in the second embodiment, however, it is necessary to cut the sample SAM according to the position of the reference coordinates (x1, y1) 46a designated before the cutting of the sample SAM.
- the split position may be slightly deviated from the position of the reference coordinates (x1, y1) 46a. From this point of view, the first embodiment is more suitable than the second embodiment.
- step S40 as in step S16, the analysis of the plurality of patterns 32 is performed, and the pattern shape information of the plurality of patterns 32 is acquired.
- in the third embodiment, the working distance (WD), which is the distance between the objective lens 6 and the focal position resulting from the focusing of the electron beam EB1 in the first direction (Z direction), is used.
- the depth information of the multilayer structure 31 is acquired based on the WD.
- step S41 the observation surface 30 of the sample SAM is formed by the same method as in step S1. At this time, the sample SAM is in the state of FIG. 2Aa or FIG. 2Ab, and the fractured surface is not formed.
- steps S42 to S44 the same work as in steps S2 to S4 is performed.
- the sample table 8 on which the sample SAM is mounted is installed on the stage 9 so that the upper surface TS of the sample SAM faces the electron gun 3.
- the application is started.
- alignment and acquisition of the whole image which is a photographed image seen from the first direction (Z direction) are performed.
- the operation screen 40b is displayed on the display device 20 as shown in FIG.
- the operation screen 40b is mainly used for the user to input an instruction to the comprehensive control unit C0 and for the user to obtain each information from the comprehensive control unit C0.
- the user can switch between the display unit 51 for WD acquisition setting, the display unit 52 for WD profile, the display unit 53 for observation, and the display unit 70 for pattern analysis.
- the display unit 51 for WD acquisition setting is provided with a captured image display unit 54, a WD acquisition setting unit 55, a mode selection unit 56, a capture button B1, a reference button B2, a button B3 for adding a position designation tool, and a button B12 for starting WD data acquisition.
- the WD acquisition setting unit 55 displays acquisition conditions such as start point coordinates, end point coordinates, magnification, and WD acquisition count. Further, the WD acquisition setting unit 55 is provided with a button B11 for determining the WD acquisition condition.
- the mode selection unit 56 displays a check box for selecting a prescan mode or a shooting mode.
- the electron beam EB1 is irradiated to the upper surface TS of the sample SAM from the first direction (Z direction), and the entire image including the observation surface 30 is acquired.
- step S45 the WD acquisition setting is performed.
- the user drags the mouse, which is, for example, the operating device 21, on the captured image display unit 43, so that the observation range 57 is designated in the whole image including the observation surface 30.
- the comprehensive control unit C0 converts the designated observation range 57 into the position coordinates of the sample SAM, and outputs the start point coordinates and the end point coordinates to the WD acquisition setting unit 55.
- the comprehensive control unit C0 accepts detailed settings such as the number of WD acquisitions or the WD acquisition interval, calculates the WD acquisition position, and displays the final observation range 57 on the captured image display unit 43.
- the user can additionally specify the observation range 57 by clicking the button B3 for adding the position specification tool.
- another observation range 57 that is offset in the Y direction with respect to the initially selected observation range 57 can be added.
- more accurate depth information of the multilayer structure 31 can be obtained by collating them with each other.
- the user can select the prescan mode or the shooting mode in the mode selection unit 56.
- in the prescan mode, the first observation point is automatically focused, the data is saved in the storage unit C7, the stage 9 is moved to the next observation point, and the next observation point is automatically focused in turn. That is, the prescan mode is a mode in which the WD values are saved by repeating focusing without acquiring captured images. In this case, the captured images are acquired after the WD profile is created.
- in the shooting mode, focusing and acquisition of the captured image are automatically performed for the first observation point, and the data is saved in the storage unit C7. After that, the stage 9 is moved to the next observation point, and the next observation point is automatically focused and photographed in turn. That is, the shooting mode is a mode in which the WD value is saved together with a captured image viewed from the first direction (Z direction).
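The two acquisition modes can be sketched as a single loop over observation points. Stage movement, autofocus, and image capture are stubbed out here; in the real system they are performed by the stage control unit C2 and the scanning signal control unit C1, and the records go to the storage unit C7. The function and parameter names are assumptions for illustration.

```python
# Hypothetical sketch of the prescan / shooting mode loop.

def acquire_wd_values(points, autofocus, capture, mode="prescan"):
    """Visit each observation point, autofocus, and record the WD value.
    In 'shooting' mode a captured image is saved alongside each WD value;
    in 'prescan' mode only the WD value is saved."""
    records = []
    for (x, y) in points:
        wd = autofocus(x, y)  # focus at (x, y); returns the WD
        image = capture(x, y) if mode == "shooting" else None
        records.append({"x": x, "y": y, "wd": wd, "image": image})
    return records

# Toy stand-ins for the hardware calls.
recs = acquire_wd_values([(0, 0), (1, 0)],
                         autofocus=lambda x, y: 10.0 + x,
                         capture=lambda x, y: f"img_{x}_{y}",
                         mode="prescan")
```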
- the comprehensive control unit C0 starts acquiring the WD value within the observation range 57 by the prescan mode or the shooting mode.
- step S46 the stage control unit C2 of the comprehensive control unit C0 moves the stage control device 10 and the stage 9 to the starting point coordinates of the observation range 57.
- step S47 the scanning signal control unit C1 of the comprehensive control unit C0 irradiates the upper surface TS of the sample SAM with the electron beam EB1 from the first direction (Z direction), and the objective lens 6 is used to irradiate the observation range 57. Focusing is performed at the start point coordinates.
- step S48 the mode is determined. If the prescan mode is selected, the subsequent steps are step S50, and if the shooting mode is selected, the next step is step S49.
- step S49 the shooting image is acquired together with the focusing.
- step S50 the calculation unit C3 of the comprehensive control unit C0 acquires the x coordinate, the y coordinate, and the WD information, which is the distance between the objective lens 6 and the focal position, at the focused location.
- the acquired information is stored in the storage unit C7.
- step S51 the stage 9 is moved to the next observation point, and the next observation point is automatically focused. After that, steps S47 to S51 are repeated until information such as WD at all the target observation points is acquired.
- in steps S52 and S53, first, as shown in FIG. 16, the user switches from the display unit 51 for WD acquisition setting to the display unit 52 for the WD profile on the operation screen 40b.
- the display unit 52 for the WD profile is provided with a captured image display unit 54, a layer information display unit 58, and a button B13 for acquiring the WD profile.
- step S52 the information of the sample SAM is input.
- the user inputs, to the layer information display unit 58, the information of the sample SAM including the number of layers of the multilayer structure 31, the thickness of one layer of the multilayer structure 31 or the thickness of each layer, and the depth at which the first layer of the multilayer structure 31 starts.
- the calculation unit C3 of the comprehensive control unit C0 associates the information of the sample SAM input by the user with the information of the WD at all the observation points.
- step S53 a WD profile is created.
- the user switches from the display unit 52 for the WD profile to the display unit 53 for observation on the operation screen 40b.
- the observation display unit 53 is provided with a captured image display unit 54, an observation position selection unit 59, an observation condition setting unit 60, a WD profile acquisition button B13, and a capture button B14.
- the comprehensive control unit C0 creates a WD profile that graphs the distance (WD) between the objective lens 6 and the focal position at the plurality of observation points of the sample SAM. Further, the region other than the observation surface 30 is drawn as a flat line in the WD profile. Therefore, the user can determine that the flat line corresponds to the top surface TS of the sample SAM.
- the calculation unit C3 of the comprehensive control unit C0 performs the calculation using the input sample SAM information (the number of layers of the multilayer structure 31, the thickness of one layer of the multilayer structure 31 or the thickness of each layer, and the depth at which the first layer of the multilayer structure 31 starts).
- the depth information of the multilayer structure 31 includes the depth and the number of layers at a predetermined position on the WD profile from the top surface TS of the sample SAM.
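The depth calculation on the WD profile can be illustrated as follows. Taking the smallest WD (the point closest to the objective lens, i.e. the flat top surface TS) as the baseline is an assumption for this sketch; the patent describes the flat line as the top surface but does not specify the numerical rule.

```python
# Hypothetical sketch: depth of a point on the WD profile below the top
# surface TS, using the smallest WD value as the surface baseline.

def depth_from_wd_profile(wd_values, index):
    """Depth of wd_values[index] below the top surface TS."""
    surface_wd = min(wd_values)
    return wd_values[index] - surface_wd

# WD values (e.g. in mm) sampled along the polished slope; the flat leading
# part corresponds to the top surface TS.
profile = [10.0, 10.0, 10.1, 10.3, 10.5]
d_mm = depth_from_wd_profile(profile, 4)
```

Combining such a depth with the input layer thicknesses gives the number of layers at the chosen position.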
- the observation surface 30 formed by the polishing treatment may not have the target surface shape.
- the observation surface 30 may have irregularities.
- the user can quickly determine the success or failure of the shape of the observation surface 30.
- in such a case, the user can also use another observation range 57 added by using the button B3 for adding the position designation tool.
- step S54 the mode is determined. If the prescan mode is selected, the next step is step S55, and if the shooting mode is selected, the next step is step S56.
- a photographed image can be created at a desired location on the sample SAM.
- the user sets various observation conditions in the observation condition setting unit 60.
- the user selects "select from WD profile" in the observation position selection unit 59.
- the user specifies a predetermined position in the WD profile.
- when the user clicks the capture button B14, the portion of the sample SAM corresponding to the designated position is irradiated with the electron beam EB1 from the first direction (Z direction). Then, a captured image of the sample SAM viewed from the first direction (Z direction) is acquired.
- the shooting here is continuous shooting: a plurality of captured images can be acquired by continuously shooting the observation range 57, and a wide area image can be obtained by joining the plurality of captured images.
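The joining step mentioned above (performed by the image coupling unit C5 in the first embodiment) can be sketched minimally. Real stitching would align overlapping regions; the plain side-by-side concatenation below assumes the captured images tile exactly, which is a simplification for illustration.

```python
# Hypothetical sketch: join adjacent captured images of equal height into one
# wide area image. Each image is a list of rows (lists of pixel values).

def join_images_horizontally(images):
    height = len(images[0])
    wide = []
    for r in range(height):
        row = []
        for img in images:
            row.extend(img[r])  # concatenate the r-th row of each image
        wide.append(row)
    return wide

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
wide = join_images_horizontally([a, b])
```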
- the user selects "the number of layers from the surface” or “the depth from the surface” in the observation position selection unit 59, and the user inputs to the locations input to them. It is also possible to acquire a photographed image.
- in step S49 or step S55, when observing or photographing from the first direction (Z direction), a foreign substance may be present on the upper surface TS of the sample SAM, so that the pattern of the multilayer structure 31 cannot be accurately detected.
- in that case, the X coordinate of the foreign-matter portion is held, the position is moved to another Y coordinate, and measurements are made at several points considered to be at the same depth.
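The foreign-matter workaround described above can be sketched as follows: the X coordinate of the obstructed point is kept, the measurement is repeated at several nearby Y positions (assumed to lie at the same depth on the polished slope), and the WD values are averaged. The averaging step and the function names are assumptions for illustration.

```python
# Hypothetical sketch: average the WD over several Y positions at a fixed X
# coordinate to work around a foreign substance on the upper surface TS.

def wd_without_foreign_matter(measure_wd, x, y_positions):
    values = [measure_wd(x, y) for y in y_positions]
    return sum(values) / len(values)

# Toy stand-in: WD depends only on x (depth is constant along y), so shifting
# y does not change the measured depth.
avg = wd_without_foreign_matter(lambda x, y: 10.0 + 0.1 * x, 5.0, [0, 1, 2])
```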
- the coordinates and the depth information of the multilayer structure 31 acquired in steps S41 to S55 are recorded as a recording table as shown in FIG. 18 and stored in the storage unit C7.
- the comprehensive control unit C0 can acquire the depth and the number of layers of the predetermined position from the upper surface TS of the sample SAM by performing the calculation based on the predetermined position on the WD profile.
- the three-dimensional information of the sample SAM can be acquired on the order of nanometers, and the depth information of the multilayer structure 31 can be acquired quickly and with high accuracy.
- step S56 a plurality of patterns 32 are analyzed in the same manner as in step S16.
- the user switches from the observation display unit 53 to the pattern analysis display unit 70 on the operation screen 40b.
- the operation performed by the display unit 70 for pattern analysis is the same as the method described in step S16.
- when the user inputs the number of layers or the depth of the sample SAM and clicks the reading button B17, the comprehensive control unit C0 refers to the depth information of the multilayer structure 31 acquired in step S53.
- an image is then taken at the observation coordinates (x3, y3, z3) 46e corresponding to the input position, and the photographed image is displayed on the captured image display unit 43.
- the user can also click the reference button B18 to select a photographed image acquired in the past.
- the pattern shape information of the plurality of patterns 32 can be acquired by the same method as in step S16 in the first embodiment.
- the three-dimensional information of the sample SAM acquired by another method, different from that of the third embodiment, is collated with the WD profile, and the WD profile is corrected.
- the above-mentioned other method is a method performed in a device different from the charged particle beam device 1, for example, a method performed in the surface shape measuring device 101. This makes it possible to acquire the three-dimensional information of the sample SAM with higher accuracy.
- the surface shape measuring device 101 shown in FIG. 20 is, for example, a white-light interference microscope, and can acquire three-dimensional information (for example, position coordinates x, y, z) of the upper surface TS of the sample SAM.
- the surface shape measuring device 101 includes a lens barrel 102, a stage 109, a stage control device 110, and a comprehensive control unit C10.
- the comprehensive control unit C10 is electrically connected to the display device 20 and the operation device 21 provided inside or outside the surface shape measuring device 101.
- inside the lens barrel 102, a white light source 103, a first beam splitter 104, a second beam splitter 105, an objective lens 106, a reference surface 107, and a camera 108 are provided.
- the stage 109 and the stage control device 110 are provided outside the lens barrel 102 and are placed in the atmosphere.
- the stage 109 can mount the sample SAM.
- the stage control device 110 is connected to the stage 109 and can displace the position and orientation of the stage 109.
- the displacement of the stage 109 displaces the position and orientation of the sample SAM.
- the stage control device 110 has substantially the same mechanism as the stage control device 10 of the charged particle beam device 1.
- the white light source 103 emits white light WL1.
- the first beam splitter 104 and the second beam splitter 105 divide the emitted white light WL1 into two, irradiate the reference surface 107 with one, and irradiate the surface of the sample SAM with the other.
- the reflected light WL2 from both the reference surface 107 and the sample SAM is imaged on the camera 108.
- the objective lens 106 focuses the white light WL1 onto the sample SAM installed on the stage 109.
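As a hedged sketch of the white-light interferometry principle described above (not the actual algorithm of the surface shape measuring device 101, and all names are hypothetical): at each camera pixel, the surface height can be taken as the z-scan position where the interference fringe contrast peaks, i.e. where the path lengths to the sample and to the reference surface 107 match:

```python
def surface_height(z_positions, intensities):
    """Estimate the surface height at one camera pixel from a vertical scan.

    In white-light interferometry the fringe contrast peaks where the optical
    path lengths to the sample and to the reference surface are equal, so the
    z position of maximum contrast marks the surface.
    """
    mean = sum(intensities) / len(intensities)
    # crude envelope: deviation of each sample from the mean intensity
    contrast = [abs(i - mean) for i in intensities]
    return z_positions[contrast.index(max(contrast))]
```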
- the comprehensive control unit C10 has an optical system control unit C11, a stage control unit C12, and a calculation unit C13, and controls these. Therefore, in the present application, control performed by the optical system control unit C11, the stage control unit C12, and the calculation unit C13 may be described as being performed by the comprehensive control unit C10. Further, the comprehensive control unit C10 having the optical system control unit C11, the stage control unit C12, and the calculation unit C13 may be regarded as one control unit, and the comprehensive control unit C10 may be simply referred to as a “control unit”.
- the optical system control unit C11 is electrically connected to the white light source 103, the first beam splitter 104, the second beam splitter 105, the objective lens 106, and the reference surface 107, and controls their operations.
- the stage control unit C12 is electrically connected to the stage control device 110 and controls the operation of each drive mechanism of the stage control device 110.
- the calculation unit C13 includes a surface information acquisition unit C14, an instruction input unit C15, and a storage unit C16.
- the surface information acquisition unit C14 is electrically connected to the camera 108 and converts the signal of the reflected light WL2 detected by the camera 108 into three-dimensional information data. That is, the three-dimensional information data is data created based on the reflected light WL2 reflected by the sample SAM when the sample SAM is irradiated with the white light WL1. The three-dimensional information data is output to the display device 20, where the user can confirm it.
- the instruction input unit C15 receives the information input by the user on the display device 20 using the operation device 21.
- the storage unit C16 can store information such as the coordinates of the stage 109 and the acquired three-dimensional information data of the sample SAM. These pieces of information are associated with each other.
- in step S61, the observation surface 30 of the sample SAM is formed by the same method as in step S41.
- in step S62, the surface shape of the sample SAM is measured.
- the user installs the sample SAM on the stage 109 of the surface shape measuring device 101 and turns on the power of the surface shape measuring device 101.
- the comprehensive control unit C10 receives a surface shape measurement instruction from the user and starts measuring the surface shape of the sample SAM.
- the measured surface shape of the sample SAM is stored in the storage unit C16 as three-dimensional information data. Thereby, the quality of the sample SAM can be judged before the sample SAM is inserted into the charged particle beam device 1.
- the surface shape measuring device 101 is electrically connected to the charged particle beam device 1 via a network or the like. Therefore, the acquired three-dimensional information data can be linked to the WD and the WD profile acquired in the charged particle beam device 1.
- in steps S63 to S72, the same operations as in steps S42 to S51 are performed.
- the sample SAM is conveyed from the surface shape measuring device 101 to the charged particle beam device 1, and the WD information is acquired by the comprehensive control unit C0 via the application.
- in step S73, the charged particle beam device 1 reads the data (three-dimensional information data) of the other method, and in step S74, fitting conditions are set.
- the accuracy selection unit 61 is provided on the display unit 52 for the WD profile of the operation screen 40b.
- the accuracy selection unit 61 is provided with a check box that allows selection of fitting with another method as a method of selecting the accuracy for acquiring the depth information of the multilayer structure 31. Further, the accuracy selection unit 61 is also provided with a button B15 for reading data of another method and a button B16 for starting fitting.
- the user selects fitting with another method (selects "Yes"), and the comprehensive control unit C0 accepts the selection.
- the comprehensive control unit C0 reads the data (three-dimensional information data) of the other method.
- in step S75, the three-dimensional information data and the WD information are fitted.
- the comprehensive control unit C0 fits the read three-dimensional information data and the WD information. Examples of the fitting method include curve fitting and three-point alignment.
- the WD at the plurality of observation points of the sample SAM is collated with the three-dimensional information data acquired by the surface shape measuring device 101. Then, as a result of the collation, the WD is corrected.
- since the resolution of the surface shape measuring device 101 (white-light interference microscope) is on the order of angstroms (Å), the surface shape measuring device 101 has sufficient analysis accuracy for the nanometer-order multilayer structure 31. Further, while the WD at a plurality of observation points of the sample SAM is a combination of fragmentary information, the three-dimensional information data from the surface shape measuring device 101 is continuous. Therefore, by correcting the WD so as to match the more accurate three-dimensional information data, a more accurate WD profile can be obtained.
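As an illustration of this correction step: the patent names curve fitting and three-point alignment as examples but gives no formulas, so the sketch below uses the simplest variant, a least-squares constant offset, and all names are hypothetical assumptions:

```python
def correct_wd(wd_points, reference_height):
    """Correct fragmentary WD readings against continuous 3-D reference data.

    wd_points: list of (x, wd) samples from the charged particle beam device
    reference_height: callable mapping x to the surface height measured by
        the surface shape measuring device (continuous, higher accuracy)

    Fits a constant offset by least squares and shifts every WD reading so
    that the set of points matches the reference profile.
    """
    residuals = [reference_height(x) - wd for x, wd in wd_points]
    offset = sum(residuals) / len(residuals)  # least-squares constant shift
    return [(x, wd + offset) for x, wd in wd_points]
```

A fuller implementation would also fit tilt or curvature terms, which is where curve fitting or three-point alignment would come in.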
- in steps S76 to S80, the same operations as in steps S52 to S56 are performed. That is, the information of the sample SAM is input, the WD profile is created, and the captured image in the prescan mode is acquired.
- the WD profile in the fourth embodiment is created by graphing the corrected WD.
- the depth information of the multilayer structure 31 can be acquired with higher accuracy than in the third embodiment.
Description
The analysis system according to the first embodiment will be described below. First, the charged particle beam device 1 constituting a part of the analysis system will be described with reference to FIG. 1. In FIG. 1, a scanning electron microscope (SEM) is exemplified as the charged particle beam device 1.
The charged particle beam device 1 shown in FIG. 1 is a device for analyzing (observing and measuring) a sample SAM by irradiating the sample SAM, placed in a sample chamber 7, with an electron beam EB1 from an electron gun 3 provided inside a lens barrel 2.
FIGS. 2Aa and 2Ab are plan views of the sample SAM in the first embodiment. FIG. 2Ba is a cross-sectional view of the sample SAM cleaved at the observation surface 30 along line A-A of FIG. 2Aa. FIG. 2Bb is a cross-sectional view of the sample SAM cleaved at the observation surface 30 along line B-B of FIG. 2Ab.
The analysis system according to the first embodiment will be described below by comparing steps S1 to S16 shown in the flowchart of FIG. 3 with FIGS. 4 to 10.
The analysis system according to the second embodiment will be described below with reference to FIG. 13, focusing mainly on differences from the first embodiment.
The analysis system according to the third embodiment will be described below with reference to FIGS. 14 to 18, focusing mainly on differences from the first embodiment.
The analysis system according to the fourth embodiment will be described below with reference to FIGS. 20 to 22, focusing mainly on differences from the third embodiment.
2 lens barrel
3 electron gun
4 condenser lens
5 deflection coil
6 objective lens
7 sample chamber
8 sample holder
9 stage
10 stage control device
11 detector
20 display device
21 operation device
30 observation surface (polished surface)
31 multilayer structure
32 pattern
40a, 40b operation screens
41 display unit for wide-area image capture
42 display unit for depth information acquisition
43 image display unit
44 condition display unit
45 observation range
46a reference coordinates (x1, y1)
46b reference coordinates (x1, z1)
46c observation coordinates (x2, y2)
46d observation coordinates (x2, z2)
46e observation coordinates (x3, y3, z3)
47 movement condition display unit
48 layer information display unit
49 X-coordinate position
51 display unit for WD acquisition settings
52 display unit for WD profile
53 display unit for observation
54 image display unit
55 WD acquisition setting unit
56 mode selection unit
57 observation range
58 layer information display unit
59 observation position selection unit
60 observation condition setting unit
61 accuracy selection unit
70 display unit for pattern analysis
71 image reading setting unit
101 surface shape measuring device
102 lens barrel
103 white light source
104 first beam splitter
105 second beam splitter
106 objective lens
107 reference surface
108 camera
109 stage
110 stage control device
B1 button for capture
B2 button for reference
B3 button for adding a position designation tool
B4 button for starting wide-area image creation
B5 button for determining imaging conditions
B6 button for detailed setting of imaging conditions
B7 button for moving to the reference position
B8 button for linking with the first direction
B9 button for moving to the X coordinate
B10 button for depth information acquisition
B11 button for determining acquisition conditions
B12 button for starting WD data acquisition
B13 button for WD profile acquisition
B14 button for capture
B15 button for reading data of another method
B16 button for starting fitting
B17 button for reading
B18 button for reference
B19 button for pattern detection
B20 button for pattern analysis
BS lower surface
C0 comprehensive control unit (control unit)
C1 scanning signal control unit
C2 stage control unit
C3 calculation unit
C4 image acquisition unit
C5 image combining unit
C6 instruction input unit
C7 storage unit
C8 pattern shape analysis unit
C10 comprehensive control unit (control unit)
C11 optical system control unit
C12 stage control unit
C13 calculation unit
C14 surface information acquisition unit
C15 instruction input unit
C16 storage unit
EB1 electron beam
EB2 secondary electrons
SAM sample
TS upper surface
WL1 white light
WL2 reflected light
Claims (15)
- (a) a step of acquiring a first captured image of a sample viewed from a first direction by irradiating the sample, which includes a multilayer structure, with an electron beam from the first direction;
(b) a step of acquiring a second captured image of the sample viewed from a second direction intersecting the first direction by irradiating the sample with the electron beam from the second direction; and
(c) a step of acquiring depth information of the multilayer structure by using the first captured image, the second captured image, and information of the sample including the number of layers of the multilayer structure, the thickness of one layer or the thickness of each layer of the multilayer structure, and the depth at which the first layer of the multilayer structure begins;
an analysis system comprising the above steps. - In the analysis system according to claim 1,
the sample has:
an upper surface;
a lower surface opposite to the upper surface;
an observation surface formed on a part of the upper surface so as to be inclined from the upper surface toward the lower surface; and
a cleaved surface obtained by cleaving the sample at the observation surface,
a part of the multilayer structure is exposed at the observation surface and the cleaved surface,
in the step (a), the upper surface is irradiated with the electron beam, and
in the step (b), the cleaved surface is irradiated with the electron beam; an analysis system. - In the analysis system according to claim 2,
the step (c) includes:
(c1) a step of designating, in the first captured image, a first location of the multilayer structure exposed at the observation surface as first reference coordinates (x1, y1) viewed from the first direction;
(c2) a step of designating, in the second captured image, a second location of the multilayer structure that is exposed at the observation surface and corresponds to the coordinate x1 of the first reference coordinates (x1, y1), as second reference coordinates (x1, z1) viewed from the second direction;
(c3) a step of designating, in the first captured image, a third location of the multilayer structure that is exposed at the observation surface and differs from the first location, as first observation coordinates (x2, y2) viewed from the first direction;
(c4) a step of designating, in the second captured image, a fourth location of the multilayer structure that is exposed at the observation surface and corresponds to the coordinate x2 of the first observation coordinates (x2, y2), as second observation coordinates (x2, z2) viewed from the second direction; and
(c5) a step of calculating the depth of the second observation coordinates (x2, z2) from the second reference coordinates (x1, z1),
and
the depth information of the multilayer structure includes the depth of the second observation coordinates (x2, z2) from the second reference coordinates (x1, z1); an analysis system. - In the analysis system according to claim 3,
the step (c) further includes:
(c6) a step of calculating the layer number of the second observation coordinates (x2, z2) relative to the second reference coordinates (x1, z1) by collating the information of the sample with the calculation result of the step (c5); and
(c7) after the step (c6), when the second reference coordinates (x1, z1) are located in the first layer of the multilayer structure, a step of calculating the depth and the layer number of the second observation coordinates (x2, z2) from the upper surface of the sample,
and
the depth information of the multilayer structure further includes the layer number of the second observation coordinates (x2, z2) relative to the second reference coordinates (x1, z1), and the depth and the layer number of the second observation coordinates (x2, z2) from the upper surface of the sample; an analysis system. - In the analysis system according to claim 3,
(d) a step of preparing the sample having the upper surface and the lower surface;
(e) after the step (d), a step of forming the observation surface on a part of the upper surface by applying a polishing process to the part of the upper surface; and
(f) after the step (e), a step of forming the cleaved surface by cleaving the sample at the observation surface,
are further provided; an analysis system. - In the analysis system according to claim 5,
the step (a), the step (b), and the step (c) are performed after the step (d), the step (e), and the step (f); an analysis system. - In the analysis system according to claim 5,
the step (a) is performed after the step (d) and the step (e),
the step (c1) and the step (c3) are performed after the step (a),
the step (f) is performed after the step (c1) and the step (c3),
the step (b) is performed after the step (f),
the step (c2) and the step (c4) are performed after the step (b), and
the step (c5) is performed after the step (c2) and the step (c4); an analysis system. - (a) a step of acquiring a first captured image of a sample viewed from a first direction by irradiating the sample, which includes a multilayer structure, with an electron beam from the first direction;
(b) a step of designating an observation range in the first captured image;
(c) a step of focusing the electron beam in the first direction, using an objective lens, on a plurality of locations of the sample within the designated observation range;
(d) a step of acquiring, based on the results of the focusing in the step (c), the distances between the objective lens and the focal position at the plurality of locations of the sample, and creating a WD profile by graphing those distances; and
(e) a step of acquiring depth information of the multilayer structure by collating the WD profile with information of the sample including the number of layers of the multilayer structure, the thickness of one layer or the thickness of each layer of the multilayer structure, and the depth at which the first layer of the multilayer structure begins;
an analysis system comprising the above steps. - In the analysis system according to claim 8,
the sample has:
an upper surface;
a lower surface opposite to the upper surface; and
an observation surface formed on a part of the upper surface so as to be inclined from the upper surface toward the lower surface,
a part of the multilayer structure is exposed at the observation surface,
in the step (a) and the step (c), the upper surface is irradiated with the electron beam, and
the observation range of the step (b) includes the observation surface; an analysis system. - In the analysis system according to claim 9,
the depth information of the multilayer structure includes the depth and the layer number, from the upper surface of the sample, of a predetermined position on the WD profile; an analysis system. - In the analysis system according to claim 9,
in the step (c), the focusing is performed, and a second captured image of the sample viewed from the first direction is acquired by irradiating the plurality of locations of the sample with the electron beam from the first direction; an analysis system. - In the analysis system according to claim 9,
(f) after the step (d), a step of designating a predetermined position on the WD profile and acquiring a third captured image of the sample viewed from the first direction by irradiating the location of the sample corresponding to the designated predetermined position with the electron beam from the first direction,
is further provided; an analysis system. - In the analysis system according to claim 8,
an electron gun capable of emitting the electron beam;
a stage on which the sample can be placed;
a stage control device that is connected to the stage and can displace the position and orientation of the stage;
the objective lens, which can focus the electron beam on the sample;
a detector capable of detecting, as a signal, secondary electrons emitted from the sample when the sample placed on the stage is irradiated with the electron beam; and
a control unit that controls the operations of the electron gun, the stage control device, the objective lens, and the detector,
a charged particle beam device having the above is further provided, and
the control unit can acquire the first captured image based on the signal detected by the detector, can execute the focusing by controlling the objective lens, can create the WD profile based on the results of the focusing, and can acquire, by using the WD profile, the depth information of the multilayer structure included in the sample; an analysis system. - In the analysis system according to claim 13,
in the step (d), the distances between the objective lens and the focal position at the plurality of locations of the sample are collated with three-dimensional information data of the sample acquired in a surface shape measuring device different from the charged particle beam device,
as a result of the collation, the distances between the objective lens and the focal position at the plurality of locations of the sample are corrected, and
the WD profile is created by graphing those corrected distances; an analysis system. - In the analysis system according to claim 14,
the surface shape measuring device is a white-light interference microscope, and
the three-dimensional information data is data created based on reflected light reflected by the sample when the sample is irradiated with white light; an analysis system.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202080105369.2A CN116157891A (zh) | 2020-09-28 | 2020-09-28 | 解析系统 |
KR1020237009048A KR20230049740A (ko) | 2020-09-28 | 2020-09-28 | 해석 시스템 |
JP2022551107A JP7446453B2 (ja) | 2020-09-28 | 2020-09-28 | 解析システム |
US18/027,501 US20230377836A1 (en) | 2020-09-28 | 2020-09-28 | Analysis System |
PCT/JP2020/036694 WO2022064707A1 (ja) | 2020-09-28 | 2020-09-28 | 解析システム |
TW110132556A TWI809491B (zh) | 2020-09-28 | 2021-09-02 | 解析系統 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2020/036694 WO2022064707A1 (ja) | 2020-09-28 | 2020-09-28 | 解析システム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022064707A1 true WO2022064707A1 (ja) | 2022-03-31 |
Family
ID=80845148
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/036694 WO2022064707A1 (ja) | 2020-09-28 | 2020-09-28 | 解析システム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20230377836A1 (ja) |
JP (1) | JP7446453B2 (ja) |
KR (1) | KR20230049740A (ja) |
CN (1) | CN116157891A (ja) |
TW (1) | TWI809491B (ja) |
WO (1) | WO2022064707A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH087818A (ja) * | 1994-06-23 | 1996-01-12 | Ryoden Semiconductor Syst Eng Kk | 走査型電子顕微鏡 |
JP2001201318A (ja) * | 2000-01-18 | 2001-07-27 | Toshiba Corp | 膜厚測定方法及びその装置並びにその記録媒体 |
WO2016002341A1 (ja) * | 2014-06-30 | 2016-01-07 | 株式会社 日立ハイテクノロジーズ | パターン測定方法、及びパターン測定装置 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI744671B (zh) * | 2018-08-03 | 2021-11-01 | 日商紐富來科技股份有限公司 | 電子光學系統及多射束圖像取得裝置 |
Also Published As
Publication number | Publication date |
---|---|
TW202213569A (zh) | 2022-04-01 |
JP7446453B2 (ja) | 2024-03-08 |
US20230377836A1 (en) | 2023-11-23 |
CN116157891A (zh) | 2023-05-23 |
JPWO2022064707A1 (ja) | 2022-03-31 |
TWI809491B (zh) | 2023-07-21 |
KR20230049740A (ko) | 2023-04-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4474337B2 (ja) | 試料作製・観察方法及び荷電粒子ビーム装置 | |
JP5059297B2 (ja) | 電子線式観察装置 | |
WO2016002341A1 (ja) | パターン測定方法、及びパターン測定装置 | |
WO2016121265A1 (ja) | 試料観察方法および試料観察装置 | |
JP2003007243A (ja) | レーザ欠陥検出機能を備えた走査型電子顕微鏡のオートフォーカス方式 | |
NL1039512C2 (en) | Integrated optical and charged particle inspection apparatus. | |
JP5309552B2 (ja) | 電子線トモグラフィ法及び電子線トモグラフィ装置 | |
CN106370680B (zh) | 用于tem/stem层析成像倾斜系列采集和对准的基准形成 | |
JP3602646B2 (ja) | 試料の寸法測定装置 | |
KR20180008577A (ko) | 결함 판정 방법, 및 x선 검사 장치 | |
JP6659290B2 (ja) | 試料位置合わせ方法および荷電粒子ビーム装置 | |
JP5075393B2 (ja) | 走査電子顕微鏡 | |
JP2010256261A (ja) | 結晶方位同定システム及び透過電子顕微鏡 | |
TWI785582B (zh) | 用於在帶電粒子束檢測系統中增強檢測影像之方法、影像增強裝置及其相關非暫時性電腦可讀媒體 | |
WO2017033591A1 (ja) | 荷電粒子線装置および試料ステージのアライメント調整方法 | |
WO2022064707A1 (ja) | 解析システム | |
JP6088337B2 (ja) | パターン検査方法及びパターン検査装置 | |
JP6207893B2 (ja) | 試料観察装置用のテンプレート作成装置 | |
WO2021186637A1 (ja) | 荷電粒子線装置 | |
WO2023053373A1 (ja) | 解析システム | |
CN108231513A (zh) | 用于操作显微镜的方法 | |
JP7167323B2 (ja) | パターン計測装置および計測方法 | |
JP2009110969A (ja) | パターン寸法測定方法、及びパターン寸法測定装置 | |
JP5270984B2 (ja) | 観察対象物整列方法および観察対象物整列装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20955300 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022551107 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20237009048 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20955300 Country of ref document: EP Kind code of ref document: A1 |