WO2022102266A1 - Image correction device, pattern inspection device, and image correction method - Google Patents

Image correction device, pattern inspection device, and image correction method

Info

Publication number
WO2022102266A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
shift amount
pixel
gradation value
target pixel
Prior art date
Application number
PCT/JP2021/035451
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
昌孝 白土
長作 能弾
Original Assignee
NuFlare Technology, Inc. (株式会社ニューフレアテクノロジー)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NuFlare Technology, Inc. (株式会社ニューフレアテクノロジー)
Publication of WO2022102266A1 publication Critical patent/WO2022102266A1/ja

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals

Definitions

  • JP2020-188280 application number
  • The present invention relates to an image correction device, a pattern inspection device, and an image correction method.
  • The present invention relates to a method of aligning an image of a figure pattern formed on a substrate, captured for inspection using an electron beam.
  • The patterns constituting an LSI are on the order of submicrons to nanometers in size.
  • One cause of defects in the ultrafine pattern transferred onto a semiconductor wafer is a pattern defect in the mask used when the ultrafine pattern is exposed and transferred onto the wafer by photolithography. Therefore, it is necessary to improve the accuracy of the pattern inspection device that inspects defects in the transfer masks used in LSI manufacturing.
  • As an inspection method, a method is known in which a measured image of a pattern formed on a substrate such as a semiconductor wafer or a lithography mask is compared with design data or with a measured image of the same pattern on the substrate.
  • Pattern inspection methods include "die to die" inspection, which compares measured image data obtained by imaging the same pattern at different places on the same substrate, and "die to database" inspection, which generates design image data (a reference image) from the pattern design data and compares it with the measured image obtained by imaging the pattern.
  • In such an inspection device, the substrate to be inspected is placed on a stage, and the beam scans the sample as the stage moves.
  • The substrate to be inspected is irradiated with a light beam by a light source and an illumination optical system.
  • The light transmitted through or reflected by the substrate to be inspected is imaged onto a sensor via an optical system.
  • The image captured by the sensor is sent to the comparison circuit as measured data.
  • In the comparison circuit, after the images are aligned with each other, the measured data and the reference data are compared according to an appropriate algorithm, and if they do not match, it is determined that there is a pattern defect.
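The comparison step above can be sketched as a per-pixel gray-level comparison. The function below is only an illustration of the idea; the actual inspection algorithm and the threshold value are device-specific, and both names and numbers here are assumptions:

```python
import numpy as np

def compare_images(measured, reference, threshold=16):
    """Flag pixels whose gray-level difference exceeds a threshold.
    Minimal sketch of the comparison step; `threshold=16` is an
    arbitrary illustrative value, not the device's criterion."""
    diff = np.abs(measured.astype(np.int32) - reference.astype(np.int32))
    return diff > threshold

# Hypothetical 8-bit frame images: identical except one defective pixel.
ref = np.full((4, 4), 100, dtype=np.uint8)
meas = ref.copy()
meas[2, 1] = 140  # simulated pattern defect
print(np.argwhere(compare_images(meas, ref)))  # -> [[2 1]]
```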
  • Conventionally, the SSD (Sum of Squared Differences) method has been used to align the images.
  • In this method, one of the images to be compared is shifted in sub-pixel units; for positional deviations of less than one pixel, the shifted pixel values are obtained by interpolation, and the image is adjusted to the position where the sum of squared differences between the pixel values of the two images is minimized.
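A minimal sketch of the SSD alignment idea, assuming 1-D signals and 2-tap linear interpolation; the function names and the search granularity are illustrative, not the device's implementation:

```python
import numpy as np

def shift_linear(img, s):
    """Resample a 1-D signal at positions x + s (0 <= s < 1)
    by linear interpolation between neighboring pixels."""
    return (1.0 - s) * img[:-1] + s * img[1:]

def best_subpixel_shift(a, b, steps=8):
    """Return the sub-pixel shift of `a` (searched in 1/steps increments)
    that minimizes the sum of squared differences against `b`."""
    best_s, best_ssd = 0.0, float("inf")
    for k in range(steps + 1):
        s = k / steps
        ssd = float(np.sum((shift_linear(a, s) - b[:-1]) ** 2))
        if ssd < best_ssd:
            best_s, best_ssd = s, ssd
    return best_s

# A linear ramp sampled twice, the second time displaced by 0.25 pixel.
x = np.arange(10.0)
a = 2.0 * x
b = 2.0 * (x + 0.25)
print(best_subpixel_shift(a, b))  # -> 0.25
```

In practice the minimum would be refined (e.g. by fitting a parabola to the SSD values) rather than taken from a fixed grid.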
  • Conventionally, an optical image is acquired by irradiating the substrate to be inspected with a laser beam and capturing its transmitted or reflected image.
  • Development of inspection devices that irradiate the substrate to be inspected with a multi-beam of electron beams, detect the secondary electrons corresponding to each beam emitted from the substrate, and acquire a pattern image is also progressing.
  • In an electron beam inspection device, the number of electrons incident per unit region is limited, so the influence of shot noise from individual electrons is large.
  • To address this, a two-step processing method has been proposed in which interpolation processing according to the shift amount is performed on the entire image, and a compensation filter that suppresses the variation of the noise level according to the shift amount caused by the interpolation processing is further applied (see, for example, Patent Document 1).
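The shift-dependent noise variation that motivates the compensation filter can be demonstrated numerically: 2-tap linear interpolation with shift s scales the standard deviation of independent pixel noise by sqrt((1-s)^2 + s^2), so the noise level of the interpolated image depends on the shift amount. A small sketch, using Gaussian noise as a stand-in for shot noise:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=1_000_000)  # shot-noise stand-in, sigma = 1

for s in (0.0, 0.25, 0.5):
    # 2-tap linear interpolation of the noise signal at fractional shift s
    interp = (1.0 - s) * noise[:-1] + s * noise[1:]
    # measured noise level vs the predicted factor sqrt((1-s)^2 + s^2)
    print(s, interp.std(), np.hypot(1.0 - s, s))
```

At s = 0 the noise level is unchanged, while at s = 0.5 it drops to about 0.71 of the original, which is exactly the shift-dependent variation the prior art compensates for in a second pass.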
  • One aspect of the present invention provides a device and a method capable of correcting an image so that variation of the noise level according to the shift amount does not occur.
  • The image correction device of one aspect of the present invention comprises: a storage device that stores an image; a shift amount determination circuit that determines a shift amount in sub-pixel units for either the entire image or a partial image at each position of the image; and an interpolation processing circuit that, for each pixel of the entire image or of each partial image, performs interpolation processing according to the shift amount using the gradation value of the target pixel and the gradation values of the pixels surrounding the target pixel.
  • In the interpolation processing, the interpolated value of the target pixel is calculated as a linear sum of the gradation value of the target pixel and the gradation values of the surrounding pixels, using weighting coefficients set so that, at each pixel position, the sum of the weighting coefficients of the terms of the linear sum is 1 and the sum of the squares of the weighting coefficients is a constant that does not depend on the shift amount.
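The weighting-coefficient constraint can be illustrated with a hypothetical 3-tap filter: requiring the weights to sum to 1, to have first moment equal to the shift amount s (so the value is interpolated at the shifted position), and to have a constant sum of squares c leaves a solvable system. This is only a sketch of the constraint stated above, not the patent's actual filter coefficients; the choice c = 0.5 is an arbitrary feasible constant:

```python
import math

def constant_noise_weights(s, c=0.5):
    """Illustrative 3-tap weights [w_-1, w_0, w_+1] for sub-pixel shift s
    (|s| <= 0.5) satisfying:
      sum of weights         = 1  (gray level preserved)
      first moment           = s  (interpolation at the shifted position)
      sum of squared weights = c  (noise gain independent of s)
    Substituting t = w_-1 + w_+1 reduces the system to a quadratic in t."""
    disc = 6.0 * c - 2.0 - 3.0 * s * s
    if disc < 0.0:
        raise ValueError("c too small for this shift amount")
    t = (2.0 - math.sqrt(disc)) / 3.0   # w_-1 + w_+1
    return [(t - s) / 2.0, 1.0 - t, (t + s) / 2.0]

for s in (0.0, 0.25, 0.5):
    w = constant_noise_weights(s)
    print(s, round(sum(w), 12), round(sum(x * x for x in w), 12))
```

Note that at s = 0 the solution is the smoothing filter (1/6, 2/3, 1/6) rather than the identity: keeping the noise gain constant across all shift amounts requires some smoothing even at zero shift.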
  • The pattern inspection device of one aspect of the present invention comprises: a storage device that stores a first image in which a figure pattern is formed; a shift amount determination circuit that determines a shift amount in sub-pixel units for either the entire first image or a partial image at each position of the first image; an interpolation processing circuit that, for each pixel of the entire image or of each partial image, performs interpolation processing according to the shift amount using the gradation value of the target pixel and the gradation values of the pixels surrounding the target pixel; and a comparison circuit that compares the interpolated first image with a second image corresponding to the first image.
  • In the interpolation processing, the interpolated value of the target pixel is calculated as a linear sum of the gradation value of the target pixel and the gradation values of the surrounding pixels, using weighting coefficients set so that, at each pixel position, the sum of the weighting coefficients of the terms of the linear sum is 1 and the sum of the squares of the weighting coefficients is a constant that does not depend on the shift amount.
  • In the image correction method of one aspect of the present invention, a shift amount in sub-pixel units is determined for either the entire image stored in a storage device or a partial image at each position of the image, and interpolation processing according to the shift amount is performed for each pixel of the entire image or of each partial image using the gradation value of the target pixel and the gradation values of the pixels surrounding the target pixel.
  • In the interpolation processing, the interpolated value of the target pixel is calculated as a linear sum of the gradation value of the target pixel and the gradation values of the surrounding pixels, using weighting coefficients set so that, at each pixel position, the sum of the weighting coefficients of the terms of the linear sum is 1 and the sum of the squares of the weighting coefficients is a constant that does not depend on the shift amount.
  • According to one aspect of the present invention, the image can be corrected so that variation of the noise level according to the shift amount does not occur, and therefore highly accurate pattern inspection can be performed.
  • FIG. 1 is a block diagram showing the configuration of the pattern inspection device according to the first embodiment.
  • FIG. 2 is a conceptual diagram showing the configuration of the shaping aperture array substrate according to the first embodiment.
  • FIG. 3 is a diagram showing an example of a plurality of chip regions formed on the semiconductor substrate in the first embodiment.
  • FIG. 4 is a diagram for explaining the image acquisition process in the first embodiment.
  • FIG. 5 is an example of an internal configuration diagram showing the configuration of the comparison circuit in the first embodiment.
  • A diagram showing an example of the variation of the noise level in a comparative example of the first embodiment.
  • FIG.: a flowchart showing the main steps of the image correction method in the first embodiment.
  • FIG.: a diagram for explaining the filter function of the sub-pixel interpolation processing in the first embodiment.
  • FIG.: a diagram for explaining the filter function of the sub-pixel interpolation processing in the first embodiment.
  • FIG.: a flowchart showing the internal steps of the filter table creation process in the first embodiment.
  • FIG.: a graph showing an example of the filter table in the first embodiment.
  • FIG.: a diagram showing an example of the image before correction in a comparative example of the first embodiment.
  • FIG.: a diagram showing an example of the corrected image in a comparative example of the first embodiment.
  • FIG.: a diagram showing an example of the image before correction in the first embodiment.
  • FIG.: a diagram showing an example of the corrected image in the first embodiment.
  • FIG.: an example of an internal configuration diagram showing the configuration of the comparison circuit in the second embodiment.
  • FIG.: a diagram showing an example of the configuration of the image correction device in the third embodiment.
  • FIG.: a diagram showing an example of the shape of the correction target image in the third embodiment.
  • FIG.: a diagram showing an example of the shape of the correction target image in the third embodiment.
  • FIG.: a diagram showing an example of the shape of the correction target image in the third embodiment.
  • In the following embodiments, an electron beam inspection device that acquires an image using an electron beam will be described as an example of the image correction device.
  • However, the device may acquire an image using an ion beam, ultraviolet light, or the like.
  • Alternatively, it may be a device that corrects an externally acquired image that is input to it.
  • Further, although a configuration using a multi-beam of electron beams is described below, a configuration using a single electron beam may also be used.
  • FIG. 1 is a configuration diagram showing a configuration of a pattern inspection device according to the first embodiment.
  • the inspection device 100 for inspecting a pattern formed on a substrate is an example of an electron beam inspection device.
  • the inspection device 100 includes an image acquisition mechanism 150 and a control system circuit 160 (control unit).
  • the image acquisition mechanism 150 includes an electron beam column 102 (electron lens barrel), an inspection room 103, a detection circuit 106, a chip pattern memory 123, a stage drive mechanism 142, and a laser length measuring system 122.
  • In the electron beam column 102, an electron gun 201, an illumination lens 202, a shaping aperture array substrate 203, a reduction lens 205, a limiting aperture substrate 213, an objective lens 207, a main deflector 208, a sub-deflector 209, a batch blanking deflector 212, a beam separator 214, a deflector 218, projection lenses 224 and 226, and a multi-detector 222 are arranged.
  • The primary electron optical system is composed of the elements from the illumination lens 202 through the sub-deflector 209. Further, the secondary electron optical system is composed of the electromagnetic lens 207, the beam separator 214, the deflector 218, and the electromagnetic lenses 224 and 226.
  • a stage 105 that can move at least in the XY direction is arranged in the inspection room 103.
  • a substrate 101 (sample) to be inspected is arranged on the stage 105.
  • The substrate 101 may be an exposure mask substrate or a semiconductor substrate such as a silicon wafer.
  • a plurality of chip patterns are formed on the semiconductor substrate.
  • a chip pattern is formed on the exposure mask substrate.
  • the chip pattern is composed of a plurality of graphic patterns.
  • The substrate 101 is placed on the stage 105 with, for example, the pattern-forming surface facing upward. Further, on the stage 105, a mirror 216 is arranged that reflects the laser beam for laser length measurement emitted from the laser length measuring system 122 arranged outside the inspection room 103.
  • the multi-detector 222 is connected to the detection circuit 106 outside the electron beam column 102.
  • the detection circuit 106 is connected to the chip pattern memory 123.
  • The control computer 110, which controls the entire inspection device 100, is connected via the bus 120 to the position circuit 107, the comparison circuit 108, the reference image creation circuit 112, the stage control circuit 114, the lens control circuit 124, the blanking control circuit 126, the deflection control circuit 128, the filter table creation circuit 130, a storage device 109 such as a magnetic disk device, a monitor 117, a memory 118, and a printer 119. Further, the deflection control circuit 128 is connected to DAC (digital-to-analog conversion) amplifiers 144, 146, and 148. The DAC amplifier 146 is connected to the main deflector 208, the DAC amplifier 144 is connected to the sub-deflector 209, and the DAC amplifier 148 is connected to the deflector 218.
  • the chip pattern memory 123 is connected to the comparison circuit 108.
  • the stage 105 is driven by the drive mechanism 142 under the control of the stage control circuit 114.
  • Under the stage control circuit 114, a drive system such as three-axis (X-Y-θ) motors that drive the stage in the X, Y, and θ directions of the stage coordinate system is configured, and the stage 105 can move in the X, Y, and θ directions.
  • For these X, Y, and θ motors (not shown), for example, step motors can be used.
  • The stage 105 can be moved in the horizontal direction and in the rotational direction by the motors of the X, Y, and θ axes.
  • the moving position of the stage 105 is measured by the laser length measuring system 122 and supplied to the position circuit 107.
  • the laser length measuring system 122 measures the position of the stage 105 by the principle of the laser interferometry method by receiving the reflected light from the mirror 216.
  • In the stage coordinate system, for example, the X, Y, and θ directions are set with respect to a plane orthogonal to the optical axis of the multi-primary electron beam 20.
  • the electromagnetic lens 202, the electromagnetic lens 205, the electromagnetic lens 206, the electromagnetic lens 207 (objective lens), the electromagnetic lenses 224,226, and the beam separator 214 are controlled by the lens control circuit 124.
  • the batch blanking deflector 212 is composed of electrodes having two or more poles, and is controlled by the blanking control circuit 126 via a DAC amplifier (not shown) for each electrode.
  • the sub-deflector 209 is composed of electrodes having four or more poles, and each electrode is controlled by the deflection control circuit 128 via the DAC amplifier 144.
  • the main deflector 208 is composed of electrodes having four or more poles, and each electrode is controlled by a deflection control circuit 128 via a DAC amplifier 146.
  • the deflector 218 is composed of electrodes having four or more poles, and each electrode is controlled by a deflection control circuit 128 via a DAC amplifier 148.
  • A high-voltage power supply circuit (not shown) is connected to the electron gun 201. The application of an acceleration voltage from the high-voltage power supply circuit between the filament and the extraction electrode (not shown) in the electron gun 201, together with the application of a voltage to a predetermined extraction electrode (Wehnelt), accelerates the group of electrons emitted from the filament, which is emitted as the electron beam 200.
  • FIG. 1 shows the configuration necessary for explaining the first embodiment.
  • The inspection device 100 may also include other configurations that are usually required.
  • FIG. 2 is a conceptual diagram showing the configuration of the shaping aperture array substrate according to the first embodiment.
  • In the shaping aperture array substrate 203, holes (openings) 22 arranged two-dimensionally in m1 columns in the horizontal (x) direction × n1 rows in the vertical (y) direction (m1 and n1 are integers of 2 or more) are formed at a predetermined arrangement pitch in the x and y directions.
  • In the example of FIG. 2, a case where 23 × 23 holes (openings) 22 are formed is shown.
  • Each hole 22 is formed as a rectangle of the same size and shape; alternatively, the holes may be circles of the same outer diameter.
  • A part of the electron beam 200 passes through each of these plurality of holes 22, whereby the multi-primary electron beam 20 is formed.
  • The operation of the image acquisition mechanism 150 in the case of acquiring a secondary electron image will be described.
  • The image acquisition mechanism 150 acquires an image to be inspected of the figure pattern from the substrate 101, on which the figure pattern is formed, by using the electron multi-beam 20.
  • the operation of the image acquisition mechanism 150 in the inspection device 100 will be described.
  • The electron beam 200 emitted from the electron gun 201 is refracted by the electromagnetic lens 202 and illuminates the entire shaping aperture array substrate 203.
  • A plurality of holes 22 are formed in the shaping aperture array substrate 203, and the electron beam 200 illuminates a region including all of the plurality of holes 22.
  • The portions of the electron beam 200 irradiated at the positions of the plurality of holes 22 each pass through the corresponding hole 22 of the shaping aperture array substrate 203, thereby forming the multi-primary electron beam 20.
  • The formed multi-primary electron beam 20 is refracted by the electromagnetic lenses 205 and 206 and, while repeatedly forming intermediate images and crossovers, passes through the beam separator 214 arranged at a position conjugate to the intermediate image plane (I.P.) of each beam of the multi-primary electron beam 20, and proceeds to the electromagnetic lens 207 (objective lens).
  • the electromagnetic lens 207 focuses the multi-primary electron beam 20 on the substrate 101.
  • The multi-primary electron beam 20 focused on the surface of the substrate 101 (sample) by the objective lens 207 is collectively deflected by the main deflector 208 and the sub-deflector 209, and each beam is irradiated onto its respective irradiation position on the substrate 101.
  • When the entire multi-primary electron beam 20 is collectively deflected by the batch blanking deflector 212, it deviates from the central hole of the limiting aperture substrate 213, and the entire beam 20 is shielded.
  • The multi-primary electron beam 20 that is not deflected by the batch blanking deflector 212 passes through the central hole of the limiting aperture substrate 213, as shown in FIG. 1.
  • The limiting aperture substrate 213 shields the multi-primary electron beam 20 when it is deflected by the batch blanking deflector 212 so that the beam is turned off.
  • The multi-primary electron beam 20 for image acquisition is formed by the group of beams that pass through the limiting aperture substrate 213 between the time the beam is turned on and the time the beam is turned off.
  • When the multi-primary electron beam 20 is irradiated onto a desired position on the substrate 101, a bundle of secondary electrons (multi-secondary electron beam 300), including backscattered electrons, corresponding to each beam of the multi-primary electron beam 20 is emitted from the substrate 101 due to the irradiation.
  • the multi-secondary electron beam 300 emitted from the substrate 101 passes through the electromagnetic lens 207 and proceeds to the beam separator 214.
  • the beam separator 214 generates an electric field and a magnetic field in the direction orthogonal to the plane orthogonal to the direction (orbital central axis) in which the central beam of the multi-primary electron beam 20 travels.
  • the electric field exerts a force in the same direction regardless of the traveling direction of the electron.
  • The magnetic field exerts a force according to Fleming's left-hand rule, so the direction of the force acting on an electron can be changed depending on the electron's direction of entry.
  • the force due to the electric field and the force due to the magnetic field cancel each other out in the multi-primary electron beam 20 that enters the beam separator 214 from above, and the multi-primary electron beam 20 travels straight downward.
  • The multi-secondary electron beam 300, bent diagonally upward and separated from the multi-primary electron beam 20, is further bent by the deflector 218 and projected onto the multi-detector 222 while being refracted by the electromagnetic lenses 224 and 226.
  • the multi-detector 222 detects the projected multi-secondary electron beam 300.
  • The multi-detector 222 has a plurality of detection elements (for example, a diode-type two-dimensional sensor (not shown)). On the detection surface of the multi-detector 222, each secondary electron beam of the multi-secondary electron beam 300 collides with the detection element corresponding to its beam of the multi-primary electron beam 20, generating electrons, and secondary electron image data is generated for each pixel.
  • the intensity signal detected by the multi-detector 222 is output to the detection circuit 106.
  • Each primary electron beam is irradiated into the sub-irradiation region, delimited by the x-direction inter-beam pitch and the y-direction inter-beam pitch, in which its own beam is located on the substrate 101, and scans (scan operation) within that sub-irradiation region.
  • FIG. 3 is a diagram showing an example of a plurality of chip regions formed on the semiconductor substrate in the first embodiment.
  • a plurality of chips (wafer dies) 332 are formed in a two-dimensional array in the inspection region 330 of the semiconductor substrate (wafer) 101.
  • a mask pattern for one chip formed on an exposure mask substrate is transferred to each chip 332 by being reduced to, for example, 1/4 by an exposure device (stepper) (not shown).
  • FIG. 4 is a diagram for explaining the image acquisition process in the first embodiment.
  • the region of each chip 332 is divided into a plurality of stripe regions 32 with a predetermined width, for example, in the y direction.
  • the scanning operation by the image acquisition mechanism 150 is performed, for example, for each stripe region 32.
  • The scanning operation proceeds relatively in the x direction along each stripe region 32.
  • Each stripe region 32 is divided into a plurality of rectangular regions 33 in the longitudinal direction.
  • the movement of the beam to the rectangular region 33 of interest is performed by batch deflection of the entire multi-primary electron beam 20 by the main deflector 208.
  • The irradiation region 34 that can be irradiated by one irradiation of the multi-primary electron beam 20 has a size of (the x-direction inter-beam pitch of the multi-primary electron beam 20 on the surface of the substrate 101 multiplied by the number of beams in the x direction) × (the y-direction inter-beam pitch multiplied by the number of beams in the y direction).
  • the irradiation region 34 becomes the field of view of the multi-primary electron beam 20.
  • Each of the primary electron beams 10 constituting the multi-primary electron beam 20 is irradiated into the sub-irradiation region 29, delimited by the x-direction and y-direction inter-beam pitches, in which its own beam is located, and scans (scan operation) within the sub-irradiation region 29.
  • Each primary electron beam 10 is responsible for a sub-irradiation region 29 different from those of the other beams.
  • At each shot, each primary electron beam 10 irradiates the same relative position within its responsible sub-irradiation region 29.
  • the movement of the primary electron beam 10 in the sub-irradiation region 29 is performed by batch deflection of the entire multi-primary electron beam 20 by the sub-deflector 209. This operation is repeated to sequentially irradiate the inside of one sub-irradiation region 29 with one primary electron beam 10.
  • The width of each stripe region 32 is set to the same size as the y-direction size of the irradiation region 34, or narrower by the scan margin.
  • In the example of FIG. 4, the irradiation region 34 has the same size as the rectangular region 33; however, it is not limited to this, and the irradiation region 34 may be smaller or larger than the rectangular region 33.
  • Each primary electron beam 10 constituting the multi-primary electron beam 20 is irradiated into the sub-irradiation region 29 in which its own beam is located and scans the inside of that sub-irradiation region 29.
  • the irradiation position is moved to the adjacent rectangular region 33 in the same stripe region 32 by the collective deflection of the entire multi-primary electron beam 20 by the main deflector 208.
  • This operation is repeated to irradiate the inside of the stripe region 32 in order.
  • the irradiation region 34 moves to the next stripe region 32 by moving the stage 105 and / or batch deflection of the entire multi-primary electron beam 20 by the main deflector 208.
  • the scanning operation for each sub-irradiation region 29 and the acquisition of the secondary electron image are performed.
  • By combining the secondary electron images obtained for each sub-irradiation region 29, a secondary electron image of the rectangular region 33, of the stripe region 32, or of the chip 332 is constructed. When actually performing image comparison, the sub-irradiation region 29 in each rectangular region 33 is further divided into a plurality of frame regions 30, and the frame images 31 of the frame regions 30 are compared.
  • FIG. 4 shows a case where the sub-irradiation region 29 scanned by one primary electron beam 10 is divided, for example, into four frame regions 30 formed by dividing the sub-irradiation region 29 into two in each of the x and y directions.
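The division of a sub-irradiation region into 2 × 2 frame regions can be sketched with an array reshape; the frame count and the region size used here are illustrative assumptions:

```python
import numpy as np

def split_into_frames(region, n=2):
    """Divide a square region image into n x n frame images.
    Sketch of the frame division described above; n=2 matches the
    2-by-2 split of FIG. 4, the region size is hypothetical."""
    h, w = region.shape
    fh, fw = h // n, w // n
    return (region.reshape(n, fh, n, fw)
                  .swapaxes(1, 2)          # group block rows with block cols
                  .reshape(n * n, fh, fw))  # one image per frame region

sub_region = np.arange(16).reshape(4, 4)  # hypothetical 4x4 sub-irradiation image
frames = split_into_frames(sub_region)
print(frames.shape)  # -> (4, 2, 2)
```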
  • During the scanning operation, the main deflector 208 performs a tracking operation by collectively deflecting the irradiation position of the multi-primary electron beam 20 so as to follow the movement of the stage 105. The emission position of the multi-secondary electron beam 300 therefore changes from moment to moment with respect to the orbital central axis of the multi-primary electron beam 20. Similarly, when scanning within the sub-irradiation region 29, the emission position of each secondary electron beam changes from moment to moment within the sub-irradiation region 29. The deflector 218 collectively deflects the multi-secondary electron beam 300 so that each secondary electron beam whose emission position has changed is irradiated into the corresponding detection region of the multi-detector 222.
  • In this way, the image acquisition mechanism 150 proceeds with the scanning operation for each stripe region 32.
  • As described above, the multi-primary electron beam 20 is irradiated, and the multi-secondary electron beam 300 emitted from the substrate 101 as a result of this irradiation is detected by the multi-detector 222.
  • The detected multi-secondary electron beam 300 may contain backscattered electrons; alternatively, the backscattered electrons may diverge while traveling through the secondary electron optical system and fail to reach the multi-detector 222.
  • the secondary electron detection data (measured image data: secondary electron image data: inspected image data) for each pixel in each sub-irradiation region 29 detected by the multi-detector 222 is output to the detection circuit 106 in the order of measurement.
  • In the detection circuit 106, the analog detection data are converted into digital data by an A/D converter (not shown) and stored in the chip pattern memory 123. The obtained measurement image data are then transferred to the comparison circuit 108 together with information indicating each position from the position circuit 107.
  • The reference image creation circuit 112 creates a reference image corresponding to the frame image 31 of each frame region 30 based on the design data from which the plurality of figure patterns formed on the substrate 101 originate. Specifically, it operates as follows: first, the design pattern data is read from the storage device 109 through the control computer 110, and each figure pattern defined in the read design pattern data is converted into binary or multi-valued image data.
  • The figures defined in the design pattern data are, for example, basic figures such as rectangles and triangles. For each pattern figure, figure data defining its shape, size, position, and the like is stored, for example as the coordinates (x, y) of the figure's reference position, the lengths of its sides, and a figure code serving as an identifier that distinguishes figure types such as rectangle and triangle.
  • When the design pattern data serving as the figure data is input to the reference image creation circuit 112, the data is expanded into data for each figure, and the figure code indicating the figure shape, the figure dimensions, and the like are interpreted. The data is then developed into binary or multi-valued design pattern image data as a pattern arranged in grid squares of a predetermined quantized dimension as a unit, and output.
  • In other words, the design data is read, the occupancy rate of the figures in the design pattern is calculated for each cell created by virtually dividing the inspection region into cells of a predetermined dimension as a unit, and n-bit occupancy rate data is output. For example, it is preferable to set one cell as one pixel.
  • Next, the reference image creation circuit 112 filters the design image data of the design pattern, which is the image data of the figures, using a predetermined filter function. In this way, the design-side image data, whose image intensity (gray-scale value) is a digital value, can be matched with the image generation characteristics obtained by the irradiation of the multi-primary electron beam 20.
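The reference image creation described above (per-cell figure occupancy followed by filtering) can be sketched as follows. The rectangle coordinates and the small smoothing kernel standing in for the device's filter function are assumptions for illustration only:

```python
import numpy as np

def occupancy_image(x0, y0, x1, y1, shape):
    """Rasterize an axis-aligned rectangle into per-pixel area occupancy
    (0.0 - 1.0), as in the cell occupancy-rate step described above."""
    img = np.zeros(shape)
    for iy in range(shape[0]):
        for ix in range(shape[1]):
            # overlap of the rectangle with the unit cell [ix, ix+1) x [iy, iy+1)
            ox = max(0.0, min(x1, ix + 1) - max(x0, ix))
            oy = max(0.0, min(y1, iy + 1) - max(y0, iy))
            img[iy, ix] = ox * oy
    return img

# Hypothetical design figure: rectangle [1.5, 4.5] x [1.0, 3.0] on a 5x6 grid.
design = occupancy_image(1.5, 1.0, 4.5, 3.0, (5, 6))

# Hypothetical 3x3 smoothing kernel standing in for the filter function.
kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
pad = np.pad(design, 1)
reference = sum(kernel[dy, dx] * pad[dy:dy + 5, dx:dx + 6]
                for dy in range(3) for dx in range(3))
print(reference.shape)  # -> (5, 6)
```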
  • the image data for each pixel of the created reference image is output to the comparison circuit 108.
  • FIG. 5 is an example of an internal configuration diagram showing a configuration in the comparison circuit according to the first embodiment.
  • In the comparison circuit 108, storage devices 50, 51, 52, and 74 such as magnetic disk devices, smoothing processing units 54 and 56, a sub-pixel interpolation processing unit 60, a sub-pixel shift processing unit 63, an SSD (Sum of Squared Differences) value calculation unit 62, an optimization processing unit 64, a smoothing processing unit 70, and a comparison processing unit 72 are arranged.
  • Each of these "units" includes processing circuitry, and this processing circuitry includes an electric circuit, a computer, a processor, a circuit board, a quantum circuit, a semiconductor device, or the like. Common processing circuitry (the same processing circuitry) may be used for the units, or different processing circuitry (separate processing circuitries) may be used. The input data required by the smoothing processing units 54 and 56, the sub-pixel interpolation processing unit 60, the sub-pixel shift processing unit 63, the SSD value calculation unit 62, the optimization processing unit 64, the smoothing processing unit 70, and the comparison processing unit 72, and the results of their calculations, are stored in a memory (not shown) each time.
  • The inspection-image data (frame image data) transferred into the comparison circuit 108 is stored in the storage device 52. The reference image data transferred into the comparison circuit 108 is stored in the storage device 50. In the comparison circuit 108, the frame image serving as the inspection image and the reference image are aligned.
  • In an image captured with an electron beam, the number of electrons incident per unit region is limited, so the influence of shot noise from individual electrons is large. For misalignment of less than one pixel, a two-step processing method is therefore conceivable in which interpolation processing according to the shift amount is performed on the entire image, and a compensation filter that suppresses the shift-amount-dependent fluctuation of the noise level caused by the interpolation is further applied. However, because such processing is performed in two stages, for example 4-tap interpolation processing followed by 3-tap compensation filter processing, there are problems in that the amount of memory required for the data processing becomes large and in that the fluctuation of the noise level may not be completely eliminated.
  • FIG. 6 is a diagram showing an example of noise level fluctuation in the comparative example of the first embodiment.
  • In the comparative example, the 4-tap interpolation processing and the, for example, 3-tap compensation filter processing are performed in two stages. When the 4-tap shift filter and the 3-tap compensation blur filter are applied in succession, the combination is equivalent to a 6-tap filter; however, as shown in FIG. 6, the sum of squares of its coefficients does not become strictly constant. Accordingly, the fluctuation of the noise amount caused by the shift amount is not completely suppressed and remains slightly. As described above, when the filters are applied in two stages, the fluctuation of the noise amount cannot be completely suppressed. Therefore, in the first embodiment, the interpolation processing for image correction is performed such that no shift-amount-dependent fluctuation of the noise level occurs.
  • FIG. 7 is a flowchart showing a main process of the image correction method according to the first embodiment.
  • the image correction method according to the first embodiment carries out a series of steps of a filter table creation step (S102), a shift amount determination step (S202), and an interpolation processing step (S204).
  • FIGS. 8A and 8B are diagrams for explaining the filter function of the sub-pixel interpolation processing in the first embodiment.
  • the examples of FIGS. 8A and 8B show interpolation processing when the image is shifted in the one-dimensional direction (x direction).
  • In general, when interpolating at a shifted position x, a total of four pixels, two on each side of the position x in the shift direction, are used; a method of interpolating using the gradation values (f(−1), f(0), f(1), f(2)) of these pixels (−1, 0, 1, 2) is often employed.
  • In the first embodiment, similarly, a linear sum of a total of four gradation values, namely the gradation value f(0) of the target pixel and the gradation values f(−1), f(1), and f(2) of the three peripheral pixels, is calculated. Here, the target pixel is located between the pixel (0) and the peripheral pixel (1), shifted from the pixel (0) by the shift amount x. Since x is in sub-pixel units, 0 ≤ x ≤ 1.
  • As shown in FIG. 8B, the gradation value f(x) (interpolated value) of the target pixel can be defined, using for example a 4-tap filter, by equation (1) (the interpolation filter function), which expresses the sum of the four pixel values multiplied by the weighting coefficients a(x), b(x), c(x), and d(x), respectively: f(x) = a(x)·f(−1) + b(x)·f(0) + c(x)·f(1) + d(x)·f(2) … (1)
  • In the first embodiment, the sum of the weighting coefficients a(x), b(x), c(x), and d(x) of the terms of the linear sum is set to 1, and the sum of the squares of a(x), b(x), c(x), and d(x) is set to a constant R that does not depend on the shift amount x. Specifically, the coefficients are defined so that equations (2-1) and (2-2) hold: a(x) + b(x) + c(x) + d(x) = 1 … (2-1), and a(x)² + b(x)² + c(x)² + d(x)² = R … (2-2)
  • Here, the amount of noise after interpolation is determined by the left side of equation (2-2). Therefore, if the left side of equation (2-2) is made equal to a constant R, the noise amount after interpolation becomes constant, independent of the shift amount x. In other words, the fluctuation of the noise level can be made zero regardless of the shift amount x.
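Why equation (2-2) controls the noise can be checked numerically: for independent pixel noise of standard deviation σ, a weighted sum of pixels has standard deviation σ·√(Σwᵢ²). The sketch below is illustrative only; it uses plain 2-tap linear interpolation rather than the patent's 4-tap filter, and shows that when the sum of squared weights depends on the shift amount x, the output noise level varies with x:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(200_000)  # i.i.d. noise, sigma = 1

for x in (0.0, 0.25, 0.5):
    # 2-tap linear interpolation: f(x) = (1-x)*f(0) + x*f(1)
    w = np.array([1.0 - x, x])
    shifted = w[0] * noise[:-1] + w[1] * noise[1:]
    predicted = np.sqrt((w ** 2).sum())   # sigma_out = sigma * sqrt(sum of w_i^2)
    print(f"x={x:4.2f}  predicted std={predicted:.4f}  measured std={shifted.std():.4f}")
```

Because (1 − x)² + x² falls from 1 at x = 0 to 0.5 at x = 0.5, the interpolated noise is visibly quieter at half-pixel shifts; this is exactly the kind of shift-dependent variation that the constant-R condition of equation (2-2) is designed to prevent.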
  • Using an even function k(x) that depends on the distance from the position x shown in FIG. 8A to each peripheral pixel, the coefficients can be defined as in the following equations (3-1) to (3-4): a(x) = k(1 + x) … (3-1), b(x) = k(x) … (3-2), c(x) = k(1 − x) … (3-3), d(x) = k(2 − x) … (3-4). Here, k1(x) is the function that defines k(x) on the interval 0 ≤ x ≤ 1, and k2(x) is the function that defines k(x) on the interval 1 ≤ x ≤ 2; each is defined on its respective range.
  • By transformation using equations (2-1), (2-2), and (3-1) to (3-4), k2(x + 1) can be defined by equation (6-1), and k2(2 − x) can be defined by equation (6-2). In other words, once k1(x) is determined, k2(x) is also determined. Moreover, since R is determined if any one of k1(0), k1(1/2), and k1(1) is determined, in short, determining k1(x) also determines k2(x).
  • Here, k1(x) can be represented by, for example, the cubic polynomial in x shown in equation (7), and the coefficients p, q, r, and s on the right side of equation (7) can be defined by the following equations (8-1) to (8-4).
  • the filter table creation circuit 130 creates a filter table in which coefficients for performing interpolation processing are defined.
  • FIG. 9 is a flowchart showing an internal process of the filter table creating process in the first embodiment.
  • the filter table may be created in the inspection device 100 or may be input after being created externally.
  • the filter table creation circuit 130 sets k1 (1/2) and r (S104). These values may be set to values input by the user from the outside.
  • the filter table creation circuit 130 substitutes k1 (1/2) into the equation (4) and calculates the constant R (S106).
  • the filter table creation circuit 130 substitutes R into the equations (5-1) and (5-2) to calculate k1 (0) and k1 (1) (S108).
  • Next, the filter table creation circuit 130 substitutes the values of k1(0), k1(1/2), k1(1), and r into equations (8-1), (8-2), and (8-4) to calculate the coefficients p, q, and s (S110). As a result, k1(x) is obtained.
  • Next, the filter table creation circuit 130 determines whether or not the obtained k1(0), k1(1), k2(1), and k2(2) are real numbers (S114). If k1(0), k1(1), k2(1), and k2(2) are not all real numbers, the values of k1(1/2) and r are changed and the steps from S104 onward are similarly repeated until all of them become real numbers.
  • From k1(x) and k2(x) obtained as described above, the weighting coefficients a(x), b(x), c(x), and d(x) can be obtained. The filter table creation circuit 130 then calculates each value of the weighting coefficients a(x), b(x), c(x), and d(x) for each sub-pixel shift amount (S114). Note that the set of values of k1(1/2) and r is not limited to one type. Then, a filter table that defines each value of a(x), b(x), c(x), and d(x) is created (S118).
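The table-building step can be sketched as follows. This is not the patent's derivation via equations (4) to (8-4); as labeled assumptions, the Keys cubic stands in for k1(x), R = 1 is chosen, and the outer taps a(x) and d(x) are solved directly from the two constraints (2-1) and (2-2):

```python
import numpy as np

R = 1.0                                          # assumed target sum of squared coefficients
k1 = lambda t: 1.5 * t**3 - 2.5 * t**2 + 1.0     # stand-in for k1 on [0, 1] (Keys cubic)

def taps(x):
    """4-tap weights (a, b, c, d) for sub-pixel shift x with sum 1 and squared sum R."""
    b, c = k1(x), k1(1.0 - x)              # inner taps from k1 (cf. eqs. 3-2, 3-3)
    s = 1.0 - (b + c)                      # a + d, from eq. (2-1)
    q = R - (b * b + c * c)                # a^2 + d^2, from eq. (2-2)
    root = np.sqrt(2.0 * q - s * s) / 2.0  # real only if 2q >= s^2
    return s / 2.0 + root, b, c, s / 2.0 - root

# Filter table: one row of (a, b, c, d) per 1/256-pixel shift, as in FIG. 10A.
table = np.array([taps(i / 256.0) for i in range(256)])
assert np.allclose(table.sum(axis=1), 1.0)        # eq. (2-1) holds for every shift
assert np.allclose((table ** 2).sum(axis=1), R)   # eq. (2-2) holds for every shift
```

The real-number determination in FIG. 9 corresponds to the discriminant 2q − s² under the square root here: if it goes negative for some shift, a different pair of k1(1/2) and r must be chosen.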
  • FIGS. 10A and 10B are graphs showing an example of the filter table in the first embodiment.
  • In FIG. 10A, the horizontal axis shows the shift amount as values from 0 to 255, and the vertical axis shows the values of the weighting coefficients a(x), b(x), c(x), and d(x).
  • FIG. 10B shows a filter waveform of each coefficient shown in FIG. 10A when 256 gradations of 0 to 255 that the shift amount x can take are converted into a value of 0 to 1.
  • the obtained filter table is stored in the storage device 109.
  • Next, the optimization processing unit 64 (shift amount determination unit) determines a shift amount in sub-pixel units for either the entire reference image (first image) or the partial image at each position of the reference image. First, a case will be described in which the entire reference image is shifted by a uniform shift amount x.
  • the smoothing processing unit 56 reads out a frame image (second image) to be an inspected image from the storage device 52, and performs smoothing processing to smooth the end of the pattern with respect to the graphic pattern in the frame image.
  • the smoothing processing unit 54 reads out the reference image of the corresponding frame area 30 from the storage device 50, and performs smoothing processing for smoothing the pattern end portion with respect to the graphic pattern of the reference image.
  • As the filter for the smoothing processing, for example, a Gaussian filter is preferably used; for example, a Gaussian filter of 7 rows × 7 columns is used. Alternatively, for example, a 5 × 5 Gaussian filter may be used, or a Gaussian filter with a matrix larger than 7 × 7 may be used.
  • Each element value of the Gaussian filter is set so that the central element value a (i, j) is the largest and decreases as it deviates from the center.
  • For example, the central element value a(i, j) is 400/4096, and the element values a(i−3, j−3), a(i+3, j−3), a(i−3, j+3), and a(i+3, j+3) at the four corners are all 1/4096. In this case, σ is about 1.3.
  • Each element value of the 7 × 7 filter is applied to one pixel; for example, in a mask die image composed of 512 × 512 pixels, the 7 × 7 pixel region is moved while being shifted two-dimensionally one pixel at a time, and at each shift position the pixel value g(x, y) of the central pixel of the 7 × 7 pixels is calculated. Specifically, the pixel value g(x, y) of the central pixel can be defined by the Gaussian filter function shown in equation (2).
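A normalized floating-point version of such a kernel can be built as below; σ = 1.3 is taken from the text, and the resulting center and corner weights only approximate the integer 400/4096 and 1/4096 element values given above:

```python
import numpy as np

sigma = 1.3
idx = np.arange(-3, 4)                        # offsets -3..3 for a 7 x 7 kernel
g1d = np.exp(-idx ** 2 / (2.0 * sigma ** 2))  # 1-D Gaussian profile
kernel = np.outer(g1d, g1d)                   # separable 2-D Gaussian
kernel /= kernel.sum()                        # normalize so the elements sum to 1

print(f"center a(i, j)    = {kernel[3, 3]:.4f}  (text: 400/4096 = {400/4096:.4f})")
print(f"corner a(i±3, j±3) = {kernel[0, 0]:.2e}  (text: 1/4096 = {1/4096:.2e})")
```

Sliding this kernel over the image and summing the element-weighted pixel values at each position yields the smoothed central-pixel value g(x, y) of equation (2).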
  • noise including shot noise of the image to be inspected and the reference image can be reduced.
  • the effect can be increased by increasing the number of elements in the Gaussian filter matrix.
  • However, noise can be substantially eliminated by using a Gaussian filter represented by a matrix of 7 rows × 7 columns. With the noise substantially eliminated in this way, the shift amount for alignment is obtained as shown below.
  • the sub-pixel shift processing unit 63 variably shifts the smoothed reference image in sub-pixel units.
  • When one pixel is defined by, for example, 256 gradations, it is preferable to shift in units of, for example, 1/16 pixel or 1/8 pixel in the x and y directions.
  • Then, for each shift amount, the SSD value calculation unit 62 calculates the sum of squared differences (SSD) between each pixel value of the frame image and the corresponding pixel value of the reference image. The optimization processing unit 64 obtains the image shift amount that minimizes the SSD. To this end, the sub-pixel shift processing unit 63 variably shifts the image shift amount, and each time the SSD value calculation unit 62 calculates the SSD as described above and outputs the calculation result to the optimization processing unit 64. In this way, the optimization processing unit 64 determines the image shift amount at which the SSD is minimized. The image shift amount that minimizes the SSD, obtained by this iterative calculation, is output to the sub-pixel interpolation processing unit 60.
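The search loop can be sketched as follows; this is a 1-D toy in which `np.interp` is a linear stand-in for the sub-pixel shift filter and all names are illustrative. The frame is a copy of the reference resampled 5/16 pixel away, and the SSD minimum recovers that shift:

```python
import numpy as np

def best_shift(ref, frame, step=1.0 / 16.0):
    """Return the sub-pixel shift of `ref` (searched in `step` units over one
    pixel) that minimizes the sum of squared differences against `frame`."""
    pos = np.arange(len(ref), dtype=float)
    best = (np.inf, 0.0)
    x = 0.0
    while x < 1.0:
        shifted = np.interp(pos + x, pos, ref)   # linear stand-in for the 4-tap shift
        ssd = float(((shifted[:-2] - frame[:-2]) ** 2).sum())
        best = min(best, (ssd, x))
        x += step
    return best[1]

# Synthetic check: the frame is the reference resampled 5/16 pixel away.
pos = np.arange(256, dtype=float)
ref = np.sin(2 * np.pi * pos / 32.0)
frame = np.interp(pos + 5.0 / 16.0, pos, ref)
print(best_shift(ref, frame))   # 0.3125
```

In the actual device the candidate shifts would be applied by the sub-pixel shift processing unit 63 and the SSD computed over the whole 2-D frame, but the argmin structure is the same.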
  • FIG. 11 is a diagram for explaining a method of obtaining a shift amount in the case of shifting depending on the position in the first embodiment.
  • the reference image of the frame area 30 size is divided into a plurality of small areas 35.
  • the example of FIG. 11 shows a case where the frame area 30 is divided into, for example, a 4 ⁇ 4 small area 35.
  • In this case, using the partial images divided into the small areas 35, the SSD value calculation unit 62 calculates the sum of squared differences (SSD) for each small area 35 while variably shifting the shift amount, and the optimization processing unit 64 determines, for each small area 35, the image shift amount that minimizes the SSD.
  • Next, for either the entire image or the partial image at each position, the sub-pixel interpolation processing unit 60 (interpolation processing unit) performs, for each pixel, interpolation processing according to the shift amount using the gradation value of the target pixel and the gradation values of the peripheral pixels of the target pixel. At this time, under the condition that the sum of the weighting coefficients of the terms of the linear sum at the pixel position is 1 and that the sum of the squares of those weighting coefficients is a constant R that does not depend on the shift amount x, the linear sum of the gradation value of the target pixel and the gradation values of the peripheral pixels is calculated as the interpolated value of the target pixel. When the shift amount is uniform over the entire image, interpolation processing according to the shift amount is performed for each pixel of the entire image using the gradation value of the target pixel and the gradation values of the peripheral pixels of the target pixel. When the reference image is divided into the plurality of small areas 35 and a shift amount is determined for each small area 35, interpolation processing according to that shift amount is performed for each pixel of the partial image of each small area 35, likewise using the gradation value of the target pixel and the gradation values of the peripheral pixels. Specifically, the operation is as follows.
  • First, the sub-pixel interpolation processing unit 60 acquires the weighting coefficients a(x), b(x), c(x), and d(x) of the terms of the linear sum for interpolation corresponding to the determined sub-pixel shift amount x.
  • Then, the sub-pixel interpolation processing unit 60 calculates the pixel value f(x) (interpolated value) for each pixel of the reference image to be interpolated. Specifically, a linear sum of a total of four gradation values, namely the gradation value f(0) of the target pixel and the gradation values f(−1), f(1), and f(2) of the three peripheral pixels, is computed. In other words, the pixel value f(x) (interpolated value) of the target pixel is calculated as a linear sum using the pixel values of the four pixels, for example with the 4-tap filter shown in equation (1). Note that, naturally, a positional shift of an image can occur not only in the horizontal direction but also in the vertical direction.
  • As described above, since the sum of the weighting coefficients a(x), b(x), c(x), and d(x) of the terms of the linear sum is 1, and the sum of the squares of a(x), b(x), c(x), and d(x) is set to a constant R that does not depend on the shift amount x, fluctuation of the noise level according to the shift amount can be avoided.
  • FIGS. 12A and 12B are diagrams showing an example of images before and after correction in the comparative example of the first embodiment.
  • In FIGS. 12A and 12B, graphs showing the state of the noise at each position in the x direction for the central portion of the image in the y direction are also shown.
  • FIGS. 13A and 13B are diagrams showing an example of images before and after correction in the first embodiment. In FIGS. 13A and 13B, graphs showing the state of the noise at each position in the x direction for the central portion of the image in the y direction are likewise shown.
  • In the comparative example, after the conventional 4-tap filter processing, a 3-tap compensation filter was applied to the central portion of the image shown in FIG. 12A. As a result, as shown in FIG. 12B, the magnitude of the noise is reduced only in the region shifted by 0.5 pixel, and the noise becomes mottled as a whole; comparing the 0.5-pixel-shifted region with the other regions, noise unevenness occurs.
  • the interpolation processing of the first embodiment was performed on the central portion of the image shown in FIG. 13A, which is the same as the image shown in FIG. 12A. As a result, as shown in FIG. 13B, it can be seen that the magnitude of noise can be made uniform in the region shifted by 0.5 pixel and in the other regions in the central portion of the image.
  • In the above description, the interpolation is performed using a 4-tap filter, but the method is not limited to this. It is also preferable to interpolate using a filter with a larger number of taps. For example, it is also preferable to use a 6-tap filter that uses a total of six gradation values: the gradation value f(0) of the target pixel and the gradation values f(−2), f(−1), f(1), f(2), and f(3) of the five peripheral pixels.
  • In the above description, the x direction is described, but interpolation in the y direction is performed in the same manner. Further, the values interpolated in the x direction may then be interpolated in the y direction.
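Such separable application of the 1-D filter, first along x and then along y, can be sketched as below. The `taps(x)` callback and the edge padding are assumptions for illustration, not the patent's boundary handling:

```python
import numpy as np

def shift_rows(img, x, taps):
    """Apply a 4-tap filter with sub-pixel shift x along the last axis.
    `taps(x)` returns the weights (a, b, c, d) for pixels (-1, 0, 1, 2)."""
    a, b, c, d = taps(x)
    p = np.pad(img, ((0, 0), (1, 2)), mode="edge")   # edge padding (an assumption)
    return a * p[:, :-3] + b * p[:, 1:-2] + c * p[:, 2:-1] + d * p[:, 3:]

def shift_image(img, dx, dy, taps):
    """Shift along x, then apply the same 1-D filter along y to the x-interpolated values."""
    out = shift_rows(img, dx, taps)
    return shift_rows(out.T, dy, taps).T

img = np.arange(12.0).reshape(3, 4)
identity = lambda x: (0.0, 1.0, 0.0, 0.0)   # zero-shift taps: only b = 1
print(np.array_equal(shift_image(img, 0.0, 0.0, identity), img))   # True
```

With zero-shift taps the cascade is the identity, confirming that the row pass and the column pass compose without altering the image when no shift is requested.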
  • The interpolated reference image is output to the comparison processing unit 72, and in this state the frame image serving as the inspection image is compared with the reference image. Since the reference image is slightly blurred by the interpolation processing of the sub-pixel interpolation processing unit 60, it is preferable to apply to the frame image a smoothing process that yields the same degree of smoothing as the processing of the sub-pixel interpolation processing unit 60, so as to bring the two images to a comparable state. In such a case, it is also preferable that the smoothing processing unit 70 reads the unsmoothed frame image from the storage device 52 and performs a weak smoothing process that slightly smooths the pattern edges of the graphic patterns in the frame image. As the filter for this smoothing processing, for example, a 3 × 3 Gaussian filter is used.
  • the comparison processing unit 72 compares the interpolated reference image (first image) with the frame image (second image) corresponding to the reference image. Specifically, the image to be inspected and the reference image are compared for each pixel. Using a predetermined determination threshold value, the two are compared for each pixel according to a predetermined determination condition, and the presence or absence of a defect such as a shape defect is determined. For example, if the difference in gradation value for each pixel is larger than the determination threshold value Th, it is determined as a defect candidate. Then, the comparison result is output. The comparison result may be output to the storage device 109, the monitor 117, or the memory 118, or from the printer 119.
  • the comparison processing unit 72 (comparison unit) generates contour lines of graphic patterns in the image from the image to be inspected and the reference image, respectively. Then, the deviations between the contour lines of the matching graphic patterns may be compared. For example, if the deviation between the contour lines is larger than the determination threshold value Th', it is determined as a defect candidate. Then, the comparison result is output.
  • the comparison result may be output to the storage device 74, output to the storage device 109, the monitor 117, or the memory 118, or output from the printer 119.
  • As described above, according to the first embodiment, the image can be corrected so that no shift-amount-dependent fluctuation of the noise level occurs; therefore, highly accurate pattern inspection can be performed. Further, the 3-tap filter processing (corresponding to a total of 6 taps of processing) for compensating the noise-level fluctuation after the conventional 4-tap filter processing becomes unnecessary, so the load of the arithmetic processing can be reduced. Further, according to the first embodiment, as described with reference to FIG. 11, even when the shift amount differs for each position rather than being uniform over the entire image, the image can be corrected without position-dependent fluctuation of the noise level.
  • FIG. 14 is an example of an internal configuration diagram showing a configuration in the comparison circuit according to the second embodiment.
  • In the second embodiment, the interpolation processing according to the set shift amount is first performed on the reference image, and the sum of squared differences (SSD) is then calculated.
  • Specifically, using the reference image corresponding to the inspection image, the sub-pixel interpolation processing unit 60 interpolates the reference image while variably shifting it relative to the inspection image in sub-pixel units. The content of the interpolation processing is the same as in the first embodiment. Because the interpolation processing is performed by the sub-pixel interpolation processing unit 60, the shot noise level does not change with the shift amount x.
  • the optimization processing unit 64 determines the optimum image shift amount for alignment based on the interpolated image for each shift amount variably shifted in sub-pixel units. Specifically, it is determined as follows.
  • Specifically, for each shift amount, the SSD value calculation unit 62 calculates the sum of squared differences (SSD) between each pixel value of the inspection image and the corresponding pixel value of the interpolated reference image. The optimization processing unit 64 obtains the image shift amount that minimizes the SSD. To this end, the optimization processing unit 64 variably sets the image shift amount and outputs the set image shift amount to the sub-pixel interpolation processing unit 60 each time; the sub-pixel interpolation processing unit 60 interpolates the reference image with the set image shift amount; and the SSD value calculation unit 62 then calculates the SSD and outputs the calculation result to the optimization processing unit 64. In this way, the optimization processing unit 64 obtains the image shift amount that minimizes the SSD. The reference image interpolated with the image shift amount that minimizes the SSD, together with the inspection image, is output to the comparison processing unit 72.
  • As described above, in the second embodiment, the alignment with the inspection image is performed using a reference image subjected to interpolation processing in which no noise-level fluctuation dependent on the shift amount x occurs.
  • the comparison processing unit 72 compares the interpolated reference image (first image) with the frame image (second image) corresponding to the reference image.
  • the method of comparison is the same as that of the first embodiment.
  • Alternatively, both the reference image and the inspection image may be shifted so as to approach each other. In that case, the direction of shifting is opposite for the inspection image and the reference image. For a desired image shift amount x, the reference image may be shifted by, for example, +x/2 and the inspection image by, for example, −x/2, and interpolation processing according to each shift amount is then performed on the inspection image and the reference image.
  • Of course, the sum of the weighting coefficients a(x), b(x), c(x), and d(x) of the terms of the linear sum used for the interpolation processing of the inspection image and the reference image is set to 1, and the sum of the squares of a(x), b(x), c(x), and d(x) is set to a constant R that does not depend on the shift amount x.
  • In this case, the comparison processing may be performed between the inspection image and the reference image, each interpolated by half of the image shift amount that minimizes the sum of squared differences (SSD).
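The symmetric split can be sketched as below (a 1-D toy with `np.interp` as a linear stand-in for the interpolation filter): shifting the reference by +x/2 and the inspection image by −x/2 brings both onto the same intermediate grid:

```python
import numpy as np

def half_shift_pair(ref, insp, x):
    """Shift the reference by +x/2 and the inspection image by -x/2 so the two
    meet halfway (linear interpolation stands in for the 4-tap filter)."""
    pos = np.arange(len(ref), dtype=float)
    ref_half = np.interp(pos + x / 2.0, pos, ref)
    insp_half = np.interp(pos - x / 2.0, pos, insp)
    return ref_half, insp_half

pos = np.arange(256, dtype=float)
signal = np.sin(2 * np.pi * pos / 32.0)
ref, insp = signal, np.interp(pos + 0.5, pos, signal)   # insp is ref shifted by 0.5 px

r, i = half_shift_pair(ref, insp, 0.5)
before = ((ref[2:-2] - insp[2:-2]) ** 2).sum()
after = ((r[2:-2] - i[2:-2]) ** 2).sum()
print(after < before)   # True: the half-shifted pair is far better aligned
```

Because each image moves only half the total shift, the blur introduced by interpolation is also shared evenly between the two images rather than being borne by the reference alone.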
  • According to the second embodiment, since the sum of squared differences (SSD) is calculated using the images after the interpolation processing, the accuracy of the shift amount itself can be improved. Therefore, the images can be aligned with each other with the influence of noise reduced further than in the first embodiment, and highly accurate pattern inspection can thus be performed.
  • FIG. 15 is a diagram showing an example of the configuration of the image correction device according to the third embodiment.
  • In the image correction device 200 according to the third embodiment, storage devices 50, 51, 53, and 54 such as magnetic disk devices, a shift amount calculation unit 61, and a sub-pixel interpolation processing unit 60 are arranged. Each "unit", such as the shift amount calculation unit 61 and the sub-pixel interpolation processing unit 60, includes processing circuitry, and this processing circuitry includes an electric circuit, a computer, a processor, a circuit board, a quantum circuit, a semiconductor device, or the like. Common processing circuitry (the same processing circuitry) may be used for the units, or different processing circuitry (separate processing circuitries) may be used. The input data required by the shift amount calculation unit 61 and the sub-pixel interpolation processing unit 60, and the results of their calculations, are stored in a memory (not shown) each time.
  • the image to be corrected is input from the outside and stored in the storage device 50. Further, the above-mentioned filter table is stored in the storage device 51.
  • FIGS. 16A and 16B are diagrams showing an example of the shape of the image to be corrected in the third embodiment. As shown in FIG. 16A, the image may be distorted by aberrations of the optical system when the image is acquired. Since the degree of such distortion varies depending on the position, the distortion is not corrected even if the entire image is shifted uniformly. Therefore, in the third embodiment, the position-dependent distortion amount corresponding to the characteristics of the optical system, such as a lens, is measured in advance by experiment or simulation.
  • Regarding the distortion correction, if the distortion characteristics of the lens are known in advance, it is possible to know which pixel moves in which direction and by how much, and those values can therefore be used for the correction. As distortion characteristics of a lens, for example, barrel distortion and pincushion distortion are known. Alternatively, the correction amount (shift amount) may be determined for each pixel based on detailed measured values. Correlation data (shift amount data) defining the shift amount according to position is then created and stored in the storage device 53.
  • The shift amount calculation unit 61 refers to the correlation data stored in the storage device 53, calculates a shift amount in sub-pixel units for the partial image at each position of the image stored in the storage device 50, and thereby determines the shift amount for each position. Next, the sub-pixel interpolation processing unit 60 performs, on the partial image at each position of the image, interpolation processing according to the shift amount, using for each pixel the gradation value of the target pixel and the gradation values of the peripheral pixels of the target pixel.
  • At this time, interpolation is performed under the condition that the sum of the coefficients of the terms of the linear sum is 1 and that the sum of the squares of those coefficients is a constant R that does not depend on the shift amount x. As the interpolated value f(x), the linear sum of the gradation value f(0) of the target pixel and the gradation values f(−1), f(1), and f(2) of the peripheral pixels is calculated. As in the first embodiment, the actual values of the weighting coefficients a(x), b(x), c(x), and d(x) of the terms are obtained from the filter table in the storage device 51, for each position, according to the determined shift amount. Thereby, as shown in FIG. 16B, the distortion can be corrected.
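A minimal 1-D sketch of such position-dependent correction follows; the radial shift map is a toy stand-in for measured lens-distortion data, and `np.interp` stands in for the 4-tap filter of the embodiment:

```python
import numpy as np

def correct_distortion_1d(row, shift_of):
    """Resample one image row with a position-dependent shift amount.
    `shift_of(pos)` plays the role of the correlation (shift amount) data."""
    pos = np.arange(len(row), dtype=float)
    return np.interp(pos + shift_of(pos), pos, row)

pos = np.arange(64, dtype=float)
row = np.cos(2 * np.pi * pos / 16.0)

# Toy radial model: pixels near the edges are displaced more than the center.
shift_map = lambda p: 0.002 * (p - 31.5) * np.abs(p - 31.5) / 32.0

corrected = correct_distortion_1d(row, shift_map)
identity = correct_distortion_1d(row, lambda p: np.zeros_like(p))
print(np.array_equal(identity, row))   # True: a zero shift map leaves the row unchanged
```

In the device, the per-position shift would instead be looked up from the correlation data in the storage device 53 and applied in both x and y.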
  • The interpolated image is then output to and stored in the storage device 54.
  • According to the third embodiment, it is possible not only to align images but also to correct an image that itself has individual distortion.
  • the series of "-circuits” includes a processing circuit, and the processing circuit includes an electric circuit, a computer, a processor, a circuit board, a quantum circuit, a semiconductor device, and the like. Further, a common processing circuit (same processing circuit) may be used for each "-circuit". Alternatively, different processing circuits (separate processing circuits) may be used.
  • the program for executing the processor or the like may be recorded on a recording medium such as a magnetic disk device, a magnetic tape device, an FD, or a ROM (read-only memory).
  • the position circuit 107, the comparison circuit 108, the reference image creation circuit 112, and the like may be configured by at least one processing circuit described above.
  • the present invention is not limited to these specific examples.
  • For example, although the case where the reference image is shifted has been shown, the method is not limited to this; it can also be applied when the inspection image is shifted. Further, when the shift amount is one pixel or more, for example when shifting by 3 + 5/16 pixels, the image may be shifted by 3 pixels in pixel units and then by 5/16 pixel in sub-pixel units by the method described above.
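The split of a shift of one pixel or more into its integer and sub-pixel parts can be sketched as follows (the integer part would then be applied by a whole-pixel shift and the remainder by the 4-tap filter):

```python
import math

def split_shift(total):
    """Split a shift of one pixel or more into an integer pixel part and a
    sub-pixel remainder, e.g. 3 + 5/16 pixels -> (3, 0.3125)."""
    n = math.floor(total)
    return n, total - n

print(split_shift(3 + 5 / 16))   # (3, 0.3125)
```

Using `math.floor` keeps the sub-pixel remainder in the range 0 ≤ x < 1 even for negative total shifts, matching the interval on which the interpolation weights are defined.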
  • The present invention relates to an image correction device, a pattern inspection device, and an image correction method. For example, it can be used as a method of aligning an image, captured for inspection using an electron beam, of a graphic pattern formed on a substrate.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing Or Measuring Of Semiconductors Or The Like (AREA)
  • Image Processing (AREA)
PCT/JP2021/035451 2020-11-11 2021-09-27 Image correction device, pattern inspection device, and image correction method WO2022102266A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020188280A JP2022077420A (ja) 2020-11-11 2020-11-11 Image correction device, pattern inspection device, and image correction method
JP2020-188280 2020-11-11

Publications (1)

Publication Number Publication Date
WO2022102266A1 true WO2022102266A1 (ja) 2022-05-19

Family

ID=81601142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/035451 WO2022102266A1 (ja) 2020-11-11 2021-09-27 Image correction device, pattern inspection device, and image correction method

Country Status (3)

Country Link
JP (1) JP2022077420A (zh)
TW (1) TW202219499A (zh)
WO (1) WO2022102266A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000069280A (ja) * 1998-08-24 2000-03-03 Canon Inc 画像信号形成装置及び方法、画素補間装置及び方法並びに記憶媒体
JP2001043356A (ja) * 1999-07-27 2001-02-16 Konica Corp データ補間方法及び画像形成システム
JP2008300909A (ja) * 2007-05-29 2008-12-11 Konica Minolta Business Technologies Inc データ補間方法及び画像変倍方法画像処理装置
JP2008300974A (ja) * 2007-05-29 2008-12-11 Konica Minolta Business Technologies Inc データ補間方法及び画像変倍方法画像形成装置
JP2019039808A (ja) * 2017-08-25 2019-03-14 株式会社ニューフレアテクノロジー パターン検査装置及びパターン検査方法

Also Published As

Publication number Publication date
TW202219499A (zh) 2022-05-16
JP2022077420A (ja) 2022-05-23

Similar Documents

Publication Publication Date Title
KR101855928B1 (ko) Pattern inspection method and pattern inspection apparatus
JP7057220B2 (ja) Multi electron beam image acquisition apparatus and positioning method for multi electron beam optical system
JP6981811B2 (ja) Pattern inspection apparatus and pattern inspection method
TWI717761B (zh) Multi electron beam irradiation apparatus, multi electron beam irradiation method, and multi electron beam inspection apparatus
JP7352447B2 (ja) Pattern inspection apparatus and pattern inspection method
JP7241570B2 (ja) Multi electron beam inspection apparatus and multi electron beam inspection method
JP2019200052A (ja) Pattern inspection apparatus and pattern inspection method
KR102292850B1 (ko) Multi charged particle beam inspection apparatus and multi charged particle beam inspection method
US20200168430A1 Electron beam image acquisition apparatus and electron beam image acquisition method
JP2017162590A (ja) Pattern inspection apparatus and pattern inspection method
KR102371265B1 (ko) Multi electron beam irradiation apparatus
JP6966319B2 (ja) Multi-beam image acquisition apparatus and multi-beam image acquisition method
WO2021235076A1 (ja) Pattern inspection apparatus and pattern inspection method
WO2022024499A1 (ja) Pattern inspection apparatus and method for acquiring alignment amount between contour lines
TWI773329B (zh) Pattern inspection apparatus and pattern inspection method
WO2022102266A1 (ja) Image correction device, pattern inspection device, and image correction method
WO2021250997A1 (ja) Multi electron beam image acquisition apparatus and multi electron beam image acquisition method
CN117981038A (zh) Multi electron beam image acquisition apparatus and multi electron beam image acquisition method
US10984978B2 Multiple electron beam inspection apparatus and multiple electron beam inspection method
JP7326480B2 (ja) Pattern inspection apparatus and pattern inspection method
JP2021044461A (ja) Method and apparatus for detecting an alignment mark position
JP2020085837A (ja) Electron beam image acquisition apparatus and electron beam image acquisition method
WO2021205729A1 (ja) Multi electron beam inspection apparatus and multi electron beam inspection method
WO2021205728A1 (ja) Multi electron beam inspection apparatus and multi electron beam inspection method
JP2022077421A (ja) Electron beam inspection apparatus and electron beam inspection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21891514

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21891514

Country of ref document: EP

Kind code of ref document: A1