US20060215899A1 - Image correcting method - Google Patents

Image correcting method

Info

Publication number
US20060215899A1
US20060215899A1
Authority
US
Grant status
Application
Patent type
Prior art keywords
image
pattern
noise
inspection
random
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11360581
Other versions
US7706623B2 (en)
Inventor
Junji Oaki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Toshiba Memory Corp
Original Assignee
Advanced Mask Inspection Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G06T7/001 - Industrial image inspection using an image reference approach
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration, e.g. from bit-mapped to bit-mapped creating a similar image
    • G06T5/001 - Image restoration
    • G06T5/002 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30148 - Semiconductor; IC; Wafer

Abstract

In a reticle inspecting apparatus or the like, there is provided an image correcting method which is effective when the matrix formed from an image becomes rank-deficient because equal grayscale values continue when the image is handled as a matrix. In the image correcting method, a random noise image having a finer grayscale is superposed on the pattern image to make the matrix full-rank.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • [0001]
    This application is based upon and claims the benefit of priority from prior Japanese Patent Application No. 2005-085215 filed on Mar. 24, 2005 in Japan, the entire contents of which are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • [0002]
    1. Field of the Invention
  • [0003]
    The present invention relates to an image correcting method. The image correcting method can be used in, for example, a pattern inspection apparatus for inspecting the presence/absence of a defect in a micropattern image formed on a workpiece being tested, such as a reticle used in, for example, the manufacture of an LSI.
  • [0004]
    2. Description of the Related Art
  • [0005]
    In general, since manufacturing an LSI requires a great deal of cost, an improvement in yield is indispensable. One factor which decreases yield is a pattern defect of the reticle used when a micropattern image is exposed and transferred onto a semiconductor wafer by a lithography technique. In recent years, with the miniaturization of LSI pattern sizes, the minimum size of a defect to be detected has also shrunk. For this reason, higher precision is required of the pattern inspection apparatus which inspects a reticle for defects.
  • [0006]
    Methods of inspecting the presence/absence of a pattern defect are roughly classified into a method of comparing a die with a die (Die-to-Die comparison) and a method of comparing a die with a database (Die-to-Database comparison). The Die-to-Die comparison (DD comparison) is a method of comparing two dies on a reticle to detect a defect. The Die-to-Database comparison (DB comparison) is a method of comparing a die and a database generated from CAD data for LSI design to detect a defect.
  • [0007]
    With micropatterning on a reticle, defects must be detected that are small enough to be buried in pixel positioning errors between the images to be compared, in expansion, contraction, and distortion of an image, and in sensing noise. In both DD comparison and DB comparison, alignment and image correction at the sub-pixel level are therefore very important in the pre-stage before the inspection reference pattern image and the pattern image under test are compared and inspected.
  • [0008]
    Therefore, in the conventional pre-stage in which the two images, i.e., an inspection reference pattern image and a pattern image under test, are inspected by comparison, alignment in units of sub-pixels based on bicubic interpolation is performed first, followed sequentially by a correction of expansion and contraction of the image (see, for example, Japanese Patent Application Laid-Open No. 2000-241136), a distortion correction, a resizing correction, a noise averaging process, and the like. However, repeating these corrections accumulates error and is a main cause of image deterioration. Furthermore, setting appropriate values for the large number of parameters required by the respective corrections, and setting an appropriate order for the corrections, are disadvantageously difficult.
  • BRIEF SUMMARY OF THE INVENTION
  • [0009]
    There is an image correcting method, based on input/output model identification, that is effective because it integrates alignment and image correction, causes little image deterioration, and has a small number of setting parameters. For example, an inspection reference pattern image and a pattern image under test are used as input data and output data, respectively, to identify an input/output linear prediction model, and alignment in units of sub-pixels and image correction are realized simultaneously. In this case, a relational expression of matrices is formed from the image data, and simultaneous equations are solved to identify the model parameters. At this time, in DB comparison, equal grayscale values continue in the inspection reference pattern image data (which, unlike in DD comparison, is free from minute image sensor noise), the coefficient matrix of the simultaneous equations becomes rank-deficient, and it may be impossible to identify the model parameters.
  • [0010]
    The present invention has been made in consideration of the above circumstances, and has as its object to provide an image correcting method which is effective when the matrix formed from an image becomes rank-deficient due to continuous equal grayscale values when the image is handled as a matrix, in image correction in a pattern inspection apparatus such as a reticle inspecting apparatus.
  • [0011]
    According to an embodiment of the present invention, there is provided an image correcting method for generating a correction image from pattern images of two types, including: the random noise pattern image generating step of generating a random noise pattern image at least in regions having almost equal grayscale values in the pattern image; and the random noise superposed image generating step of superposing the random noise pattern image at least on the regions having the almost equal grayscale values, wherein the random noise pattern image has grayscale values which are finer than the grayscale values of the pattern image.
  • [0012]
    According to the embodiment of the present invention, there is provided an image correcting method for generating a correction image from an inspection reference pattern image and a pattern image under test, including: the random noise pattern image generating step of generating a random noise pattern image having grayscale values which are finer than the grayscale values of the inspection reference pattern image; and the random noise superposed image generating step of superposing the random noise pattern image on the inspection reference pattern image.
  • [0013]
    According to the embodiment of the present invention, there is provided an image correcting method for generating a correction image from an inspection reference pattern image and a pattern image under test, including: the uninspected region setting step of setting uninspected regions in the two pattern images; the minimum grayscale value setting step of setting the grayscale values of the uninspected regions in the two pattern images as minimum calibration values; the random noise pattern image generating step of generating two random noise pattern images having grayscale values which are finer than the grayscale values of the two pattern images; and the random noise superposed image generating step of superposing the two random noise pattern images on the minimum calibration grayscale values in the two pattern images and the set uninspected regions, respectively.
  • [0014]
    According to the embodiment of the present invention, there is provided an image correcting method for generating a correction image from an inspection reference pattern image and a pattern image under test, including: a random noise pattern image generating step of generating a random noise pattern image in at least a region having almost equal grayscale values in the inspection reference pattern image; a random noise superposed image generating step of superposing the random noise pattern image on at least the region having the almost equal grayscale values, the random noise pattern image having grayscale values which are finer than the grayscale values of the pattern images; a simultaneous equation generating step of generating simultaneous equations which describe an input-output relationship using, as an output, each pixel of the pattern image under test and using, as an input, a linear coupling of a pixel group around each corresponding pixel of the inspection reference pattern image on which the random noise is superposed; the simultaneous equation solving step of solving the simultaneous equations to estimate parameters of the prediction model; and the correction image generating step of generating a correction image by using the estimated parameters.
  • BRIEF DESCRIPTION OF THE FIGURES OF THE DRAWING
  • [0015]
    FIGS. 1A and 1B are schematic views of a two-dimensional linear prediction model used in a pattern image inspection method;
  • [0016]
    FIG. 2 is a diagram showing the configuration of a concrete example of a pattern inspection apparatus;
  • [0017]
    FIG. 3 is a diagram for explaining image acquisition by reticle scanning of a line sensor;
  • [0018]
    FIGS. 4A and 4B are diagrams showing superposition of a random noise image having fine grayscale values;
  • [0019]
    FIG. 5 is a diagram showing steps of an image correcting method;
  • [0020]
    FIGS. 6A, 6B, and 6C are diagrams showing setting of an uninspected region and filling of random noise pattern image data having a fine grayscale;
  • [0021]
    FIG. 7 is a flow chart of an image correcting method;
  • [0022]
    FIG. 8 is a diagram showing steps of an image correcting method using weighted decomposition of one image.
  • DETAILED DESCRIPTION OF THE INVENTION
  • [0023]
    A pattern inspection method according to an embodiment of the present invention will be described below with reference to the drawings.
  • [0000]
    (Outline of Pattern Inspection Method)
  • [0024]
    A pattern inspection method is performed by using a pattern inspection apparatus. The pattern inspection apparatus is operated by using an irradiating unit for irradiating light on a workpiece being tested and an image acquiring unit for detecting reflected light or transmitted light from the workpiece being tested to acquire a pattern image. A configuration of one concrete example of the pattern inspection apparatus is shown in FIG. 2. A reticle will be described as the workpiece being tested. However, as the workpiece being tested, any sample on which a pattern is formed may be used; a mask, a wafer, and the like may be used. The pattern inspection apparatus includes: a light source 10 for generating light; a stage 12 on which a reticle 2 is placed; a stage drive system for driving the stage (not shown); a transmission optics (a transmissive optical system) 14 designed to cause light from the light source 10 to be transmitted through the reticle 2 placed on the stage 12; a reflection optics (a reflective optical system) 16 for irradiating the light from the light source 10 on the reticle 2 placed on the stage 12 to make it possible to detect the reflected light; a transmitted light sensor 18 for detecting transmitted light obtained from the transmission optics 14; and a reflected light sensor 20 for detecting the reflected light from the reflection optics 16. The transmission optics 14 and the reflection optics 16 are constituted by, e.g., a half mirror and a convex lens, respectively. The light irradiating unit includes at least one of the light source 10, the transmission optics 14, and the reflection optics 16. The image acquiring unit includes at least one of the transmitted light sensor 18 and the reflected light sensor 20.
  • [0025]
    Detailed acquisition of the pattern image drawn on the reticle 2 is performed by scanning the reticle 2 with a line sensor as shown in FIG. 3. In this case, for descriptive convenience, a strip 4 obtained by cutting the reticle 2 into strips in the X-axis direction shown in FIG. 3 (the direction of one side of the reticle 2) is called one strip. A square image 5 obtained by finely cutting one strip in the Y-axis direction (the direction perpendicular to the X-axis direction) is called one sub-strip. One sub-strip, for example, is defined as 2048×2048 pixels. Inspection for the presence/absence of a defect is performed for every sub-strip. It is assumed that one pixel has 256 grayscale levels.
  • [0026]
    The pattern inspection method is performed by comparing pattern images with each other as shown in FIG. 2. As the comparison, comparison between a die and a die or comparison between a die and a database is known. In the die-to-die comparison (DD comparison) method, sensor data of two dies on the reticle 2, recorded by the transmitted light sensor 18 or the reflected light sensor 20 using at least one of transmitted light and reflected light, are compared with each other by a comparator 40. In this manner, a defect is detected. In the die-to-database comparison (DB comparison) method, sensor data of one die on the reticle 2, recorded by the transmitted light sensor 18 or the reflected light sensor 20 using at least one of transmitted light and reflected light, and design data 34 generated by a reference data generator circuit 32 based on CAD data 30 for designing an LSI are compared with each other by the comparator 40. In this manner, a defect is detected.
  • [0027]
    The pattern inspection method used in the embodiment is intended to break through the limit of the direct comparison method. In the pattern inspection method, as shown in FIG. 1B, during inspection of the relationship between an inspection reference pattern image and a pattern image under test (an under-test pattern image), the image under test is identified online by using a linear prediction model, e.g., a two-dimensional linear prediction model, to construct a prediction model which fits pixel positional errors, expansion/contraction noise, and sensing noise. A correction image is generated from the prediction model. The correction image and the pattern image under test are compared with each other. Based on the comparison result, a defect on the pattern image under test is detected.
  • [0000]
    (Setting of Two-Dimensional Linear Prediction Model (Simultaneous Equation Generating Step))
  • [0028]
    First, a method of setting a two-dimensional prediction model (a two-dimensional input/output linear prediction model) by regarding an inspection reference pattern image as two-dimensional input data and a pattern image under test as two-dimensional output data will be described below. In this case, a 5×5 two-dimensional linear prediction model using a 5×5-pixel region will be exemplified. The suffixes (corresponding to the positions of the 5×5 pixels) used in the model are shown in Table 1. In FIG. 1, the left images are the inspection reference patterns, and the right images are the images under test. The two-dimensional linear prediction model is a linear prediction model in which the input and output data are handled as two-dimensional data.
    TABLE 1
    0 1 2 3 4
    0 i − 2, j − 2 i − 2, j − 1 i − 2, j i − 2, j + 1 i − 2, j + 2
    1 i − 1, j − 2 i − 1, j − 1 i − 1, j i − 1, j + 1 i − 1, j + 2
    2 i, j − 2 i, j − 1 i, j i, j + 1 i, j + 2
    3 i + 1, j − 2 i + 1, j − 1 i + 1, j i + 1, j + 1 i + 1, j + 2
    4 i + 2, j − 2 i + 2, j − 1 i + 2, j i + 2, j + 1 i + 2, j + 2
  • [0029]
    The two-dimensional input data and the two-dimensional output data are defined as u(i,j) and y(i,j), respectively. The suffixes of the pixel of interest are represented by i and j, and the suffixes of the total of 25 pixels within the two rows and two columns surrounding that pixel are set as in Table 1. With respect to the pixel data of one pair of 5×5 regions, the relational expression shown in Equation (1) is set. The coefficients b00 to b44 of the input data u(i,j) in Equation (1) are the model parameters to be identified.
    [Equation 1]
    yk = y(i, j) = b00 u(i − 2, j − 2) + b01 u(i − 2, j − 1) + b02 u(i − 2, j) + b03 u(i − 2, j + 1) + b04 u(i − 2, j + 2)
     + b10 u(i − 1, j − 2) + b11 u(i − 1, j − 1) + b12 u(i − 1, j) + b13 u(i − 1, j + 1) + b14 u(i − 1, j + 2)
     + b20 u(i, j − 2) + b21 u(i, j − 1) + b22 u(i, j) + b23 u(i, j + 1) + b24 u(i, j + 2)
     + b30 u(i + 1, j − 2) + b31 u(i + 1, j − 1) + b32 u(i + 1, j) + b33 u(i + 1, j + 1) + b34 u(i + 1, j + 2)
     + b40 u(i + 2, j − 2) + b41 u(i + 2, j − 1) + b42 u(i + 2, j) + b43 u(i + 2, j + 1) + b44 u(i + 2, j + 2)
     + ε(i, j)   (1)
  • [0030]
    Equation (1) means that the data yk=y(i,j) of a certain pixel of the pattern image under test can be expressed by a linear coupling of the data of the 5×5 pixels surrounding the corresponding pixel of the inspection reference pattern image (see FIG. 1A). In this case, the statistical characteristics of the residual ε in Equation (1) are not known, and the parameter identification result obtained by the least-square method, to be described later, may have a bias. However, in the embodiment of the present invention, what matters is the fitting itself of the input/output data by Equation (1); the values of the parameters are not used directly. For this reason, the residual ε does not cause any trouble.
  • [0000]
    (Simultaneous Equation Solving Step (Identification of Model Parameter))
  • [0031]
    When Equation (1) is expressed by a vector, Equation (2) is obtained. In this equation, an unknown parameter α is given by α=[b00, b01, . . . , b44]T, and data vector xk is given by xk=[u(i−2, j−2), u(i−2, j−1), . . . , u(i+2, j+2)]T.
  • [0000]
    [Equation 2]
    xk Tα=yk   (2)
  • [0032]
    The coordinates i and j of the inspection reference pattern image and the pattern image under test are scanned to fetch the pixel data at those coordinates; when 25 sets of data are collected into simultaneous equations, the model parameters can be identified. In fact, from a statistical viewpoint, as shown in Equation (3), n (>25) sets of data are prepared, and the 25-dimensional simultaneous equations are solved based on the least-square method to identify α. In this case, A=[x1, x2, . . . , xn]T, y=[y1, y2, . . . , yn]T, xkTα=yk, and k=1, 2, . . . , n.
    [Equation 3]
    Aα = y, where A = [x1, . . . , xn]T and y = [y1, . . . , yn]T
    α = (ATA)−1ATy   (3)
  • [0033]
    For example, when each of the inspection reference pattern image and the pattern image under test consists of 512×512 pixels, the two pixels at each border of the images are excluded. For this reason, the number of equations is given by Equation (4), and 258064 data can be obtained. In this manner, a statistically sufficient number of equations can be secured.
  • [0000]
    [Equation 4 ]
    n=(512−4)×(512−4)=258064   (4)
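    As an illustration only, the identification described by Equations (1) to (4) could be sketched as follows in NumPy; the function and variable names are assumptions made here, not part of the patent, and np.linalg.lstsq is simply one convenient least-square solver.

```python
import numpy as np

def identify_5x5_model(ref_img, test_img):
    """Identify the 25 coefficients b00..b44 of Equation (1) by least squares.

    ref_img  : inspection reference pattern image (input u), 2-D float array
    test_img : pattern image under test (output y), same shape
    """
    # Each 5x5 neighborhood of the reference image becomes one data vector x_k;
    # the two-pixel border is excluded, as counted in Equation (4).
    win = np.lib.stride_tricks.sliding_window_view(ref_img, (5, 5))  # (H-4, W-4, 5, 5)
    A = win.reshape(-1, 25)                     # n x 25 coefficient matrix
    y = test_img[2:-2, 2:-2].reshape(-1)        # n outputs y_k
    # Least-square solution of A alpha = y (Equation (3)); lstsq also reports the
    # rank of A, which drops when equal grayscale values continue.
    alpha, _, rank, _ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, rank
```

    For the 512×512 images of Equation (4), A would have 258064 rows and 25 columns.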
    (Generation of Model Image)
  • [0034]
    The identified model parameter α and the input/output image data used in the identification are assigned to Equation (1), and a simulation operation that scans the pixel coordinates i and j is performed to generate a correction image. In the correction image, as a result of the fitting based on the least-square method, pixel positional errors smaller than one pixel, expansion and contraction, distortion noise, resizing differences, and sensing noise are reduced. In this case, as a matter of course, the data used in the simulation includes defective pixels. However, since the number of defective pixels is considerably smaller than the number of data, the defective pixels are not fitted by the least-square method and do not appear in the correction image. In addition, since the surrounding S/N ratio is improved, a defective pixel is advantageously emphasized.
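    Continuing the sketch above (again an assumption rather than the patent's implementation; copying the reference image into the two-pixel border is a choice made only to return an image of the original size), the simulation of Equation (1) with the identified α amounts to a 5×5 weighted sum over the reference image:

```python
import numpy as np

def generate_correction_image(ref_img, alpha):
    """Evaluate Equation (1) with the identified coefficients to build the correction image."""
    kernel = alpha.reshape(5, 5)
    win = np.lib.stride_tricks.sliding_window_view(ref_img, (5, 5))  # (H-4, W-4, 5, 5)
    corrected = np.einsum('ijkl,kl->ij', win, kernel)   # y(i, j) for every interior pixel
    out = ref_img.astype(float).copy()
    out[2:-2, 2:-2] = corrected        # border pixels are simply kept as-is
    return out
```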
  • [0000]
    (Random Noise Superposed Image Generating Step)
  • [0035]
    The above is an example in which simultaneous equations are established and solved by using a two-dimensional input/output linear prediction model while handling an image as a matrix. In general, however, when a pattern image includes regions having almost equal grayscale values, i.e., when equal grayscale values continue, the coefficient matrix of the simultaneous equations may become rank-deficient, making it impossible to identify the model parameters. For example, in DB comparison, equal grayscale values continue in the inspection reference pattern image data (which, unlike in DD comparison, is free from minute image sensor noise), the coefficient matrix of the simultaneous equations becomes rank-deficient, and it may be impossible to identify the model parameters.
  • [0036]
    As described above, when the coefficient matrix of the simultaneous equations is rank-deficient and the model parameters cannot be identified, a random noise pattern image is superposed on the regions having almost equal grayscale values so that a full-rank matrix can be obtained. The random noise pattern image has grayscale values which are finer than the grayscale values of the pattern images, and it is applied in the random noise superposed image generating step.
  • [0037]
    As a simple example, inspection reference pattern image data in DB inspection is shown in FIG. 4A. The maximum grayscale value of 250 continues in the left half of the image (the data is assumed to be calibrated, as 8-bit values, within the range of 10 to 250), and the minimum grayscale value of 10 continues in the right half. When this image is regarded as a matrix, it is apparently rank-deficient. Therefore, a two-dimensional binary random noise image is prepared and superposed as shown in FIG. 4B so that a full-rank matrix can be obtained. Even in DB inspection, the matrix operation can then be executed with the same precision as in DD inspection.
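    A minimal sketch of the superposition of FIG. 4B follows; the ±0.5 grayscale amplitude and the fixed seed are illustrative assumptions, the patent only requiring that the noise grayscale be finer than that of the pattern image and that the noise source be reproducible.

```python
import numpy as np

def superpose_binary_noise(img, amplitude=0.5, seed=0):
    """Superpose a reproducible two-dimensional binary random noise image whose
    grayscale step is finer than the integer grayscale of the pattern image."""
    rng = np.random.default_rng(seed)                                   # reproducible source
    noise = amplitude * (2 * rng.integers(0, 2, size=img.shape) - 1)    # values of +/- amplitude
    return img.astype(float) + noise

# Flat image of FIG. 4A: maximum grayscale 250 in the left half, minimum 10 in the right half.
flat = np.full((64, 64), 10.0)
flat[:, :32] = 250.0
noisy = superpose_binary_noise(flat)   # pixel values now vary, so the matrix no longer loses rank
```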
  • [0000]
    (Generation of Random Noise Image)
  • [0038]
    The random noise image may be an M-array obtained by two-dimensionally arranging M-sequences, i.e., pseudo random numbers which can easily be generated by a shift register, or an image obtained by separately binarizing a sensor image. In either case, it is checked that the rank of the resulting matrix is sufficient. Since reproducibility is desired for defect inspection of a reticle or the like, attention must be paid to using a reproducible noise source. The above procedures are summarized in FIG. 5.
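    One possible reproducible noise source is sketched below with a 16-bit Fibonacci linear feedback shift register; the tap set (a commonly used maximal-length choice), the seed, and the row-by-row tiling into a two-dimensional array are assumptions made here for illustration.

```python
import numpy as np

def m_sequence_bits(n_bits, taps=(16, 14, 13, 11), state=0xACE1):
    """Generate n_bits of an M-sequence with a 16-bit Fibonacci LFSR.

    The default taps correspond to x^16 + x^14 + x^13 + x^11 + 1; any non-zero
    16-bit seed reproduces the same maximal-length sequence, only shifted.
    """
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        bits[i] = state & 1                       # output the last stage
        fb = 0
        for t in taps:                            # XOR of the tapped stages
            fb ^= (state >> (16 - t)) & 1
        state = (state >> 1) | (fb << 15)         # shift and feed back
    return bits

def m_array(shape):
    """Tile the bit stream row by row into a two-dimensional binary noise image."""
    return m_sequence_bits(shape[0] * shape[1]).reshape(shape)
```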
  • [0000]
    (Image Correcting Method)
  • [0039]
    FIG. 5 shows the procedure of the image correction. A random noise image generating step (S3) generates a random noise image to be superposed on an inspection reference pattern image (S1) and a pattern image under test (S2); a random noise superposed image generating step (S4) superposes the random noise image on the pattern images, producing a (new) inspection reference pattern image (S5) and a (new) pattern image under test (S6). By using the (new) inspection reference pattern image (S5) and the (new) pattern image under test (S6), the two-dimensional linear prediction model is set. More specifically, simultaneous equations are generated in a simultaneous equation generating step (S7), and the simultaneous equations are solved in a simultaneous equation solving step (S8), so that the model parameters can be identified. A correction image is then generated in a correction image generating step (S9). A difference image between the correction image generated in this way and the pattern image under test is created, and the two image patterns are compared with each other, so that a defective portion of the image can be found easily.
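    Tying steps S1 to S9 together, a sketch of the whole correction could look like the following; it reuses the illustrative helpers defined above, and the difference threshold is an assumption, not a value taken from the patent.

```python
import numpy as np

def correct_and_compare(ref_img, test_img, threshold=20.0):
    """S1-S9 of FIG. 5: superpose noise, identify the model, generate the
    correction image, and return the difference image used to search for defects."""
    ref_n = superpose_binary_noise(ref_img, seed=1)      # S3, S4 -> new reference image (S5)
    test_n = superpose_binary_noise(test_img, seed=2)    # S3, S4 -> new image under test (S6)
    alpha, _ = identify_5x5_model(ref_n, test_n)         # S7, S8
    corrected = generate_correction_image(ref_n, alpha)  # S9
    diff = np.abs(corrected - test_n)                    # difference image
    defects = np.argwhere(diff > threshold)              # candidate defective pixels
    return corrected, diff, defects
```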
  • [0000]
    (Setting of Uninspected Region)
  • [0040]
    As another embodiment, application to a case in which an uninspected region is set will be described below. The uninspected region denotes a region which need not be inspected, i.e., the characters (the inverted characters of "A20") in FIG. 6A.
  • [0041]
    In order to make the image processing procedure the same as in the case in which there is no uninspected region, the minimum grayscale value (10 in the example of FIG. 4) is set in the uninspected regions of the inspection reference pattern image and the pattern image under test. In this case, an image as shown in FIG. 6B is obtained, and the matrix formed from this image data is rank-deficient. However, superposing the same kind of random noise image as described in the previous embodiment makes it possible to make the matrix full-rank. This is shown in FIG. 6C.
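    A sketch of FIGS. 6B and 6C under the same illustrative assumptions (the boolean mask convention and the calibration minimum of 10 follow the example of FIG. 4):

```python
import numpy as np

def fill_uninspected_region(img, mask, cal_min=10.0, amplitude=0.5, seed=3):
    """Set uninspected regions (mask == True) to the minimum calibration value (FIG. 6B)
    and superpose fine binary random noise so that the matrix becomes full-rank (FIG. 6C)."""
    rng = np.random.default_rng(seed)
    out = img.astype(float).copy()
    out[mask] = cal_min                                               # FIG. 6B
    noise = amplitude * (2 * rng.integers(0, 2, size=img.shape) - 1)
    return out + noise                                                # FIG. 6C
```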
  • [0000]
    (Weighted Decomposition of Image)
  • [0042]
    When the variation (expansion and contraction, distortion, or the like) within an image (for example, 512×512 pixels) is large, the image may not be sufficiently expressed by a single 5×5-order linear prediction model. Therefore, in order to expand the expressive power of the prediction model, the image is decomposed into a plurality of images. First, reference points are set at separated pixel positions in the image, and a 5×5-order linear prediction model is set at each reference point. The pixels of the image are then expressed by linear interpolation of as many prediction models as there are reference points. The reference points are preferably set at peripheral portions where the variation of the image differs greatly. The reference points are, for example, set at the four corners (points A, B, C, and D).
  • [0043]
    The 5×5-order linear prediction models are set at the corners of the image, respectively, and the pixels in the image are expressed by linear interpolation of the four prediction models. In FIG. 7, the inspection reference pattern image is decomposed into four images (a, b, c, and d) weighted toward the corners (points A, B, C, and D), and one pixel of the corresponding pattern image under test may be expressed by a linear coupling of the 5×5 pixels near the corresponding pixel P of each decomposed image. The pixel P is expressed as a function of the linear interpolation parameters t and w in the image, as in Equation (5).
  • [0000]
    [Equation 5 ]
    P=(1−t)(1−w)·a+t(1−w)·b+(1−t)w·c+tw·d   (5)
  • [0044]
    The number of terms on the right-hand side of Equation (5), i.e., the number of parameters to be identified, is 5×5×4=100. For this reason, 100-dimensional simultaneous equations may be solved by the same procedure as that of Equation (1). In fact, from a statistical viewpoint, as in Equation (3), the parameters to be identified are calculated based on the least-square method.
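    A sketch of how the 100-column coefficient matrix for Equation (5) could be assembled is given below; the identification of t with the horizontal direction and w with the vertical direction, and the column ordering (25 columns per corner A, B, C, D), are assumptions made for illustration.

```python
import numpy as np

def identify_weighted_model(ref_img, test_img):
    """Identify the 5x5x4 = 100 parameters of the corner-weighted model of Equation (5)."""
    H, W = ref_img.shape
    win = np.lib.stride_tricks.sliding_window_view(ref_img, (5, 5))   # (H-4, W-4, 5, 5)
    X = win.reshape(H - 4, W - 4, 25)
    # Linear interpolation parameters: t runs left to right, w runs top to bottom.
    t = np.linspace(0.0, 1.0, W - 4)[None, :, None]
    w = np.linspace(0.0, 1.0, H - 4)[:, None, None]
    # Weight the neighborhoods toward the four corners A, B, C, D -> 100 columns.
    A = np.concatenate([(1 - t) * (1 - w) * X,    # corner A
                        t * (1 - w) * X,          # corner B
                        (1 - t) * w * X,          # corner C
                        t * w * X],               # corner D
                       axis=2).reshape(-1, 100)
    y = test_img[2:-2, 2:-2].reshape(-1)
    alpha, *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha                                  # 25 coefficients for each corner model
```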
  • [0045]
    With the above procedures, advantages of sub-pixel alignment, expansion and contraction/distortion correction, and resizing correction can be obtained. An S/N ratio can be increased, and a defective portion of an image can be emphasized.
  • [0000]
    (Procedure of Pattern Inspection Method)
  • [0046]
    FIG. 8 shows the procedure of the pattern inspection method. The (new) inspection reference pattern image (S5) and the (new) pattern image under test (S6), on which the random noise images are superposed, are decomposed as shown in FIG. 7 to generate decomposed images (decomposed image generating step S10). Simultaneous equations are generated from the decomposed images as expressed by Equation (5) (simultaneous equation generating step S7). The generated simultaneous equations are solved (simultaneous equation solving step S8). By using the calculated parameters, a correction image is generated (model image generating step S9). As described above, the embodiment provides an effective image correcting method that integrates alignment and image correction, causes little image deterioration, and has a small number of setting parameters. A difference image between the correction image generated in this way and the pattern image under test is created, and the two image patterns are compared with each other, so that a defective portion of the image can be detected easily.
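    Finally, a sketch of the FIG. 8 procedure including the decomposition step S10 is given below; it reuses the illustrative helpers above, and the defect threshold is again an assumption.

```python
import numpy as np

def inspect_with_decomposition(ref_img, test_img, threshold=20.0):
    """FIG. 8: noise superposition, weighted decomposition, identification,
    correction image, and difference image for defect detection."""
    ref_n = superpose_binary_noise(ref_img, seed=1)       # new reference image (S5)
    test_n = superpose_binary_noise(test_img, seed=2)     # new image under test (S6)
    alpha = identify_weighted_model(ref_n, test_n)        # S10, S7, S8
    # S9: re-evaluate Equation (5) with the 100 identified coefficients.
    H, W = ref_n.shape
    X = np.lib.stride_tricks.sliding_window_view(ref_n, (5, 5)).reshape(H - 4, W - 4, 25)
    t = np.linspace(0.0, 1.0, W - 4)[None, :, None]
    w = np.linspace(0.0, 1.0, H - 4)[:, None, None]
    A = np.concatenate([(1 - t) * (1 - w) * X, t * (1 - w) * X,
                        (1 - t) * w * X, t * w * X], axis=2).reshape(-1, 100)
    corrected = (A @ alpha).reshape(H - 4, W - 4)
    diff = np.abs(corrected - test_n[2:-2, 2:-2])         # difference image
    return diff > threshold                               # candidate defect map
```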
  • [0047]
    As described above, according to the embodiment, there is provided, in a reticle inspecting apparatus or the like, an image correcting method which is effective when the matrix formed from an image becomes rank-deficient due to continuous equal grayscale values when the image is handled as a matrix.
  • [0048]
    Images are often handled as matrices. The present invention is, as a matter of course, not limited to the embodiments described above.

Claims (8)

  1. An image correcting method for generating a correction image from two types of pattern images, said method comprising:
    generating a random noise pattern image at least in regions having almost equal grayscale values in the pattern image;
    superposing the random noise pattern image at least on the regions having the almost equal grayscale values, and wherein
    the random noise pattern image has grayscale values which are finer than the grayscale values of the pattern image.
  2. An image correcting method for generating a correction image from an inspection reference pattern image and a pattern image under test, said method comprising:
    generating a random noise pattern image having grayscale values which are finer than the grayscale values of the inspection reference pattern image; and
    superposing the random noise pattern image on the inspection reference pattern image.
  3. An image correcting method for generating a correction image from an inspection reference pattern image and a pattern image under test, said method comprising:
    generating two random noise pattern images having grayscale values which are finer than the grayscale values of the two pattern images; and
    superposing the two random noise pattern images on the two pattern images, respectively.
  4. An image correcting method for generating a correction image from an inspection reference pattern image and a pattern image under test, said method comprising:
    setting uninspected regions in the two pattern images;
    setting the grayscale values of the uninspected regions in the two pattern images as calibration minimum values;
    generating two random noise pattern images having grayscale values which are finer than the grayscale values of the two pattern images; and
    superposing the two random noise pattern images on the minimum calibration grayscale values in the two pattern images and the set uninspected regions, respectively.
  5. An image correcting method for generating a correction image from an inspection reference pattern image and a pattern image under test, said method comprising:
    generating a random noise pattern image at least in a region having almost equal grayscale values in the inspection reference pattern image;
    superposing the random noise pattern image at least on the region having the almost equal grayscale values,
    the random noise pattern image having grayscale values which are finer than the grayscale values of the pattern images;
    generating simultaneous equations which describe an input-output relationship using, as an output, each pixel of the pattern image under test and using, as an input, a linear coupling of a pixel group around each corresponding pixel of the reference pattern image on which the random noise is superposed;
    solving the simultaneous equations to estimate parameters of the prediction model; and
    generating a correction image by using the estimated parameters.
  6. The image correcting method according to claim 5, wherein
    the linear prediction model is a two-dimensional prediction model using each pixel of the pattern image under test as two-dimensional output data and using a linear coupling of a pixel group around each pixel as two-dimensional input data.
  7. The image correcting method according to claim 5, wherein
    the parameters of the prediction model are estimated by using the least-square method.
  8. An image correcting method for generating a correction image from an inspection reference pattern image and a pattern image under test, said method comprising:
    generating a random noise pattern image at least in a region having almost equal grayscale values in the inspection reference pattern image;
    superposing the random noise pattern image at least on the region having the almost equal grayscale values,
    the random noise pattern image having grayscale values which are finer than the grayscale values of the pattern images;
    setting reference points at a plurality of separated positions in an inspection reference pattern image on which the random noise is superposed, giving a weight to the inspection reference pattern image on which the random noise is superposed with reference to the reference points, and generating decomposed images the number of which is equal to the number of reference points;
    generating simultaneous equations which describe an input-output relationship using each pixel of the pattern image under test on which the random noise is superposed as an output and using a linear coupling of a pixel group around each corresponding pixel of the decomposed images the number of which is equal to the number of reference points as an input;
    solving the simultaneous equations to estimate parameters of the prediction model; and
    generating a correction image by using the estimated parameters.
US11360581 2005-03-24 2006-02-24 Image correcting method Active 2029-02-27 US7706623B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2005085215A JP4174487B2 (en) 2005-03-24 2005-03-24 Image correction method
JP2005-085215 2005-03-24

Publications (2)

Publication Number Publication Date
US20060215899A1 (en) 2006-09-28
US7706623B2 (en) 2010-04-27

Family

ID=37035219

Family Applications (1)

Application Number Title Priority Date Filing Date
US11360581 Active 2029-02-27 US7706623B2 (en) 2005-03-24 2006-02-24 Image correcting method

Country Status (2)

Country Link
US (1) US7706623B2 (en)
JP (1) JP4174487B2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080050008A1 (en) * 2006-08-24 2008-02-28 Advanced Mask Inspection Technology Inc. Image correction method and apparatus for use in pattern inspection system
US20080050007A1 (en) * 2006-08-24 2008-02-28 Advanced Mask Inspection Technology Inc. Pattern inspection apparatus and method with enhanced test image correctability using frequency division scheme
US20090213226A1 (en) * 2008-02-11 2009-08-27 Ati Technologies Ulc Low-cost and pixel-accurate test method and apparatus for testing pixel generation circuits
US20140146169A1 (en) * 2011-07-11 2014-05-29 Luceo Method of acquiring several images of the same package with the aid of a single linear camera
US9965844B1 (en) * 2011-03-28 2018-05-08 Hermes Microvision Inc. Inspection method and system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5494330B2 (en) * 2010-07-26 2014-05-14 Fuji Xerox Co., Ltd. Image processing apparatus and program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040015464A1 (en) * 2002-03-25 2004-01-22 Lockheed Martin Corporation Method and computer program product for producing a pattern recognition training set
US20060018530A1 (en) * 2004-07-15 2006-01-26 Kabushiki Kaisha Toshiba Pattern inspecting method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5563702A (en) 1991-08-22 1996-10-08 Kla Instruments Corporation Automated photomask inspection apparatus and method
JPH08272971A (en) 1995-03-31 1996-10-18 Toyota Motor Corp Object recognizing method
JPH1011583A (en) 1996-06-27 1998-01-16 Sony Corp Class classification adaptation processor and learning device/method for class classification adaptation processing
JPH1096613A (en) 1997-08-04 1998-04-14 Hitachi Ltd Defect detection method and device thereof
JP2000105832A (en) 1998-09-29 2000-04-11 Toshiba Corp Device and method for pattern inspection and recording medium stored with pattern inspecting program
JP2000241136A (en) 1999-02-22 2000-09-08 Matsushita Electric Ind Co Ltd Method and device for inspecting pattern
JP4096281B2 (en) 1999-06-09 2008-06-04 ソニー株式会社 Image processing apparatus and image processing method, and medium
JP3817979B2 (en) 1999-07-13 2006-09-06 株式会社日立製作所 Template matching method
JP2004038713A (en) 2002-07-05 2004-02-05 Toshiba Corp Object identification device, object identification method, dictionary creating device, and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040015464A1 (en) * 2002-03-25 2004-01-22 Lockheed Martin Corporation Method and computer program product for producing a pattern recognition training set
US20060018530A1 (en) * 2004-07-15 2006-01-26 Kabushiki Kaisha Toshiba Pattern inspecting method

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080050008A1 (en) * 2006-08-24 2008-02-28 Advanced Mask Inspection Technology Inc. Image correction method and apparatus for use in pattern inspection system
US20080050007A1 (en) * 2006-08-24 2008-02-28 Advanced Mask Inspection Technology Inc. Pattern inspection apparatus and method with enhanced test image correctability using frequency division scheme
US7764825B2 (en) 2006-08-24 2010-07-27 Advanced Mask Inspection Technology Inc. Pattern inspection apparatus and method with enhanced test image correctability using frequency division scheme
US7796803B2 (en) 2006-08-24 2010-09-14 Advanced Mask Inspection Technology Inc. Image correction method and apparatus for use in pattern inspection system
US20090213226A1 (en) * 2008-02-11 2009-08-27 Ati Technologies Ulc Low-cost and pixel-accurate test method and apparatus for testing pixel generation circuits
US8749534B2 (en) * 2008-02-11 2014-06-10 Ati Technologies Ulc Low-cost and pixel-accurate test method and apparatus for testing pixel generation circuits
US9965844B1 (en) * 2011-03-28 2018-05-08 Hermes Microvision Inc. Inspection method and system
US20140146169A1 (en) * 2011-07-11 2014-05-29 Luceo Method of acquiring several images of the same package with the aid of a single linear camera
US9514527B2 (en) * 2011-07-11 2016-12-06 Bizerba Luceo Method and device for acquiring several images of the same package with the aid of a single linear camera

Also Published As

Publication number Publication date Type
US7706623B2 (en) 2010-04-27 grant
JP4174487B2 (en) 2008-10-29 grant
JP2006268396A (en) 2006-10-05 application

Similar Documents

Publication Publication Date Title
US6347150B1 (en) Method and system for inspecting a pattern
US5525808A (en) Alignment method and alignment apparatus with a statistic calculation using a plurality of weighted coordinate positions
US20060269120A1 (en) Design-based method for grouping systematic defects in lithography pattern writing system
US20060239535A1 (en) Pattern defect inspection method and apparatus
US6674889B1 (en) Pattern inspection method and pattern inspection apparatus
US6902855B2 (en) Qualifying patterns, patterning processes, or patterning apparatus in the fabrication of microlithographic patterns
US6268093B1 (en) Method for reticle inspection using aerial imaging
US20050238221A1 (en) Method of manufacturing photo mask, mask pattern shape evaluation apparatus, method of judging photo mask defect corrected portion, photo mask defect corrected portion judgment apparatus, and method of manufacturing a semiconductor device
US7345754B1 (en) Fourier filters and wafer inspection systems
US6361910B1 (en) Straight line defect detection
US20040105578A1 (en) Pattern inspection apparatus
US20020194576A1 (en) Method of evaluating the exposure property of data to wafer
US6999611B1 (en) Reticle defect detection using simulation
US20070064995A1 (en) Image density-adapted automatic mode switchable pattern correction scheme for workpiece inspection
US6411378B1 (en) Mask, structures, and method for calibration of patterned defect inspections
US20110044528A1 (en) Inspection system
US20040264760A1 (en) Defect inspecting method, defect inspecting apparatus and inspection machine
US8019144B2 (en) Pattern image correcting apparatus, pattern inspection apparatus, and pattern image correcting method
US6546125B1 (en) Photolithography monitoring using a golden image
US20020028013A1 (en) Size checking method and apparatus
US6477265B1 (en) System to position defect location on production wafers
US7046352B1 (en) Surface inspection system and method using summed light analysis of an inspection surface
US20080181484A1 (en) Advanced cell-to-cell inspection
US20070076195A1 (en) Defect inspection apparatus and defect inspection method
US6208748B1 (en) Monitoring focus of a lens imaging system based on astigmatism

Legal Events

Date Code Title Description
AS Assignment

Owner name: ADVANCED MASK INSPECTION TECHNOLOGY INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OAKI, JUNJI;REEL/FRAME:017616/0078

Effective date: 20051202

AS Assignment

Owner name: ADVANCED MASK INSPECTION TECHNOLOGY INC., JAPAN

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT ASSIGNEE'S ADDRESS PREVIOUSLY RECORDED AT REEL 017616 FRAME 0078;ASSIGNOR:OAKI, JUNJI;REEL/FRAME:017954/0741

Effective date: 20051202

AS Assignment

Owner name: ADVANCED MASK INSPECTION TECHNOLOGY INC., JAPAN

Free format text: CORPORATE ADDRESS CHANGE;ASSIGNOR:ADVANCED MASK INSPECTION TECHNOLOGY INC.;REEL/FRAME:019385/0760

Effective date: 20070324

AS Assignment

Owner name: NEC CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ADVANCED MASK INSPECTION TECHNOLOGY INC.;REEL/FRAME:025008/0164

Effective date: 20100915

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ADVANCED MASK INSPECTION TECHNOLOGY INC.;REEL/FRAME:025008/0164

Effective date: 20100915

FPAY Fee payment

Year of fee payment: 4

MAFP

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)

Year of fee payment: 8

AS Assignment

Owner name: TOSHIBA MEMORY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KABUSHIKI KAISHA TOSHIBA;REEL/FRAME:045574/0652

Effective date: 20180316