WO1991020054A1 - Patterned part inspection - Google Patents

Patterned part inspection

Info

Publication number
WO1991020054A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
correlation
pixels
pixel
Prior art date
Application number
PCT/US1991/004266
Other languages
French (fr)
Inventor
Benjamin Dawson
Eric Poullain
Original Assignee
Imaging Technology, Inc.
Priority date
Filing date
Publication date
Application filed by Imaging Technology, Inc. filed Critical Imaging Technology, Inc.
Publication of WO1991020054A1 publication Critical patent/WO1991020054A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32 Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30141 Printed circuit board [PCB]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30148 Semiconductor; IC; Wafer

Definitions

  • Patterned Part Inspection. Background of the Invention. This invention relates to a method and apparatus for automated visual inspection of patterned parts for defects.
  • a patterned part is a precisely formed object, such as an integrated circuit (IC), printed circuit, or printed material, with precisely structured surface detail (patterns) that can be visually inspected for defects.
  • IC integrated circuit
  • patterns precisely structured surface detail
  • Integrated circuits are used as the example in describing this invention, but the invention is applicable to automated visual inspection of other kinds of patterned parts.
  • 1. Structural techniques characterize and measure the part's components. For ICs, such component measures might include the location and shape of traces, bonding pads, and transistors. See, for example, Billiotte, United States Patent 4,881,269. 2. Knowledge based techniques use knowledge of the design of the part to verify the correctness of the sample part. In IC inspection, Yoda et al., "An Automatic Wafer Inspection System", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 10, No. 1, January, 1988, describe a method of generating a comparison image and design rules from the CAD (Computer Aided Design) files used to design an IC. The comparison image can then be compared with the sample image and the design rules used to reduce false positive defects.
  • CAD Computer Aided Design
  • 3. Comparison techniques compare sample parts to known good parts. Differences between the sample and good parts indicate possible defects, and these possible defects are typically further processed to reduce false positive defects. See, for example, Ehrat, United States Patent 4,139,779.
  • Structural and knowledge based techniques require extensive programming to set up and require a large amount of computation during the inspection. For example, in structural techniques, the location and specifications of all structures of interest must be recorded in the inspection apparatus. Even with automated techniques for set up, these methods are computationally expensive and inflexible. Comparison techniques, on the other hand, are simple to set up and hence quite flexible. The visual comparison apparatus can be taught the inspection task by providing it with images of known good parts, and indicating the size and range of differences that can be tolerated. Furthermore, the comparison technique can be embodied in fast and relatively inexpensive image processing hardware. Comparison techniques have the disadvantage that naturally occurring and acceptable shifts in part component sizes, positions, and reflectivity could appear as false positive defects. The construction of a reference image by averaging images of good parts is disclosed in Crane, United States patent 4,040,010 (Col. 1, lines 38-45) (Col. 2, lines 50-56).
  • Some known methods require the fiducial area to have special structural properties. For example, Ehrat, United States patent 4,131,879 (Col. 8, line 25, to Col. 10, line 32) describes the use of edges of letters or printed areas as fiducials. Wenta, United States patent 4,880,309 describes special fiducial marks that can be etched into an IC to help locate and position the IC.
  • Locating the fiducial location to integer pixel positions by correlation has been done for image registration; see, for example, Pratt, Digital Image Processing.
  • Ehrat 4,131,878 (Col. 2, lines 28-42)
  • the two images are brought into alignment by electronically moving one image. This is commonly done by reading pixel values from one image memory using an address offset that corresponds to the whole pixel difference between the two images. See, for example, Ehrat, United States patent 4,131,879 (Col. 13, lines 30-54).
  • the image must be positioned to within a fraction of a pixel. Because image memory is only addressed in integer increments, fractional pixel values are obtained by interpolation (see, for example, Ehrat, United States patent 4,131,879 (Col. 13, line 66 to Col. 14, line 21)). More elaborate image registration methods have been described, for example, Morishita, United States patent 4,644,582, but these methods are generally unnecessary when inspecting parts with unchanging dimensions, such as ICs.
  • Another image alignment method for detecting pattern defects is to make multiple, shifted (horizontally and vertically) copies of the reference image. Then the sample image is compared against these shifted references and if it does not match any of them, it is deemed defective.
  • the final image contains blobs of pixels that represent errors. These blobs are measured, for example, for size, peak value, or perimeter. The resulting measures are compared with previously specified limits to determine if the part is defective or not. See, for example Pratt, pages 514-533.
  • Typical inspection schemes use binary (two valued) images for inspection (see, for example, Linger, 4,477,926, and the paper by Yoda et al.).
  • Binary images require less computation than grey-level (multi-valued) images but are inaccurate in representing images. This inaccuracy shows up as errors in edge position, increased false positives, and reduced error detection sensitivity.
  • the present invention performs automated visual inspection on patterned parts at high speed and inexpensively.
  • This invention uses the comparison technique because of its speed and flexibility, but augments it with two processing steps that greatly reduce sensitivity to naturally occurring and acceptable shifts in part component sizes, positions, reflectivity, etc.
  • the invention features determining an amount of misalignment between a digitized sample image of a patterned part and a corresponding digitized reference image.
  • a correlation operation is performed on the images to generate a correlation surface indicative of an integral number of pixels by which the images are misaligned.
  • an analytic surface is fitted to the correlation surface to determine a fractional number of pixels by which the images are misaligned.
  • the analytic surface is an elliptic paraboloid.
  • the invention features a method for inspecting a patterned part.
  • a digitized sample image of the patterned part is formed.
  • a convolution operation is performed on the digitized sample image to effectively shift the digitized sample image by a fraction of a pixel relative to a corresponding digitized reference image.
  • the shifted digitized sample image is then compared with the reference image.
  • Preferred embodiments of the invention include the following features.
  • the convolution operation is a bicubic convolution. Adjacent pixels of the digitized sample image are stored at successive addresses in a memory, and references to the addresses are adjusted to effectively shift the digitized sample image by an integer number of pixels relative to a corresponding digitized reference image.
  • the pixel values are grey scale.
  • Fig. 1 is a block diagram of an inspection system.
  • Fig. 2 is a flow chart of a digitizer offset and gain adjustment procedure.
  • Fig. 3a illustrates a reference fiducial pattern (squares) being correlated with one portion of a sample image patch.
  • Fig. 3b illustrates the result of multiple correlations, where the height of the arrow indicates the degree of reference and sample pattern match.
  • Fig. 3c shows some examples of patterns that would make poor reference patterns (fiducials), as (from left to right) the pattern has only one edge, the pattern repeats, and the pattern is noisy.
  • Fig. 3d, 3e, and 3f are equations and expressions.
  • Fig. 4a illustrates correlation peaks when the reference and sample fiducials match at a point between exact (integer) pixel locations.
  • Fig. 4b illustrates an analytic surface fit to the correlation peaks in Fig. 4a, recovering correlation peak position and hence the match position.
  • Fig. 5a shows the local coordinate system used for least-squares fitting of the correlation data to the analytic surface.
  • Fig. 5b are the kernels used for computing the least-squares fitting coefficients.
  • Fig. 6a is a graph of the sinc function, sin(x)/x, for a limited range of x values.
  • Fig. 6b is a graph of a symmetric cubic polynomial function, approximating the sinc function.
  • Fig. 7 illustrates the reconstruction of a shifted pixel value using cubic convolution in one dimension.
  • the method and apparatus illustrated in Fig. 1 is intended to automatically visually inspect patterned parts. Such parts do not differ appreciably in their dimensions or reflectivity, except when there are defects.
  • the method and apparatus is designed to reliably detect light reflectivity (or transmission) differences that indicate defects.
  • a working embodiment of this invention uses the Series 150 modular image processing hardware available from Imaging Technology Inc. of Woburn, MA for the image processing functions.
  • virtual data paths needed for the method are shown in the figures. These virtual data paths correspond in some cases to the Series 150 data paths.
  • the Series 150 has additional data paths and capabilities that are not shown as they are not relevant to the disclosure of the method described herein.
  • a mechanical positioner 1 presents an IC (the "part") 2 to an optical system that forms an image in a camera 3 from light reflected or transmitted by the part from an adjustable light source 4.
  • the mechanical positioner positions the part to within a few pixels (digital picture elements) of a known location.
  • the optical system has a zoom lens 5 that is zoomed and focused to form the image of the part, and a contrast enhancing filter 6.
  • the optical system is normally adjusted by the operator only at the beginning of an inspection run to provide a focused and correctly sized image of the part to the camera 3.
  • An inspection run is a long sequence of inspections of copies of the same part, for example one type of IC.
  • the regulated light source 4 provides constant light intensity that does not change appreciably during the inspection run.
  • a method to be described adjusts the image acquisition electronics to best utilize the light reflected from or transmitted through the inspected part.
  • the camera 3 is a high-resolution solid-state camera that has low geometric distortion.
  • the analog image signal from the camera is digitized and quantized in digitizer 7 and the resulting pixels are stored in a pair of image memories 8.
  • For high speed operation, an image is acquired into one image memory 8 while the system processes a previously acquired image stored in the other memory. At the end of processing and acquisition, the roles of the two memories are reversed (by switch 9).
  • This "double buffer" technique effectively eliminates the delay incurred in image acquisition.
  • the digitizer and two image memories are part of a single VSI-150 (Variable Scan Interface) hardware module also available from Imaging Technology Inc. These components, as well as the rest of the components in the system, are controlled by a general purpose computer 100, called the control computer.
  • the control computer can set and read any of the system's control signals and can also read and write pixel values from any of the image memories.
  • the arrows 101 symbolize the ability of the control computer to access any system component.
  • the correct operating range for the digitizer 7 is adjusted by the control computer 100.
  • the Gain control 701 adjusts the amplification of the analog camera signal and the Offset control 702 adds a constant value to this signal.
  • the operating range of the digitizer is automatically adjusted so that the actual quantized intensity values represent nearly the full range of possible quantized intensity values.
  • the method for setting Gain and Offset is shown as a flow diagram in Fig. 2, and is executed as a program in the control computer.
  • the method starts with the acquisition (digitization) of an image (801) of an IC into image memory 8.
  • the control computer computes (802) the maximum (max) and minimum (min) pixel intensity values of this image.
  • a new image is acquired (806) using the new offset value and the maximum and minimum pixel values are again computed (807). If the minimum value is greater than the LI threshold and the maximum value is less than the UI threshold (808), then the gain is increased by 1 (809). If the minimum value is less than the LD threshold or the maximum value is greater than the UD threshold (810), then the gain is decreased by 1 (811). The loop repeats until the minimum value is between LI and LD and the maximum value is between UI and UD (812) or until some specified number of iterations have occurred. The increase and decrease thresholds are separated by a value that is at least twice the expected digitization noise value to make sure the process converges.
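  • The adjustment loop of Fig. 2 can be sketched as follows. This is a minimal Python sketch, not the control computer's actual program; the `acquire` callback, the threshold values LI/LD/UI/UD, and the starting gain are illustrative assumptions.

```python
def adjust_gain(acquire, LI=30, LD=10, UI=225, UD=245, max_iter=50):
    """Sketch of the Fig. 2 gain loop: nudge the digitizer gain until the
    image minimum falls between LD and LI and the maximum between UI and UD.
    acquire(gain) (hypothetical) returns the pixel values of a freshly
    digitized image at that gain setting."""
    gain = 0
    for _ in range(max_iter):
        pixels = acquire(gain)
        lo, hi = min(pixels), max(pixels)
        if LD <= lo <= LI and UI <= hi <= UD:
            return gain          # (812) converged
        if lo > LI and hi < UI:
            gain += 1            # (809) image range too narrow: raise gain
        elif lo < LD or hi > UD:
            gain -= 1            # (811) image clipped: lower gain
        else:
            break                # only one end out of range: accept as-is
    return gain
```

The separation between the increase and decrease thresholds (here 20 grey levels on each end) plays the role of the noise margin described above.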
  • the optics, camera, and digitizer are designed so that spurious spatial frequency components are not introduced into the sampled image. Spurious components (aliasing) can appear when the camera analog signal is not band-limited to below the Nyquist rate, as set by the spatial digitizing frequency.
  • Spurious components aliasing
  • a solid-state camera with RS-170 timing and a resolution of 780 (horizontal) by 488 (vertical) sensor locations is used, and the analog filter on the VSI-150 is set to a 3 dB roll-off at 4.2 MHz.
  • the digitizer samples only 512 by 480 pixels, so the spatial sampling is well below the Nyquist rate.
  • Another source of spurious spatial frequency components is if the camera's spatial frequency response is not smooth within the sampling bandwidth. This may happen when there are non-responsive areas in between the light-sensitive elements of the camera. If spurious spatial frequency components are introduced, they can decrease the accuracy of the alignment steps described below, and this in turn can increase the rate of incorrect defect identification (false positive defects) .
  • the analog camera data are quantized to eight bits.
  • the quantized data must not be obscured by electrical and sampling noise.
  • the image is digitized with enough spatial resolution so that the defects to be detected fully cover one or more pixels.
  • Cameras other than RS-170 cameras may be used to get larger image sizes. Larger image sizes may be used to increase the fineness of the inspection or to inspect larger ICs.
  • the light source, optics, camera, and digitizer must be stable enough so that they need only be adjusted at the beginning of a run. If any of these elements changes appreciably during the inspection process, the inspection may report more errors than are actually in the parts. If the sample images are acquired under different light or reflectance conditions than the reference image, there can be average offsets between the images. These offsets appear as constant differences when the images are subtracted, and hence as false positive defects.
  • This invention uses a controlled light source that removes this source of variability. If this is not sufficient, there are common and well known techniques to compensate for illumination intensity changes; see, for example, Pratt, Digital Image Processing, John Wiley & Sons (New York), 1978, pages 307-317, incorporated herein by reference.
  • Ehrat United States patent 4,139,779, col. 3, lines 13-48, incorporated herein by reference, uses a neighborhood average to remove this offset (shade correction), and Yoshida, United States patent 4,449,240, incorporated herein by reference, describes a tracking method to compensate for intensity changes due to aging of the light source.
  • the image area to be inspected must not occupy the entire image memory. Rather, a small border of inspected pixels is required to allow the image to be shifted for alignment and to allow for errors in the mechanical positioner.
  • There are two modes of operation of this invention: training mode and inspection mode.
  • In training mode, the invention is taught a reference image, fiducial locations, a variability image, erosion size, and defect limits.
  • a reference image is a digital image formed by averaging images of known good parts. A part to be inspected is then compared to this reference image to locate possible defects.
  • a reference image is started by acquiring a single master reference.
  • an image of a known good part is digitized and stored in one of the image memories 8.
  • This image is transferred on a data bus 15, through switches 14 and 31 and into a Reference Image Memory 30.
  • This master image is displayed on a CRT (television) monitor 32.
  • CRT television
  • (Any of the images in the various memories can be displayed on the monitor 32, but for simplicity, only the Reference Image Memory 30 is shown connected to the monitor 32.)
  • the Reference Image Memory is an FB-150 frame buffer (available from Imaging Technology Inc.). The operator then selects an area of the image to inspect and the location of a fiducial by moving outline rectangles displayed over the image of the master reference.
  • the fiducial is an image area that will be located in subsequent images and used to align subsequent images to the master reference image.
  • the selection of the inspection area and fiducial can also be automatic, based on information in a data base about the part to be inspected. A method is described below for automatic selection of the fiducial based on certain quality measures.
  • the fiducial image area is copied from the reference image memory 30 into the fiducial image memory 10 by the control computer 100.
  • Sample images to be aligned are acquired and stored in one of the image memories 8.
  • a small patch of the sample image is transferred to the alignment computation block 11 via data bus 15.
  • This image patch is taken from the input image area where the fiducial is expected to be located, and is slightly larger than the fiducial.
  • the alignment procedure has two major steps, described in more detail below. First, the image patch from the sample image is searched to find the location of the fiducial to within a fraction of a pixel. The positional difference between the location of the fiducials in the sample image and in the reference image indicates the amount of positional shifting required to bring the images into alignment (registration) . Second, the sample image is shifted to align (register) with the reference image by moving it by the distances computed in the first step.
  • the fiducial location computation 11 is illustrated by Fig. 3.
  • the patch (202) from the sample image is shown as an 11 by 11 array of dots.
  • the expected location of the fiducial in the sample images is at the center of this image patch (white dot 204), but positioning and part variability can cause the true position of the fiducial to be anywhere in the 7 by 7 area (206) shown as grey dots.
  • the black dots are border pixels, where a match cannot be found because the reference fiducial pattern would extend outside the sample image patch.
  • the location of the fiducial (in the Fig., each pixel of the fiducial is shown in a small square) in the sample image is found by correlating the patch of sample image containing the fiducial with the image of the reference fiducial stored in the fiducial image memory 10.
  • the reference fiducial is correlated at every location within the sample image patch where it can fit (i.e., within the grey dot area). For example, if the reference fiducial is 5 by 5 pixels, there are 49 possible correlation locations (grey dots).
  • Correlation consists of multiplying the reference fiducial values with the values in an area of the sample image patch.
  • a correlation at one location is represented by a 5 by 5 array of square "reference" pixels surrounding the sample pixels. This represents the reference image values being multiplied by the sample image values. These multiplied values are then summed, and a high value of the sum (i.e., the correlation value) indicates a close match between the reference fiducial pattern and the pattern of pixel intensities in a sample image area.
  • the correlation values at the various possible positions form an array of values, considered to be a surface, which peaks where the input and reference images are in registration, as shown in Fig. 3b.
  • the sample image patch must be large enough to contain the fiducial and additional borders of pixels equal to 1/2 the size of the fiducial, or the registration point will not be found.
  • the correlation method used is Fisher's correlation, or normalized grey-scale correlation. If a is the input image and b is the reference image, then the correlation at each reference image location x',y', r_ab(x',y'), is given by equation 1 of Fig. 3d, where the indices i and j range over the N reference image points and over the N sample image points offset by x',y' in the sample image patch. For simplicity in writing and reading this and the following equations, single summation signs are often used to represent double summations, as shown by the double subscripts, i,j, on the summations.
  • the fiducial image memory 10 and alignment computation 11 of Fig. 1 are subsumed in the IPA-150 (image processing accelerator) hardware.
  • the IPA-150 is a high-speed, floating-point image processor also available from Imaging Technology Inc. To increase the computation speed, the square root need not be taken. Instead, equation 1 is squared, so the reported value is really the square of r_ab. This means that negative correlation values are not reported; when the numerator of equation 1 is less than zero, zero is returned. Negative correlation values occur when the sample and reference images are negatives of each other. Since this is unlikely to occur in the small patches chosen, the loss of negative values causes no problems.
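  • For illustration, the squared normalized (Fisher) correlation of the reference fiducial against one offset of the sample patch can be sketched as below. The clamping of negative numerators to zero follows the description above, but the function name and list-of-rows image representation are mine, not the IPA-150 implementation.

```python
def ncc_squared(patch, fiducial, ox, oy):
    """Squared normalized grey-scale correlation of `fiducial` against the
    region of `patch` whose top-left corner is at offset (ox, oy).
    Images are lists of rows of grey values. Returns r_ab squared, with
    negative correlations clamped to 0 as described in the text."""
    h, w = len(fiducial), len(fiducial[0])
    n = h * w
    sa = sb = saa = sbb = sab = 0
    for j in range(h):
        for i in range(w):
            a = patch[oy + j][ox + i]   # sample pixel
            b = fiducial[j][i]          # reference pixel
            sa += a; sb += b
            saa += a * a; sbb += b * b
            sab += a * b
    num = n * sab - sa * sb
    if num <= 0:
        return 0.0                      # negative correlation: report zero
    den = (n * saa - sa * sa) * (n * sbb - sb * sb)
    if den == 0:
        return 0.0                      # a flat patch or fiducial has no peak
    return num * num / den              # r_ab squared, in [0, 1]
```

Evaluating this function at every grey-dot offset of Fig. 3a yields the correlation surface of Fig. 3b. Note the normalization makes the score invariant to linear intensity changes between sample and reference.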
  • the peak of the correlation values occurs where the input sample image patch and reference fiducial image best match. Some care must be taken in the choice of fiducials to prevent poor or multiple correlation peaks.
  • Fig. 3c shows some examples of poor fiducial choices.
  • a fiducial with a single edge or with image components in only one or a few directions will generate a "ridge" of correlation peaks that cannot be used for registration. If the fiducial consists of a repeating pattern then there will be peaks at each pattern position and no way to determine which is the correct peak for registration of the images. A low contrast or noisy image will not generate a clearly defined peak.
  • Fiducials from areas that have a significant amount of part-to-part variation will also not work, as the correlation peaks may vary randomly in height and position.
  • defects in the sample image may occur in the area used for the fiducial. In this case, the correlation peak will be significantly reduced. This condition is detected and causes the part being inspected to be classed as defective.
  • a small part of the semiconductor structure works well as a fiducial.
  • the system can make certain measurements on the correlation surface and use them as a quality score. These measures are taken on a correlation surface formed by correlating the reference fiducial over a small patch of the reference image, equivalent in size to the patch to be searched in the sample image. These measures include: 1. Sharpness of the correlation peak: the height of the highest peak compared with the next highest correlation value. Fiducial quality is a function of correlation peak sharpness.
  • 2. Peak singularity: is there only one major peak in the correlation image? If there is more than one peak, the fiducial is rejected. 3. Symmetry and smoothness of the peak: the peak ideally falls off equally in all directions. This measure rejects fiducials with "ridges" of correlation and gives some confidence that the next step of alignment (analytic surface fitting) will work well.
  • the quality score can be used to automatically find fiducial areas. To do this, a regular array of fiducial areas are chosen from the master reference image and a quality score is computed for each. The area with the best quality score is chosen as the fiducial for this part.
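  • A hedged sketch of measure 1 and the automatic selection it supports follows; the scoring here uses peak sharpness only, and the function names and surface representation are illustrative, not the patent's.

```python
def peak_sharpness(surface):
    """Measure 1: height of the highest correlation value divided by the
    next-highest value on the surface. A flat or ridged surface scores
    near 1; a sharp, isolated peak scores high."""
    vals = sorted(v for row in surface for v in row)
    top, runner_up = vals[-1], vals[-2]
    return top / runner_up if runner_up > 0 else float("inf")

def best_fiducial(candidates, score=peak_sharpness):
    """Pick the candidate fiducial area whose correlation surface scores
    best. `candidates` maps an area id to its correlation surface."""
    return max(candidates, key=lambda k: score(candidates[k]))
```

A fuller implementation would also reject areas failing measures 2 and 3 (multiple peaks, asymmetric fall-off) before ranking the survivors.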
  • the location of the correlation peak is used to compute the whole pixel position shift between the sample and reference images. For example, if the center of the reference fiducial is assigned a value of 0,0 and the best match of the reference fiducial and sample image patch is when the reference fiducial is up one pixel and over two pixels to the left (see Fig. 3b) , then the input image must be moved down one pixel and over two pixels to the right to register with the reference image.
  • Correlation peaks can be modeled by an analytic surface such as an elliptic paraboloid.
  • The surface is given by equation 2 of Fig. 3d, where p(x,y) is the height of the surface. Assuming that the exy cross term is small, this equation defines a surface that is essentially an elliptic paraboloid. Sections through the surface normal to the p(x,y) axis appear as ellipses. Sections taken normal to the x or y axes appear as parabolas.
  • Equation (2) is differentiated with respect to x and y, to get equation 3 of Fig. 3d.
  • the peak occurs when these derivatives are 0. Setting these equations (3) to 0 we get two equations in two unknowns (equations 4 of Fig. 3d) which may be solved for the x and y position of the peak. Note that the value a, the offset term, disappears.
  • a least-squares fit of the correlation surface data is used to generate the coefficients for the analytic surface.
  • the minimum of this function occurs when its derivative is zero.
  • These equations are implemented by the simple kernels shown in Fig. 5b.
  • the coefficients for the least-squares fit of the correlation data to the analytic model are derived by convolving these kernels with the nine (3 by 3) correlation values surrounding the correlation peak, and scaling the results by the fractions shown to the left of each kernel. Note that the order of evaluation of the kernel values may change, depending upon the coordinate system used. If there are neighboring correlation peaks with the same values, the peak closer to the center of the sample image patch is chosen as the center of the convolutions.
  • the sub-pixel location of the sample fiducial is computed by substituting the coefficients derived from the data using equations (8), into equations (4). This gives an approximation to the true x,y peak that is accurate to a fraction of a pixel position, typically 1/6 of a pixel position or better.
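  • The fit-and-solve step can be sketched directly. Instead of reproducing the Fig. 5b kernels (not shown here), this sketch performs the equivalent least-squares fit of p = a + bx + cy + dx^2 + ey^2 + fxy over the 3 by 3 neighborhood around the integer peak and then solves equations (4); the variable names and arithmetic layout are mine.

```python
def subpixel_peak(corr):
    """Least-squares fit of p = a + b*x + c*y + d*x^2 + e*y^2 + f*x*y to
    the 3x3 correlation values around the integer peak (local coordinates
    x, y in {-1, 0, 1}), then solve dp/dx = dp/dy = 0 for the peak."""
    pts = [(x, y, corr[y + 1][x + 1]) for y in (-1, 0, 1) for x in (-1, 0, 1)]
    S   = sum(p for _, _, p in pts)
    Sx  = sum(x * p for x, _, p in pts)
    Sy  = sum(y * p for _, y, p in pts)
    Sxx = sum(x * x * p for x, _, p in pts)
    Syy = sum(y * y * p for _, y, p in pts)
    Sxy = sum(x * y * p for x, y, p in pts)
    # The 3x3 grid is orthogonal in these basis terms, so the normal
    # equations separate into the simple combinations below.
    b, c, f = Sx / 6.0, Sy / 6.0, Sxy / 4.0
    d = Sxx / 2.0 - S / 3.0
    e = Syy / 2.0 - S / 3.0
    det = 4.0 * d * e - f * f    # zero would mean no isolated peak
    x0 = (f * c - 2.0 * e * b) / det
    y0 = (f * b - 2.0 * d * c) / det
    return x0, y0
```

As in the text, the offset term a drops out of the peak position. The returned (x0, y0) are the fractional shifts sx, sy relative to the integer peak location.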
  • fractional x and y values are shown as sx and sy on lines 13 in Fig. 1.
  • sx and sy are used to compute a convolution kernel 19 that is then loaded into convolver 20.
  • This convolver is used to shift the input sample image, still stored in the image memory 8, by a fraction of a pixel as described below.
  • the second part of the alignment (registration) process is moving the sample (or reference) image into alignment with the reference (or sample) image. Moving the image integer pixel distances has been described above, and is a standard method of alignment. To get more precise registration, the fractional pixel distances computed by the above surface fitting are used to move the image by fractional pixel distances. This method is described below.
  • a properly sampled analog image where there are no aliased components, can in theory be reconstructed from the sample values by interpolation. Once the signal has been reconstructed from the sample values, it may be resampled. By using shifted (in space) sample points on resampling, a shifted sampled image is produced. Thus, images can be shifted sub-pixel distances by reconstructing and resampling them.
  • Reconstructing a signal from sample data can be done in many different ways.
  • the general idea is that some small number of sample points, g_s, are multiplied by weights derived from an interpolation function, r(x-n), and the sum is the value of the reconstructed signal, g_r, at some intermediate sampling point, x.
  • this operation is the convolution shown in the equation labeled 9 in Fig. 3f.
  • the sinc function (sin(x)/x) is the ideal interpolation function, but is impossible to use in practice as it extends to infinity.
  • a simple approximation of the sinc function is a cubic function, reflected about the y axis (see Fig. 6b).
  • the cubic is limited in extent, is simple to compute, and has no discontinuity at its end points.
  • According to Park and Schowengerdt ("Image Reconstruction by Parametric Cubic Convolution", CVGIP, Vol. 23, pp. 258-272, incorporated herein by reference), the equations for the cubic reconstruction function are equations 10 in Fig. 3f. (Cubic convolution has been described for applications other than inspection by comparison; see, for example, Yui, United States patent 4,578,812.)
  • Equations (10) represent a family of cubic curves that depend upon the parameter k.
  • k controls the high-frequency content of the cubic curve with more negative values accentuating high frequencies.
  • v(x) is the value of the cubic at x.
  • resampling using a cubic interpolation function works as follows (see Fig. 7) . Suppose that the image is to be shifted to the right by amount s, a fraction of a pixel distance.
  • the surrounding pixel values are multiplied by weighting values derived from the cubic interpolation equation (10) and summed to give the reconstructed intensity value of the original image function at the fractional pixel location s. Note that this reconstruction of the intensity at s is an approximation, due to noise, a non-ideal interpolation function, and round-off errors in the calculations. These errors are not significant in practice.
  • sample pixel values at any desired locations can be reconstructed from a set of pixels at fixed locations.
  • the vector of interpolation values, r, includes a fifth value of 0 on one end or the other.
  • a set of interpolation function values for two dimensional reconstruction and resampling are the Cartesian product of the interpolation values for shifting in x and the interpolation values for shifting in y.
  • the Cartesian product is R(i,j) = r_x(i)·r_y(j), where r_x represents the five interpolation values in the x direction and r_y the five interpolation values in the y direction. In both sets of interpolation values, one or more values will be 0.
  • the resulting array of interpolation values R(i,j) is a kernel for two-dimensional convolution. Convolving the image to be shifted with this kernel interpolates the image values and resamples them, such that the output from the convolution represents the reconstructed image intensity values at shifts sx and sy.
  • the image sampling points (pixel centers) are effectively "moved" distances sx and sy by convolving with cubic interpolation functions, or "bicubic convolution" for short.
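As a concrete sketch of the weight computation and a one-dimensional row shift (function names are illustrative; in the invention this work is done by the convolver hardware, not software):

```python
import numpy as np

def cubic_weight(x, k=-0.5):
    """Parametric cubic interpolation function; k controls high-frequency response."""
    ax = abs(x)
    if ax < 1.0:
        return (k + 2.0) * ax**3 - (k + 3.0) * ax**2 + 1.0
    if ax < 2.0:
        return k * ax**3 - 5.0 * k * ax**2 + 8.0 * k * ax - 4.0 * k
    return 0.0

def shift_kernel(s, k=-0.5):
    """Four weights for resampling at fractional offset s (0 <= s < 1):
    pixel i+m lies at distance m - s from the resampling point i + s."""
    return np.array([cubic_weight(m - s, k) for m in range(-1, 3)])

def shift_rows(image, sx, k=-0.5):
    """Resample every row so that output pixel i holds the reconstructed
    value at i + sx (image borders are clamped)."""
    out = np.zeros(image.shape, dtype=float)
    n = image.shape[1]
    for m, wm in zip(range(-1, 3), shift_kernel(sx, k)):
        idx = np.clip(np.arange(n) + m, 0, n - 1)
        out += image[:, idx] * wm
    return out

# The two-dimensional "bicubic" kernel of the text is the outer product of
# the x and y weight vectors, each padded with a 0 on one end, e.g.:
# R = np.outer(np.append(shift_kernel(sy), 0.0),
#              np.append(shift_kernel(sx), 0.0))
```

Because the weights sum to one and the cubic reproduces linear signals, shifting a ramp image by 0.25 pixel yields exactly interpolated interior values.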
  • the fractional alignment values sx and sy computed using the analytic curve fitting of the correlational data (described above) are used to compute a "shifting" kernel.
  • This computation 19 is done either in the alignment computation hardware (IPA-150) or on the control computer and the resulting kernel is loaded into the convolver 20.
  • Sub-pixel shifting of an image using bicubic convolution requires 16 multiplications, additions, and other operations at each image point. Shifting an image with many thousands of pixels can therefore take a significant amount of time.
  • the Series 150 has hardware that performs convolutions of up to eight by eight kernel elements at very high speeds (CM150-RTC8 also from Imaging Technology Inc.), thus making this alignment and the subsequent mean accumulation or inspection very fast.
  • the output of the convolver 20 is rounded to eight bits of value by using the look-up table (LUT) 21.
  • a LUT is a general "function box" that can map any input value to any output value.
  • the two steps described above are a general procedure for aligning (registering) two images, given that an unchanging fiducial can be found in both images.
  • This alignment method is used both in the construction of the reference image, given a master reference image, and in the subsequent inspection of images.
  • additional sample images are acquired into image memory 8, and then aligned to the master reference image, using the fiducial values stored in the fiducial image memory 10.
  • the whole pixel shifts of the acquired images are accomplished by changing the pan and scroll address values on the image memory, and the fractional shifts by sending the image from bus 15 through the convolver 20, where it is reconstructed and resampled. This shifted image is then averaged into the reference image memory.
  • Averaging is done by opening data switch 31 and closing data switch 33. Shifted images are added by the arithmetic-logic unit (ALU-150 also from Imaging Technology Inc.) 40 with the existing data 34 in the reference image memory 30, and the sum 41 is put back into the reference image memory. With N images summed together, the sum image is divided by N using the ALU 40. Data from the reference image memory 30 goes through data path 34 into the ALU, where it is divided by N, output on data path 41, and returned to the reference image memory 30 via data switch 33.
  • the ALU-150 performs division by shifting, so N is chosen to be a power of 2. This is not required and other implementations might allow any value for N.
  • a "variability" image is constructed in conjunction with the construction of the reference image.
  • the variability image specifies the acceptable range of part pattern intensity variation at each point (pixel) in the image.
  • the disclosed method uses the standard deviation of pixel intensities at each point in the image to construct the variability image, but other methods such as using some fraction of the pixel intensity range at each point could also be used.
  • the mean, M, is the average value accumulated in the reference image memory.
  • the sums of the squares of the pixel values z_k are accumulated in the variability image memory 53.
  • as aligned images come from the convolver 20 to be added into the reference image memory 30, they are also sent to the variance computation 50 via data switch 51.
  • initially, the variability image memory 53 is all zeros. As images come into the variance computation 50, the pixel values are squared and added to the values in the variability image memory 53 using data buses 52. This accumulates the sum of the pixel values squared for each image point.
  • after the N images have been accumulated, the mean values are transferred via data switch 54 to the variance computation, where the sum of squared pixel values held in variability image memory 53 is divided by N, the squared mean is subtracted, and the square root of the result is taken to give the standard deviation.
  • the variability image is then constructed by setting each pixel in this memory to some multiple of the standard deviation at that point, plus an offset. For example, to set a limit of variability of three standard deviations, the s.d. values would be multiplied by three.
  • a three standard deviation limit means that the variation in sample image intensity at a point must exceed more than 99% of the variation normally expected there before the point is declared a potential defect.
  • the computed and scaled standard deviation values are stored back into variability image memory 53 via data buses 52.
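The across-image statistics that this hardware accumulates incrementally could be sketched in software as follows (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def variability_image(images, multiple=3.0, offset=0.0):
    """Across-image standard deviation at each pixel, scaled into a
    per-pixel acceptance threshold: sqrt(E[z^2] - M^2) * multiple + offset."""
    stack = np.stack([img.astype(float) for img in images])
    n = len(images)
    mean = stack.sum(axis=0) / n            # the reference image (mean M)
    sum_sq = (stack ** 2).sum(axis=0)       # running sum of squared pixels
    variance = sum_sq / n - mean ** 2       # E[z^2] - M^2
    std = np.sqrt(np.maximum(variance, 0.0))
    return multiple * std + offset
```

For example, two training images whose pixel values at a point are 2 and 4 have mean 3 and standard deviation 1, so a three-s.d. limit puts the variability value at 3 for that point.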
  • the next step in the training mode is to select a shape and size of kernel for the grey-level erosion step.
  • the grey-level erosion will set the edges of groups of pixels to zero value. This removes small patches of noise, or "ghost" edges due to slight misalignment in the structures of reference and sample parts.
  • the erosion is performed by the CM150-RVF (also from Imaging Technology Inc.), and can have a kernel size of up to eight by eight, meaning that groups of pixels as large as eight by eight can be removed (zeroed) from the potential defect image in one pass through this hardware.
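The grey-level erosion performed by the CM150-RVF can be sketched as a local minimum filter with a flat structuring element (one common definition of grey erosion); on a zero background this zeroes the border pixels of each blob and entirely removes blobs smaller than the kernel. The helper name and border handling here are illustrative:

```python
import numpy as np

def grey_erode(image, size=3):
    """Grey-level erosion with a flat size x size structuring element:
    each output pixel is the minimum over its neighborhood."""
    h, w = image.shape
    r = size // 2
    padded = np.pad(image, r, mode="edge")
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + size, x:x + size].min()
    return out
```

A 3x3 blob on a zero background survives a 3x3 erosion only at its center pixel, which is the edge-zeroing behavior described above.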
  • limits on acceptable defects are selected. The limits selected depend upon the defect analysis 70 selected. For the example analysis described below, the maximum acceptable number of defect pixels and the value of the maximum acceptable defect are selected.
  • the multiple and offset of the standard deviation, the erosion kernel shape and size, and the limits for defects are usually selected based on comparing the results of a test run with human visual inspection.
  • an IC wafer containing many ICs is inspected, and the parameters of the automated inspection are adjusted to bring the reported defects into agreement with those found by careful human visual inspection of the same wafer.
  • a single set of parameters can be used for many different kinds of ICs manufactured by the same fabrication process.
  • the Positioner 1 brings the part to be inspected to within a few pixel positions of a known position under the camera 3 and with near zero angular rotation.
  • the camera image is digitized 7 and stored in one of the image memories 8. As described above, the storing of this sample image may proceed in parallel with the processing of the previously digitized sample image.
  • a small patch of the sample image is transferred via switch 9 and data bus 15 into the alignment computation 11.
  • the location of the fiducial in the sample image is determined to sub-pixel precision using the method of correlation and analytic curve fitting described above.
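The patent fits an elliptic paraboloid to the two-dimensional correlation surface; a simplified one-dimensional analogue (applied independently per axis) fits a parabola through the correlation maximum and its two neighbors and takes the vertex as the fractional peak offset. This sketch is illustrative, not the least-squares kernel method of Fig. 5b:

```python
def subpixel_peak_offset(c_minus, c_zero, c_plus):
    """Vertex of the parabola through (-1, c_minus), (0, c_zero), (1, c_plus):
    the fractional offset of the correlation peak from the integer maximum."""
    denom = c_minus - 2.0 * c_zero + c_plus
    if denom == 0.0:
        return 0.0  # flat correlation: no sub-pixel refinement possible
    return 0.5 * (c_minus - c_plus) / denom
```

For correlation samples drawn from a parabola peaked at 0.3 (e.g. c(x) = -(x - 0.3)^2), the recovered offset is exactly 0.3.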
  • the shifts required to bring the sample and reference images into alignment are sent (12) to the image memory 8 (for whole pixel shifts) and are used to compute (19) a bicubic convolution kernel.
  • This kernel is placed in the convolver 20 and is used to align the sample and reference image to a fraction of a pixel in position.
  • the whole pixel aligned sample image comes from image memory 8 through data switch 9 and data bus 14 and is shifted by convolver 20 and rounded to eight bits by LUT 21.
  • the aligned sample image goes through the data switch 14 and is subtracted on a point-by-point basis from the reference image stored in the reference image memory 30 using the ALU 40.
  • the subtracted image values are sent via data bus 41 to the absolute value computation 42 where negative values are turned into the equivalent positive values. That is, if z is less than 0, then z is replaced with -z (a positive number) .
  • both the subtraction and the absolute value are performed at high speed using the ALU-150 hardware.
  • the absolute value is taken because, in general, sample defects may be brighter or darker than the reference image.
  • the result of subtracting the sample from the reference image and taking the absolute value is called the difference image, and these differences could represent defects.
  • acceptable variations in parts, noise, and minor misalignments give rise to differences that are not considered to be defects (false positives) .
  • the variability image stored in the variability image memory 53 is subtracted from the difference image by the look-up table 50, using data paths 43 and 54.
  • the look-up table is programmed to subtract the variability image values from the difference image and if the result is less than 0, a 0 is output.
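The subtract-and-clip operation that the look-up table performs is equivalent to the following (a numpy sketch with an illustrative function name):

```python
import numpy as np

def potential_defects(sample, reference, variability):
    """Absolute point-by-point difference, reduced by the per-pixel
    variability threshold; results below zero clip to zero."""
    diff = np.abs(sample.astype(float) - reference.astype(float))
    return np.maximum(diff - variability, 0.0)
```

A difference of 6 against a variability limit of 2 leaves a potential-defect value of 4; a difference within the limit is suppressed to 0.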
  • the look-up table 50 is implemented using a CM150-LUT16 (also from Imaging Technology Inc.).
  • the erosion processor 60 sets small groups of pixels to zero and zeros pixels at the edges of larger groups (blobs) of pixels.
  • the erosion step is optional and its use and the size of the erosion is selected during the training procedure.
  • the resulting image is called the error image, and this image is transferred via data bus 61 to the Defect Analysis computation 70.
  • the defect analysis is designed to measure the values and structure of groups of pixels in the error image. These measures are compared against previously specified limits for the measure and if some or all of these measures are outside of the limits, the control computer 100 is notified via data path 71.
  • the defect analysis computation can take many forms. Some example measures are the size of the largest grouping of pixels (a "blob" of pixels), or the orientation of the longest group of pixels—perhaps indicating a scratch on the IC surface. For illustrative purposes, two simple measures are described herein. As pixels in the error image are sent to the defect analysis computation, the number of pixels greater than zero is counted and the maximum value of all of these pixels is remembered. These two values give a measure of the total amount of error "area" (the total number of pixels) and an idea of the magnitude of the error (the largest value of the error image). While quite simple, these measures are sufficient for many IC inspection tasks.
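The two simple measures just described (error area and peak error value) amount to the following sketch (the function name is illustrative):

```python
import numpy as np

def defect_measures(error_image):
    """Count of non-zero error pixels (error "area") and the maximum
    error value (error magnitude)."""
    nonzero = error_image > 0
    count = int(nonzero.sum())
    peak = float(error_image.max()) if count else 0.0
    return count, peak
```

Each measure would then be compared against its trained limit to accept or reject the part.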
  • by maintaining grey-scale (continuous valued) pixel values throughout the processing, the present invention allows a variety of defect measures to be made on the error image, such as the "volume" (x, y area times pixel value) of blobs or groups of pixels.
  • the present invention can be adjusted to provide more sophisticated and sensitive measures of defects as well as to provide better classifications of defects for process control.
  • the measures and an error signal are sent to the control computer.
  • the measures may be sent to the control computer where they are compared against the trained limits.
  • an error report 101 is issued if the part under inspection has produced defect measures that are greater than the previously specified limits, and the part is then rejected as being defective.
  • the measures generated by the defect analysis can be accumulated and summarized to provide some indication of the types and magnitude of errors. This information can be used to adjust the process that produces the parts. For example, in IC inspection the existence of larger blobs may indicate that dust and other particulate matter is contaminating the fabrication process.

Abstract

A sample or input image (8) of a part to be inspected is precisely aligned (10) with a reference image derived from known good parts, so that the two images may be compared on a point-to-point (pixel) basis. This alignment is performed by matching the two images using correlation and curve fitting, and then shifting one image into alignment with the other using cubic convolution (20). Differences between the two aligned images are potential defects in the sample image. The pixel values in the difference image are reduced with a position-varying threshold (53, 55) such that pixels in image areas with high intrinsic intensity variability are not incorrectly classified as defects. The remaining non-zero pixels are modified by morphological techniques (60), and then analyzed (70). The analysis may specify such measures as the size, peak value, mean, etc. of pixel groups (defects). The part is classified as good or bad by comparing these measures to allowable tolerances.

Description

Patterned Part Inspection Background of the Invention This invention relates to a method and apparatus for automated visual inspection of patterned parts for defects. A patterned part is a precisely formed object, such as an integrated circuit (IC) , printed circuit, or printed material, with precisely structured surface detail (patterns) that can be visually inspected for defects. Integrated circuits are used as the example in describing this invention, but the invention is applicable to automated visual inspection of other kinds of patterned parts.
Humans have difficulty visually inspecting patterned parts for small errors in manufacturing. We quickly see gross errors and malformations, but have trouble seeing smaller errors in highly patterned objects. Instead, we tend to perceive detailed patterns as a visual texture and do not easily comprehend individual elements making up the pattern. Humans are slow in making this sort of inspection, they tire and become unreliable, and they have trouble making quantitative visual measurements.
With billions of patterned parts manufactured each year and given the poor performance of humans on inspecting these parts, automated visual inspection by a machine is useful. There are three general techniques for automated visual inspection of patterned parts:
1. Structural techniques characterize and measure the part's components. For ICs, such component measures might include the location and shape of traces, bonding pads, and transistors. See, for example, Billiotte, United States Patent 4,881,269.
2. Knowledge based techniques use knowledge of the design of the part to verify the correctness of the sample part. In IC inspection, Yoda et al., "An Automatic Wafer Inspection System...", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 10, No. 1, January, 1988, describe a method of generating a comparison image and design rules from the CAD (Computer Aided Design) files used to design an IC. The comparison image can then be compared with the sample image and the design rules used to reduce false positive defects.
3. Comparison techniques compare sample parts to known good parts. Differences between the sample and good parts indicate possible defects, and these possible defects are typically further processed to reduce false positive defects. See for example, Ehrat, United States Patent 4,139,779.
Structural and knowledge based techniques require extensive programming to set up and require a large amount of computation during the inspection. For example, in structural techniques, the location and specifications of all structures of interest must be recorded in the inspection apparatus. Even with automated techniques for set up, these methods are computationally expensive and inflexible. Comparison techniques, on the other hand, are simple to set up and hence quite flexible. The visual comparison apparatus can be taught the inspection task by providing it with images of known good parts, and indicating the size and range of differences that can be tolerated. Furthermore, the comparison technique can be embodied in fast and relatively inexpensive image processing hardware. Comparison techniques have the disadvantage that naturally occurring and acceptable shifts in part component sizes, position, and reflectivity could appear as false positive defects. The construction of a reference image by averaging images of good parts is disclosed in Crane, United States patent 4,040,010 (Col. 1, lines 38-45) (Col. 2, lines 50- 56) .
To compare a sample with a reference image, the two images must be in precise alignment. Mechanical methods are too slow and imprecise for the high speed inspection of small parts such as ICs. Electronic alignment techniques use fiducial areas on the sample and reference images to determine their position difference and then to shift the images into alignment (registration) .
Some known methods require the fiducial area to have special structural properties. For example, Ehrat, United States patent 4,131,879 (Col. 8, line 25, to Col. 10, line 32) describes the use of edges of letters or printed areas as fiducials. Wenta, United States patent 4,880,309 describes special fiducial marks that can be etched into an IC to help locate and position the IC.
Locating the fiducial location to integer pixel positions by correlation has been done for image registration—see for example Pratt, Digital Image Processing, John Wiley & Sons (New York), 1978, pages 562-566. Other known methods use variations or simplifications of correlation. For example, Ehrat, 4,131,878 (Col. 2, lines 28-42), describes a difference operation that gives the direction of shift between two images. Based on the relative locations of the fiducials in the sample and reference images, the two images are brought into alignment by electronically moving one image. This is commonly done by reading pixel values from one image memory using an address offset that corresponds to the whole pixel difference between the two images. See for example Ehrat, United States patent 4,131,879 (Col. 13, lines 30-54).
To improve the accuracy of the match and reduce false positive defects, the image must be positioned to within a fraction of a pixel. Because image memory is only addressed in integer increments, fractional pixel values are obtained by interpolation (see, for example, Ehrat, United States patent 4,131,879, Col. 13, line 66 to Col. 14, line 21). More elaborate image registration methods have been described, for example, Morishita, United States patent 4,644,582, but these methods are generally unnecessary when inspecting parts with unchanging dimensions, such as ICs. Another image alignment method for detecting pattern defects is to make multiple, shifted (horizontally and vertically) copies of the reference image. Then the sample image is compared against these shifted references and if it does not match any of them, it is deemed defective. See, for example, Wanta, United States patent 4,803,734. The two aligned images are then subtracted and the absolute value is taken to form a difference image. The method of subtraction and absolute value is a common and well known technique. See, for example, Crane, United States patent 4,040,010 (Col. 2, lines 19-24), Ehrat, United States patent 4,139,779 (Col. 2, lines 9-20), Huber, United States patent 4,311,914 (Col. 1, lines 50-59), and Yoshida, United States patent 4,449,240 (Col. 2, lines 64-66). Various techniques have been proposed to reduce false positive defects that could appear in the difference image. For example, Schmitz, United States patent 3,623,015 (Col. 2, line 8 to Col. 3, line 17), describes a threshold method that dynamically adjusts to reflect the statistics of the incoming signal. Yoshida, 4,449,240, also describes a tracking system whereby two thresholds are updated as new images are acquired. These two thresholds are used for the entire image, rather than varying over the image. Crane, 4,040,010 (Col. 1, lines 38-56 and Col. 2, lines 19-46) describes a threshold based on the standard deviation of the ensemble of known good signals. Ehrat, United States patent 4,139,779 (Col. 4, lines 14-26) describes a threshold that varies over the image as a function of intrinsic part variation and is set based on the size of detectable errors. Huber, 4,311,914 (Col. 4, lines 15-33) describes a statistical weighting that acts as a threshold, varies over the image, and is proportional to the intrinsic or expected variability of the images. Schrader, 4,859,863 (Col. 1, lines 62-68) describes the use of a standard deviation threshold for each pixel in the image, set by making the threshold the three standard deviation value for each image point. DeGasperi, United States patent 4,433,385 (Col. 5, lines 15-53) describes using within-image variance to set acceptance thresholds. This is different from the across-image variances (standard deviation) described in some of the above work.
After thresholding, there still may be false positive defects due to shifts in part pattern locations or sizes. Erosion operations may be used to remove some of these false positives. See, for example, Linger, United States patent 4,477,926, and the paper by Yoda et al. Unlike many previous inventions, this invention uses grey scale erosion rather than binary operations. Another method of reducing false positives due to structural changes is to smooth the potential defect image using convolutional blurring followed by a threshold (see Ehrat, 4,139,779, Col. 4, line 56 to Col. 6, line 2).
The final image contains blobs of pixels that represent errors. These blobs are measured, for example, for size, peak value, or perimeter. The resulting measures are compared with previously specified limits to determine if the part is defective or not. See, for example Pratt, pages 514-533.
Typical inspection schemes use binary (two valued) images for inspection (see, for example, Linger, 4,477,926, and the paper by Yoda et al.). Binary images require less computation than grey-level (multi-valued) images but are inaccurate in representing images. This inaccuracy shows up as errors in edge position, increased false positives, and reduced error detection sensitivity.

Summary of the Invention
The present invention performs automated visual inspection on patterned parts at high speed and inexpensively.
This invention uses the comparison technique because of its speed and flexibility, but augments it with two processing steps that greatly reduce sensitivity to naturally occurring and acceptable shifts in part component sizes, positions, reflectivity, etc.
In general, in one aspect, the invention features determining an amount of misalignment between a digitized sample image of a patterned part and a corresponding digitized reference image. A correlation operation is performed on the images to generate a correlation surface indicative of an integral number of pixels by which the images are misaligned. Then an analytic surface is fitted to the correlation surface to determine a fractional number of pixels by which the images are misaligned. In preferred embodiments, the analytic surface is an elliptic paraboloid.
In general, in another aspect, the invention features a method for inspecting a patterned part. A digitized sample image of the patterned part is formed. A convolution operation is performed on the digitized sample image to effectively shift the digitized sample image by a fraction of a pixel relative to a corresponding digitized reference image. The shifted digitized sample image is then compared with the reference image. Preferred embodiments of the invention include the following features. The convolution operation is a bicubic convolution. Adjacent pixels of the digitized sample image are stored at successive addresses in a memory, and references to the addresses are adjusted to effectively shift the digitized sample image by an integer number of pixels relative to a corresponding digitized reference image. The pixel values are grey scale.
The invention enables rapid, effective image matching, alignment, and comparison for inspection. Other advantages and features will become apparent from the following description of the preferred embodiment, and from the claims.
Description of the Drawings

Fig. 1 is a block diagram of an inspection system. Fig. 2 is a flow chart of a digitizer offset and gain adjustment procedure. Fig. 3a illustrates a reference fiducial pattern (squares) being correlated with one portion of a sample image patch.
Fig. 3b illustrates the result of multiple correlations, where the height of the arrow indicates the degree of reference and sample pattern match.
Fig. 3c shows some examples of patterns that would make poor reference patterns (fiducials) , as (from left to right) the pattern has only one edge, the pattern repeats, and the pattern is noisy.
Fig. 3d, 3e, and 3f are equations and expressions.
Fig. 4a illustrates correlation peaks when the reference and sample fiducials match at a point between exact (integer) pixel locations. Fig. 4b illustrates an analytic surface fit to the correlation peaks in Fig. 4a, recovering correlation peak position and hence the match position.
Fig. 5a shows the local coordinate system used for least-squares fitting of the correlation data to the analytic surface.
Fig. 5b shows the kernels used for computing the least-squares fitting coefficients.
Fig. 6a is a graph of the sinc function, sin(x)/x, for a limited range of x values. Fig. 6b is a graph of a symmetric cubic polynomial function, approximating the sinc function.
Fig. 7 illustrates the reconstruction of a shifted pixel value using cubic convolution in one dimension.

Detailed Description of a Preferred Embodiment

The method and apparatus illustrated in Fig. 1 is intended to automatically visually inspect patterned parts. Such parts do not differ appreciably in their dimensions or reflectivity, except when there are defects. The method and apparatus is designed to reliably detect light reflectivity (or transmission) differences that indicate defects.
A working embodiment of this invention uses the Series 150 modular image processing hardware available from Imaging Technology Inc. of Woburn, MA for the image processing functions. To simplify the description of the method, virtual data paths needed for the method are shown in the figures. These virtual data paths correspond in some cases to the Series 150 data paths. The Series 150 has additional data paths and capabilities that are not shown as they are not relevant to the disclosure of the method described herein.
The inspection method is based on comparing the images derived from known good parts to sample parts, reducing the resulting difference values where necessary to account for acceptable variability between parts, and then comparing the resulting values to see if they exceed preset limits. If so, the part under inspection is rejected as defective. As shown in Fig. 1, a mechanical positioner 1 presents an IC (the "part") 2 to an optical system that forms an image in a camera 3 from light reflected or transmitted by the part from an adjustable light source 4. The mechanical positioner positions the part to within a few pixels (digital picture elements) of a known location.
The optical system has a zoom lens 5 that is zoomed and focused to form the image of the part, and a contrast enhancing filter 6. The optical system is normally adjusted by the operator only at the beginning of an inspection run to provide a focused and correctly sized image of the part to the camera 3. An inspection run is a long sequence of inspections of copies of the same part, for example one type of IC.
The regulated light source 4 provides constant light intensity that does not change appreciably during the inspection run. A method to be described adjusts the image acquisition electronics to best utilize the light reflected from or transmitted through the inspected part.
The camera 3 is a high-resolution solid-state camera that has low geometric distortion. The analog image signal from the camera is digitized and quantized in digitizer 7 and the resulting pixels are stored in a pair of image memories 8. For high speed operation, an image is acquired into one image memory while the system processes a previously acquired image stored in the other memory. At the end of processing and acquisition, the roles of the two memories are reversed (by switch 9). This "double buffer" technique effectively eliminates the delay incurred in image acquisition.
The digitizer and two image memories are part of a single VSI-150 (Variable Scan Interface) hardware module also available from Imaging Technology Inc. These components, as well as the rest of the components in the system, are controlled by a general purpose computer 100, called the control computer. The control computer can set and read any of the system's control signals and can also read and write pixel values from any of the image memories. The arrows 101 symbolize the ability of the control computer to access any system component. The correct operating range for the digitizer 7 is adjusted by the control computer 100. The Gain control 701 adjusts the amplification of the analog camera signal and the Offset control 702 adds a constant value to this signal. The operating range of the digitizer is automatically adjusted so that the actual quantized intensity values represent nearly the full range of possible quantized intensity values.
The method for setting Gain and Offset is shown as a flow diagram in Fig. 2, and is executed as a program in the control computer.
In Fig. 2, the method starts with the acquisition (digitization) of an image (801) of an IC into image memory 8. The control computer computes (802) the maximum (max) and minimum (min) pixel intensity values of this image.
These values are then compared with four thresholds: upper and lower "increase" thresholds (UI and LI) and upper and lower "decrease" thresholds (UD and LD) to adjust digitizer gain and offset. The offset is adjusted by computing (803) F = ((UD - max) - (min - LD))/2. If F is greater than some noise limit (804), then the offset is adjusted (805) up or down by F.
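One pass of the offset computation (803)-(805) can be sketched as follows; the threshold and noise-limit values here are illustrative stand-ins, not values given in the patent:

```python
def adjust_offset(offset, max_v, min_v, UD=240, LD=15, noise=4):
    """One iteration of offset adjustment: F centers the observed
    intensity range [min_v, max_v] between the LD and UD thresholds,
    and the offset moves only if F exceeds the digitization noise."""
    F = ((UD - max_v) - (min_v - LD)) / 2.0
    if abs(F) > noise:
        offset += F
    return offset
```

An image spanning intensities 20 to 250 sits too high against these thresholds, so the offset moves down by 7.5; an image already centered within the thresholds leaves the offset unchanged.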
A new image is acquired (806) using the new offset value and the maximum and minimum pixel values are again computed (807). If the minimum value is greater than the LI threshold and the maximum value is less than the UI threshold (808), then the gain is increased by 1 (809). If the minimum value is less than the LD threshold or the maximum value is greater than the UD threshold (810), then the gain is decreased by 1 (811). The loop repeats until the minimum value is between LI and LD and the maximum value is between UI and UD (812), or until some specified number of iterations have occurred. The increase and decrease thresholds are separated by a value that is at least twice the expected digitization noise value to make sure the process converges. If the light is too dim or intense, the above adjustment process will be unable to converge. Hence, this process will exit after a number of iterations (not indicated in the Figure) and warn the operator that the light source needs adjustment.

The optics, camera, and digitizer are designed so that spurious spatial frequency components are not introduced into the sampled image. Spurious components (aliasing) can appear when the camera analog signal is not band-limited to below the Nyquist rate, as set by the spatial digitizing frequency. To meet these requirements, a solid-state camera with RS-170 timing and a resolution of 780 (horizontal) by 488 (vertical) sensor locations is used, and the analog filter on the VSI-150 is set to a 3 dB roll-off at 4.2 MHz. For RS-170 cameras, the digitizer samples only 512 by 480 pixels, so the spatial sampling is well below the Nyquist rate. Another source of spurious spatial frequency components is a camera whose spatial frequency response is not smooth within the sampling bandwidth. This may happen when there are non-responsive areas in between the light-sensitive elements of the camera.
If spurious spatial frequency components are introduced, they can decrease the accuracy of the alignment steps described below, and this in turn can increase the rate of incorrect defect identification (false positive defects) .
The analog camera data are quantized to eight bits. The quantized data must not be obscured by electrical and sampling noise. The image is digitized with enough spatial resolution so that the defects to be detected fully cover one or more pixels. Cameras other than RS-170 cameras may be used to get larger image sizes. Larger image sizes may be used to increase the fineness of the inspection or to inspect larger ICs.
The light source, optics, camera, and digitizer must be stable enough so that they need only be adjusted at the beginning of a run. If any of these elements changes appreciably during the inspection process, the inspection may report more errors than are actually in the parts. If the sample images are acquired under different light or reflectance conditions than the reference image, there can be average offsets between the images. These offsets appear as constant differences when the images are subtracted, and hence as false positive defects.
This invention uses a controlled light source that removes this source of variability. If this is not sufficient, there are common and well known techniques to compensate for illumination intensity changes — see for example, Pratt, Digital Image Processing, John Wiley & Sons (New York), 1978, pages 307-317, incorporated herein by reference. Ehrat, United States patent 4,139,779, col. 3, lines 13-48, incorporated herein by reference, uses a neighborhood average to remove this offset (shade correction), and Yoshida, United States patent 4,449,240, incorporated herein by reference, describes a tracking method to compensate for intensity changes due to aging of the light source.

The image area to be inspected must not occupy the entire image memory. Rather, a small border of uninspected pixels is required to allow the image to be shifted for alignment and to allow for errors in the mechanical positioner.
There are two modes of operation of this invention: training mode and inspection mode. In training mode the invention is taught a reference image, fiducial locations, a variability image, erosion size, and defect limits.
A reference image is a digital image formed by averaging images of known good parts. A part to be inspected is then compared to this reference image to locate possible defects.
A reference image is started by acquiring a single master reference. Returning to Fig. 1, an image of a known good part is digitized and stored in one of the image memories 8. This image is transferred on a data bus 15, through switches 14 and 31 and into a Reference Image Memory 30. This master image is displayed on a CRT (television) monitor 32. (Any of the images in the various memories can be displayed on the monitor 32, but for simplicity, only the Reference Image Memory 30 is shown connected to the monitor 32.) The Reference Image Memory is an FB-150 frame buffer (available from Imaging Technology Inc.). The operator then selects an area of the image to inspect and the location of a fiducial by moving outline rectangles displayed over the image of the master reference. The fiducial is an image area that will be located in subsequent images and used to align subsequent images to the master reference image. The selection of the inspection area and fiducial can also be automatic, based on information in a data base about the part to be inspected. A method is described below for automatic selection of the fiducial based on certain quality measures. The fiducial image area is copied from the reference image memory 30 into the fiducial image memory 10 by the control computer 100.
Next a series of other known good images are acquired and averaged into the reference image memory 30. Because these additional images may be shifted in position with respect to the master image, these additional images must be aligned or brought into registration with the master reference image before averaging. The method of aligning or registering the two images is described below. This same method of alignment is also used during the inspection mode of operation, so the additional images with unknown positions are called the "sample" images.
Sample images to be aligned are acquired and stored in one of the image memories 8. A small patch of the sample image is transferred to the alignment computation block 11 via data bus 15. This image patch is taken from the input image area where the fiducial is expected to be located, and is slightly larger than the fiducial. The alignment procedure has two major steps, described in more detail below. First, the image patch from the sample image is searched to find the location of the fiducial to within a fraction of a pixel. The positional difference between the location of the fiducials in the sample image and in the reference image indicates the amount of positional shifting required to bring the images into alignment (registration) . Second, the sample image is shifted to align (register) with the reference image by moving it by the distances computed in the first step.
The fiducial location computation 11 is illustrated by Fig. 3. In Fig. 3a the patch (202) from the sample image is shown as an 11 by 11 array of dots. The expected location of the fiducial in the sample images is at the center of this image patch (white dot 204) , but positioning and part variability can cause the true position of the fiducial to be anywhere in the 7 by 7 area (206) shown as grey dots. The black dots are border pixels, where a match cannot be found because the reference fiducial pattern would extend outside the sample image patch. The location of the fiducial (in the Fig., each pixel of the fiducial is shown in a small square) in the sample image is found by correlating the patch of sample image containing the fiducial with the image of the reference fiducial stored in the fiducial image memory 10.
The reference fiducial is correlated at every location within the sample image patch where it can fit (i.e., within the grey dot area). For example, if the reference fiducial is 5 by 5 pixels there are 49 possible correlation locations (grey dots).
Correlation consists of multiplying the reference fiducial values with the values in an area of the sample image patch. In Fig. 3a, a correlation at one location is represented by a 5 by 5 array of square "reference" pixels surrounding the sample pixels. This represents the reference image values being multiplied by the sample image values. These multiplied values are then summed, and a high value of the sum (i.e., the correlation value) indicates a close match between the reference fiducial pattern and the pattern of pixel intensities in a sample image area.
The correlation values at the various possible positions form an array of values, considered to be a surface, which peaks where the input and reference images are in registration, as shown in Fig. 3b. The sample image patch must be large enough to contain the fiducial and additional borders of pixels equal to 1/2 the size of the fiducial, or the registration point will not be found.
The correlation method used is Fisher's correlation, or normalized grey-scale correlation. If a is the input image and b is the reference image, then the correlation at each reference image location x',y', r_ab(x',y'), is given by equation 1 of Fig. 3d, where the indices i and j range over the N reference image points and over the N sample image points offset by x',y' in the sample image patch. For simplicity in writing and reading this and the following equations, single summation signs are often used to represent double summations, as shown by the double subscripts, i,j, on the summations.
The fiducial image memory 10 and alignment computation 11 of Fig. 1 are subsumed in the IPA-150 (image processing accelerator) hardware. The IPA-150 is a high-speed, floating-point image processor also available from Imaging Technology Inc. To increase the computation speed the square root need not be taken. Instead, equation 1 is squared, so that the reported r_ab is really the square of r_ab. This means that negative correlation values are not reported, so when the numerator of equation 1 is less than zero, zero is returned. Negative correlation values occur when the sample and reference images are negatives of each other. Since this is unlikely to occur in the small patches chosen, the loss of negative values causes no problems.
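As a sketch, the normalized correlation of equation 1 — with the squared, zero-clamped value that the hardware implementation reports — might be computed in software as follows. The brute-force double loop stands in for the IPA-150 accelerator and is only illustrative:

```python
import numpy as np

def fisher_corr_sq(patch, fiducial):
    """Normalized grey-scale (Fisher) correlation of the reference
    fiducial at every offset where it fits inside the sample patch.
    As described in the text, the squared correlation is returned
    and negative correlations are clamped to zero."""
    fh, fw = fiducial.shape
    ph, pw = patch.shape
    b = fiducial - fiducial.mean()
    bb = (b * b).sum()
    out = np.zeros((ph - fh + 1, pw - fw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            a = patch[y:y + fh, x:x + fw]
            a = a - a.mean()
            num = (a * b).sum()
            den = (a * a).sum() * bb
            if num > 0 and den > 0:
                out[y, x] = num * num / den   # squared correlation value
    return out
```

Embedding the fiducial exactly in an otherwise blank patch produces a correlation surface whose peak value is 1 at the registration point.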
The peak of the correlation values (surface) occurs where the input sample image patch and reference fiducial image best match. Some care must be taken in the choice of fiducials to prevent poor or multiple correlation peaks. Fig. 3c shows some examples of poor fiducial choices. A fiducial with a single edge or with image components in only one or a few directions will generate a "ridge" of correlation peaks that cannot be used for registration. If the fiducial consists of a repeating pattern then there will be peaks at each pattern position and no way to determine which is the correct peak for registration of the images. A low contrast or noisy image will not generate a clearly defined peak. Fiducials from areas that have a significant amount of part-to-part variation will also not work, as the correlation peaks may vary randomly in height and position. During the construction of the reference image or during inspection, defects in the sample image may occur in the area used for the fiducial. In this case, the correlation peak will be significantly reduced. This condition is detected and causes the part being inspected to be classed as defective.
In most IC images, a small part of the semiconductor structure, but not metalized areas or bonding pads, works well as a fiducial. To improve the quality of the fiducials chosen, the system can make certain measurements on the correlation surface and use them as a quality score. These measures are taken on a correlation surface formed by correlating the reference fiducial over a small patch of the reference image, equivalent in size to the patch to be searched in the sample image. These measures include: 1. Sharpness of the correlation peak: The height of the highest peak compared with the next highest correlation value. Fiducial quality is a function of correlation peak sharpness.
2. Peak singularity: Is there only one major peak in the correlation image? If there is more than one peak, the fiducial is rejected. 3. Symmetry and smoothness of the peak. The peak ideally falls off equally in all directions. This measure rejects fiducials with "ridges" of correlation and gives some confidence that the next step of alignment (analytic surface fitting) will work well.
These measures are computed and combined to form a quality score that is reported to the operator. Alternatively, the quality score can be used to automatically find fiducial areas. To do this, a regular array of fiducial areas are chosen from the master reference image and a quality score is computed for each. The area with the best quality score is chosen as the fiducial for this part.
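The peak-sharpness and peak-singularity measures might be sketched as follows. The 3 by 3 exclusion neighborhood around the peak and the 0.8 runner-up ratio are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def fiducial_quality(corr):
    """Hypothetical quality measures on a correlation surface:
    sharpness (highest peak minus the next-highest value outside the
    peak's 3x3 neighborhood) and singularity (whether there is only
    one major peak). A sketch of the measures described in the text,
    not the patented scoring itself."""
    c = np.asarray(corr, dtype=float)
    py, px = np.unravel_index(c.argmax(), c.shape)
    peak = c[py, px]
    masked = c.copy()
    # Exclude the peak and its immediate neighbors from the search
    # for the runner-up value.
    masked[max(py - 1, 0):py + 2, max(px - 1, 0):px + 2] = -np.inf
    runner_up = masked.max()
    sharpness = peak - runner_up
    single = runner_up < 0.8 * peak   # assumed "one major peak" threshold
    return sharpness, single
```

A single well-isolated peak yields a high sharpness and passes the singularity check; two equal peaks (a repeating pattern) fail it.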
The location of the correlation peak is used to compute the whole pixel position shift between the sample and reference images. For example, if the center of the reference fiducial is assigned a value of 0,0 and the best match of the reference fiducial and sample image patch is when the reference fiducial is up one pixel and over two pixels to the left (see Fig. 3b) , then the input image must be moved down one pixel and over two pixels to the right to register with the reference image.
Whole pixel alignment is accomplished by changing the addressing of the Image Memory 8. The apparent starting location of this memory can be "panned" (moved in x) or
"scrolled" (moved in y) by whole pixel values and the output is then an image shifted by whatever amount is required. The required pixel position shifts x,y are transferred from the alignment computation 11 via 12 to the image memory 8. The actual transfer is through the control computer 100. If alignment (registration) is performed only to whole pixel positions, then the two images may not be precisely aligned. When the input sample image and the reference image are compared (by subtracting the two images), areas of quickly changing intensity, such as edges, will not exactly overlap in the two images and large differences will result. These differences are not defects, but are false positives (incorrect defects). To reduce the false positive level it is very important to precisely align the two images, to within a fraction of a pixel. This is called sub-pixel alignment.

For most sample images, the best registration location falls in between pixel centers. The correlation surface becomes flat or rounded as none of the correlation samples occur at the peak location. To approximate the location of the peak, an analytic surface is fitted to the surrounding and rounded peak values. In Fig. 3b the reference fiducial happens to fall on a whole pixel location in the sample image, producing a sharp peak in the correlation. In Fig. 4a the sample image has moved so the correlation peak would be midway between four pixels. The four correlation values surrounding the correlation peak are approximately equal and lower than the true peak. In Fig. 4b an analytic surface has been fitted to these surrounding correlation values, and the peak 302 of this surface approximates the location of the true correlation peak. Correlation peaks can be modeled by an analytic surface such as an elliptic paraboloid. With additional terms for offsets in x, y, height, and a cross axis term, the equation used for modeling is equation 2 of Fig. 3d, where p(x,y) is the height of the surface.
Assuming that exy is small, then this equation defines a surface that is essentially an elliptic paraboloid. Sections through the surface normal to the p(x,y) axis appear as ellipses. Sections taken normal to the x or y axes appear as parabolas.
This surface is analytically differentiated and set to 0 to find the peak location. Equation (2) is differentiated with respect to x and y, to get equation 3 of Fig. 3d. The peak occurs when these derivatives are 0. Setting these equations (3) to 0 we get two equations in two unknowns (equations 4 of Fig. 3d) which may be solved for the x and y position of the peak. Note that the value a, the offset term, disappears.
A least-squares fit of the correlation surface data is used to generate the coefficients for the analytic surface. In other words, minimize the squared difference between the correlation data points, r_ab(x,y), and the analytic surface data points p(x,y) as in expression 5 of Fig. 3d, where equation (2) has been substituted for p(x,y). In expression 5, i,j replace x,y as they represent a local coordinate system that will be chosen to simplify the equations. The minimum of this function occurs when its derivative is zero. (The analytic surface is suggested in Bookstein, "From Biostereometrics to the Comprehension of Form", NATO Conference on Applications of Human Biostereometrics, SPIE Vol. 166, 1978, and the least squares method for fitting the surface to the correlation data is based on discussion at pp. 44-53 and pp. 133-151 in
Lancaster and Salkauskas, Curve and Surface Fitting: An Introduction, Academic Press (London), 1986, both incorporated by reference). Differentiating (5) with respect to a through f, and setting the resulting expressions to zero gives the six equations labeled 6 in Fig. 3e, where the i,j indices in the (double) summations are not shown to make the equations simpler to write, and nine points are used for the surface fitting data. These data are indexed using the local coordinate system shown in Fig. 5a. Performing the summations in equations (6) for i and j with this coordinate system simplifies the six equations to the equations labeled 7 in Fig. 3e.
Solving these equations (7) gives the coefficients shown at 8 in Fig. 3e.
These equations are implemented by the simple kernels shown in Fig. 5b. The coefficients for the least-squares fit of the correlation data to the analytic model are derived by convolving these kernels with the nine (3 by 3) correlation values surrounding the correlation peak, and scaling the results by the fractions shown to the left of each kernel. Note that the order of evaluation of the kernel values may change, depending upon the coordinate system used. If there are neighboring correlation peaks with the same values, the peak closer to the center of the sample image patch is chosen as the center of the convolutions. The sub-pixel location of the sample fiducial is computed by substituting the coefficients derived from the data using equations (8) , into equations (4) . This gives an approximation to the true x,y peak that is accurate to a fraction of a pixel position, typically 1/6 of a pixel position or better.
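Equivalently, the closed-form least-squares coefficients (equations 8) and the peak solution (equations 4) can be evaluated directly from the 3 by 3 neighborhood of correlation values, which is what convolving with the kernels of Fig. 5b accomplishes. This sketch assumes a local coordinate system running from -1 to 1 in each direction, with the whole-pixel peak at the center:

```python
import numpy as np

def subpixel_peak(c):
    """Fit the elliptic-paraboloid model of equation (2) to the 3x3
    correlation values around the peak by least squares and return
    the fractional (x, y) offset of the analytic surface's peak.
    c is the 3x3 neighborhood, c[1][1] being the whole-pixel peak."""
    c = np.asarray(c, dtype=float)
    # Local grid coordinates: j varies along columns (x), i along rows (y).
    j, i = np.meshgrid([-1, 0, 1], [-1, 0, 1])
    # Closed-form least-squares coefficients for
    # p = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 over the 3x3 grid.
    b = (j * c).sum() / 6.0
    cc = (i * c).sum() / 6.0
    d = (j * j * c).sum() / 2.0 - c.sum() / 3.0
    e = (i * j * c).sum() / 4.0
    f = (i * i * c).sum() / 2.0 - c.sum() / 3.0
    # Peak of the surface: set the derivatives to zero and solve
    # the resulting 2x2 linear system (equations 4).
    det = 4.0 * d * f - e * e
    x = (e * cc - 2.0 * f * b) / det
    y = (e * b - 2.0 * d * cc) / det
    return x, y
```

Sampling an exact paraboloid with a known peak at (0.3, -0.2) recovers the peak location exactly, since the quadratic model fits such data with zero residual.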
These fractional x and y values are shown as sx and sy on lines 13 in Fig. 1. sx and sy are used to compute a convolution kernel 19 that is then loaded into convolver 20. This convolver is used to shift the input sample image, still stored in the image memory 8, by a fraction of a pixel as described below. The second part of the alignment (registration) process is moving the sample (or reference) image into alignment with the reference (or sample) image. Moving the image integer pixel distances has been described above, and is a standard method of alignment. To get more precise registration, the fractional pixel distances computed by the above surface fitting are used to move the image by fractional pixel distances. This method is described below. A properly sampled analog image, where there are no aliased components, can in theory be reconstructed from the sample values by interpolation. Once the signal has been reconstructed from the sample values, it may be resampled. By using shifted (in space) sample points on resampling, a shifted sampled image is produced. Thus, images can be shifted sub-pixel distances by reconstructing and resampling them.
Reconstructing a signal from sample data can be done in many different ways. The general idea is that some small number of sample points, g_s, are multiplied by weights derived from an interpolation function, r(x-n), and the sum is the value of the reconstructed signal, g_r, at some intermediate sampling point, x. In one dimension, this operation is the convolution shown in the equation labeled 9 in Fig. 3f. The sinc function (sin(x)/x) is the ideal interpolation function, but is impossible to use in practice as it extends to infinity. A simple approximation of the sinc function is a cubic function, reflected about the y axis (see Fig. 6b). The cubic is limited in extent, is simple to compute, and has no discontinuity at its end points. As described by Park and Schowengerdt ("Image Reconstruction by Parametric Cubic Convolution", CVGIP, Vol. 23, pp. 258-272, incorporated herein by reference), the equations for the cubic reconstruction function are equations 10 in Fig. 3f. (Cubic convolution has been described for applications other than inspection by comparison—see, for example, Yui, United States patent 4,578,812).
Equations (10) represent a family of cubic curves that depend upon the parameter k. k is the slope of v(x) at |x| = 1, and can range from 1 to -1. k controls the high-frequency content of the cubic curve, with more negative values accentuating high frequencies. This invention uses an optimal value of k = -0.5 as a default, but other values may be selected. The form of the cubic curve with k = -0.5 is shown in Fig. 6b. v(x) is the value of the cubic at x. In one dimension, resampling using a cubic interpolation function works as follows (see Fig. 7). Suppose that the image is to be shifted to the right by amount s, a fraction of a pixel distance. To reconstruct the image intensity value at this point s, the surrounding pixel values are multiplied by weighting values derived from the cubic interpolation equation (10) and summed to give the reconstructed intensity value of the original image function at the fractional pixel location s. Note that this reconstruction of the intensity at s is an approximation, due to noise, a non-ideal interpolation function, and round-off errors in the calculations. These errors are not significant in practice.
In this manner, sample pixel values at any desired locations can be reconstructed from a set of pixels at fixed locations. Note that due to the limited extent of the cubic interpolation function, at most four pixel values are used, but the four values chosen can range over five locations, so the vector of interpolation values, r, includes a fifth value of 0 on one end or the other. When the shift value s equals 0, the interpolation function is equal to 1, so the fixed pixel value is returned. This interpolation method for reconstructing and resampling an image signal is independent in the x and y directions. Therefore, a set of interpolation function values for two dimensional reconstruction and resampling is the Cartesian product of the interpolation values for shifting in x and the interpolation values for shifting in y. The Cartesian product R(i,j) = r_x(i)r_y(j), where r_x represents the five interpolation values in the x direction and r_y the five interpolation values in the y direction. In both sets of interpolation values, one or more values will be 0.
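A sketch of the interpolation weights of equations (10) and the Cartesian-product kernel follows. The five-tap layout (with a zero end weight) matches the description above; the choice of sample positions -2 through 2 relative to the base pixel is an assumption about the coordinate convention:

```python
import numpy as np

def cubic_weights(s, k=-0.5):
    """Cubic-convolution interpolation weights (equations 10) for a
    fractional shift s in [0, 1), with parameter k (default -0.5).
    Returns five weights; one end weight is always zero, matching
    the five-tap layout described in the text."""
    def v(x):
        x = abs(x)
        if x <= 1:
            return (k + 2) * x**3 - (k + 3) * x**2 + 1
        if x < 2:
            return k * (x**3 - 5 * x**2 + 8 * x - 4)
        return 0.0
    # Weight for the sample at integer offset n from the base pixel,
    # evaluated at the shifted output point s.
    return np.array([v(s - n) for n in (-2, -1, 0, 1, 2)])

def shift_kernel(sx, sy, k=-0.5):
    """2D sub-pixel 'shifting' kernel: the Cartesian product
    R(i,j) = r_y(i) * r_x(j) of the two 1D weight vectors."""
    return np.outer(cubic_weights(sy, k), cubic_weights(sx, k))
```

When s = 0 the weights reduce to [0, 0, 1, 0, 0], so convolution returns the unshifted pixel values; for any s the weights sum to 1, preserving the mean image intensity.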
The resulting array of interpolation values R(i,j) is a kernel for two-dimensional convolution. Convolving the image to be shifted with this kernel interpolates the image values and resamples them, such that the output from the convolution represents the reconstructed image intensity values at shifts sx and sy. Thus, the image sampling points (pixel centers) are effectively "moved" distances sx and sy by convolving with cubic interpolation functions, or "bicubic convolution" for short.

Returning to Fig. 1, the fractional alignment values sx and sy computed using the analytic curve fitting of the correlation data (described above) are used to compute a "shifting" kernel. This computation 19 is done either in the alignment computation hardware (IPA-150) or on the control computer and the resulting kernel is loaded into the convolver 20. Sub-pixel shifting of an image using bicubic convolution requires 16 multiplications, additions, and other operations at each image point. Shifting an image with many thousands of pixels can therefore take a significant amount of time. The Series 150 has hardware that performs convolutions of up to eight by eight kernel elements at very high speeds (CM150-RTC8 also from Imaging Technology Inc.), thus making this alignment and the subsequent mean accumulation or inspection very fast. The output of the convolver 20 is rounded to eight bits of value by using the look-up table (LUT) 21. A LUT is a general "function box" that can map any input value to any output value.
The two steps described above are a general procedure for aligning (registering) two images, given that an unchanging fiducial can be found in both images. This alignment method is used both in the construction of the reference image, given a master reference image, and in the subsequent inspection of images.

Returning to the construction of a reference image, additional sample images are acquired into image memory 8, and then aligned to the master reference image, using the fiducial values stored in the fiducial image memory 10. The whole pixel shifts of the acquired images are accomplished by changing the pan and scroll address values on the image memory and the fractional shifts by sending the image from bus 15 through the convolver 20 where it is reconstructed and resampled. This shifted image is then averaged into the reference image memory. Averaging is done by opening data switch 31 and closing data switch 33. Shifted images are added by the arithmetic-logic unit (ALU-150 also from Imaging Technology Inc.) 40 with the existing data 34 in the reference image memory 30, and the sum 41 is put back into the reference image memory. With N images summed together, the sum image is divided by N using the ALU 40. Data from the Reference image memory 30 goes through data path 34 into the ALU where it is divided by N, output on data path 41 and returned to the reference image memory 30 via data switch 33. The ALU-150 performs division by shifting, so N is chosen to be a power of 2. This is not required and other implementations might allow any value for N.
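The accumulate-and-divide averaging might be sketched as follows, with the power-of-2 division done by a right shift as in the ALU-150. The integer accumulator and the assertion on N are assumptions of this sketch:

```python
import numpy as np

def build_reference(aligned_images):
    """Average an ensemble of aligned known-good images into a
    reference image. N is kept a power of 2 so that, as with the
    ALU-150 hardware, the final division can be a bit shift."""
    n = len(aligned_images)
    assert n & (n - 1) == 0 and n > 0, "N must be a power of 2"
    acc = np.zeros_like(aligned_images[0], dtype=np.int64)
    for img in aligned_images:
        acc += img                     # running sum in the reference memory
    shift = n.bit_length() - 1         # divide by N via a right shift
    return acc >> shift
```

Averaging four uniform images of levels 10, 12, 14, and 16 yields a uniform reference image of level 13.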
When the reference image has been computed and stored in 30, a small patch of this image containing the image of the fiducial area is transferred by the control computer 100 into the Fiducial Image Memory 10. This will serve as the reference fiducial for the alignment of images of inspected parts.
A "variability" image is constructed in conjunction with the construction of the reference image. The variability image specifies the acceptable range of part pattern intensity variation at each point (pixel) in the image. The disclosed method uses the standard deviation of pixel intensities at each point in the image to construct the variability image, but other methods such as using some fraction of the pixel intensity range at each point could also be used.
Standard deviation (s.d.) at a given point in the image is given in equation 11 of Fig. 3f, where z_k are intensity values across the ensemble of images and M_z is the mean (average) intensity value across the ensemble of images. Rearranging (11) gives equation 12 of Fig. 3f.
The mean M_z is the average value accumulated in the reference image memory. The sum of the squares of z_k (pixel intensity values across the ensemble of images) is accumulated in the variability image memory 53. As aligned images come from the convolver 20 to be added into the reference image memory 30, they are also sent to the variance computation 50 via data switch 51.
Initially the variability image memory 53 is all zeros. As images come into the variance computation 50, the pixel values are squared and added to the values in the variability image memory 53 using data buses 52. This accumulates the sum of the pixel values squared for each image point.
At the end of the formation of the reference image average, these mean values are transferred via data switch 54 to the variance computation where they are squared and subtracted from the sum of squared pixel values held in variability image memory 53. These values are divided by N and the square root is taken to give the standard deviation. The variability image is then constructed by setting each pixel in this memory to some multiple plus an offset of the standard deviation at that point. For example, to set a limit of variability of three standard deviations, the s.d. values would be multiplied by three. A three standard deviation limit means that the variation in sample image intensity at a point must exceed a range covering more than 99% of the variation normally expected before the point is declared a potential defect. The computed and scaled standard deviation values are stored back into variability image memory 53 via data buses 52.
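The variability-image computation (equations 11 and 12) can be sketched as below. This follows the rearranged equation 12 directly, computing the variance as the mean of squares minus the squared mean, rather than mirroring the hardware data path:

```python
import numpy as np

def variability_image(images, multiple=3.0, offset=0.0):
    """Per-pixel variability limit from an ensemble of aligned
    known-good images: accumulate the sum of squares, subtract the
    squared mean (equation 12), take the square root, and scale the
    standard deviation by a multiple plus an offset (e.g. 3 s.d.)."""
    n = len(images)
    sum_sq = np.zeros_like(images[0], dtype=float)
    mean = np.zeros_like(images[0], dtype=float)
    for img in images:
        sum_sq += img.astype(float) ** 2   # accumulated per pixel
        mean += img
    mean /= n
    var = sum_sq / n - mean ** 2           # equation 12 rearrangement
    sd = np.sqrt(np.maximum(var, 0.0))     # guard against round-off
    return multiple * sd + offset
```

For two one-row images [0, 4] and [4, 4], the first pixel has s.d. 2 (scaled to 6 at three s.d.) and the second has s.d. 0.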
The next step in the training mode is to select a shape and size of kernel for the grey-level erosion step. The grey-level erosion will set the edges of groups of pixels to zero value. This removes small patches of noise, or "ghost" edges due to slight misalignment in the structures of reference and sample parts. In the implementation with the Series 150, the erosion is performed by the CM150-RVF (also from Imaging Technology Inc.), and can have a kernel size of up to eight by eight, meaning that groups of pixels as large as eight by eight can be removed (zeroed) from the potential defect image in one pass through this hardware.
Finally, limits on acceptable defects are selected. The limits selected depend upon the defect analysis 70 selected. For the example analysis described below, the maximum acceptable number of defect pixels and the value of the maximum acceptable defect are selected.
The multiple and offset of the standard deviation, the erosion kernel shape and size, and the limits for defects are usually selected based on comparing the results of a test run with human visual inspection. To accomplish this, an IC wafer containing many ICs is inspected and the parameters of the automated inspection are adjusted to bring the reported defects into agreement with those found by careful visual inspection by a human of the same wafer. Typically, a single set of parameters can be used for many different kinds of ICs manufactured by the same fabrication process.

Once the inspection system has been taught the reference image, the range of acceptable variations, etc., the system is put into inspection mode and can inspect patterned parts at high speeds. A typical inspection cycle is described below.

The Positioner 1 brings the part to be inspected to within a few pixel positions of a known position under the camera 3 and with near zero angular rotation. The camera image is digitized 7 and stored in one of the image memories 8. As described above, the storing of this sample image may proceed in parallel with the processing of the previously digitized sample image. A small patch of the sample image is transferred via switch 9 and data bus 15 into the alignment computation 11. In 11 the location of the fiducial in the sample image is determined to sub-pixel accuracy using the method of correlation and analytic curve fitting described above. The shifts required to bring the sample and reference images into alignment are sent (12) to the image memory 8 (for whole pixel shifts) and are used to compute (19) a bicubic convolution kernel. This kernel is placed in the convolver 20 and is used to align the sample and reference image to a fraction of a pixel in position. The whole pixel aligned sample image comes from image memory 8 through data switch 9 and data bus 14 and is shifted by convolver 20 and rounded to eight bits by LUT 21.
The aligned sample image goes through the data switch 14 and is subtracted on a point-by-point basis from the reference image stored in the reference image memory 30 using the ALU 40. The subtracted image values are sent via data bus 41 to the absolute value computation 42 where negative values are turned into the equivalent positive values. That is, if z is less than 0, then z is replaced with -z (a positive number). In practice, both the subtraction and the absolute value are performed at high speed using the ALU-150 hardware. The absolute value is taken because, in general, sample defects may be brighter or darker than the reference image. The result of subtracting the sample from the reference image and taking the absolute value is called the difference image, and these differences could represent defects. However, acceptable variations in parts, noise, and minor misalignments give rise to differences that are not considered to be defects (false positives). To reduce these false positive defects the variability image stored in the variability image memory 53 is subtracted using data paths 43 and 54 by the look-up table 50. The look-up table is programmed to subtract the variability image values from the difference image and if the result is less than 0, a 0 is output. This "thresholds" or reduces difference image values, and this threshold depends upon the intrinsic variability of the ensemble of known good images used to construct the variability image. The result of this subtraction thresholding is called the potential defect image. The look-up table 50 is implemented using a CM150-LUT16 (also from Imaging Technology Inc.).
There can still be false positive defects in the potential defect image, and so this image is passed via data bus 59 to the erosion processor 60. The erosion processor sets small groups of pixels to zero and zeros pixels at the edges of larger groups (blobs) of pixels. The erosion step is optional and its use and the size of the erosion is selected during the training procedure. The resulting image is called the error image, and this image is transferred via data bus 61 to the Defect Analysis computation 70. The defect analysis is designed to measure the values and structure of groups of pixels in the error image. These measures are compared against previously specified limits for the measure and if some or all of these measures are outside of the limits, the control computer 100 is notified via data path 71.
The defect analysis computation can take many forms. Example measures include the size of the largest grouping of pixels (a "blob" of pixels) or the orientation of the longest group of pixels, perhaps indicating a scratch on the IC surface. For illustrative purposes, two simple measures are described herein. As pixels in the error image are sent to the defect analysis computation, the number of pixels greater than zero is counted and the maximum value of all of these pixels is remembered. These two values give a measure of the total amount of error "area" (the total number of pixels) and an indication of the magnitude of the error (the largest value in the error image). While quite simple, these measures are sufficient for many IC inspection tasks.
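The two illustrative measures reduce to a count and a maximum over the error image; a minimal sketch (function name assumed):

```python
import numpy as np

def defect_measures(error_img):
    """The two measures from the text: total error 'area' (count of
    nonzero pixels) and error magnitude (largest pixel value)."""
    area = int((error_img > 0).sum())   # total number of error pixels
    magnitude = int(error_img.max())    # largest error value
    return area, magnitude
```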
The present invention, by maintaining gray-scale (continuous valued) pixel values throughout the processing, allows a variety of defect measures to be made on the error image, such as the "volume" (x, y area times pixel value) of blobs or groups of pixels. Thus the present invention can be adjusted to provide more sophisticated and sensitive measures of defects as well as to provide better classifications of defects for process control.
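A gray-scale "volume" measure of the kind mentioned above can be sketched by labeling connected blobs of nonzero error pixels and summing their values. The 4-connected flood-fill labeling here is an assumption; the patent only states that retaining gray-scale values makes such measures possible.

```python
from collections import deque

import numpy as np

def blob_volumes(error_img):
    """Return the 'volume' (sum of gray values, i.e. area weighted by
    intensity) of each 4-connected blob of nonzero pixels."""
    h, w = error_img.shape
    seen = np.zeros((h, w), dtype=bool)
    volumes = []
    for y in range(h):
        for x in range(w):
            if error_img[y, x] > 0 and not seen[y, x]:
                # Flood-fill one blob, accumulating its pixel values.
                vol, queue = 0, deque([(y, x)])
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    vol += int(error_img[cy, cx])
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and error_img[ny, nx] > 0
                                and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                volumes.append(vol)
    return volumes
```

Because the pixel values are retained rather than binarized, a faint but large blob and a small but intense blob can be distinguished, supporting the finer defect classification described above.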
If a defect is detected by comparing the measures with previously trained limits, then the measures and an error signal are sent to the control computer. Alternatively, the measures may be sent to the control computer where they are compared against the trained limits. In either case, an error report 101 is issued if the part under inspection has produced defect measures that are greater than the previously specified limits, and the part is then rejected as being defective. If desired, the measures generated by the defect analysis can be accumulated and summarized to provide some indication of the types and magnitude of errors. This information can be used to adjust the process that produces the parts. For example, in IC inspection the existence of larger blobs may indicate that dust and other particulate matter is contaminating the fabrication process.
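The accept/reject decision above amounts to comparing each measure against its trained limit. A minimal sketch, with the dictionary-based interface assumed for illustration:

```python
def check_part(measures, limits):
    """Return the names of defect measures that exceed their previously
    trained limits; an empty list means the part passes inspection."""
    return [name for name, value in measures.items()
            if value > limits[name]]
```

For example, `check_part({"area": 12, "max": 40}, {"area": 10, "max": 255})` flags the part on its error area while the error magnitude remains within limits; the returned names could feed the accumulated error report used for process control.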

Claims

1. A method for determining the amount of misalignment between a digitized sample image of a patterned part and a corresponding digitized reference image, comprising performing a correlation operation on said images to generate a correlation surface indicative of an integral number of pixels by which said images are misaligned, and fitting said correlation surface to an analytic surface to determine a fractional number of pixels by which said images are misaligned.
2. The method of claim 1 wherein said analytic surface comprises an elliptic paraboloid-like surface.
3. The method of claim 1 or 2 further comprising setting negative correlation values to zero in said correlation operation.
4. A method for inspecting a patterned part comprising forming a digitized sample image of said patterned part, performing a convolution operation on said digitized sample image to effectively shift said digitized sample image by a fraction of a pixel relative to a corresponding digitized reference image, and comparing said shifted digitized sample image with said reference image.
5. The method of claim 4 further comprising storing adjacent pixels of said digitized sample image at successive addresses in a memory, and adjusting references to said addresses to effectively shift said digitized sample image by an integer number of pixels relative to a corresponding digitized reference image.
6. The method of claim 4 wherein said convolution operation comprises a bicubic convolution.
7. The method of claim 4 further comprising, prior to performing said convolution operation, determining an amount of misalignment between said digitized sample image and said corresponding digitized reference image, comprising performing a correlation operation on said images to generate a correlation surface indicative of an integral number of pixels by which said images are misaligned, and fitting said correlation surface to an analytic surface to determine a fractional number of pixels by which said images are misaligned.
8. A method for automatically adjusting the offset and gain of a digitizer with respect to a signal representing images of patterned parts comprising (a) digitizing an image signal as pixel intensity values using an initial offset value, (b) determining the maximum (max) and minimum (min) of said pixel intensity values, (c) adjusting said initial offset value up or down by a value F = ((UD - max) - (min - LD))/2 (where UD and LD are respectively upper and lower decrease thresholds) to obtain an adjusted offset value, (d) digitizing an image signal as pixel intensity values using said adjusted offset value, (e) repeating step (b), (f) if min is greater than LI (a lower increase threshold) and max is less than UI (an upper increase threshold), increasing the gain by an increment; otherwise, if min is less than LD and max is greater than UD, decreasing the gain by a decrement, and (g) repeating steps (a) through (f) until neither condition of step (f) is true.
9. A method of automatically choosing a fiducial area within a digitized sample image for use in aligning said image with a reference image, comprising selecting multiple possible fiducial areas within said image, forming a correlation surface for each said possible fiducial area by correlating said fiducial area with itself and adjacent portions of said image, generating a measure of the merit of each said possible fiducial area, and comparing said measures to choose said fiducial area.
10. The method of claim 9 wherein said measure comprises at least one of: (a) the sharpness of a peak of said correlation surface, (b) whether or not there is a single peak of said surface, (c) the symmetry and smoothness of said peak.
11. The method of claim 1, 4, 8, or 9 wherein said pixels are grey level pixels, each having one of more than two possible values.
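The feedback loop of claim 8 can be sketched as two small update rules: the offset correction F = ((UD - max) - (min - LD))/2 re-centers the digitized range between the decrease thresholds, and the gain is stepped up or down depending on which threshold conditions hold. The function names, argument order, and unit step size are assumptions for illustration.

```python
def adjust_offset(max_val, min_val, offset, UD, LD):
    """Offset correction from claim 8, step (c):
    F = ((UD - max) - (min - LD)) / 2."""
    return offset + ((UD - max_val) - (min_val - LD)) / 2

def adjust_gain(max_val, min_val, gain, UI, LI, UD, LD, step=1):
    """Gain update from claim 8, step (f)."""
    if min_val > LI and max_val < UI:
        # Signal does not fill the range between the increase
        # thresholds: raise the gain.
        return gain + step
    if min_val < LD and max_val > UD:
        # Signal overruns both decrease thresholds: lower the gain.
        return gain - step
    # Neither condition holds: per step (g), the loop terminates.
    return gain
```

Iterating these two updates (steps (a) through (f)) drives the digitized signal so that it spans the region between the increase and decrease thresholds without clipping.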
PCT/US1991/004266 1990-06-14 1991-06-14 Patterned part inspection WO1991020054A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US53819990A 1990-06-14 1990-06-14
US538,199 1990-06-14

Publications (1)

Publication Number Publication Date
WO1991020054A1 true WO1991020054A1 (en) 1991-12-26

Family

ID=24145921

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/004266 WO1991020054A1 (en) 1990-06-14 1991-06-14 Patterned part inspection

Country Status (2)

Country Link
AU (1) AU8204391A (en)
WO (1) WO1991020054A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4794648A (en) * 1982-10-25 1988-12-27 Canon Kabushiki Kaisha Mask aligner with a wafer position detecting device
US4578812A (en) * 1982-12-01 1986-03-25 Nec Corporation Digital image processing by hardware using cubic convolution interpolation
US4613857A (en) * 1983-01-04 1986-09-23 Micro Consultants Limited Repeated information detection
US4805123A (en) * 1986-07-14 1989-02-14 Kla Instruments Corporation Automatic photomask and reticle inspection method and apparatus including improved defect detector and alignment sub-systems
US4805123B1 (en) * 1986-07-14 1998-10-13 Kla Instr Corp Automatic photomask and reticle inspection method and apparatus including improved defect detector and alignment sub-systems
US4963036A (en) * 1989-03-22 1990-10-16 Westinghouse Electric Corp. Vision system with adjustment for variations in imaged surface reflectivity

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6952491B2 (en) * 1990-11-16 2005-10-04 Applied Materials, Inc. Optical inspection apparatus for substrate defect detection
EP0588157A3 (en) * 1992-09-15 1994-12-21 Gaston A Vandermeerssche Abrasion analyzer and testing method.
US5835621A (en) * 1992-09-15 1998-11-10 Gaston A. Vandermeerssche Abrasion analyzer and testing method
EP0588157A2 (en) * 1992-09-15 1994-03-23 Gaston A. Vandermeerssche Abrasion analyzer and testing method
WO1997006502A1 (en) * 1995-08-07 1997-02-20 Mikoh Technology Limited Optical image authenticator
EP0820039A2 (en) * 1996-07-15 1998-01-21 Matsushita Electric Works, Ltd. Image processing inspection apparatus
EP0820039A3 (en) * 1996-07-15 2003-01-02 Matsushita Electric Works, Ltd. Image processing inspection apparatus
US6980210B1 (en) 1997-11-24 2005-12-27 3-D Image Processing Gmbh 3D stereo real-time sensor system, method and computer program therefor
EP0918302A2 (en) * 1997-11-24 1999-05-26 Weiglhofer, Gerhard Coherence detector
EP0918302A3 (en) * 1997-11-24 1999-08-11 Weiglhofer, Gerhard Coherence detector
US7840007B2 (en) 2001-12-03 2010-11-23 Fuso Precision Co., Ltd. Digital data false alteration detection program and digital data false alteration detection apparatus
EP1453298A4 (en) * 2001-12-03 2007-04-25 Fuso Prec Co Ltd Digital data false alteration detection program and digital data false alteration detection apparatus
EP1453298A1 (en) * 2001-12-03 2004-09-01 Fuso Precision Co., Ltd. Digital data false alteration detection program and digital data false alteration detection apparatus
US8908901B2 (en) 2007-04-24 2014-12-09 Renishaw Plc Apparatus and method for surface measurement
US9689655B2 (en) 2008-10-29 2017-06-27 Renishaw Plc Measurement method
CN110598689A (en) * 2018-06-12 2019-12-20 安讯士有限公司 Method, device and system for estimating sub-pixel positions of extreme points in an image
EP3582182A1 (en) * 2018-06-12 2019-12-18 Axis AB A method, a device, and a system for estimating a sub-pixel position of an extreme point in an image
US10783662B2 (en) 2018-06-12 2020-09-22 Axis Ab Method, a device, and a system for estimating a sub-pixel position of an extreme point in an image
CN110598689B (en) * 2018-06-12 2021-04-02 安讯士有限公司 Method, device and system for estimating sub-pixel positions of extreme points in an image
CN110598689B9 (en) * 2018-06-12 2021-09-03 安讯士有限公司 Method, device and system for estimating sub-pixel positions of extreme points in an image
WO2020083612A1 (en) * 2018-10-23 2020-04-30 Asml Netherlands B.V. Method and apparatus for adaptive alignment
US11308635B2 (en) 2018-10-23 2022-04-19 Asml Netherlands B.V. Method and apparatus for adaptive alignment
US11842420B2 (en) 2018-10-23 2023-12-12 Asml Netherlands B.V. Method and apparatus for adaptive alignment
CN110517231A (en) * 2019-08-13 2019-11-29 云谷(固安)科技有限公司 Shield the detection method and device of body showing edge
CN110517231B (en) * 2019-08-13 2023-12-22 云谷(固安)科技有限公司 Method and device for detecting display edge of screen body
EP4300421A1 (en) * 2022-06-28 2024-01-03 EM Microelectronic-Marin SA Improvement of image correlation processing by addition of reaggregation

Also Published As

Publication number Publication date
AU8204391A (en) 1992-01-07

Similar Documents

Publication Publication Date Title
US4805123A (en) Automatic photomask and reticle inspection method and apparatus including improved defect detector and alignment sub-systems
US6141038A (en) Alignment correction prior to image sampling in inspection systems
EP0186874B1 (en) Method of and apparatus for checking geometry of multi-layer patterns for IC structures
EP0117559B1 (en) Pattern checking apparatus
US6005978A (en) Robust search for image features across image sequences exhibiting non-uniform changes in brightness
US4559603A (en) Apparatus for inspecting a circuit pattern drawn on a photomask used in manufacturing large scale integrated circuits
US6148120A (en) Warping of focal images to correct correspondence error
US6865288B1 (en) Pattern inspection method and apparatus
US20080298719A1 (en) Sub-resolution alignment of images
JP3749090B2 (en) Pattern inspection device
US7024041B2 (en) Pattern inspection apparatus and method
US6400838B2 (en) Pattern inspection equipment, pattern inspection method, and storage medium storing pattern inspection program
WO1991020054A1 (en) Patterned part inspection
TW508709B (en) System and method for inspecting bumped wafers
JP2002140694A (en) Image processor, its method and recording medium with recorded image processing program
US5912985A (en) Pattern detection method
JPH08292014A (en) Measuring method of pattern position and device thereof
JP2006258582A (en) Image input device and image input method
CN107085843B (en) System and method for estimating modulation transfer function in optical system
US5337373A (en) Automatic threshold generation technique
US6888958B1 (en) Method and apparatus for inspecting patterns
CN114964032A (en) Blind hole depth measuring method and device based on machine vision
JP3327600B2 (en) Pattern defect inspection method and apparatus
JPH076777B2 (en) Pattern contour detection method and length measuring apparatus using this method
JP4235756B2 (en) Misalignment detection method, misalignment detection apparatus, image processing method, image processing apparatus, and inspection apparatus using the same

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP KR

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

NENP Non-entry into the national phase

Ref country code: CA