WO2008068894A1 - Defect detecting device, defect detecting method, information processing device, information processing method and program - Google Patents


Info

Publication number
WO2008068894A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
identification information
images
model
inspected
Prior art date
Application number
PCT/JP2007/001335
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroshi Kawaragi
Original Assignee
Tokyo Electron Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tokyo Electron Limited filed Critical Tokyo Electron Limited
Publication of WO2008068894A1 publication Critical patent/WO2008068894A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956 Inspecting patterns on the surface of objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30148 Semiconductor; IC; Wafer

Definitions

  • Defect detection apparatus, defect detection method, information processing apparatus, information processing method, and program thereof
  • the present invention relates to a defect detection apparatus capable of inspecting the appearance of a microstructure such as MEMS (Micro Electro Mechanical Systems) formed on a semiconductor wafer and detecting defects such as foreign matter and scratches.
  • the present invention relates to a defect detection method, an information processing apparatus, an information processing method, and a program thereof.
  • MEMS that integrate various functions in the fields of machinery, electronics, light, chemistry, etc., especially using semiconductor microfabrication technology
  • MEMS devices that have been put into practical use include acceleration sensors, pressure sensors, and airflow sensors as sensors for automobiles and medical use.
  • MEMS devices are also used in printer heads for ink jet printers, micromirror arrays for reflective projectors, and other actuators.
  • MEMS devices are also applied in fields such as chemical synthesis and bioanalysis such as protein analysis chips (so-called protein chips) and DNA analysis chips.
  • In Patent Document 1, for example, there is described a technique in which a CCD (Charge Coupled Device) camera or the like images an arbitrary plurality of non-defective products, and pass/fail of an inspected product is determined based on those images.
  • In Patent Document 2, it is described that each of the reference patterns is imaged and the reference pattern images are stored; the positions of the reference patterns are aligned, an average value or intermediate value is calculated between the image data for each pixel so that data with large variation and abnormal values are avoided, reference image data serving as a proper standard is thereby created, and pattern defects are detected by comparing the reference image data with inspection image data.
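The per-pixel average or intermediate-value (median) reduction of this prior-art reference-image approach can be sketched as follows. This is a minimal illustration with hypothetical names, assuming the reference images are already aligned; it is not the cited patent's implementation:

```python
import numpy as np

def reference_image(aligned_images, use_median=True):
    # Stack the pre-aligned reference-pattern images and reduce per pixel.
    # The median resists outlier pixels (data with large variation or
    # abnormal values), while the mean is cheaper to compute.
    stack = np.stack([np.asarray(i, dtype=float) for i in aligned_images])
    return np.median(stack, axis=0) if use_median else stack.mean(axis=0)
```

With the median, a single aberrant pixel in one reference image does not corrupt the reference data, which is the stated motivation for avoiding data with large variation.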
  • Patent Document 1: Japanese Patent Laid-Open No. 2005-265661
  • Patent Document 2: Japanese Patent Laid-Open No. 11-73513 (paragraph [0080] etc.)
  • Patent Document 3: Japanese Patent Laid-Open No. 2001-209798 (Figure 1 etc.)
  • In the above prior art, non-defective image data or reference image data serving as the inspection standard (hereinafter referred to as model image data) is created based on images of a plurality of non-defective products prepared separately from the image to be inspected. Therefore, prior to the creation of model image data, it is necessary to determine whether each product is non-defective and to select the non-defective products. In the inspection of extremely small structures such as MEMS devices, where a few scratches or foreign objects constitute defects, preparing an absolutely good product (model) is itself difficult, as is maintaining and managing the model image data.
  • Further, when the inspection object is a three-dimensional object with unevenness or curvature, uneven luminance may occur in the captured image due to illumination conditions, optical conditions, and the like, and noise may be mixed in and be misdetected as a defect.
  • In Patent Document 3, there is described a method in which the entire surface of a wafer having a matrix-like repetitive pattern is imaged as a single image, a processed image is obtained by applying a median filter or the like to the captured image, and the luminance values of the extracted image are binarized to determine the presence or absence of a defective portion.
  • In view of the above circumstances, the object of the present invention is to provide a defect detection apparatus, a defect detection method, an information processing apparatus, an information processing method, and a program therefor that can detect defects of MEMS devices with high accuracy and efficiency while preventing false detection due to uneven brightness, noise, and the like, without requiring an absolute model image.
  • In order to achieve the above object, a defect detection apparatus according to one aspect of the present invention includes: imaging means for imaging a microstructure formed on each of a plurality of dies on a semiconductor wafer, for each divided region obtained by dividing the region of each die into a plurality of regions; illumination means for illuminating the microstructure to be imaged; storage means for storing each of the captured images of the divided regions as an inspection target image in association with identification information identifying the position of each divided region in each die; first filtering means for performing, on each of the stored inspection target images, filtering that removes low-frequency components in the inspection target image; model image creation means for creating, for each piece of identification information, a model image as an average image obtained by averaging the filtered inspection target images of the divided regions corresponding to that identification information across the dies; and detecting means for detecting a defect of the microstructure by comparing each created model image with each filtered inspection target image corresponding to the identification information of that model image.
  • Here, the microstructure is a so-called MEMS (Micro Electro Mechanical Systems) device.
  • the imaging means is, for example, a camera with a built-in imaging element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) sensor.
  • the illumination means is, for example, a flash light emitting unit such as a high-intensity white LED (Light Emitting Diode) or a xenon lamp.
  • The first filtering means is, in other words, a high-pass filter. Defects are, for example, scratches and foreign objects.
  • In the present invention, each inspection target image is captured under the same illumination conditions by the imaging means and the light source, each model image is created based on those images, and each model image is compared with each inspection target image, so that defects in a large number of microstructures can be detected accurately and stably without being affected by changes in illumination conditions such as shadows or by fixed noise such as dirt on the lens of the imaging means.
  • Moreover, the first filtering means can remove luminance unevenness (low-frequency components) caused by an optical-axis shift of the imaging means, the flatness of the microstructure, and the like. Since the model images are created and the defects are detected based on the filtered images, this single filtering pass improves both the quality of the model images and the defect detection accuracy.
  • Furthermore, since the captured inspection target images can be used both for the model image creation processing and for the defect detection processing, defects can be detected more efficiently than when model images are created from separately captured images.
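As a rough sketch, the low-frequency removal described above can be realized by subtracting a local mean from each image; the box-blur kernel and its size here are assumptions, since the text only specifies that low-frequency components are removed:

```python
import numpy as np

def box_blur(img, k):
    # Local mean over a k x k window, computed with an integral image
    # (cumulative sums) over an edge-padded copy of the input.
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    s = np.cumsum(np.cumsum(p, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    h, w = img.shape
    return (s[k:k + h, k:k + w] - s[:h, k:k + w]
            - s[k:k + h, :w] + s[:h, :w]) / (k * k)

def high_pass(img, k=15):
    # Subtracting the local mean suppresses slow illumination gradients
    # (luminance unevenness) while keeping fine detail such as scratches
    # and particles.
    return img.astype(float) - box_blur(img, k)
```

On a constant or smoothly varying image the output is near zero, which is exactly the behavior wanted: only high-frequency structure such as defects survives.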
  • In the defect detection apparatus, the detecting means may include difference extraction means for superimposing each model image and each inspection target image corresponding to its identification information and extracting the difference between the two images as a difference image, and second filtering means for performing filtering that removes, from the series of pixel regions in the extracted difference image whose luminance is equal to or higher than a predetermined value, any pixel region having a predetermined area or less.
  • The second filtering means performs the filtering by analyzing blocks of pixels having a predetermined grayscale value (or a value in a predetermined range) in the difference image, that is, by blob analysis.
  • The further filtering by the second filtering means prevents noise components in the difference image from being erroneously detected as defects, so that the detection accuracy can be further improved.
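The difference extraction and blob-area filtering described above can be sketched as follows; the threshold and minimum-area values are illustrative assumptions, and the connected-component search is a simple stand-in for the blob analysis:

```python
import numpy as np
from collections import deque

def detect_defects(model, test, thresh=30, min_area=5):
    # Pixels where |test - model| reaches `thresh` are defect candidates.
    # Connected candidate regions smaller than `min_area` pixels are
    # discarded as noise (the second filtering / blob-analysis step).
    diff = np.abs(test.astype(float) - model.astype(float)) >= thresh
    h, w = diff.shape
    seen = np.zeros((h, w), dtype=bool)
    defects = []
    for y in range(h):
        for x in range(w):
            if diff[y, x] and not seen[y, x]:
                # Flood-fill one 4-connected blob of candidate pixels.
                q, blob = deque([(y, x)]), []
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and diff[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(blob) >= min_area:
                    defects.append(blob)
    return defects
```

A one-pixel difference spike is dropped as noise, while a scratch or particle spanning several pixels is reported as a defect region.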
  • In the defect detection apparatus, the model image creation means may include means for superimposing the inspection target images corresponding to the identification information and calculating, for each pixel constituting the inspection target images, the average of the luminance values.
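A minimal sketch of this per-pixel averaging across dies, assuming the images are already aligned and grouped by identification information (the dictionary layout is a hypothetical choice, not part of the source):

```python
import numpy as np

def build_models(images_by_id):
    # images_by_id maps identification information (a divided region's
    # position) -> list of filtered inspection images of that position
    # taken from different dies. The per-pixel mean suppresses defects
    # that appear in only one die, leaving the common pattern as the
    # model image.
    return {area_id: np.stack(imgs).astype(float).mean(axis=0)
            for area_id, imgs in images_by_id.items()}
```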
  • In the defect detection apparatus, the imaging means may continuously image the microstructures in the divided regions having corresponding identification information across the dies. Alternatively, the imaging means may image the microstructures of all the divided regions in one die and then image the microstructures of the divided regions of another die adjacent to that die.
  • In the defect detection apparatus, the microstructure may be a screening inspection container having a plurality of recesses whose thin-film bottom surfaces receive a reagent and an antibody that cross-reacts with the reagent, and a plurality of holes provided in the bottom surface of each recess for discharging the reagent that has not reacted with the antibody.
  • the container is a protein chip.
  • cracks and scratches on the thin film (membrane) of the protein chip, foreign matter adhering to the thin film, etc. can be detected with high accuracy.
  • In this case, the model image creation means may include means for aligning the inspection target images, prior to averaging the inspection target images corresponding to the identification information of each model image, based on the shape of each recess of the container in the inspection target images. By using the shape of each recess of the container, the overlapping positions of the inspection target images can be accurately matched and a higher-quality model image can be created.
  • The alignment is performed by changing the relative position of the images, translating each image in the X and Y directions or rotating it in the θ direction.
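A toy illustration of such alignment, restricted to X/Y translation (the θ rotation is omitted for brevity) and using an exhaustive small-shift search; this search strategy is an assumption, not the patent's stated method:

```python
import numpy as np

def best_shift(ref, img, max_shift=3):
    # Try every (dy, dx) translation within +/- max_shift and keep the
    # one that minimizes the sum of squared differences against `ref`.
    best, score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            s = np.sum((ref.astype(float) - shifted) ** 2)
            if s < score:
                best, score = (dy, dx), s
    return best
```

In practice the recess (or window-hole) contours mentioned in the text would drive the alignment instead of raw intensities, but the shift-search structure is the same.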
  • In the defect detection apparatus, the difference extraction means may include means for aligning each model image and each inspection target image, prior to extracting the difference, based on the shape of each recess of the container in each model image and in each inspection target image corresponding to the identification information of that model image.
  • In the defect detection apparatus, the microstructure may be an electron beam irradiation plate that includes a plate member having a plurality of window holes through which a plurality of electron beams are emitted, and a thin film provided so as to cover the window holes.
  • In this case, the model image creation means may include means for aligning the inspection target images, prior to averaging the inspection target images corresponding to the identification information of each model image, based on the shape of each window hole of the electron beam irradiation plate in the inspection target images.
  • By using the shape of each window hole of the electron beam irradiation plate, the overlapping positions of the inspection target images can be accurately matched and a higher-quality model image can be created.
  • The difference extraction means may include means for aligning each model image and each inspection target image, prior to extracting the difference, based on the shape of each window hole of the electron beam irradiation plate in each model image and in each inspection target image corresponding to the identification information of that model image.
  • A defect detection method according to another aspect of the present invention includes the steps of: imaging a microstructure formed on each of a plurality of dies on a semiconductor wafer, under illumination, for each divided region obtained by dividing the region of each die into a plurality of regions; storing each captured image as an inspection target image in association with identification information identifying the position of each divided region; performing, on each stored inspection target image, filtering that removes low-frequency components in the inspection target image; creating, for each piece of identification information, a model image as an average image obtained by averaging the filtered inspection target images of the divided regions corresponding to that identification information across the dies; and detecting a defect of the microstructure by comparing each created model image with each filtered inspection target image corresponding to its identification information.
  • In the defect detection method, the detecting step may include a step of superimposing each model image and each inspection target image corresponding to its identification information and extracting the difference between the two images as a difference image, and a step of performing filtering that removes, from the series of pixel regions in the extracted difference image whose luminance is equal to or higher than a predetermined value, any pixel region having a predetermined area or less.
  • An information processing apparatus according to another aspect of the present invention includes: storage means for storing images of a microstructure formed on each of a plurality of dies on a semiconductor wafer, captured under illumination for each divided region obtained by dividing each die into a plurality of regions, as inspection target images in association with identification information identifying the position of each divided region in each die; filtering means for performing, on each stored inspection target image, filtering that removes low-frequency components in the inspection target image; model image creation means for creating, for each piece of identification information, a model image as an average image obtained by averaging the filtered inspection target images of the divided regions corresponding to that identification information across the dies; and detecting means for detecting a defect of the microstructure by comparing each created model image with each filtered inspection target image corresponding to the identification information of that model image.
  • The information processing apparatus is a computer such as a PC (Personal Computer), of a so-called notebook or desktop type.
  • An information processing method according to still another aspect of the present invention includes: storing images of a microstructure formed on each of a plurality of dies on a semiconductor wafer, captured under illumination for each divided region obtained by dividing each die into a plurality of regions, as inspection target images in association with identification information identifying the position of each divided region in each die; performing, on each stored inspection target image, filtering that removes low-frequency components in the inspection target image; creating, for each piece of identification information, a model image as an average image obtained by averaging the filtered inspection target images of the divided regions corresponding to that identification information across the dies; and detecting a defect of the microstructure by comparing each created model image with each filtered inspection target image corresponding to its identification information.
  • A program according to still another aspect of the present invention causes a computer to execute the steps of: storing images of a microstructure formed on each of a plurality of dies on a semiconductor wafer, captured under illumination for each divided region obtained by dividing each die into a plurality of regions, as inspection target images in association with identification information identifying the position of each divided region in each die; performing, on each stored inspection target image, filtering that removes low-frequency components in each inspection target image; creating, for each piece of identification information, a model image as an average image obtained by averaging the filtered inspection target images of the divided regions corresponding to that identification information across the dies; and detecting a defect of the microstructure by comparing each created model image with each filtered inspection target image corresponding to the identification information of that model image.
  • As described above, according to the present invention, an absolute model image is not required, and defects of MEMS devices can be detected with high accuracy and efficiency while preventing erroneous detection due to uneven brightness, noise, and the like.
  • FIG. 1 is a diagram showing a configuration of a defect detection apparatus according to an embodiment of the present invention.
  • The defect detection apparatus 100 includes a wafer table 2 on which a semiconductor wafer 1 made of, for example, silicon (hereinafter also simply referred to as wafer 1) is placed, an XYZ stage 3 for moving the wafer table 2 in the X, Y, and Z directions in the figure, a CCD camera 6 for imaging the wafer 1 from above, a light source 7 that illuminates the wafer 1 during imaging by the CCD camera 6, and an image processing PC (Personal Computer) 10 that controls the operation of each unit and performs image processing to be described later.
  • The wafer 1 is transported onto the wafer table 2 by a transport arm (not shown) and is suction-fixed to the wafer table 2 by suction means such as a vacuum pump (not shown).
  • Alternatively, a tray (not shown) capable of holding the wafer 1 may be prepared separately, and the tray may be suction-fixed with the wafer 1 held on it.
  • a protein chip is formed as a MEMS device.
  • the defect detection apparatus 100 is an apparatus for detecting defects such as foreign matters and scratches on the protein chip using the protein chip as an inspection object. Details of the protein chip will be described later.
  • The CCD camera 6 is fixed at a predetermined position above the wafer 1 and incorporates a lens, a shutter (not shown), and the like. Based on a trigger signal output from the image processing PC 10, the CCD camera 6 captures an image of a protein chip formed on a predetermined portion of the wafer 1, enlarged by the built-in lens, under the flash light emitted by the light source 7, and transmits the captured image to the image processing PC 10.
  • The XYZ stage 3 moves the wafer 1 in the vertical direction (Z direction), thereby changing the relative distance between the CCD camera 6 and the wafer 1, so that the focal position when the CCD camera 6 images the wafer 1 can be changed.
  • the focal position may be varied by moving the CCD camera 6 in the Z direction instead of the XYZ stage 3.
  • The lens of the CCD camera 6 is configured as a zoom lens, and the protein chip can be imaged at different magnifications by changing the focal length. In this embodiment, the magnification of the CCD camera 6 can be varied in two stages, about 7x (low magnification) and about 18x (high magnification). The field size is, for example, 680 × 510 (μm) at the low magnification and, for example, 270 × 200 (μm) at the high magnification, but is not limited to these values.
  • A camera incorporating another image sensor such as a CMOS sensor may be used.
  • The light source 7 is fixed at a predetermined position above the wafer 1 and includes a flash lamp composed of, for example, a high-intensity white LED or a xenon lamp, and a flash lighting circuit that controls lighting of the flash lamp. Based on the flash signal output from the image processing PC 10, the light source 7 illuminates the predetermined portion of the wafer 1 by emitting a high-intensity flash for a predetermined time of, for example, several microseconds.
  • The XYZ stage 3 includes a motor 4 for moving the X stage 11 and the Y stage 12 along the movement axes 13 in the X, Y, and Z directions, and an encoder 5 for determining the movement distances of the X stage 11 and the Y stage 12.
  • the motor 4 is, for example, an AC servo motor, a DC servo motor, a stepping motor, a linear motor, etc.
  • the encoder 5 is, for example, various motor encoders, a linear scale, or the like.
  • Each time the X stage 11 and the Y stage 12 move by a unit distance in the X, Y, and Z directions, the encoder 5 generates an encoder signal containing the movement information (coordinate information) and outputs it to the image processing PC 10.
  • The image processing PC 10 receives the encoder signal from the encoder 5 and, based on the encoder signal, outputs a flash signal to the light source 7 and a trigger signal to the CCD camera 6.
  • the image processing PC 10 outputs a motor control signal for controlling the driving of the motor 4 to the motor 4 based on the encoder signal input from the encoder 5.
  • FIG. 2 is a block diagram showing a configuration of the image processing PC 10.
  • The image processing PC 10 includes a CPU (Central Processing Unit) 21, a ROM (Read Only Memory) 22, a RAM (Random Access Memory) 23, an input/output interface 24, an HDD (Hard Disk Drive) 25, a display unit 26, and an operation input unit 27, which are electrically connected to each other via an internal bus 28.
  • The CPU 21 comprehensively controls each part of the image processing PC 10 and performs various calculations in image processing to be described later.
  • the ROM 22 is a non-volatile memory that stores programs necessary for starting up the image processing PC 10 and other programs and data that do not require rewriting.
  • the RAM 23 is used as a work area of the CPU 21 and is a volatile memory that reads various data and programs from the HDD 25 and ROM 22 and temporarily stores them.
  • The input/output interface 24 connects the operation input unit 27, the motor 4, the encoder 5, the light source 7, and the CCD camera 6 to the internal bus 28; it is used to input operation signals from the operation input unit 27 and to exchange various signals with the motor 4, the encoder 5, the light source 7, and the CCD camera 6.
  • The HDD 25 stores, on its built-in hard disk, an OS (Operating System), various programs for performing the imaging processing and image processing described later, various other applications, image data such as the protein chip images captured by the CCD camera 6 as inspection target images and the model images (described later) created from them, and various data referenced in the imaging processing and image processing.
  • The display unit 26 includes, for example, an LCD (Liquid Crystal Display) or a CRT (Cathode Ray Tube), and displays the images captured by the CCD camera 6, the various processed images, and the operation screens.
  • the operation input unit 27 includes, for example, a keyboard and a mouse, and inputs an operation from a user in image processing described later.
  • FIG. 3 is a top view of the wafer 1.
  • 88 semiconductor chips 30 (hereinafter also simply referred to as chips 30 or dies 30) are formed on the wafer 1 in a grid.
  • the number of dies 30 is not limited to 88.
  • FIG. 4 is a top view showing one of the dies 30 on the wafer 1.
  • each die 30 is formed with a protein chip 35 having a plurality of circular recesses 50 over its entire surface.
  • Each die 30, that is, each protein chip 35 has a substantially square shape, and the length s of one side thereof is, for example, about several mm to several tens of mm, but is not limited to this dimension.
  • FIG. 5 is an enlarged view showing one recess 50 in the protein chip 35.
  • FIG. 5(a) is a top view of the recess 50, and FIG. 5(b) is a cross-sectional view of the recess 50 in the Z direction.
  • a thin film (membrane) 53 having a plurality of holes 55 is formed on the bottom surface 52 of each recess 50 of the protein chip 35.
  • the hole 55 is formed over the entire surface of the circular bottom surface 52 of each recess 50.
  • The diameter d1 of each recess 50 is, for example, several hundred μm, the diameter d2 of each hole 55 is, for example, several μm, and the height h is, for example, several hundred μm; however, the dimensions are not limited to these.
  • In the protein chip 35, latex fine particles (latex beads) are placed on the bottom surface 52 of each recess 50 as a carrier, and an antibody (protein) is introduced into the recess 50 as a reagent; the chip is a silicon container for screening proteins with specific properties that adsorb to the latex beads by antibody cross-reaction. The reagent (protein) that has not adsorbed to the latex beads is discharged through the holes 55 of the bottom surface 52, and only the protein having the specific property remains in the recess 50.
  • A thin film 53 such as a silicon oxide film is formed on one surface of the wafer 1 by CVD (Chemical Vapor Deposition).
  • a photoresist is applied to the other surface of the wafer 1, unnecessary portions are removed by a photolithography technique, and etching is performed using the resist pattern as a mask.
  • a plurality of recesses 50 are formed on the wafer 1 while leaving the thin film 53.
  • Subsequently, a photoresist is applied to the thin film 53 of each recess 50, the portions corresponding to the holes 55 are removed by photolithography, and etching is performed using the resist pattern as a mask.
  • the protein chip 35 composed of a plurality of recesses 50 having the thin film 53 formed with a large number of holes 55 as shown in FIG. 5 can be formed.
  • FIG. 6 is a flowchart showing a rough flow of operations until the defect detection apparatus 100 detects a defect.
  • First, the CCD camera 6 captures an image of each die 30 on which the protein chip 35 is formed, at the low magnification (step 1001). Specifically, as shown in FIG. 7, each die is divided into, for example, 18 rows × 13 columns, a total of 234 first divided areas 71, and an image of each first divided area 71 is acquired by the CCD camera 6 under the flash of the light source 7. The number and aspect ratio of the first divided areas 71 are not limited to these values. Each first divided area 71 is assigned in advance an ID identifying its position, and the HDD 25 of the image processing PC 10 stores the IDs. With these IDs, the image processing PC 10 can identify the first divided areas 71 existing at the same position in different dies 30. Each die 30 is also assigned an ID, so the image processing PC 10 can identify which first divided area 71 of which die 30 each first divided area 71 is.
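One possible encoding of these IDs (an assumption for illustration; the text only requires that each first divided area's position within its die, and each die, be identifiable) is a tuple of die ID plus row/column position:

```python
# Hypothetical ID scheme: a first divided area is identified by
# (die_id, row, col); the (row, col) part is its position within the die.
def area_id(die_id, row, col):
    return (die_id, row, col)

def same_position(a, b):
    # Areas in different dies correspond when their (row, col) match,
    # which is how same-position images are grouped for model creation.
    return a[1:] == b[1:]
```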
  • More specifically, the image processing PC 10 outputs a motor drive signal to the motor 4 based on the encoder signal from the encoder 5 to move the XYZ stage 3, generates a trigger signal and a flash signal based on the encoder signal, and outputs the trigger signal to the CCD camera 6 and the flash signal to the light source 7.
  • That is, the light source 7, based on the flash signal, flashes the protein chip 35 for several microseconds at a time, and the CCD camera 6, based on the trigger signal, continuously images the first divided areas 71 of the protein chips 35 on the wafer 1 at a speed of, for example, 50 frames/second.
  • FIG. 8 is a diagram showing the locus of the imaging position when the CCD camera 6 images the protein chip 35 for each first divided area 71.
  • For this imaging, two imaging paths are conceivable, as shown in FIGS. 8(a) and 8(b).
  • In the imaging path shown in FIG. 8(a), the CCD camera 6 starts from, for example, the leftmost die 30 among the dies 30 having the maximum Y coordinate out of the 88 dies 30 on the wafer 1, images the first divided areas 71 of the 18 rows × 13 columns of that die 30 continuously, row by row, and then moves to the next die 30 and images all of its first divided areas 71 row by row.
  • That is, the image processing PC 10 outputs a motor drive signal to the motor 4 so that the imaging position starts at the first divided area 71 in the uppermost row and leftmost column of one die 30, moves rightward in the X direction to the right end, moves down one row in the Y direction, moves leftward in the X direction to the left end, moves down one row to the next row, moves rightward again, and so on; when imaging of all the first divided areas 71 of one die 30 is completed, the imaging position moves to the adjacent die 30, where the same movement is repeated.
  • the CCD camera 6 continuously images each first divided area 71 based on the trigger signal output from the image processing PC 10 in accordance with this movement.
  • In the imaging path shown in FIG. 8(b), the CCD camera 6 continuously images the first divided areas 71 having corresponding IDs (existing at the same position) across the dies 30. That is, the image processing PC 10 outputs a motor drive signal to the motor 4 so that the imaging position of the CCD camera 6, starting from, for example, the leftmost die 30 among the dies 30 having the maximum Y coordinate, first passes over the first divided area 71 of each die 30 that has the corresponding ID and exists at the position where the X coordinate is minimum and the Y coordinate is maximum (the first divided areas 71 indicated by black circles), then moves in the same order over the first divided areas 71 adjacent to the first ones in the X direction that have corresponding IDs (the first divided areas 71 indicated by white circles), and so on, repeating the movement over the first divided areas 71 located at the same position between the dies 30. The CCD camera 6 repeats this operation of continuously imaging the plurality of first divided areas 71 having corresponding IDs for all the dies 30.
  • The image processing PC 10 selects, from the two imaging paths, the path with the shorter imaging time and causes the CCD camera 6 to capture the images.
  • When the imaging path shown in FIG. 8(a) is taken, the imaging interval of the first divided areas 71, that is, the movement interval of the XYZ stage 3, equals the interval between the first divided areas 71; when the imaging path shown in FIG. 8(b) is taken, the movement interval of the XYZ stage 3 equals the interval between the dies 30. The CPU 21 of the image processing PC 10 can therefore calculate the driving speed of the motor 4 from these movement intervals and the imaging frequency of the CCD camera 6, and can calculate the imaging time of each path by applying this driving speed over the entire imaging paths shown in FIGS. 8(a) and 8(b).
  • the image processing PC 10 compares the respective imaging times to determine which imaging route (a) and (b) is used, the imaging time is faster, and the imaging time is faster. Select the imaging path.
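The route selection above amounts to computing one traversal time per route from the move interval and the stage speed, then picking the shorter. A minimal sketch follows; all numeric values (region and die pitches, stage speed, and the 234-region/88-die counts) are illustrative stand-ins, not values from this specification.

```python
# Sketch of selecting the faster imaging route; pitches and speed are assumed.

def imaging_time(n_moves: int, interval_um: float, speed_um_s: float) -> float:
    """Time to traverse a route of n_moves equal steps at constant stage speed."""
    return n_moves * interval_um / speed_um_s

REGIONS_PER_DIE, DIES = 234, 88
SPEED = 1000.0                      # assumed stage drive speed [um/s]

# Route (a): step region-by-region inside each die (short moves, assumed 50 um).
t_a = imaging_time(REGIONS_PER_DIE * DIES, 50.0, SPEED)
# Route (b): hop die-by-die for every region position (long moves, assumed 800 um).
t_b = imaging_time(REGIONS_PER_DIE * DIES, 800.0, SPEED)

chosen = "a" if t_a <= t_b else "b"
```

With these assumed pitches route (a) wins, but a different die layout can flip the comparison, which is exactly why the PC computes both times instead of fixing one route.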
  • The image of each first divided region 71 captured by the CCD camera 6 is transmitted to the image processing PC 10 as an inspection target image, together with the ID identifying the first divided region 71, and is stored in the HDD 25 or RAM 23 via the input/output interface 24 of the image processing PC 10.
  • The inspection target image captured by the CCD camera 6 is a so-called VGA (Video Graphics Array) size (640 × 480 pixels) image, but is not limited to this size.
  • The CCD camera 6 can also capture inspection target images with different focal positions, by moving the XYZ stage 3 in the Z direction and thereby changing the distance between the CCD camera 6 and the protein chip 35.
  • Figure 9 shows this situation.
  • The XYZ stage 3 moves upward (Z1 direction in the figure) and downward (Z2 direction in the figure) based on the focus signal from the image processing PC 10, and the distance between the CCD camera 6 and the protein chip 35 is changed in, for example, three stages (focal points F1 to F3).
  • That is, the XYZ stage 3 first moves in the Z2 direction so that the focal point of the CCD camera 6 is aligned with the upper surface 51 of the protein chip 35 (focal point F1); the XYZ stage 3 then moves in the Z1 direction so that the focal position is aligned with the approximate middle position between the upper surface 51 and the bottom surface 52 of the protein chip 35 (focal point F2), and finally with the bottom surface 52 (focal point F3).
  • The number of variable focal positions is not limited to three.
  • In this way, the defect detection apparatus 100 captures images at a plurality of different focal positions, so that even when the inspection target is a three-dimensional structure having a thickness (depth or height) in the Z direction, such as the protein chip 35 according to the present embodiment, it is possible to acquire an image at each position in the Z direction and prevent defect detection omissions.
  • The CCD camera 6 sorts the images captured at each focal position along the route shown in Fig. 8(a) or (b) by focal position and transmits them to the image processing PC 10; the image processing PC 10 identifies these images as inspection target images for each focal position and stores them in the HDD 25 or RAM 23. That is, as described above, when the focal points are F1 to F3, the CCD camera 6 repeats the movement along the imaging path shown in Fig. 8(a) or (b) three times in total, once for each focal position.
  • The CPU 21 of the image processing PC 10 acquires each inspection target image from the CCD camera 6 in parallel with the imaging process by the CCD camera 6, and performs a filtering process using a high-pass filter on each acquired inspection target image (step 102).
  • The protein chip 35 in the present embodiment has the thin film 53 on the bottom surface 52, and uneven brightness may occur depending on the flatness of the thin film 53, for example when the thin film 53 is bent.
  • Luminance unevenness may also occur due to deviation of the optical axis of the CCD camera 6 or non-uniformity in how the light from the light source 7 strikes the surface. Such luminance unevenness is extracted as a difference in the difference extraction process with the model image described later, which leads to erroneous defect detection.
  • This uneven brightness appears as a portion where the brightness changes gently in the inspection target image; that is, the luminance unevenness component is a low-frequency component. Therefore, in this embodiment, a high-pass filter is applied to each captured inspection target image to remove this low-frequency component.
  • FIG. 10 is a flowchart showing the detailed flow of this high-pass filter process.
  • The CPU 21 of the image processing PC 10 reads a copy of the image to be inspected from the HDD 25 into the RAM 23 (step 61), and applies Gaussian blurring processing to the image to be inspected (step 62).
  • The blur setting value is, for example, a radius of about 15 to 16 pixels, but is not limited to this value.
  • the output image obtained by the Gaussian blurring process (hereinafter referred to as the Gaussian blurring image) is an image in which only the low-frequency components remain as a result of smoothing the high-frequency components in the original image to be inspected.
  • Next, the CPU 21 subtracts the Gaussian blurred image from the original image to be inspected (step 63).
  • In this subtraction, the low-frequency component at each position in the Gaussian blurred image is subtracted from the corresponding position in the original inspection target image, so the original high-frequency component remains while the original low-frequency component is removed.
  • The image obtained by this subtraction process is therefore an image in which only the high-frequency component remains, with the low-frequency component removed from the original image to be inspected.
  • The CPU 21 updates the original image to be inspected with the image after the subtraction process and stores it in the HDD 25 (step 64).
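Steps 61 to 64 amount to subtracting a Gaussian-blurred copy of the image from the original, leaving only the high-frequency component. A minimal numpy-only sketch follows; the separable-convolution blur and the sigma value are illustrative assumptions, not the device's actual implementation.

```python
import numpy as np

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur built from two 1-D 'same' convolutions."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, out)

def highpass(img: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """Remove slowly varying luminance unevenness (the low-frequency component)
    by subtracting the Gaussian-blurred image from the original."""
    img = img.astype(np.float64)
    return img - gaussian_blur(img, sigma)

# A perfectly uniform image has no high-frequency content, so away from the
# borders the high-pass output is numerically zero:
flat = np.full((64, 64), 100.0)
assert abs(highpass(flat, sigma=3.0)[32, 32]) < 1e-6
```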
  • The CPU 21 determines whether each inspection target image has been captured for all the first divided areas 71, and whether the filtering process by the high-pass filter has been performed on all the inspection target images (steps 103 and 104). If it is determined that all the inspection target images have been captured and filtered (Yes), the process proceeds to creating a model image of each divided region using the filtered inspection target images (step 105). In the present embodiment, the imaging process of the inspection target images and the high-pass filter process are performed in parallel, but the high-pass filter processing may instead be performed after the imaging process is completed for all the first divided areas 71 (that is, the processing of step 102 and the processing of step 103 may be reversed).
  • Fig. 11 is a flowchart showing the flow of processing until the image processing PC 10 creates a model image.
  • Fig. 12 conceptually shows how the image processing PC 10 creates a model image.
  • First, the CPU 21 of the image processing PC 10 reads, from the HDD 25 into the RAM 23, the high-pass-filtered inspection target images that have IDs corresponding to each die 30 (step 41), and aligns the read inspection target images with one another (step 42).
  • The CPU 21 detects, for example, the edge portion of the recess 50 of the protein chip 35 in each of the inspection target images, which are images of the first divided areas 71 existing at the same position between the dies 30, and aligns the images while adjusting them by shifting in the X and Y directions and rotating in the θ direction so that the shapes overlap each other.
  • The CPU 21 reads the inspection target images 40a to 40f, ... having corresponding IDs, obtained by imaging the first divided area 71a existing at the same position between the dies 30.
  • In the present embodiment, the total number of inspection target images 40 having corresponding IDs is 88.
  • The CPU 21 overlaps all 88 inspection target images 40 and aligns them based on the shape of the recess 50 and the like. By performing alignment based on the shape of the recess 50 in this way, easy and accurate alignment is possible.
  • The CPU 21 then calculates the average luminance value for each pixel at the same position in the inspection target images 40, with the above alignment completed (step 43).
  • When the CPU 21 has calculated the average luminance values for all the pixels of the inspection target images 40 in the first divided area 71a (Yes in step 44), it generates, based on the calculation result, the image composed of these average luminance values as the model image 45 and stores it in the HDD 25 (step 45).
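The per-pixel averaging of steps 43 to 45 can be sketched as below; the toy 4×4 image stack stands in for the 88 aligned inspection images of one region.

```python
import numpy as np

def make_model_image(aligned_images: list) -> np.ndarray:
    """Average the luminance at every pixel position over the aligned
    inspection images; a defect present in only one image is averaged away."""
    stack = np.stack([im.astype(np.float64) for im in aligned_images])
    return stack.mean(axis=0)

# 87 clean images plus one with a bright defect pixel:
clean = [np.full((4, 4), 100.0) for _ in range(87)]
defective = np.full((4, 4), 100.0)
defective[1, 1] = 255.0
model = make_model_image(clean + [defective])
# The defect contributes only (255 - 100) / 88, i.e. under 2 grey levels:
assert abs(model[1, 1] - 100.0) < 2.0
```

This is the mechanism by which defects in individual images are "absorbed" into a near-ideal model, as the text explains next.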
  • The CPU 21 repeats the above processing and determines whether the model images 45 have been created for all the first divided regions 71 of the dies 30 (step 46). If it is determined that all the model images 45 have been created (Yes), the process ends.
  • As described above, in this embodiment, the model image 45 is created based on the actual inspection target images 40, each of which may contain defects such as foreign matter, scratches, and thin-film cracks. However, since each die 30 is divided into a plurality of (234 in this embodiment) first divided regions 71 and the average luminance values are calculated over a plurality of (88 in this embodiment) dies 30, the defects in individual inspection target images 40 are absorbed, a model image 45 very close to the ideal shape can be created, and highly accurate defect detection becomes possible.
  • In step 106, the CPU 21 performs a difference extraction process between the model image 45 and each high-pass-filtered inspection target image 40 for each first divided area 71.
  • The CPU 21 aligns the model image 45 and each inspection target image 40 while adjusting them in the X, Y, and θ directions based on the shape of the recess 50 present in both, in the same manner as the alignment process at the time of creating the model image 45 described above, extracts the difference by subtracting the two images, performs binarization processing, and outputs the result as a difference image.
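The subtraction-plus-binarization of this difference extraction can be sketched as follows; the binarization threshold is an assumed example value, not one given in the specification.

```python
import numpy as np

def difference_image(model: np.ndarray, inspected: np.ndarray,
                     thresh: float = 30.0) -> np.ndarray:
    """Absolute per-pixel difference of the aligned pair, binarized so that
    1 marks pixels where the inspected image deviates from the model."""
    diff = np.abs(inspected.astype(np.float64) - model.astype(np.float64))
    return (diff > thresh).astype(np.uint8)

model = np.full((8, 8), 100.0)
inspected = model.copy()
inspected[2:4, 5:7] = 200.0            # a defect-like deviation
binary = difference_image(model, inspected)
assert binary.sum() == 4 and binary[2, 5] == 1
```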
  • A Blob is a cluster of pixels in the difference image having a predetermined grayscale value (or one within a predetermined range).
  • The CPU 21 performs processing for extracting, from the Blobs in the difference image, only the Blobs having a predetermined area (for example, 3 pixels) or more.
  • FIG. 13 is a diagram showing the difference image before and after the Blob extraction process.
  • Figure (a) shows the difference image 60 before Blob extraction, and figure (b) shows the difference image after Blob extraction (hereinafter referred to as the Blob extraction image 65).
  • a white portion is a portion that appears as a difference between the model image 45 and the inspection target image 40.
  • In the difference image 60, in order to emphasize the difference, a process for enhancing the luminance values of the original difference image by, for example, about 40 times has been applied.
  • In addition to defects such as foreign matter and scratches, there is small noise 84, shown in the part surrounded by a white broken line, caused by various factors such as contamination of the lens 14 of the CCD camera 6 and non-uniformity of the illumination from the light source 7. If this noise 84 remains, it will lead to false detection of a defect, so it is necessary to remove it.
  • The noise 84 has a smaller area than a defect such as a foreign object or a scratch. Therefore, as shown in Fig. 13(b), the noise 84 can be removed by filtering the difference image 60, removing Blobs below a predetermined area and extracting only Blobs at or above the predetermined area.
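The area-based Blob filtering can be sketched with a simple 4-connected labeling pass. The 3-pixel threshold follows the example in the text; the implementation itself is illustrative.

```python
import numpy as np
from collections import deque

def blob_filter(binary: np.ndarray, min_area: int = 3) -> np.ndarray:
    """Keep only 4-connected white-pixel clusters (Blobs) of at least
    min_area pixels; smaller clusters are treated as noise and removed."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros_like(binary)
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                # flood-fill one Blob, collecting its pixels
                blob, queue = [], deque([(y, x)])
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(blob) >= min_area:        # keep only large-enough Blobs
                    for by, bx in blob:
                        out[by, bx] = 1
    return out

img = np.zeros((6, 6), dtype=np.uint8)
img[1:3, 1:3] = 1          # 4-pixel defect candidate (kept)
img[4, 4] = 1              # 1-pixel noise (removed)
kept = blob_filter(img)
assert kept.sum() == 4 and kept[4, 4] == 0
```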
  • By the Blob extraction process, only defects such as the foreign matter 82 (dust and the like) attached to the protein chip 35 and the thin film crack 81 in the recess 50 of the protein chip 35 remain in the Blob extraction image 65. At this stage, the CPU 21 does not recognize the types of these defects (foreign object, crack, scratch, and so on), but merely recognizes them as defect candidates.
  • When a defect candidate is detected, the CPU 21 determines whether further images of the protein chip 35 in which the defect candidate was detected need to be taken at a high magnification (with a narrow field of view) (step 109).
  • The CPU 21 determines, for example, whether a user operation has been input instructing that the first divided area 71 to which the inspection target image 40 showing the defect candidate belongs be imaged in more detail at a high magnification. If it is determined that high-magnification imaging is necessary (Yes), the first divided area 71 in which the defect candidate was detected, and the other first divided areas 71 having the corresponding ID in each die 30, are each further finely divided into second divided areas 72 and imaged by the CCD camera 6 at a high magnification (step 113).
  • In the defect classification process described later, whether a Blob is a defect, and its classification, are determined based on, for example, the extracted Blob area; however, with the Blob extraction image 65 created from an inspection target image captured at a low magnification, the Blob area may not be calculated accurately. In that case, the correct shape of the defect cannot be recognized and the defect cannot be accurately classified. Therefore, by imaging the protein chip 35 at a higher magnification, it becomes possible to accurately determine later whether a candidate is a defect and to perform the defect classification process.
  • FIG. 14 is a diagram conceptually showing a state in which the first divided region 71 in which the defect candidate is detected is imaged at a high magnification for each second divided region 72.
  • As shown in the figure, this first divided region 71a is further divided into a total of nine second divided areas 72 in 3 rows and 3 columns.
  • The first divided regions 71 having IDs corresponding to the first divided region 71a are likewise each divided into second divided regions 72.
  • Each second divided region 72 is given an ID for identifying the position of each second divided region 72 in each die 30, similarly to each first divided region 71.
  • The CCD camera 6 images each second divided area 72 at the same size (VGA size) as the first divided area 71; that is, the CCD camera 6 images the second divided region 72 at three times the magnification used for the first divided region 71.
  • The captured images are stored in the HDD 25 of the image processing PC 10 as inspection target images together with the IDs of the respective second divided regions 72.
  • Also for this imaging, the CPU 21 selects the faster of paths (a) and (b) in Fig. 8, as in the imaging of the first divided areas 71. That is, the CPU 21 determines which is faster: the path that images all the second divided areas 72 in the first divided area 71 of one die 30 and then each second divided area 72 in the corresponding first divided area 71 of the next die 30, or the path that collectively images the second divided areas 72 having corresponding IDs across the corresponding first divided areas 71 of the dies 30, and has the images captured along the faster path.
  • When the imaging of the first divided area 71 in which the defect candidate was detected and of the second divided areas 72 in the corresponding first divided areas 71 is finished (step 113), the CPU 21, similarly to the processing in steps 102 to 107 described above, performs filtering with a high-pass filter (step 114) and model image creation (step 117) for each inspection target image, performs difference extraction processing on each inspection target image obtained by imaging the second divided regions 72 in the first divided region 71 in which the defect candidate was detected (step 118), and further performs the filtering process by Blob extraction (step 119).
  • Since each inspection target image of the second divided areas 72 is captured at a higher resolution than the inspection target images of the first divided areas 71, the threshold value (in pixels) of the Blob area extracted in the Blob extraction process of step 119 is set to a value larger than the threshold used in the Blob extraction process for each first divided region 71 in step 107.
  • FIG. 15 is a diagram comparing the Blob extraction images 65 extracted from the inspection target images of the first divided region 71 and the second divided region 72.
  • Figure (a) shows the Blob extraction image 65a extracted from the first divided region 71, and figure (b) shows the Blob extraction image 65b extracted from the second divided region 72.
  • In figure (b), the first divided area 71 is divided into nine second divided areas 72, and in the second divided area 72 in which the foreign matter 82 appears, the foreign matter 82 is displayed at a high resolution, so that its area can be accurately calculated.
  • Note that, instead of performing the above-described determination of the necessity of high-magnification imaging in step 109, the high-magnification imaging may be performed automatically whenever a defect candidate is extracted in step 108.
  • Furthermore, when the performance of the image processing PC 10, the motor 4, and the encoder 5 is high and the processing time is within the allowable range, the second divided areas 72 may be imaged not only in the first divided areas 71 from which defect candidates were extracted but in all the first divided areas 71 of all the dies 30, and the model images 45 may be created for all the second divided areas 72. That is, the imaging process for each second divided area 72 may be performed without the determination of the necessity of high-magnification imaging in step 109; if, after performing the high-pass filter processing and the model image creation processing, the CPU 21 determines that there is a first divided area 71 in which a defect candidate was detected, the Blob extraction process may then be performed for each second divided area 72 in that first divided area 71.
  • If it is determined in step 109 that high-magnification imaging is not necessary (No), or when the processing of the second divided regions 72 in steps 113 to 119 above is finished, the CPU 21 performs the classification process for the defect candidates appearing in the Blob extraction image 65 (step 110).
  • Specifically, for each Blob that appears white in the Blob extraction image 65, the CPU 21 classifies whether the Blob is a defect, and whether the type of defect is a foreign object, a scratch, or a crack, based on feature points such as area, perimeter, roundness, and aspect ratio.
  • For each type of defect, such as foreign objects, scratches, and cracks, the image processing PC 10 collects sample images in advance and saves their feature point data as a feature point database in the HDD 25 or the like, and compares the saved feature point data with the feature points detected from each Blob in the Blob extraction image 65 of the inspection target.
  • For example, the foreign matter in the present embodiment has a size of about several μm to several tens of μm, and a scratch has a length of about several μm to several hundred μm.
  • When comparing foreign objects with scratches, a scratch has an extremely horizontal or vertical aspect ratio and a longer perimeter than a foreign object.
  • A thin film crack appears as a curved line at the edge of the recess 50, and the roundness of the recess 50 is lower than that of a normal one.
  • The image processing PC 10 stores these data as feature point data, and classifies the defects by comparison with the feature points of each detected Blob.
  • The protein chip 35 has holes 55 with a diameter of, for example, a few μm in the thin film 53 of the bottom surface 52 of the recess 50.
  • The holes 55 serve to discharge the reagent. Therefore, even when foreign matter adheres inside the recess 50, if its diameter is smaller than that of the holes 55, it is discharged from the holes 55 together with the reagent, and there is no problem when the protein chip 35 is used for screening. Therefore, for foreign objects, the diameter of the holes 55 is used as a threshold value, and foreign objects with a smaller diameter are not treated as defects. On the other hand, scratches and cracks are treated as defects unconditionally, because the reagent leaks from them and normal screening cannot be performed.
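The classification rules above combine the feature points named in the text (area, perimeter, roundness, aspect ratio) with the hole-diameter threshold for foreign matter. A rule-of-thumb sketch follows; every numeric threshold, including the hole diameter and the pixel scale, is an illustrative assumption, not a value from the specification.

```python
import math

def classify_blob(area: float, perimeter: float, aspect_ratio: float,
                  hole_diameter_um: float = 5.0, pixel_um: float = 1.0) -> str:
    """Classify one Blob from its feature points. Thresholds are assumed."""
    # roundness (circularity): 1.0 for a perfect circle, lower for elongated
    # or jagged shapes
    roundness = 4 * math.pi * area / (perimeter ** 2) if perimeter else 0.0
    if aspect_ratio > 5 or aspect_ratio < 0.2:
        return "scratch"              # extremely horizontal/vertical Blob
    if roundness < 0.3:
        return "crack"                # curved line along the recess edge
    # foreign matter: compare its equivalent diameter with the hole diameter
    diameter_um = 2 * math.sqrt(area / math.pi) * pixel_um
    if diameter_um <= hole_diameter_um:
        return "not a defect"         # small enough to pass through a hole 55
    return "foreign matter"

assert classify_blob(area=400, perimeter=80, aspect_ratio=1.0) == "foreign matter"
assert classify_blob(area=40, perimeter=120, aspect_ratio=8.0) == "scratch"
```

Note how the hole diameter acts only on round, foreign-matter-like Blobs, while scratches and cracks are classified as defects regardless of size, matching the rule stated above.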
  • When the second divided areas 72 have been imaged at a higher magnification, the CPU 21 measures the feature points using the Blob extraction image 65 extracted from the second divided areas 72 and classifies the various defects. By performing high-magnification imaging as necessary in this way, processing after defect detection can be carried out smoothly.
  • In step 111, the CPU 21 judges whether each defect candidate is a defect, and when all the defect candidates have been classified (Yes in step 111), it outputs the Blob extraction image and information on the detected defect types to the display unit 26 as the detection result (step 112) and ends the process.
  • the image processing PC 10 may display an image on the display unit 26 so that it can be recognized at a glance which type of defect exists at which position on the wafer 1.
  • Based on the output result, the user performs removal work if there is a foreign object, and discards the protein chip 35 as a defective product if there is a scratch or a crack. If no defect candidate is detected in step 108, the protein chip 35 under inspection is treated as a non-defective product, and the defect detection process ends.
  • As described above, in this embodiment, a model image can be created based on the inspection target images 40 of each first divided region 71 or each second divided region 72, so that highly accurate defect detection processing can be performed.
  • In addition, since the model image 45 is created from inspection target images 40 captured under the same optical and illumination conditions, erroneous detection due to differences in these conditions can be prevented.
  • In the above embodiment, the protein chip is applied as the MEMS device to be inspected, but the MEMS device is not limited to this. For example, an electron beam irradiation plate (EB window) may be used as the inspection object.
  • FIG. 16 is a view showing the appearance of this electron beam irradiation plate.
  • Figure (a) is a top view, and figure (b) shows a cross-sectional view in the Z direction of figure (a).
  • As shown in the figures, the electron beam irradiation plate 90 includes a plate 92 having a plurality of window holes 95 for irradiating an electron beam (EB), and a thin film 91 provided so as to cover each window hole 95.
  • The length w in the X direction and the length l in the Y direction of the plate 92 are each, for example, several tens of mm, forming a rectangular shape, and the length h in the Z direction is, for example, about several mm; however, the plate is not restricted to these lengths and shapes.
  • Each window hole 95 is, for example, a square with a side s of several mm, but is not limited to this length and shape, and may be rectangular.
  • In the figure, the number of window holes 95 is 36, in 4 rows × 9 columns, but is not limited to this number.
  • This electron beam irradiation plate 90 constitutes an electron beam irradiation apparatus by being connected to the end of a vacuum vessel (not shown). The electron beam (EB) emitted from an electron beam generator provided inside the vacuum vessel is emitted into the atmosphere through the window holes 95, as indicated by the arrows in the figure, and irradiates the irradiation object.
  • This electron beam irradiation apparatus is used for various purposes such as sterilization, physical property modification, and chemical property modification of an object irradiated with an electron beam.
  • By providing the thin film 91, it is possible to irradiate an electron beam while maintaining the vacuum state. Note that a plurality of thin films 91 may be stacked to form a multilayer film structure.
  • This electron beam irradiation plate 90 is also formed on each die 30 on the wafer 1 by an etching process using a photolithography technique or the like, similar to the protein chip 35 in the above-described embodiment.
  • the size of each die is the same as the size of the plate 92.
  • The defect detection apparatus 100 also performs, for the electron beam irradiation plate 90, the same imaging process, high-pass filter process, model image creation process, Blob extraction process, and so on as for the protein chip 35 described above, and detects defects such as foreign matter, scratches, and cracks on the electron beam irradiation plate 90. It is also possible to take images at low and high magnifications and at multiple focal points in the Z direction.
  • In the alignment at the time of model image creation and difference extraction, each inspection target image is aligned while being adjusted in the X, Y, and θ directions so that the edge shapes of the window holes 95 appearing in the images overlap.
  • For defect classification, the image processing PC 10 creates its own feature point data based on samples of the electron beam irradiation plate 90 and the like, and classifies defects accordingly.
  • Besides these, other MEMS devices can be applied as inspection objects, such as various sensors (acceleration sensors, pressure sensors, air flow sensors), printer heads for ink jet printers, micromirror arrays for reflective projectors, other actuators, and various biochips.
  • In the above embodiment, each image necessary for image processing, such as the inspection target images 40, the model images 45, the difference images 60, and the Blob extraction images 65, is stored in the HDD 25; however, these images may instead be temporarily stored in the RAM 23, or in a buffer area provided separately from the RAM 23, and deleted when the defect classification process is completed.
  • In particular, images in which no difference was extracted by the difference extraction, that is, images in which no defect was detected, are unnecessary in the subsequent processing and may therefore be deleted one by one.
  • Likewise, when the second divided regions 72 are imaged at a higher magnification, the inspection target images of the first divided regions 71 captured at a low magnification are no longer necessary after that imaging, so these images may be deleted when the imaging of the second divided regions 72 is completed.
  • Since the number of images to be captured is enormous, processing in this way reduces the storage consumption of the RAM 23 and HDD 25 and thus the load on the image processing PC 10.
  • In the above-described embodiment, the filtering process using the high-pass filter is performed in consideration of cases where the thin film 53 on the bottom surface 52 of each recess 50 of the protein chip 35 is bent; for a MEMS device whose surface flatness is high and in which brightness unevenness does not occur, this high-pass filter processing may be omitted.
  • the image processing PC 10 may measure the flatness of the imaging surface of the MEMS device to be inspected and determine whether or not to execute the high-pass filter processing according to the flatness.
  • FIG. 1 is a diagram showing a configuration of a defect detection apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a configuration of an image processing PC in an embodiment of the present invention.
  • FIG. 3 is a top view of a wafer in one embodiment of the present invention.
  • FIG. 4 is a top view showing one of the dies on the wafer in one embodiment of the present invention.
  • FIG. 5 is an enlarged view showing one recess of a protein chip according to an embodiment of the present invention.
  • FIG. 6 is a flowchart showing a general flow of operations until a defect detection apparatus detects a defect in an embodiment of the present invention.
  • FIG. 7 is a diagram showing a state in which each die is divided into a plurality of divided regions in an embodiment of the present invention.
  • FIG. 8 is a diagram showing a locus of an imaging position when a CCD camera images a protein chip for each divided region in an embodiment of the present invention.
  • FIG. 9 is a diagram showing a state in which the CCD camera captures an inspection object image at different focal positions in an embodiment of the present invention.
  • FIG. 10 is a flowchart showing a detailed flow of high-pass filter processing in one embodiment of the present invention.
  • FIG. 11 is a flowchart showing the flow of processing until the image processing PC creates a model image in an embodiment of the present invention.
  • FIG. 12 is a diagram conceptually showing how an image processing PC creates a model image in an embodiment of the present invention.
  • FIG. 13 is a view showing a difference image before and after the Blob extraction process in one embodiment of the present invention.
  • FIG. 14 is a diagram conceptually showing a state in which a first divided area where a defect candidate is detected is imaged at a high magnification for each second divided area in one embodiment of the present invention.
  • FIG. 15 is a diagram showing a comparison of each Blob extraction image extracted from each inspection target image in the first divided region and the second divided region in the embodiment of the present invention.
  • FIG. 16 is a view showing the appearance of an electron beam irradiation plate in another embodiment of the present invention.


Abstract

[PROBLEM TO BE SOLVED] To efficiently detect defects of a MEMS device with a high degree of accuracy, without requiring an absolute model image and while preventing erroneous detection. [MEANS FOR SOLVING THE PROBLEMS] A defect detecting device (100) picks up an image of a protein chip (35) formed in each die (30) of a wafer (1) for every first divided region (71), each die being divided into a plurality of first divided regions. The device stores each image, together with an ID identifying its divided region (71), as an inspection target image, removes low-frequency components from the inspection target image with a high-pass filter, then calculates the average brightness of each pixel over the inspection target images having corresponding IDs and forms a model image for each first divided region (71). The device extracts the difference between the model image and each inspection target image as a difference image, filters the difference image by Blob extraction, extracts Blobs of at least a predetermined area as defect candidates, and classifies the kinds of defects according to the feature amounts of the extracted Blobs.

Description

Specification

Defect detection apparatus, defect detection method, information processing apparatus, information processing method, and program

Technical Field

[0001] The present invention relates to a defect detection apparatus capable of inspecting the appearance of a microstructure, such as MEMS (Micro Electro Mechanical Systems) formed on a semiconductor wafer, to detect defects such as foreign matter and scratches, and to a defect detection method, an information processing apparatus, an information processing method, and a program therefor.
Background Art

[0002] In recent years, MEMS devices, which integrate diverse functions from the fields of machinery, electronics, optics, chemistry, and so on, in particular by using semiconductor microfabrication technology, have attracted attention. MEMS devices already in practical use include acceleration sensors, pressure sensors, and airflow sensors, employed for example as various automotive and medical sensors. MEMS devices are also used in printer heads for inkjet printers, micromirror arrays for reflective projectors, and other actuators. Furthermore, MEMS devices are applied in fields such as chemical synthesis and bioanalysis, for example in chips for protein analysis (so-called protein chips) and chips for DNA analysis.

[0003] Because a MEMS device is an extremely fine structure, inspection of its appearance for defects such as foreign matter and scratches is important in manufacturing. Conventionally, the appearance of MEMS devices was inspected manually using a microscope, but such inspection takes a great deal of time, and because it relies on visual judgment by an inspector, the results are subject to judgment variation.

[0004] As a technique for automating such appearance inspection, Patent Document 1 below describes a technique in which an arbitrary plurality of non-defective inspection objects are imaged with, for example, a CCD (Charge Coupled Device) camera and stored in memory as a plurality of non-defective images; after these non-defective images are aligned, the mean and standard deviation of the luminance values are calculated for each pixel at the same position in the images; and the quality of a product under inspection is judged by comparing these mean luminance values and standard deviations with the luminance value of each pixel of an image of the product under inspection.

[0005] Patent Document 2 below describes, for the inspection of patterns on wiring boards, printed matter, and the like, creating reference image data that serves as a non-defective standard by imaging a plurality of reference patterns, storing each reference pattern image, aligning the reference patterns, and performing a mean-value or median-value calculation between the image data for each pixel so as to exclude highly scattered data and abnormal values; pattern defects are then detected by comparing this reference image data with the inspection image data.

Patent Document 1: Japanese Patent Application Laid-Open No. 2005-265661 (FIG. 1, etc.)
Patent Document 2: Japanese Patent Application Laid-Open No. H11-73513 (paragraph [0080], etc.)
Patent Document 3: Japanese Patent Application Laid-Open No. 2001-209798 (FIG. 1, etc.)
Disclosure of the Invention

Problems to Be Solved by the Invention

[0006] However, in the techniques described in Patent Documents 1 and 2, the non-defective image data or reference image data serving as the inspection standard (hereinafter referred to as model image data) is created from images of a plurality of non-defective products prepared separately from the images to be inspected. Creating the model image data therefore requires a preliminary process of judging whether each product is non-defective and selecting the non-defective ones, and since this process inevitably relies on human labor, it costs corresponding time and effort. Moreover, for the inspection of structures such as MEMS devices, which are extremely small and in which even slight scratches or foreign matter constitute defects, preparing an absolutely non-defective product (model) is itself difficult from the standpoint of maintaining and managing the model image data.

[0007] Furthermore, in the techniques of Patent Documents 1 and 2, inspection objects such as wiring boards are placed on a table and imaged one at a time. If the inspection objects exhibit manufacturing-related individual differences that are not defects, those individual differences may be erroneously detected as defects, lowering the inspection accuracy.

[0008] In addition, when the inspection object is a three-dimensional object with unevenness or curvature, luminance unevenness may appear in the captured image depending on the illumination and optical conditions, and noise may be mixed in when the inspection image is captured; such luminance unevenness and noise may also be erroneously detected as defects.

[0009] In relation to this problem, Patent Document 3 above describes a method in which the entire surface of a wafer having a matrix-like repetitive pattern is captured as a single image; a reference image (model image) from which shading unevenness due to light diffraction has been removed is created by applying smoothing such as a median filter to the captured image; the reference image and the captured image are compared for each region of interest to obtain an image in which only steep changes in shading are extracted; and the brightness values of the extracted image are binarized to judge the presence or absence of defective portions.

[0010] In the technique of Patent Document 3, however, the inspection uses a model image created from a single image of the entire surface of a 200 mm diameter wafer, and the inspection objects are objects with repetitive patterns, such as semiconductor wafers. It is therefore difficult to apply this technique to the appearance inspection of MEMS devices, in which fine scratches and foreign matter on the order of a few micrometers as well as luminance unevenness are at issue, and which also have irregular shapes and patterns.

[0011] In view of the circumstances described above, an object of the present invention is to provide a defect detection apparatus, a defect detection method, an information processing apparatus, an information processing method, and a program therefor capable of detecting defects in MEMS devices with high accuracy and efficiency, without requiring an absolute model image, while preventing false detection caused by luminance unevenness, noise, and the like.
Means for Solving the Problems

[0012] To solve the problems described above, a defect detection apparatus according to a principal aspect of the present invention comprises: imaging means for imaging a microstructure formed on each of a plurality of dies on a semiconductor wafer, for each of a plurality of divided regions into which the region of each die is divided; illumination means for illuminating the microstructure being imaged; storage means for storing the captured image of each divided region as an inspection target image in association with identification information identifying the position of that divided region within each die; first filtering means for applying, to each stored inspection target image, filtering that removes low-frequency components from the image; model image creation means for creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions whose identification information corresponds across the dies; and detection means for detecting defects in the microstructure by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.

[0013] Here, the microstructure is a so-called MEMS (Micro Electro Mechanical Systems) device. The imaging means is, for example, a camera with a built-in image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor. The illumination means is, for example, a flash light-emitting unit such as a high-intensity white LED (Light Emitting Diode) or a xenon lamp. The first filtering means is, in other words, a high-pass filter. Defects include, for example, scratches and foreign matter.
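The flow defined in [0012] (images stored per divided region under an ID, one model image per ID obtained by pixel-wise averaging across dies, and comparison of each image against its model) can be sketched roughly as follows. This is an illustrative Python/NumPy sketch, not the patent's implementation: the `(die, region_id)` keying, the fixed deviation threshold, and the function names are assumptions introduced here.

```python
import numpy as np
from collections import defaultdict

def build_models(images_by_key):
    """One model image per region ID: the pixel-wise mean, over all dies,
    of the (already high-pass filtered) images of that region."""
    grouped = defaultdict(list)
    for (die, region_id), img in images_by_key.items():
        grouped[region_id].append(img)
    return {rid: np.mean(imgs, axis=0) for rid, imgs in grouped.items()}

def compare_to_models(images_by_key, models, thresh=30.0):
    """Flag every (die, region_id) whose image deviates from its
    region's model by more than `thresh` at any pixel."""
    flagged = []
    for (die, region_id), img in images_by_key.items():
        if np.max(np.abs(img - models[region_id])) > thresh:
            flagged.append((die, region_id))
    return flagged
```

Because every inspection target image contributes to its own model, a defect present in a single die is diluted by the averaging (roughly by a factor of 1/N for N dies) rather than reproduced in the model, so the comparison still isolates it.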
[0014] With this configuration, the model image is created from the images of the divided regions that are themselves under inspection, so the appearance of microstructures, for which obtaining an absolutely non-defective model (sample) is difficult, can be inspected with high accuracy.

[0015] In addition, a plurality of the microstructures are formed on one semiconductor wafer, and the inspection target images are captured under identical illumination conditions by the imaging means and the light source. By creating each model image from these inspection target images and then comparing each model image with the inspection target images, defects in the plurality of microstructures can be detected accurately and stably, unaffected by changes in illumination conditions such as shadows or by fixed noise such as dirt on the lens of the imaging means.

[0016] Furthermore, the first filtering means can remove luminance unevenness (low-frequency components) caused by deviation of the optical axis of the imaging means, the flatness of the microstructure, and the like. Since both model image creation and defect detection are performed on the filtered images, this single filtering operation improves both the quality of the model image and the defect detection accuracy.
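The patent does not specify the exact form of the high-pass filter. One common construction, shown here purely as an illustrative sketch, subtracts a strongly Gaussian-blurred (low-frequency) copy of the image from the original, so that slow shading is removed while fine structure and small defects survive:

```python
import numpy as np
from scipy import ndimage

def remove_shading(img, sigma=15.0):
    """High-pass filtering by subtraction: the Gaussian blur keeps only
    slow luminance variation (shading from optics or wafer flatness),
    so subtracting it leaves fine structure and potential defects."""
    low = ndimage.gaussian_filter(img.astype(float), sigma)
    return img - low
```

The cutoff is set by `sigma`: the larger it is, the slower the luminance variation that is treated as shading and removed.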
[0017] Furthermore, because each captured inspection target image serves both in the model image creation process and in the defect detection process, defects can be detected more efficiently than when model images are created separately.

[0018] In the above defect detection apparatus, the detection means may include: difference extraction means for superimposing each model image on the inspection target images whose identification information corresponds to that model image and extracting the difference between the two images as a difference image; and second filtering means for applying filtering that removes, from the series of pixel regions in the extracted difference image having luminance equal to or greater than a predetermined value, those pixel regions having an area equal to or smaller than a predetermined area.

[0019] Here, the second filtering means performs filtering by analyzing, for example, clusters of pixels in the difference image that have a predetermined grayscale value (or a value within a predetermined range), that is, blobs. Applying this additional filtering by the second filtering means prevents noise components in the difference image from being erroneously detected as defects, further improving the detection accuracy.
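Blob filtering of the difference image can be sketched as follows (illustrative only; the binarization level and the minimum area are assumed values, not taken from the patent). Pixels above the level are grouped into connected regions, and regions smaller than the minimum area are discarded as noise:

```python
import numpy as np
from scipy import ndimage

def filter_blobs(diff_img, level=50, min_area=5):
    """Keep only connected regions of pixels at or above `level` whose
    area is at least `min_area` pixels; smaller blobs are treated as
    noise rather than defects."""
    mask = diff_img >= level
    labels, n = ndimage.label(mask)
    kept = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_area:
            kept.append({"area": int(ys.size),
                         "bbox": (int(ys.min()), int(xs.min()),
                                  int(ys.max()), int(xs.max()))})
    return kept
```

Feature quantities such as the area and bounding box computed per blob are also the kind of values the abstract mentions for classifying defect types.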
[0020] In the above defect detection apparatus, the model image creation means may include means for superimposing the inspection target images whose identification information corresponds and calculating the mean luminance value for each pixel constituting those images.

[0021] Calculating the mean value for each pixel of the inspection target images effectively absorbs the variation among the images, so a high-quality model image can be created and the detection accuracy improved.
[0022] In the above defect detection apparatus, the imaging means may successively image the microstructures in the divided regions having corresponding identification information across the dies.

[0023] By imaging the divided regions at the same position in each die together in succession, the model image of each divided region can be created efficiently, improving the inspection efficiency.

[0024] Alternatively, the imaging means may image the microstructures in all the divided regions within one die and then image the microstructures in the divided regions of another die adjacent to that die.

[0025] This also enables the model image of each divided region to be created efficiently, improving the inspection efficiency.
[0026] In the above defect detection apparatus, the microstructure may be a container for screening tests, having a plurality of recesses each with a thin-film bottom surface into which a reagent and an antibody that cross-reacts with the reagent are introduced, and a plurality of holes provided in the bottom surface of each recess for discharging the reagent that does not react with the antibody.

[0027] Here, the container is what is called a protein chip. With this configuration, cracks and scratches in the thin film (membrane) of the protein chip, foreign matter adhering to the thin film, and the like can be detected with high accuracy.

[0028] In this case, the model image creation means may include means for aligning the inspection target images on the basis of the shape of each recess of the container in those images, prior to averaging the inspection target images whose identification information corresponds to each model image.

[0029] By exploiting the shape of each recess of the container, the overlapping positions of the inspection target images can be matched accurately, and a higher-quality model image can be created. Specifically, the alignment is performed by changing the relative positions of the images, that is, by moving each image in the X and Y directions and rotating it in the θ direction.
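The patent aligns images using the shape of the recesses. As one illustrative way to estimate the translational (X/Y) part of such an alignment, not necessarily the method used in the patent and ignoring the θ rotation, phase correlation between a reference image and the image to be aligned can be used:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) by which `img` should be
    circularly shifted to align with `ref`, via phase correlation:
    the normalized cross-power spectrum peaks at the relative shift."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-9   # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))
```

Rotation (θ) would require an additional step, for example evaluating the correlation over a small set of candidate angles.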
[0030] In this case, the difference extraction means may include means for aligning each model image with each inspection target image, prior to extracting the difference, on the basis of the shape of each recess of the container in the model image and the shape of each recess in the inspection target images whose identification information corresponds to that model image.

[0031] By exploiting the shape of each recess of the container, the overlapping position of the model image and the inspection target image can be matched accurately, and defects can be detected with higher accuracy.

[0032] In the above defect detection apparatus, the microstructure may be an electron beam irradiation plate having a plate member with a plurality of window holes for irradiating a plurality of electron beams, and a thin film provided so as to cover each window hole.

[0033] With this configuration, cracks and scratches in the thin film (membrane) of the electron beam irradiation plate, foreign matter adhering to the thin film, and the like can be detected with high accuracy.

[0034] In this case, the model image creation means may include means for aligning the inspection target images on the basis of the shape of each window hole of the electron beam irradiation plate in those images, prior to averaging the inspection target images whose identification information corresponds to each model image.

[0035] By exploiting the shape of each window hole of the electron beam irradiation plate, the overlapping positions of the inspection target images can be matched accurately, and a higher-quality model image can be created.

[0036] In this case, the difference extraction means may include means for aligning each model image with each inspection target image, prior to extracting the difference, on the basis of the shape of each window hole of the electron beam irradiation plate in the model image and the shape of each window hole in the inspection target images whose identification information corresponds to that model image.

[0037] By exploiting the shape of each window hole of the electron beam irradiation plate, the overlapping position of the model image and the inspection target image can be matched accurately, and defects can be detected with higher accuracy.
[0038] A defect detection method according to another aspect of the present invention comprises the steps of: imaging a microstructure formed on each of a plurality of dies on a semiconductor wafer, for each of a plurality of divided regions into which the region of each die is divided; illuminating the microstructure being imaged; storing the captured image of each divided region as an inspection target image in association with identification information identifying the position of that divided region within each die; applying, to each stored inspection target image, filtering that removes low-frequency components from the image; creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions whose identification information corresponds across the dies; and detecting defects in the microstructure by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.

[0039] In the above defect detection method, the detecting step may include the steps of: superimposing each model image on the inspection target images whose identification information corresponds to that model image and extracting the difference between the two images as a difference image; and applying filtering that removes, from the series of pixel regions in the extracted difference image having luminance equal to or greater than a predetermined value, those pixel regions having an area equal to or smaller than a predetermined area.

[0040] An information processing apparatus according to still another aspect of the present invention comprises: storage means for storing images of a microstructure formed on each of a plurality of dies on a semiconductor wafer, captured under illumination for each of a plurality of divided regions into which each die is divided, as inspection target images in association with identification information identifying the position of each divided region within each die; filtering means for applying, to each stored inspection target image, filtering that removes low-frequency components from the image; model image creation means for creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions whose identification information corresponds across the dies; and detection means for detecting defects in the microstructure by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.

[0041] Here, the information processing apparatus is, for example, a computer such as a PC (Personal Computer), which may be of the notebook type or the desktop type.

[0042] An information processing method according to still another aspect of the present invention comprises the steps of: storing images of a microstructure formed on each of a plurality of dies on a semiconductor wafer, captured under illumination for each of a plurality of divided regions into which each die is divided, as inspection target images in association with identification information identifying the position of each divided region within each die; applying, to each stored inspection target image, filtering that removes low-frequency components from the image; creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions whose identification information corresponds across the dies; and detecting defects in the microstructure by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.

[0043] A program according to yet another aspect of the present invention causes an information processing apparatus to execute the steps of: storing images of a microstructure formed on each of a plurality of dies on a semiconductor wafer, captured under illumination for each of a plurality of divided regions into which each die is divided, as inspection target images in association with identification information identifying the position of each divided region within each die; applying, to each stored inspection target image, filtering that removes low-frequency components from the image; creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions whose identification information corresponds across the dies; and detecting defects in the microstructure by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.
Effects of the Invention

[0044] As described above, according to the present invention, defects in MEMS devices can be detected with high accuracy and efficiency, without requiring an absolute model image, while preventing false detection caused by luminance unevenness, noise, and the like.

Best Mode for Carrying Out the Invention

[0045] Embodiments of the present invention will be described below with reference to the drawings.
[0046] FIG. 1 shows the configuration of a defect detection apparatus according to one embodiment of the present invention. As shown in the figure, the defect detection apparatus 100 comprises: a wafer table 2 that holds a semiconductor wafer 1 made of, for example, silicon (hereinafter also simply referred to as wafer 1); an XYZ stage 3 for moving the wafer table 2 in the X, Y, and Z directions in the figure; a CCD camera 6 that images the wafer 1 from above; a light source 7 that illuminates the wafer 1 during imaging by the CCD camera 6; and an image processing PC (Personal Computer) 10 that controls the operation of each of these units and performs the image processing described later.

[0047] The wafer 1 is transported onto the wafer table 2 by a transport arm or the like (not shown) and is held fixed on the wafer table 2 by suction means such as a vacuum pump (not shown). Instead of chucking the wafer 1 directly onto the wafer table 2, a tray (not shown) capable of holding the wafer 1 may be prepared separately, and the tray may be chucked with the wafer 1 held on it. As described later, when holes are formed in the wafer 1 it may be difficult to vacuum-chuck the wafer directly, so this suction method using a tray is effective. A protein chip is formed on the wafer 1 as a MEMS device. The defect detection apparatus 100 is an apparatus for detecting defects such as foreign matter and scratches on this protein chip as the inspection object. Details of the protein chip will be described later.
[0048] The CCD camera 6 is fixed at a predetermined position above the wafer 1 and incorporates a lens, a shutter (not shown), and the like. Based on a trigger signal output from the image processing PC 10, the CCD camera 6 captures an image of the protein chip formed on a predetermined portion of the wafer 1, magnified by the built-in lens, under a flash emitted by the light source 7, and transfers the captured image to the image processing PC 10. The XYZ stage 3 moves the wafer 1 in the vertical direction (Z direction), thereby varying the relative distance between the CCD camera 6 and the wafer 1 and hence the focal position at which the CCD camera 6 images the wafer 1. Alternatively, the focal position may be varied by moving the CCD camera 6 in the Z direction instead of the XYZ stage 3.
[0049] Further, the lens of the CCD camera 6 is configured as a zoom lens, and by varying its focal length the protein chip can be imaged at different magnifications. In the present embodiment, the magnification of the CCD camera 6 can be switched between two levels: about 7x (low magnification) and about 18x (high magnification). The field of view is, for example, 680 × 510 (μm) at low magnification and 270 × 200 (μm) at high magnification, but the magnifications are not limited to these. A camera incorporating another image sensor, such as a CMOS sensor, may be used instead of the CCD camera 6.
[0050] The light source 7 is fixed at a predetermined position above the wafer 1 and includes a flash lamp composed of, for example, a high-intensity white LED or a xenon lamp, and a flash lighting circuit that controls lighting of the flash lamp. Based on a flash signal output from the image processing PC 10, the light source 7 illuminates the predetermined portion of the wafer 1 by emitting a high-intensity flash for a predetermined time of, for example, several microseconds.
[0051] The XYZ stage 3 includes a motor 4 for moving an X stage 11 and a Y stage 12 along movement axes 13 in the X, Y, and Z directions, and an encoder 5 for determining the travel distance of the X stage 11 and the Y stage 12. The motor 4 is, for example, an AC servo motor, a DC servo motor, a stepping motor, or a linear motor, and the encoder 5 is, for example, one of various motor encoders, a linear scale, or the like. Each time the X stage 11 and the Y stage 12 move by a unit distance in the X, Y, or Z direction, the encoder 5 generates an encoder signal carrying that movement information (coordinate information) and outputs it to the image processing PC 10.

[0052] The image processing PC 10 receives the encoder signal from the encoder 5 and, based on it, outputs a flash signal to the light source 7 and a trigger signal to the CCD camera 6. The image processing PC 10 also outputs to the motor 4, based on the encoder signal received from the encoder 5, a motor control signal that controls the driving of the motor 4.
[0053] FIG. 2 is a block diagram showing the configuration of the image processing PC 10.
As shown in the figure, the image processing PC 10 includes a CPU (Central Processing Unit) 21, a ROM (Read Only Memory) 22, a RAM (Random Access Memory) 23, an input/output interface 24, an HDD (Hard Disk Drive) 25, a display unit 26, and an operation input unit 27, which are electrically connected to one another via an internal bus 28.
[0054] The CPU 21 controls each part of the image processing PC 10 in an integrated manner and performs the various calculations in the image processing described later. The ROM 22 is a non-volatile memory that stores the programs required to start up the image processing PC 10 and other programs and data that do not need to be rewritten. The RAM 23 is a volatile memory used as the work area of the CPU 21, into which various data and programs are read from the HDD 25 and the ROM 22 and stored temporarily.
[0055] The input/output interface 24 connects the operation input unit 27, the motor 4, the encoder 5, the light source 7, and the CCD camera 6 to the internal bus 28; it is the interface through which operation input signals from the operation input unit 27 are received and various signals are exchanged with the motor 4, the encoder 5, the light source 7, and the CCD camera 6.
[0056] The HDD 25 stores, on its built-in hard disk, an OS (Operating System), various programs for performing the imaging processing and image processing described later, various other applications, image data such as the protein chip images captured by the CCD camera 6 as inspection target images and the model images (described later) created from those inspection target images, and various data referred to during the imaging processing and image processing.

[0057] The display unit 26 is composed of, for example, an LCD (Liquid Crystal Display) or a CRT (Cathode Ray Tube), and displays the images captured by the CCD camera 6, various operation screens for the image processing, and the like. The operation input unit 27 is composed of, for example, a keyboard and a mouse, and receives operations from the user for the image processing and other processing described later.
[0058] Next, the protein chip formed on the wafer 1 will be described. FIG. 3 is a top view of the wafer 1. As shown in the figure, for example, 88 semiconductor chips 30 (hereinafter also referred to simply as chips 30 or dies 30) are formed on the wafer 1 in a grid. Of course, the number of dies 30 is not limited to 88.
[0059] FIG. 4 is a top view showing one of the dies 30 on the wafer 1. As shown in the figure, a protein chip 35 having a plurality of circular recesses 50 across its entire surface is formed on each die 30. Each die 30, that is, each protein chip 35, is substantially square, and the length s of one side is, for example, several millimeters to several tens of millimeters, though it is not limited to these dimensions.
[0060] FIG. 5 is an enlarged view of one recess 50 of the protein chip 35. FIG. 5(a) is a top view of the recess 50, and FIG. 5(b) is a cross-sectional view of the recess 50 in the Z direction.
[0061] As shown in FIGS. 4 and 5, a thin film (membrane) 53 having a plurality of holes 55 is formed on the bottom surface 52 of each recess 50 of the protein chip 35. The holes 55 are formed over the entire circular bottom surface 52 of each recess 50. The diameter d1 of each recess 50 is, for example, several hundred micrometers, the diameter d2 of each hole 55 is, for example, several micrometers, and the depth h of the recess 50 (the height from the top surface 51 to the bottom surface 52) is, for example, several hundred micrometers, though the dimensions are not limited to these.
[0062] In this protein chip 35, fine particles made of, for example, latex (latex beads) are placed as a carrier on the bottom surface 52 of each recess 50, and an antibody (protein) is introduced into the recess 50 as a reagent; the protein chip 35 is thus a silicon vessel for screening, through antibody cross-reaction, proteins having the specific property of adsorbing to the latex beads. Reagent (protein) that does not adsorb to the latex beads is discharged through the holes 55 in the bottom surface 52, so that only protein having the specific property remains in the recess 50.
[0063] Here, a method for manufacturing the protein chip 35 will be briefly described.
First, a thin film 53 such as a silicon oxide film is formed on one surface of the wafer 1 by CVD (Chemical Vapor Deposition). Next, a photoresist is applied to the other surface of the wafer 1, unnecessary portions are removed by photolithography, and etching is performed using the resist pattern as a mask. A plurality of recesses 50 are thereby formed in the wafer 1, leaving the thin film 53. Then, a photoresist is applied to the thin film 53 in each recess 50, the portions corresponding to the holes 55 are removed by photolithography, and etching is performed using the resist pattern as a mask. In this way, the protein chip 35 shown in FIG. 5, composed of a plurality of recesses 50 each having a thin film 53 in which a large number of holes 55 are formed, can be produced.
[0064] Next, the operation by which the defect detection apparatus 100 of the present embodiment detects defects in the protein chip 35 will be described. FIG. 6 is a flowchart showing the overall flow of operations up to the point where the defect detection apparatus 100 detects a defect.
[0065] As shown in the figure, first, the CCD camera 6 captures an image of each die 30 on which the protein chip 35 is formed at the low magnification described above (step 101). Specifically, as shown in FIG. 7, each die is divided into, for example, 18 rows × 13 columns, a total of 234 first divided regions 71, and the CCD camera 6 acquires an image of each first divided region 71 under a flash from the light source 7. The number and aspect ratio of the first divided regions 71 are not limited to these values. Each first divided region 71 is assigned in advance an ID for identifying its position, and the HDD 25 of the image processing PC 10 stores these IDs. These IDs allow the image processing PC 10 to identify the first divided regions 71 that occupy the same position on different dies 30. Each die 30 is also assigned an ID, so the image processing PC 10 can identify, for each first divided region 71, which first divided region 71 of which die 30 it is.
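The ID scheme described above can be sketched as follows. The tuple encoding `(die_id, row, col)` is a hypothetical representation chosen for illustration; the patent does not specify how the IDs are encoded, only that a region's die and its position within the die are both identifiable.

```python
# Hypothetical ID encoding for the first divided regions: (die_id, row, col).
# (row, col) alone identifies "the same position" across different dies.
ROWS, COLS = 18, 13  # 234 first divided regions per die

def region_ids(die_id):
    """List the IDs of all 234 first divided regions of one die."""
    return [(die_id, r, c) for r in range(ROWS) for c in range(COLS)]

def corresponding(region_id, other_die_id):
    """ID of the region at the same position on another die."""
    _, r, c = region_id
    return (other_die_id, r, c)

ids = region_ids(0)
assert len(ids) == 234
assert corresponding((0, 5, 7), 42) == (42, 5, 7)
```

With this encoding, grouping images "with corresponding IDs" (as in the model image creation described later) amounts to grouping on the `(row, col)` part of the key.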
[0066] At this time, as described above, the image processing PC 10 outputs a motor drive signal to the motor 4 based on the encoder signal from the encoder 5 to move the XYZ stage 3, and also generates a trigger signal and a flash signal based on the encoder signal, outputting the trigger signal to the CCD camera 6 and the flash signal to the light source 7.
[0067] Each time the XYZ stage 3 moves, the light source 7, based on the flash signal, emits a flash lasting several microseconds toward the protein chip 35, and under that flash the CCD camera 6, based on the trigger signal, continuously images the first divided regions 71 of the protein chip 35 on the wafer 1 at a rate of, for example, 50 frames per second.
[0068] FIG. 8 shows the trajectory of the imaging position when the CCD camera 6 images the protein chip 35 one first divided region 71 at a time. In the present embodiment, two imaging paths are conceivable, as shown in FIGS. 8(a) and 8(b).
[0069] As shown in FIG. 8(a), the CCD camera 6 starts from, for example, the leftmost of the dies 30 with the largest Y coordinate among the 88 dies 30 on the wafer 1, images all of the first divided regions 71 in the 18 rows × 13 columns of that die 30 continuously, for example row by row, and then moves on to the next die 30 and again images all of its first divided regions 71 row by row.
[0070] That is, the image processing PC 10 moves the imaging position within the first divided regions 71 of one die 30 as follows: starting from the first divided region 71 in the top row and leftmost column, it moves rightward in the X direction; on reaching the right end it moves one row in the Y direction and then moves leftward in the X direction; on reaching the left end it again moves one row in the Y direction and moves rightward in the X direction along the next row; and so on. When imaging of all the first divided regions 71 of one die 30 is completed, the image processing PC 10 outputs motor drive signals to the motor 4 so that the imaging position moves to the adjacent die 30 and the same movement is repeated. Since the position of the CCD camera 6 itself is fixed, the XYZ stage 3 actually moves in the direction opposite to the trajectory shown in FIG. 8(a). In step with this movement, the CCD camera 6 continuously images each first divided region 71 based on the trigger signals output from the image processing PC 10.
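The row-by-row path of FIG. 8(a) is a boustrophedon (serpentine) scan, which can be sketched as follows; the grid dimensions are the 18 × 13 example values from the embodiment.

```python
def serpentine(rows, cols):
    """Visit every (row, col) exactly once, reversing direction on each
    row, as in the per-die imaging path of FIG. 8(a)."""
    path = []
    for r in range(rows):
        cols_in_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cols_in_order)
    return path

path = serpentine(18, 13)
assert len(path) == 234 and len(set(path)) == 234  # each region exactly once
assert path[:3] == [(0, 0), (0, 1), (0, 2)]        # first row, left to right
assert path[13] == (1, 12)                          # second row starts at the right end
```

Reversing direction on alternate rows keeps consecutive imaging positions adjacent, which is what allows the stage to move at a constant pace between trigger signals.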
[0071] Alternatively, as shown in FIG. 8(b), the CCD camera 6 may move its imaging position so as to successively image the first divided regions 71 that have corresponding IDs across the dies 30 (that occupy the same position on each die).
[0072] That is, the image processing PC 10 moves the imaging position of the CCD camera 6 as follows, starting from, for example, the leftmost of the dies 30 with the largest Y coordinate. In the first pass, the imaging position moves in the X and Y directions so as to pass over the first divided regions 71 with corresponding IDs that occupy, on each die 30, the position with the smallest X coordinate and the largest Y coordinate (the first divided regions 71 shown as black circles). In the second pass, it passes over the first divided regions 71 with corresponding IDs adjacent in the X direction to those of the first pass (the first divided regions 71 shown as white circles). Thereafter, the motor 4 is driven so that the CCD camera 6 likewise passes over the first divided regions 71 occupying the same position on each die 30. In step with this movement, based on the trigger signals output from the image processing PC 10, the CCD camera 6 repeats for all the dies 30 the operation of continuously imaging each group of first divided regions 71 having corresponding IDs.
[0073] The image processing PC 10 selects whichever of these two imaging paths gives the shorter imaging time and has the CCD camera 6 image along it. When the imaging path shown in FIG. 8(a) is taken, the imaging interval between shots of the first divided regions 71, that is, the movement interval of the XYZ stage 3, equals the pitch of the first divided regions 71; when the imaging path shown in FIG. 8(b) is taken, the movement interval of the XYZ stage 3 equals the pitch of the dies 30. The CPU 21 of the image processing PC 10 can therefore calculate the drive speed of the motor 4 from these movement intervals and the imaging frequency of the CCD camera 6. From this drive speed and the overall imaging paths shown in FIGS. 8(a) and 8(b), which are determined by the layout of the dies 30 shown in FIG. 3, the total imaging time for the first divided regions 71 of all the dies 30 is estimated for each of the cases of FIGS. 8(a) and 8(b). By comparing these imaging times, the image processing PC 10 determines which of the paths in FIGS. 8(a) and 8(b) yields the shorter imaging time and selects the faster imaging path.
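The selection can be modeled roughly as follows. The pacing rule (shots limited by whichever is slower, the camera frame rate or the stage covering one pitch) and all numeric figures are illustrative assumptions; the patent states only that the drive speed is derived from the movement interval and the imaging frequency.

```python
def route_time(n_shots, shot_pitch_mm, link_moves_mm, frame_rate_hz, max_speed_mm_s):
    """Rough scan-time model for one imaging route: shots are paced by the
    camera frame rate unless the stage cannot cover shot_pitch_mm within one
    frame period; linking moves (row/die changes) add non-imaging travel."""
    pace = max(1.0 / frame_rate_hz, shot_pitch_mm / max_speed_mm_s)
    return n_shots * pace + sum(link_moves_mm) / max_speed_mm_s

# Illustrative figures only: 100 mm/s stage, 50 frame/s camera, one die
a = route_time(234, 0.6, [7.8] * 17, 50.0, 100.0)  # path (a): region-pitch steps
b = route_time(234, 8.0, [0.0], 50.0, 100.0)       # path (b)-like: die-pitch steps
best = "a" if a < b else "b"
```

Under these assumed numbers the region-pitch route wins because the die-pitch steps of route (b) exceed what the stage can cover in one frame period; with a faster stage or a denser die layout the comparison can flip, which is why the PC evaluates both.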
[0074] The images of the first divided regions 71 captured by the CCD camera 6 are then transmitted, together with the IDs identifying those first divided regions 71, to the image processing PC 10 as inspection target images, and are stored in the HDD 25 or the RAM 23 via the input/output interface 24 of the image processing PC 10. In the present embodiment, the inspection target images captured by the CCD camera 6 are so-called VGA (Video Graphics Array) size (640 × 480 pixels) images, but the size is not limited to this.
[0075] In the present embodiment, as described above, the CCD camera 6 can capture inspection target images at different focal positions, since moving the XYZ stage 3 in the Z direction varies its distance to the protein chip 35 on the wafer 1. FIG. 9 illustrates this.
[0076] As shown in the figure, the XYZ stage 3 moves upward (Z1 direction in the figure) and downward (Z2 direction) based on a focus signal from the image processing PC 10, varying the distance between the CCD camera 6 and the protein chip 35 among, for example, three levels (focal points F1 to F3). That is, by moving the XYZ stage 3 in the Z2 direction, the CCD camera 6 focuses on the top surface 51 of the protein chip 35 (focal point F1); by then moving the XYZ stage 3 in the Z1 direction, it focuses at a position roughly midway between the top surface 51 and the bottom surface 52 of the protein chip 35 (focal point F2); and by moving the XYZ stage 3 further in the Z1 direction, it focuses on the bottom surface 52 of the protein chip 35 (focal point F3). The number of selectable focal positions is not limited to three.
[0077] In this way, by imaging at a plurality of different focal positions, the defect detection apparatus 100 of the present embodiment can acquire images at each position in the Z direction and prevent defects from going undetected, even when the object to be inspected has a three-dimensional shape with thickness (depth or height) in the Z direction, as the protein chip 35 of the present embodiment does. The CCD camera 6 sorts the images captured along the path of FIG. 8(a) or 8(b) by focal position and transmits them to the image processing PC 10, and the image processing PC 10 identifies them as inspection target images for each focal position and stores them in the HDD 25 or the RAM 23. That is, when there are three focal points F1 to F3 as described above, the CCD camera 6 repeats the movement along the imaging path of FIG. 8(a) or 8(b) a total of three times, once for each focal position.
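Since the whole scan is repeated once per focal position, the total shot count and a lower bound on scan time follow directly from the example figures above (88 dies, 18 × 13 regions, 3 focal positions, 50 frames/s); the arithmetic is a sketch using only those example values.

```python
DIES, ROWS, COLS, FOCAL_POSITIONS = 88, 18, 13, 3
FRAME_RATE = 50.0  # frames per second (example value from the embodiment)

shots_per_pass = DIES * ROWS * COLS          # one full wafer at one focus
total_shots = shots_per_pass * FOCAL_POSITIONS
min_time_s = total_shots / FRAME_RATE        # lower bound, ignoring stage overhead

assert shots_per_pass == 20592
assert total_shots == 61776
assert min_time_s == 1235.52                 # just under 21 minutes at best
```

The bound ignores linking moves and stage acceleration, which is precisely the overhead the path comparison of FIG. 8 tries to minimize.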
[0078] Returning to the flowchart of FIG. 6, in parallel with the imaging processing by the CCD camera 6, each time the CPU 21 of the image processing PC 10 acquires an inspection target image from the CCD camera 6, it applies high-pass filtering to that image (step 102).
[0079] The protein chip 35 of the present embodiment has the thin film 53 on the bottom surfaces 52, and luminance unevenness can arise depending on the flatness of the thin film 53, for example when the thin film 53 is warped. Luminance unevenness can also arise from misalignment of the optical axis of the CCD camera 6 or from non-uniformity in how the flash from the light source 7 strikes the chip. Such luminance unevenness would be extracted as a difference in the difference extraction processing against the model image described later, leading to false detection of defects.
[0080] These uneven-luminance areas are areas of the inspection target image in which the luminance varies only gradually; in other words, the luminance unevenness is a low-frequency component. In the present embodiment, therefore, a high-pass filter is applied to each captured inspection target image to remove this low-frequency component.
[0081] FIG. 10 is a flowchart showing the detailed flow of this high-pass filtering.
As shown in the figure, the CPU 21 of the image processing PC 10 first reads a copy of the inspection target image from the HDD 25 into the RAM 23 (step 61) and applies Gaussian blurring to it (step 62). The blur radius is set to, for example, about 15 to 16 pixels, but is not limited to this value.
[0082] In this Gaussian blurring, pixels of high-frequency components (for example, edge portions) in the original inspection target image are blurred by mixing in the surrounding low-frequency pixels, so the blurring effect on them is strong. Pixels of low-frequency components (for example, uneven-luminance areas), on the other hand, mix in surrounding pixels that are themselves low-frequency, so the blurring effect is weak and they change little from the original image. The output image of the Gaussian blurring (hereinafter, the Gaussian-blurred image) is therefore an image in which the high-frequency components of the original inspection target image have been smoothed away, leaving only the low-frequency components overall.
[0083] Next, the CPU 21 subtracts the Gaussian-blurred image from the original inspection target image (step 63). In this subtraction, the low-frequency component at each position in the Gaussian-blurred image is subtracted from the high-frequency component at the corresponding position in the original image, leaving the original high-frequency component; likewise, subtracting the low-frequency component at the corresponding position in the Gaussian-blurred image from the low-frequency component in the original image removes the original low-frequency component. The image obtained by this subtraction is thus an image in which the low-frequency components of the original inspection target image have been removed and only the high-frequency components remain. The CPU 21 replaces the original inspection target image with the subtracted image and stores it in the HDD 25 (step 64).
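The blur-and-subtract filter of steps 61 to 63 can be sketched with NumPy as follows. The separable Gaussian with reflect padding is an assumed implementation detail (the patent specifies only a Gaussian blur of radius about 15 to 16 pixels followed by a subtraction), and the default radius of 15 matches that example setting.

```python
import numpy as np

def gaussian_kernel(radius, sigma=None):
    """Normalized 1-D Gaussian kernel; sigma defaults to radius / 3."""
    sigma = sigma if sigma is not None else radius / 3.0
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, radius):
    """Separable Gaussian blur with reflect padding (step 62)."""
    k = gaussian_kernel(radius)
    def blur_1d(v):
        return np.convolve(np.pad(v, radius, mode="reflect"), k, mode="valid")
    tmp = np.apply_along_axis(blur_1d, 1, img.astype(float))  # rows
    return np.apply_along_axis(blur_1d, 0, tmp)               # then columns

def highpass(img, radius=15):
    """Step 63: original minus blurred leaves only high-frequency content."""
    return img.astype(float) - gaussian_blur(img, radius)

# A flat image (pure low-frequency content) is removed almost entirely:
flat = np.full((32, 32), 120.0)
assert np.allclose(highpass(flat, radius=5), 0.0)
```

Slowly varying shading is suppressed the same way as the flat image, while sharp features such as recess edges or particle boundaries pass through largely intact, which is what keeps them available for the later difference extraction.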
[0084] Returning to the flowchart of FIG. 6, the CPU 21 determines whether the imaging processing has been performed for all the first divided regions 71 and whether the high-pass filtering has been applied to all the inspection target images (steps 103 and 104). If it determines that all the inspection target images have been captured and filtered (Yes), it proceeds to the processing of creating a model image of each divided region using the filtered inspection target images (step 105). In the present embodiment the imaging of the inspection target images and the high-pass filtering are performed in parallel, but the high-pass filtering may instead be performed after the imaging has been completed for all the first divided regions 71 (that is, the order of the processing of step 102 and that of step 103 may be reversed).
[0085] Here, the model image creation processing will be described in detail. FIG. 11 is a flowchart showing the flow of processing by which the image processing PC 10 creates a model image, and FIG. 12 conceptually illustrates how the image processing PC 10 creates the model image.
[0086] As shown in FIG. 11, the CPU 21 of the image processing PC 10 first reads, from the HDD 25 into the RAM 23, the high-pass-filtered inspection target images that have corresponding IDs across the dies 30 (step 41), and aligns the read inspection target images (step 42). Specifically, the CPU 21 recognizes shapes such as the edges of the recesses 50 of the protein chip 35 in the inspection target images of the first divided regions 71 occupying the same position on each die 30, and aligns the images by adjusting shifts in the X and Y directions and rotation in the θ direction so that those shapes overlap between the images.
[0087] For example, as shown in FIG. 12, the CPU 21 reads the inspection target images 40a to 40f, ..., which have corresponding IDs and each of which captures the first divided region 71a located at the same position on each die 30. Since there are 88 dies 30 in this embodiment, the total number of inspection target images 40 with corresponding IDs is also 88. The CPU 21 overlays all 88 inspection target images 40 and aligns them based on the shape of the recess 50 and the like. Performing the alignment based on such shapes makes easy and accurate alignment possible.
[0088] Then, with the alignment established, the CPU 21 calculates the average luminance value for each pixel at the same position in the inspection target images 40 (step 43). When the average luminance values have been calculated for all of the pixels in the inspection target images 40 of the first divided region 71a (Yes in step 44), the CPU 21 generates, based on these calculation results, the image composed of the average luminance values as the model image 45 and stores it in the HDD 25 (step 45).
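The pixel-wise averaging of steps 43 to 45 can be sketched as follows (a minimal illustration assuming the inspection target images have already been aligned and are supplied as equal-sized 2-D luminance arrays; the function name is hypothetical):

```python
def create_model_image(aligned_images):
    """Average the luminance of already-aligned grayscale images
    pixel by pixel (steps 43-45); the result is the model image 45."""
    n = len(aligned_images)
    rows, cols = len(aligned_images[0]), len(aligned_images[0][0])
    model = [[0.0] * cols for _ in range(rows)]
    for img in aligned_images:
        for y in range(rows):
            for x in range(cols):
                model[y][x] += img[y][x]
    return [[model[y][x] / n for x in range(cols)] for y in range(rows)]
```

With 88 dies, a defect that appears in only one inspection target image shifts the averaged pixel by at most 1/88 of its amplitude, which is how individual defects are absorbed into a near-ideal model.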
[0089] The CPU 21 repeats the above processing and determines whether model images 45 have been created for all of the first divided regions 71 that correspond across the dies 30 (step 46); when it determines that all of the model images 45 have been created (Yes), the processing ends.
[0090] Through the above processing, a model image 45 can be created from the actual inspection target images 40 even when inspecting a MEMS device, such as the protein chip 35, for which no absolutely defect-free sample can be obtained. Each inspection target image 40 may itself contain defects such as foreign matter, scratches, or cracks in the thin film. However, by dividing each die 30 into a plurality of first divided regions 71 (234 in this embodiment) and averaging the luminance values over a plurality of dies 30 (88 in this embodiment), the defects in the individual inspection target images 40 are absorbed, a model image 45 extremely close to the ideal shape can be created, and highly accurate defect detection becomes possible.
[0091] As described above, since the inspection target images 40 of one first divided region 71 exist for each of the focal positions F1 to F3, a model image 45 is also created for each focal position. Accordingly, in this embodiment, since there are 234 first divided regions 71 in each die 30, 234 × 3 = 702 model images are created.
[0092] Returning to the flowchart of FIG. 6, after creating the model images 45, the CPU 21 performs, for each first divided region 71, a difference extraction process between the model image 45 and each high-pass-filtered inspection target image 40 (step 106).
[0093] Specifically, in the same manner as the alignment performed when creating the model image 45, the CPU 21 aligns the model image 45 and each inspection target image 40 while adjusting in the X, Y, and θ directions based on the shape of the recesses 50 present in both images, extracts the difference by subtracting one image from the other, binarizes the result, and outputs it as a difference image.
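The subtraction and binarization of step 106 can be sketched as below (a simplified illustration on aligned 2-D luminance arrays; the threshold value is an assumption made for the example):

```python
def difference_image(model, image, threshold=30):
    """Subtract the inspection target image from the model image pixel
    by pixel and binarize: 255 where the absolute difference exceeds
    the threshold, 0 elsewhere (step 106)."""
    rows, cols = len(model), len(model[0])
    return [[255 if abs(model[y][x] - image[y][x]) > threshold else 0
             for x in range(cols)]
            for y in range(rows)]
```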
[0094] The CPU 21 then applies filtering by so-called blob extraction to the difference image (step 107). A blob is a cluster of pixels in the difference image that have a predetermined grayscale value (or a value within a predetermined range). The CPU 21 extracts from the difference image only those blobs whose area is at least a predetermined size (for example, 3 pixels).
[0095] FIG. 13 shows difference images before and after the blob extraction process. FIG. 13(a) shows the difference image 60 before blob extraction, and FIG. 13(b) shows the difference image after blob extraction (hereinafter referred to as the blob-extracted image 65).
[0096] In FIG. 13(a), the parts appearing white are the parts that emerged as differences between the model image 45 and the inspection target image 40. In this difference image 60, the luminance values have been enhanced by a factor of, for example, about 40 relative to the original difference image in order to emphasize the differences. As shown in FIG. 13(a), the difference image 60 before blob extraction contains, apart from defects such as foreign matter and scratches, minute noise 84 (shown inside the white broken line) caused by various factors such as dirt on the lens 14 of the CCD camera 6 and the non-uniformity of the illumination from the light source 7. If this noise 84 remains, it leads to false defect detection, so it must be removed.
[0097] The noise 84 has a smaller area than defects such as foreign matter and scratches. Therefore, as shown in FIG. 13(b), by applying to the difference image 60 a filtering process that removes blobs of a predetermined area or less and extracts only blobs larger than that area, the noise 84 can be removed. As a result of this blob extraction, only the crack 81 in the thin film of the recess 50 of the protein chip 35 and foreign matter 82 such as dust adhering to the protein chip 35 are extracted into the blob-extracted image 65. At this point the CPU 21 does not yet recognize the type of these defects (foreign matter, crack, scratch, or the like); it merely recognizes them as defect candidates.
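The area-based blob filtering of step 107 can be sketched as follows (a minimal illustration using 4-connected component labeling on a binary image; the connectivity choice and the function name are assumptions):

```python
from collections import deque

def extract_blobs(binary, min_area=3):
    """Label 4-connected clusters of nonzero pixels (blobs) in a binary
    difference image and keep only those whose pixel count reaches
    min_area, discarding smaller clusters as noise (step 107)."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for y in range(rows):
        for x in range(cols):
            if binary[y][x] and not seen[y][x]:
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:
                    blobs.append(pixels)
    return blobs
```

A one-pixel speck (noise) is dropped, while a three-pixel cluster (a defect candidate) survives the filter.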
[0098] Returning to the flowchart of FIG. 6, when a defect candidate is detected by the blob extraction process (Yes in step 108), the CPU 21 determines whether the protein chip 35 in which the candidate was detected needs to be imaged again at a higher magnification (with a narrower field of view) (step 109). That is, the CPU 21 determines whether a user operation has been input instructing that the first divided region 71 to which the inspection target image 40 showing the defect candidate belongs be imaged in more detail at high magnification. If it determines that high-magnification imaging is necessary (Yes), it causes the CCD camera 6 to image, at high magnification, each second divided region 72 obtained by further subdividing the first divided region 71 in which the defect candidate was detected and the first divided regions 71 having corresponding IDs on the other dies 30 (step 113).
[0099] In the defect classification process described later, whether a blob is a defect and how it is classified are determined based on, for example, the area of the extracted blob. In a blob-extracted image 65 created from an inspection target image captured at low magnification, however, the blob area may not be calculable with sufficient accuracy. It is also conceivable that at low magnification the exact shape of a defect cannot be recognized, preventing accurate classification. Imaging the protein chip 35 at a higher magnification therefore makes it possible to accurately determine later whether a candidate is a defect and to classify it.
[0100] FIG. 14 conceptually shows how the first divided region 71 in which a defect candidate was detected is imaged at high magnification, one second divided region 72 at a time. As shown in the figure, when a defect candidate is detected in an inspection target image of a first divided region 71a of a certain die 30, this first divided region 71a is further divided into, for example, a total of nine second divided regions 72 arranged in 3 rows × 3 columns. The first divided regions 71 on the other dies 30 that have IDs corresponding to this first divided region 71a are likewise divided into second divided regions 72. As with the first divided regions 71, each second divided region 72 is assigned an ID identifying its position within each die 30.
[0101] The CCD camera 6 images each second divided region 72 at the same image size (VGA size) as the first divided regions 71; that is, it images the second divided regions 72 at three times the magnification used when imaging the first divided regions 71. The captured images are stored in the HDD 25 or the like of the image processing PC 10 as inspection target images together with the IDs of the second divided regions 72.
[0102] As for the imaging path over the second divided regions 72 of each die 30, the CPU 21 selects the faster of the paths (a) and (b) of FIG. 8, as it did when imaging the first divided regions 71. That is, the CPU 21 determines which is faster: a path that images all of the second divided regions 72 within the divided region 71 of one die 30 before imaging the second divided regions 72 of the corresponding divided region 71 of the next die 30, or a path that images together the second divided regions 72 having corresponding IDs across the corresponding first divided regions 71 of the dies 30. It then performs the imaging along the faster path.
[0103] When imaging of the first divided region 71 in which the defect candidate was detected and of the corresponding first divided regions 71, second divided region 72 by second divided region 72, is completed (step 113), the CPU 21 performs, as in steps 102 to 107 above, the high-pass filtering of each inspection target image (step 114) and the model image creation process (step 117), performs the difference extraction process between the model image and each inspection target image of the second divided regions 72 within the first divided region 71 in which the defect candidate was detected (step 118), and then performs the filtering by blob extraction (step 119).
[0104] Since the inspection target images of the second divided regions 72 are captured at a higher resolution than those of the first divided regions 71, the threshold (in pixels) of the blob area extracted in the blob extraction process of step 118 is set to a value larger than the threshold used in the per-first-divided-region blob extraction of step 107. Of course, the actual blob area on the protein chip 35 (in µm) to which this threshold (in pixels) corresponds is unchanged.
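The relationship between the fixed physical area and the resolution-dependent pixel threshold can be written as follows (the numeric values are illustrative only, not values from the embodiment):

```python
def blob_threshold_pixels(min_area_um2, um_per_pixel):
    """Convert a fixed physical blob area (in square micrometers) into a
    pixel-count threshold for a given imaging resolution. Imaging the
    same field at 3x magnification divides um_per_pixel by 3, so the
    pixel threshold grows 9-fold while the physical area it represents
    on the chip stays the same."""
    return min_area_um2 / (um_per_pixel ** 2)
```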
[0105] FIG. 15 compares the blob-extracted images 65 obtained from the inspection target images of the first divided region 71 and of the second divided region 72. FIG. 15(a) shows the blob-extracted image 65a extracted from the first divided region 71, and FIG. 15(b) shows the blob-extracted image 65b extracted from the second divided region 72.
[0106] As shown in the figure, in the blob-extracted image 65a of the first divided region 71a extracted in step 107 above, a region believed to be foreign matter 82 appears white near the lower left corner, but because its area is minute it is difficult to calculate an accurate area value. As shown in FIG. 15(b), by dividing the first divided region 71 into nine second divided regions 72 and imaging at high magnification the second divided region 72 in which the foreign matter 82 appears, the foreign matter 82 is rendered at high resolution and its area can be calculated accurately.
[0107] Alternatively, when defect candidates are extracted in step 108, the high-magnification imaging may be performed automatically without the necessity determination of step 109 described above. Furthermore, when the performance of the image processing PC 10, the motor 4, and the encoder 5 is high enough that the processing time stays within an allowable range, the second divided regions 72 may be imaged not only within the first divided region 71 from which defect candidates were extracted but within all of the first divided regions 71 of all of the dies 30, and model images 45 may be created for all of the second divided regions 72. In this case, immediately after the blob extraction process for the first divided regions 71 is completed, the imaging, high-pass filtering, and model image creation may be performed for each second divided region 72 without the necessity determination of step 109, and when the CPU 21 determines that there is a first divided region 71 in which a defect candidate was detected, the blob extraction process may be performed for each second divided region 72 within that first divided region 71.
[0108] Returning to the flowchart of FIG. 6, when it is determined in step 109 that high-magnification imaging is unnecessary (No), or when the blob extraction from the second divided regions 72 in steps 113 to 119 is completed, the CPU 21 performs the classification process on the defect candidates appearing in the blob-extracted image 65 (step 110).
[0109] That is, for each blob appearing white in the blob-extracted image 65, the CPU 21 determines, based on feature points such as its area, perimeter, deviation from circularity, and aspect ratio, whether the blob is a defect, and classifies the type of the defect as foreign matter, a scratch, a crack, or the like.
[0110] Specifically, the image processing PC 10 collects sample images for each type of defect, such as foreign matter, scratches, and cracks, stores their feature point data in the HDD 25 or the like as a feature point database, and compares the stored feature point data with the feature points detected from each blob in the blob-extracted images 65 under inspection.
[0111] For example, the foreign matter in this embodiment measures roughly a few to a few tens of µm on a side, and the scratches are roughly a few µm to a few hundred µm in length. Comparing foreign matter and scratches, a scratch has an extremely elongated (horizontally or vertically long) aspect ratio and a longer perimeter than foreign matter. Furthermore, a crack in the thin film appears as a curve at the edge of a recess 50, so the circularity of the recess 50 is lower than that of a normal one. The image processing PC 10 stores these data as feature point data and classifies defects by comparing them with the feature points of the detected blobs.
[0112] As described above, the protein chip 35 in this embodiment has, in the thin film 53 on the bottom surface 52 of each recess 50, a hole 55 with a diameter of, for example, a few µm, and this hole 55 serves to drain the reagent. Therefore, even when foreign matter adheres inside a recess 50, matter whose diameter is smaller than that of the hole 55 is discharged through the hole 55 together with the reagent and poses no problem during screening with the protein chip 35. Accordingly, for foreign matter, the diameter of the hole 55 is used as a threshold, and foreign matter with a smaller diameter is not treated as a defect. Scratches and cracks, on the other hand, are unconditionally treated as defects, because reagent leaking through them would make normal screening impossible.
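The classification rules of paragraphs [0110] to [0112] can be sketched as follows (a simplified illustration; the aspect-ratio cutoff and the default hole diameter are assumptions made for the example, not values from the embodiment):

```python
import math

def classify_blob(area_um2, aspect_ratio, on_recess_edge,
                  hole_diameter_um=4.0):
    """Rough defect classification from blob feature points:
    - extremely elongated blobs are treated as scratches (always defects),
    - curved blobs on a recess edge are treated as cracks (always defects),
    - compact blobs are foreign matter, but matter small enough to be
      flushed through the drain hole 55 is not counted as a defect."""
    if aspect_ratio > 5.0:
        return "scratch"
    if on_recess_edge:
        return "crack"
    # approximate the foreign matter by a circle to compare diameters
    diameter_um = 2.0 * math.sqrt(area_um2 / math.pi)
    return "foreign matter" if diameter_um >= hole_diameter_um else "not a defect"
```

In practice the embodiment compares many feature points against a sample-derived database rather than fixed cutoffs; this sketch only shows the decision structure.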
[0113] As described above, when the feature points cannot be measured accurately from the blob-extracted image 65 obtained from the first divided region 71, the CPU 21 measures the feature points using the blob-extracted image 65 obtained from the second divided region 72 imaged at higher magnification, and classifies the various defects. By imaging at high magnification only when necessary in this way, the processing that follows defect detection can proceed smoothly.
[0114] When the CPU 21 has determined the presence or absence of a defect and classified the defects for all of the defect candidates (Yes in step 111), it outputs, as the detection results, the blob-extracted images and information on the types of the detected defects to, for example, the display unit 26 (step 112), and ends the processing. At this time, the image processing PC 10 may display on the display unit 26 an image from which it can be recognized at a glance, for example, which type of defect exists at which position on the wafer 1.
[0115] Based on the output results, the user removes any foreign matter that is present, and discards the protein chip 35 as a defective product if a scratch or crack is present. If no defect candidate is detected in step 108, the protein chip 35 under inspection is treated as a non-defective product, and the defect detection process ends.
[0116] Through the above operation, according to this embodiment, a model image can be created from the inspection target images 40 of each first divided region 71 or second divided region 72 even for a MEMS device such as the protein chip 35 for which it is difficult to obtain a non-defective sample, so a highly accurate defect detection process becomes possible. In addition, since the model image 45 is created from inspection target images 40 captured under the same optical and illumination conditions, false detection due to differences in those conditions can be prevented.
[0117] The present invention is not limited to the embodiment described above, and various modifications can of course be made without departing from the gist of the present invention.
[0118] In the embodiment described above, a protein chip was used as the MEMS device to be inspected, but the MEMS device is not limited to this. For example, an electron beam irradiation plate (EB window) can also be used as the MEMS device.
[0119] FIG. 16 shows the appearance of this electron beam irradiation plate. FIG. 16(a) is a top view, and FIG. 16(b) is a cross-sectional view in the Z direction of FIG. 16(a).
[0120] As shown in the figure, the electron beam irradiation plate 90 has a plate 92 having a plurality of window holes 95 for irradiating an electron beam (EB), and a thin film 91 provided so as to cover each of the window holes 95.

[0121] The plate 92 is formed in a rectangular shape with a length w in the X direction and a length l in the Y direction of, for example, several tens of mm each, and a thickness h in the Z direction of, for example, about several mm, although it is not limited to these dimensions and shape. Each window hole 95 is, for example, a square with a side s of several mm, but it is not limited to this size and shape and may be rectangular. A total of 54 window holes 95 are provided in 6 rows × 9 columns, but the number is not limited to this.
[0122] This electron beam irradiation plate 90 constitutes an electron beam irradiation apparatus when connected to the end of a vacuum vessel (not shown). An electron beam (EB) emitted from an electron beam generator provided inside the vacuum vessel is released into the atmosphere through the window holes 95, as indicated by the arrows in FIG. 16(b), and irradiates an object. This electron beam irradiation apparatus is used for various purposes, such as sterilization of the irradiated object and modification of its physical or chemical properties. Providing the thin film 91 makes it possible to irradiate the electron beam while maintaining the vacuum. A plurality of thin films 91 may also be stacked to form a multilayer film structure.
[0123] Like the protein chip 35 in the embodiment described above, this electron beam irradiation plate 90 is also formed on each die 30 on the wafer 1 by an etching process or the like using photolithography. In this case, the size of each die is the same as the size of the plate 92.
[0124] For this electron beam irradiation plate 90 as well, the defect detecting apparatus 100 performs the same imaging, high-pass filtering, model image creation, and blob extraction processes as for the protein chip 35 described above, and detects defects such as foreign matter, scratches, and cracks on the electron beam irradiation plate 90. Imaging at low and high magnification and imaging at a plurality of focal positions in the Z direction are likewise possible. In the model image creation and blob extraction processes, the inspection target images are aligned while being adjusted in the X, Y, and θ directions so that the edge shapes of the window holes 95 appearing in the images overlap.
[0125] In the inspection of the electron beam irradiation plate 90, the feature points used to classify defects, such as the threshold for classifying a blob as foreign matter, differ from those used in the inspection of the protein chip 35 described above. The image processing PC 10 therefore creates its own feature point data based on samples of the electron beam irradiation plate 90 and the like and classifies the defects accordingly.
[0126] Besides the protein chip 35 and the electron beam irradiation plate 90 described above, other MEMS devices can also be used as the object to be inspected, for example, various sensors such as acceleration sensors, pressure sensors, and airflow sensors, printer heads for inkjet printers, micromirror arrays for reflective projectors, other actuators, and various biochips.
[0127] In the embodiment described above, the images required for the image processing, such as the inspection target images 40, the model images 45, the difference images 60, and the blob-extracted images 65, are stored in the HDD 25. These images may instead be stored temporarily in the RAM 23, or stored temporarily in a buffer area provided separately from the RAM 23 and erased as soon as the defect classification process ends. Of the inspection target images, those from which no difference was extracted by the difference extraction, that is, those in which no defect was detected, become unnecessary in subsequent processing and may therefore be erased one by one as soon as this is known. Furthermore, when the second divided regions 72 are imaged at higher magnification, the inspection target images of the first divided regions 71 captured at low magnification become unnecessary after the second divided regions 72 have been imaged, and may be erased once that imaging is completed. Since the number of images captured in the above embodiment is enormous, such processing makes it possible to reduce the storage capacity required of the RAM 23 and the HDD 25 and to lighten the load on the image processing PC.
[0128] In the embodiment described above, the filtering with a high-pass filter is performed in consideration of cases such as when the thin film 53 on the bottom surface 52 of each recess 50 of the protein chip 35 is bowed. For a MEMS device whose imaging surface has high flatness and therefore exhibits no brightness unevenness, this high-pass filter processing may be omitted. Alternatively, the image processing PC 10 may measure the flatness of the imaging surface of the MEMS device under inspection and determine, according to that flatness, whether to execute the high-pass filter processing.
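A rough sketch of this conditional high-pass filtering follows. The embodiment does not specify the filter kernel or the flatness tolerance; the box-blur low-pass estimate, `tol_um`, and `k` here are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k=15):
    # Separable box blur used as the low-pass estimate (kernel choice is
    # an illustrative assumption, not specified by the embodiment).
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def high_pass(img, k=15):
    # High-pass = image minus its low-frequency component, which suppresses
    # the slow brightness gradient caused by a bowed thin-film bottom surface.
    img = img.astype(float)
    return img - box_blur(img, k)

def maybe_high_pass(img, flatness_um, tol_um=1.0, k=15):
    # Skip the filter when the measured flatness of the imaging surface is
    # within tolerance (tol_um is a hypothetical threshold).
    return high_pass(img, k) if flatness_um > tol_um else img.astype(float)
```

A slowly varying brightness ramp is almost entirely low-frequency, so the high-pass output is near zero away from the image borders, while fine defects such as scratches survive the subtraction.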
[0129] In the filtering by blob extraction in the embodiment described above, a scratch or crack in the thin film 53, for example, can be recognized correctly as a defect when it appears continuously (as a line) in the difference image 60; in some cases, however, a scratch or crack may appear only faintly, as a chain of dot-like blobs. In such a case, if each dot-like blob is smaller than the threshold described above, the image processing PC 10 may fail to recognize it as a defect. Therefore, blobs that, like scratches or cracks, appear consecutively at predetermined intervals along a predetermined direction (straight or curved) may be processed so that they are extracted in the blob extraction processing even when they are smaller than the threshold. This enables defect detection with still higher accuracy.
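One way to recognize such a chain of sub-threshold blobs along a straight line is to test the blob centroids for collinearity and bounded spacing. This is only an illustrative heuristic, not the embodiment's stated method, and all thresholds below are assumptions; it also handles only the straight-line case, not curved chains.

```python
import numpy as np

def is_faint_linear_defect(centroids, min_count=4, max_gap=12.0, max_residual=1.5):
    # Decide whether small blobs form an approximately straight, regularly
    # spaced chain, i.e. a scratch or crack that broke up into dots in the
    # difference image. All thresholds are illustrative assumptions.
    pts = np.asarray(centroids, dtype=float)
    if len(pts) < min_count:
        return False
    centered = pts - pts.mean(axis=0)
    # Principal direction of the centroid cloud via SVD (i.e. PCA).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # Perpendicular scatter must stay small (collinearity test)...
    if np.abs(centered @ vt[1]).max() > max_residual:
        return False
    # ...and consecutive gaps along the line must stay bounded.
    t = np.sort(centered @ vt[0])
    return bool(np.diff(t).max() <= max_gap)
```

A chain that passes this test could then be kept as a single defect candidate even though each constituent blob falls below the area threshold of the blob extraction.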
Brief Description of Drawings

[0130] [Fig. 1] A diagram showing the configuration of a defect detection apparatus according to an embodiment of the present invention.
[Fig. 2] A block diagram showing the configuration of the image processing PC in an embodiment of the present invention.
[Fig. 3] A top view of a wafer in an embodiment of the present invention.
[Fig. 4] A top view showing one of the dies on the wafer in an embodiment of the present invention.
[Fig. 5] An enlarged view showing one recess of the protein chip in an embodiment of the present invention.
[Fig. 6] A flowchart showing the overall flow of operations up to the point where the defect detection apparatus detects a defect in an embodiment of the present invention.
[Fig. 7] A diagram showing each die divided into a plurality of divided regions in an embodiment of the present invention.
[Fig. 8] A diagram showing the trajectory of the imaging positions when the CCD camera images the protein chip for each divided region in an embodiment of the present invention.
[Fig. 9] A diagram showing the CCD camera capturing inspection target images at different focal positions in an embodiment of the present invention.
[Fig. 10] A flowchart showing the detailed flow of the high-pass filter processing in an embodiment of the present invention.
[Fig. 11] A flowchart showing the flow of processing up to the point where the image processing PC creates a model image in an embodiment of the present invention.
[Fig. 12] A diagram conceptually showing how the image processing PC creates a model image in an embodiment of the present invention.
[Fig. 13] A diagram showing difference images before and after the blob extraction processing in an embodiment of the present invention.
[Fig. 14] A diagram conceptually showing how a first divided region in which a defect candidate has been detected is imaged at high magnification for each second divided region in an embodiment of the present invention.
[Fig. 15] A diagram comparing the blob-extracted images obtained from the inspection target images of the first divided region and the second divided regions in an embodiment of the present invention.
[Fig. 16] A diagram showing the appearance of an electron beam irradiation plate according to another embodiment of the present invention.
Explanation of Reference Numerals

1 … semiconductor wafer (wafer)
3 … XYZ stage
4 … motor
5 … encoder
6 … CCD camera
7 … light source
10 … image processing PC
14 … lens
21 … CPU
23 … RAM
24 … input/output interface
25 … HDD
… die (semiconductor chip)
35 … protein chip
40 … inspection target image
45 … model image
50 … recess
… top surface
52 … bottom surface
53, 91 … thin film
… hole
60 … difference image
65 … blob-extracted image
71 … first divided region
72 … second divided region
… crack
… foreign matter
… noise
… electron beam irradiation plate
… plate
… window hole
… defect detection apparatus

Claims

Claims

[1] A defect detection apparatus comprising:
imaging means for imaging a microstructure formed on each of a plurality of dies on a semiconductor wafer, for each of a plurality of divided regions into which the region of each die is divided;
illumination means for illuminating the microstructure being imaged;
storage means for storing each captured image of a divided region as an inspection target image in association with identification information that identifies the position of that divided region within each die;
first filtering means for applying, to each stored inspection target image, filtering for removing low-frequency components in that image;
model image creation means for creating, for each piece of identification information, a model image that is an average image obtained by averaging those filtered inspection target images of the divided regions whose identification information corresponds across the dies; and
detection means for detecting a defect of the microstructure by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.

[2] The defect detection apparatus according to claim 1, wherein the detection means comprises:
difference extraction means for extracting, as a difference image, the difference between each model image and each inspection target image whose identification information corresponds to that model image; and
second filtering means for applying filtering for removing, from among the connected pixel regions in the extracted difference image having a luminance equal to or higher than a predetermined value, those pixel regions whose area is equal to or smaller than a predetermined area.

[3] The defect detection apparatus according to claim 2, wherein the model image creation means comprises means for calculating an average luminance value for each pixel constituting the inspection target images whose identification information corresponds.

[4] The defect detection apparatus according to claim 2, wherein the imaging means successively images the microstructures in the divided regions having corresponding identification information across the dies.

[5] The defect detection apparatus according to claim 2, wherein the imaging means images the microstructures in all the divided regions within one die and then images the microstructures in the divided regions of another die adjacent to that one die.

[6] The defect detection apparatus according to claim 2, wherein the microstructure is a container for screening inspection having a plurality of recesses, each with a thin-film bottom surface for introducing a reagent and an antibody that cross-reacts with the reagent, and a plurality of holes provided in the bottom surface of each recess for discharging the reagent that does not react with the antibody.

[7] The defect detection apparatus according to claim 6, wherein the model image creation means comprises means for aligning the inspection target images whose identification information corresponds to each model image, prior to averaging them, on the basis of the shape of each recess of the container in those images.

[8] The defect detection apparatus according to claim 6, wherein the difference extraction means comprises means for aligning each model image with each corresponding inspection target image, prior to extracting the difference, on the basis of the shape of each recess of the container in the model image and in the inspection target image whose identification information corresponds to that model image.

[9] The defect detection apparatus according to claim 2, wherein the microstructure is an electron beam irradiation plate having a plate member with a plurality of window holes for emitting a plurality of electron beams, and thin films provided so as to cover the window holes.

[10] The defect detection apparatus according to claim 9, wherein the model image creation means comprises means for aligning the inspection target images whose identification information corresponds to each model image, prior to averaging them, on the basis of the shape of each window hole of the electron beam irradiation plate in those images.

[11] The defect detection apparatus according to claim 9, wherein the difference extraction means comprises means for aligning each model image with each corresponding inspection target image, prior to extracting the difference, on the basis of the shape of each window hole of the electron beam irradiation plate in the model image and in the inspection target image whose identification information corresponds to that model image.

[12] A defect detection method comprising the steps of:
imaging a microstructure formed on each of a plurality of dies on a semiconductor wafer, for each of a plurality of divided regions into which the region of each die is divided;
illuminating the microstructure being imaged;
storing each captured image of a divided region as an inspection target image in association with identification information that identifies the position of that divided region within each die;
applying, to each stored inspection target image, filtering for removing low-frequency components in that image;
creating, for each piece of identification information, a model image that is an average image obtained by averaging those filtered inspection target images of the divided regions whose identification information corresponds across the dies; and
detecting a defect of the microstructure by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.

[13] The defect detection method according to claim 12, wherein the detecting step comprises the steps of:
extracting, as a difference image, the difference between each model image and each inspection target image whose identification information corresponds to that model image; and
applying filtering for removing, from among the connected pixel regions in the extracted difference image having a luminance equal to or higher than a predetermined value, those pixel regions whose area is equal to or smaller than a predetermined area.

[14] An information processing apparatus comprising:
storage means for storing images of microstructures formed on a plurality of dies on a semiconductor wafer, each image captured under illumination for each of a plurality of divided regions into which each die is divided, as inspection target images in association with identification information that identifies the position of each divided region within each die;
filtering means for applying, to each stored inspection target image, filtering for removing low-frequency components in that image;
model image creation means for creating, for each piece of identification information, a model image that is an average image obtained by averaging those filtered inspection target images of the divided regions whose identification information corresponds across the dies; and
detection means for detecting a defect of the microstructures by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.

[15] An information processing method comprising the steps of:
storing images of microstructures formed on a plurality of dies on a semiconductor wafer, each image captured under illumination for each of a plurality of divided regions into which each die is divided, as inspection target images in association with identification information that identifies the position of each divided region within each die;
applying, to each stored inspection target image, filtering for removing low-frequency components in that image;
creating, for each piece of identification information, a model image that is an average image obtained by averaging those filtered inspection target images of the divided regions whose identification information corresponds across the dies; and
detecting a defect of the microstructures by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.

[16] A program for causing an information processing apparatus to execute the steps of:
storing images of microstructures formed on a plurality of dies on a semiconductor wafer, each image captured under illumination for each of a plurality of divided regions into which each die is divided, as inspection target images in association with identification information that identifies the position of each divided region within each die;
applying, to each stored inspection target image, filtering for removing low-frequency components in that image;
creating, for each piece of identification information, a model image that is an average image obtained by averaging those filtered inspection target images of the divided regions whose identification information corresponds across the dies; and
detecting a defect of the microstructures by comparing each created model image with the filtered inspection target images whose identification information corresponds to that model image.
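As a rough, self-contained sketch, the core processing recited in claims 1 to 3 (pixel-wise averaging into a model image, thresholded difference extraction, and removal of small connected pixel regions) could look like the following. The binarization level, the minimum area, and the 4-connectivity are illustrative assumptions that the claims leave open.

```python
import numpy as np

def model_image(images):
    # Claim 3: the model image is the pixel-wise average of the inspection
    # target images whose identification information corresponds across dies.
    return np.mean(np.stack(images, axis=0), axis=0)

def difference_image(inspected, model, level=30):
    # Claim 2, first step: binarize the absolute difference between the
    # inspection target image and the model image (level is illustrative).
    return np.abs(inspected.astype(float) - model.astype(float)) >= level

def filter_small_blobs(mask, min_area=3):
    # Claim 2, second step: remove connected bright regions whose area is at
    # or below a threshold. 4-connectivity and min_area are assumptions.
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, comp = [(i, j)], []
                seen[i, j] = True
                while stack:  # flood-fill one connected component
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) > min_area:  # keep only sufficiently large blobs
                    for y, x in comp:
                        out[y, x] = True
    return out
```

Averaging across dies cancels die-specific defects, so any pixel region that survives both the difference threshold and the area filter marks a candidate defect unique to the inspected die.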
PCT/JP2007/001335 2006-12-04 2007-11-30 Defect detecting device, defect detecting method, information processing device, information processing method and program WO2008068894A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006-327095 2006-12-04
JP2006327095A JP2008139201A (en) 2006-12-04 2006-12-04 Apparatus and method for detecting defect, apparatus and method for processing information, and its program

Publications (1)

Publication Number Publication Date
WO2008068894A1 true WO2008068894A1 (en) 2008-06-12

Family

ID=39491811

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/001335 WO2008068894A1 (en) 2006-12-04 2007-11-30 Defect detecting device, defect detecting method, information processing device, information processing method and program

Country Status (2)

Country Link
JP (1) JP2008139201A (en)
WO (1) WO2008068894A1 (en)


Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5292043B2 (en) * 2008-10-01 2013-09-18 株式会社日立ハイテクノロジーズ Defect observation apparatus and defect observation method
US8554579B2 (en) 2008-10-13 2013-10-08 Fht, Inc. Management, reporting and benchmarking of medication preparation
BR112012007353B1 (en) 2009-10-06 2019-12-03 Compagnie Generale Des Etablissements Michelin process and device for detecting and evaluating projections appearing on cables leaving a twisting and rubbering process
KR20200018728A (en) * 2012-10-26 2020-02-19 백스터 코포레이션 잉글우드 Improved image acquisition for medical dose preparation system
CA2889352C (en) 2012-10-26 2021-12-07 Baxter Corporation Englewood Improved work station for medical dose preparation system
JP5995756B2 (en) * 2013-03-06 2016-09-21 三菱重工業株式会社 Defect detection apparatus, defect detection method, and defect detection program
EP3161778A4 (en) 2014-06-30 2018-03-14 Baxter Corporation Englewood Managed medical information exchange
CN104124183B (en) * 2014-07-25 2016-09-21 安徽北方芯动联科微系统技术有限公司 The failure analysis device of TSV wafer-level package of MEMS chip and the method for analysis thereof
US11575673B2 (en) 2014-09-30 2023-02-07 Baxter Corporation Englewood Central user management in a distributed healthcare information management system
US11107574B2 (en) 2014-09-30 2021-08-31 Baxter Corporation Englewood Management of medication preparation with formulary management
CA2969451A1 (en) 2014-12-05 2016-06-09 Baxter Corporation Englewood Dose preparation data analytics
EP3265989A4 (en) 2015-03-03 2018-10-24 Baxter Corporation Englewood Pharmacy workflow management with integrated alerts
USD790727S1 (en) 2015-04-24 2017-06-27 Baxter Corporation Englewood Platform for medical dose preparation
JP7345764B2 (en) * 2021-02-26 2023-09-19 株式会社アダコテック Inspection system and inspection program

Citations (10)

Publication number Priority date Publication date Assignee Title
JP2001091228A (en) * 1999-09-20 2001-04-06 Dainippon Screen Mfg Co Ltd Pattern inspection device
JP2002323458A (en) * 2001-02-21 2002-11-08 Hitachi Ltd Defect inspection management system and defect inspection system and apparatus of electronic circuit pattern
WO2004017707A2 (en) * 2002-08-20 2004-03-04 Precision Automation, Inc. Apparatus and method of processing materials
JP2004077390A (en) * 2002-08-21 2004-03-11 Toshiba Corp Pattern inspection apparatus
JP2004093317A (en) * 2002-08-30 2004-03-25 Hamamatsu Photonics Kk Method for aligning wafer and wafer inspecting device
JP2004317190A (en) * 2003-04-14 2004-11-11 Neomax Co Ltd Surface inspection method capable of judging unevenness at high speed and surface inspection system
JP2005156475A (en) * 2003-11-28 2005-06-16 Hitachi High-Technologies Corp Pattern defect inspection device and method
JP2006085182A (en) * 1994-07-13 2006-03-30 Kla Instr Corp Automated photomask inspection apparatus and method
JP2006100707A (en) * 2004-09-30 2006-04-13 Mitsubishi Heavy Ind Ltd Device for manufacturing element
JP2006242900A (en) * 2005-03-07 2006-09-14 Mitsubishi Chemicals Corp Sensor unit, reaction field cell unit and analyzing apparatus

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JPH03286383A (en) * 1990-04-02 1991-12-17 Sumitomo Metal Ind Ltd Pattern comparing device and surface defect inspecting device
JPH09265537A (en) * 1996-03-29 1997-10-07 Hitachi Ltd Image processing method
US7068363B2 (en) * 2003-06-06 2006-06-27 Kla-Tencor Technologies Corp. Systems for inspection of patterned or unpatterned wafers and other specimen


Cited By (4)

Publication number Priority date Publication date Assignee Title
WO2020187319A1 (en) * 2019-03-21 2020-09-24 深圳中科飞测科技有限公司 Detection method and detection system
US11881186B2 (en) 2019-03-21 2024-01-23 Skyverse Technology Co., Ltd. Detection method and detection system
CN116757973A (en) * 2023-08-23 2023-09-15 成都数之联科技股份有限公司 Automatic repair method, system, equipment and storage medium for panel products
CN116757973B (en) * 2023-08-23 2023-12-01 成都数之联科技股份有限公司 Automatic repair method, system, equipment and storage medium for panel products

Also Published As

Publication number Publication date
JP2008139201A (en) 2008-06-19

Similar Documents

Publication Publication Date Title
JP4065893B1 (en) Defect detection device, defect detection method, information processing device, information processing method, and program thereof
JP4102842B1 (en) Defect detection device, defect detection method, information processing device, information processing method, and program thereof
WO2008068894A1 (en) Defect detecting device, defect detecting method, information processing device, information processing method and program
JP5272604B2 (en) Manufacturing method of color filter
JP5553716B2 (en) Defect inspection method and apparatus
JP2005158780A (en) Method and device for inspecting defect of pattern
JP2009016455A (en) Substrate position detecting device and substrate position detecting method
JP2014038045A (en) Inspection device, illumination, inspection method, program and substrate producing method
CN108458972B (en) Light box structure and optical detection equipment applying same
JP5765713B2 (en) Defect inspection apparatus, defect inspection method, and defect inspection program
JP4074624B2 (en) Pattern inspection method
JP2011163804A (en) Foreign matter detection device and method
JP4408902B2 (en) Foreign object inspection method and apparatus
JP2008268055A (en) Foreign material inspecting device, and foreign material inspection method
JP5531405B2 (en) Periodic pattern unevenness inspection method and inspection apparatus
CN112262313A (en) Foreign matter inspection device and foreign matter inspection method
JP4009595B2 (en) Pattern defect inspection apparatus and pattern defect inspection method
JP2004117150A (en) Pattern defect inspection device and pattern defect inspection method
JP2008139088A (en) Visual examination method
CN112285116B (en) Defect detection device and method
CN114222913B (en) Wafer appearance inspection device and method
TWI776152B (en) Inspection apparatus for equipment of handling electronic components
JP2010160063A (en) Foreign substance inspection apparatus of charged particle beam exposure mask and inspection method of the same
JP2009128476A (en) Ink head discharge inspection method and discharge inspection apparatus
US8331647B2 (en) Method of determining defect size of pattern used to evaluate defect detection sensitivity and method of creating sensitivity table

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07828112

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07828112

Country of ref document: EP

Kind code of ref document: A1