WO2008068894A1 - Defect detection device and method; information processing device, method, and program - Google Patents


Info

Publication number
WO2008068894A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
identification information
images
model
inspected
Prior art date
Application number
PCT/JP2007/001335
Other languages
English (en)
Japanese (ja)
Inventor
Hiroshi Kawaragi
Original Assignee
Tokyo Electron Limited
Priority date
Filing date
Publication date
Application filed by Tokyo Electron Limited
Publication of WO2008068894A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N 21/956 Inspecting patterns on the surface of objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30148 Semiconductor; IC; Wafer

Definitions

  • Defect detection apparatus, defect detection method, information processing apparatus, information processing method, and program thereof
  • The present invention relates to a defect detection apparatus capable of inspecting the appearance of a microstructure such as MEMS (Micro Electro Mechanical Systems) formed on a semiconductor wafer and detecting defects such as foreign matter and scratches, and to a defect detection method, an information processing apparatus, an information processing method, and a program therefor.
  • MEMS devices integrate various functions from the fields of machinery, electronics, optics, chemistry, and the like, and are fabricated in particular using semiconductor microfabrication technology.
  • MEMS devices that have been put into practical use include acceleration sensors, pressure sensors, and airflow sensors as sensors for automobiles and medical use.
  • MEMS devices are also used in printer heads for ink jet printers, micromirror arrays for reflective projectors, and other actuators.
  • MEMS devices are also applied in fields such as chemical synthesis and bioanalysis such as protein analysis chips (so-called protein chips) and DNA analysis chips.
  • In Patent Document 1, for example, a technique is described in which a CCD (Charge Coupled Device) camera or the like images an arbitrary plurality of non-defective products, and pass/fail of an inspected product is determined based on those images.
  • In Patent Document 2, it is described that each of the reference patterns is imaged and the reference pattern images are stored; the positions of the reference patterns are aligned; an average value calculation or an intermediate (median) value calculation is performed between the image data for each pixel so that data with large variation and abnormal values are avoided; reference image data serving as a proper standard is thereby created; and pattern defects are detected by comparing the reference image data with inspection image data.
  • Patent Document 1: Japanese Patent Laid-Open No. 2005-265661
  • Patent Document 2: Japanese Patent Laid-Open No. 11-73513 (paragraph [0080], etc.)
  • Patent Document 3: Japanese Patent Laid-Open No. 2001-209798 (Figure 1, etc.)
  • In the techniques described above, the non-defective image data or reference image data serving as the inspection standard (hereinafter, model image data) is created from images of a plurality of non-defective products prepared separately from the images to be inspected. Prior to creating the model image data, it is therefore necessary to judge whether each product is non-defective and to select the non-defective products. In the inspection of extremely small structures such as MEMS devices, where even a few scratches or foreign particles constitute defects, preparing an absolutely good product (model) is itself difficult, as is maintaining and managing the model image data.
  • Furthermore, when the inspection object is a three-dimensional object with unevenness or curvature, uneven luminance may occur in the captured image due to illumination conditions, optical conditions, and the like, noise may be mixed in, and such noise may be misdetected as a defect.
  • In Patent Document 3, a method is described in which the entire surface of a wafer having a matrix-like repetitive pattern is imaged as a single image, a median filter or the like is applied to the captured image to obtain a processed image, and the luminance values of the extracted image are binarized to determine the presence or absence of a defective portion.
  • The object of the present invention is to provide a defect detection apparatus, a defect detection method, an information processing apparatus, an information processing method, and a program therefor that can detect defects in MEMS devices with high accuracy and efficiency, without requiring an absolute model image, while preventing false detection due to luminance unevenness, noise, and the like.
  • In order to achieve the above object, a defect detection apparatus according to one aspect of the present invention includes: imaging means for imaging a microstructure formed on each of a plurality of dies on a semiconductor wafer, for each divided region obtained by dividing the region of each die into a plurality; illumination means for illuminating the microstructure being imaged; storage means for storing each captured divided-region image as an inspection target image in association with identification information identifying the position of the divided region within each die; first filtering means for performing, on each stored inspection target image, filtering that removes low-frequency components in the image; model image creation means for creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions sharing that identification information across the dies; and detection means for detecting a defect of the microstructure by comparing each created model image with each filtered inspection target image corresponding to the identification information of that model image.
  • The microstructure is a so-called MEMS (Micro Electro Mechanical System).
  • the imaging means is, for example, a camera with a built-in imaging element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) sensor.
  • the illumination means is, for example, a flash light emitting unit such as a high-intensity white LED (Light Emitting Diode) or a xenon lamp.
  • The first filtering means is, in other words, a high-pass filter. Defects are, for example, scratches and foreign objects.
  • With this configuration, each inspection target image is captured under the same illumination conditions by the imaging means and the light source, each model image is created from those images, and defects in the plurality of microstructures are detected by comparing each model image with each inspection target image. Defects can therefore be detected accurately and stably, without being affected by changes in illumination conditions such as shadows or by fixed noise such as dirt on the lens of the imaging means.
  • The first filtering means can also remove luminance unevenness (low-frequency components) caused by optical-axis misalignment of the imaging means, the flatness of the microstructure, and the like. Since the model images are created and the defects are detected from the filtered images, this single filtering pass improves both the quality of the model images and the defect detection accuracy.
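The low-frequency removal described above can be sketched as subtracting a blurred (low-frequency) component from each inspection target image. This is an illustrative stand-in: the embodiment does not specify the kernel, so the box blur and its radius below are assumptions.

```python
def box_blur(img, radius):
    """Local mean over a (2*radius+1)^2 window, clamped at the image borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - radius), min(h, y + radius + 1))
            xs = range(max(0, x - radius), min(w, x + radius + 1))
            vals = [img[j][i] for j in ys for i in xs]
            out[y][x] = sum(vals) / len(vals)
    return out

def high_pass(img, radius=2):
    """High-pass filtering: the original image minus its low-frequency
    component, which removes slowly varying luminance unevenness."""
    low = box_blur(img, radius)
    h, w = len(img), len(img[0])
    return [[img[y][x] - low[y][x] for x in range(w)] for y in range(h)]
```

A uniformly lit region (pure low frequency) filters to zero, while small scratches and foreign matter, being high-frequency, survive the filtering.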
  • Furthermore, since each captured inspection target image is used both for the model image creation processing and for the defect detection processing, defects can be detected more efficiently than when model images are created from separately captured images.
  • The detection means may include difference extraction means for superimposing each model image and each inspection target image corresponding to its identification information and extracting the difference between the two images as a difference image, and second filtering means for performing filtering that removes, from the series of connected pixel regions whose luminance is equal to or higher than a predetermined value in the extracted difference image, any pixel region having an area equal to or smaller than a predetermined area.
  • The second filtering means performs the filtering by analyzing blocks of pixels having a predetermined grayscale value (or a value within a predetermined range) in the difference image, that is, by blob analysis.
  • This further filtering by the second filtering means prevents noise components in the difference image from being erroneously detected as defects, so the detection accuracy can be improved still further.
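A minimal sketch of the difference extraction and the second (blob) filtering: the model and inspection images are subtracted, pixels at or above a luminance threshold are grouped into connected regions, and regions whose area is at or below a minimum size are discarded as noise. The threshold and minimum-area values are illustrative assumptions, not values from the disclosure.

```python
from collections import deque

def difference_image(model, target):
    """Per-pixel absolute difference between two equally sized images."""
    return [[abs(m - t) for m, t in zip(mr, tr)]
            for mr, tr in zip(model, target)]

def blob_filter(diff, threshold, min_area):
    """Return connected above-threshold pixel regions larger than min_area."""
    h, w = len(diff), len(diff[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if diff[y][x] >= threshold and not seen[y][x]:
                blob, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:  # breadth-first flood fill of one blob
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and diff[ny][nx] >= threshold and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(blob) > min_area:  # small blobs are treated as noise
                    blobs.append(blob)
    return blobs
```

An isolated one-pixel spike is dropped, while a larger connected region survives as a defect candidate.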
  • The model image creation means may include means for superimposing the inspection target images corresponding to each piece of identification information and calculating, for each pixel constituting those images, the average of the luminance values.
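The per-pixel averaging by the model image creation means can be sketched as follows. The dictionary keyed by (die ID, region ID) is an assumed data layout for illustration, not the patent's actual storage format.

```python
from collections import defaultdict

def create_model_images(inspection_images):
    """inspection_images: dict mapping (die_id, region_id) -> grayscale image
    (list of rows). Returns dict mapping region_id -> model image obtained by
    averaging, pixel by pixel, the images sharing that region_id across dies."""
    groups = defaultdict(list)
    for (die_id, region_id), img in inspection_images.items():
        groups[region_id].append(img)
    models = {}
    for region_id, imgs in groups.items():
        n = len(imgs)
        h, w = len(imgs[0]), len(imgs[0][0])
        models[region_id] = [[sum(im[y][x] for im in imgs) / n
                              for x in range(w)] for y in range(h)]
    return models
```

Because every die carries the same pattern, averaging across dies suppresses die-specific defects while preserving the common structure.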
  • The imaging means may continuously image the microstructures in the divided regions having corresponding identification information across the dies.
  • Alternatively, the imaging means may image the microstructures of all the divided regions in one die and then image the microstructures of the divided regions of another die adjacent to that die.
  • The microstructure may be a container for screening inspection, having a plurality of recesses whose bottom surfaces are thin films into which a reagent and an antibody that cross-reacts with the reagent are introduced, and a plurality of holes provided in the bottom surface of each recess for discharging the reagent that does not react with the antibody.
  • the container is a protein chip.
  • cracks and scratches on the thin film (membrane) of the protein chip, foreign matter adhering to the thin film, etc. can be detected with high accuracy.
  • The model image creation means may include means for aligning the inspection target images, prior to averaging the inspection target images corresponding to the identification information of each model image, based on the shape of each recess of the container in those images.
  • By using the shape of each recess of the container, the overlapping positions of the images to be inspected can be matched accurately, and a higher-quality model image can be created.
  • The alignment is performed by changing the relative position of the images, for example by moving each image in the X and Y directions or rotating it in the θ direction.
  • The difference extraction means may include means for aligning each model image and each inspection target image, prior to extracting the difference, based on the shape of each recess of the container in the model image and in each inspection target image corresponding to the identification information of that model image.
  • The microstructure may be an electron beam irradiation plate including a plate member having a plurality of window holes for irradiating a plurality of electron beams, and thin films provided so as to cover the window holes.
  • The model image creation means may include means for aligning the inspection target images, prior to averaging the inspection target images corresponding to the identification information of each model image, based on the shape of each window hole of the electron beam irradiation plate in those images.
  • By using the shape of each window hole of the electron beam irradiation plate, the overlapping positions of the images to be inspected can be matched accurately, and a higher-quality model image can be created.
  • The difference extraction means may include means for aligning each model image and each inspection target image, prior to extracting the difference, based on the shape of each window hole of the electron beam irradiation plate in the model image and in each inspection target image corresponding to the identification information of that model image.
  • A defect detection method according to another aspect of the present invention includes the steps of: imaging, under illumination, a microstructure formed on each of a plurality of dies on a semiconductor wafer, for each divided region obtained by dividing the region of each die into a plurality; storing each captured image as an inspection target image in association with identification information identifying the position of each divided region; performing, on each stored inspection target image, filtering that removes low-frequency components in the image; creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions sharing that identification information across the dies; and detecting a defect of the microstructure by comparing each model image with each filtered inspection target image corresponding to its identification information.
  • The detecting step may include a step of superimposing each model image and each inspection target image corresponding to its identification information and extracting the difference between the two images as a difference image, and a step of performing filtering that removes, from the series of connected pixel regions whose luminance is equal to or higher than a predetermined value in the extracted difference image, any pixel region having an area equal to or smaller than a predetermined area.
  • An information processing apparatus according to still another aspect of the present invention includes: storage means for storing each image, captured under illumination of a microstructure formed on each of a plurality of dies on a semiconductor wafer for each divided region obtained by dividing each die into a plurality, as an inspection target image in association with identification information identifying the position of each divided region within each die; filtering means for performing, on each stored inspection target image, filtering that removes low-frequency components in the image; model image creation means for creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions corresponding to that identification information across the dies; and detection means for detecting a defect of the microstructure by comparing each created model image with each filtered inspection target image corresponding to the identification information of that model image.
  • The information processing apparatus is, for example, a computer such as a so-called notebook or desktop PC (Personal Computer).
  • An information processing method according to still another aspect of the present invention includes: storing each image, captured under illumination of a microstructure formed on each of a plurality of dies on a semiconductor wafer for each divided region obtained by dividing each die into a plurality, as an inspection target image in association with identification information identifying the position of each divided region within each die; performing, on each stored inspection target image, filtering that removes low-frequency components in the image; creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions corresponding to that identification information across the dies; and detecting a defect of the microstructure by comparing each created model image with each filtered inspection target image corresponding to the identification information of that model image.
  • A program according to still another aspect of the present invention causes a computer to execute the steps of: storing each image, captured under illumination of a microstructure formed on each of a plurality of dies on a semiconductor wafer for each divided region obtained by dividing each die into a plurality, as an inspection target image in association with identification information identifying the position of each divided region within each die; performing, on each stored inspection target image, filtering that removes low-frequency components in the image; creating, for each piece of identification information, a model image that is the average of the filtered inspection target images of the divided regions corresponding to that identification information across the dies; and detecting a defect of the microstructure by comparing each created model image with each filtered inspection target image corresponding to the identification information of that model image.
  • As described above, according to the present invention, an absolute model image is not required, and defects in MEMS devices can be detected with high accuracy and efficiency while preventing erroneous detection due to luminance unevenness, noise, and the like.
  • FIG. 1 is a diagram showing a configuration of a defect detection apparatus according to an embodiment of the present invention.
  • As shown in FIG. 1, the defect detection apparatus 100 includes a wafer table 2 on which a semiconductor wafer 1 made of, for example, silicon (hereinafter also simply referred to as wafer 1) is placed, an XYZ stage 3 for moving the wafer table 2 in the X, Y, and Z directions in the figure, a CCD camera 6 for imaging the wafer 1 from above, a light source 7 that illuminates the wafer 1 during imaging by the CCD camera 6, and an image processing PC (Personal Computer) 10 that controls the operation of each unit and performs image processing to be described later.
  • The wafer 1 is transported onto the wafer table 2 by a transport arm (not shown) and is suction-fixed to the wafer table 2 by suction means such as a vacuum pump (not shown).
  • Alternatively, a tray (not shown) capable of holding the wafer 1 may be prepared separately, and the tray may be suction-fixed with the wafer 1 held on it.
  • a protein chip is formed as a MEMS device.
  • the defect detection apparatus 100 is an apparatus for detecting defects such as foreign matters and scratches on the protein chip using the protein chip as an inspection object. Details of the protein chip will be described later.
  • The CCD camera 6 is fixed at a predetermined position above the wafer 1 and incorporates a lens, a shutter (not shown), and the like. Based on the trigger signal output from the image processing PC 10, the CCD camera 6 captures an image of the protein chip formed on a predetermined portion of the wafer 1, enlarged by the built-in lens, under the flash light emitted by the light source 7, and transmits the captured image to the image processing PC 10.
  • The XYZ stage 3 moves the wafer 1 in the vertical direction (Z direction), thereby changing the relative distance between the CCD camera 6 and the wafer 1, so that the focal position at which the CCD camera 6 images the wafer 1 can be changed.
  • the focal position may be varied by moving the CCD camera 6 in the Z direction instead of the XYZ stage 3.
  • the lens of the CCD camera 6 is configured as a zoom lens, and the protein chip can be imaged at different magnifications by changing the focal length.
  • In this embodiment, the magnification of the CCD camera 6 can be switched between two stages: about 7× (low magnification) and about 18× (high magnification). The field of view at low magnification is, for example, 680 × 510 (μm²), and at high magnification, for example, 270 × 200 (μm²), but the magnification and field size are not limited to these values.
  • Instead of the CCD camera 6, a camera incorporating another image sensor such as a CMOS sensor may be used.
  • The light source 7 is fixed at a predetermined position above the wafer 1 and includes a flash lamp composed of, for example, a high-intensity white LED or a xenon lamp, and a flash lighting circuit that controls lighting of the flash lamp. Based on the flash signal output from the image processing PC 10, the light source 7 illuminates the predetermined portion of the wafer 1 by emitting a high-intensity flash for a predetermined time of, for example, several microseconds.
  • The XYZ stage 3 includes a motor 4 for moving the X stage 11 and the Y stage 12 along the movement axes 13 in the X, Y, and Z directions, respectively, and an encoder 5 for determining the movement distance of the X stage 11 and the Y stage 12.
  • the motor 4 is, for example, an AC servo motor, a DC servo motor, a stepping motor, a linear motor, etc.
  • the encoder 5 is, for example, various motor encoders, a linear scale, or the like.
  • Each time the X stage 11 and the Y stage 12 move by a unit distance in the X, Y, and Z directions, the encoder 5 generates an encoder signal constituting the movement information (coordinate information) and outputs it to the image processing PC 10.
  • The image processing PC 10 receives the encoder signal from the encoder 5 and, based on the encoder signal, outputs a flash signal to the light source 7 and a trigger signal to the CCD camera 6.
  • the image processing PC 10 outputs a motor control signal for controlling the driving of the motor 4 to the motor 4 based on the encoder signal input from the encoder 5.
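The signal flow above (encoder pulses in, trigger and flash signals out) can be sketched as follows. The class name, method names, and the pitch value are illustrative assumptions, not the patent's actual interface.

```python
class ImageProcessingPC:
    """Emits trigger/flash signals each time the stage has advanced by one
    first-divided-area pitch, counted in encoder unit distances."""

    def __init__(self, pitch_units):
        self.pitch_units = pitch_units  # encoder units between imaging positions
        self.count = 0                  # units moved since the last shot
        self.triggers = 0               # images requested so far

    def on_encoder_signal(self):
        """Called once per unit distance moved by the X/Y stage."""
        self.count += 1
        if self.count >= self.pitch_units:
            self.count = 0
            self.triggers += 1
            # sent to the CCD camera 6 and the light source 7, respectively
            return ("trigger", "flash")
        return None
```

Deriving the shot timing from encoder counts rather than wall-clock time keeps imaging positions registered to the stage regardless of acceleration.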
  • FIG. 2 is a block diagram showing a configuration of the image processing PC 10.
  • The image processing PC 10 includes a CPU (Central Processing Unit) 21, a ROM (Read Only Memory) 22, a RAM (Random Access Memory) 23, an input/output interface 24, an HDD (Hard Disk Drive) 25, a display unit 26, and an operation input unit 27, which are electrically connected to one another via an internal bus 28.
  • The CPU 21 comprehensively controls each part of the image processing PC 10 and performs various calculations in the image processing described later.
  • the ROM 22 is a non-volatile memory that stores programs necessary for starting up the image processing PC 10 and other programs and data that do not require rewriting.
  • the RAM 23 is used as a work area of the CPU 21 and is a volatile memory that reads various data and programs from the HDD 25 and ROM 22 and temporarily stores them.
  • The input/output interface 24 connects the operation input unit 27, the motor 4, the encoder 5, the light source 7, and the CCD camera 6 to the internal bus 28, and is used to input operation signals from the operation input unit 27 and to exchange various signals with the motor 4, the encoder 5, the light source 7, and the CCD camera 6.
  • The HDD 25 stores, on a built-in hard disk, an OS (Operating System), various programs for performing the imaging processing and image processing described later, various other applications, image data such as the protein chip images captured by the CCD camera 6 as inspection target images and the model images (described later) created from them, and various data referenced in the imaging processing and image processing.
  • The display unit 26 includes, for example, an LCD (Liquid Crystal Display) or a CRT (Cathode Ray Tube), and displays the images captured by the CCD camera 6, the various processed images, and the operation screens.
  • the operation input unit 27 includes, for example, a keyboard and a mouse, and inputs an operation from a user in image processing described later.
  • FIG. 3 is a top view of the wafer 1.
  • 88 semiconductor chips 30 (hereinafter also simply referred to as chips 30 or dies 30) are formed on the wafer 1 in a grid.
  • the number of dies 30 is not limited to 88.
  • FIG. 4 is a top view showing one of the dies 30 on the wafer 1.
  • each die 30 is formed with a protein chip 35 having a plurality of circular recesses 50 over its entire surface.
  • Each die 30, that is, each protein chip 35 has a substantially square shape, and the length s of one side thereof is, for example, about several mm to several tens of mm, but is not limited to this dimension.
  • FIG. 5 is an enlarged view showing one recess 50 in the protein chip 35.
  • FIG. 5 (a) is a top view of the recess 50, and FIG. 5 (b) is a sectional view of the recess 50 in the Z direction.
  • a thin film (membrane) 53 having a plurality of holes 55 is formed on the bottom surface 52 of each recess 50 of the protein chip 35.
  • the hole 55 is formed over the entire surface of the circular bottom surface 52 of each recess 50.
  • The diameter d1 of each recess 50 is, for example, several hundred μm, the diameter d2 of each hole 55 is, for example, several μm, and the depth (height h) is, for example, several hundred μm, but the dimensions are not limited to these.
  • In the protein chip 35, latex fine particles (latex beads) are placed on the bottom surface 52 of each recess 50 as a carrier, and an antibody (protein) is introduced into the recess 50 as a reagent; the protein chip is a silicon container for screening proteins having specific properties that adsorb to the latex beads by antibody cross-reaction. The reagent (protein) that has not adsorbed to the latex beads is discharged through the holes 55 in the bottom surface 52, so that only the protein having the specific properties remains in the recess 50.
  • A thin film 53 such as a silicon oxide film is formed on one surface of the wafer 1 by a CVD (Chemical Vapor Deposition) method.
  • a photoresist is applied to the other surface of the wafer 1, unnecessary portions are removed by a photolithography technique, and etching is performed using the resist pattern as a mask.
  • a plurality of recesses 50 are formed on the wafer 1 while leaving the thin film 53.
  • Next, a photoresist is applied to the thin film 53 of each recess 50, the portions corresponding to the holes 55 are removed by photolithography, and etching is performed using the resist pattern as a mask.
  • In this way, the protein chip 35, composed of a plurality of recesses 50 whose thin films 53 have a large number of holes 55 formed in them as shown in FIG. 5, can be formed.
  • FIG. 6 is a flowchart showing a rough flow of operations until the defect detection apparatus 100 detects a defect.
  • First, the CCD camera 6 captures an image of each die 30 on which the protein chip 35 is formed at the low magnification (step 1001). Specifically, as shown in FIG. 7, each die is divided into, for example, 18 rows × 13 columns, a total of 234 first divided areas 71, and an image of each first divided area 71 is acquired by the CCD camera 6 under the flash of the light source 7. The number and aspect ratio of the first divided areas 71 are not limited to these values. Each first divided area 71 is assigned in advance an ID identifying its position, and the HDD 25 of the image processing PC 10 stores these IDs. With these IDs, the image processing PC 10 can identify the first divided areas 71 existing at the same position across different dies 30. Each die 30 is also assigned an ID, so the image processing PC 10 can identify which first divided area 71 of which die 30 a given first divided area 71 is.
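The identification scheme above can be sketched with a hypothetical numeric encoding in which an area ID packs the die ID together with the (row, column) position, so that areas at the same position in different dies share the same position index. The encoding is an illustrative assumption, not the patent's actual ID format.

```python
ROWS, COLS = 18, 13  # 18 x 13 = 234 first divided areas per die

def area_id(die, row, col):
    """Pack die ID and (row, col) position into a single area ID."""
    return die * ROWS * COLS + row * COLS + col

def decode(area):
    """Recover (die, row, col) from an area ID."""
    die, rest = divmod(area, ROWS * COLS)
    row, col = divmod(rest, COLS)
    return die, row, col

def same_position(a, b):
    """True if two areas sit at the same position in (possibly) different dies."""
    return decode(a)[1:] == decode(b)[1:]
```

Images sharing the same (row, col) across dies are exactly the ones averaged together into one model image later in the processing.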
  • Specifically, the image processing PC 10 outputs a motor drive signal to the motor 4 based on the encoder signal from the encoder 5 to move the XYZ stage 3, generates the trigger signal and the flash signal based on the encoder signal, and outputs the trigger signal to the CCD camera 6 and the flash signal to the light source 7.
  • Based on the flash signal, the light source 7 fires a flash of, for example, several microseconds onto the protein chip 35, and based on the trigger signal, the CCD camera 6 continuously images the first divided areas 71 of the protein chips 35 on the wafer 1 at a rate of, for example, 50 frames per second.
  • FIG. 8 is a diagram showing the locus of the imaging position when the CCD camera 6 images the protein chip 35 for each first divided area 71.
  • Two imaging paths are conceivable, as shown in FIG. 8 (a) and (b).
  • In the path of FIG. 8 (a), the CCD camera 6 starts from, for example, the leftmost die 30 among the dies 30 having the maximum Y coordinate out of the 88 dies 30 on the wafer 1, continuously images the first divided areas 71 of the 18 rows × 13 columns of that die 30 row by row, and then moves to the next die 30 and images all of its first divided areas 71 row by row.
  • That is, the image processing PC 10 moves the imaging position within each die 30 as follows: starting from the first divided area 71 in the uppermost row and leftmost column, it moves rightward in the X direction to the right end, moves down one row in the Y direction, moves leftward in the X direction to the left end, moves down one row again, and so on; when imaging of all the first divided areas 71 of one die 30 is completed, it moves to the adjacent die 30 and repeats the same movement. The image processing PC 10 outputs a motor drive signal to the motor 4 so as to realize this movement.
  • the CCD camera 6 continuously images each first divided area 71 based on the trigger signal output from the image processing PC 10 in accordance with this movement.
  • In the path of FIG. 8 (b), the CCD camera 6 continuously images the first divided areas 71 having corresponding IDs (existing at the same position) across the dies 30; that is, the imaging position is moved from die to die.
  • That is, the image processing PC 10 moves the imaging position of the CCD camera 6 so that, starting from, for example, the leftmost die 30 among the dies 30 having the maximum Y coordinate, it first passes over the first divided area 71 of each die 30 that has a corresponding ID and exists at the position where the X coordinate is minimum and the Y coordinate is maximum (the first divided areas 71 indicated by black circles), second over the first divided areas 71 adjacent to the first position in the X direction and having corresponding IDs (indicated by white circles), and so on, repeating the movement over the first divided areas 71 located at the same position across the dies 30. The CCD camera 6 repeats the operation of continuously imaging the plurality of first divided areas 71 having corresponding IDs for all the dies 30.
  • In this embodiment, the image processing PC 10 selects, from these two imaging paths, the path with the shorter imaging time and causes the CCD camera 6 to image along it.
  • the imaging path shown in Fig. 5 (a) is taken, the imaging interval of each first divided area 71 at the time of each imaging, that is, the movement interval of the XYZ stage 3 is the interval of each first divided area 71.
  • the movement interval of the XYZ stage 3 is the same as the interval of each die 30. Therefore, the CPU 21 of the image processing PC 10 can calculate the driving speed of the motor 4 from these movement intervals and the imaging frequency of the CCD camera 6. By multiplying this driving speed by the entire imaging path shown in FIGS.
  • the image processing PC 10 compares the respective imaging times to determine which imaging route (a) and (b) is used, the imaging time is faster, and the imaging time is faster. Select the imaging path.
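  • The timing comparison above can be sketched as follows. This is a minimal illustration under assumptions not stated in the patent: a fixed camera frame rate, and a stage speed chosen so that exactly one frame is captured per movement interval. All function names and numbers are hypothetical.

```python
# Sketch of the imaging-time comparison between the two candidate paths.
# Assumptions (not from the patent): fixed frame rate, one frame per step.

def drive_speed(step_interval_mm: float, frame_rate_hz: float) -> float:
    """Stage speed (mm/s) that yields one frame per movement interval."""
    return step_interval_mm * frame_rate_hz

def path_time(path_length_mm: float, step_interval_mm: float,
              frame_rate_hz: float) -> float:
    """Total imaging time (s) of a path traversed at the derived speed."""
    return path_length_mm / drive_speed(step_interval_mm, frame_rate_hz)

def faster_path(paths: dict[str, tuple[float, float]],
                frame_rate_hz: float = 10.0) -> str:
    """Return the label of the path with the shorter estimated time.
    `paths` maps a label to (path_length_mm, step_interval_mm)."""
    return min(paths, key=lambda k: path_time(*paths[k], frame_rate_hz))
```

A path that steps at the die pitch moves at a higher derived speed, so even over a longer route it can finish sooner, which is why the comparison is worth making.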
  • The image of each first divided area 71 captured by the CCD camera 6 is transmitted to the image processing PC 10 as an inspection target image, together with the ID identifying that first divided area 71, and is stored in the HDD 25 or RAM 23 via the input/output interface 24 of the image processing PC 10. In the present embodiment, the inspection target image captured by the CCD camera 6 is a so-called VGA (Video Graphics Array) size (640 × 480 pixels) image, but it is not limited to this size.
  • Furthermore, by moving the XYZ stage 3 in the Z direction, the distance between the CCD camera 6 and the protein chip 35 can be changed, so that inspection target images with different focal positions can be captured. Fig. 9 shows this situation.
  • Based on the focus signal from the image processing PC 10, the XYZ stage 3 moves upward (Z1 direction in the figure) and downward (Z2 direction in the figure), changing the distance between the CCD camera 6 and the protein chip 35 in, for example, three steps (focal points F1 to F3). That is, the XYZ stage 3 first moves in the Z2 direction so that the focus is on the upper surface 51 of the protein chip 35 (focal point F1); it then moves in the Z1 direction so that the focus is at approximately the middle position between the upper surface 51 and the bottom surface 52 of the protein chip 35 (focal point F2), and finally so that the focus is on the bottom surface 52 (focal point F3).
  • The number of focal positions is not limited to three. By capturing images at a plurality of different focal positions in this way, the defect detection apparatus 100 can acquire an image at each position in the Z direction even when the inspection target, like the protein chip 35 of the present embodiment, has a three-dimensional shape with thickness (depth or height) in the Z direction, thereby preventing defects from being missed.
  • The CCD camera 6 sorts the images captured along the path shown in Fig. 8(a) or (b) by focal position and transmits them to the image processing PC 10, which identifies them as inspection target images for each focal position and stores them in the HDD 25 or RAM 23. That is, when the focal points are F1 to F3 as described above, the CCD camera 6 repeats the movement along the imaging path of Fig. 8(a) or (b) three times in total, once for each focal position.
  • The CPU 21 of the image processing PC 10 acquires each inspection target image from the CCD camera 6 in parallel with the imaging process by the CCD camera 6, and performs a filtering process using a high-pass filter on each acquired inspection target image (step 102).
  • The protein chip 35 in the present embodiment has a thin film 53 on the bottom surface 52, and uneven brightness may occur depending on the flatness of the thin film 53, for example when the thin film 53 is bent. Luminance unevenness may also arise from deviation of the optical axis of the CCD camera 6 or from non-uniformity in how the light from the light source 7 strikes the chip. Such luminance unevenness is extracted as a difference in the difference extraction process with the model image described later, leading to erroneous defect detection.
  • An uneven brightness portion is a portion of the inspection target image where the brightness changes gently; that is, the luminance unevenness component is a low-frequency component. Therefore, in this embodiment, a high-pass filter is applied to each captured inspection target image to remove this low-frequency component.
  • FIG. 10 is a flowchart showing the detailed flow of this high-pass filter process.
  • First, the CPU 21 of the image processing PC 10 reads a copy of the inspection target image from the HDD 25 into the RAM 23 (step 61), and applies Gaussian blurring to it (step 62). The blur radius is set to about 15 to 16 pixels, for example, but is not limited to this value.
  • The output image of the Gaussian blurring process (hereinafter, the Gaussian-blurred image) is an image in which only the low-frequency components remain, the high-frequency components of the original inspection target image having been smoothed away. Next, the CPU 21 subtracts the Gaussian-blurred image from the original inspection target image (step 63).
  • That is, subtracting the Gaussian-blurred image removes, at each position in the original inspection target image, the low-frequency component of the corresponding position in the blurred image, so the original high-frequency components remain while the original low-frequency components are removed. The image obtained by this subtraction is therefore an image in which only the high-frequency components remain, with the low-frequency components removed from the original inspection target image. The CPU 21 updates the original inspection target image with the image after the subtraction process and stores it in the HDD 25 (step 64).
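  • The high-pass filtering of steps 61 to 64 can be sketched as below. This is a hedged illustration: a simple local-mean (box) blur stands in for the Gaussian blur of step 62, and the function names are assumptions, not from the patent.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Local-mean blur used as a cheap stand-in for the Gaussian blur
    of step 62 (the patent uses a radius of about 15-16 pixels)."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def highpass(img: np.ndarray, radius: int = 15) -> np.ndarray:
    """Step 63: subtract the blurred (low-frequency) image from the
    original, leaving only the high-frequency components."""
    img = img.astype(float)
    return img - box_blur(img, radius)
```

A flat region (pure low frequency, e.g. gentle luminance unevenness) maps to zero, while a sharp spike (a defect edge) survives almost unchanged.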
  • The CPU 21 then determines whether all of the first divided areas 71 have been imaged and whether the high-pass filtering has been performed on all of the inspection target images (steps 103 and 104). If it determines that all of the inspection target images have been captured and filtered (Yes), the process proceeds to creating a model image of each divided area using the filtered inspection target images (step 105). In the present embodiment, the imaging of the inspection target images and the high-pass filtering are performed in parallel, but the high-pass filtering may instead be performed after imaging has been completed for all the first divided areas 71 (that is, the processing of step 102 and that of step 103 may be reversed).
  • FIG. 11 is a flowchart showing the flow of processing until the image processing PC 10 creates a model image, and FIG. 12 conceptually shows how the image processing PC 10 creates the model image.
  • The CPU 21 of the image processing PC 10 reads from the HDD 25 into the RAM 23 the high-pass-filtered inspection target images having corresponding IDs across the dies 30 (step 41), and aligns the read inspection target images with one another (step 42).
  • The CPU 21 detects, for example, the edge portion of the recess 50 of the protein chip 35 in each of the inspection target images, which are images of the first divided areas 71 existing at the same position between the dies 30, and aligns the images while adjusting them by shifting in the X and Y directions and rotating in the θ direction so that the shapes overlap one another.
  • That is, the CPU 21 reads the inspection target images 40a to 40f, …, having corresponding IDs, obtained by imaging the first divided area 71a existing at the same position between the dies 30. In the present embodiment, the total number of inspection target images 40 having corresponding IDs is 88. The CPU 21 overlaps all 88 inspection target images 40 and aligns them based on the shape of the recess 50 and the like. Performing the alignment based on the shape of the recess 50 in this way makes easy and accurate alignment possible.
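  • The alignment step can be illustrated with a brute-force integer shift search. This is a sketch under assumptions: the patent aligns on the recess-edge shape with X/Y shifts and θ rotation, whereas this toy minimizes a sum of squared differences over X/Y shifts only (rotation omitted for brevity), and the names are hypothetical.

```python
import numpy as np

def best_shift(ref: np.ndarray, img: np.ndarray, max_shift: int = 3):
    """Exhaustive integer X/Y search: return the (dy, dx) shift of `img`
    that minimizes the sum of squared differences against `ref`."""
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = float(np.sum((ref.astype(float) - shifted) ** 2))
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

A distinctive feature such as the recess edge makes the error minimum sharp, which is the reason alignment on the recess shape is easy and accurate.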
  • Next, with the above alignment completed, the CPU 21 calculates the average luminance value for each pixel at the same position in the inspection target images 40 (step 43). When it has calculated the average luminance value for all the pixels of the inspection target images 40 of the first divided area 71a (Yes in step 44), the CPU 21 generates, based on the calculation result, an image composed of these average luminance values as the model image 45 and stores it in the HDD 25 (step 45).
  • The CPU 21 repeats the above processing and determines whether model images 45 have been created for all the first divided areas 71 of the dies 30 (step 46). When it determines that all the model images 45 have been created (Yes), the process ends.
  • In this way, the model image 45 is created based on actual inspection target images 40. Although each individual inspection target image 40 may contain defects such as foreign matter, scratches, or thin-film cracks, each die 30 is divided into a plurality of first divided areas 71 (234 in this embodiment) and the average luminance values are taken over a plurality of images (88 in this embodiment) from the dies 30, so the defects in individual inspection target images 40 are absorbed. A model image 45 very close to the ideal shape can thus be created, enabling highly accurate defect detection.
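  • The model-image computation of steps 43 to 45 reduces to a per-pixel mean over the aligned stack. A minimal sketch with illustrative names:

```python
import numpy as np

def model_image(aligned_images) -> np.ndarray:
    """Per-pixel average luminance over all aligned inspection images
    with the same ID (88 in the embodiment). A defect present in one
    image contributes only 1/N to the average and is largely absorbed."""
    stack = np.stack([img.astype(float) for img in aligned_images])
    return stack.mean(axis=0)
```

With ten images of which one carries a dark defect pixel, that pixel's model value moves only 10% toward the defect, so the later difference extraction flags it in the defective image rather than corrupting the model.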
  • In step 106, a difference extraction process is performed between the model image 45 and each high-pass-filtered inspection target image 40 for each first divided area 71.
  • Specifically, the CPU 21 aligns the model image 45 and each inspection target image 40 while adjusting them in the X, Y, and θ directions based on the shape of the recess 50 present in both images, in the same manner as the alignment performed when creating the model image 45; it then extracts the difference by subtracting one image from the other, binarizes the result, and outputs it as a difference image. A Blob is a block of pixels in the difference image having a predetermined grayscale value (or a value within a predetermined range). In step 107, the CPU 21 performs a process of extracting from the difference image only the Blobs having at least a predetermined area (for example, 3 pixels).
  • FIG. 13 shows the difference image before and after the Blob extraction process: Fig. 13(a) shows the difference image 60 before Blob extraction, and Fig. 13(b) shows the difference image after Blob extraction (hereinafter, the Blob extracted image 65).
  • In each figure, the white portions are portions that appear as differences between the model image 45 and the inspection target image 40. In the difference image 60, the luminance values of the original difference image have been enhanced, for example about 40 times, in order to emphasize the differences.
  • In the difference image 60, in addition to defects such as foreign matter and scratches, small noise 84 (shown in the portion surrounded by the white broken line) appears due to various factors such as contamination of the lens 14 of the CCD camera 6 and non-uniformity of the illumination from the light source 7. If this noise 84 remains it leads to erroneous defect detection, so it must be removed. The noise 84 has a smaller area than defects such as foreign matter or scratches; therefore, as shown in Fig. 13(b), by filtering the difference image 60 so as to remove Blobs below a predetermined area and extract only Blobs at or above that area, the noise 84 can be removed.
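  • The area-based Blob filtering can be sketched with a simple connected-component pass. Assumptions not in the patent: 4-connectivity and a 3-pixel minimum area; the names are illustrative.

```python
from collections import deque
import numpy as np

def blob_filter(binary: np.ndarray, min_area: int = 3) -> np.ndarray:
    """Keep only connected components (Blobs) of the binarized difference
    image whose area is >= min_area; smaller noise blobs are removed."""
    h, w = binary.shape
    keep = np.zeros((h, w), dtype=bool)
    seen = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                # flood-fill one component (4-connectivity)
                queue, comp = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_area:
                    for y, x in comp:
                        keep[y, x] = True
    return keep
```

A one-pixel noise speck is dropped while a multi-pixel defect blob survives, which is exactly the effect described for Fig. 13.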
  • By this Blob extraction process, only the foreign matter 82, such as dust, adhering to the protein chip 35 and the thin-film crack 81 in the recess 50 of the protein chip 35 are extracted into the Blob extracted image 65. At this stage, the CPU 21 does not recognize the types of these defects (foreign matter, crack, scratch, and so on) but merely recognizes them as defect candidates.
  • When a defect candidate has been extracted, the CPU 21 determines whether the protein chip 35 in which the defect candidate was detected needs to be imaged further at high magnification (with a narrow field of view) (step 109). For example, the CPU 21 determines whether a user operation has been input instructing that the first divided area 71 to which the inspection target image 40 containing the defect candidate belongs be imaged in more detail at high magnification. If it determines that high-magnification imaging is necessary (Yes), the first divided area 71 in which the defect candidate was detected and the other first divided areas 71 having the corresponding ID in each die 30 are each divided further into finer second divided areas 72, which are imaged by the CCD camera 6 at high magnification (step 113).
  • In the defect classification process described later, whether a candidate is a defect is determined, and the defect is classified, based on, for example, the area of the extracted Blob; with a Blob extracted image 65 created from an inspection target image captured at low magnification, however, the Blob area may not be calculated accurately. In that case the correct shape of the defect cannot be recognized and the defect cannot be accurately classified. Therefore, by imaging the protein chip 35 at a higher magnification, it becomes possible to accurately determine later whether a candidate is a defect and to perform the defect classification process.
  • FIG. 14 is a diagram conceptually showing a state in which the first divided region 71 in which the defect candidate is detected is imaged at a high magnification for each second divided region 72.
  • As shown in the figure, this first divided area 71a is further divided into a total of nine second divided areas 72, in 3 rows and 3 columns. The first divided areas 71 having IDs corresponding to the first divided area 71a are likewise each divided into second divided areas 72. Each second divided area 72 is given an ID identifying its position within each die 30, in the same way as each first divided area 71.
  • The CCD camera 6 images each second divided area 72 at the same size (VGA size) as the first divided areas 71; that is, it images the second divided areas 72 at three times the magnification used when imaging the first divided areas 71. Each captured image is stored in the HDD 25 of the image processing PC 10 as an inspection target image together with the ID of its second divided area 72.
  • Also in this imaging, the CPU 21 selects the faster path, as in Fig. 8(a) or (b) for the first divided areas 71. That is, the CPU 21 determines which is faster: a path that images all the second divided areas 72 in the first divided area 71 of one die 30 and then each second divided area 72 in the corresponding first divided area 71 of the next die 30, or a path that collectively images the second divided areas 72 having corresponding IDs across the corresponding first divided areas 71 of the dies 30; it then has the imaging performed along the faster path.
  • When the imaging of the second divided areas 72 in the first divided area 71 in which the defect candidate was detected and in the first divided areas 71 corresponding to it is finished (step 113), the CPU 21 performs, in the same manner as steps 102 to 107 described above, high-pass filtering (step 114) and model image creation (step 117) for each inspection target image, performs difference extraction for each inspection target image obtained by imaging the second divided areas 72 in the first divided area 71 in which the defect candidate was detected (step 118), and further performs the filtering process by Blob extraction (step 119).
  • Since each inspection target image of the second divided areas 72 is captured at a higher resolution than the inspection target images of the first divided areas 71, the area threshold (in pixels) of the Blobs extracted in the Blob extraction process of step 119 is set to a value larger than the threshold used in the Blob extraction process for each first divided area 71 in step 107.
  • FIG. 15 compares the Blob extracted images 65 extracted from the inspection target images of the first divided area 71 and the second divided area 72: Fig. 15(a) shows the Blob extracted image 65a extracted from the first divided area 71, and Fig. 15(b) shows the Blob extracted image 65b extracted from the second divided area 72.
  • As shown in Fig. 15(b), the first divided area 71 is divided into nine second divided areas 72 and the second divided area 72 in which the foreign matter 82 appears is imaged at high magnification; the foreign matter 82 is therefore displayed at high resolution and its area can be calculated accurately.
  • Note that high-magnification imaging may be performed automatically when a defect candidate is extracted in step 108, without the determination of the necessity of high-magnification imaging in step 109 described above.
  • When the performance of the image processing PC 10, the motor 4, and the encoder 5 is high and the processing time is within the allowable range, the second divided areas 72 may be imaged not only in the first divided area 71 from which the defect candidate was extracted but in all the first divided areas 71 of all the dies 30, and model images 45 may be created for all the second divided areas 72. In this case, the imaging of each second divided area 72 is performed without the necessity determination of step 109; if, after performing the high-pass filtering and model image creation, the CPU 21 determines that there is a first divided area 71 in which a defect candidate is detected, it may then perform the Blob extraction process for each second divided area 72 in that first divided area 71.
  • If it is determined in step 109 that high-magnification imaging is not necessary (No), or when the processing of the second divided areas 72 in steps 113 to 119 above has been completed, the CPU 21 performs the classification process for the defect candidates appearing in the Blob extracted image 65 (step 110).
  • Specifically, for each Blob that appears white in the Blob extracted image 65, the CPU 21 classifies, based on feature points such as area, perimeter, circularity, and aspect ratio, whether the Blob is a defect and whether the defect type is foreign matter, a scratch, or a crack. For this purpose, the image processing PC 10 collects sample images for each defect type, such as foreign matter, scratches, and cracks, in advance, saves their feature point data as a feature point database in the HDD 25 or the like, and compares the saved feature point data with the feature points detected from each Blob in the Blob extracted image 65 of the inspection target.
  • For example, the foreign matter in the present embodiment measures about several μm to several tens of μm, and the scratches have a length of about several μm to several hundred μm. Comparing foreign matter with scratches, scratches have an extremely horizontal or vertical aspect ratio and a longer perimeter than foreign matter. Thin-film cracks appear as a curved line at the edge of the recess 50, so the circularity of the recess 50 is lower than that of a normal one.
  • The image processing PC 10 stores these data as feature point data and classifies the defects by comparing them with the feature points of each detected Blob.
  • As described above, the protein chip 35 has holes 55 with a diameter of, for example, several μm in the thin film 53 of the bottom surface 52 of the recess 50, and these holes 55 serve to discharge the reagent. Therefore, even when foreign matter adheres inside the recess 50, if its diameter is smaller than that of the holes 55 it is discharged from the holes 55 together with the reagent, so there is no problem during screening with the protein chip 35. For foreign matter, therefore, the diameter of the holes 55 is used as a threshold, and foreign matter with a smaller diameter is not treated as a defect. Scratches and cracks, on the other hand, are treated as defects unconditionally, because the reagent would leak from them and normal screening could not be performed.
  • When the second divided areas 72 have been imaged at higher magnification, the CPU 21 measures the feature points using the Blob extracted image 65 extracted from those second divided areas 72 and classifies the various defects. By performing high-magnification imaging as necessary in this way, the processing after defect detection can proceed smoothly.
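  • The classification heuristics above can be sketched as a toy rule set. All numeric thresholds (the aspect-ratio cutoffs and the hole diameter) are illustrative assumptions; the patent compares measured feature points against a sample-derived feature point database rather than fixed constants.

```python
import math

def classify_blob(area_px: float, perimeter_px: float,
                  aspect_ratio: float, hole_diameter_px: float = 4.0) -> str:
    """Toy classifier over Blob feature points (area, perimeter,
    aspect ratio). Thresholds here are illustrative, not from the patent."""
    # Strongly elongated blobs read as scratches (extreme aspect ratio,
    # long perimeter relative to area).
    if aspect_ratio >= 5.0 or aspect_ratio <= 0.2:
        return "scratch"
    # Foreign matter smaller than the drain hole passes out with the
    # reagent and is not treated as a defect.
    equivalent_diameter = 2.0 * math.sqrt(area_px / math.pi)
    if equivalent_diameter < hole_diameter_px:
        return "not a defect"
    return "foreign matter"
```

The hole-diameter rule mirrors the screening argument above: only particles too large to be discharged through the holes 55 are kept as defects, while scratches are flagged regardless of size.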
  • In step 111, the CPU 21 judges for all defect candidates whether they are defects; when all defects have been classified (Yes in step 111), it outputs the Blob extracted image and the information on the detected defect types to the display unit 26 as the detection result (step 112), and ends the process.
  • the image processing PC 10 may display an image on the display unit 26 so that it can be recognized at a glance which type of defect exists at which position on the wafer 1.
  • On the basis of the output result, the user removes any foreign matter, and discards the protein chip 35 as a defective product if there is a scratch or a crack. If no defect candidate is detected in step 108, the protein chip 35 under inspection is treated as a non-defective product, and the defect detection process ends.
  • As described above, in the present embodiment a model image can be created based on the inspection target images 40 of each first divided area 71 or each second divided area 72, so a highly accurate defect detection process can be performed. Moreover, since the model image 45 is created based on inspection target images 40 captured under the same optical and illumination conditions, erroneous detection due to differences in these conditions can be prevented.
  • In the above embodiment, a protein chip is applied as the MEMS device to be inspected, but the MEMS device is not limited to this; for example, an electron beam irradiation plate (EB window) may also be inspected.
  • FIG. 16 shows the appearance of this electron beam irradiation plate: Fig. 16(a) is a top view, and Fig. 16(b) is a cross-sectional view in the Z direction of Fig. 16(a).
  • As shown in the figure, the electron beam irradiation plate 90 includes a plate 92 having a plurality of window holes 95 for irradiating an electron beam (EB), and a thin film 91 provided so as to cover each window hole 95.
  • The plate 92 is formed in a rectangular shape whose length w in the X direction and length l in the Y direction are each, for example, several tens of mm, and whose length h in the Z direction is, for example, about several mm, but it is not restricted to these lengths and shapes.
  • Each window hole 95 is, for example, a square with a side s of several mm, but is not limited to this length and shape, and may be rectangular. In Fig. 16, the number of window holes 95 is 36, in 4 rows × 9 columns, but the number is not limited to this.
  • This electron beam irradiation plate 90 constitutes an electron beam irradiation apparatus by being connected to the end of a vacuum vessel (not shown). The electron beam (EB) emitted from an electron beam generator provided inside the vacuum vessel passes through the window holes 95 and is irradiated into the atmosphere, as indicated by the arrow in Fig. 16(b).
  • This electron beam irradiation apparatus is used for various purposes such as sterilization, physical property modification, and chemical property modification of an object irradiated with an electron beam.
  • By providing the thin film 91, the electron beam can be irradiated while the vacuum state is maintained. Note that a plurality of thin films 91 may be stacked to form a multilayer film structure.
  • This electron beam irradiation plate 90 is also formed on each die 30 of the wafer 1 by an etching process using photolithography or the like, in the same manner as the protein chip 35 of the above embodiment; the size of each die is the same as that of the plate 92.
  • The defect detection apparatus 100 performs the same imaging, high-pass filtering, model image creation, and Blob extraction processes on the electron beam irradiation plate 90 as on the protein chip 35 described above, and detects defects such as foreign matter, scratches, and cracks on the electron beam irradiation plate 90. Imaging at low and high magnifications and at a plurality of focal positions in the Z direction is also possible.
  • In the alignment performed during model image creation and difference extraction, each inspection target image is aligned while being adjusted in the X, Y, and θ directions so that the edge shapes of the window holes 95 appearing in the images overlap.
  • For defect classification, the image processing PC 10 creates its own feature point data based on samples of the electron beam irradiation plate 90 and the like, and classifies the defects accordingly.
  • Besides these, other MEMS devices can be applied as inspection objects, such as various sensors (acceleration sensors, pressure sensors, air flow sensors), printer heads for ink-jet printers, micro-mirror arrays for reflective projectors, other actuators, and various biochips.
  • In the above embodiment, each image necessary for the image processing, such as the inspection target images 40, the model images 45, the difference images 60, and the Blob extracted images 65, is stored in the HDD 25; however, these images may instead be stored temporarily in the RAM 23, or in a buffer area provided separately from the RAM 23, and deleted when the defect classification process is completed. Images from which no difference was extracted in the difference extraction, that is, images in which no defect was detected, are unnecessary in the subsequent processing and may be deleted one by one as they are so determined. Likewise, once the second divided areas 72 have been imaged at higher magnification, the low-magnification inspection target images of the first divided areas 71 are no longer needed, so they may be deleted when the imaging of the second divided areas 72 is completed. Since the number of captured images is enormous, processing in this way reduces the storage consumption of the RAM 23 and the HDD 25 and thereby reduces the load on the image processing PC 10.
  • In the above embodiment, the filtering by the high-pass filter is performed in consideration of cases where the thin film 53 on the bottom surface 52 of each recess 50 of the protein chip 35 is bent; for a MEMS device whose surface has high flatness and in which luminance unevenness does not occur, this high-pass filter processing may be omitted. The image processing PC 10 may also measure the flatness of the imaging surface of the MEMS device to be inspected and decide whether to execute the high-pass filtering according to that flatness.
  • FIG. 1 is a diagram showing a configuration of a defect detection apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a configuration of an image processing PC in an embodiment of the present invention.
  • FIG. 3 is a top view of a wafer in one embodiment of the present invention.
  • FIG. 4 is a top view showing one of the dies on the wafer in one embodiment of the present invention.
  • FIG. 5 is an enlarged view showing one recess of a protein chip according to an embodiment of the present invention.
  • FIG. 6 is a flowchart showing a general flow of operations until a defect detection apparatus detects a defect in an embodiment of the present invention.
  • FIG. 7 is a diagram showing a state in which each die is divided into a plurality of divided regions in an embodiment of the present invention.
  • FIG. 8 is a diagram showing a locus of an imaging position when a CCD camera images a protein chip for each divided region in an embodiment of the present invention.
  • FIG. 9 is a diagram showing a state in which the CCD camera captures an inspection object image at different focal positions in an embodiment of the present invention.
  • FIG. 10 is a flowchart showing a detailed flow of high-pass filter processing in one embodiment of the present invention.
  • FIG. 11 is a flowchart showing the flow of processing until the image processing PC creates a model image in an embodiment of the present invention.
  • FIG. 12 is a diagram conceptually showing how an image processing PC creates a model image in an embodiment of the present invention.
  • FIG. 13 is a view showing a difference image before and after the Blob extraction process in one embodiment of the present invention.
  • FIG. 14 is a diagram conceptually showing a state in which a first divided area where a defect candidate is detected is imaged at a high magnification for each second divided area in one embodiment of the present invention.
  • FIG. 15 is a diagram showing a comparison of each Blob extracted image extracted from each inspection target image in the first divided region and the second divided region in the embodiment of the present invention.
  • FIG. 16 is a view showing the appearance of an electron beam irradiation plate in another embodiment of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Biochemistry (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Analytical Chemistry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Testing Or Measuring Of Semiconductors Or The Like (AREA)
  • Image Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The problem to be solved is to efficiently detect defects in a MEMS device, without necessarily having an absolute model image, while avoiding erroneous detection and maintaining a high level of precision. A defect detection apparatus (100) was therefore developed that captures an image of a protein chip (35) formed in each die (30) of a wafer (1) for each first divided region (71), each die being divided into a plurality of first divided regions. The defect detection apparatus stores each image, together with identification data (ID) identifying each divided region (71), as an inspected image, removes the low-frequency components of the inspected image with a high-pass filter, then calculates an average luminance for each pixel across the inspected images having corresponding IDs to form a model image for each first divided region (71). The defect detection apparatus computes the difference between the model image and each inspected image as a difference image, filters the difference image by Blob extraction, extracts any Blob with an area not less than a predetermined area as a defect Blob, and classifies the defect types according to feature quantities of the extracted Blobs.
PCT/JP2007/001335 2006-12-04 2007-11-30 Defect detecting device and method, information processing device, method, and program WO2008068894A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006327095A JP2008139201A (ja) 2006-12-04 2006-12-04 Defect detection device, defect detection method, information processing device, information processing method, and program therefor
JP2006-327095 2006-12-04

Publications (1)

Publication Number Publication Date
WO2008068894A1 true WO2008068894A1 (fr) 2008-06-12

Family

ID=39491811

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/001335 WO2008068894A1 (fr) 2006-12-04 2007-11-30 Defect detecting device and method, information processing device, method, and program

Country Status (2)

Country Link
JP (1) JP2008139201A (fr)
WO (1) WO2008068894A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020187319A1 (fr) * 2019-03-21 2020-09-24 深圳中科飞测科技有限公司 Detection method and detection system
CN116757973A (zh) * 2023-08-23 2023-09-15 成都数之联科技股份有限公司 Automatic repair method, system, device and storage medium for panel products

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5292043B2 (ja) * 2008-10-01 2013-09-18 株式会社日立ハイテクノロジーズ Defect observation device and defect observation method
US8554579B2 (en) 2008-10-13 2013-10-08 Fht, Inc. Management, reporting and benchmarking of medication preparation
US9194813B2 (en) 2009-10-06 2015-11-24 Compagnie Generale Des Etablissements Michelin Method and device for the automatic inspection of a cable spool
SG11201503191UA (en) 2012-10-26 2015-05-28 Baxter Corp Englewood Improved work station for medical dose preparation system
KR101974258B1 (ko) * 2012-10-26 2019-04-30 백스터 코포레이션 잉글우드 Improved image acquisition for medical dose preparation system
JP5995756B2 (ja) * 2013-03-06 2016-09-21 三菱重工業株式会社 Defect detection device, defect detection method, and defect detection program
SG11201610717WA (en) 2014-06-30 2017-01-27 Baxter Corp Englewood Managed medical information exchange
CN104124183B (zh) * 2014-07-25 2016-09-21 安徽北方芯动联科微系统技术有限公司 Failure analysis device and analysis method for TSV wafer-level packaged MEMS chips
US11575673B2 (en) 2014-09-30 2023-02-07 Baxter Corporation Englewood Central user management in a distributed healthcare information management system
US11107574B2 (en) 2014-09-30 2021-08-31 Baxter Corporation Englewood Management of medication preparation with formulary management
EP3227851A4 (fr) 2014-12-05 2018-07-11 Baxter Corporation Englewood Dose preparation data analysis
EP3800610A1 (fr) 2015-03-03 2021-04-07 Baxter Corporation Englewood Pharmacy workflow management with integrated alerts
USD790727S1 (en) 2015-04-24 2017-06-27 Baxter Corporation Englewood Platform for medical dose preparation
JP7345764B2 (ja) * 2021-02-26 2023-09-19 株式会社アダコテック Inspection system and inspection program

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001091228A (ja) * 1999-09-20 2001-04-06 Dainippon Screen Mfg Co Ltd Pattern inspection apparatus
JP2002323458A (ja) * 2001-02-21 2002-11-08 Hitachi Ltd Defect inspection management system for electronic circuit patterns, and defect inspection system and apparatus for electronic circuit patterns
WO2004017707A2 (fr) * 2002-08-20 2004-03-04 Precision Automation, Inc. Material processing device and method
JP2004077390A (ja) * 2002-08-21 2004-03-11 Toshiba Corp Pattern inspection apparatus
JP2004093317A (ja) * 2002-08-30 2004-03-25 Hamamatsu Photonics Kk Wafer alignment method and wafer inspection apparatus
JP2004317190A (ja) * 2003-04-14 2004-11-11 Neomax Co Ltd Surface inspection method and surface inspection system capable of high-speed unevenness determination
JP2005156475A (ja) * 2003-11-28 2005-06-16 Hitachi High-Technologies Corp Pattern defect inspection apparatus and pattern defect inspection method
JP2006085182A (ja) * 1994-07-13 2006-03-30 Kla Instr Corp Automatic photomask inspection apparatus and method
JP2006100707A (ja) * 2004-09-30 2006-04-13 Mitsubishi Heavy Ind Ltd Element manufacturing apparatus
JP2006242900A (ja) * 2005-03-07 2006-09-14 Mitsubishi Chemicals Corp Sensor unit, reaction field cell unit, and analyzer

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03286383A (ja) * 1990-04-02 1991-12-17 Sumitomo Metal Ind Ltd Pattern comparison device and surface defect inspection device
JPH09265537A (ja) * 1996-03-29 1997-10-07 Hitachi Ltd Image processing method
US7068363B2 (en) * 2003-06-06 2006-06-27 Kla-Tencor Technologies Corp. Systems for inspection of patterned or unpatterned wafers and other specimen

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020187319A1 (fr) * 2019-03-21 2020-09-24 深圳中科飞测科技有限公司 Detection method and detection system
US11881186B2 (en) 2019-03-21 2024-01-23 Skyverse Technology Co., Ltd. Detection method and detection system
CN116757973A (zh) * 2023-08-23 2023-09-15 成都数之联科技股份有限公司 Automatic repair method, system, device and storage medium for panel products
CN116757973B (zh) * 2023-08-23 2023-12-01 成都数之联科技股份有限公司 Automatic repair method, system, device and storage medium for panel products

Also Published As

Publication number Publication date
JP2008139201A (ja) 2008-06-19

Similar Documents

Publication Publication Date Title
JP4065893B1 (ja) Defect detection device, defect detection method, information processing device, information processing method, and program therefor
JP4102842B1 (ja) Defect detection device, defect detection method, information processing device, information processing method, and program therefor
WO2008068894A1 (fr) Defect detecting device and method, information processing device, method, and program
JP5272604B2 (ja) Color filter manufacturing method
JP5553716B2 (ja) Defect inspection method and apparatus
JP2005158780A (ja) Pattern defect inspection method and apparatus
JP2009016455A (ja) Substrate position detection device and substrate position detection method
JP2014038045A (ja) Inspection device, illumination, inspection method, program, and substrate manufacturing method
CN108458972B (zh) Light box structure and optical inspection equipment using the same
JP5716278B2 (ja) Foreign matter detection device and foreign matter detection method
JP5765713B2 (ja) Defect inspection device, defect inspection method, and defect inspection program
JP4074624B2 (ja) Pattern inspection method
JP5929106B2 (ja) Inspection method and inspection device
JP2007183283A (ja) Foreign matter inspection method and device
JP2008268055A (ja) Foreign matter inspection device and foreign matter inspection method
JP5531405B2 (ja) Unevenness inspection method and inspection device for periodic patterns
CN112262313A (zh) Foreign matter inspection device and foreign matter inspection method
JP4009595B2 (ja) Pattern defect inspection apparatus and pattern defect inspection method
JP2004117150A (ja) Pattern defect inspection apparatus and pattern defect inspection method
JP2008139088A (ja) Appearance inspection method
CN112285116B (zh) Defect detection device and method
CN114222913B (zh) Wafer appearance inspection device and method
TWI776152B (zh) Inspection device for electronic component handling equipment
JP2010160063A (ja) Foreign matter inspection device for charged particle beam exposure masks and inspection method thereof
JP2009128476A (ja) Ink head ejection inspection method and ejection inspection device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07828112

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07828112

Country of ref document: EP

Kind code of ref document: A1