WO2006006525A1 - Image processing device and method - Google Patents
Image processing device and method
- Publication number
- WO2006006525A1 (PCT/JP2005/012661)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image processing
- model
- partial
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
Definitions
- The present invention relates to an image processing apparatus and method that capture a subject as a plurality of partial images and combine the captured partial images to form an entire image of the subject.
- Image information is used to inspect functionally significant defects in substrates such as FPD (flat panel display) substrates, PDP (plasma display panel) substrates, and semiconductor wafers, in industrial microscopes and inspection devices.
- To obtain a high-definition image of the entire subject, a method is often used in which the subject is divided into a plurality of regions, each region is imaged, and the resulting partial images are stitched together.
- In particular, a method in which high-magnification partial images are captured based on a low-magnification whole image and then combined is used not only industrially but in a variety of applications.
- Patent Document 1 Japanese Patent Laid-Open No. 2000-59606
- Patent Document 2 Japanese Patent Laid-Open No. 11-271645
- The image synthesizer of Patent Document 1, however, does not take into account objects with periodic patterns, such as the FPD substrates and PDP substrates handled by industrial inspection devices.
- The microscope image display device of Patent Document 2 likewise presents problems for industrial inspection when the pattern density is sparse: the overlapping areas used for stitching may contain no pattern at all, in which case the stitching process cannot be performed, or a combined image is generated in which the stitched areas are severely misaligned.
- The present invention has been made in view of such circumstances, and provides an image processing apparatus and method that generate a high-definition (high-resolution) image by stitching partial images, even for images containing periodic and/or sparse pattern portions (circuit patterns and wiring patterns) such as those of FPD substrates and PDP substrates.
- The image processing apparatus of the present invention stitches together partial images of an object, photographed at a predetermined resolution, with a predetermined overlapping area, and generates a target image of a predetermined size covering all or part of the object.
- The apparatus comprises: first imaging means for photographing the object at a first magnification to obtain first image information; second imaging means for photographing the object at a second magnification higher than the first magnification to obtain second image information as the partial images;
- image model generating means for generating, from the size of the target image and overlapping-region information expressing the degree of overlap between partial images, a model of the target image to be produced by stitching the partial images; shooting position calculation means for searching, using the model, for the placement of the target image within the first image information (for example, the pattern density evaluation value calculation unit 17 and the shooting position calculation unit 18 in the embodiment); and high-definition image generation means for generating the target image by stitching the partial images based on the placement found.
- The image processing method of the present invention stitches together partial images of an object, photographed at a predetermined resolution, with a predetermined overlapping area, and generates a target image of a predetermined size covering all or part of the object.
- The method comprises: a first photographing process of photographing the object at a first magnification to obtain first image information; a second photographing process of photographing the object at a second magnification higher than the first magnification to obtain second image information as the partial images; an image model generation process of generating, from the size of the target image and the overlapping-region information of the partial images, a model of the target image produced by stitching the partial images; a shooting position calculation process of searching, using the model, for the placement of the target image within the first image information; and a high-definition image generation process of generating the target image by stitching the partial images based on that placement.
- With the above configuration, the image processing apparatus forms in advance a model of the target image to be produced by stitching the partial images, based on the low-resolution (low-magnification) first image information.
- The shooting position calculation unit then searches for the placement of the target image by detecting, within the first image information, the optimum position for the overlapping regions of the model.
- In this way, the image processing apparatus of the present invention actively uses the portions that will be overlapped and stitched (that is, the overlapping regions at stitching time) when searching for the shooting positions of the partial images. The accuracy of stitching the partial images, that is, of matching the overlapping portions when generating the target image, is therefore improved over conventional methods, and a high-definition image of the desired high resolution can be generated with high accuracy.
- In the image processing apparatus of the present invention, the shooting position calculation means searches for the placement of the overlapping regions while moving the model by a predetermined movement distance within a search area set in advance in the first image information.
- Since the image processing apparatus of the present invention sets a search area of a predetermined size in advance, particularly when generating a high-definition image of an object composed of a repetitive pattern, and searches for the placement of the overlapping regions while moving the model from a predetermined position in a defined direction by a predetermined movement distance (for example, in units of several pixels), the search process can be performed at high speed.
- the image processing apparatus is characterized in that the photographing position calculation means searches for an arrangement position of the overlapping area in the search area based on the pattern information of the overlapping area.
- Since the image processing apparatus locates the overlapping regions based on their pattern information (for example, a pattern density evaluation value indicating how dense the pattern is), it can place each overlapping region where the pattern is dense. When stitching the partial images, positions where alignment is easy can therefore be chosen as the overlapping regions, and a high-resolution image can be generated with the desired high accuracy.
- In the image processing apparatus of the present invention, the imaging position calculation unit searches for the placement while changing the overlapping-region information of the model within the search area, based on the pattern information of the overlapping regions.
- Since the image processing apparatus changes the overlapping-region information required for stitching the partial images, for example the overlap rate of the overlapping regions, according to the pattern information of the image (for example, pattern density information), the overlap can be adapted as needed to a value suitable for matching, regardless of whether the substrate pattern is sparse or dense.
- The image processing apparatus of the present invention further has moving means for moving the object relative to the first photographing means and the second photographing means in predetermined distance units in the X and Y directions.
- The shooting position calculation means sets the shooting position of the target image on the object based on the placement of the target image detected with the model.
- Since the image processing apparatus of the present invention has such relative moving means, once a shooting position is detected it can move to that position and shoot; by performing the shooting process in real time, the generation speed of high-resolution, high-definition images can be improved.
- In the image processing apparatus of the present invention, the shooting positions of the partial images used for stitching are calculated based on the placement of the target image detected with the model.
- Since the image processing apparatus of the present invention sets the positions of the overlapping regions with the model, it can place them where the pattern of the overlapping region is dense.
- In the image processing apparatus of the present invention, the first image information and the second image information obtained by the first and second imaging units are each subjected to distortion correction and/or shading correction.
- As a result, the image processing apparatus of the present invention can generate high-definition images that are not affected by distortion or shading.
- According to the present invention, a model of the target image to be produced by stitching the partial images is formed in advance based on the low-resolution first image information. Using this model, the shooting positions of the partial images that generate the high-resolution target image, including the overlapping regions, are adjusted within a predetermined area of the wide-field first image information; an appropriate shooting position for each partial image can therefore be obtained by calculation within that wide field of view, and a high-definition image of the desired high resolution can be generated easily.
- FIG. 1 is a conceptual diagram showing a configuration example of a microscope apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing a configuration example of the image processing unit 5 in FIG.
- FIG. 3 is a conceptual diagram for explaining a model generated by the image model generation unit 16 of FIG.
- FIG. 4 is a conceptual diagram for explaining a Sobel filter.
- FIG. 5 is a conceptual diagram for explaining a pattern density evaluation value.
- FIG. 6 is a flowchart showing an operation example of the microscope apparatus including the image processing unit 5 according to the first embodiment.
- FIG. 7 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the first embodiment.
- FIG. 8 is a conceptual diagram illustrating the operation of the image processing unit 5 according to the first embodiment.
- FIG. 9 is a conceptual diagram illustrating the operation of the image processing unit 5 according to the first embodiment.
- FIG. 10 is a conceptual diagram illustrating maximum value detection processing within a search area for pattern density evaluation values.
- FIG. 11 is a flowchart showing an operation example of the microscope apparatus including the image processing unit 5 according to the second embodiment.
- FIG. 12 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
- FIG. 13 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
- FIG. 14 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
- FIG. 15 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
- FIG. 16 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
- FIG. 17 is a conceptual diagram illustrating the operation of the image processing unit 5 according to the second embodiment.
- FIG. 18 is a conceptual diagram for explaining the maximum value and the minimum value of the overlapping rate of partial image frames.
- FIG. 19 is a conceptual diagram for explaining an inspection apparatus according to a third embodiment.
- FIG. 20 is a conceptual diagram for explaining an inspection apparatus according to a fourth embodiment.
- FIG. 1 shows a configuration example of the first embodiment: a microscope equipped with the image processing function of the present invention.
- The microscope has a lens barrel 1 to which an objective lens 2 is attached, and a vertical drive mechanism that can drive the lens barrel in the Z-axis direction (up and down as viewed in the figure).
- the microscope Z-axis movement control unit 8 controls the vertical drive mechanism to move the lens barrel 1 up and down to adjust the focus on the object placed on the stage 4.
- The stage 4 is provided in the lower part of the microscope and has a mechanism (two-axis movement drive mechanism) for driving in the X and Y directions (the left-right and depth directions as seen in the figure). The object, a sample for observation, is placed on top of it.
- the stage movement control unit 6 performs movement control of the stage 4 in two axes, and adjusts the relative position between the objective lens 2 and the object.
- An imaging camera 3 is provided on the upper part of the lens barrel 1, and the video signal (image signal) output from the imaging camera 3 is transferred to the image processing unit 5 for various kinds of image processing.
- the imaging camera 3 is a CCD camera and outputs, for example, gradation (luminance) data for each RGB-compatible pixel as image information.
- the image processing unit 5, the stage movement control unit 6, and the microscope Z-axis movement control unit 8 are controlled by the system control unit 7 as necessary.
- FIG. 2 is a block diagram illustrating a configuration example of the image processing unit 5 of the embodiment.
- The portion surrounded by the broken line is the image processing unit 5, which includes an imaging control unit 11, a shading/distortion correction processing unit 12, a captured image data storage buffer unit 13, a first captured image reading unit 14, a second captured image reading unit 15, an image model generation unit 16, a pattern density evaluation value calculation unit 17, a shooting position calculation unit 18, an image generation unit 19, and an image storage unit 20.
- Under the control of the system control unit 7, the imaging control unit 11 changes the magnification by exchanging the objective lens 2, adjusts the focus via the microscope Z-axis movement control unit 8, and captures images with the imaging camera 3.
- Low-magnification image information is the first image information, that is, the whole image; high-magnification image information is the second image information, that is, a partial image.
- The shading/distortion correction processing unit 12 performs shading correction and distortion correction on each of the first image information and the second image information, compensating for the shading and distortion caused by the imaging system including the objective lens 2; the corrected images are then stored in the captured image data storage buffer unit 13 with magnification information added.
- This magnification information is added to the first image information and the second image information in the imaging control unit 11 via the system control unit 7 as lens information of the objective lens 2.
- The first captured image reading unit 14 reads, from the captured image data storage buffer unit 13, the first image information whose added magnification information indicates the low magnification, and stores it temporarily.
- The second captured image reading unit 15 reads, from the captured image data storage buffer unit 13, the second image information whose added magnification information indicates the high magnification (hereinafter, a partial image), and stores it temporarily.
- The image model generation unit 16 generates a model of the target image that will finally be produced by stitching the partial images. This model includes the overlapping areas that will be superimposed when the partial images are stitched together.
- The image model generation unit 16 generates the above model from the first magnification (the low magnification preset by the user), the second magnification (the high magnification), the size of the partial images to be stitched, and the size of the overlapping areas to be superimposed at stitching time, all of which are input from the system control unit 7.
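- To make the model concrete, the following is a minimal sketch of how such a model could be derived from those inputs; the Model tuple, the function name, and the geometric assumptions (frame footprint scaled by the magnification ratio, a uniform overlap rate) are illustrative and not taken from the patent.

```python
import math
from collections import namedtuple

Model = namedtuple("Model",
                   "cols rows frame_w frame_h overlap_w overlap_h width height")

def generate_model(target_w, target_h, cam_w, cam_h, mag1, mag2, overlap_rate):
    # Footprint of one partial-image frame in whole-image (first-magnification)
    # pixels: the sensor covers less of the object at the higher magnification.
    frame_w = int(cam_w * mag1 / mag2)
    frame_h = int(cam_h * mag1 / mag2)
    overlap_w = int(frame_w * overlap_rate)
    overlap_h = int(frame_h * overlap_rate)
    step_w, step_h = frame_w - overlap_w, frame_h - overlap_h
    # Smallest grid of frames whose stitched footprint covers the target size.
    cols = max(1, math.ceil((target_w - overlap_w) / step_w))
    rows = max(1, math.ceil((target_h - overlap_h) / step_h))
    return Model(cols, rows, frame_w, frame_h, overlap_w, overlap_h,
                 cols * step_w + overlap_w, rows * step_h + overlap_h)
```

- With parameters chosen so that the target spans two frames in each direction, this returns the four-frame (2 x 2) model used in the example of FIG. 8.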
- The pattern density evaluation value calculation unit 17 reads the model from the image model generation unit 16 and the first image information from the first captured image reading unit 14. The search area in which the generation position of the target image is sought is set in the first image information via the system control unit 7 (set while the user checks the screen).
- Starting with the model at a predetermined position, for example the upper left of the search area, the pattern density evaluation value calculation unit 17 moves the model within the search area in the X-axis and Y-axis directions by a predetermined movement distance, for example in units of several pixels, calculates the pattern density evaluation value (pattern information) in the overlapping areas at each position, and stores these values sequentially in correspondence with the positions at which they were calculated.
- The movement within the search area could be performed in units of one pixel, but depending on the target pattern there is then almost no change from one position to the next and the obtained pattern density evaluation values are nearly identical; the present invention therefore uses units of a predetermined number of pixels, cutting useless calculation time and improving the efficiency of the overlapping-region search.
- If the object has a periodic pattern, as in this embodiment, the movement distance is set according to the number of pixels in one pattern period, for example 1/5, 1/10, 1/50, or 1/100 of that number.
- If the minimum size of the pattern included in the overlapping area is known (for example, the width of a signal line through which current flows), the movement distance can instead be set according to the pattern size, for example 1, 2, or 3 times the minimum pattern width in pixels.
- Setting the movement distance according to the pattern size takes into account that the pattern density evaluation value changes when a whole pattern enters or leaves the overlapping area between one position and the next.
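- As a small illustration of these two rules for choosing the step, a sketch (the function name and the particular fractions are assumptions):

```python
def movement_step(period_px=None, min_pattern_px=None):
    # Periodic object: step by a fraction (here 1/10) of one pattern period.
    if period_px is not None:
        return max(1, period_px // 10)
    # Otherwise: a small multiple (here 2x) of the smallest pattern width.
    if min_pattern_px is not None:
        return max(1, 2 * min_pattern_px)
    return 1  # fall back to single-pixel steps
```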
- The pattern density evaluation value has, for each block of the partial-image size, four calculated values: the vertical-direction and horizontal-direction edge strengths described later.
- the pattern density evaluation value is calculated by the pattern density evaluation value calculation unit 17 according to the following flow.
- the pattern density evaluation value is obtained by paying attention to the edge strength for each direction (the magnitude of the luminance change in the pattern).
- the edge strength for each direction represents the edge strength in each of the vertical (up and down the screen) direction and the horizontal (left and right of the screen) direction.
- a Sobel filter is used as a method for calculating the edge strength.
- This Sobel filter multiplies each of the nine pixel values in the 3 x 3 neighborhood of a pixel of interest (the pixel itself and the pixels adjacent to it above, below, left, and right) by the corresponding coefficient of the mask shown in FIG. 4 (whose center corresponds to the pixel of interest) and sums the results.
- This is done with two coefficient matrices, one for the horizontal direction and one for the vertical direction. That is, for the pixel at the mask center (X, Y) (X is the horizontal screen coordinate, positive to the right and negative to the left; Y is the vertical screen coordinate), writing Abs(R) for the absolute value of a number R and P(X, Y) for the luminance at (X, Y), the horizontal strength EH(X, Y) and the vertical strength EV(X, Y) are obtained with the standard Sobel coefficients as:

  EH(X, Y) = Abs(P(X+1, Y-1) + 2P(X+1, Y) + P(X+1, Y+1) - P(X-1, Y-1) - 2P(X-1, Y) - P(X-1, Y+1))
  EV(X, Y) = Abs(P(X-1, Y+1) + 2P(X, Y+1) + P(X+1, Y+1) - P(X-1, Y-1) - 2P(X, Y-1) - P(X+1, Y-1))
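- A sketch of this computation with the standard Sobel masks follows; NumPy and the function name are assumptions, and the explicit loop is kept for clarity rather than speed.

```python
import numpy as np

# Standard 3x3 Sobel coefficient masks (FIG. 4): horizontal and vertical.
SOBEL_H = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.int32)
SOBEL_V = SOBEL_H.T

def edge_strengths(image):
    """Per-pixel horizontal and vertical edge strengths EH(X, Y) and
    EV(X, Y) for a 2-D luminance array (borders are left at zero)."""
    img = image.astype(np.int32)
    h, w = img.shape
    eh = np.zeros_like(img)
    ev = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            eh[y, x] = abs(int((patch * SOBEL_H).sum()))
            ev[y, x] = abs(int((patch * SOBEL_V).sum()))
    return eh, ev
```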
- The pattern density evaluation value calculation unit 17 sums the calculated per-pixel edge strengths over the target region, direction by direction, to obtain the horizontal edge strength total ΣEH and the vertical edge strength total ΣEV.
- For a pattern consisting only of, for example, horizontal stripes, the edge strength in one direction is extremely low: a vertical edge strength total ΣEV of some magnitude is obtained, but the horizontal edge strength total ΣEH is almost 0, because the luminance does not change in the horizontal direction.
- The pattern density evaluation value calculation unit 17 compares the edge strength totals ΣEH and ΣEV against a predetermined threshold Thres, and outputs a pattern density evaluation value only when each total is equal to or greater than Thres.
- The threshold takes the form Thres = 2Q x (the number of pixels subject to the Sobel filter operation), and the value actually used is Thres multiplied by a predetermined coefficient.
- The coefficient may be a small value such as 1 or 2 to suppress the effect of noise; when extracting areas where the edge strength is high, that is, where the pattern features are clear, a larger value is used according to Q (if the luminance gradation is 256, Q is a value between 10 and 15).
- The pattern density evaluation value calculation unit 17 then calculates the pattern density evaluation value PDEV from the totals ΣEH and ΣEV obtained above.
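- A hedged sketch of the thresholding just described; the reading Thres = 2Q x (pixel count) follows the Thres2 expression given for the second embodiment below, and returning the sum of the two totals as the evaluation value is an assumption, since the published text leaves the exact combination implicit.

```python
def pattern_density_evaluation(eh, ev, q=10, coeff=1):
    # Thres = 2*Q * (number of pixels subject to the Sobel operation),
    # scaled by a small coefficient; Q lies between 10 and 15 for 256
    # luminance gradations, per the description.
    thres = coeff * 2 * q * eh.size
    sum_eh, sum_ev = int(eh.sum()), int(ev.sum())
    if sum_eh < thres or sum_ev < thres:
        return None                   # pattern too sparse in one direction
    return sum_eh + sum_ev            # assumed combination into a single PDEV
```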
- From the position with the largest pattern density evaluation value selected by the pattern density evaluation value calculation unit 17, that is, the shooting position of the target image in the first image information (whole image), the photographing position calculation unit 18 obtains the shooting location of each partial image and outputs this shooting location information to the system control unit 7.
- The image generation unit 19 controls, through the system control unit 7, the microscope Z-axis movement control unit 8, the stage movement control unit 6, the imaging control unit 11, and the imaging camera 3, and stitches together the partial images of the model's multiple blocks.
- The image storage unit 20 stores the target image (high-definition image) generated by the image generation unit 19 by stitching the partial images.
- The system control unit 7 reads the target image out of the image storage unit 20 and displays it on a display device (not shown) when accessed by the user.
- The first embodiment described above uses the edge strength, which indicates changes in the luminance values of the image, that is, a directional "spatial characteristic" based on the luminance values.
- Alternatively, evaluation values built purely from luminance values can be used as the pattern information: for the histogram formed from the image luminance values, the average luminance, the difference between the minimum and maximum values (dynamic range), the mode, the median, or the variance (standard deviation).
- For example, using as pattern information the histogram frequency FR of the mode of the luminance values in the overlapping region and the standard deviation SD of that histogram, a pattern density evaluation value PDEV is obtained as PDEV = SD x 2^x + FR (writing the power of 2 as 2^x): bits 0 to x-1 hold the frequency FR and bits x and above hold the standard deviation SD, with FR < 2^x.
- In this case the pattern density evaluation value calculation unit 17 first looks only at the frequency FR and determines whether it is equal to or greater than a predetermined threshold; if it is, the standard deviation is also evaluated.
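- A sketch of this luminance-only alternative; the bit packing PDEV = SD x 2^x + FR follows the reconstruction above, the FR-first threshold check mirrors the text, and the names and x = 16 are assumptions.

```python
import numpy as np

def histogram_pdev(region, fr_threshold, x_bits=16):
    values = np.asarray(region, dtype=np.uint8).ravel()
    hist = np.bincount(values, minlength=256)   # luminance histogram
    fr = int(hist.max())                        # frequency of the mode
    if fr < fr_threshold:                       # FR is checked first
        return None
    sd = int(values.std())                      # standard deviation of luminance
    assert fr < 2 ** x_bits                     # FR must fit in the low bits
    return sd * (2 ** x_bits) + fr              # SD above, FR below
```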
- FIG. 6 is a flowchart showing a concrete operation example of the image processing apparatus of the first embodiment.
- The FPD substrate shown in FIG. 7 will be used as an example of the object. On the FPD substrate, pixel portions and the transistors that drive the pixels are arranged periodically.
- As processing parameters, the user sets in the system control unit 7, using an input device (not shown), the magnification of the whole image (first magnification), the magnification of the partial images (second magnification), the size of the combined image (target image), and the overlap rate of the partial images (step S1).
- Next, when acquisition of the target image starts, the system control unit 7 drives the stage 4 via the stage movement control unit 6 to adjust the relative position of the objective lens 2 and the object, and switches the objective lens 2 to obtain the first magnification.
- The system control unit 7 then adjusts the focus by moving the lens barrel 1 up and down via the microscope Z-axis movement control unit 8 and captures the whole image shown in FIG. 7, which is transferred to the shading/distortion correction processing unit 12 via the imaging control unit 11.
- A field frame for a partial image (the range that can be captured at the second magnification: the partial image frame) is the area inside the broken line shown in FIG. 7.
- The shading/distortion correction processing unit 12 performs distortion correction and shading correction on the input whole image and stores it in the captured image data storage buffer unit 13 (step S2).
- Next, the image model generation unit 16 generates a model of the target image having overlapping areas as shown in FIG. 3, based on the size of the target image (number of vertical pixels x number of horizontal pixels) and the overlap rate of the partial images used when generating it (step S3).
- That is, the image model generation unit 16 calculates the number of partial images and the size of the overlapping regions so that each overlapping region has the specified overlap rate with respect to a partial image (step S4).
- In the example of FIG. 8, the model consists of four partial image frames, and the overlapping areas are the hatched areas where two or more of the four frames overlap one another (the hatched portion, which forms a + shape).
- In other words, the target image is formed from four partial images, and the model of the target image's size is made up of four partial image frames.
- Next, in the whole image displayed on the display device, the user sets the search area in which the shooting position of the target image is to be searched for with the model (step S5).
- This search area can be an arbitrary part of the whole image, provided it is at least as large as the model.
- The pattern density evaluation value calculation unit 17 then calculates the pattern density evaluation value at each position while moving (shifting) the model in the X-axis and Y-axis directions by the predetermined movement distance, repeating the process until the entire search range has been covered; each calculated pattern density evaluation value is stored in the internal storage unit in association with its coordinate value (calculation position) in the whole image (step S6), and the process proceeds to step S7.
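- A sketch of this step-S6 sweep, reusing edge_strengths, pattern_density_evaluation, and the Model tuple from the earlier sketches; how the overlap strips are enumerated from the model is an assumption.

```python
def overlap_regions(m):
    # Vertical strips between horizontally adjacent frames and horizontal
    # strips between vertically adjacent frames, relative to the model origin.
    step_w, step_h = m.frame_w - m.overlap_w, m.frame_h - m.overlap_h
    strips = [(c * step_w, 0, m.overlap_w, m.height) for c in range(1, m.cols)]
    strips += [(0, r * step_h, m.width, m.overlap_h) for r in range(1, m.rows)]
    return strips

def search_best_position(eh, ev, model, search_area, step, q=10):
    x0, y0, x1, y1 = search_area      # search rectangle in whole-image pixels
    best, best_pos = -1, None
    for oy in range(y0, y1 - model.height + 1, step):
        for ox in range(x0, x1 - model.width + 1, step):
            total = 0
            for rx, ry, rw, rh in overlap_regions(model):
                pdev = pattern_density_evaluation(
                    eh[oy + ry:oy + ry + rh, ox + rx:ox + rx + rw],
                    ev[oy + ry:oy + ry + rh, ox + rx:ox + rx + rw], q=q)
                if pdev is None:      # one sparse overlap disqualifies this offset
                    break
                total += pdev
            else:
                if total > best:      # remember the densest placement so far
                    best, best_pos = total, (ox, oy)
    return best_pos, best
```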
- The pattern density evaluation value calculation unit 17 searches for the largest of the pattern density evaluation values stored in the internal storage unit and outputs the corresponding coordinate value as the optimum position of the target image for stitching (step S7).
- Conceptually, plotting the pattern density evaluation value in the Z-axis direction over each evaluated coordinate value (on the X-Y plane) gives a three-dimensional graph as in FIG. 10; the evaluation values at the coordinates are compared in turn to find the maximum, and the model position at that maximum, that is, its coordinate value, is output as the optimum target image generation position.
- Next, the shooting position calculation unit 18 calculates the shooting positions of the partial images from the generation position of the target image output by the pattern density evaluation value calculation unit 17 (step S8).
- That is, the shooting position calculation unit 18 takes the placement of each partial image frame of the model at that generation position as the shooting position of the corresponding partial image to be photographed at the second (high) magnification, and outputs the coordinate value of each frame as that partial image's position.
- Since the target image here is composed of four partial images, the shooting position calculation unit 18 outputs the coordinate values of the four corresponding partial image frames to the system control unit 7.
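- As a sketch, the mapping from the chosen model origin to the frame coordinates handed to the stage, reusing the Model tuple above (names are assumptions):

```python
def partial_image_positions(model, origin):
    ox, oy = origin
    step_w = model.frame_w - model.overlap_w
    step_h = model.frame_h - model.overlap_h
    # Top-left whole-image coordinate of every partial-image frame, row-major.
    return [(ox + c * step_w, oy + r * step_h)
            for r in range(model.rows) for c in range(model.cols)]
```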
- The system control unit 7 changes the objective lens 2 to the lens corresponding to the second magnification, moves the stage 4 via the stage movement control unit 6 to each partial-image coordinate position input from the shooting position calculation unit 18, adjusts the focus via the microscope Z-axis movement control unit 8, and captures each partial image with the imaging camera 3.
- The system control unit 7 repeats this process until all of the partial images constituting the target image have been photographed.
- The imaging control unit 11 outputs each partial image input from the imaging camera 3 to the shading/distortion correction processing unit 12.
- the shading / distortion correction processing unit 12 performs distortion correction and shading correction on the sequentially input partial images, and stores them in the captured image data storage buffer unit 13 (step S9).
- The image processing unit 5 reads the partial images constituting the target image from the captured image data storage buffer unit 13 and temporarily stores them in the second captured image reading unit 15.
- The image generation unit 19 sequentially reads the partial images from the second captured image reading unit 15 and, based on the model shown in FIG. 9, places each captured partial image at the position of its corresponding partial image frame, stitching the partial images together to generate the target image; the generated high-definition target image is stored in the image storage unit 20 (step S10).
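- A minimal sketch of the paste itself, assuming equally sized partial images in row-major order; a faithful implementation would first refine each offset by the pattern matching in the overlap described next, which this sketch omits.

```python
import numpy as np

def stitch(partials, rows, cols, overlap_w, overlap_h):
    # Later frames simply overwrite the shared overlap region.
    fh, fw = partials[0].shape
    step_w, step_h = fw - overlap_w, fh - overlap_h
    canvas = np.zeros((step_h * (rows - 1) + fh,
                       step_w * (cols - 1) + fw), dtype=partials[0].dtype)
    for i, img in enumerate(partials):
        r, c = divmod(i, cols)
        canvas[r * step_h:r * step_h + fh, c * step_w:c * step_w + fw] = img
    return canvas
```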
- When stitching, the image generation unit 19 aligns the joins by pattern matching, superimposing the patterns arranged in the overlapping area; for this reason the overlapping area must be an area whose pattern density evaluation value exceeds a predetermined density, that is, a predetermined threshold.
- Since the placement of the overlapping regions to be stitched is determined from pattern information such as the pattern density evaluation value, a stitched image can be generated with high accuracy even for an object with periodicity, such as the FPD substrate pattern shown in FIG. 7, and the method is well suited to stitching processes in which much of the pattern is sparse.
- the system control unit 7 reads the target image from the image storage unit 20 as necessary, and displays the target image on the display unit.
- the second embodiment has the same configuration as that of the first embodiment, and only differences from the first embodiment will be described below.
- FIG. 11 is a flowchart specifically showing an operation example in the second embodiment. The difference is that step S8 of the first embodiment is changed to step S15, and this step will be described.
- In the first embodiment, the shooting positions were determined from the optimum model position for stitching, based on the positions of the partial image frames within the model, and the search area was searched with the overlapping areas between the partial image frames of the model held fixed.
- In the second embodiment, by contrast, the positions of the partial image frames constituting the stitching model are themselves determined using the low-magnification whole image (first image information).
- In step S6, the pattern density evaluation value at each position is calculated while moving the model, whose overlapping areas are fixed, through the search area by the predetermined movement distance.
- In addition, a minimum pattern density threshold PDEV-Min is set.
- The pattern density evaluation value calculation unit 17 calculates the pattern density evaluation value at each movement position over the entire search range, storing each value that exceeds PDEV-Min in the internal storage unit in association with its coordinate value (calculation position) in the whole image; when the calculation of pattern density evaluation values over the whole search area is complete, the process proceeds to step S7.
- step S7 the pattern density evaluation value calculation unit 17 selects and outputs the largest pattern density evaluation value from the internal storage unit, as in the first embodiment.
- In step S15, the pattern density evaluation values of the overlapping regions of the model placed at the coordinate value corresponding to the selected pattern density evaluation value are recalculated.
- The overlapping areas between the partial image frames of the model are: area A between frames F1 and F2 in FIG. 12, area B between frames F3 and F4 in FIG. 13, area C between frames F1 and F3 in FIG. 14, and area D between frames F2 and F4 in FIG. 15.
- That is, the pattern density evaluation value for each of the regions A to D is calculated, for each partial image frame, from the image at the corresponding position in the low-magnification whole image.
- the pattern density evaluation value calculation unit 17 determines whether or not each region A to D exceeds a predetermined threshold value.
- In the first embodiment this threshold was defined for the horizontal-direction and vertical-direction pattern density evaluation values; in the second embodiment the unit of evaluation is the overlapping area of two partial image frames adjacent horizontally or vertically, so the value is defined by:

  Thres2 = 2Q x (the number of pixels subject to the Sobel filter operation in the overlapping region)

- When the pattern density evaluation value calculation unit 17 detects that the pattern densities of all the regions A to D exceed the threshold, the process proceeds to step S9, and thereafter the same processing as in the first embodiment is performed.
- In practice, the threshold Thres2 is multiplied by a predetermined coefficient. The coefficient may be a small value such as 1 or 2 to suppress the effect of noise; when extracting areas where the edge strength is high, that is, where the pattern features are clear, a larger value is used according to Q (if the luminance gradation is 256, Q is a value between 10 and 15).
- If, on the other hand, the pattern density evaluation value calculation unit 17 detects that the evaluation value of a region does not exceed the threshold Thres2, for example region A shown hatched in FIG. 16, it moves the partial image frame F1 rightward by the predetermined movement distance, enlarging region A, the overlapping area between frames F1 and F2 (FIG. 17).
- The pattern density evaluation value calculation unit 17 then recalculates the evaluation value of region A and checks whether it now exceeds Thres2; if so, the process proceeds to step S9, and if not, frame F1 is moved rightward again and region A is evaluated once more.
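- A sketch of this widen-and-retest loop; the eval_region callback, which returns the two directional edge-strength totals and the pixel count for the current overlap strip, is an assumed stand-in for the recalculation described above.

```python
def grow_overlap_until_dense(eval_region, overlap_w, frame_w, step, q=10):
    # Widen the overlap between two adjacent frames until its pattern is
    # dense enough to match on, or the 50% ceiling is reached.
    while overlap_w <= frame_w // 2:
        sum_eh, sum_ev, n_pixels = eval_region(overlap_w)
        thres2 = 2 * q * n_pixels          # Thres2 = 2*Q * (pixels), as read above
        if sum_eh >= thres2 and sum_ev >= thres2:
            return overlap_w               # proceed to step S9 with this overlap
        overlap_w += step                  # move frame F1 toward F2 and retry
    return None                            # give up; try the next-best model position
```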
- FIG. 18 is a conceptual diagram for explaining the overlapping rate of overlapping regions.
- The maximum overlap rate is 50%, the limit at which the same pattern is contained in only two partial images; above 50%, the same pattern would be included in three partial images.
- If the threshold still cannot be satisfied, the pattern density evaluation value calculation unit 17 performs the same processing on the entire overlapping area again, at the model coordinate with the second largest evaluation value.
- The minimum overlap rate is obtained as the ratio, to the whole partial image, of a prescribed number of pixels equal to a real-number multiple (1 or more) of the pixel count of the smallest pattern formed on the substrate.
- In FIG. 18, the prescribed value is twice the pixel count of the minimum pattern.
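- A small helper computing both limits as described (the function name and the default multiple are illustrative):

```python
def overlap_rate_bounds(frame_px, min_pattern_px, multiple=2):
    # Lower bound: the overlap must still contain a prescribed multiple of
    # the smallest pattern; upper bound: 50%, so that the same pattern
    # never falls into three partial images.
    return (multiple * min_pattern_px) / frame_px, 0.5
```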
- If the pattern density evaluation values of regions A to D of the model do not all exceed the threshold anywhere in the search region, the position where the overall evaluation value is largest in the whole image is determined first; then, for each overlapping portion of the partial image frames at that coordinate value, the positions of the frames are adjusted, changing the overlap rate of the overlapping areas until the pattern density evaluation value exceeds the threshold, and the shooting positions of the partial images to be photographed are determined.
- The second embodiment thus has a greater degree of search freedom than the first: starting from an appropriately set shooting position, the optimum shooting positions for stitching can be determined automatically using the pattern density evaluation value, even without searching the entire search area.
- Since the placement of the overlapping regions to be stitched is determined from pattern information such as the pattern density evaluation value, a combined image can be generated with high accuracy even in cases, such as the FPD substrate pattern shown in FIG. 7, where the stitching process would fail with arbitrarily set overlapping portions.
- the third embodiment shown in FIG. 19 is a large substrate inspection apparatus equipped with a microscope.
- the substrate inspection apparatus shown in FIG. 19 is the same as the first and second embodiments in the configuration of an observation system such as a microscope, objective lens 2, and imaging camera 3.
- The difference lies in the drive mechanism that moves the FPD substrate (the object) relative to the objective lens 2: the stage movement controller 6 drives the stage 4, on which the object is placed, in one axis direction only (upper-right/lower-left in FIG. 19: arrow O), while the system control unit 7 drives the microscope T itself in the single axis perpendicular to the stage's travel (upper-left/lower-right in FIG. 19: arrow P).
- In combination, the relative position of the objective lens 2 and the object can thus be moved in the X and Y directions.
- In the fourth embodiment, the target image, the high-definition image generated as in the first to third embodiments, is also used in the inspection device as a reference image (an image generated from a normal substrate for comparison) against which the image of the substrate under inspection is compared when detecting substrate defects.
- The inspection apparatus shown in FIG. 20 is provided with a line sensor as imaging means. After the imaging means has been adjusted with a calibration sample, the stage is moved in the direction of arrow G by the holding/moving means, and at every predetermined movement distance the line sensor detects the reflected light of the light emitted by the illumination means.
- The integrated control means compares the intensity of the detected reflected light with the value sampled immediately before; if they differ beyond a predetermined range, the location is registered as a defect candidate and its coordinate value is stored.
- The FPD substrate is then mounted on the stage 4 of the image processing apparatus of the first to third embodiments, and the coordinate values of the defect candidates are input to the system control unit 7.
- The system control unit 7 moves the stage 4 via the stage movement control unit 6 so that the position of the defect candidate comes under the objective lens 2, that is, to a position where the substrate portion of the defect candidate can be imaged by the imaging camera 3.
- The system control unit 7 then generates the target image as a high-definition image: it moves to the location corresponding to the optimum model position obtained as in the first to third embodiments, in a state where the position of the defect candidate is contained within the field at the second magnification.
- The system control unit 7 compares the captured image information containing the defect candidate with the target image generated as in the first and second embodiments by pattern matching, judging whether the pattern shape of the defect candidate differs from the pattern shape of the corresponding part of the target image serving as the reference image.
- If no difference is detected, the defect candidate is judged to be non-defective; if a difference is detected, it is judged to be defective.
- The determination result is displayed on the display device. In this way, both the inspection speed and the inspection accuracy can be improved.
- A program for realizing the functions of the image processing unit of FIGS. 1 and 2 may be recorded on a computer-readable recording medium, and the image processing may be performed by loading the program recorded on that medium into a computer system and executing it.
- the “computer system” here includes the OS and hardware such as peripheral devices.
- The “computer system” also includes a WWW system equipped with a homepage providing environment (or display environment).
- The “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or to a storage device such as a hard disk incorporated in a computer system.
- The “computer-readable recording medium” also includes anything that holds the program for a certain period of time, such as the volatile memory (RAM) inside a computer system acting as a server or client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
- the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
- the “transmission medium” for transmitting a program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
- The program may realize only a part of the functions described above, or may realize them in combination with a program already recorded in the computer system, that is, as a so-called differential file (differential program).
- As described above, according to the present invention, a model of the target image to be produced by stitching the partial images is formed in advance based on the low-resolution first image information; using this model, the shooting positions of the partial images that generate the high-resolution target image, including the overlapping regions, are adjusted within a predetermined area of the wide-field first image information, so that appropriate shooting positions for the partial images can be obtained by calculation within the wide field of view and a high-definition image of the desired high resolution can be generated easily.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Editing Of Facsimile Originals (AREA)
- Image Analysis (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006529001A JP4709762B2 (en) | 2004-07-09 | 2005-07-08 | Image processing apparatus and method |
KR1020077000543A KR100888235B1 (en) | 2004-07-09 | 2005-07-08 | Image processing device and method |
CN2005800228587A CN1981302B (en) | 2004-07-09 | 2005-07-08 | Image processing device and method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-203108 | 2004-07-09 | ||
JP2004203108 | 2004-07-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006006525A1 (en) | 2006-01-19 |
Family
ID=35783868
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/012661 WO2006006525A1 (en) | 2004-07-09 | 2005-07-08 | Image processing device and method |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP4709762B2 (en) |
KR (1) | KR100888235B1 (en) |
CN (1) | CN1981302B (en) |
TW (1) | TWI366150B (en) |
WO (1) | WO2006006525A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012523560A (en) * | 2009-04-10 | 2012-10-04 | エスエヌユー プレシジョン カンパニー リミテッド | Video centering method |
JP2015021891A (en) * | 2013-07-22 | 2015-02-02 | 株式会社ミツトヨ | Image measurement device and program |
JP2021004741A (en) * | 2019-06-25 | 2021-01-14 | 株式会社Fuji | Tolerance setting system, substrate inspection machine, tolerance setting method, and substrate inspection method |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120162377A1 (en) * | 2009-09-03 | 2012-06-28 | Ccs Inc. | Illumination/image-pickup system for surface inspection and data structure |
JP2011087183A (en) * | 2009-10-16 | 2011-04-28 | Olympus Imaging Corp | Imaging apparatus, image processing apparatus, and program |
BR112012031688A2 (en) * | 2010-06-15 | 2016-08-16 | Koninkl Philips Electronics Nv | method for processing a first digital image and computer program product for processing a first digital image |
US11336831B2 (en) | 2018-07-06 | 2022-05-17 | Canon Kabushiki Kaisha | Image processing device, control method, and program storage medium |
CN110441234B (en) * | 2019-08-08 | 2020-07-10 | 上海御微半导体技术有限公司 | Zoom lens, defect detection device and defect detection method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0785246A (en) * | 1993-09-10 | 1995-03-31 | Olympus Optical Co Ltd | Image synthesizer |
JPH11271645A (en) * | 1998-03-25 | 1999-10-08 | Nikon Corp | Microscopic image display device |
JP2000059606A (en) * | 1998-08-12 | 2000-02-25 | Minolta Co Ltd | High definition image preparation system |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2522611B2 (en) * | 1991-07-05 | 1996-08-07 | 大日本スクリーン製造株式会社 | Length measuring device |
JPH0560533A (en) * | 1991-09-04 | 1993-03-09 | Nikon Corp | Pattern inspection device |
EP0639023B1 (en) * | 1993-08-13 | 1997-06-04 | Agfa-Gevaert N.V. | Method for producing frequency-modulated halftone images |
JP3424138B2 (en) * | 1994-05-11 | 2003-07-07 | カシオ計算機株式会社 | Transparent substrate alignment method |
CN1204101A (en) * | 1997-06-26 | 1999-01-06 | 伊斯曼柯达公司 | Integral images with transitions |
US6470094B1 (en) * | 2000-03-14 | 2002-10-22 | Intel Corporation | Generalized text localization in images |
-
2005
- 2005-07-08 JP JP2006529001A patent/JP4709762B2/en not_active Expired - Fee Related
- 2005-07-08 KR KR1020077000543A patent/KR100888235B1/en not_active IP Right Cessation
- 2005-07-08 WO PCT/JP2005/012661 patent/WO2006006525A1/en active Application Filing
- 2005-07-08 TW TW094123201A patent/TWI366150B/en not_active IP Right Cessation
- 2005-07-08 CN CN2005800228587A patent/CN1981302B/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0785246A (en) * | 1993-09-10 | 1995-03-31 | Olympus Optical Co Ltd | Image synthesizer |
JPH11271645A (en) * | 1998-03-25 | 1999-10-08 | Nikon Corp | Microscopic image display device |
JP2000059606A (en) * | 1998-08-12 | 2000-02-25 | Minolta Co Ltd | High definition image preparation system |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012523560A (en) * | 2009-04-10 | 2012-10-04 | エスエヌユー プレシジョン カンパニー リミテッド | Video centering method |
JP2015021891A (en) * | 2013-07-22 | 2015-02-02 | 株式会社ミツトヨ | Image measurement device and program |
JP2021004741A (en) * | 2019-06-25 | 2021-01-14 | 株式会社Fuji | Tolerance setting system, substrate inspection machine, tolerance setting method, and substrate inspection method |
JP7277283B2 (en) | 2019-06-25 | 2023-05-18 | 株式会社Fuji | tolerance setting system, circuit board inspection machine, tolerance setting method, circuit board inspection method |
Also Published As
Publication number | Publication date |
---|---|
KR20070026792A (en) | 2007-03-08 |
JP4709762B2 (en) | 2011-06-22 |
JPWO2006006525A1 (en) | 2008-04-24 |
CN1981302A (en) | 2007-06-13 |
TW200606753A (en) | 2006-02-16 |
TWI366150B (en) | 2012-06-11 |
CN1981302B (en) | 2010-12-29 |
KR100888235B1 (en) | 2009-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2006006525A1 (en) | Image processing device and method | |
JP4799329B2 (en) | Unevenness inspection method, display panel manufacturing method, and unevenness inspection apparatus | |
JP4951496B2 (en) | Image generation method and image generation apparatus | |
WO2012053521A1 (en) | Optical information processing device, optical information processing method, optical information processing system, and optical information processing program | |
JP2010139890A (en) | Imaging apparatus | |
JP5196572B2 (en) | Wafer storage cassette inspection apparatus and method | |
CN112881419A (en) | Chip detection method, electronic device and storage medium | |
JP2013025466A (en) | Image processing device, image processing system and image processing program | |
KR20110030275A (en) | Method and apparatus for image generation | |
JP2004038885A (en) | Image feature learning type defect detection method, defect detection device and defect detection program | |
JP2010141700A (en) | Imaging apparatus | |
CN105335959B (en) | Imaging device quick focusing method and its equipment | |
CN103247548A (en) | Wafer defect detecting device and method | |
JP4752733B2 (en) | IMAGING DEVICE, IMAGING METHOD, AND IMAGING DEVICE DESIGNING METHOD | |
JP2009294027A (en) | Pattern inspection device and method of inspecting pattern | |
JP2015115918A (en) | Imaging apparatus and imaging method | |
JP5149984B2 (en) | Imaging device | |
JP5544894B2 (en) | Wafer inspection apparatus and wafer inspection method | |
JPH11287618A (en) | Image processing device | |
JP2004069645A (en) | Method and device for visual inspection | |
JP4851972B2 (en) | Relative position calculation device, relative position adjustment device, relative position adjustment method, and relative position adjustment program | |
JP4428112B2 (en) | Appearance inspection method and appearance inspection apparatus | |
WO2014084056A1 (en) | Testing device, testing method, testing program, and recording medium | |
JP2013034208A (en) | Imaging apparatus | |
TW202429379A (en) | Method for processing review images and review system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006529001 Country of ref document: JP |
WWE | Wipo information: entry into national phase |
Ref document number: 200580022858.7 Country of ref document: CN |
WWE | Wipo information: entry into national phase |
Ref document number: 1020077000543 Country of ref document: KR |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
WWP | Wipo information: published in national office |
Ref document number: 1020077000543 Country of ref document: KR |
122 | Ep: pct application non-entry in european phase |