WO2006006525A1 - Image processing device and method - Google Patents

Image processing device and method

Info

Publication number
WO2006006525A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
model
partial
information
Prior art date
Application number
PCT/JP2005/012661
Other languages
French (fr)
Japanese (ja)
Inventor
Kazuhito Horiuchi
Original Assignee
Olympus Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation filed Critical Olympus Corporation
Priority to JP2006529001A priority Critical patent/JP4709762B2/en
Priority to KR1020077000543A priority patent/KR100888235B1/en
Priority to CN2005800228587A priority patent/CN1981302B/en
Publication of WO2006006525A1 publication Critical patent/WO2006006525A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination

Definitions

  • the present invention relates to an image processing apparatus and method for capturing an image of a subject divided into a plurality of partial images and combining the captured partial images to form an entire image of the subject.
  • Image information is used as a method for inspecting defects that cause functional problems in substrates in industrial microscopes and inspection devices that inspect FPD (flat panel display) substrates, PDP (plasma display) substrates, semiconductor wafers, etc.
  • the entire subject is divided into a plurality of regions, each of these regions is imaged, and the partial images obtained by the imaging are pasted to one another; this is a method often used to obtain a high-definition image of the entire subject.
  • in this method for obtaining a high-definition image, an approach is widely used in which partial images are captured at a high magnification based on an entire image captured at a low magnification and the captured partial images are then pasted together; it is used not only for industrial purposes but for various applications.
  • Patent Document 1 Japanese Patent Laid-Open No. 2000-59606
  • Patent Document 2 Japanese Patent Laid-Open No. 11-271645
  • in the high-definition image creation apparatus of Patent Document 1, when an industrial inspection device targets images of periodic patterns such as an FPD substrate or a PDP substrate, finding where a partial image corresponds in the entire image is difficult: because the pattern shape repeats periodically, the position for aligning partial images with one another cannot be uniquely identified, and cases occur in which the overlapped portions cannot be matched.
  • the microscope image display device shown in Patent Document 2, when used as an industrial inspection device targeting an image with a sparse pattern density, may, depending on the location of the designated region, attempt to overlap places where no pattern exists; because there is then no pattern in the overlapping area, the pasting process cannot be performed, or a combined image in which the bonded area is extremely shifted is generated.
  • the present invention has been made in view of such circumstances, and targets images, such as those of an FPD substrate or a PDP substrate, in which the pattern (circuit pattern and wiring pattern) includes periodic and/or sparse portions.
  • the present invention also provides an image processing apparatus and method for generating a high-definition (high-resolution) image by pasting partial images.
  • the image processing apparatus of the present invention is an image processing apparatus that pastes together partial images of an object photographed at a predetermined resolution, with a predetermined overlapping area, to generate a target image of all or a part of the object at a predetermined size. It comprises: first imaging means for photographing the object at a first magnification to obtain first image information; second imaging means for photographing the object at a second magnification higher than the first magnification to obtain second image information as the partial images; image model generation means for generating, from the size of the target image and overlapping area information expressing the degree of the overlapping area in the partial images, a model of the target image generated by pasting the partial images; shooting position calculation means (for example, the pattern density evaluation value calculation unit 17 and the shooting position calculation unit 18 in the embodiments) for searching, using the model, for the arrangement position in the first image information of the target image generated by pasting the partial images; and high-definition image generation means for generating the target image by pasting the partial images based on the arrangement position.
  • the image processing method of the present invention is an image processing method that pastes together partial images of an object photographed at a predetermined resolution, with a predetermined overlapping area, to generate a target image of all or a part of the object at a predetermined size. It comprises: a first photographing process of photographing the object at a first magnification to obtain first image information; a second photographing process of photographing the object at a second magnification higher than the first magnification to obtain second image information as the partial images; an image model generation process of generating, from the size of the target image and overlapping area information expressing the degree of the overlapping area in the partial images, a model of the target image generated by pasting the partial images; a shooting position calculation process of searching, using the model, for the arrangement position in the first image information of the target image generated by pasting the partial images; and a high-definition image generation process of generating the target image by pasting the partial images based on the arrangement position.
  • with the configuration described above, when pasting partial images, the image processing apparatus forms in advance a model of the target image to be composed from the partial images, based on the low-resolution (low-magnification) first image information, and uses this model to adjust, within a predetermined area of the wide-range first image information, the shooting positions of the partial images that generate the high-resolution target image, including the overlapping areas. Compared with the conventional approach of pasting together partial images photographed in advance at high resolution, an appropriate shooting position for the partial images can be obtained by calculation over a wider field of view, and a high-definition image of the desired high resolution can be generated easily.
  • in the image processing apparatus of the present invention, the shooting position calculation means searches for the arrangement position of the target image by detecting, in the first image information, the optimum arrangement position of the overlapping areas used when the model is pasted together.
  • with this configuration, the image processing apparatus of the present invention actively uses the overlapping portions that are superimposed and combined at pasting time when searching for the shooting positions of the partial images (that is, it can extract parts of the image pattern in which the overlapping areas are easy to align). When generating the target image, this improves the accuracy of the pasting positions of the partial images, that is, of the overlapped portions, so that a high-definition image of the desired high resolution can be generated more easily and with higher accuracy than before.
  • in the image processing apparatus of the present invention, the shooting position calculation means searches for the arrangement position of the overlapping areas while moving the model by a predetermined moving distance within a preset search area of the first image information.
  • with this configuration, particularly when generating a high-definition image of an object composed of a repetitive pattern, the image processing apparatus sets a search area of a predetermined size in advance and searches for the arrangement position of the overlapping areas while moving the model from a predetermined position in a defined direction by a predetermined moving distance (for example, in units of a plurality of pixels), so the search process can be performed at high speed.
  • the image processing apparatus is characterized in that the photographing position calculation means searches for an arrangement position of the overlapping area in the search area based on the pattern information of the overlapping area.
  • with this configuration, the image processing apparatus can detect a position where the pattern of the overlapping area is dense, based on the pattern information of the overlapping area (for example, the pattern density evaluation value indicating the density of the pattern). Therefore, when pasting partial images, a position where alignment for pasting is easy can be selected as the overlapping area, and a high-definition image of the desired high resolution can be generated with high accuracy.
  • in the image processing apparatus of the present invention, the imaging position calculation unit searches for the arrangement position while changing the overlapping area information in the model within the search area, based on the pattern information of the overlapping area.
  • with this configuration, the image processing apparatus changes the overlapping area information required for the partial image pasting process, for example the overlap rate of the overlapping area, according to the image pattern information (for example, pattern density information). It is therefore possible to change the overlapping area to a value suitable for matching as needed, regardless of whether the substrate pattern is sparse or dense.
  • the image processing apparatus of the present invention has moving means for moving the object relative to the first photographing means and the second photographing means in predetermined distance units in the X and Y directions, and the shooting position calculation means sets the shooting position of the target image on the object based on the arrangement position of the target image detected using the model.
  • the image processing apparatus of the present invention since the image processing apparatus of the present invention has a relative moving unit, when the shooting position is detected, it is possible to perform processing for moving to that position and shooting. It is possible to improve the generation speed of high-resolution and high-definition images by performing shooting processing in real time.
  • the image processing apparatus of the present invention is characterized in that, based on the photographing position and the arrangement position of the target image detected by the model, the photographing position of the partial image used for the pasting is calculated.
  • with this configuration, the image processing apparatus of the present invention sets the position of the overlapping area using the model, and therefore detects a position where the pattern of the overlapping area is dense, so that the shooting position of each partial image used for pasting can be calculated accurately.
  • in the image processing apparatus of the present invention, the first image information and the second image information obtained by the first and second imaging means are each subjected to distortion correction and/or shading correction.
  • with this configuration, the image processing apparatus of the present invention can generate high-definition images that are not affected by distortion or shading.
  • effects of the invention: according to the present invention, a model of the target image to be composed by pasting the partial images is formed in advance based on the low-resolution first image information, and this model is used to adjust, within a predetermined area of the wide-range first image information, the shooting positions of the partial images that generate the high-resolution target image, including the overlapping areas. An appropriate shooting position for the partial images can therefore be obtained by calculation over a wide field of view, and a high-definition image of the desired high resolution can be generated easily.
  • FIG. 1 is a conceptual diagram showing a configuration example of a microscope apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a configuration example of the image processing unit 5 in FIG.
  • FIG. 3 is a conceptual diagram for explaining a model generated by the image model generation unit 16 of FIG.
  • FIG. 4 is a conceptual diagram for explaining a Sobel filter.
  • FIG. 5 is a conceptual diagram for explaining a pattern density evaluation value.
  • FIG. 6 is a flowchart showing an operation example of the microscope apparatus including the image processing unit 5 according to the first embodiment.
  • FIG. 7 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the first embodiment.
  • FIG. 8 is a conceptual diagram illustrating the operation of the image processing unit 5 according to the first embodiment.
  • FIG. 9 is a conceptual diagram illustrating the operation of the image processing unit 5 according to the first embodiment.
  • FIG. 10 is a conceptual diagram illustrating maximum value detection processing within a search area for pattern density evaluation values.
  • FIG. 11 is a flowchart showing an operation example of the microscope apparatus including the image processing unit 5 according to the second embodiment.
  • FIG. 12 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
  • FIG. 13 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
  • FIG. 14 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
  • FIG. 15 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
  • FIG. 16 is a conceptual diagram for explaining the operation of the image processing unit 5 according to the second embodiment.
  • FIG. 17 is a conceptual diagram illustrating the operation of the image processing unit 5 according to the second embodiment.
  • FIG. 18 is a conceptual diagram for explaining the maximum value and the minimum value of the overlapping rate of partial image frames.
  • FIG. 19 is a conceptual diagram for explaining an inspection apparatus according to a third embodiment.
  • FIG. 20 is a conceptual diagram for explaining an inspection apparatus according to a fourth embodiment.
  • FIG. 1 is a block diagram showing a configuration example of the embodiment.
  • the first embodiment is a microscope equipped with the image processing function of the present invention.
  • the microscope is provided with a lens barrel 1 to which an objective lens 2 is attached, and with a vertical drive mechanism capable of driving the lens barrel 1 in the Z-axis direction (up and down as viewed in the figure).
  • the microscope Z-axis movement control unit 8 controls the vertical drive mechanism to move the lens barrel 1 up and down to adjust the focus on the object placed on the stage 4.
  • the stage 4 is provided in the lower part of the microscope, and has a mechanism (two-axis movement drive mechanism) for driving in the X direction and the Y direction (left and right direction and depth direction as seen from the figure).
  • the target object, which is a sample for observation, is placed on top of the stage 4.
  • the stage movement control unit 6 performs movement control of the stage 4 in two axes, and adjusts the relative position between the objective lens 2 and the object.
  • an imaging camera 3 is provided on the upper part of the lens barrel 1, and the video signal (image signal) output from the imaging camera 3 is transferred to the image processing unit 5 for various kinds of image processing.
  • the imaging camera 3 is a CCD camera and outputs, for example, gradation (luminance) data for each RGB-compatible pixel as image information.
  • the image processing unit 5, the stage movement control unit 6, and the microscope Z-axis movement control unit 8 are controlled by the system control unit 7 as necessary.
  • FIG. 2 is a block diagram illustrating a configuration example of the image processing unit 5 of the embodiment.
  • in FIG. 2, the portion surrounded by the wavy line is the image processing unit 5, which includes an imaging control unit 11, a shading/distortion correction processing unit 12, a captured image data storage buffer unit 13, a first captured image reading unit 14, a second captured image reading unit 15, an image model generation unit 16, a pattern density evaluation value calculation unit 17, a shooting position calculation unit 18, an image generation unit 19, and an image storage unit 20.
  • the imaging control unit 11 is controlled by the system control unit 7; the magnification is changed by exchanging the objective lens 2, the focus is adjusted by the microscope Z-axis movement control unit 8, and the imaging camera 3 captures low-magnification image information (first image information, that is, the whole image) and high-magnification image information (second image information, that is, partial images).
  • the shading/distortion correction processing unit 12 performs shading correction and distortion correction on the shading and distortion caused by the imaging system including the objective lens 2, for each of the first image information and the second image information, and then stores the corrected information in the captured image data storage buffer unit 13 with magnification information added.
  • This magnification information is added to the first image information and the second image information in the imaging control unit 11 via the system control unit 7 as lens information of the objective lens 2.
  • the first captured image reading unit 14 reads first image information in which the added magnification information indicates a low magnification from the captured image data storage buffer unit 13, and temporarily stores the first image information. To store.
  • the second captured image reading unit 15 reads second image information in which the added magnification information indicates a high magnification (hereinafter referred to as a partial image) from the captured image data storage buffer unit 13, and temporarily stores the partial image.
  • the image model generation unit 16 generates a model of the target image that is finally generated by pasting the partial images. This model includes the overlapping areas that will be superimposed when the partial images are pasted together.
  • the image model generation unit 16 generates the above model from the first magnification (a low magnification) and the second magnification (a high magnification) preset by the user and input from the system control unit 7, the size of the target image to be generated by pasting the partial images, and the size of the overlapping areas to be superimposed when pasting.
  • the pattern density evaluation value calculation unit 17 reads the model from the image model generation unit 16 and reads the first image information from the first captured image reading unit 14 in order to generate the target image.
  • the search area to be searched is set in the first image information by the system control unit 7 (set while the user confirms the screen).
  • the pattern density evaluation value calculation unit 17 places the model at a predetermined position, for example the upper left of the search area, and moves it within the search area in the X-axis and Y-axis directions by a predetermined movement distance, for example in units of a plurality of pixels, calculating the pattern density evaluation value (pattern information) in the overlapping areas at each position and sequentially storing these values in association with the positions at which they were calculated.
  • the movement within the search area may be performed in units of one pixel, but depending on the target pattern there is then little change before and after a move and the obtained pattern density evaluation values are almost the same; in the present invention, units of a predetermined number of pixels are therefore used in order to reduce wasted calculation time and improve the efficiency of the search for the overlapping areas.
  • as for the moving distance, if the object is a periodic pattern as in this embodiment, it is set according to the number of pixels in the pattern period, for example 1/5, 1/10, 1/50, or 1/100 of the number of pixels forming one period.
  • if the minimum size of the target pattern included in the overlapping area (for example, the width of a signal line through which current flows) is known, the movement distance can instead be set according to the size of the pattern, such as 1, 2, or 3 times the minimum pattern width in pixels.
  • setting the movement distance according to the size of the pattern takes into account that the pattern density evaluation value changes depending on whether a whole pattern appears in, or disappears from, the overlapping area before and after a move; a small sketch of both step-size choices follows.
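The following is a minimal sketch (not from the patent text; the function names and defaults are illustrative) of the two ways of choosing the movement step described above: as a fraction of the pattern period, or as a multiple of the minimum pattern width.

```python
# Illustrative helpers for choosing the search step (hypothetical names).

def step_from_period(period_px: int, fraction: float = 1 / 10) -> int:
    """Step as a fraction (e.g. 1/5, 1/10, 1/50, 1/100) of one pattern period."""
    return max(1, int(period_px * fraction))

def step_from_min_pattern(min_pattern_px: int, multiple: int = 2) -> int:
    """Step as 1x, 2x, 3x, ... the minimum pattern width in pixels."""
    return max(1, min_pattern_px * multiple)

# e.g. a 200-pixel pattern period searched at 1/10 of a period -> 20-pixel steps
step = step_from_period(200, 1 / 10)
```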
  • the pattern density evaluation value has calculated values for each block of the size of the partial image, in the horizontal direction and the vertical direction (the vertical and horizontal edge strengths described later).
  • the pattern density evaluation value is calculated by the pattern density evaluation value calculation unit 17 according to the following flow.
  • the pattern density evaluation value is obtained by paying attention to the edge strength for each direction (the magnitude of the luminance change in the pattern).
  • the edge strength for each direction represents the edge strength in each of the vertical (up and down the screen) direction and the horizontal (left and right of the screen) direction.
  • a Sobel filter is used as a method for calculating the edge strength.
  • this Sobel filter multiplies each of the nine pixel values in the neighborhood of a certain pixel of interest, that is, the pixel itself and the adjacent pixels above, below, left, and right, by the coefficients of a mask (whose center corresponds to the pixel of interest) as shown in Fig. 4, and sums the results.
  • this process is performed using two coefficient matrices, one for the vertical direction and one for the horizontal direction. That is, for the pixel at the center of the mask at coordinates (X, Y) (where X is the horizontal coordinate on the screen, positive to the right and negative to the left, and Y is the vertical coordinate), and writing the absolute value of a numerical value R as Abs(R), the horizontal strength EH(X, Y) and the vertical strength EV(X, Y) are obtained by the corresponding mask formulas.
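The exact coefficient masks of Fig. 4 are not reproduced in this text, so the sketch below assumes the standard 3×3 Sobel kernels; the `edge_strengths` helper name is also an assumption.

```python
import numpy as np

# Standard 3x3 Sobel kernels (assumed; Fig. 4's exact coefficients are not
# reproduced in this text).
SOBEL_H = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # responds to horizontal change
SOBEL_V = SOBEL_H.T                             # responds to vertical change

def edge_strengths(img: np.ndarray, x: int, y: int) -> tuple[float, float]:
    """EH(X, Y) and EV(X, Y) as absolute masked sums around pixel (x, y)."""
    patch = img[y - 1:y + 2, x - 1:x + 2].astype(float)   # 3x3 neighborhood
    eh = abs(float((patch * SOBEL_H).sum()))   # Abs(R) of the horizontal mask
    ev = abs(float((patch * SOBEL_V).sum()))   # Abs(R) of the vertical mask
    return eh, ev
```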
  • the pattern density evaluation value calculation unit 17 sums the calculated per-pixel edge strengths for each direction over the target region, computing the horizontal edge strength total value AEH and the vertical edge strength total value AEV.
  • for a pattern in which the luminance changes in only one direction, the edge strength in the other direction is extremely low: for example, a vertical edge strength total value AEV of a certain magnitude may be obtained while the horizontal edge strength total value AEH becomes almost "0", because no change in luminance exists in the horizontal direction.
  • the pattern density evaluation value calculation unit 17 compares the edge strength total value AEH and the edge strength total value AEV with a predetermined threshold value Thres, and outputs a pattern density evaluation value only when each edge strength total value is equal to or greater than the threshold Thres.
  • in practice, a value obtained by multiplying the threshold value Thres by a predetermined coefficient is used as the actual threshold value.
  • the coefficient to be multiplied may be a small value such as 1 or 2 to suppress the effect of noise; when extracting an area where the edge strength is high, that is, where the pattern features are clear, a large value according to the value of Q is used (if the luminance gradation is 256 and Q is 10, a value between 10 and 15).
  • in this way, the pattern density evaluation value calculation unit 17 calculates the pattern density evaluation value PDEV, as sketched below.
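A hedged sketch of the totals and thresholding just described: the combination AEH + AEV as the final PDEV and the form Thres = Q × (pixel count) are assumptions, since the text here only states that each directional total must reach a coefficient-scaled threshold.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_H = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def pattern_density_evaluation(region: np.ndarray, Q: float = 2.0) -> float:
    """AEH/AEV totals over one candidate overlapping region, thresholded."""
    region = region.astype(float)
    eh = np.abs(convolve(region, SOBEL_H))    # per-pixel horizontal strength
    ev = np.abs(convolve(region, SOBEL_H.T))  # per-pixel vertical strength
    aeh, aev = eh.sum(), ev.sum()
    thres = Q * region.size                   # assumed: coefficient Q x pixels
    if aeh < thres or aev < thres:
        return 0.0                 # structure too weak in one direction
    return aeh + aev               # assumed combined form of PDEV
```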
  • the photographing position calculation unit 18 takes the position with the largest pattern density evaluation value selected by the pattern density evaluation value calculation unit 17, that is, the shooting position of the target image in the first image information (the whole image), obtains from it the shooting location of each partial image, and outputs this shooting location information to the system control unit 7.
  • the image generation unit 19 controls, via the system control unit 7, the microscope Z-axis movement control unit 8, the stage movement control unit 6, the imaging control unit 11, and the imaging camera 3, to photograph and paste the partial images of the multiple blocks of the model.
  • the image storage unit 20 stores the target image (high-definition image) generated by pasting the partial images in the image generation unit 19.
  • the system control unit 7 reads out the target image from the image storage unit 20 and displays it on a display device (not shown) by the access by the user.
  • in the above-described first embodiment, the edge intensity indicating the change in the luminance value of the image, that is, a directional "spatial characteristic" based on the luminance values, is used as the pattern information. Alternatively, an evaluation value consisting purely of luminance values can be used as the pattern information, such as the average value of the histogram formed from the image luminance values, the difference between the minimum and maximum values (dynamic range), the mode, the median, or the variance (standard deviation).
  • for example, using as the pattern information the histogram frequency FR of the mode luminance value in the overlapping region and the standard deviation SD of this histogram, a pattern density evaluation value PDEV is obtained from the frequency FR and the standard deviation SD.
  • here, a power of 2 is written as 2^x.
  • this pattern density evaluation value PDEV represents the range 0 to 2^x − 1 with the frequency FR and the range 2^x and above with the standard deviation SD; in the corresponding formula, FR < 2^x is assumed.
  • the pattern density evaluation value calculation unit 17 first pays attention only to the frequency FR and determines whether or not it is equal to or greater than a predetermined threshold value; if it is, the standard deviation is also evaluated. A hedged sketch of this evaluation follows.
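One plausible reading of the FR/SD evaluation value, sketched under loud assumptions: the garbled formula is reconstructed as the mode frequency FR occupying the low x bits (FR < 2^x) and the standard deviation SD weighted by 2^x.

```python
import numpy as np

def histogram_pdev(region: np.ndarray, x_bits: int = 8) -> int:
    """Assumed packing: PDEV = int(SD) * 2**x_bits + FR, with FR < 2**x_bits."""
    values = region.ravel().astype(np.int64)
    hist = np.bincount(values, minlength=256)      # luminance histogram (0-255)
    fr = min(int(hist.max()), (1 << x_bits) - 1)   # mode frequency, clipped
    sd = float(values.std())                       # standard deviation
    return int(sd) * (1 << x_bits) + fr
```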
  • FIG. 6 is a flowchart specifically showing an operation example of the image processing apparatus according to the first embodiment of FIG.
  • the FPD board shown in FIG. 7 will be described as an example of the object.
  • on the FPD substrate, pixel portions and transistors for driving the pixels are periodically arranged.
  • the user uses an input device (not shown) to set, as processing parameters in the system control unit 7, the overall-image magnification (first magnification), the partial-image magnification (second magnification), the size of the combined image (target image), and the overlap rate of each partial image (step S1).
  • next, when the process of acquiring the target image is started, the system control unit 7 drives the stage 4 by means of the stage movement control unit 6, adjusts the relative position between the objective lens 2 and the target object, and switches the objective lens 2 to achieve the first magnification.
  • the system control unit 7 adjusts the focus by moving the lens barrel 1 up and down via the microscope Z-axis movement control unit 8 to capture the entire image shown in FIG.
  • the whole image is transferred to the shading/distortion correction processing unit 12 via the imaging control unit 11.
  • a field frame of a partial image (the shootable range when capturing at the second magnification: a partial image frame) is the area within the broken line shown in the figure.
  • the shading/distortion correction processing unit 12 performs distortion correction and shading correction on the entire input image and stores it in the captured image data storage buffer unit 13 (step S2).
  • the image model generation unit 16 then generates a model of the target image having overlapping areas, as shown in Fig. 3, based on the size of the target image (the number of vertical pixels × the number of horizontal pixels) and the overlap ratio of the partial images used when generating the target image (step S3).
  • the image model generation unit 16 also calculates the number of partial images and the size of the overlapping areas so that each overlapping area has the above-described overlap rate with respect to the partial image (step S4); a sketch of this layout calculation follows.
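A minimal sketch of the layout calculation along one axis, assuming each additional frame contributes (1 − overlap rate) of a partial image of new coverage; the exact rounding rules of the image model generation unit 16 are not given in the text.

```python
import math

def model_layout(target_px: int, partial_px: int, overlap_rate: float):
    """Frames needed along one axis and the overlap size in pixels (sketch)."""
    stride = partial_px * (1.0 - overlap_rate)     # fresh coverage per frame
    n = max(1, math.ceil((target_px - partial_px) / stride) + 1)
    overlap_px = int(partial_px * overlap_rate)
    return n, overlap_px

# e.g. an 1800-pixel-wide target from 1000-pixel partials at 20% overlap
n_frames, overlap = model_layout(1800, 1000, 0.20)   # -> (2, 200)
# Two frames per axis would correspond to the 2 x 2 layout of Fig. 8.
```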
  • in the example shown in Fig. 8, there are four partial image frames, and the overlapping area is the shaded area where two or more of the four partial image frames overlap one another (the hatched part shown in a "+" shape).
  • the target image is formed by four partial images, and the model of the size of the target image is configured by four partial image frames.
  • the user sets, in the entire image displayed on the display device, a search area for searching with the model for the shooting position of the target image (step S5).
  • This search area can be an arbitrary part of the entire image as long as the entire image is larger than the size of the model.
  • the pattern density evaluation value calculation unit 17 calculates the pattern density evaluation value at each movement position while moving (shifting) the model in the X-axis and Y-axis directions by the predetermined movement distance, repeats this process until the entire search range has been searched, and sequentially stores the calculated pattern density evaluation values in an internal storage unit in association with the coordinate values (calculation positions) in the entire image (step S6); when the calculation is complete, the process proceeds to step S7.
  • the pattern density evaluation value calculation unit 17 searches for the largest value among the pattern density evaluation values stored in the internal storage unit, and outputs the coordinate value corresponding to the found pattern density evaluation value as the optimal position of the target image for pasting (step S7).
  • conceptually, as in Fig. 10, the pattern density evaluation values form a three-dimensional graph whose Z-axis indicates the magnitude of the evaluation value at each evaluated coordinate (on the X-Y plane); the pattern density evaluation values at the coordinate values are compared sequentially to find the maximum, and the model position, that is, the coordinate value, giving the maximum is output as the optimal target image generation position. The search is sketched below.
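A sketch of the exhaustive search of steps S6 and S7, under the assumption of a helper `pdev(window, mask)` that scores the pixels under the model's overlap mask (for example, the evaluation sketched earlier).

```python
import numpy as np

def search_best_position(whole_image, model_shape, overlap_mask,
                         search_area, step, pdev):
    """Slide the model over the search area in `step`-pixel increments and
    keep the position whose overlapping areas score highest (steps S6-S7)."""
    mh, mw = model_shape
    y0, x0, y1, x1 = search_area
    best_score, best_pos = -np.inf, None
    for y in range(y0, y1 - mh + 1, step):
        for x in range(x0, x1 - mw + 1, step):
            window = whole_image[y:y + mh, x:x + mw]
            score = pdev(window, overlap_mask)   # evaluate overlap areas only
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```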
  • the shooting position calculation unit 18 calculates the shooting position of the partial image from the generation position of the target image output from the pattern density evaluation value calculation unit 17 (step S8).
  • the shooting position calculation unit 18 uses the arrangement position of the partial image frame of the model at the generation position as the shooting position of the partial image shot at the second magnification (high magnification).
  • the coordinate value of the partial image frame corresponding to each partial image is output as the partial image position.
  • since the target image here is composed of four partial images, the shooting position calculation unit 18 outputs the coordinate values of the partial image frames corresponding to the four partial images to the system control unit 7.
  • the system control unit 7 changes the objective lens 2 to a lens corresponding to the second magnification, moves the stage 4 via the stage movement control unit 6 to each coordinate position of the partial images input from the imaging position calculation unit 18, adjusts the focus via the microscope Z-axis movement control unit 8, and captures each partial image with the imaging camera 3.
  • by this processing, the system control unit 7 photographs all of the plurality of partial images constituting the target image.
  • the imaging control unit 11 outputs each partial image input from the imaging camera 3 to the shading/distortion correction processing unit 12.
  • the shading / distortion correction processing unit 12 performs distortion correction and shading correction on the sequentially input partial images, and stores them in the captured image data storage buffer unit 13 (step S9).
  • the image processing unit 5 reads out the partial images constituting the target image from the captured image data storage buffer unit 13 and temporarily stores them in the second captured image reading unit 15.
  • the image generation unit 19 sequentially reads out the partial images from the second captured image reading unit 15, and based on the model shown in FIG. 9, that is, for each partial image frame of the model, at the partial image position of this partial image frame. Corresponding captured partial images are arranged, and the partial images are combined to generate a target image.
  • the generated high-definition target image is stored in the image storage unit 20 (step S10).
  • the image generation unit 19 performs pattern matching by superimposing the patterns arranged in the overlapping areas, and thereby aligns the pasting; for this reason, it is necessary to use as the overlapping area a region whose pattern density evaluation value exceeds a predetermined density, that is, a region exceeding a predetermined threshold. A sketch of such a matching step follows.
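The patent text does not name a specific matching algorithm, so the following is only a stand-in: an exhaustive sum-of-absolute-differences search for the small shift that best registers two overlapping strips.

```python
import numpy as np

def align_offset(overlap_a: np.ndarray, overlap_b: np.ndarray,
                 max_shift: int = 8) -> tuple[int, int]:
    """Return the (dy, dx) minimizing the mean absolute difference of the
    two overlap strips (a stand-in for the pattern matching step)."""
    h, w = overlap_a.shape
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = overlap_a[max(0, dy):min(h, h + dy),
                          max(0, dx):min(w, w + dx)].astype(float)
            b = overlap_b[max(0, -dy):min(h, h - dy),
                          max(0, -dx):min(w, w - dx)].astype(float)
            sad = np.abs(a - b).mean()           # mean absolute difference
            if sad < best:
                best, best_shift = sad, (dy, dx)
    return best_shift
```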
  • since the arrangement position of the overlapping regions to be bonded is determined by pattern information such as the pattern density evaluation value, a bonded image can be generated with high accuracy even for an object whose pattern formed on the substrate is periodic, or largely sparse, like the FPD substrate pattern shown in Fig. 7, which would otherwise be ill-suited to a bonding process.
  • the system control unit 7 reads the target image from the image storage unit 20 as necessary, and displays the target image on the display unit.
  • the second embodiment has the same configuration as that of the first embodiment, and only differences from the first embodiment will be described below.
  • FIG. 11 is a flowchart specifically showing an operation example in the second embodiment. The difference is that step S8 of the first embodiment is changed to step S15, and this step will be described.
  • in the first embodiment, the shooting positions of the partial images are determined from the optimal position of the model for pasting, based on the positions of the partial image frames in the model, and the search is performed in the search area with the overlapping areas between the partial image frames in the model fixed.
  • in the second embodiment, by contrast, the positions of the partial image frames constituting the stitching model are determined using the low-magnification whole image (first image information) while allowing the overlapping areas to change.
  • in step S6, the pattern density evaluation value at each position is calculated while moving the model within the search area by a predetermined movement distance, using the model with fixed overlapping areas.
  • at this time, a minimum pattern density threshold PDEV−Min is set.
  • the pattern density evaluation value calculation unit 17 calculates the pattern density evaluation value at each moving position, repeating the process until the entire search range has been searched or the threshold PDEV−Min is exceeded, and sequentially stores the calculated pattern density evaluation values in the internal storage unit in association with the coordinate values (calculation positions) in the entire image; when the calculation of the pattern density evaluation values over the search area is complete, the process proceeds to step S7.
  • in step S7, the pattern density evaluation value calculation unit 17 selects and outputs the largest pattern density evaluation value from the internal storage unit, as in the first embodiment.
  • in step S15, the pattern density evaluation value of each overlapping region in the model is then recalculated at the coordinate value corresponding to the selected pattern density evaluation value.
  • the overlapping areas of the partial image frames in the model are the area A formed by the partial image frames F1 and F2 in Fig. 12, the area B formed by the partial image frames F3 and F4 in Fig. 13, the area C formed by the partial image frames F1 and F3 in Fig. 14, and the area D formed by the partial image frames F2 and F4 in Fig. 15.
  • the pattern density evaluation value for each of the regions A to D is calculated from the image at the corresponding position of the low magnification overall image for each partial image frame.
  • the pattern density evaluation value calculation unit 17 determines whether or not each region A to D exceeds a predetermined threshold value.
  • in the first embodiment this threshold was defined for the pattern density evaluation values in the horizontal and vertical directions separately; in the second embodiment, because the unit is the overlapping area of two partial image frames adjacent in the horizontal or vertical direction, the value is defined by the following formula: Thres2 = 2 · Q × (the number of pixels subject to the Sobel filter operation in the overlapping region).
  • the pattern density evaluation value calculation unit 17 then checks that the pattern density of all the regions A to D exceeds this threshold, and if so proceeds to step S9, thereafter performing the same processing as in the first embodiment.
  • a value obtained by multiplying the threshold Thres2 by a predetermined coefficient is used as the actual threshold.
  • the coefficient to be multiplied may be a small value such as 1 or 2 to suppress the effect of noise; when extracting an area where the edge strength is high, that is, where the pattern features are clear, a large value according to the value of Q is used (if the luminance gradation is 256 and Q is 10, a value between 10 and 15), as in the check sketched below.
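A small sketch of the step S15 check, assuming per-region evaluation values and pixel counts have already been computed; the dictionary-based interface is illustrative only.

```python
def regions_exceed_threshold(pdev_by_region: dict, pixels_by_region: dict,
                             Q: float = 10.0, coeff: float = 1.0) -> bool:
    """True if every overlapping region A-D exceeds
    Thres2 = 2 * Q * (pixels subject to the Sobel operation in the region),
    scaled by the chosen coefficient."""
    for name, pdev in pdev_by_region.items():
        thres2 = coeff * 2 * Q * pixels_by_region[name]
        if pdev <= thres2:
            return False      # e.g. region A in Fig. 16 fails the check
    return True
```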
  • when the pattern density evaluation value calculation unit 17 detects that the pattern density evaluation value of a region, for example the region A indicated by the oblique lines in Fig. 16, does not exceed the threshold value Thres2, it moves the partial image frame F1 rightward by the predetermined moving distance, expanding the area of region A, the overlapping area between the partial image frame F1 and the partial image frame F2.
  • the pattern density evaluation value calculation unit 17 then calculates the pattern density evaluation value of region A again and detects whether or not it exceeds the threshold value Thres2; if it does, the process proceeds to step S9, and if not, the partial image frame F1 is moved rightward again and the pattern density evaluation value of region A is re-evaluated, as sketched below.
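The iteration of Figs. 16 and 17 might look like the sketch below, where `eval_region_a` and `move_frame_f1_right` are assumed callbacks into the model state (the patent describes the behavior, not an interface).

```python
def grow_overlap_until_dense(eval_region_a, move_frame_f1_right,
                             thres2: float, max_steps: int = 20) -> bool:
    """Move frame F1 rightward, enlarging overlap region A, until its
    pattern density evaluation value exceeds Thres2 (sketch)."""
    for _ in range(max_steps):
        if eval_region_a() > thres2:
            return True           # dense enough; proceed to step S9
        move_frame_f1_right()     # widen the overlap by one movement distance
    return False                  # give up, e.g. try the next-best coordinate
```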
  • FIG. 18 is a conceptual diagram for explaining the overlapping rate of overlapping regions.
  • the maximum overlap rate is 50%, at which the same pattern is included in two partial images; if the overlap rate exceeds 50%, the same pattern becomes included in three partial images.
  • when the above adjustment cannot be completed within these overlap-rate limits, the pattern density evaluation value calculation unit 17 performs the above process again at the model coordinate having the second largest pattern density evaluation value for the entire overlapping area.
  • the minimum value of the overlap rate is obtained from the ratio, to the entire partial image, of a prescribed number of pixels that is a real-number multiple (1 or more) of the number of pixels of the minimum pattern formed on the substrate; for example, twice the number of pixels of the minimum pattern is used as the prescribed value. Both bounds are sketched below.
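Both bounds can be expressed as in the sketch below; the function name and the 2× default multiple follow the example in the text, while treating a single axis only is an illustrative simplification.

```python
def overlap_rate_bounds(partial_px: int, min_pattern_px: int,
                        multiple: float = 2.0) -> tuple[float, float]:
    """(minimum, maximum) overlap rate: at least `multiple` x the minimum
    pattern width relative to the partial image, and at most 50% so the
    same pattern is shared by no more than two partial images."""
    min_rate = (multiple * min_pattern_px) / partial_px
    return min_rate, 0.50

# e.g. 1000-pixel partial images with a 25-pixel minimum pattern
lo, hi = overlap_rate_bounds(1000, 25)    # -> (0.05, 0.5)
```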
  • in this way, when not all the pattern density evaluation values of the regions A to D in the model exceed the threshold anywhere in the search region, first the position where the evaluation value is largest overall is determined on the entire image; then, at this coordinate value, the positions of the partial image frames are adjusted to change the overlap ratios of the overlapping areas until the pattern density evaluation value exceeds the threshold for the overlapping portion of each partial image frame, and the shooting positions of the partial images to be shot are thereby determined.
  • compared with the first embodiment, the degree of freedom of the search is thus increased, and starting from an appropriately set photographing position, the optimal photographing positions for pasting can be determined automatically using the pattern density evaluation value even without searching the entire search area.
  • since the arrangement positions of the overlapping regions to be bonded are determined by pattern information such as the pattern density evaluation value, a combined image can be generated with high accuracy even in cases, such as the FPD board pattern shown in Fig. 7, where the bonding process would fail for an inappropriately set overlapped part.
  • the third embodiment shown in FIG. 19 is a large substrate inspection apparatus equipped with a microscope.
  • the substrate inspection apparatus shown in FIG. 19 is the same as the first and second embodiments in the configuration of an observation system such as a microscope, objective lens 2, and imaging camera 3.
  • the difference is the drive mechanism that moves the FPD substrate, the object, relative to the objective lens 2: the stage movement control unit 6 drives the stage 4 on which the object is placed in only one axial direction (the upper-right to lower-left direction in Fig. 19: arrow O).
  • the system control unit 7 drives the microscope itself in the one axial direction perpendicular to the movement of the stage 4 (the upper-left to lower-right direction in Fig. 19: arrow P).
  • the relative position between the objective lens 2 and the object can be moved in the XY directions.
  • in the fourth embodiment, the target image, which is the high-definition image generated in the first to third embodiments, is also used in an inspection device as a reference image (an image generated from a normal substrate for comparison) that is compared with the image of the substrate under inspection when detecting substrate defects.
  • the inspection apparatus shown in FIG. 20 is provided with a line sensor as an image pickup means. After the image pickup means is adjusted by a calibration sample, the stage is moved in the direction of arrow G by the holding movement means. The reflected light of the light emitted by the illumination means is detected by the line sensor at every predetermined moving distance.
  • the integrated control means compares the intensity of the detected reflected light with the reflected-light value sampled immediately before, and if they differ beyond a predetermined range, the location is detected as a defect candidate and the coordinate value at that time is stored, as sketched below.
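A minimal sketch of that comparison loop, assuming the scan yields aligned sequences of reflected-light samples and stage coordinates (names are illustrative).

```python
def detect_defect_candidates(samples, positions, tolerance: float):
    """Flag a defect candidate wherever a reflected-light reading differs
    from the immediately preceding sample by more than the allowed range."""
    candidates = []
    for i in range(1, len(samples)):
        if abs(samples[i] - samples[i - 1]) > tolerance:
            candidates.append(positions[i])   # store the stage coordinate
    return candidates
```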
  • the FPD board is mounted on the stage 4 of the image processing apparatus in the first to third embodiments, and the coordinate value of the defect candidate is input to the system control unit 7.
  • the system control unit 7 moves the stage 4 via the stage movement control unit 6 so that the position of the defect candidate comes to the position of the objective lens 2, that is, to a position where the substrate portion of the defect candidate can be imaged by the imaging camera 3.
  • the system control unit 7 then captures the area in a state where the position of the defect candidate is included at the second magnification, that is, it moves to the location corresponding to the optimal model position, and generates the target image as a high-definition image as in the first to third embodiments.
  • the system control unit 7 compares, by pattern matching, the image information including the captured defect candidate with the target image generated in the first and second embodiments, comparing the pattern shape of the defect candidate with the pattern shape of the corresponding part of the target image, which serves as the reference image, to determine whether they differ.
  • when the system control unit 7 detects no difference, it determines that the defect candidate is non-defective; when it detects a difference, it determines that the defect candidate is defective.
  • the determination result is displayed on the display device.
  • as a result, the inspection speed is improved and, in addition, the accuracy of inspection can be improved.
  • a program for realizing the functions of the image processing unit in Figs. 1 and 2 may be recorded on a computer-readable recording medium, and the image processing may be performed by reading the program recorded on the recording medium into a computer system and executing it.
  • the “computer system” here includes the OS and hardware such as peripheral devices.
  • the "computer system" also includes a WWW system equipped with a homepage providing environment (or display environment).
  • the "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk incorporated in a computer system.
  • the "computer-readable recording medium" also includes a medium that holds the program for a certain period of time, such as a volatile memory (RAM) inside a computer system serving as a server or client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting a program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
  • the program may be one for realizing a part of the functions described above; furthermore, it may be one that realizes the above-mentioned functions in combination with a program already recorded in the computer system, that is, a so-called differential file (differential program).
  • as described above, according to the present invention, a model of the target image to be composed by pasting the partial images is formed in advance based on the low-resolution first image information, and this model is used to adjust, within a predetermined area of the wide-range first image information, the shooting positions of the partial images that generate the high-resolution target image, including the overlapping regions; therefore, over a wide field of view, an appropriate partial-image shooting position can be obtained by calculation, and a high-definition image with the desired high resolution can be generated easily.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Analysis (AREA)

Abstract

An image processing device that pastes together partial images of an object, captured with a predetermined overlap area, to generate a whole or partial image of the object at a predetermined size, includes: first imaging means for imaging the object at a first magnification to obtain first image information; second imaging means for imaging the object at a second magnification higher than the first magnification to obtain the partial images; image model generation means for generating a model of the object image formed by pasting the partial images, from the size of the object image and the overlap ratio of the overlap area in each partial image; imaging position calculation means for searching, using the model, for the arrangement position in the first image information of the object image generated from the partial images; and high-definition image generation means for generating the object image by pasting the partial images according to the arrangement position.

Description

Specification
Image processing apparatus and method
Technical field
[0001] The present invention relates to an image processing apparatus and method for capturing an image of a subject divided into a plurality of partial images and combining the captured partial images to form an entire image of the subject.
This application claims priority based on Japanese Patent Application No. 2004-203108 filed on July 9, 2004, the contents of which are incorporated herein by reference.
Background art
[0002] Image information is generally used as a method for inspecting defects that cause functional problems in substrates, in industrial microscopes and inspection devices that inspect FPD (flat panel display) substrates, PDP (plasma display) substrates, semiconductor wafers, and the like. In such inspection, when fine defects that affect pattern formation and the like are to be inspected with high accuracy, the pattern under inspection must be compared with a normal reference pattern to detect defects; there is therefore an increasing number of cases that require not merely an image covering the entire subject at low magnification, but a "high-definition (high-resolution)" image that can cover the entire subject at a higher magnification.
[0003] However, with a high-definition image, depending on the size of the subject, the entire subject or the necessary range cannot be acquired at one time.
For this reason, as one method for obtaining such a high-definition image, a method is often used in which the entire subject is divided into a plurality of regions, each region is imaged, and the partial images obtained by the imaging are pasted to one another to obtain a high-definition image of the entire subject.
In this method for obtaining a high-definition image, an approach is widely used in which partial images are captured at a high magnification based on an entire image at a low magnification and the captured partial images are then pasted together; it is used not only for industrial purposes but for various applications.
[0004] As prior art, there is a method of obtaining a high-definition image by capturing an entire image and partial images within the enlarged entire image, estimating where each partial image corresponds in the entire image, and pasting them together (see, for example, Patent Document 1).
There is also a method of obtaining a high-definition image by designating a partial region in a low-magnification microscope image, capturing the designated region as a plurality of high-magnification microscope images, and performing a pasting process (see, for example, Patent Document 2).
Patent Document 1: Japanese Patent Laid-Open No. 2000-59606
Patent Document 2: Japanese Patent Laid-Open No. 11-271645
Disclosure of the invention
Problems to be solved by the invention
[0005] However, in the high-definition image creation apparatus shown in Patent Document 1, unlike images of landscapes, when an industrial inspection device targets images of periodic patterns such as FPD substrates and PDP substrates, finding where a partial image corresponds in the entire image is difficult: because the pattern shape is periodically identical, specifying the position when aligning partial images with one another becomes difficult, and cases occur in which the correspondence of the overlapped portions cannot be established.
[0006] Further, in the microscope image display device shown in Patent Document 2, when an industrial inspection device targets an image whose pattern density is sparse, depending on the location of the designated region the partial-image pasting process may attempt to overlap places where no pattern exists; because there is then no pattern in the overlapping area, the pasting process cannot be performed, or faults occur such as the generation of a pasted image in which the bonded area is extremely shifted.
[0007] The present invention has been made in view of such circumstances, and provides an image processing apparatus and method that generate a high-definition (high-resolution) image by pasting together partial images, even for an image in which the pattern (circuit pattern and wiring pattern) includes periodic and/or sparse portions, as in FPD substrates and PDP substrates.
課題を解決するための手段  Means for solving the problem
[0008] An image processing apparatus of the present invention stitches together, with predetermined overlap regions, partial images of a subject photographed at a predetermined resolution, and generates a target image of a predetermined size covering all or part of the subject. The apparatus comprises: first imaging means for photographing the subject at a first magnification to obtain first image information; second imaging means for photographing the subject at a second magnification, higher than the first magnification, to obtain second image information as the partial images; image model generation means for generating, from the size of the target image and overlap region information indicating the degree of overlap between partial images, a model of the target image to be generated by stitching the partial images; imaging position calculation means (for example, the pattern density evaluation value calculation unit 17 and the imaging position calculation unit 18 in the embodiments) for searching, using the model, for the placement position within the first image information of the target image to be generated by stitching the partial images; and high-definition image generation means for generating the target image by stitching the partial images together based on the placement position.

An image processing method of the present invention stitches together, with predetermined overlap regions, partial images of a subject photographed at a predetermined resolution, and generates a target image of a predetermined size covering all or part of the subject. The method comprises: a first imaging step of photographing the subject at a first magnification to obtain first image information; a second imaging step of photographing the subject at a second magnification, higher than the first magnification, to obtain second image information as the partial images; an image model generation step of generating, from the size of the target image and overlap region information indicating the degree of overlap between partial images, a model of the target image to be generated by stitching the partial images; an imaging position calculation step of searching, using the model, for the placement position within the first image information of the target image to be generated by stitching the partial images; and a high-definition image generation step of generating the target image by stitching the partial images together based on the placement position.
With the configuration described above, when stitching partial images the image processing apparatus of the present invention forms in advance, from the low-resolution (low-magnification) first image information, a model of the target image to be composed from the partial images, and uses this model to adjust, within a predetermined region of the wide-field first image information, the imaging positions of the partial images (including their overlap regions) from which the high-resolution target image is generated. Compared with the conventional approach of stitching partial images captured at high resolution beforehand, suitable imaging positions for the partial images can therefore be obtained by computation over a wider field of view, and a high-definition image of the desired high resolution (high magnification) can be generated easily.

[0009] In the image processing apparatus of the present invention, the imaging position calculation means searches for the placement position of the target image by detecting, within the first image information, the optimum placement of the overlap regions used when stitching with the model.

With this configuration, the overlapping portions that are superimposed during stitching are used actively in the search for the partial-image imaging positions (that is, portions of the image pattern at which the overlap regions can be matched easily are extracted). When the target image is generated, the accuracy of aligning the partial images at the overlapping portions is therefore improved, and a high-definition image of the desired high resolution can be generated more easily and with higher accuracy than before.

[0010] In the image processing apparatus of the present invention, the imaging position calculation means searches for the placement position of the overlap regions while moving the model by a predetermined movement distance within a search region set in advance in the first image information.

With this configuration, particularly when generating a high-definition image of a subject composed of repeating patterns, a search region of a predetermined size is set in advance, and the placement of the overlap regions is searched for while the model is moved within this region from a predetermined position, in defined directions, in steps of a predetermined movement distance (for example, several pixels at a time), so the search processing can be accelerated.

[0011] In the image processing apparatus of the present invention, the imaging position calculation means searches for the placement position of the overlap regions within the search region based on pattern information of the overlap regions.

With this configuration, since the placement of the overlap regions is set according to their pattern information (for example, a pattern density evaluation value indicating the density of the pattern), positions at which the overlapping regions contain dense pattern can be detected. When the partial images are stitched, positions at which alignment is easy can therefore be selected as the overlap regions, and a high-definition image of the desired high resolution can be generated with high accuracy.

[0012] In the image processing apparatus of the present invention, the imaging position calculation means searches for the placement position within the search region while changing the overlap region information of the model based on the pattern information of the overlap regions.

With this configuration, the overlap region information required for the stitching process, for example the overlap rate of the overlap regions, is changed according to the pattern information of the image (for example, information on how sparse or dense the pattern is). Regardless of whether the substrate pattern is sparse or dense, the pattern information can thus be brought to values suited to matching as needed, the optimum partial-image positions, that is, the generation position of the target image, can be calculated, and a high-definition image can be generated easily.

[0013] The image processing apparatus of the present invention has moving means for moving the subject relative to the first imaging means and the second imaging means in predetermined distance units in the X and Y directions, and the imaging position calculation means sets the imaging position of the target image on the subject based on the placement position of the target image detected using the model.

With this configuration, since the apparatus has relative moving means, imaging can be performed by moving to an imaging position as soon as it is detected. The calculation of imaging positions and the imaging itself can therefore be performed in real time, raising the speed at which the high-resolution, high-definition image is generated.

[0014] The image processing apparatus of the present invention calculates the imaging positions of the partial images used for stitching based on the imaging position and on the placement position of the target image detected using the model.

With this configuration, since the placement of the overlap regions is set using the model, positions at which the overlapping regions contain dense pattern are detected and the imaging position of the target image is calculated; from this, the imaging positions of the partial images used for stitching can be calculated easily, and a high-definition image of the desired high resolution can be generated with high accuracy.

[0015] In the image processing apparatus of the present invention, the first image information and the second image information obtained by the first and second imaging means have each been subjected to distortion correction and/or shading correction.

With this configuration, the image processing apparatus of the present invention can generate high-definition images free from the effects of distortion and shading.

Effects of the Invention

[0016] As described above, according to the present invention, when partial images are stitched together, a model of the target image to be composed from the partial images is formed in advance from the low-resolution first image information, and this model is used to adjust, within a predetermined region of the wide-field first image information, the imaging positions of the partial images (including their overlap regions) from which the high-resolution target image is generated. Suitable partial-image imaging positions can therefore be obtained by computation over the wide field of view provided by the first image information, and a high-definition image of the desired high resolution can be generated easily.
Brief Description of Drawings

[FIG. 1] FIG. 1 is a conceptual diagram showing a configuration example of a microscope apparatus according to an embodiment of the present invention.

[FIG. 2] FIG. 2 is a block diagram showing a configuration example of the image processing unit 5 of FIG. 1.

[FIG. 3] FIG. 3 is a conceptual diagram for explaining the model generated by the image model generation unit 16 of FIG. 2.

[FIG. 4] FIG. 4 is a conceptual diagram for explaining the Sobel filter.

[FIG. 5] FIG. 5 is a conceptual diagram for explaining the pattern density evaluation value.

[FIG. 6] FIG. 6 is a flowchart showing an operation example of the microscope apparatus including the image processing unit 5 according to the first embodiment.

[FIG. 7] FIG. 7 is a conceptual diagram explaining the operation of the image processing unit 5 according to the first embodiment.

[FIG. 8] FIG. 8 is a conceptual diagram explaining the operation of the image processing unit 5 according to the first embodiment.

[FIG. 9] FIG. 9 is a conceptual diagram explaining the operation of the image processing unit 5 according to the first embodiment.

[FIG. 10] FIG. 10 is a conceptual diagram explaining the process of detecting the maximum pattern density evaluation value within the search region.

[FIG. 11] FIG. 11 is a flowchart showing an operation example of the microscope apparatus including the image processing unit 5 according to the second embodiment.

[FIG. 12] FIG. 12 is a conceptual diagram explaining the operation of the image processing unit 5 according to the second embodiment.

[FIG. 13] FIG. 13 is a conceptual diagram explaining the operation of the image processing unit 5 according to the second embodiment.

[FIG. 14] FIG. 14 is a conceptual diagram explaining the operation of the image processing unit 5 according to the second embodiment.

[FIG. 15] FIG. 15 is a conceptual diagram explaining the operation of the image processing unit 5 according to the second embodiment.

[FIG. 16] FIG. 16 is a conceptual diagram explaining the operation of the image processing unit 5 according to the second embodiment.

[FIG. 17] FIG. 17 is a conceptual diagram explaining the operation of the image processing unit 5 according to the second embodiment.

[FIG. 18] FIG. 18 is a conceptual diagram for explaining the maximum and minimum overlap rates of the partial image frames.

[FIG. 19] FIG. 19 is a conceptual diagram for explaining an inspection apparatus according to a third embodiment.

[FIG. 20] FIG. 20 is a conceptual diagram for explaining an inspection apparatus according to a fourth embodiment.
Explanation of Symbols

1 lens barrel
2 objective lens
3 imaging camera
4 stage
5 image processing unit
6 stage movement control unit
7 system control unit
8 microscope Z-axis movement control unit
11 imaging control unit
12 shading and distortion correction processing unit
13 captured image data storage buffer unit
14 first captured image reading unit
15 second captured image reading unit
16 image model generation unit
17 pattern density evaluation value calculation unit
18 imaging position calculation unit
19 image generation unit
20 image storage unit
F1, F2, F3, F4 partial image frames

BEST MODE FOR CARRYING OUT THE INVENTION
[0019] <First Embodiment>

An image processing apparatus according to a first embodiment of the present invention is described below with reference to the drawings. FIG. 1 is a block diagram showing a configuration example of the embodiment.

In this figure, the first embodiment is a microscope equipped with the image processing function of the present invention. The microscope has a vertical drive mechanism capable of driving the lens barrel 1, to which the objective lens 2 is attached, in the Z-axis direction (up and down as viewed in the figure).

The microscope Z-axis movement control unit 8 controls this vertical drive mechanism to move the lens barrel 1 up and down and adjust the focus on the subject placed on the stage 4.

[0020] The stage 4 is provided at the lower part of the microscope and has a mechanism (a two-axis movement drive mechanism) for driving it in the X and Y directions (the left-right and depth directions as viewed in the figure); the subject, that is, the sample to be observed, is placed on top of it.

The stage movement control unit 6 controls the movement of the stage 4 along its two axes and adjusts the relative position between the objective lens 2 and the subject.

An imaging camera 3 is provided at the top of the lens barrel 1, and the video signal (image signal) output from the imaging camera 3 is transferred to the image processing unit 5, where various kinds of image processing are performed.

The imaging camera 3 is a CCD camera and outputs, for example, gradation (luminance) data for each RGB pixel as image information.

The image processing unit 5, the stage movement control unit 6, and the microscope Z-axis movement control unit 8 are each controlled as necessary by the system control unit 7.
[0021] Next, the image processing unit 5 according to the first embodiment of the present invention is described with reference to the drawings. FIG. 2 is a block diagram showing a configuration example of the image processing unit 5 of the embodiment.

The portion enclosed by the broken line is the image processing unit 5, which comprises an imaging control unit 11, a shading and distortion correction processing unit 12, a captured image data storage buffer unit 13, a first captured image reading unit 14, a second captured image reading unit 15, an image model generation unit 16, a pattern density evaluation value calculation unit 17, an imaging position calculation unit 18, an image generation unit 19, and an image storage unit 20.

[0022] Under the control of the system control unit 7, with the magnification changed by exchanging the objective lens 2 and the focus adjusted by the microscope Z-axis movement control unit 8, the imaging control unit 11 receives the low-magnification image information captured by the imaging camera 3 (the first image information, that is, a whole image covering the entire subject) or the high-magnification image information (the second image information, that is, a partial image), and outputs it to the shading and distortion correction processing unit 12.

The shading and distortion correction processing unit 12 applies shading correction and distortion correction to each of the first image information and the second image information, compensating for the shading and distortion introduced by the imaging system including the objective lens 2, and then stores each of them, with magnification information attached, in the captured image data storage buffer unit 13.

This magnification information is attached to the first image information and the second image information in the imaging control unit 11, via the system control unit 7, as information on the lens of the objective lens 2.

[0023] The first captured image reading unit 14 reads from the captured image data storage buffer unit 13 the first image information, whose attached magnification information indicates the low magnification, and stores it temporarily.

The second captured image reading unit 15 reads from the captured image data storage buffer unit 13 the high-magnification second image information (hereinafter, partial images), whose attached magnification information indicates the high magnification, and stores them temporarily.

[0024] The image model generation unit 16 generates a model of the target image that will ultimately be generated by stitching the partial images together. This model includes the overlap regions that become the superimposed portions when the partial images are stitched.

That is, the image model generation unit 16 generates the model from the first magnification (the low magnification) set in advance by the user and supplied from the system control unit 7, the second magnification (the high magnification), the size of the image to be generated by stitching the partial images, and the dimensions of the overlap regions to be superimposed during stitching.
[0025] The pattern density evaluation value calculation unit 17 reads the model from the image model generation unit 16 and the first image information from the first captured image reading unit 14, and a search region, within which the portion used to generate the target image is sought, is set in the first image information by the system control unit 7 (set by the user while checking the screen).

As shown in FIG. 3, the pattern density evaluation value calculation unit 17 moves the model within this search region in the X-axis and Y-axis directions, starting from a predetermined position, for example the upper left of the search region, in steps of a predetermined movement distance, for example several pixels at a time, calculating the pattern density evaluation value (pattern information) within the overlap regions at each step and storing the values sequentially in association with the positions at which they were calculated.

[0026] The movement within the search region may be performed one pixel at a time; however, for some target patterns no change is visible between successive positions and the resulting pattern density evaluation values are nearly identical. In the present invention, therefore, a step of a predetermined number of pixels is used in order to cut wasted computation time and improve the efficiency of the overlap region search. If the subject has a periodic pattern, as in this embodiment, the movement distance is set according to the number of pixels in one pattern period, for example 1/5, 1/10, 1/50, 1/100, ... of that number.

[0027] Alternatively, if the minimum size of the target pattern contained in the overlap regions is known (for example, the width of a signal line carrying current), the movement distance may be set according to the pattern size, for example 1, 2, 3, ... times the number of pixels of the minimum pattern width.

A movement distance matched to the pattern size takes into account that the pattern density evaluation value changes when, between one position and the next, an entire pattern appears in or disappears from the overlap region.
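As a concrete illustration of these two rules, the following Python sketch derives the search stride from either the pattern period or the minimum feature size. The function names and the default divisor and multiplier are assumptions for illustration, not values taken from the patent:

```python
# Sketch of the stride rules described above; names and defaults are
# illustrative assumptions.

def stride_from_period(period_px: int, divisor: int = 10) -> int:
    """Stride as a fraction (e.g. 1/5, 1/10, 1/50, ...) of one pattern period."""
    return max(1, period_px // divisor)

def stride_from_min_feature(min_feature_px: int, multiple: int = 2) -> int:
    """Stride as an integer multiple (1x, 2x, 3x, ...) of the minimum pattern
    width, so that a whole feature can appear or disappear between steps."""
    return max(1, min_feature_px * multiple)

if __name__ == "__main__":
    print(stride_from_period(200, divisor=10))     # periodic subject: 20 px
    print(stride_from_min_feature(4, multiple=2))  # 4-px line width: 8 px
```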
[0028] The pattern density evaluation value is calculated for each block of the partial-image size (in the horizontal and vertical directions), in units of the overlap between adjacent blocks; that is, four values (the vertical-direction and horizontal-direction edge strengths described later) are calculated for each calculation position.

The pattern density evaluation value is calculated in the pattern density evaluation value calculation unit 17 according to the following flow.

In this embodiment, the pattern density evaluation value is obtained by focusing on the directional edge strength (the magnitude of luminance change in the pattern).

[0029] The directional edge strength expresses the edge strength in each of the vertical (up-down on the screen) and horizontal (left-right on the screen) directions.

A Sobel filter is used as the method of calculating the edge strength. Centered on a pixel of interest, the Sobel filter multiplies the nine neighboring pixel values, that is, the pixel of interest and its adjacent pixels above, below, left and right, by the coefficient masks shown in FIG. 4 (the center is the pixel of interest) and sums the results; this processing is performed with two coefficient matrices, one for the vertical direction and one for the horizontal direction.

[0030] That is, for the pixel (X, Y) at the center of the mask (where X is the horizontal screen coordinate, positive to the right and negative to the left of the origin, and Y is the vertical screen coordinate, positive downward and negative upward of the origin), the edge strength in each direction is obtained as the horizontal strength EH(X, Y) and the vertical strength EV(X, Y) by the following expressions, where I(X, Y) denotes the luminance value of pixel (X, Y) and Abs(R) denotes the absolute value of the number R.
[0031] EH(X, Y) = Abs{ I(X+1, Y-1) + 2 × I(X+1, Y) + I(X+1, Y+1) - I(X-1, Y-1) - 2 × I(X-1, Y) - I(X-1, Y+1) }

EV(X, Y) = Abs{ I(X-1, Y+1) + 2 × I(X, Y+1) + I(X+1, Y+1) - I(X-1, Y-1) - 2 × I(X, Y-1) - I(X+1, Y-1) }
[0032] Using the above expressions, the edge strengths are calculated for every pixel in the region of interest (the overlap region); pixels at the edge of the image are excluded because their edge strength cannot be calculated.

The pattern density evaluation value calculation unit 17 then adds up the calculated per-pixel edge strengths in the region of interest for each direction, obtaining a horizontal edge strength sum AEH and a vertical edge strength sum AEV.
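This step can be sketched as follows, assuming numpy and illustrative function names. The masks follow the standard Sobel coefficients referenced in FIG. 4, and border pixels are skipped as stated in [0032]:

```python
# Sketch of the directional edge-strength sums AEH/AEV described above,
# using the standard 3x3 Sobel masks. Names are illustrative.
import numpy as np

# Horizontal-strength mask (luminance change along X) and vertical-strength
# mask (luminance change along Y), matching EH and EV above.
SOBEL_H = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.int64)
SOBEL_V = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.int64)

def edge_strength_sums(region: np.ndarray) -> tuple[int, int]:
    """Return (AEH, AEV) for a 2-D luminance array; border pixels are
    skipped because the 3x3 mask does not fit there."""
    h, w = region.shape
    aeh = aev = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = region[y - 1:y + 2, x - 1:x + 2].astype(np.int64)
            aeh += abs(int((win * SOBEL_H).sum()))  # EH(x, y)
            aev += abs(int((win * SOBEL_V).sum()))  # EV(x, y)
    return aeh, aev

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.integers(0, 256, size=(32, 32))
    print(edge_strength_sums(demo))
```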
However, the edge strength in one of the directions may be extremely low. For example, as shown in FIG. 5, if the interior of the overlap region is formed only of horizontal line patterns, pattern edges exist in the vertical direction, so an edge strength sum AEV of some magnitude is obtained, whereas the horizontal edge strength sum AEH is nearly 0 because no luminance change exists in the horizontal direction.

In such a case, if the stitching process were attempted using the pattern in only one direction, the matching location could not be uniquely constrained, and optimum matching could not be performed.

[0033] For this reason, the pattern density evaluation value calculation unit 17 determines a predetermined threshold Thres for the edge strength sums AEH and AEV, and outputs a pattern density evaluation value only when each edge strength sum is at least the threshold Thres.

Taking the influence of noise into account, this threshold is expressed by the following equation, where Q is the minimum luminance difference detected as an edge by the Sobel filter for one pixel:

Thres = 4 × Q × (number of pixels subject to the Sobel filter computation in the overlap region)

A value obtained by multiplying Thres by a predetermined coefficient is used as the actual threshold. This coefficient may be a small value such as 1 or 2 if the aim is merely to suppress the influence of noise; when extracting regions with high edge strength, that is, regions where the pattern features are distinct, a larger value is chosen according to Q (for example, a value between 10 and 15 when the luminance has 256 gradations and Q is 10).
[0034] Using the quantities above, the pattern density evaluation value calculation unit 17 calculates the pattern density evaluation value PDEV as follows:

if AEH < Thres or AEV < Thres, then PDEV = 0;

if AEH ≥ Thres and AEV ≥ Thres, then PDEV = AEH + AEV.

As a result, when either edge strength sum is below the threshold (extremely small), pattern matching during stitching would be likely to fail, so the value is excluded from later evaluation; only the values for placement positions at which accurate pattern matching is possible during stitching remain. A larger edge strength indicates a larger luminance difference, which improves the accuracy of matching the partial images within the overlap regions during stitching.
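A direct, minimal transcription of this rule follows (illustrative names; the coefficient argument corresponds to the multiplier discussed in [0033]):

```python
# Sketch of the PDEV rule above. Q is the minimum per-pixel luminance
# difference treated as an edge; coeff is the application-chosen multiplier
# from [0033] (1-2 for noise suppression, roughly 10-15 for distinct
# patterns at 256 gradations with Q = 10).

def pattern_density(aeh: int, aev: int, q: int, n_pixels: int,
                    coeff: float = 1.0) -> int:
    thres = coeff * 4 * q * n_pixels  # Thres = 4 * Q * (Sobel-target pixel count)
    if aeh < thres or aev < thres:
        return 0          # one direction too weak: matching would be ambiguous
    return aeh + aev      # both directions strong enough: PDEV = AEH + AEV

if __name__ == "__main__":
    print(pattern_density(aeh=50_000, aev=60_000, q=10, n_pixels=900))
```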
In other words, even in a whole image whose pattern is comparatively sparse, positions of the model at which the pattern within the overlap regions is dense can be found.

[0035] Returning to FIG. 2, the imaging position calculation unit 18 obtains the imaging location of each partial image based on the position with the largest pattern density evaluation value selected by the pattern density evaluation value calculation unit 17, that is, the imaging position of the target image within the first image information (the whole image), and outputs the imaging position information for these locations to the system control unit 7.

Based on the imaging positions output by the imaging position calculation unit 18, the system control unit 7 controls the microscope Z-axis movement control unit 8, the stage movement control unit 6, the imaging control unit 11 and the imaging camera 3, and the image generation unit 19 stitches together the partial images captured for the blocks of the model.

The image storage unit 20 stores the target image (the high-definition image) generated by stitching the partial images in the image generation unit 19. In response to an access by the user, the system control unit 7 reads the target image from the image storage unit 20 and displays it on a display device, not shown.

[0036] As the pattern information, the first embodiment described above uses the edge strength, which indicates changes in the luminance values of the image. However, instead of a quantity that exploits the "spatial property" of direction on top of the luminance values, evaluation values composed purely of luminance values can also be used as pattern information, for example statistics of the histogram formed from the luminance values of the image: the mean luminance, the difference between the minimum and maximum values (the dynamic range), the mode, the median, or the variance (standard deviation).
[0037] Here, suppose that as the pattern information the frequency FR of the mode of the luminance histogram of the overlap region and the standard deviation SD of that histogram are used, and that a pattern density evaluation value PDEV is formed from this frequency FR and standard deviation SD, writing powers of two as 2^X:

PDEV = FR + (2^X) × SD

In this pattern density evaluation value PDEV, bits 0 through X-1 carry the frequency FR and bits X and above carry the standard deviation SD, under the condition FR < 2^X.

With this, the pattern density evaluation value calculation unit 17 first looks only at the frequency FR and judges whether it is at least a predetermined threshold; if it is, the standard deviation is evaluated as well.

Because the maximum value of FR is 2^X - 1, the search method described above remains possible. The computation above is realized with bit operations (logical operations), and pattern information with different characteristics can thus be combined into a single pattern density evaluation value.
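A small sketch of this bit packing follows, under the assumption X = 8 (so FR is kept below 2^8 = 256); the choice of X and the function names are illustrative, not from the patent:

```python
# Sketch of packing two histogram statistics into one evaluation value, as
# described above. X = 8 is an illustrative choice: bits 0-7 hold the mode
# frequency FR (FR < 2**8) and bits 8 and above hold the standard deviation.
import numpy as np

X = 8

def packed_pdev(region: np.ndarray) -> int:
    values = region.ravel()
    hist = np.bincount(values, minlength=256)
    fr = int(hist.max())            # frequency of the histogram mode
    sd = int(np.std(values))        # standard deviation of the luminance
    fr = min(fr, (1 << X) - 1)      # enforce FR < 2^X so the fields cannot collide
    return fr + (sd << X)           # PDEV = FR + 2^X * SD (bit operations)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    demo = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
    pdev = packed_pdev(demo)
    print(pdev, pdev & 0xFF, pdev >> X)  # whole value, FR field, SD field
```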
[0038] Next, the operation of the image processing apparatus described above is explained with reference to FIGS. 1, 2 and 6. FIG. 6 is a flowchart specifically showing an operation example of the image processing apparatus according to the first embodiment of FIG. 1.

The FPD substrate shown in FIG. 7 is used here as an example of the subject. On an FPD substrate, pixel portions and the transistors that drive those pixels are arranged periodically.

Using an input device, not shown, the user sets in the system control unit 7, as processing parameters, the magnification of the whole image (the first magnification), the magnification of the partial images (the second magnification), the size of the stitched image (the target image), and the overlap rate of the partial images (step S1).

[0039] Next, when the process of acquiring the target image is started, the system control unit 7 drives the stage 4 via the stage movement control unit 6 to adjust the relative position between the objective lens 2 and the subject, and switches the objective lens 2 so as to obtain the first magnification.

The system control unit 7 then adjusts the focus by moving the lens barrel 1 up and down via the microscope Z-axis movement control unit 8, captures the whole image of the subject shown in FIG. 7, and transfers this whole image to the shading and distortion correction processing unit 12 via the imaging control unit 11.

For the whole image of FIG. 7, the field frame of a partial image (the range that can be captured at the second magnification: the partial image frame) is the region inside the broken line shown in FIG. 8.

[0040] The shading and distortion correction processing unit 12 applies distortion correction and shading correction to the input whole image and stores it temporarily in the captured image data storage buffer unit 13 (step S2).

Next, the image model generation unit 16 generates a model of the target image having overlap regions, as shown in FIG. 3, from the size of the target image (number of vertical pixels × number of horizontal pixels) and the overlap rate of the partial images to be stitched when the target image is generated (step S3).
[0041] The image model generation unit 16 then calculates the number of partial images and the size of the overlap regions so that the overlap regions have the above overlap rate relative to the partial images (step S4). For example, in FIG. 9, with the partial image frame shown in FIG. 8, the number of partial image frames is 4, and the overlap region is defined as the hatched portion in which any two or more of the four partial image frames overlap one another (shown as a "+" shape at the center of the screen).

As a result, in this embodiment the target image is formed from four partial images, and the model of the target image size is composed of four partial image frames.
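A minimal sketch of the layout computed in steps S3 and S4 follows, assuming a 2 × 2 grid of frames as in FIG. 9; the names and the rounding rule are illustrative:

```python
# Sketch of the model layout in steps S3-S4 (illustrative names). Given the
# partial-image frame size, the grid of frames (2x2 here, as in FIG. 9) and
# the overlap rate, it returns the model size and the frame origins, from
# which the overlap strips between adjacent frames follow directly.

def model_layout(frame_w: int, frame_h: int, cols: int, rows: int,
                 overlap_rate: float):
    ov_w = round(frame_w * overlap_rate)   # horizontal overlap in pixels
    ov_h = round(frame_h * overlap_rate)   # vertical overlap in pixels
    step_x, step_y = frame_w - ov_w, frame_h - ov_h
    model_w = frame_w + (cols - 1) * step_x
    model_h = frame_h + (rows - 1) * step_y
    origins = [(c * step_x, r * step_y) for r in range(rows) for c in range(cols)]
    return (model_w, model_h), origins

if __name__ == "__main__":
    # 2x2 model of 640x480 frames with a 10% overlap rate.
    size, origins = model_layout(640, 480, cols=2, rows=2, overlap_rate=0.10)
    print(size)     # (1216, 912)
    print(origins)  # [(0, 0), (576, 0), (0, 432), (576, 432)]
```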
Next, in the whole image displayed on the display device, the user sets the search region within which the imaging position of the target image is to be sought using the model (step S5).

This search region may be the entire whole image, or any part of it, as long as it is larger than the model.

[0042] The pattern density evaluation value calculation unit 17 then calculates the pattern density evaluation value at each movement position while moving (shifting) the model in the X-axis and Y-axis directions by the predetermined movement distance. This processing is repeated until the entire search range has been searched, and the calculated pattern density evaluation values are stored sequentially in an internal storage unit in association with their coordinate values in the whole image (the positions at which they were calculated). When the calculation of pattern density evaluation values over the entire search region is finished, the process proceeds to step S7 (step S6).
[0043] Next, the pattern density evaluation value calculation unit 17 searches for the largest value among the pattern density evaluation values stored in the internal storage unit, and outputs the coordinate value corresponding to the found pattern density evaluation value as the optimum position of the target image for stitching (step S7).

At this time, as shown in FIG. 10, in a three-dimensional graph that plots the magnitude of the pattern density evaluation value along the Z axis for each evaluated coordinate value (on the X-Y plane), the pattern density evaluation value calculation unit 17 compares the pattern density evaluation values at the coordinate values one after another and searches for the maximum.

For example, in FIG. 10 the pattern density evaluation value on the left is the largest, so that model position, that is, that coordinate value, is output as the optimum generation position of the target image.
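Steps S6 and S7 can be sketched together as a stride-based grid search with an argmax, as below. The evaluation function here is a simplified stand-in (a plain gradient-magnitude sum); in the apparatus itself the thresholded AEH/AEV value of [0033] and [0034] would be used, and all names are illustrative:

```python
# Sketch of the search in steps S6-S7 (illustrative). The model is slid over
# the search region with the chosen stride; at each position an evaluation
# value is computed over the model's overlap region and the argmax is kept.
import numpy as np

def simple_pdev(region: np.ndarray) -> float:
    # Stand-in evaluation: sum of absolute gradients in both directions.
    gy, gx = np.gradient(region.astype(float))
    return float(np.abs(gx).sum() + np.abs(gy).sum())

def search_placement(whole: np.ndarray, model_w: int, model_h: int,
                     overlap_box, stride: int):
    """overlap_box = (x, y, w, h) of the overlap region relative to the
    model origin. Returns the model origin with the largest evaluation."""
    ox, oy, ow, oh = overlap_box
    best, best_pos = -1.0, (0, 0)
    for top in range(0, whole.shape[0] - model_h + 1, stride):
        for left in range(0, whole.shape[1] - model_w + 1, stride):
            region = whole[top + oy:top + oy + oh, left + ox:left + ox + ow]
            value = simple_pdev(region)
            if value > best:
                best, best_pos = value, (left, top)
    return best_pos, best

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    whole = rng.integers(0, 256, size=(240, 320)).astype(float)
    pos, val = search_placement(whole, model_w=128, model_h=96,
                                overlap_box=(56, 40, 16, 16), stride=8)
    print(pos, val)
```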
[0044] Next, the imaging position calculation unit 18 calculates the imaging positions of the partial images from the generation position of the target image output by the pattern density evaluation value calculation unit 17 (step S8).

At this time, taking the placement positions of the model's partial image frames at that generation position as the imaging positions of the partial images to be photographed at the second (high) magnification, the imaging position calculation unit 18 outputs the coordinate values of the partial image frames corresponding to the partial images composing the target image as the partial image positions.

In this embodiment, since the target image is composed of four partial images, the imaging position calculation unit 18 outputs the coordinate values of the partial image frames corresponding to these four partial images to the system control unit 7.
[0045] Next, the system control unit 7 changes the objective lens 2 to the lens corresponding to the second magnification via the microscope Z-axis movement control unit 8, moves the stage 4 via the stage movement control unit 6 to the coordinate position to be photographed by the imaging camera 3, in accordance with the partial image positions input from the imaging position calculation unit 18, adjusts the focus with the microscope Z-axis movement control unit 8, and photographs each partial image with the imaging camera 3.

Here, the system control unit 7 photographs all of the partial images composing the target image by the processing described above.

The imaging control unit 11 outputs each partial image input from the imaging camera 3 to the shading and distortion correction processing unit 12.

The shading and distortion correction processing unit 12 thereby applies distortion correction and shading correction to the sequentially input partial images and stores them in the captured image data storage buffer unit 13 (step S9).
[0046] Next, the image processing unit 5 reads the partial images composing the target image from the captured image data storage buffer unit 13 and stores them temporarily in the second captured image reading unit 15.

The image generation unit 19 then reads the partial images from the second captured image reading unit 15 one after another and, based on the model shown in FIG. 9, that is, for each partial image frame of the model, places the partial image photographed at the partial image position of that frame, stitches the partial images together to generate the target image, and stores the generated high-definition target image in the image storage unit 20 (step S10).

[0047] At this time, the image generation unit 19 superimposes the patterns placed in the overlap regions, performs pattern matching, and thereby aligns the stitching. For this reason, regions whose pattern density evaluation value exceeds a predetermined density, that is, a predetermined threshold, must be used as the overlap regions.

Therefore, in this embodiment, since the placement positions of the overlap regions to be stitched are determined from pattern information such as the pattern density evaluation value, a stitched image can be generated with high accuracy even for a periodic subject ill suited to stitching because the pattern formed on the substrate has many sparse portions, like the FPD substrate pattern shown in FIG. 7.

The system control unit 7 reads the target image from the image storage unit 20 as necessary and displays it on the display unit.
[0048] <Second Embodiment>

The second embodiment has the same configuration as the first embodiment; only the differences from the first embodiment are described below.

FIG. 11 is a flowchart specifically showing an operation example of the second embodiment. The difference is that step S8 of the first embodiment is replaced by step S15, and this step is described here.

In the first embodiment, the partial image positions at which the high-definition partial images are acquired, that is, the imaging positions, are determined from the optimum position of the stitching model based on the positions of the partial image frames within the model, and the search within the search region is performed with the overlap regions between the partial image frames of the model held fixed.

In the second embodiment, by contrast, once the optimum position of the stitching model has been determined, the positions of the partial image frames composing the stitching model are determined using the low-magnification whole image (the first image information).
[0049] In step S6, the pattern density evaluation value at each position is calculated using the model with fixed overlap regions, while moving it through the search region by the predetermined movement distance.

At this time, a minimum pattern density threshold PDEV_Min is set by the following equations:

PDEV_Min = AEH_Min + AEV_Min
         = 4 × Q × PixNum + 4 × Q × PixNum
         = 8 × Q × PixNum

where PixNum is the number of pixels subject to the Sobel filter computation in the overlap region.

The pattern density evaluation value calculation unit 17 calculates the pattern density evaluation value at each movement position, repeating this processing until the entire search range has been searched; the values that exceed the threshold PDEV_Min are stored one after another in the internal storage unit in association with their coordinate values in the whole image (the positions at which they were calculated). When the calculation of pattern density evaluation values over the entire search region is finished, the process proceeds to step S7.
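A minimal sketch of this filter (illustrative names):

```python
# Sketch of the PDEV_Min filter above: only candidate positions whose
# evaluation value exceeds 8 * Q * PixNum are retained. Names are illustrative.

def pdev_min(q: int, pix_num: int) -> int:
    return 8 * q * pix_num  # AEH_Min + AEV_Min = 4*Q*PixNum + 4*Q*PixNum

def keep_candidates(scored_positions, q: int, pix_num: int):
    """scored_positions: iterable of ((x, y), pdev) pairs from step S6."""
    threshold = pdev_min(q, pix_num)
    return [(pos, v) for pos, v in scored_positions if v > threshold]

if __name__ == "__main__":
    samples = [((0, 0), 5_000), ((8, 0), 80_000), ((16, 0), 72_001)]
    print(keep_candidates(samples, q=10, pix_num=900))  # threshold = 72000
```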
[0050] In step S7, the pattern density evaluation value calculation unit 17 selects and outputs the largest pattern density evaluation value from the internal storage unit, as in the first embodiment.

Next, in step S15, the pattern density evaluation values of the overlap regions are recalculated for the model placed at the coordinate value corresponding to the selected pattern density evaluation value.

At this time, the overlap regions between the partial image frames of the model are: region A, formed by partial image frames F1 and F2 in FIG. 12; region B, formed by partial image frames F3 and F4 in FIG. 13; region C, formed by partial image frames F1 and F3 in FIG. 14; and region D, formed by partial image frames F2 and F4 in FIG. 15. For each of the regions A to D, the pattern density evaluation value is calculated for each partial image frame from the image at the corresponding position in the low-magnification whole image.
[0051] Here, the pattern density evaluation value calculation unit 17 judges whether each of the regions A to D exceeds a predetermined threshold.

In the first embodiment this threshold was determined for the horizontal-direction and vertical-direction pattern density evaluation values; in the second embodiment the unit is the overlap region between two partial image frames adjacent in the horizontal or vertical direction, so the threshold is the value defined by the following equation:

Thres2 = 2 × Q × (number of pixels subject to the Sobel filter computation in the overlap region)

When the pattern density evaluation value calculation unit 17 detects that the pattern densities of all of the regions A to D exceed this threshold, the process proceeds to step S9, and thereafter the same processing as in the first embodiment is performed.

A value obtained by multiplying Thres2 by a predetermined coefficient is used as the actual threshold. This coefficient may be a small value such as 1 or 2 if the aim is merely to suppress the influence of noise; when extracting regions with high edge strength, that is, regions where the pattern features are distinct, a larger value is chosen according to Q (for example, a value between 10 and 15 when the luminance has 256 gradations and Q is 10).
[0052] On the other hand, for a region that does not exceed the threshold Thres2, for example when the pattern density evaluation value calculation unit 17 detects that the pattern density evaluation value of region A, shown hatched in FIG. 16, does not exceed Thres2, it moves the partial image frame F1 to the right by the predetermined movement distance, as shown in FIG. 17, widening the area of region A, the overlap region between partial image frames F1 and F2.

The pattern density evaluation value calculation unit 17 then calculates the pattern density evaluation value of region A again and checks whether it exceeds the threshold Thres2. If it does, the process proceeds to step S9; if not, the partial image frame F1 is moved to the right once more and the pattern density evaluation value of region A is judged again.
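This adjustment loop can be sketched as below for two horizontally adjacent frames; the 50% cap anticipates the limit discussed in [0053], the evaluation function is again a simplified stand-in, and all names are assumptions for illustration rather than the patent's own procedure:

```python
# Sketch of the step-S15 adjustment loop described above (illustrative).
# The overlap between two horizontally adjacent frames is widened step by
# step until its evaluation value exceeds Thres2 or the overlap rate
# reaches the 50% upper bound of [0053].
import numpy as np

def thres2(q: int, pix_num: int, coeff: float = 1.0) -> float:
    return coeff * 2 * q * pix_num

def widen_until_matchable(whole, left_origin, right_origin, frame_w, frame_h,
                          overlap_w, step, q, evaluate):
    """Returns (left_frame_x, overlap_w) after adjustment, or None if the
    50% overlap cap is reached first. `evaluate` scores an overlap strip."""
    lx, ly = left_origin
    rx, _ = right_origin
    while overlap_w <= frame_w // 2:              # maximum overlap rate: 50%
        strip = whole[ly:ly + frame_h, rx:rx + overlap_w]
        if evaluate(strip) > thres2(q, strip.size):
            return lx, overlap_w
        lx += step                                # move frame F1 rightward,
        overlap_w += step                         # widening region A
    return None  # fall back to the model position with the 2nd-largest PDEV

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    whole = rng.integers(0, 256, size=(480, 1280)).astype(float)
    score = lambda s: float(np.abs(np.diff(s, axis=1)).sum())
    print(widen_until_matchable(whole, (0, 0), (576, 0), 640, 480,
                                overlap_w=64, step=8, q=1, evaluate=score))
```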
[0053] ここで、部分画像枠を移動させた場合における重複領域の重複率の制限について 、図を参照して説明する。図 18は重複領域の重複率を説明するための概念図である 重複率の最大値としては、同一パターンが 2つの部分画像に含まれるようにすると、 50%が適当である。 Here, the limitation on the overlapping rate of the overlapping region when the partial image frame is moved will be described with reference to the drawings. FIG. 18 is a conceptual diagram for explaining the overlapping rate of overlapping regions. The maximum overlap rate is 50% if the same pattern is included in two partial images.
すなわち、重複率を 50%以上とすると、同一のパターンが 3枚の部分画像に含まれ ることとなる。  In other words, if the overlap rate is 50% or more, the same pattern is included in the three partial images.
そして、パターン密度評価値算出部 17は、重複領域の重複率が最大値を超えた 場合、重複領域全体のパターン密度評価値が 2番目に大きい数値のモデルの座標 にお!/、て再度上述した処理を行う。  Then, when the overlap rate of the overlapping area exceeds the maximum value, the pattern density evaluation value calculating unit 17 again sets the pattern density evaluation value of the entire overlapping area to the coordinate of the model with the second largest numerical value! Perform the process.
[0054] 一方、重複率の最低値としては、基板に形成されるパターンにお 、て最小パターン の画素数に対し、 1倍以上の実数倍の画素数の規定値を設け、この規定値の部分画 像全体における割合から求める。 [0054] On the other hand, as the minimum value of the overlapping rate, a prescribed value of the number of pixels that is a real number multiple of 1 or more than the number of pixels of the minimum pattern is provided in the pattern formed on the substrate. Obtain from the ratio of the entire partial image.
For example, if the partial image measures 640 (horizontal) × 480 (vertical) pixels and the smallest pattern measures 4 (horizontal) × 4 (vertical) pixels, twice the pixel count of this smallest pattern is taken as the prescribed value.
This gives a minimum overlap ratio of (4 × 2 / 640) = 1.25% in the horizontal direction and (4 × 2 / 480) = 1.67% in the vertical direction.
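These bounds are easy to reproduce in a few lines; the doubling factor is the one used in the example above, and the function name is illustrative:

def overlap_bounds(frame_px, min_pattern_px, factor=2):
    """Return (minimum, maximum) overlap ratios for one axis of a frame."""
    min_ratio = factor * min_pattern_px / frame_px  # prescribed pixels / frame size
    max_ratio = 0.5                                 # above 50% a pattern spans 3 frames
    return min_ratio, max_ratio

print(overlap_bounds(640, 4))  # (0.0125, 0.5)         -> 1.25% horizontal minimum
print(overlap_bounds(480, 4))  # (about 0.0167, 0.5)   -> 1.67% vertical minimum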
[0055] As described above, in the second embodiment, when the optimum position of the stitching model is determined and the pattern density evaluation values of all of the regions A to D of the model fail to exceed the threshold anywhere in the search region, the position whose overall evaluation value is largest is first determined on the whole image; at those coordinates, the positions of the individual partial image frames are then adjusted so that the pattern density evaluation values of their overlapping portions exceed the threshold, changing the overlap ratios of the overlapping regions, and the shooting positions of the partial images to be captured are thereby determined.
[0056] Moreover, because the pattern density evaluation value is adjusted as needed by changing the overlap ratio of the overlapping regions, the degree of freedom of the search is greater than in the first embodiment, and even starting from an arbitrarily chosen shooting position (without searching the entire search region), the shooting positions optimal for stitching can be determined automatically using the pattern density evaluation value.
Therefore, in this embodiment, since the placement of the overlapping regions to be stitched is determined from pattern information such as the pattern density evaluation value, a stitched image can be generated with high accuracy even for an object whose periodic pattern is ill-suited to stitching because large parts of the pattern formed on the substrate are sparse, as with the FPD substrate pattern shown in FIG. 7, and even in cases where stitching would fail with arbitrarily chosen overlapping portions.
[0057] <Third Embodiment>
The third embodiment, shown in FIG. 19, is a large substrate inspection apparatus equipped with a microscope. In the substrate inspection apparatus shown in FIG. 19, the configuration of the observation system, such as the microscope, the objective lens 2, and the imaging camera 3, is the same as in the first and second embodiments.
The difference lies in the drive mechanism that moves the FPD substrate, that is, the object, relative to the objective lens 2: the stage movement control unit 6 drives the stage 4, on which the object is placed, in only one axial direction (from upper right to lower left in FIG. 19, arrow O).
The system control unit 7, on the other hand, drives the microscope T itself in one axial direction perpendicular to that of the stage 4 (from upper left to lower right in FIG. 19, arrow P).
The relative position between the objective lens 2 and the object can thereby be moved in the X-Y directions.
[0058] <Fourth Embodiment>
The target image, the high-definition image generated in the first to third embodiments, is used in an inspection apparatus as a reference image (an image generated from a normal substrate for comparison) against which the image of the substrate under inspection is compared when detecting substrate defects.
For example, in inspecting an FPD substrate, the inspection apparatus shown in FIG. 20 is provided with a line sensor as its imaging means. After the imaging means has been adjusted using a calibration sample, the holding and moving means moves the stage in the direction of arrow G, and at every predetermined moving distance the line sensor detects the light emitted from the illumination means and reflected by the substrate.
The integrated control means then compares the intensity of the detected reflected light with the detection value of the reflected light sampled immediately before; if the two differ beyond a predetermined range, the location is detected as a defect candidate and its coordinate value on the substrate is stored.
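The patent does not specify an implementation of this successive-sample comparison; the following sketch, with placeholder names, shows one plausible form:

def scan_for_defects(samples, positions, tolerance):
    """Flag coordinates where the reflected-light reading jumps.

    samples   : line-sensor intensities taken at each stage step (arrow G)
    positions : substrate coordinates at which each sample was taken
    tolerance : the 'predetermined range' allowed against the previous sample
    """
    candidates = []
    for prev, value, pos in zip(samples, samples[1:], positions[1:]):
        if abs(value - prev) > tolerance:  # differs beyond the allowed range
            candidates.append(pos)         # store the coordinate as a defect candidate
    return candidates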
[0059] The FPD substrate is then placed on the stage 4 of the image processing apparatus of the first to third embodiments, and the coordinate values of the defect candidates are input to the system control unit 7. The system control unit 7 thereupon moves the stage 4 via the stage movement control unit 6 so that the position of a defect candidate comes to the position of the objective lens 2, that is, to a position where the substrate portion containing the defect candidate can be imaged by the imaging camera 3.
At this time, the system control unit 7 moves to a location that contains the position of the defect candidate when imaged as a high-definition image, that is, at the second magnification, and that corresponds to the position of the optimum model at which the target image was generated in the first to third embodiments.
[0060] The system control unit 7 then compares, by pattern matching, the captured image information containing the defect candidate with the target image generated in the first and second embodiments: the pattern shape of the defect candidate is compared with the pattern shape of the corresponding portion of the target image serving as the reference image, and whether or not they differ is determined.
If the system control unit 7 detects no difference, it judges the defect candidate to be good; if it detects a difference, it judges the defect candidate to be defective and, for example, displays the judgment result on a display device.
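The pattern-matching step of paragraph [0060] is likewise left open; as one plausible form, the candidate patch can be scored against the corresponding region of the reference image with a zero-mean normalized cross-correlation, where the 0.95 cut-off is an assumed parameter:

import numpy as np

def judge_candidate(patch, reference, min_score=0.95):
    """Return 'good' if the candidate patch matches the reference pattern.

    Zero-mean normalized cross-correlation stands in for the unspecified
    pattern-shape comparison of paragraph [0060].
    """
    a = patch.astype(float) - patch.mean()
    b = reference.astype(float) - reference.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    score = (a * b).sum() / denom if denom else 0.0
    return "good" if score >= min_score else "defective"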
By the inspection method of the present invention described above, locations judged to be defect candidates in the high-speed inspection by the line sensor are judged accurately through comparison with an actual normal substrate pattern, so that the inspection speed is improved and the inspection accuracy can be improved as well.
[0061] A program for realizing the functions of the image processing units in FIG. 1 and FIG. 2 may be recorded on a computer-readable recording medium, and image processing may be performed by loading the program recorded on this recording medium into a computer system and executing it. The "computer system" here includes an OS and hardware such as peripheral devices. The "computer system" also includes a WWW system provided with a web page providing environment (or display environment). The "computer-readable recording medium" refers to portable media such as flexible disks, magneto-optical disks, ROMs, and CD-ROMs, and to storage devices such as hard disks built into computer systems. Furthermore, the "computer-readable recording medium" also includes media that hold the program for a certain period of time, such as the volatile memory (RAM) inside a computer system serving as a server or client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.

[0062] The above program may also be transmitted from a computer system in which it is stored in a storage device or the like to another computer system, via a transmission medium or by transmission waves within a transmission medium. Here, the "transmission medium" that transmits the program refers to a medium having the function of transmitting information, such as a network (communication network) like the Internet or a communication line (communication circuit) like a telephone line. The program may also be one that realizes only part of the functions described above. Furthermore, it may be a so-called differential file (differential program), which realizes the functions described above in combination with a program already recorded in the computer system.
Industrial Applicability
[0063] According to the image processing apparatus and image processing method of the present invention, when partial images are stitched together, a model of the target image to be synthesized by stitching the partial images is formed in advance from the low-resolution first image information, and this model is used to adjust, within a predetermined region of the wide-area first image information, the shooting positions of the partial images that produce the high-resolution target image, including the overlapping regions. Appropriate shooting positions of the partial images can therefore be obtained by computation over the wide field of view given by the first image information, and a high-definition image of the desired high resolution can be generated easily.

Claims

[1] An image processing apparatus which stitches together partial images of an object, photographed at a predetermined resolution, with predetermined overlapping regions, and generates a target image of the whole or a part of the object at a predetermined size, the apparatus comprising:
first photographing means for photographing the object at a first magnification to obtain first image information;
second photographing means for photographing the object at a second magnification, higher than the first magnification, to obtain second image information as the partial images;
image model generating means for generating, from the size of the target image and image region information indicating the degree of overlap of the partial images, a model of the target image to be generated by stitching the partial images;
photographing position calculating means for searching, using the model, for the placement position in the first image information of the target image to be generated by stitching the partial images; and
high-definition image generating means for stitching the partial images based on the placement position to generate the target image.
[2] The image processing apparatus according to claim 1, wherein the photographing position calculating means searches for the placement position of the target image by detecting the optimum placement position of the overlapping regions in the stitching of the model within the first image information.
[3] The image processing apparatus according to claim 2, wherein the photographing position calculating means searches for the placement position of the overlapping regions while moving the model by a predetermined moving distance within a preset search region of the first image information.
[4] The image processing apparatus according to claim 2 or claim 3, wherein the photographing position calculating means searches for the placement position of the overlapping regions within the search region based on pattern information of the overlapping regions.
[5] The image processing apparatus according to any one of claims 1 to 4, wherein the photographing position calculating means searches for the placement position while changing the overlapping region information in the model within the search region, based on pattern information of the overlapping regions.
[6] The image processing apparatus according to any one of claims 1 to 5, further comprising moving means for moving the object relative to the first photographing means and the second photographing means in each of the X and Y directions in predetermined distance units,
wherein the photographing position calculating means sets the photographing position of the target image on the object based on the placement position of the target image detected using the model.
[7] The image processing apparatus according to claim 6, wherein the photographing positions of the partial images used for stitching are calculated based on the photographing position and the placement position of the target image detected using the model.
[8] The image processing apparatus according to any one of claims 1 to 7, wherein the first image information and the second image information obtained by the first and second photographing means have each undergone distortion correction and/or shading correction.
[9] An image processing method which stitches together partial images of an object, photographed at a predetermined resolution, with predetermined overlapping regions, and generates a target image of the whole or a part of the object at a predetermined size, the method comprising:
a first photographing process of photographing the object at a first magnification to obtain first image information;
a second photographing process of photographing the object at a second magnification, higher than the first magnification, to obtain second image information as the partial images;
an image model generating process of generating, from the size of the target image and overlapping region information indicating the degree of overlap of the partial images, a model of the target image to be generated by stitching the partial images;
a photographing position calculating process of searching, using the model, for the placement position in the first image information of the target image to be generated by stitching the partial images; and
a high-definition image generating process of stitching the partial images based on the placement position to generate the target image.
[10] The image processing method according to claim 9, wherein, in the photographing position calculating process, the placement position of the target image is searched for by detecting the optimum placement position of the overlapping regions in the stitching of the model within the first image information.
[11] The image processing method according to claim 10, wherein, in the photographing position calculating process, the placement position of the overlapping regions is searched for while moving the model by a predetermined moving distance within a preset search region of the first image information.
PCT/JP2005/012661 2004-07-09 2005-07-08 Image processing device and method WO2006006525A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2006529001A JP4709762B2 (en) 2004-07-09 2005-07-08 Image processing apparatus and method
KR1020077000543A KR100888235B1 (en) 2004-07-09 2005-07-08 Image processing device and method
CN2005800228587A CN1981302B (en) 2004-07-09 2005-07-08 Image processing device and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004-203108 2004-07-09
JP2004203108 2004-07-09

Publications (1)

Publication Number Publication Date
WO2006006525A1 true WO2006006525A1 (en) 2006-01-19

Family

ID=35783868

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/012661 WO2006006525A1 (en) 2004-07-09 2005-07-08 Image processing device and method

Country Status (5)

Country Link
JP (1) JP4709762B2 (en)
KR (1) KR100888235B1 (en)
CN (1) CN1981302B (en)
TW (1) TWI366150B (en)
WO (1) WO2006006525A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120162377A1 (en) * 2009-09-03 2012-06-28 Ccs Inc. Illumination/image-pickup system for surface inspection and data structure
JP2011087183A (en) * 2009-10-16 2011-04-28 Olympus Imaging Corp Imaging apparatus, image processing apparatus, and program
BR112012031688A2 (en) * 2010-06-15 2016-08-16 Koninkl Philips Electronics Nv method for processing a first digital image and computer program product for processing a first digital image
US11336831B2 (en) 2018-07-06 2022-05-17 Canon Kabushiki Kaisha Image processing device, control method, and program storage medium
CN110441234B (en) * 2019-08-08 2020-07-10 上海御微半导体技术有限公司 Zoom lens, defect detection device and defect detection method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2522611B2 (en) * 1991-07-05 1996-08-07 大日本スクリーン製造株式会社 Length measuring device
JPH0560533A (en) * 1991-09-04 1993-03-09 Nikon Corp Pattern inspection device
EP0639023B1 (en) * 1993-08-13 1997-06-04 Agfa-Gevaert N.V. Method for producing frequency-modulated halftone images
JP3424138B2 (en) * 1994-05-11 2003-07-07 カシオ計算機株式会社 Transparent substrate alignment method
CN1204101A (en) * 1997-06-26 1999-01-06 伊斯曼柯达公司 Integral images with transitions
US6470094B1 (en) * 2000-03-14 2002-10-22 Intel Corporation Generalized text localization in images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0785246A (en) * 1993-09-10 1995-03-31 Olympus Optical Co Ltd Image synthesizer
JPH11271645A (en) * 1998-03-25 1999-10-08 Nikon Corp Microscopic image display device
JP2000059606A (en) * 1998-08-12 2000-02-25 Minolta Co Ltd High definition image preparation system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012523560A (en) * 2009-04-10 2012-10-04 エスエヌユー プレシジョン カンパニー リミテッド Video centering method
JP2015021891A (en) * 2013-07-22 2015-02-02 株式会社ミツトヨ Image measurement device and program
JP2021004741A (en) * 2019-06-25 2021-01-14 株式会社Fuji Tolerance setting system, substrate inspection machine, tolerance setting method, and substrate inspection method
JP7277283B2 (en) 2019-06-25 2023-05-18 株式会社Fuji tolerance setting system, circuit board inspection machine, tolerance setting method, circuit board inspection method

Also Published As

Publication number Publication date
KR20070026792A (en) 2007-03-08
JP4709762B2 (en) 2011-06-22
JPWO2006006525A1 (en) 2008-04-24
CN1981302A (en) 2007-06-13
TW200606753A (en) 2006-02-16
TWI366150B (en) 2012-06-11
CN1981302B (en) 2010-12-29
KR100888235B1 (en) 2009-03-12

Similar Documents

Publication Publication Date Title
WO2006006525A1 (en) Image processing device and method
JP4799329B2 (en) Unevenness inspection method, display panel manufacturing method, and unevenness inspection apparatus
JP4951496B2 (en) Image generation method and image generation apparatus
WO2012053521A1 (en) Optical information processing device, optical information processing method, optical information processing system, and optical information processing program
JP2010139890A (en) Imaging apparatus
JP5196572B2 (en) Wafer storage cassette inspection apparatus and method
CN112881419A (en) Chip detection method, electronic device and storage medium
JP2013025466A (en) Image processing device, image processing system and image processing program
KR20110030275A (en) Method and apparatus for image generation
JP2004038885A (en) Image feature learning type defect detection method, defect detection device and defect detection program
JP2010141700A (en) Imaging apparatus
CN105335959B (en) Imaging device quick focusing method and its equipment
CN103247548A (en) Wafer defect detecting device and method
JP4752733B2 (en) IMAGING DEVICE, IMAGING METHOD, AND IMAGING DEVICE DESIGNING METHOD
JP2009294027A (en) Pattern inspection device and method of inspecting pattern
JP2015115918A (en) Imaging apparatus and imaging method
JP5149984B2 (en) Imaging device
JP5544894B2 (en) Wafer inspection apparatus and wafer inspection method
JPH11287618A (en) Image processing device
JP2004069645A (en) Method and device for visual inspection
JP4851972B2 (en) Relative position calculation device, relative position adjustment device, relative position adjustment method, and relative position adjustment program
JP4428112B2 (en) Appearance inspection method and appearance inspection apparatus
WO2014084056A1 (en) Testing device, testing method, testing program, and recording medium
JP2013034208A (en) Imaging apparatus
TW202429379A (en) Method for processing review images and review system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2006529001

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 200580022858.7

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 1020077000543

Country of ref document: KR

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

WWP Wipo information: published in national office

Ref document number: 1020077000543

Country of ref document: KR

122 Ep: pct application non-entry in european phase