WO2014017337A1 - Matching process device, matching process method, and inspection device employing same - Google Patents

Matching process device, matching process method, and inspection device employing same

Info

Publication number
WO2014017337A1
WO2014017337A1 · PCT/JP2013/069296 · JP2013069296W
Authority
WO
WIPO (PCT)
Prior art keywords
matching
image
feature
feature amount
template
Prior art date
Application number
PCT/JP2013/069296
Other languages
French (fr)
Japanese (ja)
Other versions
WO2014017337A8 (en)
Inventor
渉 長友
安部 雄一
郭介 牛場
Original Assignee
株式会社日立ハイテクノロジーズ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立ハイテクノロジーズ filed Critical 株式会社日立ハイテクノロジーズ
Priority to US14/417,425 priority Critical patent/US9619727B2/en
Priority to KR1020157002223A priority patent/KR101701069B1/en
Publication of WO2014017337A1 publication Critical patent/WO2014017337A1/en
Publication of WO2014017337A8 publication Critical patent/WO2014017337A8/en

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03FPHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F7/00Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F7/70Microphotolithographic exposure; Apparatus therefor
    • G03F7/70483Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
    • G03F7/70605Workpiece metrology
    • G03F7/70616Monitoring the printed patterns
    • G03F7/70625Dimensions, e.g. line width, critical dimension [CD], profile, sidewall angle or edge roughness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956Inspecting patterns on the surface of objects
    • G01N21/95607Inspecting patterns on the surface of objects using a comparative method
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/22Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material
    • G01N23/225Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material using electron or ion
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03FPHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F7/00Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F7/70Microphotolithographic exposure; Apparatus therefor
    • G03F7/70483Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
    • G03F7/70491Information management, e.g. software; Active and passive control, e.g. details of controlling exposure processes or exposure tool monitoring processes
    • G03F7/70508Data handling in all parts of the microlithographic apparatus, e.g. handling pattern data for addressable masks or data transfer to or from different components within the exposure apparatus
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03FPHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F7/00Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F7/70Microphotolithographic exposure; Apparatus therefor
    • G03F7/70483Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
    • G03F7/70605Workpiece metrology
    • G03F7/706835Metrology information management or control
    • G03F7/706839Modelling, e.g. modelling scattering or solving inverse problems
    • G03F7/706841Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • G06F18/2113Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/771Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L22/00Testing or measuring during manufacture or treatment; Reliability measurements, i.e. testing of parts without further processing to modify the parts as such; Structural arrangements therefor
    • H01L22/10Measuring as part of the manufacturing process
    • H01L22/12Measuring as part of the manufacturing process for structural parameters, e.g. thickness, line width, refractive index, temperature, warp, bond strength, defects, optical inspection, electrical measurement of structural dimensions, metallurgic measurement of diffusions
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L22/00Testing or measuring during manufacture or treatment; Reliability measurements, i.e. testing of parts without further processing to modify the parts as such; Structural arrangements therefor
    • H01L22/30Structural arrangements specially adapted for testing or measuring during manufacture or treatment, or specially adapted for reliability measurements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation

Definitions

  • the present invention relates to a matching processing technique, more specifically to a pattern matching technique, and more particularly to a template matching technique in an inspection technique for the purpose of inspecting or measuring a pattern formed on a semiconductor wafer.
  • In an apparatus for measuring and inspecting a pattern formed on a semiconductor wafer, a template matching technique (see Non-Patent Document 1 below) that performs matching using a template is used to align the field of view of the inspection apparatus with a desired measurement position.
  • Patent Document 1 below describes an example of such a template matching method. Note that the template matching process is a process for finding an area most matching a pre-registered template image from the search target image.
  • a specific example of an inspection apparatus using template matching is measurement of a pattern on a semiconductor wafer using a scanning electron microscope.
  • In such an apparatus, the field of view is first moved to the approximate measurement position by moving the stage; however, with the positioning accuracy of the stage alone, a large deviation often remains in an image captured at the high magnification of an electron microscope.
  • In addition, the wafer is not always placed on the stage in the same orientation, and the coordinate system of the wafer on the stage (for example, the direction in which the chips are arranged) does not completely match the drive direction of the stage; this also causes a shift in an image taken at a high magnification of an electron microscope.
  • Furthermore, an electron beam is deflected by a minute amount (for example, several tens of μm or less) and irradiated onto a target position on the observation sample (this may be referred to as "beam shift"). Even when this beam shift is performed, the irradiation position may deviate from the desired observation position because of the limited accuracy of beam deflection control. Template matching is performed to correct each of these deviations so that measurement and inspection can be carried out at the correct position.
  • alignment is performed in multiple stages: alignment with an optical camera with a lower magnification than the electron microscope image and alignment with the electron microscope image.
  • alignment is performed using images of a plurality of chips (for example, chips on both the left and right sides of the wafer) that are separated from each other on the wafer.
  • In global alignment, a unique pattern that appears at roughly the same relative position in each chip (or in its vicinity) is registered as a template; the pattern used for registration is often a pattern also used for optical alignment on the wafer.
  • the stage is moved so as to capture the pattern registered in the template with each chip, and an image is acquired with each chip.
  • Template matching is performed on the acquired image. Based on the respective matching positions obtained as a result, the shift amount of the stage movement is calculated, and the coordinate system of the stage movement and the coordinate system of the wafer are matched using this shift amount as a correction value for the stage movement.
  • a unique pattern close to the measurement position is registered in advance as a template, and the relative coordinates of the measurement position viewed from the template are stored.
  • template matching is performed on the captured image, the matching position is determined, and the position moved by the stored relative coordinates is the measurement position.
  • the visual field of the apparatus is moved to a desired measurement position.
  • the alignment pattern may not appear in the image captured by the electron microscope.
  • a process of searching for an alignment pattern again around the imaging position (periphery search) or interrupting measurement and notifying the user that the alignment has failed (measurement interruption) may be performed.
  • Whether the alignment pattern has been found is judged by comparing the matching score with a preset reference value (hereinafter, this reference value is referred to as the score acceptance).
  • the template matching method can be divided into an image-based method based on normalized correlation and a feature point-based method for comparing feature points extracted from images.
  • In the image-based method, for example, an image of the same size as the template is cut out from the searched image, the correlation value between the cut-out image and the template is calculated, this calculation is repeated for each cut-out position in the searched image, and a position with a large correlation value is taken as the matching position (Non-Patent Document 1), as sketched below.
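As a hedged illustration of the image-based method described above (a minimal single-scale exhaustive search, assuming NumPy grayscale arrays; function and variable names are hypothetical and not taken from the patent):

```python
import numpy as np

def normalized_correlation(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def image_based_matching(template, searched):
    """Return (x, y, score) of the best match by exhaustive NCC search."""
    th, tw = template.shape
    best = (0, 0, -1.0)
    for y in range(searched.shape[0] - th + 1):
        for x in range(searched.shape[1] - tw + 1):
            patch = searched[y:y + th, x:x + tw]   # cut-out image at position (x, y)
            score = normalized_correlation(template, patch)
            if score > best[2]:
                best = (x, y, score)
    return best
```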
  • The feature point-based method, in contrast, extracts a plurality of feature points from each of the template and the searched image, finds similar feature points in the two images (corresponding point matching), and superimposes them; the template is projected onto the searched image (taking differences in rotation, scale, and so on between the images into account), and the position where the projected regions overlap most is taken as the matching position (Non-Patent Document 2).
  • Patent document 2001-243906 (corresponding US patent US627278)
  • the matching may not be successful if there is a large difference in the appearance of the image between the template and the searched image.
  • A large difference in the appearance of the image between the template and the searched image can arise, for example, when the imaging conditions of the inspection apparatus differ between the time the template was registered and the time the searched image was captured, when the quality of the semiconductor pattern differs between the two images, or when the manufacturing process step of the semiconductor pattern differs between template registration and acquisition of the searched image. Besides the examples given here, various other factors can produce a large difference in appearance between the template and the searched image.
  • A method of performing preprocessing such as smoothing and edge enhancement has also been proposed (Non-Patent Document 3).
  • FIG. 4 is a diagram exemplifying correlation values (matching scores) at a matching correct answer position and a matching incorrect answer position for a plurality of images (sample IDs: 1 to 100) having different appearances.
  • The correlation value differs depending on the sample, and with a single score acceptance (threshold value) 202 (or 203) it is difficult to determine the success or failure of matching. If the score acceptance (threshold value 1) 202 is used, samples in the section 205 that are at matching correct answer positions are erroneously determined to be at matching incorrect answer positions.
  • Conversely, with the other threshold, samples at matching incorrect answer positions (the hatched area on the right side of 205) are erroneously determined to be at matching correct answer positions.
  • A method using a feature quantity such as SIFT (Scale-Invariant Feature Transform) has also been proposed (see Non-Patent Document 2). However, if the apparent divergence between the template and the searched image is large, the similarity of the feature vectors (feature descriptors) between the two becomes worse, matching does not succeed, and the matching becomes unstable.
  • An object of the present invention is to output an accurate matching position even when there is a large difference in the appearance of an image between a template and a searched image in template matching.
  • In the present invention, a feature amount (hereinafter referred to as an individual feature amount) is not only extracted and compared separately from each of the template and the searched image; a feature amount obtained mutually from both the template and the searched image (hereinafter referred to as a mutual feature amount) is also used as information for determining the success or failure of matching.
  • An inspection apparatus for performing template matching includes: a feature extraction processing unit that extracts, from the template, a feature amount determined by coordinates in the image; a feature extraction processing unit that extracts, from the searched image, a feature amount determined by coordinates in the image; a mutual feature amount calculation processing unit that calculates mutual feature amounts of the two images from the feature amount extracted from the template, the feature amount extracted from the searched image, and the relative position between the template and the searched image; and a template matching processing unit that performs matching between the template and the searched image using the plurality of mutual feature amounts.
  • the score acceptance can always be 0 (or the median score) as a fixed value.
  • an accurate matching position can be output even when there is a large difference in image appearance between the template and the searched image.
  • FIG. 6 is a diagram illustrating correlation values (matching scores) at matching correct answer positions and matching incorrect answer positions for a plurality of images (sample IDs: 1 to 100) having different appearances. Another drawing is a block diagram showing a configuration example of the pattern matching process of the present embodiment.
  • FIG. 1 shows a scanning electron microscope (SEM) that is mainly used for pattern dimension measurement of a semiconductor device formed on a semiconductor wafer as an application example of an inspection apparatus according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating an example of a configuration of an apparatus when performing template matching using mask processing.
  • An electron beam is generated from the electron gun 1.
  • the deflector 4 and the objective lens 5 are controlled so that the electron beam is focused and irradiated at an arbitrary position on the semiconductor wafer 3, for example, a sample placed on the stage 2.
  • Secondary electrons are emitted from the semiconductor wafer 3 irradiated with the electron beam and detected by the secondary electron detector 6.
  • the detected secondary electrons are converted into a digital signal by the A / D converter 7, stored in the image memory 15 in the processing / control unit 14, and the CPU 16 performs image processing according to the purpose.
  • the template matching process according to the present embodiment is performed by the processing / control unit 14, more specifically, the matching processing unit 16a.
  • the display device 20 performs processing setting and processing result display, which will be described later with reference to FIG. Further, in the alignment using the optical camera that is lower in magnification than the electron microscope, the optical camera 11 is used.
  • A signal obtained by imaging the semiconductor wafer 3 with the optical camera 11 is also converted into a digital signal by the A/D converter 12 (when the signal from the optical camera is already a digital signal, the A/D converter 12 is not necessary), stored in the image memory 15 in the processing/control unit 14, and the CPU 16 performs image processing according to the purpose.
  • Likewise, backscattered electrons emitted from the semiconductor wafer 3 are detected by the backscattered electron detector 8, converted into a digital signal by the A/D converter 9 or 10, stored in the image memory 15 in the processing/control unit 14, and the CPU 16 performs image processing according to the purpose.
  • a scanning electron microscope is shown as an example of an inspection apparatus.
  • However, the applicable apparatus is not limited to this; the present embodiment can be applied to any inspection apparatus that acquires an image and performs template matching processing.
  • FIG. 2 is a functional block diagram showing a configuration example of the matching processing unit in the inspection apparatus according to the present embodiment, and is a functional block diagram of the processing unit that performs processing corresponding to FIG.
  • FIG. 3 is a functional block diagram illustrating an example of the entire configuration including a flow of template matching processing in the inspection apparatus according to the present embodiment, and is a diagram illustrating a configuration for performing learning processing. Note that the learning process and the matching process may be separate processes, or may have a partially common hardware configuration or software configuration, or a combination thereof.
  • The matching processing unit 16a illustrated in FIG. 1 includes: a feature amount extraction unit 16a-1 that extracts feature amounts from, for example, two inputs; a mutual feature amount calculation unit 16a-2 that calculates, from a plurality of feature amounts including first and second feature amounts, a mutual feature amount indicating the relationship between those feature amounts; a template matching determination unit 16a-3 that determines template matching based on the mutual feature amount and the matching success/failure determination boundary surface and obtains the distance (score) between the mutual feature amount and the matching success/failure determination boundary surface in the feature amount space; a collation target determination unit 16a-4 that determines whether any collation target remains; a maximum score position selection unit 16a-5 that selects the position on the wafer having the maximum distance (score); a belonging class determination unit 16a-6 that determines whether that position belongs to the correct answer class or the incorrect answer class; a storage unit 16a-7 that stores the matched position (x, y) on the wafer, the matching score, and the like; and an output unit 16a-8 that outputs a display based on these stored values.
  • a feature region extraction processing unit 16a-1a that extracts a feature amount from a template image acquired for learning and a learning processing unit 16a-2a described later are provided.
  • the matching processing unit 16a may include all the elements (functional units) shown in FIG. 2 or may include only a part.
  • FIG. 3 is a flowchart showing the flow of processing by the matching processing unit 16a in the inspection apparatus according to the present embodiment.
  • The processing according to the present embodiment includes a determination index value calculation process X based on mutual features, a matching correctness determination boundary surface (identification boundary surface) specification process Y based on learning, and a process of deriving a determination result by template matching based on process X and process Y.
  • The determination index value calculation process X based on mutual features is a process of obtaining the determination index value 109 using the template 101 registered in advance and an image 102 cut out from the search target image acquired by the inspection apparatus (the cut-out image at a matching candidate position). Details of the process for determining whether the pattern to be searched for exists in the searched image, and of the means for obtaining the matching position, will be described later with reference to FIG. In the present embodiment, one purpose is to make pattern matching succeed even when the difference in the appearance of the image between the template 101 and the matching correct position in the searched image 102 becomes large. As described later, the determination index value 109 for feature-amount-based matching is obtained using the mutual feature amount 108 computed from both the template 101 and the searched image 102.
  • Matching using feature quantities that are less susceptible to adverse effects due to differences (or matching using feature quantities so as not to be adversely affected) can be performed, and the robustness of template matching can be improved.
  • The mutual feature amount 108 is obtained by the mutual feature amount calculation process 107 in the mutual feature amount calculation unit 16a-2, from the feature amount A 105 extracted from the template 101 by the feature amount A extraction process 103 in the feature amount extraction unit 16a-1 and the feature amount B 106 extracted from the cut-out image (matching candidate position) 102 of the search target image.
  • The mutual feature amount calculation method will be described later. As a simple example, the template 101 and the cut-out image 102 from the search target image can be used as they are as the feature amount A 105 and the feature amount B 106, and the normalized correlation value between the two images can be used as one of the mutual feature amounts.
  • Alternatively, the covariance σXY, that is, the mean of the products of the deviations from the mean over two sets of corresponding data x and y, may be obtained as a mutual feature amount, as in the sketch below.
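A minimal sketch of the simple calculation described above, assuming the template and the cut-out image are used directly as feature amount A and feature amount B (names are hypothetical): the normalized correlation value and the covariance of the two patches each serve as one mutual feature amount.

```python
import numpy as np

def mutual_features(feature_a, feature_b):
    """Mutual feature amounts from two equally sized patches
    (feature A = template, feature B = cut-out image, used as-is)."""
    a = feature_a.astype(np.float64).ravel()
    b = feature_b.astype(np.float64).ravel()
    da, db = a - a.mean(), b - b.mean()
    covariance = float((da * db).mean())              # sigma_XY: mean of products of deviations
    denom = np.sqrt((da * da).sum() * (db * db).sum())
    correlation = 0.0 if denom == 0 else float((da * db).sum() / denom)
    return {"covariance": covariance, "correlation": correlation}
```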
  • the obtained mutual feature value 108 is used as a part or all of the determination index value 109 used in the template matching determination unit 16a-3 (matching score calculation unit).
  • the mutual feature quantity 108 is not limited to one, and a plurality of different kinds of feature quantities can be calculated and used.
  • the feature quantity B106 extracted from the clipped image 102 from the searched image can be used as part of the determination index value 109 as an individual feature quantity as it is.
  • This feature amount is not limited to one, and a plurality of different types of feature amounts can be calculated and used as individual feature amounts.
  • The matching score determination processing unit may have a median value setting processing unit that sets a distance of 0 as the median value of the score, and a matching incorrect answer may be determined when the distance is equal to or less than this median value of the matching score.
  • the learning processing unit 16a-2a can use the image 101a-1 and the matching correct / incorrect information 102a-2 as the template 101a and the cut-out image (matching correct / incorrect information) 102a from the searched image.
  • The subsequent steps of process Y up to obtaining the determination index value 109a are the same as in process X, and can be executed by the same algorithm or by the same hardware; alternatively, they may be carried out by a separate configuration.
  • In process Y, the matching correct/incorrect determination boundary surface specification process 110 is performed from the determination index value 109a.
  • In the matching correct/incorrect determination boundary surface specification process 110 (details will be described later), a boundary surface that separates matching success from failure in the determination index value space is specified. Because a plurality of determination index values are used, and because these include the determination index value 109a obtained from the mutual relationship between the template and the searched image, even in a case where, for example, an image-based matching method that uses only a correlation value cannot separate matching success from failure, the technique according to the present embodiment increases the possibility of obtaining a matching success/failure determination boundary surface that can separate them.
  • In the template matching determination process 112 in the template matching determination unit 16a-3, the distance of the determination index value 109 from the matching determination boundary surface in the determination index value space is calculated as a matching determination index, and this distance is output as the determination result (matching score or the like) 113. An example of this distance calculation method will be described later.
  • In this way, a matching determination index such as a matching score is obtained for the target whose matching score is to be calculated (the cut-out image 102 from the searched image).
  • FIG. 5 is a flowchart showing a flow of processing for performing search processing using the template matching processing described with reference to FIG.
  • a part 300 surrounded by a broken line corresponds to the process described with reference to FIG. 3.
  • The template matching determination process 112 is performed using the template 101, the image 102 cut out from the search target image, and the matching correctness determination boundary surface 111 obtained by the learning process, and a matching determination result (matching score) 113 is calculated on that basis.
  • the image 102 cut out by the image cutout process 302 of the area to be matched with the template is cut out from the search target image 301, checked against the template 101, and the determination result 113 is output through the template matching determination process 112.
  • In the collation target determination process 303, it is determined whether the determination result 113 has been obtained at all collation target positions in the searched image.
  • If positions remain, the cutout position is changed by the cutout position changing process 304, and an image is cut out again by the cutout process 302 for the area to be collated with the template.
  • When results have been obtained at all positions, the maximum score position selection process 305 is performed to obtain the position at which the matching score is maximized.
  • In the belonging class determination process 306, it is determined whether the matching position with the highest matching score is a position that can be regarded as a matching correct answer or a position that should be regarded as a matching incorrect answer.
  • When the determination index value 109 at the matching position with the maximum matching score belongs to the matching incorrect answer side (incorrect answer class) of the matching correct/incorrect determination boundary surface (that is, when it is below the score acceptance), it is determined that the pattern to be searched for does not exist within the field of view of the searched image. In this case, processing such as searching again for the alignment pattern around the imaging position, or interrupting measurement and notifying the user that alignment has failed, is performed.
  • When the determination index value 109 at the matching position with the maximum matching score belongs to the matching correct answer side (correct answer class) of the matching correctness determination boundary surface (that is, when it is equal to or above the score acceptance), the collation position is output as the matching position 307.
  • a matching score can also be output together with the matching correctness.
  • template matching can be performed using the template matching result calculated using the mutual feature amount described in FIG. 3, for example, the matching score.
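The search flow of FIG. 5 can be summarized in the following hedged sketch (the score function, which returns the signed distance of the determination index values from the matching correctness determination boundary surface, is assumed to be given; all names are hypothetical). The maximum-score position is accepted as the matching position only if it lies on the correct-answer side of the boundary surface (score >= 0).

```python
def search_matching_position(template, searched, index_values, boundary_score):
    """index_values(template, patch) -> list of determination index values;
    boundary_score(values) -> signed distance from the matching correctness
    determination boundary surface (>= 0 means correct-answer class)."""
    th, tw = template.shape
    best_pos, best_score = None, float("-inf")
    for y in range(searched.shape[0] - th + 1):      # all collation target positions
        for x in range(searched.shape[1] - tw + 1):
            patch = searched[y:y + th, x:x + tw]
            score = boundary_score(index_values(template, patch))
            if score > best_score:
                best_pos, best_score = (x, y), score
    if best_score >= 0:                              # correct-answer class
        return best_pos, best_score
    return None, best_score                          # no pattern in the field of view
```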
  • FIG. 6 is a diagram illustrating the principle of the matching score calculation process 112 described in FIG.
  • FIG. 6 (a) shows an example in which two determination index values of determination index value A and determination index value B are used.
  • The matching score is calculated, in the determination index value space spanned by the determination index values (shown in two dimensions in this example), from the coordinates for which the score is to be calculated (the coordinates in the determination index value space determined from the respective determination index values 109) and the matching correctness determination boundary surface 111.
  • The matching correctness determination boundary surface 111 is given as the result of the matching correctness determination boundary surface designation process 110, as described above.
  • The distance from those coordinates to the matching correct/incorrect determination boundary surface 111 is the broken line portion 410.
  • the Euclidean distance can be used as the distance.
  • the distance to be used is not limited to the Euclidean distance, and any means that can calculate the distance from the matching correctness determination boundary surface 111 may be used.
  • FIG. 6B is a diagram showing the relationship between the distance 410 from the matching correctness determination boundary surface 111 and the matching score 411.
  • When the distance is 0 (that is, on the boundary surface), the matching score 411 is set to 0.
  • the relationship between the distance and the score at that time can be linear as shown by a straight line 412 in FIG.
  • The correctness of matching may be determined based on the score acceptance as described above. This score acceptance usually has to be determined by the user or the device designer, and the matching performance may differ depending on how it is set. If, as described above, a distance of 0 is mapped to a fixed score of 0, no acceptance setting is necessary. Furthermore, in a conventional matching method using, for example, normalized correlation, only a single correlation value serves as the determination index, and that value itself becomes the score; in such a case, as shown in FIG. 6A, if matching correct answers and incorrect answers cannot be separated by one value, an appropriate score acceptance cannot be set. According to the present embodiment, such a situation can be avoided.
  • FIG. 7 is a diagram for explaining the specification processing 110 for the matching correct / incorrect determination boundary surface 111 described in FIG.
  • the matching success / failure determination boundary surface 111 includes a case where the matching is correct in the determination index value space (indicated by a circle in FIG. 7A) and a case where the matching is incorrect (indicated by a cross in FIG. 7A). Set for the purpose of separation. By doing so, in the affiliation class determination processing 306 described with reference to FIG. 6, it is possible to determine which side has the matching result on the basis of the matching success / failure determination boundary surface 111, and the matching result is the matching correct position. It can be seen whether there is a matching incorrect answer position.
  • The matching success/failure determination boundary surface 111 can be obtained by a method used in, for example, an SVM (Support Vector Machine).
  • the matching success / failure determination boundary surface 111 corresponds to a separation hyperplane (also referred to as an identification surface) in the SVM.
  • The matching success/failure determination boundary surface 111 is a separation hyperplane, and the broken line portion 501 inside it and the broken line portion 502 outside it are called the margin.
  • a point on the margin is called a support vector (there is at least one in each case of matching correct answer and case of matching incorrect answer).
  • The separation hyperplane is set at the position where the Euclidean distance to the learning data closest to the other class (the support vectors) is maximized; that is, the margin from the extreme cases of one class to the other class is maximized (margin maximization).
  • By this method, matching correct answer cases and matching incorrect answer cases can be separated in the feature amount space even when there are a plurality of determination index values. That is, the success or failure of matching can be determined using the matching success/failure determination boundary surface 111 obtained by this method as a reference.
  • FIG. 7B is a diagram for explaining a configuration of processing for obtaining the matching success / failure determination boundary surface 111.
  • a combination of a plurality of determination index values 109a and matching success / failures 102-2 described in FIG. 5 is used as one case, and data (learning data) 102a including a plurality of cases is prepared.
  • the learning data 102a includes a matching correct answer case and a matching incorrect answer case.
  • an SVM separation hyperplane is obtained using SVM (111), and the obtained separation hyperplane is set as a matching success / failure determination boundary surface 111.
  • the matching success / failure determination boundary surface 111 can be obtained from a plurality of determination index values using the SVM.
  • It is not necessary to limit the method of obtaining the identification surface (separation hyperplane) to an SVM; any method may be used as long as it yields a matching success/failure determination boundary surface that separates the matching correct answer cases from the matching incorrect answer cases.
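A hedged sketch of the learning step, assuming scikit-learn is available (the patent only requires some method that separates correct and incorrect cases; a linear SVM is used here purely for illustration, and all names are hypothetical). The signed value of decision_function is proportional to the distance from the separation hyperplane, so a fixed score acceptance of 0 separates the two classes.

```python
import numpy as np
from sklearn.svm import SVC

def learn_boundary_surface(index_values, labels):
    """index_values: (n_cases, n_indices) determination index values of learning data;
    labels: 1 for a matching correct answer case, 0 for a matching incorrect answer case."""
    clf = SVC(kernel="linear")
    clf.fit(np.asarray(index_values), np.asarray(labels))
    return clf

def matching_score(clf, index_value_vector):
    """Signed value proportional to the distance from the separation hyperplane;
    >= 0 corresponds to the correct-answer class."""
    return float(clf.decision_function(np.asarray(index_value_vector).reshape(1, -1))[0])
```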
  • FIG. 8 is a diagram for explaining the determination index value calculation means described in FIGS. 3 and 5. As described with reference to FIG. 3, the determination index value 109a is calculated from the template 101 and the cutout image 102 at the matching candidate position.
  • The mutual feature amount (feature amount D 108) is calculated from the feature amount extracted from the template 101 by the feature amount extraction unit A 103 (here, feature amount A 105) and the feature amount extracted from the clipped image 102 at the matching candidate position by the feature amount extraction unit B 104 (here, feature amount B 106).
  • The method for calculating the mutual feature amount will be described with reference to FIG.
  • the feature amount C608 is calculated by the feature amount extraction processing C605 by using the clipped image 102 or the template 101 at the matching candidate position.
  • a method of calculating the individual feature amount will be described with reference to FIG.
  • the obtained feature amount D108 or feature amount C608 is set as the determination index value 109a.
  • a plurality of types of feature quantity A105, feature quantity B106, feature quantity C608, and feature quantity D108 may be used.
  • a plurality of types of determination index values 109a are also used. With this configuration, a plurality of types of mutual feature amounts and individual feature amounts can be obtained from the template 101 and the cutout image 102 at the matching candidate position, and the feature amounts can be used as the determination index value 109a.
  • FIG. 9 is a diagram illustrating the feature amount described in FIG.
  • the feature quantities are classified according to their properties, and are referred to as a first class feature quantity, a second class feature quantity, and a third class feature quantity.
  • The first type feature amount is a feature determined from the target image, or from a partial region of the target image, regardless of position (coordinates) within the image from which the feature amount is calculated. For example, the average pixel value and the pixel value variance of the entire image are first type feature amounts. This type of feature amount corresponds to an individual feature amount.
  • The second type feature amount is a feature amount determined by a position (coordinates) in the image. For example, as shown in the figure, the feature value V(i, j) is calculated at the coordinates (i, j) 1402 on the image (here, the upper left of the image is the origin of the image coordinate system). The feature amount V(i, j) can be expressed as a multidimensional vector whose elements are f1 to fn (n is the number of vector elements).
  • the SIFT feature amount represents a feature by a vector determined for each certain coordinate (each feature point) on the image.
  • the area around the feature point is divided into a plurality of small areas (16 areas), and a histogram is generated with bins of gradient directions (8 directions) of pixel values in each small area.
  • A vector whose elements are the bins of these histograms (128 elements = 16 × 8) is used as the feature amount.
  • This feature amount also corresponds to the individual feature amount.
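The second type feature amount can be illustrated with the following sketch of a simplified SIFT-like descriptor (a 16×16 patch around a coordinate is split into 4×4 cells, and an 8-bin gradient direction histogram per cell gives a 128-element vector; no scale or orientation normalization is performed, the coordinate is assumed not to lie at the image border, and all names are hypothetical):

```python
import numpy as np

def gradient_histogram_descriptor(image, i, j, patch=16, cells=4, bins=8):
    """128-element (4x4 cells x 8 orientation bins) descriptor at coordinates (i, j)."""
    half = patch // 2
    win = image[j - half:j + half, i - half:i + half].astype(np.float64)
    gy, gx = np.gradient(win)                       # gradients along y (rows) and x (cols)
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    cell = patch // cells
    descriptor = []
    for cy in range(cells):
        for cx in range(cells):
            sl = (slice(cy * cell, (cy + 1) * cell), slice(cx * cell, (cx + 1) * cell))
            hist, _ = np.histogram(orientation[sl], bins=bins, range=(0, 2 * np.pi),
                                   weights=magnitude[sl])
            descriptor.extend(hist)
    v = np.asarray(descriptor)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```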
  • The third type feature amount is a feature amount determined by the second type feature amount calculated from the template image, the second type feature amount calculated from the searched image, and the relative position between the two images (for example, the collation position of the two images).
  • the mutual feature amount is a third type feature amount.
  • the method for obtaining the third type feature using the second type feature (mutual feature amount calculation method) will be described in detail with reference to FIGS. 11 and 13.
  • the feature amount is determined by the relative position 1416 with respect to the region (broken line portion) cut out from the searched image.
  • FIG. 10 is a diagram for explaining a feature amount calculation area when calculating a feature amount determined by a position in an image in the second type feature amount described in FIG.
  • FIG. 10A shows an example in which the feature amount is obtained from certain coordinates in the image.
  • the pixel value, pixel value gradient information, and the like at this coordinate are used as feature amounts. Therefore, the feature amount is determined by the coordinates in the image.
  • FIG. 10B is an example in which the feature amount is obtained from a certain rectangular area in the image. In this rectangular area, the pixel value average, the pixel value variance, the value of each bin of the pixel value histogram, or the value of each bin of the pixel value gradient direction histogram calculated by dividing the rectangular area into small areas are used as feature amounts. .
  • FIG. 10C shows an example in which the feature amount is obtained from a circular area in the image. Similar to the rectangular area in FIG. 10B, the pixel value average, pixel value variance, and each bin value of the pixel value histogram in this circular area, or the pixel value gradient direction calculated by dividing the circular area into small areas The value of each bin of the histogram is used as a feature amount. As a result, the feature around the target coordinate for calculating the feature amount can also be used, and matching can be made more robust by using this feature.
  • FIG. 10D shows an example in which the feature amount is obtained from a region of arbitrary shape in the image. As with the rectangular and circular areas in FIGS. 10B and 10C, the feature amount may be calculated from the arbitrarily shaped region. As a result, the features around the target coordinates for calculating the feature amount can also be used, and matching can be made more robust by using them.
  • FIG. 11 is a diagram for explaining a method for obtaining the third class feature quantity from the second class feature quantity in the third class feature quantity described in FIG. As described above, the third type feature value is obtained based on the relative position between the second type feature value of the template image and the second type feature value of the searched image.
  • A region (broken line portion) 1610 having the same size as the template image 1601 is cut out from the searched image 1605 (cutout position (X, Y) 1607), and the second type feature quantity is calculated from the cut-out region.
  • the second type feature amount is also calculated from the template image 1601.
  • A mutual feature amount is obtained by computing the mutual relationship between the second type feature amount calculated from the searched image 1605 and the second type feature amount calculated from the template image 1601.
  • For example, the distance between the vectors representing the two second type feature quantities is used as the mutual feature quantity.
  • The distance measure is not limited as long as it can quantify the relationship between the two feature quantities, such as the Euclidean distance, the Manhattan distance, or the Bhattacharyya distance.
  • This mutual feature amount is a feature amount determined by the relative position between the template image and the searched image (in this example, the image cutout position (X, Y) 1607 in the searched image corresponds to the relative position).
  • FIG. 11 (b) is a diagram for explaining a method for determining the relative positions of the template image and the image to be searched by a method different from the method of FIG. 11 (a).
  • In this method, a voting-based method is used as one way of estimating the position on the searched image that resembles the template image, and the resulting voting value is used as the third type feature amount.
  • a second type feature quantity is calculated for each of the template image and the searched image (the calculation of the second type feature quantity for the searched image here is for the entire image area).
  • A point 1631 that serves as the positional reference when obtaining the second type feature quantities in the template image is referred to as the reference point (for example, if the upper left of the image is the origin O of the image coordinate system and the second type features are defined with respect to it, the origin O is the reference point).
  • For each second type feature amount, the most similar feature amount in the other image is selected, and the two are stored as a pair.
  • For each second type feature amount on the searched-image side that is paired with a second type feature amount in the template image, the coordinates corresponding to the distance and vector direction from the reference point obtained in the template image are computed relative to the position (coordinates) at which that feature amount was calculated.
  • The obtained coordinates are voted for as matching position candidate coordinates (one vote per pair).
  • This voting process is performed for all pairs (or for all pairs having at least a certain degree of similarity).
  • The number of votes at the reference point (for example, the upper-left coordinate of the cut-out region 1641) is used as the third type feature amount.
  • Instead of using only the single most similar counterpart, several of the most similar counterparts for each feature point may be used (for example, the top three).
  • the third type feature quantity can be calculated from the second type feature quantity.
  • By using the third type feature quantity, which is a mutual feature quantity, it becomes possible to perform more robust matching, as in the sketch below.
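A hedged sketch of the voting-based third type feature amount described above (assuming corresponding pairs between template feature points and searched-image feature points have already been found, with no rotation or scale difference for simplicity; all names are hypothetical). Each pair votes for where the template reference point would land on the searched image, and the vote count at a candidate position is used as the feature amount.

```python
from collections import Counter

def voting_feature(pairs, reference_point, candidate, tolerance=2):
    """pairs: list of ((tx, ty), (sx, sy)) matched feature point coordinates in the
    template and the searched image.  reference_point: (rx, ry) in the template.
    Returns the number of votes within `tolerance` pixels of `candidate`."""
    rx, ry = reference_point
    votes = Counter()
    for (tx, ty), (sx, sy) in pairs:
        # Where the template reference point projects to on the searched image, given this pair
        votes[(sx + (rx - tx), sy + (ry - ty))] += 1
    cx, cy = candidate
    return sum(count for (vx, vy), count in votes.items()
               if abs(vx - cx) <= tolerance and abs(vy - cy) <= tolerance)
```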
  • FIG. 12 is a diagram illustrating a specific example of the feature amount described in FIG.
  • For the feature quantity A 105, feature quantity B 106, and feature quantity C 608 (individual feature quantities) described in FIG. 8, the same kinds of feature quantities can be used.
  • For example, a feature quantity 702 relating to the texture in an area based on specified coordinates in the image, or an edge feature amount 703 representing information on the structure of a pattern in the image, can be used.
  • Examples of the feature amount related to the texture include those using a histogram feature amount 707, a contrast feature amount 708, and a co-occurrence matrix 709, which will be described later.
  • This feature amount is a feature amount corresponding to the second type feature amount described with reference to FIG.
  • the histogram feature 707 is a feature value including an average, variance, skewness, kurtosis, etc. obtained by analyzing a gradation value histogram in a region with reference to a specified coordinate in the image for each of the template and the searched image.
  • the contrast feature amount 708 uses an average gradation value of an area designated by each of the template and the searched image as a feature amount.
  • the designated area for example, an area where a pattern (for example, a line pattern) exists in an image or an area where no pattern exists (background area) is used.
  • the contrast difference between a plurality of designated areas in the respective fields of view of the template and the searched image may be obtained (contrast feature in the image), and the value may be used as the feature amount.
  • In addition, feature point information 704 obtained using a technique such as SIFT (Non-Patent Document 2) can be used.
  • The edge feature quantity 703 includes feature quantities such as HOG (Histograms of Oriented Gradients).
  • the feature amount D (mutual feature amount 720) is calculated mutually from the feature amounts obtained from the feature amount A105 and the feature amount B106.
  • For the histogram features, the degree of coincidence between the template and the searched image (for example, the difference between the values) is used as a feature amount.
  • Alternatively, the correlation value of the histogram distribution shapes obtained from the template and the searched image may be used as the feature amount.
  • For the contrast features, the degree of coincidence (for example, the difference between the contrast feature values obtained from the template and the searched image) is used as a feature amount.
  • a correlation value between the template and the searched image itself may be used as the feature amount.
  • the image used at that time may be the input image itself, or an image in which preprocessing such as noise removal processing and edge enhancement processing has been performed in advance may be used.
  • When the mutual feature amount is obtained from corner point information, the number of matching corner points obtained from the template and the searched image can be used.
  • Alternatively, the number of votes in corresponding point matching may be used as the feature amount, as shown in Non-Patent Document 2. This feature amount corresponds to the third type feature amount described above.
  • the plurality of individual feature values 701 and some or all of the mutual feature values as described above can be used in the template matching described with reference to FIGS.
  • FIG. 13 is a diagram for explaining an example of the feature amount calculation means described in FIG. Any of the following feature amounts can be used for the feature amount A105, the feature amount B106, and the feature amount C608 described with reference to FIGS.
  • FIG. 13A is a diagram for explaining the histogram feature.
  • the histogram feature is a means that uses, as a feature, a distribution shape of a gradation value histogram in a specified region or a value obtained by analyzing the distribution. Histograms 804 and 805 are obtained from the template 801 and the image 803 cut out from the searched image 802, respectively.
  • the distribution shape of the histogram can be used as it is. For example, it is characterized by a vector whose element is the frequency of each bin (divided section of the data range) of the histogram. Alternatively, some or all of the average, variance, skewness, and kurtosis calculated by analyzing the shape of the distribution may be used as the feature amount.
  • a cumulative histogram may be used as the gradation value histogram.
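A minimal sketch of the histogram feature described above (assuming 8-bit gray-scale regions; the bin count and the particular statistics are illustrative choices, not prescribed by the patent):

```python
import numpy as np

def histogram_feature(region, bins=32):
    """Gradation value histogram of a region plus simple distribution statistics."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256), density=True)
    values = region.astype(np.float64).ravel()
    mean, std = values.mean(), values.std()
    skewness = 0.0 if std == 0 else float(((values - mean) ** 3).mean() / std ** 3)
    kurtosis = 0.0 if std == 0 else float(((values - mean) ** 4).mean() / std ** 4)
    return {"histogram": hist, "mean": float(mean), "variance": float(std ** 2),
            "skewness": skewness, "kurtosis": kurtosis}
```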
  • FIG. 13B is a diagram for explaining the contrast feature.
  • the average value of the gradation values in the designated areas 814 and 815 is used as the feature amount.
  • the information is not limited to the average value, and may be any information that can represent the information of the gradation value in the region, and may be, for example, a variance value, a maximum value, a minimum value, or the like.
  • FIG. 13C is a diagram for explaining a feature amount different from that in FIG. 13B regarding the contrast feature.
  • the ratio of the average values of the gradation values (contrast in the image) is obtained in each of the designated areas 822 and 823, and the value is used as the feature amount.
  • the ratio of the average value of the gradation values (contrast in the image) is used as the feature amount in the specified regions 826 and 827.
  • the average value is used, but the present invention is not limited to this, and any information that can represent the information of the gradation value in the area may be used.
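A minimal sketch of the contrast feature for two designated areas (for example, a line-pattern area and a background area; the region format and names are assumptions for illustration):

```python
import numpy as np

def contrast_features(image, pattern_region, background_region):
    """Mean gradation values of two designated areas and their ratio (in-image contrast).
    Regions are (x, y, width, height) tuples."""
    def mean_of(region):
        x, y, w, h = region
        return float(image[y:y + h, x:x + w].astype(np.float64).mean())
    pattern_mean = mean_of(pattern_region)
    background_mean = mean_of(background_region)
    ratio = pattern_mean / background_mean if background_mean != 0 else float("inf")
    return {"pattern_mean": pattern_mean, "background_mean": background_mean,
            "contrast_ratio": ratio}
```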
  • the feature amounts acquired by the method illustrated in FIGS. 13A to 13C are individual feature amounts that can be used for the feature amount A105, the feature amount B106, and the feature amount C608 described above.
  • the present invention is not limited to these feature amounts, and any value or vector that represents the features of an image cut out from the template and the searched image may be used.
  • the mutual feature quantity which is the feature quantity D720 described with reference to FIGS. 8 and 12 can be obtained by comparing the individual feature quantities obtained from the template image and the image cut out from the searched image.
  • the correlation value of the distribution shape of the histogram obtained from the template image and the image cut out from the searched image is used as the mutual feature amount.
  • a difference or ratio of the averages (or variances, skewnesses, kurtoses, etc.), which are values obtained by analyzing the histograms, can also be used as a mutual feature amount.
  • for the contrast feature amounts obtained as in FIGS. 13B and 13C, the difference or ratio between the values obtained from the template image and the searched image can be used as the mutual feature amount (see the sketch below).
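  • A minimal sketch of such mutual feature amounts, reusing the `histogram_feature` helper introduced above (an assumption of this illustration): the correlation of the two histogram shapes, plus differences and ratios of the analyzed values.
```python
import numpy as np

def mutual_histogram_feature(hist_t, hist_s):
    """Mutual feature: correlation of the two histogram distribution shapes."""
    return float(np.corrcoef(hist_t, hist_s)[0, 1])

def mutual_moment_features(moments_t, moments_s, eps=1e-12):
    """Mutual features: difference and ratio of values (mean, variance, etc.) obtained from the histograms."""
    moments_t, moments_s = np.asarray(moments_t), np.asarray(moments_s)
    return np.concatenate([moments_t - moments_s, moments_t / (moments_s + eps)])
```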
  • the following can be used as a mutual feature amount.
  • FIG. 13D is a diagram for explaining line profile characteristics.
  • pixels are added and averaged (projected) along one direction of the image to obtain a one-dimensional waveform. This is called a line profile.
  • FIG. 13D shows an example in which each image is projected in the Y direction.
  • Correlation values of line profiles 834 and 835 of the respective images can be obtained, and the correlation values can be used as mutual feature amounts. It should be noted that the range in which the line profile is correlated is not limited to using the entire line profile, and the correlation value of only a section cut out of a part of the line profile may be used.
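  • A sketch of the line profile feature, assuming the two images have the same width along the projected axis; the axis choice and the optional `section` slice are illustration parameters, not values from the embodiment.
```python
import numpy as np

def line_profile(image, axis=0):
    """Average the pixels along one direction; axis=0 projects in the Y direction as in FIG. 13D."""
    return np.asarray(image, dtype=np.float64).mean(axis=axis)

def line_profile_correlation(img_t, img_s, axis=0, section=slice(None)):
    """Mutual feature: correlation of the two line profiles (optionally only of a cut-out section)."""
    p = line_profile(img_t, axis)[section]
    q = line_profile(img_s, axis)[section]
    return float(np.corrcoef(p, q)[0, 1])
```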
  • FIG. 13E is a diagram illustrating an example in which the correlation value of the image itself is a mutual feature amount.
  • a correlation value between images is calculated in the template 841 and an image 843 cut out from the searched image 842, and the correlation value is used as a feature amount.
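  • A minimal sketch of using the correlation of the images themselves as a mutual feature, assuming the template and the cut-out image have already been brought to the same size.
```python
import numpy as np

def normalized_correlation(img_a, img_b):
    """Mutual feature (FIG. 13E style): zero-mean normalized correlation of the two images."""
    a = np.asarray(img_a, dtype=np.float64).ravel(); a -= a.mean()
    b = np.asarray(img_b, dtype=np.float64).ravel(); b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```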
  • FIG. 13F is an example in which the corresponding point matching result of SIFT is a mutual feature amount.
  • corresponding point matching is performed on feature points (feature descriptors) extracted from the template 851 and the searched image 852 (for example, the corresponding points 853 connected by arrows), and the coordinates, scale, and rotation amount of each corresponding feature point in the searched image 852 are obtained (Non-Patent Document 2).
  • next, the position in the searched image 852 of the reference point (for example, the position of the white circle in the template 851) is estimated from the coordinates, scale, and rotation amount obtained above, in the manner of a generalized Hough transform, and a vote (voting processing) is cast for that position (Non-Patent Document 2).
  • the number of votes cast at (or around) the position of the image cut out from the searched image can then be used as the feature amount.
  • instead of the raw number of votes, the voting density (the number of votes divided by the area of the surrounding region) may be used so that the size of the surrounding region is taken into account.
  • the correlation value between the SIFT feature value at a corresponding point in the template and the SIFT feature value at the corresponding point in the searched image may also be used as the feature value. A hedged sketch of the vote counting appears below.
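  • The sketch below illustrates one way to count votes from SIFT correspondences, assuming an OpenCV build that provides cv2.SIFT_create. The template center as the reference point, the Lowe ratio test, and the vote radius are assumptions of this illustration rather than details of the embodiment.
```python
import cv2
import numpy as np

def sift_vote_count(template, searched, candidate_xy, radius=8.0, ratio=0.75):
    """Mutual feature (FIG. 13F style): number of SIFT correspondences whose generalized-Hough
    vote for the template reference point lands near candidate_xy in the searched image."""
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(template, None)
    kp_s, des_s = sift.detectAndCompute(searched, None)
    if des_t is None or des_s is None:
        return 0
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_t, des_s, k=2)
    ref = np.array([template.shape[1] / 2.0, template.shape[0] / 2.0])   # assumed reference point
    votes = 0
    for pair in matches:
        if len(pair) < 2 or pair[0].distance > ratio * pair[1].distance:
            continue                                    # Lowe ratio test rejects ambiguous matches
        t, s = kp_t[pair[0].queryIdx], kp_s[pair[0].trainIdx]
        scale = s.size / (t.size + 1e-12)
        dtheta = np.deg2rad(s.angle - t.angle)
        c, si = np.cos(dtheta), np.sin(dtheta)
        off = ref - np.array(t.pt)                      # offset from the keypoint to the reference point
        rotated = np.array([c * off[0] - si * off[1], si * off[0] + c * off[1]])
        predicted = np.array(s.pt) + scale * rotated    # voted reference position in the searched image
        if np.linalg.norm(predicted - np.asarray(candidate_xy, dtype=float)) <= radius:
            votes += 1
    return votes
```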
  • FIG. 13G is an example in which the corresponding point matching result of Corner is used as a mutual feature quantity instead of the SIFT in FIG. 13F.
  • corresponding point matching is performed using corners extracted from the template 861 and the image 863 cut out from the searched image 862 as feature points (for example, points connected by arrows are corresponding points), and the coordinates, scale, and rotation amount of each corresponding feature point in the searched image 862 are obtained.
  • mutual feature quantities can then be obtained by the same method. Note that the method for calculating the mutual feature amount is not limited to those described here, and any feature amount (scalar value, vector, or the like) expressing the mutual relationship between the template and the searched image may be used.
  • the template and the searched image are preprocessed to generate an image with reduced noise or enhanced features, and the above-described feature amounts (individual feature amounts and mutual feature amounts) are obtained for the generated images.
  • Examples of the preprocessing include smoothing filtering, edge enhancement filtering, and binarization processing.
  • the present invention is not limited to the processing described here, and any filter processing can be used as long as it can be used as preprocessing.
  • a process combining a plurality of preprocessing steps may also be applied to the template or the searched image, and the above-described feature amounts (individual and mutual feature amounts) may be obtained from the resulting image. A sketch of such a preprocessing chain follows.
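  • The sketch below chains smoothing, edge enhancement, and binarization as examples of such preprocessing; the specific kernel sizes, the unsharp-mask weights, and Otsu thresholding are assumptions of this illustration, and any other filters could be substituted as stated above.
```python
import cv2
import numpy as np

def preprocess(image, steps=("smooth", "edge")):
    """Apply a combination of preprocessing steps before feature extraction."""
    out = np.asarray(image, dtype=np.uint8)
    for step in steps:
        if step == "smooth":
            out = cv2.GaussianBlur(out, (5, 5), 0)                       # noise reduction
        elif step == "edge":
            blur = cv2.GaussianBlur(out, (0, 0), 2.0)
            out = cv2.addWeighted(out, 1.5, blur, -0.5, 0)               # unsharp-mask edge enhancement
        elif step == "binarize":
            _, out = cv2.threshold(out, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return out
```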
  • FIG. 14 is a diagram illustrating in detail the learning data when calculating the matching success / failure determination boundary surface 111 described in FIG. FIG. 14A shows an example of simple learning data.
  • One template 1001, an image 1002 in which a matching correct answer position in the searched image is cut out, and an image 1003 in which a matching incorrect answer position is cut out are used as learning data.
  • a plurality of images (correct positions and incorrect positions) cut out from the searched image may be used, both when a plurality of such cut-outs can be obtained from one image and when a plurality of images are used for learning (for one template, cut-outs may be taken from each of the plurality of images used for learning).
  • this increases the possibility of improving the generalization performance of the matching correct / incorrect determination made by the matching success / failure determination boundary surface 111 in the determination index value space described in FIG. 7.
  • FIG. 14B is a diagram illustrating an example in which a plurality of types of templates 1011 used for learning data are used.
  • by using a plurality of templates 1004, it is possible to obtain a matching success / failure determination boundary surface that is less dependent on a specific template pattern or appearance, that is, more versatile. A sketch of assembling such learning data is shown below.
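  • A minimal sketch of assembling the learning data: the helper `index_values(t, s)` is a hypothetical function assumed to return the vector of individual and mutual feature amounts for one template / cut-out pair.
```python
def build_learning_data(templates, crops_with_labels, index_values):
    """Assemble (determination index vector, label) pairs for learning the boundary surface."""
    X, y = [], []
    for t in templates:                        # several templates generalize better (FIG. 14B)
        for crop, is_correct in crops_with_labels:
            X.append(index_values(t, crop))
            y.append(1 if is_correct else 0)   # 1 = matching correct position, 0 = incorrect position
    return X, y
```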
  • FIG. 15 is a diagram for explaining a method of determining not only the matching success / failure boundary surface in FIG. 3 but also the feature amount extraction method in the feature amount extraction unit by learning.
  • the feature amount extraction method is learned by a genetic algorithm (hereinafter referred to as GA) or genetic programming (hereinafter referred to as GP).
  • Feature extraction is a combination of a plurality of image processes. Each image process may have a plurality of setting parameters. The combination of the image processing and the setting parameters of each processing are learned using GA or GP.
  • a combination of image processing (including parameter setting) for calculating a determination index value is set as a chromosome (solution candidate).
  • FIG. 15A shows the flow of processing when GA or GP is used for learning.
  • FIG. 15B is a diagram showing in detail the evaluation unit in FIG.
  • chromosome 1721 is a combination of processes for calculating determination index value 1728 (a plurality of determination index values are calculated from one chromosome).
  • a plurality of chromosomes (solution candidates) 1721 are generated (for example, 100 individuals are generated).
  • a matching success / failure determination boundary surface calculation process 1723 is performed.
  • the matching success / failure determination boundary surface is calculated by the SVM described above.
  • as the evaluation value 1724, for example, the distance (score) from the matching boundary surface to the support vectors in the SVM can be used.
  • Learning end determination 1725 is performed based on whether or not the evaluation value satisfies a specified value (for example, whether or not the distance is larger than the specified value).
  • the chromosome at the end of learning becomes the feature amount extraction method (the combination of image processing and the parameter settings for each image processing), and the matching success / failure determination boundary surface is also determined. If the learning does not end, learning continues, and the selection 1704, crossover 1705, and mutation 1706 shown in FIG. 15A are performed again (generation change).
  • although the distance is used as the evaluation value here, the evaluation value is not limited to the distance as long as it can judge the quality of the matching success / failure determination boundary surface.
  • the method used for learning is not limited to GA (or GP) and SVM, and any method capable of such learning may be used.
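  • A simplified sketch of the GA/SVM learning loop described above: chromosomes are combinations of preprocessing choices, and each chromosome is evaluated by the margin of a linear SVM fitted to the index values it produces (the geometric distance from the boundary surface to the support vectors). The gene alphabet, the user-supplied `extract(template, searched, chromosome)` helper, the requirement that the labels contain both correct and incorrect examples, and the use of a linear kernel are all assumptions of this illustration, not details of the embodiment.
```python
import numpy as np
from sklearn.svm import SVC

STEPS = ["none", "smooth", "edge", "binarize"]       # assumed gene alphabet (preprocessing choices)

def fitness(chromosome, pairs, labels, extract):
    """Evaluation unit (FIG. 15B style): fit an SVM boundary to the index values produced by
    this chromosome and score it by the margin width (distance to the support vectors)."""
    X = np.array([extract(t, s, chromosome) for t, s in pairs])
    clf = SVC(kernel="linear", C=1.0).fit(X, labels)
    return 1.0 / (np.linalg.norm(clf.coef_) + 1e-12)  # geometric margin of the linear SVM

def evolve(pairs, labels, extract, pop=20, genes=3, generations=30, target=None, seed=0):
    rng = np.random.default_rng(seed)
    population = [list(rng.choice(STEPS, size=genes)) for _ in range(pop)]
    best, best_fit = None, -np.inf
    for _ in range(generations):
        fits = np.array([fitness(c, pairs, labels, extract) for c in population])
        if fits.max() > best_fit:
            best, best_fit = population[int(fits.argmax())], float(fits.max())
        if target is not None and best_fit >= target:     # learning end determination
            break
        parents = [population[i] for i in np.argsort(-fits)[: pop // 2]]   # selection
        children = []
        while len(children) < pop:
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = int(rng.integers(1, genes))                              # one-point crossover
            child = parents[a][:cut] + parents[b][cut:]
            if rng.random() < 0.1:                                         # mutation
                child[int(rng.integers(genes))] = str(rng.choice(STEPS))
            children.append(child)
        population = children
    return best, best_fit
```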
  • FIG. 16 is a diagram illustrating an example of a GUI for realizing manual setting of the matching success / failure determination boundary surface 111.
  • the matching success / failure determination boundary surface 111 can be obtained by using a technique such as SVM, for example.
  • the user manually specifies the matching success / failure determination boundary surface 111.
  • FIG. 16A is a diagram illustrating an example of user setting by GUI. In the GUI shown in FIG. 16A, the determination index value space is displayed on the display device 20 such as an LCD screen. In FIG. 16A, a graph in which the two determination index values, the determination index value A1103 and the determination index value B1104, are plotted on the vertical and horizontal axes is taken as an example; when three or more determination index values are used, the display can be switched between the index values assigned to the axes.
  • the boundary surface drawing button 1102 is selected with a mouse or the like, and acceptance of user input of the matching determination boundary surface 1101 is started. Next, the user manually draws the determination boundary surface 1101 in the determination index value space on the GUI by mouse input or the like.
  • the matching determination boundary surface drawn by the user can be used as the matching success / failure determination boundary surface 111 described in FIG.
  • GUI is not limited to the format as shown in FIG. 16, and any known technique can be used as long as the user can manually specify the matching success / failure determination boundary surface 111.
  • FIGS. 16B and 16C are diagrams illustrating an example of a determination index value space displayed on the GUI.
  • FIG. 16B shows an example in which the matching correctness determination boundary surface 111 is drawn with a straight line
  • FIG. 16C shows an example in which the matching correctness determination boundary surface 111 is drawn with a curve.
  • each point plotted with the symbol ○ or × on the graph is a determination index value in the learning data, and one symbol is obtained from one set of a template and an image cut out from the searched image; ○ indicates a value when the matching is correct, and × a value when the matching is incorrect.
  • the distribution of the symbols ○ and × can be displayed as reference data for when the user manually designates the matching success / failure determination boundary surface in the space of the determination index values 1103 and 1104.
  • here ○ and × are used as the symbols, but the symbols are not limited to these, and any symbols can be used as long as correct and incorrect answers can be distinguished. A plotting sketch is given below.
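  • A minimal matplotlib sketch of such a display, assuming X is an array of two-dimensional determination index values, y holds 1 for correct and 0 for incorrect, and an optional boundary polyline stands in for the user-drawn or learned boundary surface.
```python
import matplotlib.pyplot as plt
import numpy as np

def plot_index_space(X, y, boundary=None):
    """Display the determination index value space: o = matching correct, x = matching incorrect."""
    X, y = np.asarray(X), np.asarray(y)
    plt.scatter(X[y == 1, 0], X[y == 1, 1], marker="o", label="correct")
    plt.scatter(X[y == 0, 0], X[y == 0, 1], marker="x", label="incorrect")
    if boundary is not None:                              # boundary given as a polyline of points
        b = np.asarray(boundary)
        plt.plot(b[:, 0], b[:, 1], "k--", label="success/failure boundary")
    plt.xlabel("determination index value A"); plt.ylabel("determination index value B")
    plt.legend(); plt.show()
```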
  • FIG. 17 is a diagram illustrating an example of a GUI for confirming the stability of a matching result in a determination index value space spanned by determination index values.
  • FIG. 17A is an example of a two-dimensional determination index value space.
  • the GUI displays the determination index value space spanned by the determination index value A 1203 and the determination index value B 1204, the matching success / failure determination boundary surface 1201, and the position 1202 in that space of the matching result, i.e. of the determination index values obtained from the template and the image cut out from the searched image at the matching position.
  • with the GUI shown in this figure, it is possible to confirm graphically how far the matching result is from the matching success / failure determination boundary surface 1201. If the matching result is close to the boundary surface 1201, it can be seen that a slight change in the determination index values would flip the result between matching correct and incorrect, that is, the matching may be unstable.
  • FIG. 17B is a diagram showing an example of a GUI for confirming the stability of the matching result, as in FIG. 17A, except that the determination index value space spanned by three determination index values can be checked graphically as a three-dimensional graph.
  • the matching success / failure determination boundary surface 1205 and the matching result position 1202 can be displayed as in FIG.
  • the matching success / failure determination boundary surface 1205 can also be displayed transparently so that the data behind the boundary surface can be confirmed.
  • the position of the learning data, the matching success / failure determination boundary surface, and the matching result in the determination index value space can be confirmed from any viewpoint such as the front side or the back side by using the viewpoint movement buttons 1206, 1207, and the like.
  • FIG. 17C is a GUI for confirming the stability of the matching result, as in FIGS. 17A and 17B.
  • the determination index value space spanned by a plurality of determination index values is broken down into pairs of two index values, and a display 1211 is shown in which all or some of the projections 1214, obtained by projecting the matching determination boundary surface onto the space spanned by each pair of index values, are arranged against each other.
  • FIG. 17C shows an example in which there are four determination index values A, B, C, and D (1212, 1213).
  • for example, the matching determination boundary is shown in the determination index value space obtained from A and B, and a projected surface 1214 can be seen for every such pair. As described above, it is possible to provide a GUI for confirming the stability of the matching result.
  • the arrangement, size, and items of the display members in the GUI are not limited to those shown in the figure; any display method may be used as long as the relationship among the learning data, the matching success / failure determination boundary surface, and the matching result position in the determination index value space can be confirmed.
  • Each component of the present invention can be arbitrarily selected, and an invention having a selected configuration is also included in the present invention.
  • the present invention can be used for a pattern matching device.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Biochemistry (AREA)
  • Manufacturing & Machinery (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Power Engineering (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)
  • Length-Measuring Devices Using Wave Or Particle Radiation (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

In an inspection device which carries out pattern matching on an image to be searched, a feature region extraction processing unit extracts a feature value from a template image acquired for learning, a feature value extraction processing unit extracts feature values from an image to be searched acquired for learning, a reciprocal feature value computation processing unit computes reciprocal feature values between the template image and the image to be searched from the feature value extracted from the template image and the feature values extracted from the image to be searched, a learning processing unit computes an identification boundary which determines pass/fail of matching using a plurality of the reciprocal feature values, and a processing unit computes the plurality of reciprocal feature values from the image acquired from the inspection object; matching between the template image and the image to be searched of the inspection object is then carried out using the plurality of reciprocal feature values and the identification boundary. Thus, it is possible to provide an inspection device whereby, in template matching, an accurate matching location is output even when there is a large appearance deviation between the template and the image to be searched.

Description

マッチング処理装置、マッチング処理方法、及びそれを用いた検査装置Matching processing apparatus, matching processing method, and inspection apparatus using the same
 本発明は、マッチング処理技術、より詳細にはパターンマッチング技術に関し、特に、半導体ウェーハ上に形成されたパターンの検査、或いは計測を目的とした検査技術におけるテンプレートマッチング技術に関する。 The present invention relates to a matching processing technique, more specifically to a pattern matching technique, and more particularly to a template matching technique in an inspection technique for the purpose of inspecting or measuring a pattern formed on a semiconductor wafer.
 半導体ウェーハ上に形成されたパターンを計測、検査する装置では、テンプレートを用いてマッチングを行うテンプレートマッチング技術(下記非特許文献1参照)を利用して、所望の計測位置に検査装置の視野を合わせる処理を行う。下記特許文献1には、そのようなテンプレートマッチング方法の一例が説明されている。尚、テンプレートマッチング処理とは、予め登録されたテンプレート画像と最も一致する領域を、探索対象の画像から見つける処理である。 In an apparatus for measuring and inspecting a pattern formed on a semiconductor wafer, a template matching technique (see Non-Patent Document 1 below) that performs matching using a template is used to align the visual field of the inspection apparatus with a desired measurement position. Process. Patent Document 1 below describes an example of such a template matching method. Note that the template matching process is a process for finding an area most matching a pre-registered template image from the search target image.
 テンプレートマッチングを用いた検査装置の具体例としては、走査型電子顕微鏡を用いた半導体ウェーハ上のパターンの計測がある。本検査装置では、ステージ移動によって計測位置の大まかな位置に装置の視野を移動するが、ステージの位置決め精度だけでは電子顕微鏡の高倍率で撮像された画像上では大きなズレが生じることが多い。また、ウェーハをステージに毎回同じ方向で載せられるとは限らず、ステージに載せたウェーハの座標系(例えば、ウェーハのチップ等の並ぶ方向)が、ステージの駆動方向と完全には一致せず、これも電子顕微鏡の高倍率で撮像された画像上でのズレの原因となる。 A specific example of an inspection apparatus using template matching is measurement of a pattern on a semiconductor wafer using a scanning electron microscope. In this inspection apparatus, the field of view of the apparatus is moved to a rough position of the measurement position by moving the stage, but a large deviation often occurs on an image captured at a high magnification of an electron microscope only with the positioning accuracy of the stage. In addition, the wafer is not always placed on the stage in the same direction, and the coordinate system of the wafer placed on the stage (for example, the direction in which the chips of the wafer are arranged, for example) does not completely match the drive direction of the stage, This also causes a shift on an image taken at a high magnification of an electron microscope.
 更に、所望の観察位置での高倍率の電子顕微鏡画像を得るために、電子ビームを微小量(例えば、数十μm以下)だけ偏向させて観察試料上の狙った位置に照射する場合があるが(これを、「ビームシフト」と呼ぶ場合がある)、このビームシフトを行っても、ビームの偏向制御の精度だけでは、照射位置に関し、所望の観察位置からズレが生じることがある。このような夫々のズレを補正して正確な位置での計測、検査を行うために、テンプレートマッチングが行われる。 Furthermore, in order to obtain a high-magnification electron microscope image at a desired observation position, there is a case where an electron beam is deflected by a minute amount (for example, several tens of μm or less) and irradiated to a target position on an observation sample. (This may be referred to as “beam shift”). Even if this beam shift is performed, a deviation from a desired observation position may occur with respect to the irradiation position only by the accuracy of beam deflection control. Template matching is performed to correct each of these deviations and perform measurement and inspection at an accurate position.
 具体的には、電子顕微鏡像よりも低倍率の光学式カメラでのアライメントと、電子顕微鏡像でのアライメントの、多段階にアライメントを行う。例えば、ステージに乗せたウェーハの座標系のアラメントを光学式カメラで行う場合には、ウェーハ上で離れた位置にある複数チップ(例えばウェーハの左右両端のチップ)の画像を用いてアライメントを行う。まず、夫々のチップ内、あるいは近傍にあるユニークな同一パターン(夫々のチップ内で相対的に同じ位置にあるパターン)をテンプレートとして登録する(登録に用いるパターンとしては、ウェーハ上に光学用のアライメントパターンとして作成されたものを用いることが多い)。 Specifically, alignment is performed in multiple stages: alignment with an optical camera with a lower magnification than the electron microscope image and alignment with the electron microscope image. For example, when the alignment of the coordinate system of the wafer placed on the stage is performed with an optical camera, alignment is performed using images of a plurality of chips (for example, chips on both the left and right sides of the wafer) that are separated from each other on the wafer. First, register the same unique pattern in each chip or in the vicinity (pattern relatively in the same position in each chip) as a template (the pattern used for registration is optical alignment on the wafer) Often used as a pattern).
 次に、夫々のチップでテンプレート登録したパターンを撮像するようにステージ移動を行い、夫々のチップで画像を取得する。取得した画像に対しテンプレートマッチングを行う。その結果として得られる夫々のマッチング位置をもとに、ステージ移動のズレ量を算出し、このズレ量をステージ移動の補正値としてステージ移動の座標系とウェーハの座標系を合わせることを行う。また、次に行う電子顕微鏡でのアライメントでは、予め計測位置に近いユニークなパターンをテンプレートとして登録しておき、テンプレートから見た計測位置の相対座標を記憶しておく。そして、電子顕微鏡で撮像した画像から計測位置を求める時は、撮像した画像においてテンプレートマッチングを行い、マッチング位置を決め、そこから記憶しておいた相対座標分移動したところが計測位置となる。このようなテンプレートマッチングを利用して、所望の計測位置まで装置の視野を移動させることを行う。 Next, the stage is moved so as to capture the pattern registered in the template with each chip, and an image is acquired with each chip. Template matching is performed on the acquired image. Based on the respective matching positions obtained as a result, the shift amount of the stage movement is calculated, and the coordinate system of the stage movement and the coordinate system of the wafer are matched using this shift amount as a correction value for the stage movement. In the next alignment with an electron microscope, a unique pattern close to the measurement position is registered in advance as a template, and the relative coordinates of the measurement position viewed from the template are stored. When obtaining the measurement position from the image captured by the electron microscope, template matching is performed on the captured image, the matching position is determined, and the position moved by the stored relative coordinates is the measurement position. Using such template matching, the visual field of the apparatus is moved to a desired measurement position.
 尚、上述のステージ移動のズレ、或いはビームシフトのズレが大きな場合には、電子顕微鏡で撮像した画像内にアライメント用のパターンが写らない場合がある。その場合は、撮像位置周辺でアライメント用のパターンを再び探す(周辺探索)、或いは計測を中断してアライメントに失敗したことをアラームでユーザに伝える等の処理(計測中断)を行うことがある。この処理を行うには画像内にアライメント用のパターンが有るか否かの判定が必要となる。この判定には、例えばテンプレートマッチングでのマッチングスコア(例えば正規化相関演算での相関値)を用いる。マッチングスコアが予め設定した基準値(以降、この基準値をスコアアクセプタンスと呼ぶ)より高ければ視野内にパターンが有ると判定し、マッチングスコアがスコアアクセプタンスが低ければパターンが無いと判定する。 In addition, when the above-described stage movement deviation or beam shift deviation is large, the alignment pattern may not appear in the image captured by the electron microscope. In that case, a process of searching for an alignment pattern again around the imaging position (periphery search) or interrupting measurement and notifying the user that the alignment has failed (measurement interruption) may be performed. In order to perform this processing, it is necessary to determine whether or not there is an alignment pattern in the image. For this determination, for example, a matching score in template matching (for example, a correlation value in normalized correlation calculation) is used. If the matching score is higher than a preset reference value (hereinafter, this reference value is referred to as score acceptance), it is determined that there is a pattern in the field of view, and if the matching score is low, it is determined that there is no pattern.
 テンプレートマッチング方法は、正規化相関などによる画像ベースの方法と、画像から抽出した特徴点を比較する特徴点ベースの手法とに分けることができる。前者の画像ベースによる手法では、例えば被サーチ画像からテンプレートと同サイズの画像を切り出し、切り出した画像とテンプレートとの相関値を算出し、この相関値を被サーチ画像から切り出す画像位置毎(位置は例えば被サーチ画像全面)に算出し、相関値が大きな位置をマッチング位置とする(非特許文献1)。一方で、後者の特徴点ベースの手法は、テンプレートと被サーチ画像の夫々で複数の特徴点を抽出し、例えば両画像で類似した特徴点を見つけ出し(対応点マッチング)、その特徴点を重ねるようなテンプレートの射影を行ったときに(画像間での回転・スケール等の違いは考慮)、射影した領域が重なる数が多くなる位置をマッチング位置とする(非特許文献2)。 The template matching method can be divided into an image-based method based on normalized correlation and a feature point-based method for comparing feature points extracted from images. In the former image-based method, for example, an image having the same size as the template is extracted from the searched image, a correlation value between the extracted image and the template is calculated, and this correlation value is extracted for each image position (position is determined from the searched image). For example, a position having a large correlation value is set as a matching position (non-patent document 1). On the other hand, the latter feature point-based method extracts a plurality of feature points in each of the template and the searched image, for example, finds similar feature points in both images (corresponding point matching), and superimposes the feature points. When a simple template is projected (considering differences in rotation, scale, etc. between images), a position where the number of projected areas overlaps is set as a matching position (Non-patent Document 2).
特許文献2001-243906号公報(対応米国特許US627888)Patent document 2001-243906 (corresponding US patent US627278)
 先に述べたテンプレートマッチングにおいて、テンプレートと被サーチ画像とで画像の見た目の乖離が大きな場合にマッチングが成功しない場合がある。テンプレートと被サーチ画像とで画像の見た目の乖離が大きくなる理由として、例えば、テンプレートを登録した時の検査装置の撮像条件と被サーチ画像を撮像した時の検査装置の撮影条件との差が大きくなった場合、テンプレートを登録した時に撮影した半導体パターンの出来栄えと被サーチ画像を撮像した半導体パターンの出来栄えの違いが大きくなった場合、或いは、テンプレートを登録した時の半導体パターンの半製造工程と、被サーチ画像を撮影したときの半導体パターンの製造工程とが異なる場合などがある。また、ここに挙げた例に限らず、様々な要因でテンプレートと被サーチ画像とで画像の見た目の乖離が大きくなることがある。 In the template matching described above, the matching may not be successful if there is a large difference in the appearance of the image between the template and the searched image. The reason why the difference in the appearance of the image between the template and the searched image is large is, for example, that the difference between the imaging condition of the inspection apparatus when the template is registered and the imaging condition of the inspection apparatus when the searched image is captured is large. If the difference between the performance of the semiconductor pattern captured when the template is registered and the performance of the semiconductor pattern captured the image to be searched is increased, or the semi-manufacturing process of the semiconductor pattern when the template is registered, There are cases where the manufacturing process of the semiconductor pattern when the image to be searched is photographed is different. In addition to the examples given here, there may be a large difference in appearance between the template and the searched image due to various factors.
 画像ベースのテンプレートマッチングにおいては、テンプレートと被サーチ画像との見た目が乖離すると、マッチング正解位置での相関値が低下し、マッチングに失敗する恐れがある。テンプレートと被サーチ画像との見た目の違いを小さくするために、平滑化処理、エッジ強調などの前処理で行う方法も提案されている(非特許文献3)。 In image-based template matching, if the appearance of the template and the image to be searched for deviates, the correlation value at the matching correct position may decrease, and matching may fail. In order to reduce the difference in appearance between the template and the searched image, a method of performing preprocessing such as smoothing processing and edge enhancement has also been proposed (Non-patent Document 3).
 しかしながら、見た目が様々な画像に対して、マッチング正解位置を求めることは難しい。また画像の見た目が異なる度に、ユーザがスコアアクセプタンスを変更することが必要で、装置の稼働率を下げてしまう。スコアアクセプタンスは、理想的には、統一の値(固定値)が望ましいが、現状の手法では、それは難しい。図4は、見た目の異なる複数の画像(サンプルID:1~100)に対して、マッチング正解位置及びマッチング不正解位置での相関値(マッチングのスコア)を例示した図である。サンプルによって相関値が異なり、一つのスコアアクセプタンス(しきい値)202(或いは203)では、マッチング成否を判定は困難となっている。スコアアクセプタンス(しきい値1)202を用いると、区間205のサンプルでマッチング正解位置のものをマッチング不正解位置と誤って判定することになる。 However, it is difficult to find the correct matching position for images that look different. Also, every time the image looks different, it is necessary for the user to change the score acceptance, which reduces the operating rate of the apparatus. The score acceptance is ideally a uniform value (fixed value), but it is difficult with the current method. FIG. 4 is a diagram exemplifying correlation values (matching scores) at a matching correct answer position and a matching incorrect answer position for a plurality of images (sample IDs: 1 to 100) having different appearances. The correlation value differs depending on the sample, and with one score acceptance (threshold value) 202 (or 203), it is difficult to determine the success or failure of the matching. If the score acceptance (threshold value 1) 202 is used, a sample in the section 205 with a matching correct answer position is erroneously determined as a matching incorrect answer position.
 一方で、例えばアクセプタンス(しきい値2)203を用いた場合は、マッチング不正解位置のサンプル(205の右となりの斜線領域)をマッチング正解位置であると誤って判定してしまう。 On the other hand, for example, when the acceptance (threshold value 2) 203 is used, the sample of the matching incorrect answer position (the hatched area on the right side of 205) is erroneously determined as the matching correct answer position.
 このように、マッチング正解位置と不正解位置とでのスコアの分離性が悪い(統一のスコアで分離できない)と、マッチングの成否判定が困難となり、検査装置で必要とされるマッチング性能が得られない恐れがある。 In this way, if the score separation between the correct answer position and the incorrect answer position is poor (cannot be separated with a unified score), it will be difficult to determine the success or failure of the matching, and the matching performance required by the inspection device will be obtained. There is no fear.
 In addition, in feature point-based template matching, a method using a feature quantity such as SIFT (Scale-Invariant Feature Transform) has been proposed (see Non-Patent Document 2). Even with this technique, however, if the apparent divergence between the template and the searched image is large, the similarity of the feature vectors (feature descriptors) of the template and the searched image deteriorates, the corresponding point matching does not work well, and the matching becomes unstable.
 本発明は、テンプレートマッチングにおいて、テンプレートと被サーチ画像とで画像の見た目の乖離が大きな場合でも、正確なマッチング位置を出力することを目的とする。 An object of the present invention is to output an accurate matching position even when there is a large difference in the appearance of an image between a template and a searched image in template matching.
 上記課題を解決するために本発明では、テンプレート、或いは被サーチ画像の各々から別々に特徴量(以降、個別特徴量と呼ぶ)を抽出して比較するのみならず、テンプレートと被サーチ画像の両者から求めた相互的な情報(以降、相互的特徴量と呼ぶ)もマッチング成否の判定の情報として用いる。これを実現するための本発明の代表的なものの概要は次の通りである。 In order to solve the above problems, in the present invention, not only a feature amount (hereinafter referred to as individual feature amount) is extracted and compared separately from each of the template or the searched image, but also both the template and the searched image. The mutual information obtained from the above (hereinafter referred to as a mutual feature amount) is also used as information for determining the success or failure of matching. An outline of a representative example of the present invention for realizing this is as follows.
 An inspection apparatus for performing template matching according to the present invention includes: a feature region extraction processing unit that extracts, from a template, feature amounts determined by coordinates in the image; a feature amount extraction processing unit that extracts, from a searched image, feature amounts determined by coordinates in the image; a mutual feature amount calculation processing unit that calculates mutual feature amounts of the template and the searched image from the feature amounts extracted from the template, the feature amounts extracted from the searched image, and the relative position between the template and the searched image; and a template matching processing unit that performs matching between the template and the searched image using a plurality of the mutual feature amounts.
 The invention is also characterized in that a plurality of types of the above mutual feature amounts, and also a plurality of types of individual feature amounts, are used, and in that, in the feature amount space spanned by these feature amounts, the distance between the coordinates of the matching target and the identification surface separating matching success from failure is used as the matching score. When the distance from the boundary surface is 0, the score is set to 0 (or to the median score); if the score is positive the matching is regarded as correct, and if negative as incorrect (or, when the median is used, correct above the median score and incorrect below it). As a result, the score acceptance can always be a fixed value of 0 (or the median score).
 本明細書は本願の優先権の基礎である日本国特許出願2012-167363号の明細書および/または図面に記載される内容を包含する。 This specification includes the contents described in the specification and / or drawings of Japanese Patent Application No. 2012-167363, which is the basis of the priority of the present application.
 本発明によれば、テンプレートマッチングにおいて、テンプレートと被サーチ画像とで画像の見た目の乖離が大きな場合でも、正確なマッチング位置を出力することができる。 According to the present invention, in template matching, an accurate matching position can be output even when there is a large difference in image appearance between the template and the searched image.
FIG. 1 shows an example (SEM) of an inspection apparatus that performs template matching according to an embodiment of the present invention.
FIG. 2 is a functional block diagram showing a configuration example of a template matching processing unit according to an embodiment of the present invention.
FIG. 3 is a functional block diagram showing a configuration example of a template matching apparatus according to an embodiment of the present invention, as an example of an apparatus having a learning processing function.
FIG. 4 illustrates correlation values (matching scores) at matching correct and incorrect positions for a plurality of images with different appearances (sample IDs: 1 to 100).
FIG. 5 is a block diagram showing a configuration example of the pattern matching process of the present embodiment.
FIG. 6 is a principle diagram explaining the process of calculating the matching score in the present embodiment.
FIG. 7 is a diagram (a) explaining means for specifying the matching success/failure determination boundary surface in the present embodiment and (b) showing an outline of that processing.
FIG. 8 shows the flow of the process of calculating the determination index values in the present embodiment.
FIG. 9 is a diagram explaining the second-type and third-type feature amounts.
FIG. 10 is a diagram explaining the calculation region of the second-type feature amount.
FIG. 11 is a diagram explaining the calculation of the third-type feature amount.
FIG. 12 is a diagram explaining examples of determination index values according to the present embodiment.
FIGS. 13A to 13G are diagrams showing examples of feature amounts according to the present embodiment.
FIG. 14 is a diagram explaining means for giving the learning data in the present embodiment.
FIG. 15 is a diagram explaining a method of learning the feature amount calculation method in addition to learning the matching success/failure determination boundary surface.
FIG. 16 is a diagram explaining an example of manually specifying the matching success/failure determination boundary surface in the present embodiment.
FIG. 17 is a diagram explaining means for confirming the stability of the matching result in the present embodiment.
 以下では、本発明に係る実施の形態について図面を参照しながら詳細に説明する。なお、図中で説明番号が同じものは、特に断わりがない限り同一部材を示していることとする。 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In addition, the thing with the same description number in a figure shall show the same member unless there is particular notice.
 図1は、本発明の一実施の形態による検査装置の適用例として、半導体ウェーハ上に形成された半導体デバイスのパターン寸法計測に主に用いられている走査型電子顕微鏡(Scanning Electron Microscope:SEM)で、マスク処理を用いたテンプレートマッチングを行うときの装置の一構成例を示す図である。走査型電子顕微鏡(SEM)Aでは、電子銃1から電子線を発生させる。ステージ2上に設置された試料である例えば半導体ウェーハ3上の任意の位置において電子線が焦点を結んで照射されるように、偏向器4および対物レンズ5を制御する。電子線を照射された半導体ウェーハ3からは、2次電子が放出され、2次電子検出器6により検出される。検出された2次電子は、A/D変換機7でデジタル信号に変換され、処理・制御部14内の画像メモリ15内に格納され、CPU16で目的に応じた画像処理が行われる。本実施の形態によるテンプレートマッチング処理は、処理・制御部14、より詳細には、マッチング処理部16aで処理を行う。図13で後述する処理の設定、および、処理結果の表示は、表示装置20で行う。また、電子顕微鏡よりも低倍の光学式カメラを用いたアライメントにおいては、光学式カメラ11を用いる。半導体ウェーハ3を、本カメラ11で撮像することで得られる信号も、A/D変換器12でデジタル信号に変換され(光学式カメラからの信号がデジタル信号の場合は、A/D変換器12は不要である)、処理・制御部14内の画像メモリ15に格納され、CPU16で、目的に応じた画像処理が行われる。 FIG. 1 shows a scanning electron microscope (SEM) that is mainly used for pattern dimension measurement of a semiconductor device formed on a semiconductor wafer as an application example of an inspection apparatus according to an embodiment of the present invention. FIG. 5 is a diagram illustrating an example of a configuration of an apparatus when performing template matching using mask processing. In the scanning electron microscope (SEM) A, an electron beam is generated from the electron gun 1. The deflector 4 and the objective lens 5 are controlled so that the electron beam is focused and irradiated at an arbitrary position on the semiconductor wafer 3, for example, a sample placed on the stage 2. Secondary electrons are emitted from the semiconductor wafer 3 irradiated with the electron beam and detected by the secondary electron detector 6. The detected secondary electrons are converted into a digital signal by the A / D converter 7, stored in the image memory 15 in the processing / control unit 14, and the CPU 16 performs image processing according to the purpose. The template matching process according to the present embodiment is performed by the processing / control unit 14, more specifically, the matching processing unit 16a. The display device 20 performs processing setting and processing result display, which will be described later with reference to FIG. Further, in the alignment using the optical camera that is lower in magnification than the electron microscope, the optical camera 11 is used. A signal obtained by imaging the semiconductor wafer 3 with the camera 11 is also converted into a digital signal by the A / D converter 12 (when the signal from the optical camera is a digital signal, the A / D converter 12 Is stored in the image memory 15 in the processing / control unit 14, and the CPU 16 performs image processing according to the purpose.
 また、反射電子検出器8が備わっている場合には、半導体ウェーハ3から放出される反射電子を、反射電子検出器8により検出し、検出された反射電子はA/D変換器9あるいは10でデジタル信号に変換され、処理・制御部14内の画像メモリ15に格納され、CPU16において、目的に応じた画像処理が行われる。本実施の形態では、検査装置の例として走査型電子顕微鏡を示したが、適用する装置としては、これに限定するものではなく、画像を取得し、テンプレートマッチング処理を行う検査装置等に適用できる。 When the backscattered electron detector 8 is provided, the backscattered electrons emitted from the semiconductor wafer 3 are detected by the backscattered electron detector 8, and the detected backscattered electrons are detected by the A / D converter 9 or 10. It is converted into a digital signal, stored in the image memory 15 in the processing / control unit 14, and the CPU 16 performs image processing according to the purpose. In this embodiment, a scanning electron microscope is shown as an example of an inspection apparatus. However, the apparatus to be applied is not limited to this, and can be applied to an inspection apparatus that acquires an image and performs template matching processing. .
 図2は、本実施の形態による検査装置でのマッチング処理部の一構成例を示す機能ブロック図であり、図5に対応する処理を行う処理部の機能ブロック図である。図3は、本実施の形態による検査装置でのテンプレートマッチング処理の一流れを含む全体構成例を示す機能ブロック図であり、学習処理を行う構成を合わせて示す図である。尚、学習処理とマッチング処理とは、別の処理としても良いし、一部共通のハードウェア構成又はソフトウェア構成、それらの組み合わせであっても良い。 FIG. 2 is a functional block diagram showing a configuration example of the matching processing unit in the inspection apparatus according to the present embodiment, and is a functional block diagram of the processing unit that performs processing corresponding to FIG. FIG. 3 is a functional block diagram illustrating an example of the entire configuration including a flow of template matching processing in the inspection apparatus according to the present embodiment, and is a diagram illustrating a configuration for performing learning processing. Note that the learning process and the matching process may be separate processes, or may have a partially common hardware configuration or software configuration, or a combination thereof.
 図2に示すように、図1に示すマッチング処理部16aは、例えば2つの入力の特徴量を抽出する特徴量抽出部16a-1と、第1及び第2の特徴量を含む複数の特徴量に基づいて、それらの特徴量の関係を示す相互的特徴量を算出する相互的特徴量算出部16a-2と、相互的特徴量、及びマッチング成否判定境界面に基づいてテンプレートマッチングの判定を行い、特徴量空間で相互特徴量とマッチング成否判定境界面との距離(スコア)を求めるテンプレートマッチング判定部16a-3と、照合対象が残っているか否かを判定する照合対象判定部16a-4と、例えば、距離(スコア)が最大となるウェハ上の位置を選出するスコア位置選出部16a-5と、正解のクラスであるか不正解のクラスであるかの判定を行う所属クラス判定部16a-6と、マッチングされたウェハ上の位置(x、y)などと、マッチングスコアなどとを対応付けて記憶する記憶部16a-7と、これらの記憶された値に基づく表示等の出力を行う出力部16a-8と、を有している。さらに、後述する学習処理に関連して、学習用に取得したテンプレート画像から特徴量を抽出する特徴領域抽出処理部16a-1aと、後述する学習処理部16a-2aと、を有している。尚、マッチング処理部16aは、図2に示す全ての要素(機能部)を備えていても良く、或いは、一部のみを備えていても良い。 As illustrated in FIG. 2, the matching processing unit 16a illustrated in FIG. 1 includes a feature amount extraction unit 16a-1 that extracts, for example, feature amounts of two inputs, and a plurality of feature amounts including first and second feature amounts. Based on the mutual feature quantity calculation unit 16a-2 for calculating the mutual feature quantity indicating the relationship between the feature quantities, and the template matching is determined based on the mutual feature quantity and the matching success / failure determination boundary surface. A template matching determination unit 16a-3 for obtaining a distance (score) between the mutual feature amount and the matching success / failure determination boundary surface in the feature amount space, and a collation target determination unit 16a-4 for determining whether or not a collation target remains. For example, the score position selection unit 16a-5 that selects the position on the wafer that has the maximum distance (score), and the class that determines whether the class is a correct answer or an incorrect answer A storage unit 16a-7 for storing the matching unit 16a-6, the matched position (x, y) on the wafer, the matching score, etc., and a display based on these stored values. And an output unit 16a-8 for outputting. Further, in relation to a learning process to be described later, a feature region extraction processing unit 16a-1a that extracts a feature amount from a template image acquired for learning and a learning processing unit 16a-2a described later are provided. The matching processing unit 16a may include all the elements (functional units) shown in FIG. 2 or may include only a part.
 図3は、本実施の形態による検査装置でのマッチング処理部16aによる処理の流れを示すフローチャート図である。 FIG. 3 is a flowchart showing the flow of processing by the matching processing unit 16a in the inspection apparatus according to the present embodiment.
 図3に示すように、本実施の形態による処理は、相互的特徴に基づく判定指標値算出処理Xと、学習処理に基づくマッチング正否判定境界面(識別境界面)の指定処理Yと、処理X、処理Yとに基づくテンプレートマッチング処理による判定結果の導出処理とからなる。 As shown in FIG. 3, the processing according to the present embodiment includes determination index value calculation processing X based on mutual features, matching correctness determination boundary surface (identification boundary surface) specification processing Y based on learning processing, and processing X. And derivation processing of a determination result by template matching processing based on processing Y.
 相互的特徴に基づく判定指標値算出処理Xは、予め登録したテンプレート101と、検査装置で取得した被サーチ画像から切り出した画像102(マッチング候補位置における切り出し画像)とから、テンプレートマッチングに用いる判定指標値109を求める処理である。    
 尚、被サーチ画像内にサーチ対象のパターンが有るか否かの判定処理、及び、マッチング位置を求める手段の詳細については、図6を参照しながら後に説明する。本実施の形態では、例えば、テンプレート101と被サーチ画像102のマッチング正解位置とで、画像の見た目の乖離が大きくなった場合でもパターンマッチングを成功させるようにすることが1つの目的であり、詳細には、図3の説明の後半に述べるが、テンプレート101及び被サーチ画像102の両画像を用いて求めた相互的特徴量108を用いて、特徴量ベースのマッチングを行う判定指標値109を求める。これにより、例えば、テンプレート101のみ、或いは、被サーチ画像102のみ、から求めていた個別特徴量を用いた特徴量ベースのマッチングでは取り扱うことが難しかったテンプレート101と被サーチ画像102とでの見た目の違いに起因する悪影響を受けにくい特徴量によるマッチング(或いは、悪影響を受けにくいように特徴量を用いるマッチング)が行えるようになり、テンプレートマッチングのロバスト性を向上させることができる。
The determination index value calculation process X based on mutual features is a process for obtaining, from a template 101 registered in advance and an image 102 cut out from the searched image acquired by the inspection apparatus (the cut-out image at a matching candidate position), the determination index values 109 used for template matching.
Details of the process for determining whether there is a search target pattern in the image to be searched and the means for obtaining the matching position will be described later with reference to FIG. In the present embodiment, for example, one of the purposes is to make pattern matching successful even when the difference in the appearance of the image between the template 101 and the matching correct position of the searched image 102 increases. As described later in FIG. 3, a determination index value 109 for performing feature amount-based matching is obtained using the mutual feature amount 108 obtained using both the template 101 and the searched image 102. . As a result, for example, the appearance of the template 101 and the searched image 102 that were difficult to handle by the feature-based matching using the individual feature values obtained from only the template 101 or only the searched image 102. Matching using feature quantities that are less susceptible to adverse effects due to differences (or matching using feature quantities so as not to be adversely affected) can be performed, and the robustness of template matching can be improved.
 相互的特徴量108は、テンプレート101から特徴量抽出部16a-1による特徴量A抽出処理103により抽出した特徴量A105と、被サーチ画像からの切り出し画像(マッチング候補位置)102から特徴量抽出部16a-1による特徴量B抽出処理104により抽出した特徴量B106とを用いて、相互的特徴量算出部16a-2で相互的特徴量算出処理107により求める。相互的特徴量の算出方法は後述するが、単純な算出方法としては、例えば、特徴量A105、特徴量B106として、テンプレート101、被サーチ画像からの切り出し画像102をそのまま用い、両画像の正規化相関値を相互的特徴量の一つとして用いるようにすることができる。 The mutual feature quantity 108 includes a feature quantity extraction unit from the feature quantity A 105 extracted from the template 101 by the feature quantity A extraction process 103 by the feature quantity extraction unit 16a-1 and a cut-out image (matching candidate position) 102 from the search target image. Using the feature amount B 106 extracted by the feature amount B extraction processing 104 by 16a-1, the mutual feature amount calculation unit 16a-2 obtains the mutual feature amount calculation processing 107. The mutual feature amount calculation method will be described later. For example, as a simple calculation method, the template 101 and the clipped image 102 from the search target image are used as they are as the feature amount A105 and the feature amount B106, and both images are normalized. The correlation value can be used as one of the mutual feature amounts.
 例えば、2組の対応するデータx,y間での平均からの偏差の積の平均値である共分散ρXYを、相互的特徴量として求めても良い。 For example, a covariance ρ XY that is an average value of products of deviation from the average between two sets of corresponding data x and y may be obtained as a mutual feature amount.
 ρ_XY = Cov(x, y) / ( V(x)^(1/2) · V(y)^(1/2) )
 求めた相互的特徴量108は、テンプレートマッチング判定部16a-3(マッチングスコア算出部)で用いる判定指標値109の一部、或いは、全てとして用いる。尚、この相互的特徴量108は、1つとは限らず、種類の異なる複数の特徴量を算出し、用いることもできる。なお、被サーチ画像からの切り出し画像102から抽出した特徴量B106を、そのまま個別特徴量として、判定指標値109の一部に用いることもできる。この特徴量も、1つとは限らず、種類の異なる特徴量を複数算出し、個別特徴量として用いることもできる。マッチングスコア判定処理部は、距離が0をスコアの中央値として設定する中央値設定処理部を有し、マッチングススコアの中央値以下ならばマッチング不正解とし、スコアの中央値以上ならばマッチング正解とするようにしても良い。
The obtained mutual feature value 108 is used as a part or all of the determination index value 109 used in the template matching determination unit 16a-3 (matching score calculation unit). The mutual feature quantity 108 is not limited to one, and a plurality of different kinds of feature quantities can be calculated and used. Note that the feature quantity B106 extracted from the clipped image 102 from the searched image can be used as part of the determination index value 109 as an individual feature quantity as it is. This feature amount is not limited to one, and a plurality of different types of feature amounts can be calculated and used as individual feature amounts. The matching score determination processing unit has a median value setting processing unit that sets the distance as 0 as the median value of the score. If the distance is equal to or less than the median value of the matching score, a matching incorrect answer is obtained. You may make it.
 一方、学習処理部16a-2aでは、テンプレート101aと、被サーチ画像からの切り出し画像(マッチング正否情報)102aとして、画像102a-1とマッチング正否情報102a-2と、を用いることができる。以下の、判定指標値109aを求めるまでの処理Yは、処理Xと同様であり、同じアルゴリズムで実行することもできるし、同じハードウェアにより処理することもできる。別の構成により行うようにしても良い。 On the other hand, the learning processing unit 16a-2a can use the image 101a-1 and the matching correct / incorrect information 102a-2 as the template 101a and the cut-out image (matching correct / incorrect information) 102a from the searched image. The following process Y until the determination index value 109a is obtained is the same as the process X, and can be executed by the same algorithm or can be processed by the same hardware. You may make it carry out by another structure.
 処理Yでは、判定指標値109aから、マッチング正否判定境界面の指定処理110を行う。 In process Y, a matching correct / incorrect determination boundary surface specification process 110 is performed from the determination index value 109a.
 マッチング正否判定境界面の指定処理110においては、詳細は後述するが、判定指標値空間でマッチングの正否を分ける境界面を指定する。複数の判定指標値を用い、かつ、その判定指標値には、テンプレートと被サーチ画像の相互的な関係をもとに求めた判定指標値109aが含まれることから、例えば、画像ベースのマッチング手法において相関値のみを用いた場合ではマッチング成否を分けられないケースでも、本実施の形態による手法では、マッチングの成否を分けられるマッチング成否判定境界面を求めることができる可能性が高まる。学習処理部16a-2aにおけるマッチング正否判定境界面の指定処理110において指定されたマッチング正否判定境界面111と、相互的特徴量算出処理により求めた判定指標値109とを用いて、テンプレートマッチング判定部16a-3におけるテンプレートマッチング判定処理112により、判定指標値空間での判定指標値109のマッチング判定境界面からの距離をマッチングの判定指標として算出し、その距離を判定結果(マッチングスコア等)113とする。この距離の算出方法の例については後述する。 In the matching correct / incorrect determination boundary surface specification process 110, details will be described later, but a boundary surface that determines the correctness of matching in the determination index value space is specified. Since a plurality of determination index values are used, and the determination index values include a determination index value 109a obtained based on the mutual relationship between the template and the searched image, for example, an image-based matching method Even in the case where only the correlation value is used in the case where the success / failure of the matching cannot be divided, the technique according to the present embodiment increases the possibility of obtaining the matching success / failure determination boundary surface that can be divided into the success / failure of the matching. Using the matching correctness determination boundary surface 111 specified in the matching correctness determination boundary surface specification processing 110 in the learning processing unit 16a-2a and the determination index value 109 obtained by the mutual feature amount calculation processing, a template matching determination unit In the template matching determination process 112 in 16a-3, the distance from the matching determination boundary surface of the determination index value 109 in the determination index value space is calculated as a matching determination index, and the distance is determined as a determination result (matching score or the like) 113. To do. An example of this distance calculation method will be described later.
 以上により、マッチングスコアを算出する対象(被サーチ画像からの切り出し画像102)のマッチングスコア等のマッチングの判定指標を算出することができる。これにより、テンプレート101と被サーチ画像102との相互的な関係をも特徴量とした特徴量ベースのテンプレートマッチングにおいて、複数の前記相互的な特徴量を用いてマッチングの成否を分ける識別境界面を算出する学習結果を用いることが可能となり、被サーチ画像の見た目の変動に対して、変動の影響を受けにくいマッチング処理が可能となる。 From the above, it is possible to calculate a matching determination index such as a matching score of a target (a cut-out image 102 from the searched image) whose matching score is to be calculated. As a result, in the feature amount-based template matching in which the mutual relationship between the template 101 and the searched image 102 is also a feature amount, an identification boundary surface that divides the success or failure of matching using a plurality of the mutual feature amounts is obtained. The learning result to be calculated can be used, and the matching process that is less susceptible to the change of the appearance of the searched image can be performed.
 図5は、図3を参照して説明したテンプレートマッチング処理を利用して、サーチ処理を行う処理の流れを示すフローチャート図である。破線で囲んだ部位300が、図3で説明した処理に相当し、テンプレート101、被サーチ画像から切り出した画像102、及び、学習処理にマッチング正否判定境界面111を用いて、テンプレートマッチング判定処理112によりマッチング判定結果(マッチングスコア)113を算出する。 FIG. 5 is a flowchart showing a flow of processing for performing search processing using the template matching processing described with reference to FIG. A part 300 surrounded by a broken line corresponds to the process described with reference to FIG. 3. The template matching determination process 112 is performed using the template 101, the image 102 cut out from the search target image, and the matching correctness determination boundary surface 111 for the learning process. Based on this, a matching determination result (matching score) 113 is calculated.
 被サーチ画像301からテンプレートと照合する領域の画像の切り出し処理302により切り出した画像102を切り出し、テンプレート101と照合し、テンプレートマッチング判定処理112を経て判定結果113を出力する。照合対象の判定処理303では、判定結果113を、被サーチ画像の全ての照合対象位置で得たか否かを判定する。 The image 102 cut out by the image cutout process 302 of the area to be matched with the template is cut out from the search target image 301, checked against the template 101, and the determination result 113 is output through the template matching determination process 112. In the verification target determination process 303, it is determined whether or not the determination result 113 is obtained at all the verification target positions of the searched image.
 照合対象が未だ残っている場合は、切り出し位置変更処理304で切り出し位置を求め、テンプレートと照合する画像の切り出し処理302で画像を切り出す。照合対象の判定処理303で全ての照合対象で判定を完了した場合は、あるスコア、例えば、最大スコア位置の選出処理305を行い、マッチングスコアが最大となる位置を求める。マッチングスコアが最大となった照合位置でのマッチングスコアを用いて、所属クラス判定処理306で、マッチングスコアが最大となった照合位置が、マッチング正解とみなせる位置であるか、マッチング不正解とみなす位置であるかを判定する。 If the object to be verified still remains, the extraction position is changed by the extraction position changing process 304, and the image is extracted by the extraction process 302 of the image to be compared with the template. When the determination is completed for all the verification targets in the verification target determination process 303, a certain score, for example, a maximum score position selection process 305 is performed to obtain a position where the matching score is maximized. Using the matching score at the matching position with the highest matching score, the matching position with the highest matching score in the belonging class determination processing 306 is a position that can be regarded as a matching correct answer, or a position that is regarded as a matching incorrect answer It is determined whether it is.
 Although details will be given later, in this process, when the determination index values 109 at the comparison position with the largest matching score belong to the incorrect-match side (incorrect class) with respect to the matching correctness determination boundary surface (that is, the score falls below the score acceptance), it is judged that the pattern to be searched for was not present within the field of view of the searched image. In that case, processing such as searching for the alignment pattern around the imaging position, or interrupting the measurement and alerting the user with an alarm that alignment has failed, is performed (incorrect class).
 On the other hand, when the determination index values 109 at the comparison position with the largest matching score belong to the correct-match side (correct class) with respect to the matching correctness determination boundary surface (that is, the score is equal to or above the score acceptance), that comparison position is output as the matching position 307. The matching score can also be output together with the matching correctness. In this way, template matching can be performed using the template matching result, for example the matching score, calculated with the mutual feature amounts described in FIG. 3.
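 As an illustration only, the following Python sketch outlines the search flow of FIG. 5 under simplified assumptions: score_at stands in for the template matching determination process 112 (the distance from the learned boundary surface), and acceptance is the score threshold corresponding to that boundary; both names are hypothetical and do not appear in the embodiment.

import numpy as np

def search(searched_image, template, score_at, acceptance=0.0):
    # Sliding-window search corresponding to FIG. 5 (simplified sketch).
    th, tw = template.shape
    sh, sw = searched_image.shape
    best_score, best_pos = -np.inf, None
    for y in range(sh - th + 1):                         # cut-out position change (304)
        for x in range(sw - tw + 1):
            patch = searched_image[y:y + th, x:x + tw]   # cut-out process (302)
            score = score_at(template, patch)            # matching determination (112)
            if score > best_score:
                best_score, best_pos = score, (x, y)     # max-score selection (305)
    # class determination (306): accept only if the score lies on the correct-match side
    if best_score >= acceptance:
        return best_pos, best_score                      # matching position 307
    return None, best_score                              # pattern judged absent from the field of view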
 FIG. 6 is a diagram explaining the principle of the matching score calculation process 112 described in FIG. 3.
 In the present embodiment, FIG. 6(a) shows an example in which two determination index values, determination index value A and determination index value B, are used. The matching score uses the distance, in the determination index value space spanned by the determination index values (shown here in two dimensions), between the coordinates of the target whose score is to be calculated (the coordinates in the determination index value space are determined from the individual determination index values 109) and the matching correctness determination boundary surface 111. As described with reference to FIG. 3, the boundary surface 111 is given as the result of the matching correctness determination boundary specification process 110.
 For example, when the target whose score is to be calculated is the triangle mark 405 in the figure, the distance to the matching correctness determination boundary surface 111 is the broken line 410. The Euclidean distance, for example, can be used as this distance. The distance is not limited to the Euclidean distance; any measure by which the distance from the boundary surface 111 can be calculated may be used.
 FIG. 6(b) shows the relationship between the distance 410 from the matching correctness determination boundary surface 111 and the matching score 411. One way to define the matching score 411 is, for example, to set it to 0 when the distance from the boundary surface 111 is 0, to a positive value when the point falls in the correct-match class, and to a negative value when it falls in the incorrect-match class. The relationship between the distance and the score can then be linear, as shown by the straight line 412 in FIG. 6(b).
 Although a linear relationship is used here as an example, the relationship is not limited to being linear; the distance and the score can also be associated non-linearly. In the inspection apparatus targeted by this embodiment, the correctness of matching is sometimes judged against a score acceptance, as described earlier. This score acceptance often has to be decided by the user or the apparatus designer, and matching performance can vary depending on how it is set. If, as described above, a distance of 0 is mapped to a fixed score of 0, no acceptance setting is needed. In a conventional matching method, for example matching using a normalized correlation value, only the single correlation value corresponds to a determination index, and that value itself becomes the score. In such a case, if correct and incorrect matches cannot be separated using only one value, as with the distribution shown in FIG. 6(a), an appropriate acceptance score cannot be set. According to the present embodiment, such cases can be avoided.
 Note that the acceptance score separating correct from incorrect matches does not have to be fixed at 0; an offset value may also be provided. Also, although the present embodiment shows an example in which matching correctness is determined in two dimensions using the two determination index values A and B, the number of determination index values is not limited to two; correctness can be determined using more than two determination index values.
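 As a minimal sketch only, assuming a linear mapping and a signed distance that is positive on the correct-match side, the score of FIG. 6(b) could be computed as follows; the scale factor and offset are illustrative parameters, not values from the embodiment.

def matching_score(signed_distance, scale=1.0, offset=0.0):
    # Map the signed distance from the boundary surface 111 to a score 411.
    # signed_distance > 0 : correct-match side, score positive
    # signed_distance < 0 : incorrect-match side, score negative
    # A distance of 0 maps to the offset (0 by default), so no separate
    # acceptance threshold needs to be tuned in the default case.
    return scale * signed_distance + offset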
 FIG. 7 is a diagram explaining the specification process 110 for the matching correctness determination boundary surface 111 described in FIG. 3. The boundary surface 111 is set with the aim of separating, in the determination index value space, the cases where matching is correct (marked with circles in FIG. 7(a)) from the cases where matching is incorrect (marked with crosses in FIG. 7(a)). By doing so, the class determination process 306 described with reference to FIG. 6 can determine on which side of the boundary surface 111 a matching result lies, and hence whether the matching result is a correct matching position or an incorrect matching position. The matching correctness determination boundary surface 111 can be obtained, for example, by the technique used in an SVM (Support Vector Machine).
 This is described in detail below. An SVM is an identification method that uses supervised learning. The matching correctness determination boundary surface 111 corresponds to the separating hyperplane (also called the identification surface) of an SVM. In FIG. 7(a), the boundary surface 111 is the separating hyperplane, and the broken line 501 inside it and the broken line 502 outside it are what the SVM calls margins. Points on the margins are called support vectors (there is at least one for the correct-match cases and at least one for the incorrect-match cases).
 In an SVM, the separating hyperplane is placed at the position where the Euclidean distance to the training samples that lie closest to the other class (these are the support vectors) becomes largest. In other words, the margin from the outermost samples of one class to the other class is maximized (margin maximization).
 By using the separating hyperplane of an SVM as the matching correctness determination boundary surface 111 of this embodiment, correct-match cases and incorrect-match cases can be separated in the feature amount space even when there are multiple determination index values. That is, matching correctness can be judged using the boundary surface 111 obtained by this technique as the reference.
 FIG. 7(b) is a diagram explaining the configuration of the process for obtaining the matching correctness determination boundary surface 111. First, a combination of the plurality of determination index values 109a described in FIG. 5 and the matching correctness label 102-2 is treated as one example, and data containing a plurality of such examples (learning data) 102a is prepared. The learning data 102a is made to contain both correct-match examples and incorrect-match examples. Next, based on the learning data, the separating hyperplane of the SVM is obtained as described above, and the obtained separating hyperplane is used as the matching correctness determination boundary surface 111.
 Through the above processing, the matching correctness determination boundary surface 111 can be obtained from a plurality of determination index values using an SVM. Obtaining the identification surface (separating hyperplane) is not limited to an SVM; any technique that can produce a matching correctness determination boundary surface separating correct-match examples from incorrect-match examples may be used.
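 A rough sketch of this learning step, using scikit-learn's SVC as one possible SVM implementation (an assumption; the embodiment does not name a library): each row of X holds the determination index values 109a of one example and y holds its matching correctness label 102-2. The fitted model plays the role of the boundary surface 111, and its decision_function returns a signed value whose sign indicates on which side of the boundary a new point lies.

import numpy as np
from sklearn.svm import SVC

# learning data 102a: one row of index values per example; label 1 = correct match, 0 = incorrect match
X = np.array([[0.92, 0.88], [0.85, 0.91], [0.30, 0.25], [0.42, 0.20]])
y = np.array([1, 1, 0, 0])

svm = SVC(kernel="linear")     # linear separating hyperplane; a kernel could also be used
svm.fit(X, y)                  # corresponds to boundary specification 110 yielding surface 111

# template matching determination 112: signed (scaled) distance of new index values from the boundary
index_values = np.array([[0.75, 0.60]])
signed_distance = svm.decision_function(index_values)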
 FIG. 8 is a diagram explaining the determination index value calculation means described in FIGS. 3 and 5. As explained with reference to FIG. 3, the determination index values 109a are calculated from the template 101 and the image 102 cut out at a matching candidate position.
 First, the method for calculating the mutual feature amount is described. Using the feature amount extracted from the template 101 by feature amount extraction unit A103 (here called feature amount A105) and the feature amount extracted from the cut-out image 102 at the matching candidate position by feature amount extraction unit B104 (here called feature amount B106), the mutual feature amount calculation process 107 obtains feature amount D108; the calculation method is explained in FIG. 9. For the individual feature amount described in FIG. 3, feature amount C608 is calculated by feature amount extraction process C605 from the cut-out image 102 at the matching candidate position or from the template 101; its calculation method is also explained in FIG. 9. The obtained feature amount D108 and/or feature amount C608 are used as determination index values 109a. Multiple kinds of feature amount A105, feature amount B106, feature amount C608, and feature amount D108 may each be used, and accordingly multiple kinds of determination index values 109a are used. With this configuration, multiple kinds of mutual and individual feature amounts can be obtained from the template 101 and the cut-out image 102 at a matching candidate position and used as the determination index values 109a.
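 The following sketch shows one way this assembly could look, assuming caller-supplied callables that stand in for extraction units A103, B104, the mutual feature amount calculation 107, and the individual feature amount extraction C605; these names are hypothetical and do not appear in the embodiment.

def determination_index_values(template, patch, feature_a, feature_b, mutual, individual):
    # Assemble the determination index values 109a for one matching candidate.
    a = feature_a(template)      # feature amount A105 from the template 101
    b = feature_b(patch)         # feature amount B106 from the cut-out image 102
    d = mutual(a, b)             # feature amount D108 (mutual feature amount, process 107)
    c = individual(patch)        # feature amount C608 (individual feature amount, process C605)
    return [d, c]                # several kinds of index values may be concatenated here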
 FIG. 9 is a diagram explaining the feature amounts described in FIG. 8. The feature amounts are classified by their properties and are referred to as first-class, second-class, and third-class feature amounts. A first-class feature amount is a feature determined from the target image, or from a partial region of it, without depending on a position (coordinates) within the image; for example, the mean pixel value or pixel value variance of the whole image is a first-class feature amount. This corresponds to an individual feature amount. A second-class feature amount is a feature amount determined by a position (coordinates) within the image. For example, as shown in FIG. 9(a), it is the feature amount V_i,j calculated at coordinates (i, j) 1402 on the image (here the upper left of the image is taken as the origin of the image coordinate system). The feature amount V_i,j can be expressed as a multidimensional vector; as shown in FIG. 9(a), the feature vector V_i,j has f1 through fn as its elements (n being the number of vector elements). For example, the SIFT feature amount (Non-Patent Document 2) expresses a feature as a vector determined at each coordinate (each feature point) on the image: the region around the feature point is divided into small regions (16 regions), a histogram whose bins are the gradient directions of the pixel values (8 directions) is generated for each small region, and a vector whose elements are the bins of all these histograms (128 elements, 16 x 8) is used as the feature amount. This also corresponds to an individual feature amount. A third-class feature amount is a feature amount determined by a second-class feature amount calculated from the template image, the corresponding second-class feature amount calculated from the searched image, and the relative position between the two images (for example, the comparison position of the two images). The mutual feature amount is a third-class feature amount. The method of obtaining a third-class feature from second-class features (the mutual feature amount calculation method) is described in detail in FIGS. 11 and 13; for example, as shown in FIG. 9(b), it is a feature amount determined by the relative position 1416 between the template image and the region cut out from the searched image (broken line).
 FIG. 10 is a diagram explaining the feature amount calculation regions used when calculating a second-class feature amount determined by a position in the image, as described in FIG. 9. FIG. 10(a) is an example of obtaining the feature amount from a single coordinate in the image; the pixel value, pixel value gradient information, and the like at that coordinate are used as the feature amount, so the feature amount is determined by the coordinate in the image. FIG. 10(b) is an example of obtaining the feature amount from a rectangular region in the image; the pixel value mean, pixel value variance, the value of each bin of a pixel value histogram, or the value of each bin of a pixel value gradient direction histogram calculated by dividing the rectangular region into small regions, and so on, are used as feature amounts. This also makes it possible to use the features around the coordinate of interest, which makes the matching more robust. FIG. 10(c) is an example of obtaining the feature amount from a circular region in the image; as with the rectangular region of FIG. 10(b), the pixel value mean, pixel value variance, histogram bin values, or gradient direction histogram bin values calculated by dividing the circular region into small regions are used as feature amounts, again allowing the features around the coordinate of interest to be used and making the matching more robust. FIG. 10(d) is an example of obtaining the feature amount from an arbitrarily shaped region in the image; as with the rectangular and circular regions of FIGS. 10(b) and (c), the feature amount may be calculated from such an arbitrarily shaped region, with the same benefit of using the surrounding features to make the matching more robust.
 FIG. 11 is a diagram explaining how a third-class feature amount is obtained from second-class feature amounts, as mentioned in FIG. 9. As described above, a third-class feature amount is obtained based on the relative position between a second-class feature amount of the template image and the corresponding second-class feature amount of the searched image.
 In FIG. 11(a), a region (broken line) 1610 of the same size as the template image 1601 is cut out from the searched image 1605 (cut-out position (X, Y)) 1607, and a second-class feature amount is calculated from the cut-out region. The same second-class feature amount is also calculated from the template image 1601. The mutual relationship between the second-class feature amount calculated from the searched image 1605 and that calculated from the template image 1601 gives the mutual feature amount; for example, the distance between the vectors representing the two second-class feature amounts can be used as the mutual feature amount. The distance is not restricted to a particular measure: the Euclidean distance, the Manhattan distance, the Bhattacharyya distance, or any other measure that can quantify the relationship between the two feature amounts may be used. The mutual feature amount obtained in this way is a feature amount determined by the relative position between the template image and the searched image (in this example, the image cut-out position (X, Y) 1607 in the searched image corresponds to the relative position).
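 A minimal sketch of this variant of the mutual feature amount, assuming the two second-class feature amounts are already available as NumPy vectors; the Euclidean distance is used here, but any of the distances named above could be substituted.

import numpy as np

def mutual_feature_distance(template_feature, patch_feature):
    # Mutual (third-class) feature amount: distance between the second-class
    # feature vectors of the template and of the cut-out region 1610.
    return float(np.linalg.norm(np.asarray(template_feature) - np.asarray(patch_feature)))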
 FIG. 11(b) is a diagram explaining a method of determining the relative position between the template image and the searched image that differs from the method of FIG. 11(a). In this method, a voting-based technique is used as one way of estimating at which position on the searched image the template image is most similar, and the resulting vote count is used as the third-class feature amount. Second-class feature amounts are calculated for the template image and for the searched image (here the calculation on the searched image covers the entire image region). The point 1631 that serves as the positional reference when obtaining the second-class feature amounts in the template image is called the reference point (for example, if, as in FIG. 11(a), the upper left is taken as the origin O of the image coordinate system and the second-class feature amounts are defined relative to the origin, the origin O is the reference point). The pair of second-class feature amounts with the highest similarity between the two images is selected and stored as a pair. Based on the distance and vector direction from the calculation position (coordinates) of the second-class feature amount in the template image to the reference point, the coordinates corresponding to that distance and direction are computed from the calculation position (coordinates) of the paired second-class feature amount on the searched-image side; these are the coordinates estimated to be the position of the template image's reference point in the searched image. A vote is then cast at the computed coordinates as a matching position candidate (one vote per pair). This voting is carried out for all pairs (or for all pairs whose similarity exceeds a certain level). When a region corresponding to the template image is cut out from the searched image as a matching candidate, the number of votes at the reference point of that cut-out region (for example, its upper-left coordinate) 1641 is used as the third-class feature amount. Although the example selects the single most similar feature at each feature point, the top several pairs with high similarity at each feature point may also be used (for example, the top three).
 In this way, a third-class feature amount can be calculated from second-class feature amounts. Using the third-class feature amount, which is a mutual feature amount, makes more robust matching possible.
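 The following Python sketch illustrates the voting idea of FIG. 11(b) under simplifying assumptions (nearest-neighbour pairing only, and no handling of scale or rotation); the array names and the helper structure are hypothetical.

import numpy as np

def vote_map(tmpl_points, tmpl_descs, search_points, search_descs, search_shape, ref_point=(0, 0)):
    # Cast one vote per feature pair at the estimated position of the template
    # reference point 1631 in the searched image; the vote count at a cut-out
    # region's reference point 1641 then serves as the third-class feature amount.
    votes = np.zeros(search_shape, dtype=int)
    for p, d in zip(np.asarray(tmpl_points, dtype=float), np.asarray(tmpl_descs, dtype=float)):
        # pair this template feature with its most similar searched-image feature
        j = int(np.argmin(np.linalg.norm(np.asarray(search_descs, dtype=float) - d, axis=1)))
        # displacement from this template feature point to the reference point
        offset = np.asarray(ref_point, dtype=float) - p
        # estimated reference-point position in the searched image
        x, y = np.round(np.asarray(search_points, dtype=float)[j] + offset).astype(int)
        if 0 <= y < search_shape[0] and 0 <= x < search_shape[1]:
            votes[y, x] += 1
    return votes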
 FIG. 12 is a diagram explaining specific examples of the feature amounts described in FIG. 8. Feature amount A105, feature amount B106, and feature amount C608 (individual feature amounts) explained in FIG. 8 can be of the same kind. As shown in FIG. 12, for feature amounts A105, B106, and C608 one can use, for example, a feature amount 702 relating to the texture within a region referenced to specified coordinates in the image, or an edge feature amount 703 representing information about the structure of the pattern appearing in the image. Examples of texture-related feature amounts, described later, include a histogram feature amount 707, a contrast feature amount 708, and one using a co-occurrence matrix 709. These feature amounts correspond to the second-class feature amounts described in FIG. 9.
 Any technique and feature amount capable of extracting texture information may be used; the above are not limiting. The histogram feature 707 uses as its feature amounts the mean, variance, skewness, kurtosis, and so on obtained by analyzing a gradation value histogram of a region referenced to specified coordinates in each of the template and the searched image. The contrast feature amount 708 uses the mean gradation value of a specified region in each of the template and the searched image as the feature amount; the specified region is, for example, a region of the image where a pattern (such as a line pattern) exists, or a region where no pattern exists (the background region). Alternatively, the contrast difference between a plurality of specified regions within the field of view of each of the template and the searched image may be computed (a within-image contrast feature) and used as the feature amount. There is also feature point information 704 obtained with techniques such as SIFT (Non-Patent Document 2). Edge feature amounts 703 include feature amounts such as HOG (Histograms of Oriented Gradients).
 The feature amount D (mutual feature amount 720), on the other hand, is calculated mutually from the feature amounts obtained as feature amount A105 and feature amount B106. As explained in FIG. 13, in the case of a histogram feature, for example, the degree of agreement (for example, the difference in values) of the mean, variance, skewness, kurtosis, and so on obtained by analyzing the histograms computed from the template and from the searched image is used as the feature amount; alternatively, the correlation value between the shapes of the histogram distributions obtained from the two images may be used. In the case of contrast information, the degree of agreement (for example, the difference in values) between the contrast features obtained from the template and from the searched image is used as the feature amount. The correlation value between the template and the searched image themselves may also be used as a feature amount; the images used may be the input images as they are, or images to which preprocessing such as noise removal or edge enhancement has been applied beforehand. When the mutual feature amount is obtained from corner feature point information, the number of matched corner points obtained in the template and the searched image can be used. Alternatively, when feature points obtained with SIFT are used, the number of votes obtained in corresponding-point matching may be used as the feature amount, as described in Non-Patent Document 2. These feature amounts correspond to the third-class feature amounts described in FIG. 9.
 Some or all of the plurality of individual feature amounts 701 and mutual feature amounts described above can be used in the template matching explained with reference to FIGS. 3 and 5.
 FIG. 13 is a diagram explaining examples of the means for calculating the feature amounts described in FIG. 12. Any of the feature amounts described below can be used for feature amount A105, feature amount B106, and feature amount C608 of FIGS. 8 and 12.
 FIG. 13A is a diagram explaining the histogram feature. The histogram feature uses, as the feature, the distribution shape of the gradation value histogram within a specified region, or values obtained by analyzing that distribution. Histograms 804 and 805 are obtained from the template 801 and from the image 803 cut out from the searched image 802, respectively. The distribution shape of the histogram can be used directly as the feature amount; for example, a vector whose elements are the frequencies of the histogram bins (the divided intervals of the data range) is used as the feature. Alternatively, some or all of the mean, variance, skewness, and kurtosis calculated by analyzing the distribution shape may be used as feature amounts. A cumulative histogram may also be used as the gradation value histogram.
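 As an illustrative sketch only, assuming an 8-bit grayscale region held in a NumPy array, the histogram feature of FIG. 13A might be computed as follows, using SciPy for the skewness and kurtosis.

import numpy as np
from scipy.stats import skew, kurtosis

def histogram_feature(region, bins=32):
    # Histogram feature of FIG. 13A: the bin frequencies plus summary
    # statistics of the gradation value distribution in the region.
    values = region.ravel().astype(float)
    hist, _ = np.histogram(values, bins=bins, range=(0, 255), density=True)
    stats = [values.mean(), values.var(), skew(values), kurtosis(values)]
    return np.concatenate([hist, stats])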
 FIG. 13B is a diagram explaining the contrast feature. In the template 811 and in the image 813 cut out from the searched image 812, the mean gradation value within the specified regions 814 and 815 is used as the feature amount. The feature is not limited to the mean value; any information that can represent the gradation values within the region, such as the variance, maximum, or minimum, may be used.
 FIG. 13C is a diagram explaining a contrast feature amount different from that of FIG. 13B. In the template 821, the ratio of the mean gradation values (the contrast within the image) between the specified regions 822 and 823 is obtained, and that value is used as the feature amount. Similarly, for the image 825 cut out from the searched image 824, the ratio of the mean gradation values in the specified regions 826 and 827 (the contrast within the image) is used as the feature amount. The mean value is used here, but the feature is not limited to it; any information that can represent the gradation values within the regions, such as the variance, maximum, or minimum, may be used.
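 A minimal sketch of the within-image contrast feature of FIG. 13C, assuming the two regions are passed as NumPy arrays (for example, a pattern region and a background region).

import numpy as np

def contrast_feature(region_a, region_b):
    # Within-image contrast of FIG. 13C: ratio of the mean gradation values
    # of two specified regions (e.g. regions 822 and 823 of the template).
    return float(np.mean(region_a)) / (float(np.mean(region_b)) + 1e-12)  # small epsilon avoids division by zero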
 The feature amounts obtained by the methods illustrated in FIGS. 13A to 13C above are individual feature amounts that can be used for feature amount A105, feature amount B106, and feature amount C608 described earlier. In addition, as mentioned in FIG. 12, there are the co-occurrence matrix 709, the edge feature amount 703, the SIFT feature amount 704, the Haar-like feature amount 705, the HLAC feature amount 706, and so on. The choice is not limited to these; any feature amount that yields a value or vector representing the features of the template and of the image cut out from the searched image may be used.
 The mutual feature amount, that is, the feature amount D720 described in FIGS. 8 and 12, can be obtained by comparing the individual feature amounts obtained from the template image and from the image cut out from the searched image.
 For example, with the histogram feature amounts obtained in FIG. 13A, the correlation value between the histogram distribution shapes obtained from the template image and from the image cut out from the searched image is used as the mutual feature amount. Alternatively, the difference or ratio of values obtained by analyzing the histograms, such as the mean (or the variance, skewness, kurtosis, and so on), can be used as the mutual feature amount. For the contrast feature amounts obtained in FIGS. 13B and 13C as well, the difference or ratio between the values obtained from the template image and from the searched image can be used as the mutual feature amount. The following can also be used as mutual feature amounts.
 FIG. 13D is a diagram explaining the line profile feature. In each of the template 831 and the image 833 cut out from the searched image 832, the pixels are averaged (projected) along a fixed direction of the image to obtain a one-dimensional waveform, which is called a line profile. FIG. 13D shows an example in which each image is projected in the Y direction; the correlation value between the line profiles 834 and 835 of the two images is computed, and that correlation value can be used as the mutual feature amount. The range over which the line profiles are correlated is not limited to the whole profile; the correlation value of only a partial, cut-out section of the line profile may be used.
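 A short sketch of the line profile feature of FIG. 13D, assuming 2-D NumPy arrays and projection along one axis; the Pearson correlation coefficient is used here as one possible correlation value.

import numpy as np

def line_profile_correlation(template, patch, axis=0):
    # Project both images along one axis (axis=0 averages over rows, i.e. the
    # Y direction of FIG. 13D) and return the correlation of the two line profiles.
    profile_t = template.mean(axis=axis)
    profile_p = patch.mean(axis=axis)
    return float(np.corrcoef(profile_t, profile_p)[0, 1])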
 FIG. 13E is a diagram showing an example in which the correlation value of the images themselves is used as the mutual feature amount. The correlation value between the template 841 and the image 843 cut out from the searched image 842 is calculated, and that correlation value is used as the feature amount.
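 A minimal sketch of the image-to-image correlation of FIG. 13E, using the zero-mean normalized cross-correlation as one common choice of correlation value (an assumption; the embodiment does not fix the formula).

import numpy as np

def image_correlation(template, patch):
    # Zero-mean normalized cross-correlation between two equally sized images.
    t = template.astype(float) - template.mean()
    p = patch.astype(float) - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom) if denom > 0 else 0.0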
 FIG. 13F is an example in which the result of SIFT corresponding-point matching is used as the mutual feature amount. When corresponding-point matching is performed between the feature points (feature descriptors) extracted from the template 851 and from the searched image 852 (for example, pairs connected by arrows are corresponding points 853), the coordinates, scale, and rotation amount of the corresponding feature points in the searched image 852 are obtained (Non-Patent Document 2). Reference point coordinates are defined in the template 851 (for example, the position of the white circle in the template 851), and votes are cast for the position of the reference point in the searched image 852, as in a generalized Hough transform, using the coordinate, scale, and rotation information (voting process; Non-Patent Document 2). The feature amount can be the number of votes by which the template is projected onto the position (or its vicinity) of the image cut out from the searched image. Instead of the raw vote count, the vote density (number of votes divided by the area of the surrounding region) may also be used, taking the surrounding region into account. The correlation value between the SIFT feature amounts of the corresponding points in the template and in the searched image, obtained by corresponding-point matching, may also be used as the feature amount.
 FIG. 13G is an example in which, instead of the SIFT of FIG. 13F, the result of corresponding-point matching of corners is used as the mutual feature amount. When corresponding-point matching is performed using the corners extracted from the template 861 and from the image 863 cut out from the searched image 862 as feature points (for example, pairs connected by arrows are corresponding points), the coordinates, scale, and rotation amount of the corresponding feature points in the searched image 862 are obtained.
 Mutual feature amounts can be obtained by methods such as those above. The method of calculating the mutual feature amount is not limited to the methods described here; any feature amount (a scalar value, a vector, or the like) that expresses the mutual relationship between the template and the searched image may be used.
 It is also possible to apply preprocessing to the template and to the searched image to generate images with reduced noise or with enhanced features, and to compute the above feature amounts (individual feature amounts and mutual feature amounts) from the generated images. Examples of preprocessing include smoothing filtering, edge enhancement filtering, and binarization; the preprocessing is not limited to these, and any filter processing usable as preprocessing may be employed. Processing that combines several preprocessing steps may also be applied to the template or to the searched image, and the above feature amounts (individual and mutual) may be computed from the resulting images.
 By using a plurality of these mutual feature amounts and individual feature amounts as determination index values, it becomes possible to judge matching correctness successfully even in cases where the judgment would fail with individual feature amounts alone, as in conventional approaches. This advantage arises because information about the mutual relationship between the template and the searched image is also used as determination index values, so that the matching correctness determination boundary surface can be found in the determination index value space in a way that also absorbs changes in that mutual relationship.
 FIG. 14 is a diagram explaining in detail the learning data used when calculating the matching correctness determination boundary surface 111 described in FIG. 7. FIG. 14(a) is an example of simple learning data: one template 1001, an image 1002 cut out at the correct matching position in the searched image, and an image 1003 cut out at an incorrect matching position are used as the learning data. If several images (correct positions and incorrect positions) can be cut out from a single searched image, several may be used; and when several images are used for learning (with a common template), one or more images may be cut out from each of them. By increasing the number of cut-out images, and thereby the number of samples in which the template and the searched image look different, the likelihood increases of improving the generalization performance with which the matching correctness determination boundary surface 111 in the determination index value space of FIG. 7 separates correct from incorrect matching positions.
 FIG. 14(b) is a diagram showing an example in which multiple kinds of templates 1011 are used as the learning data. Using a plurality of templates 1004 in this way makes it possible to obtain a matching correctness determination boundary surface that depends less on the pattern or appearance of a specific template, in other words one with higher generality.
 FIG. 15 is a diagram explaining a method in which not only the matching correctness boundary surface of FIG. 3 is learned, but the feature amount extraction method used by the feature amount extraction units is also determined by learning. In this example, the feature amount extraction method is learned with a genetic algorithm (hereinafter, GA) or with genetic programming (hereinafter, GP). Feature amount extraction is composed of a combination of several image processing operations, and each operation may have several setting parameters; the combination of operations and the setting parameters of each operation are learned using the GA or GP. A combination of image processing operations (including parameter settings) for calculating determination index values is treated as a chromosome (a solution candidate). FIG. 15(a) shows the processing flow when a GA or GP is used for learning. First, a plurality of feature amount extraction processes (the initial chromosome group) is generated (1701). Next, the population is evaluated (1702) for the learning termination judgment described below. If the evaluation shows that learning is finished (1703), the feature amount extraction method and the matching correctness determination boundary surface are determined. If learning is not judged to be finished, chromosome selection 1704, crossover 1705, and mutation 1706 are performed, and the resulting chromosome group is evaluated again (1702). This is repeated until learning finishes (generation update). FIG. 15(b) shows the evaluation part of FIG. 15(a) in detail. As described above, a chromosome 1721 is a combination of processes for calculating determination index values 1728 (several determination index values are calculated from one chromosome), and a plurality of chromosomes (solution candidates) 1721 are generated (for example, 100 individuals). Based on the determination index values obtained with each chromosome (feature amount extraction process), the matching correctness determination boundary surface calculation process 1723 is performed; here, the boundary surface is calculated with the SVM described earlier.
 At this time, as the evaluation value 1724, the distance (score) from the matching boundary surface to the support vectors in the SVM, for example, can be used. The learning termination judgment 1725 is made according to whether this evaluation value satisfies a specified value (for example, whether the distance is larger than the specified value). The chromosome at the time learning finishes becomes the feature amount extraction method (the combination of image processing operations and the parameter settings of each operation), and the matching correctness determination boundary surface is determined at the same time. If learning does not finish, it is continued, and the selection 1704, crossover 1705, and mutation 1706 shown in FIG. 15(a) are performed again (generation change). Although the distance is used here as the evaluation value, the evaluation value is not limited to the distance; any evaluation value that can judge the quality of the matching correctness determination boundary surface may be used.
 As described above, by using a GA (or GP) in combination with an SVM, not only the matching correctness determination boundary surface but also the feature amount extraction processing method can be learned. The techniques used for learning are not limited to a GA (or GP) and an SVM; any methods capable of this kind of learning may be used.
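 The following sketch outlines only the evaluation step of FIG. 15(b) under stated assumptions: each chromosome is represented as a list of callables that map a (template, cut-out image) pair to a determination index value, scikit-learn's SVC stands in for the SVM, and the margin width derived from the fitted hyperplane is used as the fitness; the selection, crossover, and mutation steps of the GA/GP are omitted.

import numpy as np
from sklearn.svm import SVC

def evaluate_chromosome(chromosome, image_pairs, labels):
    # Fitness of one chromosome (a list of feature-extraction callables).
    # image_pairs : list of (template, cut_out_image) learning examples
    # labels      : 1 for a correct matching position, 0 for an incorrect one
    # Returns the SVM margin width, used here as the evaluation value 1724.
    X = np.array([[f(tmpl, patch) for f in chromosome] for tmpl, patch in image_pairs])
    svm = SVC(kernel="linear").fit(X, labels)   # boundary surface calculation 1723
    w = svm.coef_[0]
    return 2.0 / np.linalg.norm(w)              # margin width (twice the distance to the support vectors)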
 FIG. 16 is a diagram showing an example of a GUI for realizing manual setting of the matching correctness determination boundary surface 111. As explained earlier, the boundary surface 111 can be obtained with a technique such as an SVM; the example of FIG. 16 describes a means by which the user specifies the boundary surface 111 manually. FIG. 16(a) is a diagram showing an example of user setting through the GUI, in which the determination index value space is displayed on a display device 20 such as an LCD screen. FIG. 16(a) takes as an example a graph whose vertical and horizontal axes are the two determination index values A 1103 and B 1104; when three or more determination index values are used, the display can be switched by changing which index values are assigned to the axes. In this GUI, the boundary drawing button 1102 is selected with a mouse or the like, which starts accepting user input of the matching determination boundary surface 1101. The user then manually draws the determination boundary surface 1101 in the determination index value space on the GUI, for example by mouse input. The matching determination boundary surface drawn by the user can be used, by user designation, as the matching correctness determination boundary surface 111 described in FIG. 3.
 The GUI is not limited to the format shown in FIG. 16; any known technique can be used as long as it allows the user to specify the matching correctness determination boundary surface 111 manually.
 FIGS. 16(b) and (c) are diagrams showing examples of the determination index value space displayed on the GUI. FIG. 16(b) is an example in which the matching correctness determination boundary surface 111 is drawn as a straight line, and FIG. 16(c) is an example in which it is drawn as a curve. The plots displayed on the graph with the circle and cross symbols are the determination index values of the learning data; each symbol is obtained from one pair consisting of a template and an image cut out from the searched image. Here, a circle is a value for a correct match and a cross is a value for an incorrect match. The distribution of these circles and crosses can be displayed as reference data when the user manually specifies the boundary in the space of the determination index values 1103 and 1104. The symbols are not limited to circles and crosses; any symbols that can distinguish correct from incorrect matches may be used. In this way, a means can be provided for manually entering the matching correctness determination boundary even when there are a plurality of matching correctness determination index values.
 FIG. 17 is a diagram explaining an example of a GUI for confirming the stability of a matching result in the determination index value space spanned by the determination index values. FIG. 17(a) is an example of a two-dimensional determination index value space. In the space spanned by determination index value A 1203 and determination index value B 1204, the GUI displays the matching correctness determination boundary surface 1201 and the position 1202 of the matching result, computed from the determination index values obtained from the template and from the image cut out from the searched image at the comparison position (matching result) found by matching. With this GUI, it is possible to confirm graphically how far the matching result is from the boundary surface 1201. If the matching result is close to the boundary surface 1201, it can be seen that a small difference in the determination index values could change the correct/incorrect decision, so the matching may be unstable.
 Conversely, if the matching result is far from the matching correctness determination boundary surface 1201, it can be confirmed that the matching is likely to be stable.
 FIG. 17(b), like FIG. 17(a), is a diagram showing an example of a GUI for confirming the stability of the matching result, but here the determination index value space spanned by three determination index values can be confirmed graphically in a three-dimensional graph. The matching correctness determination boundary surface 1205 and the matching result position 1202 can be displayed in the same way as in FIG. 17(a).
 The matching correctness determination boundary surface 1205 can be rendered transparently so that data lying inside the boundary surface can also be checked. In addition, with the viewpoint movement buttons 1206 and 1207 and the like, the learning data in the determination index value space, the boundary surface, and the position of the matching result can be confirmed from any viewpoint, such as the front side or the back side.
 FIG. 17(c), like FIGS. 17(a) and (b), is a GUI for confirming the stability of the matching result; it shows a display 1211 in which all or some of the projections 1214 of the matching determination boundary surface onto determination index value spaces spanned by any two of the multiple determination index values are arranged side by side in correspondence. FIG. 17(c) is an example with four determination index values A, B, C, and D (1212, 1213); for example, the projection 1214 of the matching determination boundary surface onto the determination index value space obtained from A and B can be viewed for any such pairing. In this way, a GUI for confirming the stability of the matching result can be provided. The arrangement, size, and items of the display members in this GUI are not limited to those shown in the figure; any display method that allows the relationship between the learning data, the matching correctness determination boundary surface, and the position of the matching result in the determination index value space to be confirmed may be used.
 In the embodiments described above, the configurations and the like illustrated in the accompanying drawings are not limited to those shown, and may be modified as appropriate within the range in which the effects of the present invention are obtained. Other modifications may also be made as appropriate without departing from the scope of the object of the present invention.
 Each component of the present invention may be freely selected or omitted, and an invention provided with the selected configuration is also included in the present invention.
 The present invention is applicable to pattern matching devices.
A: scanning electron microscope; 14: processing control unit; 15: image memory; 16a: matching processing unit; 16a-1: feature amount extraction unit; 16a-2: mutual feature amount calculation unit; 16a-3: template matching determination unit; 16a-4: collation target determination unit; 16a-5: score position selection unit; 16a-6: belonging class determination unit; 16a-7: storage unit; 16a-8: output unit; 20: display device.
 All publications, patents, and patent applications cited in this specification are incorporated herein by reference in their entirety.

Claims (10)

  1.  A matching processing device that performs pattern matching on a searched image, the device comprising:
     a feature region extraction processing unit that extracts, from a template image acquired for learning, a region from which a feature amount determined by coordinates in the image is to be extracted;
     a feature amount extraction processing unit that extracts, from a searched image acquired for learning, a feature amount determined by coordinates in the image;
     a first mutual feature amount calculation processing unit that calculates a first mutual feature amount between the template image and the searched image from the feature amount extracted from the template image, the feature amount extracted from the searched image, and the relative position between the template image and the searched image;
     an identification boundary surface calculation unit that calculates an identification boundary surface separating matching success from failure, using a plurality of the first mutual feature amounts calculated from feature amounts that have different values at the same coordinates in the image and are extracted by a plurality of the feature amount extraction processing units differing in feature amount extraction operation;
     a second mutual feature amount calculation processing unit that calculates a second mutual feature amount from a template image acquired from an inspection target and a searched image; and
     a template matching processing unit that performs matching between the template image of the inspection target and the searched image using a plurality of the second mutual feature amounts, calculated from feature amounts having different values at the same coordinates in the image, and the identification boundary surface.
  2.  The matching processing device according to claim 1, wherein the template matching processing unit comprises a matching score calculation processing unit that calculates, as a matching score, the distance from the identification boundary surface in a feature amount space composed of a plurality of feature amounts.
  3.  The matching processing device according to claim 2, wherein the matching score calculation processing unit has a median setting processing unit that sets a distance of 0 as the median of the score, and a match is judged incorrect if the matching score is equal to or less than the median of the score and judged correct if it is equal to or greater than the median of the score.
  4.  The matching processing device according to claim 1, wherein the second mutual feature amount calculation unit uses, as a feature amount, at least one of a normalized correlation value between the template and the searched image, a degree of coincidence of the coordinates of similar feature points between a feature point group extracted from the template and feature points extracted from the searched image, and a degree of coincidence between a gradation value histogram obtained from the template and a gradation value histogram obtained from the searched image.
  5.  The matching processing device according to claim 1, further comprising a matching success/failure determination boundary surface designation processing unit that obtains the matching determination boundary surface by taking as inputs at least one of the first mutual feature amount, a feature amount extracted from the template, and a feature amount extracted from the searched image, together with matching success/failure results, acquired in advance, between the template and the searched image.
  6.  The matching processing device according to claim 5, wherein the feature amounts of the matching target and the identification boundary surface are displayed on a GUI, and processing is performed to obtain the identification boundary surface that enables the correctness of the matching processing to be determined, in accordance with the feature amounts of the matching target, so as to maximize the margin.
  7.  An inspection device that performs pattern matching using the matching processing device according to claim 1.
  8.  A matching processing method for performing pattern matching on a searched image, the method comprising:
     a feature region extraction step of extracting, from a template image acquired for learning, a region from which a feature amount is to be extracted;
     a feature amount extraction step of extracting a feature amount from a searched image acquired for learning;
     a first mutual feature amount calculation step of calculating a first mutual feature amount between the template image and the searched image from the feature amount extracted from the template image and the feature amount extracted from the searched image;
     an identification boundary surface calculation step of calculating an identification boundary surface separating matching success from failure using a plurality of the first mutual feature amounts;
     a second mutual feature amount calculation step of calculating a second mutual feature amount from a template image acquired from an inspection target and a searched image; and
     a template matching step of performing matching between the template image of the inspection target and the searched image using the second mutual feature amount and the identification boundary surface.
  9.  A program for causing a computer to execute the matching processing method according to claim 8.
  10.  A matching processing device that performs pattern matching on a searched image, the device comprising:
     a feature region extraction processing unit that extracts, from a template image, a region from which a feature amount determined by coordinates in the image is to be extracted;
     a feature amount extraction processing unit that extracts, from a searched image, a feature amount determined by coordinates in the image;
     a mutual feature amount calculation processing unit that calculates a mutual feature amount between the template image and the searched image from the feature amount extracted from the template image, the feature amount extracted from the searched image, and the relative position between the template image and the searched image; and
     a template matching processing unit that performs matching between a template image of an inspection target and a searched image using a plurality of the mutual feature amounts, calculated from feature amounts that have different values at the same coordinates in the image and are extracted by a plurality of the feature amount extraction processing units differing in feature amount extraction operation, and a preset identification boundary surface separating matching success from failure.
PCT/JP2013/069296 2012-07-27 2013-07-16 Matching process device, matching process method, and inspection device employing same WO2014017337A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/417,425 US9619727B2 (en) 2012-07-27 2013-07-16 Matching process device, matching process method, and inspection device employing same
KR1020157002223A KR101701069B1 (en) 2012-07-27 2013-07-16 Matching process device, matching process method, and inspection device employing same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-167363 2012-07-27
JP2012167363A JP5941782B2 (en) 2012-07-27 2012-07-27 Matching processing apparatus, matching processing method, and inspection apparatus using the same

Publications (2)

Publication Number Publication Date
WO2014017337A1 true WO2014017337A1 (en) 2014-01-30
WO2014017337A8 WO2014017337A8 (en) 2014-05-22

Family

ID=49997151

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/069296 WO2014017337A1 (en) 2012-07-27 2013-07-16 Matching process device, matching process method, and inspection device employing same

Country Status (5)

Country Link
US (1) US9619727B2 (en)
JP (1) JP5941782B2 (en)
KR (1) KR101701069B1 (en)
TW (1) TWI579775B (en)
WO (1) WO2014017337A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3021280A1 (en) * 2014-11-07 2016-05-18 JEOL Ltd. Image evaluation method and charged particle beam device
CN109724988A (en) * 2019-02-01 2019-05-07 佛山市南海区广工大数控装备协同创新研究院 A kind of pcb board defect positioning method based on multi-template matching
CN110969661A (en) * 2018-09-30 2020-04-07 上海微电子装备(集团)股份有限公司 Image processing device and method, position calibration system and method

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6063315B2 (en) * 2013-03-26 2017-01-18 富士フイルム株式会社 Authenticity determination system, feature point registration apparatus and operation control method thereof, and collation determination apparatus and operation control method thereof
JP6454533B2 (en) * 2014-12-15 2019-01-16 株式会社日立ハイテクノロジーズ Charged particle beam equipment
US11669953B2 (en) 2015-01-30 2023-06-06 Hitachi High-Tech Corporation Pattern matching device and computer program for pattern matching
JP6511986B2 (en) 2015-06-26 2019-05-15 富士通株式会社 PROGRAM GENERATION DEVICE, PROGRAM GENERATION METHOD, AND GENERATION PROGRAM
US10373379B2 (en) 2015-08-20 2019-08-06 Disney Enterprises, Inc. Deformable-surface tracking based augmented reality image generation
TWI578240B (en) * 2015-12-01 2017-04-11 財團法人工業技術研究院 Method for feature description and feature descriptor using the same
JP2017211765A (en) * 2016-05-24 2017-11-30 アイシン精機株式会社 Object recognition device
US10417737B2 (en) * 2017-06-21 2019-09-17 International Business Machines Corporation Machine learning model for automatic image registration quality assessment and correction
US10424045B2 (en) * 2017-06-21 2019-09-24 International Business Machines Corporation Machine learning model for automatic image registration quality assessment and correction
KR102041310B1 (en) * 2017-08-02 2019-11-07 세메스 주식회사 Apparatus for treating a substrate and method for determining the state the pose of a substrate
CN111417860B (en) * 2017-11-27 2023-04-07 浜松光子学株式会社 Analysis method, analysis device, analysis program, and storage medium storing analysis program
JP6898211B2 (en) 2017-11-27 2021-07-07 浜松ホトニクス株式会社 A recording medium for recording an optical measurement method, an optical measurement device, an optical measurement program, and an optical measurement program.
WO2019168310A1 (en) * 2018-02-28 2019-09-06 서울대학교산학협력단 Device for spatial normalization of medical image using deep learning and method therefor
KR102219890B1 (en) * 2018-02-28 2021-02-24 서울대학교산학협력단 Apparatus for spatial normalization of medical image using deep learning and method thereof
JP6844564B2 (en) * 2018-03-14 2021-03-17 オムロン株式会社 Inspection system, identification system, and learning data generator
TWI722562B (en) * 2018-09-24 2021-03-21 荷蘭商Asml荷蘭公司 Method for determining candidate patterns from set of patterns of a patterning process
JP7395566B2 (en) * 2019-04-02 2023-12-11 株式会社半導体エネルギー研究所 Inspection method
JP7298333B2 (en) * 2019-06-25 2023-06-27 オムロン株式会社 Visual inspection management system, visual inspection management device, visual inspection management method and program
US11244440B2 (en) * 2019-08-30 2022-02-08 Intel Corporation Ranking of objects with noisy measurements
CN110704559B (en) * 2019-09-09 2021-04-16 武汉大学 Multi-scale vector surface data matching method
JP7449111B2 (en) * 2020-02-18 2024-03-13 キヤノン株式会社 Inspection equipment, inspection method
CN111695621B (en) * 2020-06-09 2023-05-05 杭州印鸽科技有限公司 Method for detecting matching of customized article and order based on deep learning
CN114202578A (en) * 2020-09-18 2022-03-18 长鑫存储技术有限公司 Wafer alignment method and device
JP7518741B2 (en) 2020-11-27 2024-07-18 株式会社フジクラ Inspection device, inspection method, and inspection program
KR102510581B1 (en) * 2022-09-06 2023-03-16 주식회사 포스로직 Method for matching shape array and apparatus for using the method
CN118314336B (en) * 2024-06-11 2024-08-09 四川迪晟新达类脑智能技术有限公司 Heterogeneous image target positioning method based on gradient direction

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05101186A (en) * 1991-10-08 1993-04-23 Sumitomo Cement Co Ltd Optical pattern identifying method
JPH09138785A (en) * 1995-11-14 1997-05-27 Mitsui Eng & Shipbuild Co Ltd Pattern matching method and device
JP2001014465A (en) * 1999-06-29 2001-01-19 Matsushita Electric Ind Co Ltd Method and device for recognizing object
JP2003076976A (en) * 2001-08-31 2003-03-14 Mitsui Eng & Shipbuild Co Ltd Pattern matching method
JP2006292615A (en) * 2005-04-13 2006-10-26 Sharp Corp Visual examination apparatus, visual inspection method, program for making computer function as visual inspection apparatus, and recording medium
JP2006293528A (en) * 2005-04-07 2006-10-26 Sharp Corp Method and apparatus for selecting learning image, and method, apparatus, program and recording medium for creating image processing algorithm

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647139B1 (en) 1999-02-18 2003-11-11 Matsushita Electric Industrial Co., Ltd. Method of object recognition, apparatus of the same and recording medium therefor
EP1049030A1 (en) * 1999-04-28 2000-11-02 SER Systeme AG Produkte und Anwendungen der Datenverarbeitung Classification method and apparatus
US6868175B1 (en) * 1999-08-26 2005-03-15 Nanogeometry Research Pattern inspection apparatus, pattern inspection method, and recording medium
JP4218171B2 (en) * 2000-02-29 2009-02-04 株式会社日立製作所 Scanning electron microscope, matching method, and computer-readable recording medium recording program
JP4199939B2 (en) * 2001-04-27 2008-12-24 株式会社日立製作所 Semiconductor inspection system
JP4901254B2 (en) * 2006-03-22 2012-03-21 株式会社日立ハイテクノロジーズ Pattern matching method and computer program for performing pattern matching
US7525673B2 (en) * 2006-07-10 2009-04-28 Tokyo Electron Limited Optimizing selected variables of an optical metrology system
JP4814116B2 (en) * 2007-01-29 2011-11-16 三菱重工業株式会社 Mounting board appearance inspection method
JP5460023B2 (en) * 2008-10-16 2014-04-02 株式会社トプコン Wafer pattern inspection method and apparatus
US8194938B2 (en) 2009-06-02 2012-06-05 George Mason Intellectual Properties, Inc. Face authentication using recognition-by-parts, boosting, and transduction
JP5671928B2 (en) 2010-10-12 2015-02-18 ソニー株式会社 Learning device, learning method, identification device, identification method, and program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05101186A (en) * 1991-10-08 1993-04-23 Sumitomo Cement Co Ltd Optical pattern identifying method
JPH09138785A (en) * 1995-11-14 1997-05-27 Mitsui Eng & Shipbuild Co Ltd Pattern matching method and device
JP2001014465A (en) * 1999-06-29 2001-01-19 Matsushita Electric Ind Co Ltd Method and device for recognizing object
JP2003076976A (en) * 2001-08-31 2003-03-14 Mitsui Eng & Shipbuild Co Ltd Pattern matching method
JP2006293528A (en) * 2005-04-07 2006-10-26 Sharp Corp Method and apparatus for selecting learning image, and method, apparatus, program and recording medium for creating image processing algorithm
JP2006292615A (en) * 2005-04-13 2006-10-26 Sharp Corp Visual examination apparatus, visual inspection method, program for making computer function as visual inspection apparatus, and recording medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3021280A1 (en) * 2014-11-07 2016-05-18 JEOL Ltd. Image evaluation method and charged particle beam device
US9396905B2 (en) 2014-11-07 2016-07-19 Jeol Ltd. Image evaluation method and charged particle beam device
CN110969661A (en) * 2018-09-30 2020-04-07 上海微电子装备(集团)股份有限公司 Image processing device and method, position calibration system and method
CN110969661B (en) * 2018-09-30 2023-11-17 上海微电子装备(集团)股份有限公司 Image processing device and method, and position calibration system and method
CN109724988A (en) * 2019-02-01 2019-05-07 佛山市南海区广工大数控装备协同创新研究院 A kind of pcb board defect positioning method based on multi-template matching
CN109724988B (en) * 2019-02-01 2021-05-18 佛山市南海区广工大数控装备协同创新研究院 PCB defect positioning method based on multi-template matching

Also Published As

Publication number Publication date
WO2014017337A8 (en) 2014-05-22
KR20150036230A (en) 2015-04-07
US20150199583A1 (en) 2015-07-16
JP5941782B2 (en) 2016-06-29
TWI579775B (en) 2017-04-21
TW201415379A (en) 2014-04-16
US9619727B2 (en) 2017-04-11
JP2014026521A (en) 2014-02-06
KR101701069B1 (en) 2017-02-13

Similar Documents

Publication Publication Date Title
JP5941782B2 (en) Matching processing apparatus, matching processing method, and inspection apparatus using the same
US7889909B2 (en) Pattern matching method and pattern matching program
US10318805B2 (en) Pattern matching method and apparatus
US9141879B2 (en) Pattern matching method, image processing device, and computer program
US20140016854A1 (en) Pattern matching device and computer program
JP6872670B2 (en) Dimension measuring device, dimensional measuring program and semiconductor manufacturing system
WO2018074110A1 (en) Fingerprint processing device, fingerprint processing method, program, and fingerprint processing circuit
KR102435492B1 (en) Image processing system and computer program for carrying out image process
CN111783770B (en) Image correction method, device and computer readable storage medium
JP2006048322A (en) Object image detecting device, face image detection program, and face image detection method
JP5651428B2 (en) Pattern measuring method, pattern measuring apparatus, and program using the same
JP6713185B2 (en) Inspection apparatus and inspection method using template matching
CN113168687A (en) Image evaluation apparatus and method
CN110288040B (en) Image similarity judging method and device based on topology verification
CN111898408B (en) Quick face recognition method and device
JP7138137B2 (en) INSPECTION DEVICE AND INSPECTION METHOD USING TEMPLATE MATCHING
US20230114432A1 (en) Dimension measurement apparatus, semiconductor manufacturing apparatus, and semiconductor device manufacturing system
CN115546219B (en) Detection plate type generation method, plate card defect detection method, device and product
JP5592414B2 (en) Template evaluation apparatus, microscope apparatus, and program
JP5010627B2 (en) Character recognition device and character recognition method
JP4775957B2 (en) Face detection device
US20150178934A1 (en) Information processing device, information processing method, and program
JP4525526B2 (en) Pattern matching method and apparatus
US20230194253A1 (en) Pattern Inspection/Measurement Device, and Pattern Inspection/Measurement Program
JP2013053986A (en) Pattern inspection method and device thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13823104

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14417425

Country of ref document: US

ENP Entry into the national phase

Ref document number: 20157002223

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13823104

Country of ref document: EP

Kind code of ref document: A1