WO2012001862A1 - Method for Creating a Pattern Matching Template, and Image Processing Apparatus - Google Patents
- Publication number: WO2012001862A1 (PCT/JP2011/002660)
- Authority: WO (WIPO, PCT)
- Prior art keywords: template, pattern, information, area, design data
Classifications
- G06T7/001: Industrial image inspection using an image reference approach
- G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06T2207/10056: Microscopic image
- G06T2207/10061: Microscopic image from scanning electron microscope
- G06T2207/30148: Semiconductor; IC; Wafer
Definitions
- the present invention relates to a template creation method, an image processing apparatus, and a program used for detecting a specific position, and more particularly to a method for creating a template based on design data of a semiconductor device or the like.
- in an OM (optical microscope) image, contrast information can be used as effective information in image recognition.
- when a template is created from design data, however, contrast information cannot be used, because the design data does not carry the contrast information represented by an OM image and holds only binary pattern presence/absence information. As a result, image recognition may not succeed.
- pattern matching can be performed by selectively excluding lower layer pattern information that can be noise in the matching process.
- the edge obtained from a multi-valued OM image with contrast information has luminance intensity, whereas the edge of a binary template image created from design data has no intensity; there is therefore a possibility that the degree of coincidence between the two decreases.
- the semiconductor manufacturing process includes various steps such as exposure, development, etching, photoresist removal, and planarization, and the appearance of the OM image differs in each step. For this reason, as described in Patent Document 2, even if multi-valued image information is generated from design data using contrast information, its appearance may differ from the OM image depending on the process, and image recognition may not succeed.
- in the following, a template creation method and an image processing apparatus are described whose purpose is to perform pattern matching based on a template image having high contrast. A template creation method whose purpose is to perform pattern matching based on the pattern step state according to the process, and an image processing apparatus using such template matching, are also described.
- a template creation method is proposed that partially extracts design data and creates a template for template matching based on the extracted partial area, together with a method and apparatus for obtaining the density of edges belonging to a predetermined region (for example, a search region, or a region specified by a template) in the design data corresponding to the search target region of the template matching.
- a method and apparatus are also proposed that determine the edge density for the predetermined region and select it as a template region or template candidate region when the edge density satisfies a predetermined condition. For example, a region including high-edge-density areas and low-edge-density areas at a predetermined ratio is selected as a template region or template candidate region.
- a template creation method that creates a template for template matching from design data using process information relating to the semiconductor manufacturing process, and an apparatus for realizing it, are also described.
- a template generation method and apparatus for obtaining density information of a multilayer pattern in an area specified by a template are proposed.
- a block diagram showing the configuration of an image processing apparatus that creates a template based on design data.
- a figure showing an example of a dense region obtained by the region detection unit.
- a flowchart explaining the processing steps of the pattern edge multi-valued image creation process, and a flowchart explaining the pattern composite image creation process.
- a figure explaining an example of a semiconductor measurement system, and a schematic explanatory drawing of a scanning electron microscope.
- a figure showing an example of a table relating pattern classification, manufacturing process information, and image processing conditions, and a figure showing an example of a table relating pattern conditions and pattern classification.
- the image processing apparatus exemplified in the embodiment described below relates to a method and apparatus for generating a multi-value template by detecting a specific area based on information obtained from design data.
- an example will be described in which a specific region is detected based on information derived from the pattern edge density obtained from the design data, and a binary template, a multi-value template, or both are generated from the design data.
- an example of generating a binary and/or multi-value template from design data based on information derived from the pattern edge density will also be described.
- setting the coordinate position and/or area size of the region used for the template based on the pattern edge information obtained from the design data will also be described.
- an example in which the coordinate position and/or area size of the region used for the template is set based on information derived from the pattern edge density will also be described.
- an example will also be described in which the coordinate position and/or region size of an already-set template region is changed based on information derived from the pattern edge density obtained from the design data.
- an example will be described of an image processing apparatus that includes pattern edge density calculation means for obtaining information based on the density of pattern line segments from design data, a template position adjusting unit that obtains a template area using the pattern edge density information obtained by the pattern edge density calculation means, and a template generating unit that creates a template based on the information obtained by the template position adjusting unit.
- an example will also be described of setting the coordinate position of the area used for the template, its area size, or both, by detecting an area that includes both sparse and dense areas of pattern line segment density.
- An example of displaying information based on the pattern edge density obtained from the design data will also be described.
- An example in which the user sets the template area by displaying information based on the pattern edge density obtained from the design data will also be described.
- a binary and/or multi-value template is generated using information based on the pattern edge density obtained from the design data. An example is also explained in which a pattern edge is obtained for each layer from the design data, and a binary and/or multi-value template is generated using information based on the density of the pattern edges obtained by superimposing the pattern edges of the layers.
- an example will also be described in which an area including both sparse and dense areas of pattern line segment density is detected, and a binary template, a multi-value template, or both are generated.
- an example will be described in which smoothing means smoothes the pattern edge image information when a multi-value template is generated using information based on the pattern edge density from the design data.
- an example will also be described of an image creation method and an image processing program that generate a binary and/or multi-value template using information based on the pattern edge density obtained from the design data.
- the step information of the pattern of the area specified by the template is estimated using the design data and process information related to the manufacturing process, and the shade information of each position in the template is obtained.
- an example is shown in which the user sets process information related to the manufacturing process when creating a template for template matching.
- an example will be described in which the area specified by the template is divided into a plurality of regions based on the relative positions of the interlayer patterns of a multilayer pattern, the step state of the pattern is estimated for each region using process information regarding the manufacturing process, and tone information at each position in each region is generated.
- a charged particle beam apparatus is illustrated as an apparatus for forming an image, and an example using an SEM is described as one aspect thereof.
- a focused ion beam (FIB) apparatus that scans a beam to form an image may be employed as the charged particle beam apparatus.
- FIG. 24 is a schematic explanatory diagram of a measurement and inspection system in which a plurality of measurement or inspection devices are connected to a network.
- the system mainly comprises a CD-SEM 2401, which measures pattern dimensions of semiconductor wafers, photomasks, and the like, and a defect inspection apparatus 2402, which irradiates a sample with an electron beam to obtain an image and extracts defects by comparing the image with a pre-registered reference image; both are connected to the network.
- the network also connects a condition setting device 2403 that sets measurement positions and measurement conditions on the design data of the semiconductor device, a simulator 2404 that simulates pattern quality based on the design data of the semiconductor device and the manufacturing conditions of the semiconductor manufacturing apparatus, and a storage medium 2405 that stores design data in which the layout data and manufacturing conditions of semiconductor devices are registered.
- the design data is expressed in, for example, the GDS format or the OASIS format, and is stored in a predetermined format.
- the design data can be of any type as long as the software that displays it supports the format and can handle the data as graphic data.
- the storage medium 2405 may be built into the measuring device, the control device of the inspection device, the condition setting device 2403, or the simulator 2404.
- the CD-SEM 2401 and the defect inspection device 2402 are each provided with a control device that performs the control necessary for that device; these control devices may also incorporate the functions of the simulator, measurement condition setting, and the like.
- an electron beam emitted from an electron source is focused by a plurality of lenses, and the focused electron beam is scanned one-dimensionally or two-dimensionally on a sample by a scanning deflector.
- secondary electrons (SE) and backscattered electrons (BSE) emitted from the sample by scanning the electron beam are detected by a detector and stored in a frame memory or the like in synchronization with the scanning of the scanning deflector.
- the image signals stored in the frame memory are integrated by an arithmetic device mounted in the control device. Scanning by the scanning deflector can be performed at any size, position, and direction.
- control and the like are performed by the control device of each SEM, and images and signals obtained as a result of scanning with the electron beam are sent to the condition setting device 2403 via the communication line network.
- the control device that controls the SEM and the condition setting device 2403 are described here as separate units, but the invention is not limited to this; the condition setting device 2403 may perform device control and measurement processing collectively, or SEM control and measurement processing may be performed together in each control device.
- condition setting device 2403 or the control device stores a program for executing a measurement process, and measurement or calculation is performed according to the program.
- the condition setting device 2403 has a function of creating a program (recipe) for controlling the operation of the SEM based on semiconductor design data, and functions as a recipe setting unit. Specifically, it sets, on design data, pattern outline data, or simulated design data, the positions at which processing necessary for the SEM is to be performed, such as desired measurement points, auto focus, auto stigma, and addressing points, and based on these settings it creates a program for automatically controlling the sample stage, deflector, and so on of the SEM. In addition, to create the template described later, information on the region serving as the template is extracted from the design data, and a processor that creates a template based on the extracted information, or a template creation program for a general-purpose processor, is stored.
- FIG. 25 is a schematic configuration diagram of a scanning electron microscope.
- An electron beam 2503 extracted from an electron source 2501 by an extraction electrode 2502 and accelerated by an acceleration electrode (not shown) is focused by a condenser lens 2504 which is a form of a focusing lens, and then is scanned on a sample 2509 by a scanning deflector 2505.
- the electron beam 2503 is decelerated by a negative voltage applied to an electrode built in the sample stage 2508 and is focused by the lens action of the objective lens 2506 and irradiated onto the sample 2509.
- secondary electrons and electrons 2510 such as backscattered electrons are emitted from the irradiated portion.
- the emitted electrons 2510 are accelerated in the direction of the electron source by the acceleration action based on the negative voltage applied to the sample, collide with the conversion electrode 2512, and generate secondary electrons 2511.
- the secondary electrons 2511 emitted from the conversion electrode 2512 are captured by the detector 2513, and the output I of the detector 2513 changes depending on the amount of captured secondary electrons. Depending on the output I, the brightness of a display device (not shown) changes.
- an image of the scanning region is formed by synchronizing the deflection signal to the scanning deflector 2505 and the output I of the detector 2513.
- the scanning electron microscope illustrated in FIG. 25 includes a deflector (not shown) that moves the scanning region of the electron beam.
- the invention is not limited to such a configuration; for example, it is also possible to adopt a configuration in which the detection surface of an electron multiplier tube or a detector is arranged on the trajectory of the emitted electrons.
- the control device 2514 controls each component of the scanning electron microscope, and has a function of forming an image based on the detected electrons and a function of measuring the width of a pattern formed on the sample based on the intensity distribution of detected electrons called a line profile.
- the scanning electron microscope illustrated in FIG. 25 is equipped with an optical microscope.
- the optical microscope mainly includes a light source 2515 and a light receiving unit 2516, and an optical image is formed by converting light received by the light receiving unit 2516 into an image by the control device 2514.
- a pattern matching function for performing template matching on the obtained optical image based on an image registered in advance and identifying a position to be measured is provided.
- image processing can be built into the control device 2514, executed by a computing device with built-in image processing functions, or executed by an external computing device (for example, the condition setting device 2403) connected via a network.
- FIG. 1 is a diagram illustrating an example of an image processing apparatus that creates a template based on design data.
- the design data storage unit 1 stores design data (layout data) corresponding to a pattern of an OM image that is an object of image recognition (matching).
- a template image is generated by the template generation unit 2 of the image processing apparatus 5 based on the design data corresponding to the OM image pattern in the design data storage unit 1.
- the area information selected by the area selection unit 4 is also used.
- the design data may be read from an external storage medium 2405.
- a process information storage unit 2801 for storing information on the semiconductor manufacturing process may be provided separately from the layout data.
- the process information storage unit 2801 stores information related to a planarization process such as CMP of a wafer acquired as an OM image to be matched.
- the process information in the process information storage unit 2801 and the design data (layout data) are used by the step estimation unit 2803 of the template generation unit 2802 (image processing apparatus) to estimate the step state of each pattern and to obtain a correction value for correcting the grayscale information.
- based on the correction value obtained by the step estimation unit 2803 and the design data corresponding to the OM image pattern in the design data storage unit 1, the grayscale information generation unit 2804 generates a template image.
- the design data may be read from an external storage medium 2405.
- OM images used in semiconductor inspection and the like can be obtained by placing a sample on a movable stage and photographing it with an optical microscope, and can be photographed at a position corresponding to design data.
- the pattern edge density of the region corresponding to the pattern of the OM image is obtained by the edge density calculation unit 3, and the contrast is determined based on the pattern edge density information.
- a region where high contrast and clear edges can be acquired is then selected by the region selection unit 4.
- the template generation unit 2 generates a template image from a region suitable for the template selected by the region selection unit 4.
- in an OM image, the luminance signal value decreases in pattern step regions, as shown in the figure; for this reason, the luminance is dark in an area having many pattern steps.
- although the luminance also depends on the reflectance of the pattern, a flat portion illuminated by a broad light source yields a comparatively high, bright luminance value. Observing the density of pattern steps therefore reveals bright parts and dark parts, and an image containing both a bright portion and a dark portion can be considered to have high contrast.
- if the pattern step is treated as a pattern edge, the density of pattern edges can be calculated, and an area whose pattern edges contain both dense and sparse parts will consistently have high contrast and clear edges. Such an area can be used as a template, improving the matching success rate for both edge-based matching processing and contrast-based matching processing.
- drawing is performed by the drawing unit 21 based on the design data corresponding to the pattern of the OM image from the design data storage unit 1. Since design data is usually expressed as information such as the vertex coordinates of patterns, the drawing unit 21 converts the pattern corresponding to the pattern of the OM image into image data.
- a template image is generated by the image generation unit 22 using information on the template region selected by the region selection unit 4 from the drawn image data.
- FIG. 4 shows an embodiment of the drawing unit 21.
- the inside and outside of the closed figure are separately painted by the in-pattern filling unit 211.
- the inside of the pattern is converted into image data with white and the outside of the pattern as black, and stored in the storage unit 212.
- there may be patterns of a plurality of layers (multilayer); in this case as well, drawing is performed for each layer based on the design data and the result is stored in the storage unit 212.
- drawing may also be performed outside the template generation unit 2, or even outside the image processing apparatus 5; in that case, it is conceivable to store the image data obtained by drawing the design data in the design data storage unit 1.
- the image generation unit 22 in FIG. 2 will be described in detail later.
- An edge is detected by the edge detection unit 31 for the image data drawn by the template generation unit 2 based on the design data.
- This edge detection unit can be realized by a differential filter such as a Laplacian filter or a Sobel filter. It is also conceivable to handle an image in which a line segment obtained from the vertex coordinates of design data is drawn (an image of only a line segment that is not painted inside and outside the pattern) as the output of the edge detection unit.
- the density detection unit 32 obtains the density of the pattern edge.
- the density detection unit 32 can detect the density by obtaining the sum of the values of the pattern edges included in the specific area around the target pixel.
- the density may be detected by obtaining the number of pixels having a pattern edge value larger than a certain value.
- here the density is obtained for each pixel, but it may instead be obtained for each group of a plurality of pixels. It is also conceivable to make the corresponding position finer than one pixel by interpolating the obtained density values; interpolation can be realized by known techniques such as nearest neighbor, bilinear, and bicubic.
- the pattern edge density information 3a from the density detection unit 32 is used by the region selection unit 4. The region selection unit 4 also uses the image data 3b, in which the inside and outside of the pattern are painted separately by the template generation unit 2 and which serves as the input of the edge density calculation unit 3.
- for example, given an edge detection result as shown in FIG. 6A, the density detection unit 32 may take as the specific region the pixels within one pixel of the target pixel (a 3 pixel * 3 pixel area centered on the target pixel); the sum over this region, shown in FIG. 6B, may be used as the edge density.
- alternatively, the edge density may be defined as the number of pixels in the specific region whose edge detection result is greater than 0, as shown in FIG. 6C.
- the method is not limited to these examples; it is only necessary to obtain information indicating the amount of edges within a specific range centered on the target pixel.
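- as a concrete illustration of the two density definitions above, the following is a minimal sketch in Python with NumPy. The function names and the zero-padding at the image border are assumptions made for illustration, not details taken from this description.

```python
import numpy as np

def edge_density(edge_img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Sum of edge values in a (2*radius+1)^2 window around each pixel
    (the FIG. 6B style density). Border pixels use a zero-padded
    neighborhood, an assumption made for this sketch."""
    pad = np.pad(edge_img.astype(np.int64), radius, mode="constant")
    h, w = edge_img.shape
    out = np.zeros((h, w), dtype=np.int64)
    k = 2 * radius + 1
    for dy in range(k):          # accumulate the box sum by shifting
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out

def edge_count_density(edge_img: np.ndarray, radius: int = 1,
                       threshold: int = 0) -> np.ndarray:
    """FIG. 6C style density: count neighborhood pixels whose edge
    value exceeds `threshold`."""
    return edge_density((edge_img > threshold).astype(np.int64), radius)
```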
- the edge density calculation unit 3 according to the present invention will be described with reference to FIG. Here, a case where two layers are used will be described.
- the edge detection units 31 and 33 detect the edges of the image data of the 2A layer and the 2B layer respectively, and the maximum value selection unit 34 compares the edge detection results pixel by pixel and selects the larger edge value.
- the image data that has passed through the edge detection units 31 and 33 is large (white) where the edge is strong and small (black) where the edge is weak; thus, if there is an edge in any layer, the value of that edge is reflected preferentially.
- the maximum value selection unit 34 can be realized by the comparison unit 341 and the storage unit 342 as shown in FIG. The value after edge detection of each layer is compared and the larger one is selected and stored in the storage unit 342.
- alternatively, one edge detection unit may be omitted: the storage unit 342 is initialized to all zeros, edge detection is performed in turn on the 2A layer, the 2B layer, and each further layer using only the edge detection unit 31, and the maximum value selection unit 34 reads the maximum edge value stored so far and compares the new result against it.
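- the running-maximum variant just described can be sketched as follows; this is one illustrative reading of the comparison unit 341 and storage unit 342, with hypothetical function names.

```python
import numpy as np

def combine_layer_edges(layer_edge_images):
    """Per-pixel maximum over the edge images of all layers, computed
    incrementally: the running maximum plays the role of the storage
    unit 342, the comparison that of the comparison unit 341."""
    running_max = None
    for edge_img in layer_edge_images:
        if running_max is None:
            running_max = np.zeros_like(edge_img)    # storage starts at all 0s
        running_max = np.maximum(running_max, edge_img)  # keep stronger edge
    return running_max
```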
- when detecting pattern edges, it is also conceivable to remove the edges of portions where the lower-layer pattern is hidden by the upper-layer pattern; whether a pattern is hidden can be determined from the design data. In that case, the drawing unit 21 could exclude in advance the pattern portions (white) that cannot be seen because they overlap with the upper layer, treating them as outside the pattern (black).
- the image data in which the inside and outside of the pattern are painted separately by the template generation unit 2, used as input to the edge density calculation unit 3, are compared across layers pixel by pixel by the maximum value selection unit 36; a pixel becomes white if a pattern exists in any layer, and the result is treated as pattern presence/absence information.
- the internal structure of the maximum value selection unit 36 is the same as that of the maximum value selection unit 34.
- the output 3 b of the maximum value selection unit 36 is used by the region selection unit 4.
- the density detection unit 32 described above was explained for two layers, but the same approach can be applied to more than two layers.
- the edge density information 3a and the pattern presence/absence information 3b obtained by the edge density calculation unit 3 are each stored in the storage unit 41. Then, based on the coordinate position (x, y) and the region size (the vertical and horizontal sizes of the region) of the template region obtained from the region information unit 45, the edge density information 3a and the pattern presence/absence information 3b of the part corresponding to the designated template region are read from the storage unit 41. That is, the portion corresponding to the template region is cut out from the entire stored image region, and the density of the edges in that portion is obtained.
- the information on the coordinate position and area size of the template area in the area information unit 45 is designated in advance by the user.
- Information on the coordinate position (x, y) and the region size (vertical and horizontal size of the region) may be specified in pixel units or nm units.
- the sparse region detection unit 42 detects regions that are inside the pattern and have a low pattern edge density; for example, regions with no pattern edge within a certain range may be regarded as sparse. The dense region detection unit 43 detects regions that are inside the pattern and have a high pattern edge density; for example, all regions other than those with no pattern edge within a specific range may be treated as having a high pattern edge density. The determination unit 44 determines whether a region is suitable for a template according to the extent to which it contains both the low-edge-density regions detected by the sparse region detection unit 42 and the high-edge-density regions detected by the dense region detection unit 43.
- the signal indicating whether or not the region is suitable for the template is sent to the template generation unit 2 as information 4a, together with the coordinate position and area size of the template region at that time.
- FIG. 10 shows an embodiment of the sparse region detection unit.
- the signal inversion unit 421 inverts the signal value of the edge density information 3a stored in the storage unit 41. For example, when the maximum value of the density information 3a is 255 and the minimum value is 0, the result of the signal inversion unit 421 is (maximum value - density information 3a): if the density information 3a is 0, the inverted value is 255, and if it is 255, the inverted value is 0. The inverted signal and the pattern presence/absence information 3b stored in the storage unit 41 are then compared by the minimum value selection unit 422, which replaces each pixel with the smaller value.
- after inversion, a pixel becomes blacker as the edge density becomes higher. Since the area inside the pattern is white, when the minimum value selection unit 422 selects the smaller (blacker) value, only areas inside the pattern with a low edge density remain white.
- the comparison unit 423 compares the result against a threshold value 425 having a specific value, binarizing values below the threshold to black and values above it to white, and the white region detection unit 424 detects the portions that remain white over a certain region range.
- for example, FIG. 11(a) shows a 5 pixel * 5 pixel region expressed in white (1) and black (0), with the certain region range taken as the area within two pixels of the target pixel. Because the upper-left pixel is black (0), the target pixel is set not as a white area but as a non-white area (0), as shown in FIG. 11(b).
- if instead the certain region range is the area within one pixel of the target pixel (a 3 pixel * 3 pixel region centered on the target pixel), there is no black (0) in that neighborhood near the center, so the vicinity of the center is determined to be a white region (1), as shown in the figure.
- the pixels determined to be in the white region are sparse region pixels.
- FIG. 12 shows an embodiment of the dense area detection unit.
- the signal inversion unit 431 inverts the signal value of the edge density information 3a stored in the storage unit 41, in the same way as in the sparse region detection unit 42.
- the comparison unit 433 compares the result against a threshold value 432 having a specific value, binarizing the portions below the threshold to black and the portions above it to white, and the black region detection unit 434 detects the portions that remain black over a certain region range.
- This black area detection unit 434 differs from the white area detection unit 424 only in whether the object to be detected is white or black, and can be realized in the same way.
- the pixel determined to be a black region is a dense region pixel.
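- a minimal sketch of the sparse and dense region detectors follows, assuming 0-255 edge density information 3a and a 0/255 pattern presence mask 3b; the helper names, the threshold, and the zero-padded neighborhood test are illustrative assumptions.

```python
import numpy as np

def all_true_neighborhood(mask: np.ndarray, radius: int) -> np.ndarray:
    """True only where every pixel within `radius` is True
    (the 'remains white/black over a certain region range' test)."""
    pad = np.pad(mask, radius, mode="constant", constant_values=False)
    h, w = mask.shape
    out = np.ones((h, w), dtype=bool)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out &= pad[dy:dy + h, dx:dx + w]
    return out

def detect_sparse_dense(density, pattern_mask, threshold=128, radius=1):
    """Sparse: inside the pattern AND low edge density (units 421-424).
    Dense: high edge density sustained over the region range (431-434)."""
    inverted = 255 - density                         # signal inversion
    candidate = np.minimum(inverted, pattern_mask)   # min selection (422)
    sparse = all_true_neighborhood(candidate > threshold, radius)
    dense = all_true_neighborhood(inverted <= threshold, radius)
    return sparse, dense
```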
- FIG. 13 shows an example of the determination unit.
- the counters 443 and 444 count the number of sparse region pixels and dense region pixels obtained by the sparse region detection unit 42 and the dense region detection unit 43, respectively; the larger the count, the more sparse or dense area the region contains.
- the counted values are compared with the threshold values 441 and 442 by the comparison units 445 and 446, respectively. When both comparison results exceed the thresholds (both comparison outputs are “1”), that is, when both the sparse regions and the dense regions occupy more than a specific area, the output of the AND 447 is “1” and the region is determined to be suitable for the template. When either counted value is smaller than its threshold (that comparison output is “0”), the output of the AND 447 is “0” and the region is determined to be unsuitable for the template.
- by using this suitability determination, the coordinate position and area size of the template region can be obtained automatically, without the user setting them, which is convenient.
- FIG. 14 shows a detection flow of the coordinate position and area size of the template area.
- first, the template size and the coordinate position are initialized. For example, the template size can start at its smallest value, and if the upper-left corner (x, y) of the image to be searched is (0, 0), the coordinate position can start at (0, 0).
- next, the edge density information 3a and the pattern presence/absence information 3b of the image data of the current template size are used to determine the density of the pattern edges and thereby judge whether the region is suitable for the template.
- in S103, if the determination result of the determination unit 44 is “1” (suitable for the template), the coordinate values are stored and sent to the template generation unit 2. If it is “0” (not suitable), it is determined in S105 whether S101 to S103 have been performed for all coordinate positions; if not, the coordinate position is updated in S106 and S101 to S103 are executed again. Once all coordinate positions have been processed, it is determined in S107 whether any template size has not yet been tried; if such a size remains, the template size is updated and the search is repeated.
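- the search flow of S101 to S107 combined with the determination unit of FIG. 13 can be sketched as below; the function name, the scan step, and the first-hit return policy are assumptions made for illustration.

```python
import numpy as np

def find_template_region(sparse_map, dense_map, sizes,
                         min_sparse, min_dense, step=1):
    """Exhaustive search over template sizes and coordinate positions.
    A window is suitable when it contains more than `min_sparse`
    sparse pixels AND more than `min_dense` dense pixels
    (counters 443/444, thresholds 441/442, AND 447)."""
    h, w = sparse_map.shape
    for size in sorted(sizes):                  # start from the smallest size
        for y in range(0, h - size + 1, step):
            for x in range(0, w - size + 1, step):
                n_sparse = int(sparse_map[y:y + size, x:x + size].sum())
                n_dense = int(dense_map[y:y + size, x:x + size].sum())
                if n_sparse > min_sparse and n_dense > min_dense:
                    return x, y, size           # suitable template region
    return None                                 # no suitable region found
```

In practice the two boolean maps could come from detect_sparse_dense() in the earlier sketch, so the whole automatic selection reduces to window sums over precomputed maps.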
- the image generation unit 22 of the template generation unit 2 will be described with reference to FIG.
- the edge detection units 231 and 232 detect the edges of each layer's image data, the signal inversion units 233 and 234 invert the signals, and the minimum value selection unit 235 selects the minimum value for each pixel. Since the pattern edge parts are black after inversion, a pixel is left black if either layer has a pattern edge there. When detecting the pattern edges, it is also conceivable to remove the edges of portions where the lower-layer pattern is hidden by the upper-layer pattern; whether a pattern is hidden can be determined from the design data.
- the maximum value selection unit 236 selects the maximum value for each pixel. Since the inside of the pattern is painted white, a pixel remains white if it is inside the pattern in either layer. The brightness of a region with pattern steps decreases, and the more steps there are, the closer it is to black; for this reason, it is conceivable that the density calculation unit 237 calculates the pattern density and estimates the luminance value from that density information.
- in the simplest case, the blackness is proportional to the pattern density. Instead of a simple proportion, an expression calculated from experimentally obtained information may be used, or a table based on experimentally obtained information may be held.
- the density calculation unit 237 can be realized in the same manner as the density detection unit 32 described above. The density information obtained by the density calculation unit 237 is inverted by the signal inversion unit 238 so that pixels become blacker in proportion to the pattern density, and the combining unit 239 then combines the results. The synthesis may blend the values at a specific ratio.
- alternatively, the pattern edge density may be obtained by applying smoothing processing with the smoothing unit 230 to the image in which the minimum value selection unit 235 left the pattern edges black, followed by synthesis in the composition unit 239.
- the smoothing unit can be realized by a general smoothing filter.
- the density can be obtained by processing that adds in information on surrounding pattern edges, such as a smoothing filter with uniform weights or a Gaussian filter. It is also conceivable to swap the order of the smoothing unit 230 and the synthesis unit 239 so that smoothing is performed on the synthesized image.
- alternatively, the user may set the template region while viewing the pattern edge density information.
- for example, a display unit 6 as shown in FIG. 17 may be provided, and an image of the information obtained by the sparse region detection unit 42 and the dense region detection unit 43 of the region selection unit 4 displayed on it.
- it is conceivable to use the binary signal output from the sparse region detection unit 42 to display the sparse-edge-density areas as white and the non-sparse areas as black.
- since a region including both a sparse region (white) and a non-sparse region (black) of the edges is suitable for the template, if, for example, region A is selected, it can be judged unsuitable because it contains no sparse area (white).
- region B, by contrast, can be judged suitable for the template because it includes both a sparse area (white) and a non-sparse area (black).
- the coordinate position and template size set by the user can be entered in the region information unit 45 of the region selection unit 4; this information is passed on as the template region as-is, so a template can be generated for the region selected by the user.
- the display unit 6 may also display a control for switching between a mode that sets the template automatically and a mode that sets it manually, together with information indicating whether the current mode is automatic or manual.
- the image generation unit 22 of the template generation unit 2 creates a multi-value template, but it is also possible to binarize the generated multi-value template by comparison with a specific value to generate a binary template.
- the image generation unit 22 may also use the binary signal output from the sparse region detection unit 42 as a template, and it is likewise conceivable to use the binary signal output from the dense region detection unit 43 as a template.
- it is also possible to create all of the above: the multi-value template, a template obtained by binarizing it, a binary template output from the sparse region detection unit 42, and a binary template output from the dense region detection unit 43.
- FIG. 19 shows an outline of the processing flow of the image generation unit.
- in the pattern edge composite image creation process S200, a pattern edge image is created from the drawing image (pattern image) of each layer, and the pattern edge images of the layers are synthesized into a pattern edge composite image.
- in the pattern edge multi-valued image creation process S300, the pattern edge density is obtained from the pattern edge composite image, the brightness reduction of the pattern is estimated from the pattern edge density, and a multi-valued image is created from the resulting brightness value of each pixel.
- in the pattern composite image creation process, a pattern composite image is created by combining the drawing images (pattern images) of the respective layers.
- in the multi-value template generation process S500, the pattern edge multi-value image and the pattern composite image are synthesized.
- FIG. 20 shows a processing flow of pattern edge composite image creation processing.
- edge detection is performed on the drawing image of the first layer (A layer) to create an edge image A′.
- an edge image B′ is created in the same manner for the drawing image of the second layer (B layer).
- white is used where the edge is large, and black where the edge is small.
- the edge image A′ and the edge image B′ are compared for each pixel, and the maximum value (the edge portion, i.e. the larger edge value) is selected to create an edge composite image. If there is an edge in either layer, the edge composite image therefore contains the overlapped edges.
- FIG. 21 shows a processing flow of pattern edge multi-value image creation processing.
- the edge density is calculated for the pattern edge composite image created by the pattern edge composite image creation process.
- the pattern edge density of each pixel is obtained from the number or amount of pattern edges around each pixel.
- the luminance value is then converted based on the edge density of each pixel. For example, if the decrease in luminance due to pattern steps is taken to be proportional to the pattern edge density, a value obtained by inverting the density value can be used as the luminance value.
- if the edge density and luminance signals both range from 0 (minimum) to 255 (maximum), an edge density of 0 gives a luminance value of 255, and an edge density of 255 gives a luminance value of 0.
- rather than a simple proportion, an equation derived from experimentally obtained information may be used, or a table based on such information may be held.
- FIG. 22 shows a processing flow of pattern composite image creation processing.
- the drawing image (pattern image) of the first layer (A layer) and the drawing image (pattern image) of the second layer (B layer) are compared for each pixel, and the maximum value (the pattern portion) is selected to create a pattern composite image.
- the drawing images (pattern images) are drawn with white (255) in the pattern portions (inside the pattern) and black (0) in the non-pattern portions (outside the pattern); therefore, if a pattern portion (white) exists in either layer, the resulting image gives priority to the white pattern portion.
- FIG. 23 shows the processing flow of the multi-value template generation process.
- the pattern edge multi-value image obtained by the pattern edge multi-value image creation process and the pattern composite image obtained by the pattern composite image creation process are synthesized.
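- putting the flow S200 to S500 together, a minimal end-to-end sketch might look as follows; the Sobel edge detector, the smoothing radius, and the 50:50 blend ratio are illustrative assumptions rather than values specified here.

```python
import numpy as np

def sobel_magnitude(img):
    """Simple Sobel gradient magnitude, clipped to 0..255."""
    p = np.pad(img.astype(np.int64), 1, mode="edge")
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.clip(np.abs(gx) + np.abs(gy), 0, 255)

def box_mean(img, radius):
    """Uniform-weight smoothing filter used as the density estimate."""
    pad = np.pad(img.astype(np.int64), radius, mode="edge")
    h, w = img.shape
    k = 2 * radius + 1
    acc = np.zeros((h, w), dtype=np.int64)
    for dy in range(k):
        for dx in range(k):
            acc += pad[dy:dy + h, dx:dx + w]
    return acc // (k * k)

def make_multivalue_template(layer_images, blend=0.5):
    """layer_images: per-layer drawings, 255 inside a pattern, 0 outside."""
    # S200: per-layer edge images combined by per-pixel maximum.
    edge_composite = np.maximum.reduce([sobel_magnitude(im)
                                        for im in layer_images])
    # S300: edge density -> luminance (dense edges -> dark pixels).
    edge_multivalue = 255 - box_mean(edge_composite, radius=2)
    # Pattern composite image creation: per-pixel maximum, white wins.
    pattern_composite = np.maximum.reduce(layer_images).astype(np.int64)
    # S500: blend the two images at a fixed ratio.
    return (blend * edge_multivalue
            + (1 - blend) * pattern_composite).astype(np.uint8)
```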
- the above-described processing of the image processing apparatus of the present invention may be performed by software processing.
- software processing may be performed on a personal computer, or hardware processing may be performed by being incorporated in an LSI.
- the pattern density determination method (edge density determination method) described above can be applied to a recipe verification method for verifying a created recipe, to an assist function that supports the operator when creating a template, and to an automatic template creation method.
- a specific application method of the pattern density determination method will be described.
- FIG. 26 is a flowchart showing a recipe verification process.
- a recipe is an operation program that automatically operates a scanning electron microscope or the like, and template information for OM matching is recorded in a part thereof. If this template information is not appropriate, an error may occur during automatic control of the SEM. Therefore, if the suitability of the recipe can be verified in advance, the error rate when the recipe is actually operated can be suppressed.
- first, the recipe information to be verified is read (step 2601), and OM template information is extracted from it (step 2602). Pattern density information is then extracted from the OM template information (information on the region selected as the template) by the method described above (step 2603).
- as described earlier, an image in which high-luminance portions (sparse edges) and low-luminance portions (dense edges) are appropriately mixed is an image with high contrast.
- if the ratio of bright parts to dark parts falls within an arbitrary ratio (for example, 1:1) or within a range of ratios centered on that value, the template is judged to be set appropriately; if not, the template is judged inappropriate or in need of review, and the judgment result is output on the display device (step 2605). If there are search target regions for a plurality of OM templates on the semiconductor wafer, steps 2602 and 2603 are repeated to extract the density information of each OM template (step 2604).
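- a small sketch of the ratio check in steps 2603 to 2605 is shown below; the threshold separating bright from dark, the target ratio, and the tolerance are illustrative parameters, not values fixed by this description.

```python
import numpy as np

def verify_template_contrast(density, threshold=128,
                             target=1.0, tolerance=0.5):
    """density: edge-density map of the template region (0..255).
    A pixel counts as 'dark' when dense edges lower the luminance
    (density > threshold), otherwise as 'bright'. The template is
    accepted when bright:dark lies within `tolerance` of `target`."""
    dark = int((density > threshold).sum())
    bright = density.size - dark
    if dark == 0 or bright == 0:      # no contrast at all
        return False
    return abs(bright / dark - target) <= tolerance
```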
- a search area for template matching is designated (step 2701).
- template size information is designated (step 2702).
- pattern density information in the search area is extracted (calculated) (step 2703).
- an area having a predetermined pattern density condition is selected from the partial areas of the template size (step 2704).
- a region satisfying the pattern density condition is one from which at least some contrast can be obtained; if other conditions (for example, that a unique shape is included) are also satisfied, the region can be said to be suitable for selection as a template. Therefore, by displaying the distribution of regions satisfying the condition within the search region, the operator can visually determine which region to select as the template. That is, displaying the results obtained in steps 2701 to 2704 can assist template region selection.
- next, for each region selected in step 2704 as having the predetermined contrast and having the same size as the template (hereinafter, the first region), the degree of coincidence with other regions is determined (step 2705).
- the degree of coincidence between the first region and other regions (for example, regions of the same size as the first region located at positions shifted by one pixel or more from it) is obtained by an autocorrelation method.
- when the degree of coincidence between the first region and the other regions (a plurality of regions shifted by one pixel or more from it) is low, the first region contains a pattern shape that is unique relative to its surroundings and is a region that should be selected as a template. Conversely, when the degree of coincidence is high, actual template matching may identify other regions that should not match as matching positions, so such a region is likely to cause matching errors. Therefore, for example, first regions whose degree of coincidence with other regions is at or below a predetermined value, or the first region with the lowest degree of coincidence, are extracted and output as template candidates (steps 2706, 2707). In step 2706, the comparison may be based on each first region's highest degree of coincidence with any other region, or a predetermined number of regions with the lowest degree of coincidence may be extracted and output.
- in this example, the regions on which autocorrelation is performed are narrowed down based on the pattern edge density information, which is effective in reducing the processing load.
- in other words, the regions to be selected as templates are extracted based on an AND condition between contrast information (pattern density information), which is not directly expressed in the design data, and information that can be extracted directly from the design data.
- template candidates may also be narrowed down by selectively extracting portions having a predetermined pattern shape and taking the AND with the contrast information. Since pattern shape information (layout data) can be extracted directly from the design data, if a template pattern suitable for matching is known empirically, the narrowing described above may be performed based on the input of that information.
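- the uniqueness check of steps 2705 and 2706 can be sketched with a normalized autocorrelation as below; the shift range and the normalized-correlation score are one plausible reading of "degree of coincidence", used here purely for illustration.

```python
import numpy as np

def uniqueness_score(search_img, x, y, size, max_shift=8):
    """Highest normalized correlation between the candidate region
    (the 'first region') and same-size regions shifted by >= 1 pixel.
    Lower scores indicate a more unique, better template candidate."""
    ref = search_img[y:y + size, x:x + size].astype(np.float64)
    ref -= ref.mean()
    ref_norm = max(np.sqrt((ref ** 2).sum()), 1e-9)
    h, w = search_img.shape
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            if dx == 0 and dy == 0:
                continue                       # skip the region itself
            yy, xx = y + dy, x + dx
            if 0 <= yy <= h - size and 0 <= xx <= w - size:
                other = search_img[yy:yy + size,
                                   xx:xx + size].astype(np.float64)
                other -= other.mean()
                denom = ref_norm * max(np.sqrt((other ** 2).sum()), 1e-9)
                best = max(best, float((ref * other).sum()) / denom)
    return best
```

Candidates whose score falls at or below a predetermined value, or the candidate with the lowest score, would then be output as template candidates (steps 2706, 2707).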
- as described above, the luminance signal value of an OM image depends on the pattern steps, and it also depends on the reflectance of the pattern; a flat portion illuminated by a broad light source yields a comparatively high, bright luminance value.
- in a semiconductor device, the number of manufacturing process steps can reach several tens.
- with miniaturization, the process includes steps that flatten the wafer, such as reflow or CMP (polishing).
- the unevenness of the lower layer (N-1 layer) pattern affects the upper layer (N layer).
- the brightness is lowered at the position corresponding to the unevenness.
- when CMP (polishing) is performed, the unevenness of the upper layer (N layer) disappears and the luminance reduction does not occur.
- alternatively, an interlayer insulating film may be deposited and planarized so that it absorbs the unevenness of the lower layer (N-1 layer) before the upper layer (N layer) is formed; this also prevents unevenness in the upper layer (N layer).
- the appearance of the OM image varies depending on whether or not the flattening process is performed.
- the size of the step changes depending on the process such as etching, and the appearance changes.
- the pattern step is estimated using process information such as the degree of influence of the size of the pattern step in each process, and the template is generated by reflecting the pattern step in the gray value.
- as the process information, information indicating whether or not the process is a planarization process such as CMP (polishing) is used.
- FIG. 29 is a diagram illustrating an outline of the step estimation unit 2803 included in the template generation unit 2802 illustrated in FIG.
- the area dividing unit 2901 divides the pattern corresponding to the OM image into at least the pattern edge area of the upper layer, the pattern edge area of the lower layer, and the pattern edge area of the lower layer that is covered by the upper-layer pattern; the step state is then estimated for each area based on the process information, and the correction value used by the grayscale information generation unit 2804 is calculated by the grayscale correction calculation unit 2902.
- based on the design data transmitted from the design data storage unit 1, the drawing unit 3001 draws the upper layer (Nth layer) and lower layer (N-1th layer) patterns corresponding to, for example, the pattern acquired from the OM image to be matched. Since design data is usually expressed as information such as pattern vertex coordinates, the drawing unit 3001 converts the pattern corresponding to the OM image into image data. It is also conceivable to create the drawing image in advance by providing the drawing unit 3001 outside the region dividing unit 2901, or outside the step estimation unit 2803 or the template generation unit 2802.
- the drawing image of the upper layer (Nth layer) pattern is stored in the upper layer drawing image storage unit 3004. An edge is then detected from it by the upper layer edge detection unit 3002 and stored in the region separation result storage unit 3006. Similarly, an edge is detected from the drawing image of the lower layer (N-1 layer) pattern by the lower layer edge detection unit 3003 and stored in the region separation result storage unit 3006.
- the covering edge detection unit 3005 detects the lower-layer pattern edges that are covered by the upper-layer pattern, and the result is stored in the region separation result storage unit 3006.
- FIG. 31 shows an example of pattern drawing, edge detection, and covered-edge detection. FIG. 31(a) is the upper-layer pattern drawing image and FIG. 31(b) is the lower-layer pattern drawing image. The edge detection result for the upper-layer drawing image is (c), and that for the lower-layer drawing image is (d). Superposing the upper-layer drawing image (a) on the lower-layer edge detection result (d) gives (e), and the covered-edge detection result (f) is the region that is white in both (a) and (d). What is stored is the upper-layer edge area (c), the covered lower-layer edge area (f), and the lower-layer edge area (d) with the covered edge area (f) removed, shown as (g).
- The pattern drawing unit 3001 included in the region dividing unit 2901 illustrated in FIG. 30 can be configured in the same way as the drawing unit 21 described earlier.
- The upper-layer edge detection unit 3002 and the lower-layer edge detection unit 3003 can be realized with a differential filter such as a Sobel or Laplacian filter. For example, binarization (Otsu's method or the like) may be applied after the differential filter so that edge portions become white "255" and non-edge portions become "0".
- The upper-layer drawing image storage unit 3004 can be realized with a memory.
- The covering edge detection unit 3005 can be realized with an AND circuit that outputs the AND of the output of the lower-layer edge detection unit 3003 and the output of the upper-layer drawing image storage unit 3004. For example, taking the AND of (d) and (a) in FIG. 31 and outputting white only where both are white yields the result (f).
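As a rough software sketch of this region-division stage (a minimal illustration in Python/numpy, assuming binary drawing images as input; the function names and toy patterns below are assumptions for illustration, not part of the patent):

```python
import numpy as np

def detect_edges(mask: np.ndarray) -> np.ndarray:
    """Differential filter (a plain gradient here; a Sobel or Laplacian
    filter would serve the same role) followed by binarization:
    edge pixels become white "255", non-edge pixels "0"."""
    m = mask.astype(float)
    gx = np.zeros_like(m)
    gy = np.zeros_like(m)
    gx[:, 1:-1] = m[:, 2:] - m[:, :-2]   # horizontal gradient
    gy[1:-1, :] = m[2:, :] - m[:-2, :]   # vertical gradient
    grad = np.hypot(gx, gy)
    return np.where(grad > 0, 255, 0).astype(np.uint8)

def covering_edges(lower_edges: np.ndarray, upper_fill: np.ndarray) -> np.ndarray:
    """AND of the lower-layer edge image and the filled upper-layer drawing:
    white "255" only where both are white, as in FIG. 31(f)."""
    both = (lower_edges == 255) & (upper_fill == 255)
    return np.where(both, 255, 0).astype(np.uint8)

# Toy drawing images: patterns are filled white "255" on a black background.
upper = np.zeros((32, 32), np.uint8)
upper[8:24, 8:24] = 255                   # upper-layer pattern, cf. FIG. 31(a)
lower = np.zeros((32, 32), np.uint8)
lower[4:28, 14:18] = 255                  # lower-layer pattern, cf. FIG. 31(b)

upper_e = detect_edges(upper)             # (c) upper-layer edges
lower_e = detect_edges(lower)             # (d) lower-layer edges
covered = covering_edges(lower_e, upper)  # (f) lower edges hidden under the upper pattern
```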
- FIG. 32 shows an example of the area division result storage unit 3006.
- The conversion unit 3201 converts the upper-layer edge detection output so that edge portions become "1" and non-edge portions become "0". The output of the covering edge detection unit 3005 is converted into "3" by the conversion unit 3205, and only pixels with the edge value "3" are stored in the storage unit 3206.
- The output of the lower-layer edge detection unit 3003 and the output of the covering edge detection unit 3005 inverted by the inversion unit 3203 (after inversion, edge portions are black "0" and non-edge portions are white "255") are fed to the AND unit 3202, which outputs white "255" only where both inputs are white "255". The conversion unit 3204 then converts the white portions into "2", and only pixels converted to "2" are stored in the storage unit 3206.
- In other words, upper-layer edges are stored as "1", lower-layer edges not covered by the upper-layer pattern as "2", lower-layer edges covered by the upper-layer pattern as "3", and all other areas as "0"; a different value is thus assigned and stored for the pixel positions of each kind of edge.
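Continuing the sketch, the stored region separation result amounts to a single label image; one hypothetical way to assemble it from the arrays above (the overwrite order, which gives the upper-layer edges precedence, is an assumption):

```python
# Label image: 0 = no edge, 1 = upper-layer edge,
# 2 = uncovered lower-layer edge, 3 = covered lower-layer edge.
labels = np.zeros(upper_e.shape, np.uint8)
labels[lower_e == 255] = 2    # all lower-layer edges first...
labels[covered == 255] = 3    # ...then overwrite the covered ones with "3"
labels[upper_e == 255] = 1    # upper-layer edges (assumed to take precedence)
```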
- FIG. 33 shows an example of the shading correction calculation unit 2902.
- The process information enters the upper-layer step estimation unit 3301, the lower-layer step estimation unit 3302, and the covered-portion step estimation unit 3303. The step information estimated by these units is written into the shading correction storage unit 3304 at the pixel positions of the upper-layer edge portions, lower-layer edge portions, and covered portions obtained by the area dividing unit 2901.
- The step estimation units 3301, 3302, and 3303 can be realized with conversion tables. For example, information on whether or not the current process comes after a planarization step such as CMP is given as the process information: the process information is "1" for processes after flattening and "0" otherwise.
- When the process information is "0", it is assumed that the pattern steps of the upper layer, the lower layer, and the covered portion all remain, and the step estimation units 3301, 3302, and 3303 all output "100". When the process information is "1", the steps of the upper-layer and lower-layer patterns are assumed to remain, so the units 3301 and 3302 output "100", while the covered portion is assumed to have no step, so the unit 3303 outputs "0".
- The output values are stored in the shading correction storage unit 3304, which can be realized with a memory; the outputs of the step estimation units 3301, 3302, and 3303 are stored as correction values corresponding to the pixel positions of the upper-layer edge portions, lower-layer edge portions, and covered portions obtained by the region dividing unit 2901.
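A minimal table-driven sketch of this step estimation; the dictionary merely restates the "100"/"0" example above, and its structure is an assumption:

```python
# Hypothetical conversion table.  Process information "1" means the image is
# taken after a planarization step such as CMP, "0" otherwise.  "100" means
# the step remains in full; "0" means no step, hence no luminance drop.
STEP_TABLE = {
    0: {"upper": 100, "lower": 100, "covered": 100},  # no planarization yet
    1: {"upper": 100, "lower": 100, "covered": 0},    # after CMP
}

def estimate_steps(process_info: int) -> dict:
    """Correction values for the three region kinds (units 3301-3303)."""
    return STEP_TABLE[process_info]

corrections = estimate_steps(1)  # {'upper': 100, 'lower': 100, 'covered': 0}
```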
- A case where the planarization affects the upper-layer pattern is, for example, one in which, after the lower-layer pattern is formed, an interlayer insulating film is deposited on it and planarized, and the upper-layer pattern is then formed on top. In that case the upper-layer pattern is flat, and no step appears where the lower-layer pattern is covered by the upper-layer pattern.
- The process information to be used therefore covers not only the current or immediately preceding process of the semiconductor manufacturing flow to be matched, but also earlier processes. For example, it is conceivable to go back to the most recently performed planarization process and use the series of process information from that point on. Patterns below that planarization step can then be ignored, while patterns stacked on top of one another after it may be considered to affect the layers above them.
- Where two such layers overlap, their pattern steps accumulate and the luminance is considered to drop further; in that case it is conceivable to double the correction value. Using information on the film thickness of each layer, the size of a pattern step may be determined more accurately.
- If the interlayer insulating film placed over the upper layer is thick, the contrast between the upper and lower layers decreases regardless of the pattern step, so it is also conceivable to correct the contrast using the film thickness information. In that case not only the step state but also the appearance of the pattern can be estimated. Material information may likewise be used when determining the pattern step.
- The process information on the manufacturing flow may simply be set by the user through a GUI or the like.
- The correction values of the step estimation units 3301, 3302, and 3303 corresponding to each piece of process information may be prepared by examining in advance how the pattern step of each region (upper layer, lower layer, covered portion) changes with each process name and process content, and placing a corresponding correction value in a table. The correction value determines the composition ratio of the pattern edges. It may instead be determined from how the OM image of each region appears for each process name and process content.
- Alternatively, the change in the pattern step of each region for each process name and process content may be calculated in advance with a commercially available simulator and placed in the conversion tables of the step estimation units 3301, 3302, and 3303, or a calculation formula used in such a simulator may be entered and the values obtained by calculation.
- Pattern images are created by the drawing units 3401 and 3402 for the upper-layer and lower-layer patterns of the design data. These drawing units can have the same configuration as the drawing unit 21.
- The edge detection units 3403 and 3404 detect edges, the signal inversion units 3405 and 3406 invert the signals, and the minimum value selection unit 3407 selects the minimum of the two values for each pixel. Since edge portions are "0" (black) after inversion, a pixel is left at "0" (black) if either layer has a pattern edge there.
- The edge detection units 3403 and 3404 are the same as the upper-layer edge detection unit 3002 and the lower-layer edge detection unit 3003 and can be realized with a differential filter.
- The signal inversion units output the maximum value (255) minus the input value, where the maximum input value is 255 and the minimum is 0. Specifically, an input of 0 becomes 255 after inversion, and an input of 255 becomes 0.
- The minimum value selection unit 3407 can be realized with the comparison unit 341 and the storage unit 342: the values after edge detection of each layer are compared, and the smaller one is selected and stored in the storage unit 342. The storage unit 342 may also be omitted.
- It is also conceivable to initialize the storage unit 342, perform edge detection in order not only for the upper and lower layers but also for each layer below them, and compare the results in the minimum value selection unit 3407; at that time the maximum edge value stored so far may be read out and compared. Further, when detecting pattern edges, it is conceivable to remove the edges of portions where a lower pattern is hidden by an upper-layer pattern; whether a pattern is hidden can be determined from the design data. In that case the drawing units 3401 and 3402 may exclude the pattern portions (white) that cannot be seen because they are overlapped by an upper layer.
- On the other hand, the maximum value selection unit 3408 selects the maximum of the upper-layer and lower-layer pattern values for each pixel. Since the inside of a pattern is painted white "255", a pixel remains white "255" if it lies inside either pattern. The unit 3408 can be realized in the same manner as the minimum value selection unit 3407, except that the larger of the compared values is selected.
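In the same numpy sketch, the signal inversion and the per-pixel minimum/maximum selections might look as follows (reusing the arrays from the earlier block):

```python
def invert(img: np.ndarray) -> np.ndarray:
    """Signal inversion: output = maximum value (255) minus the input."""
    return (255 - img.astype(int)).astype(np.uint8)

# Minimum value selection: after inversion, edges are "0" (black), so the
# per-pixel minimum keeps a pixel black if either layer has an edge there.
edge_min = np.minimum(invert(upper_e), invert(lower_e))

# Maximum value selection: pattern interiors are painted white "255", so the
# per-pixel maximum stays white if the pixel is inside either pattern.
fill_max = np.maximum(upper, lower)
```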
- An area where the pattern has a step decreases in brightness, and the more steps there are, the closer it is to black. The density calculation unit 3409 therefore calculates the pattern density and estimates the luminance value from the density information, making pixels black in proportion to the pattern density.
- For the edge density, a specific region extending one pixel around the target pixel (a region of 3 x 3 pixels centered on the target pixel) can be used, for example: as shown in FIG. 6B, the sum over the specific region may be used as the edge density, or, as shown in FIG. 6C, the edge density may be the number of pixels in the specific region whose edge detection result is greater than 0. The method is not limited to these; it is only necessary to obtain information indicating the amount of edges within a specific range centered on the target pixel.
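A sketch of such an edge-density calculation over a 3 x 3 neighborhood (the window-summing implementation is one possible choice, per the note above that any measure of the edge amount will do):

```python
def edge_density(edges: np.ndarray, radius: int = 1) -> np.ndarray:
    """Amount of edge in a (2*radius+1)^2 window centred on each pixel,
    e.g. a 3 x 3 region for radius 1, computed as the sum of the
    binarized edge image over the window (cf. FIG. 6B)."""
    e = (edges == 255).astype(int)
    padded = np.pad(e, radius)
    h, w = e.shape
    out = np.zeros((h, w), int)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out

density = edge_density(np.maximum(upper_e, lower_e))  # edges of both layers
```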
- The density information obtained by the density calculation unit 3409 is inverted by the signal inversion unit 3410 so that pixels become black in proportion to the pattern density, and the combining unit 3411 combines the results. It is conceivable to synthesize the values at a specific ratio, with the ratio adjusted by the density correction value from the step estimation unit 2803.
- For example, if the density correction value for an area estimated to have a step is "100" (100%) and the nominal ratio of the density-derived gray value is 60%, the values are combined at 60% x 1.0 = 60%. If the density correction value for an area estimated to have no step is "0" (0%), the same nominal 60% is multiplied by 0.0 and the values are combined at 0%.
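The ratio-controlled synthesis could be sketched as below (reusing the earlier helpers; the 60% nominal ratio is the example from the text, everything else is an assumption):

```python
def blend(density_gray: np.ndarray, base_image: np.ndarray,
          base_ratio: float, correction_pct: int) -> np.ndarray:
    """Combine the gray value derived from the pattern density with the
    drawn pattern image.  The nominal ratio (e.g. 0.6) is scaled by the
    step-estimation correction: "100" keeps it as is, "0" drops the
    density term entirely."""
    r = base_ratio * (correction_pct / 100.0)
    out = r * density_gray + (1.0 - r) * base_image.astype(float)
    return np.clip(out, 0, 255).astype(np.uint8)

# Density term, inverted so that denser areas are darker.
dens = edge_density(np.maximum(upper_e, lower_e))
dens_gray = invert((255 * dens / max(dens.max(), 1)).astype(np.uint8))

template = blend(dens_gray, fill_max, base_ratio=0.6, correction_pct=100)
```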
- When the combination ratio is 0%, the image selected and created by the maximum value selection unit 3408 is output as it is. It is also conceivable to change the ratio using information on the film thickness of a layer such as an interlayer insulating film: for example, if a thick interlayer insulating film lowers the contrast as a whole, the synthesis ratio may be reduced, and not only for the upper-layer edges, lower-layer edges, and covered edges. It is likewise conceivable to create the template by switching, based on the result of the step estimation unit 2803, between the output of the image processing method that combines the gray value obtained from the pattern density and the output of another image processing method that does not use it (that is, a combination ratio of 0%). The step states of the upper layer, the lower layer, and the covered portion may also be obtained externally and set.
- In that case, the user sets the step states of the upper layer, the lower layer, and the covered portion for the process on the display unit 3601, the set values are passed to the adjustment unit 3602, and the correction values corresponding to the set step states are output to the shading information generation unit 2804. The adjustment unit 3602 has the same configuration as the step estimation unit 2803; the shading correction calculation unit 2902 is then configured with an upper-layer step coefficient setting unit 3701, a lower-layer step coefficient setting unit 3702, and a covered-portion step coefficient setting unit 3703, in which the values corresponding to the step states set by the user are placed.
- The shading correction storage unit 3304 can be realized with a memory as described with reference to FIG. 33, and stores the set values of the coefficient setting units 3701, 3702, and 3703 as correction values corresponding to the pixel positions of the upper-layer edge portions, lower-layer edge portions, and covered portions obtained by the region dividing unit 2901. The light/dark information of each pixel can then be generated in the same manner as by the shading information generation unit 2804 described above.
- FIG. 38 shows the processing flow for template generation.
- In the region division process S200, each pattern is divided into a plurality of areas based on the design data: the upper-layer pattern edges, the lower-layer pattern edges, and the lower-layer pattern edges that overlap the upper-layer pattern are obtained using the upper-layer and lower-layer patterns. This realizes the contents described with reference to FIG. 30 by software processing.
- In the shading correction calculation process S300, a shading correction value is obtained, based on the process information, for each region divided in S200. This realizes the contents described with reference to FIG. 33 by software processing.
- In the shading information generation process S400, the shading information of each pixel of the template is generated using the per-region shading correction values obtained in S300. This realizes the contents described with reference to FIG. 34 by software processing.
- The software processing may be performed on a personal computer, or the processing may be implemented as hardware incorporated in an LSI.
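Tying the steps together, a compact software rendition of the FIG. 38 flow might look like this (built from the helper sketches above; applying a single correction value globally is a simplification, since the text applies region-specific values at the pixel positions given by the label image):

```python
def generate_template(design_upper: np.ndarray, design_lower: np.ndarray,
                      process_info: int) -> np.ndarray:
    """End-to-end sketch of the FIG. 38 flow using the helpers above."""
    # S200: region division
    ue = detect_edges(design_upper)
    le = detect_edges(design_lower)
    cov = covering_edges(le, design_upper)  # would select where the "covered"
                                            # correction applies in a full version
    # S300: shading correction values per region from the process information
    corr = estimate_steps(process_info)
    # S400: shading information for each template pixel
    dens = edge_density(np.maximum(ue, le))
    dens_gray = invert((255 * dens / max(dens.max(), 1)).astype(np.uint8))
    base = np.maximum(design_upper, design_lower)
    return blend(dens_gray, base, base_ratio=0.6,
                 correction_pct=corr["covered"])

om_template = generate_template(upper, lower, process_info=1)
```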
- FIG. 39 is a diagram showing an example of a table relating pattern classification, manufacturing process information, template density adjustment conditions, and edge processing conditions. If such a table is created in advance, condition setting when creating a template from design data becomes easy.
- FIG. 40 is a diagram showing an example of a table relating pattern conditions to pattern classification. The reflected light intensity when a pattern is irradiated with light can change not only with the pattern density but also with the pattern height and material, so in such cases classifying the patterns by a plurality of kinds of parameters, as shown in FIG. 40, makes it possible to create a template based on accurate pattern information.
- FIG. 41 is a diagram explaining an example of the arithmetic processing device 4101 included in the condition setting device 2403 and the like. The device comprises an area setting unit 4105 that sets the area targeted for the OM template on the design data 4104, a pattern classification determination unit 4106 that determines the pattern classification of the selected area, an image processing condition determination unit 4107 that determines the image processing conditions of the selected design data area based on the pattern classification, and a storage unit 4103 in which tables such as those illustrated in FIGS. 39 and 40 are stored.
- FIG. 43 is a flowchart showing the process of setting image processing conditions for the OM template using the two tables.
- First, a desired area on the design data is selected as an OM template candidate (step 4301), and the selected area is classified with reference to the table illustrated in FIG. 40 (step 4302). In the example of FIG. 40, three pattern conditions (density, depth, material) are stored in association with the pattern classification information. The density is expressed in terms of pattern and space, and such values may be stored as the pattern conditions. The classification may instead be based on a single condition (for example, only the pattern density), or on four or more conditions.
- Since the pattern density is also an index of luminance and contrast reduction, parameters that strongly affect changes in luminance and contrast, for example the interval between patterns, the distance between line segments included in the selected region, the distance between adjacent closed figures, or a statistic of the distances between multiple closed figures, may be defined as the pattern density.
- Next, information on the manufacturing process in which OM matching is to be performed is input (step 4303), and the image processing conditions are searched for based on the input information and the pattern classification information (step 4304). In the table illustrated in FIG. 39, density adjustment conditions and edge processing conditions are stored as the template image adjustment conditions: for example, a condition that lowers (darkens) the density as the pattern becomes denser is registered in the density adjustment condition column, and for a process in which planarization makes the edges that existed there disappear, it is good to register, in the edge processing condition column, a condition that gives a contrast close to that of the actual OM image.
- Finally, the retrieved image processing conditions are registered in the storage unit 4103 as a measurement/inspection recipe (step 4305).
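A hypothetical sketch of this two-table lookup (all keys and condition strings below are invented placeholders, not values from FIGS. 39 and 40):

```python
PATTERN_CLASS_TABLE = {   # (density, depth, material) -> classification
    ("dense", "shallow", "Si"): "A",
    ("sparse", "deep", "SiO2"): "B",
}
CONDITION_TABLE = {       # (classification, process info) -> conditions
    ("A", "after CMP"): {"density_adjustment": "darken dense areas",
                         "edge_processing": "suppress covered edges"},
    ("B", "after etch"): {"density_adjustment": "none",
                          "edge_processing": "keep all edges"},
}

def set_recipe(pattern_condition: tuple, process: str) -> dict:
    cls = PATTERN_CLASS_TABLE[pattern_condition]   # step 4302
    conditions = CONDITION_TABLE[(cls, process)]   # steps 4303-4304
    return {"classification": cls, **conditions}   # registered in step 4305

recipe = set_recipe(("dense", "shallow", "Si"), "after CMP")
```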
- With the configuration above, an OM matching template can be formed appropriately by setting a template area on design data or on pattern shape information obtained by simulation or the like.
- FIG. 44 is a flowchart showing a process of creating an OM image template to be used for measurement/inspection after one manufacturing process, based on an OM image obtained in the measurement/inspection step after another manufacturing process. First, the OM image obtained in the measurement/inspection step after manufacturing process A is stored in the storage medium 4103 (step 4401). Next, manufacturing process information (the information of the process to be measured or inspected next) is input (step 4402), and the image processing conditions are searched for with reference to a table storing the change information (image modification information) between the pattern formed by manufacturing process A and the pattern formed by manufacturing process B (step 4403). Based on the image processing conditions thus obtained, a template is created (step 4404) and registered as a recipe in the storage unit 4104 (step 4405).
- With such a configuration, a template can be created using images acquired after a different manufacturing process, which reduces the effort of template creation.
Description
2 Template generation unit
3 Edge density calculation unit
4 Region selection unit
5 Image processing apparatus
6 Display unit
21 Drawing unit
22 Image generation unit
31, 33, 231, 232 Edge detection unit
32, 35 Density detection unit
34, 36, 236 Maximum value selection unit
41, 212, 342 Storage unit
42 Sparse region detection unit
43 Dense region detection unit
44 Determination unit
45 Region information unit
211 Pattern interior filling unit
230 Smoothing unit
233, 234, 238, 421, 431 Signal inversion unit
235, 422 Minimum value selection unit
237 Density calculation unit
239 Synthesis unit
341, 423, 433, 445, 446 Comparison unit
424 White region detection unit
425, 432, 441, 442 Threshold
434 Black region detection unit
443, 444 Counter
447 AND
Claims (24)
1. A template creation method for partially extracting a part of design data and creating a template for template matching based on the extracted partial region, wherein an edge density of edges belonging to a predetermined region in the design data, corresponding to the searched region of the template matching, is calculated.
2. The template creation method according to claim 1, wherein, when the information on the edge density satisfies a predetermined condition, the predetermined region is selected as a template or a template candidate.
3. The template creation method according to claim 2, wherein, when the predetermined region contains, at a predetermined ratio, a region determined to have a high edge density and a region determined to have a low edge density, the predetermined region is selected as a template or a template candidate.
4. The template creation method according to claim 1, wherein, when the information on the edge density satisfies a predetermined condition, a binary template, a multi-valued template, or both are generated based on the design data of the predetermined region.
5. The template creation method according to claim 1, wherein, when the information on the edge density satisfies a predetermined condition, the coordinate position of the predetermined region, its region size, or both are registered as template information based on the design data of the predetermined region.
6. The template creation method according to claim 1, wherein the edge density is calculated for the template stored in an operation program that performs template matching using the template.
7. The template creation method according to claim 6, wherein the suitability of the template is determined based on the calculation of the edge density.
8. The template creation method according to claim 1, wherein the template is for performing template matching on an image obtained with an optical microscope.
9. An image processing apparatus comprising a template generation unit that generates a template for template matching based on selection of a partial region of design data, the apparatus further comprising an edge density calculation unit that calculates an edge density of edges belonging to a predetermined region in the design data corresponding to the searched region of the template matching.
10. The image processing apparatus according to claim 9, wherein the template generation unit selects the predetermined region as a template or a template candidate when the information on the edge density satisfies a predetermined condition.
11. The image processing apparatus according to claim 10, wherein the template generation unit selects the predetermined region as a template or a template candidate when the predetermined region contains, at a predetermined ratio, a region determined to have a high edge density and a region determined to have a low edge density.
12. The image processing apparatus according to claim 9, wherein the template generation unit generates a binary template, a multi-valued template, or both based on the design data of the predetermined region when the information on the edge density satisfies a predetermined condition.
13. The image processing apparatus according to claim 9, wherein the template generation unit registers, as template information, the coordinate position of the predetermined region, its region size, or both based on the design data of the predetermined region when the information on the edge density satisfies a predetermined condition.
14. The image processing apparatus according to claim 9, wherein the edge density calculation unit calculates the edge density for the template stored in an operation program that performs template matching using the template.
15. The image processing apparatus according to claim 14, wherein the suitability of the template is determined based on the calculation of the edge density.
16. The image processing apparatus according to claim 9, wherein the template is for performing template matching on an image obtained with an optical microscope.
17. A computer program that causes an arithmetic device to generate a template for template matching based on selection of a partial region of design data, the program causing the arithmetic device to calculate, for a predetermined region in the design data corresponding to the searched region of the template matching, an edge density of edges belonging to the predetermined region.
18. An image processing apparatus that creates a template for template matching from design data, comprising a shading information generation unit that obtains shading information for each position in the template using the design data and process information on the manufacturing process.
19. The image processing apparatus according to claim 18, further comprising a step estimation unit that generates information on steps within the template based on the design data, wherein the shading information generation unit obtains the shading information based on the information on the steps.
20. The image processing apparatus according to claim 19, wherein the step estimation unit comprises a region dividing unit that divides the region specified by the template into a plurality of regions, the region dividing unit performing the division based on the design data and the process information.
21. The image processing apparatus according to claim 20, wherein the step estimation unit divides the region based on design data of a multilayer pattern.
22. An image processing apparatus comprising a template generation unit that generates a template for template matching based on selection of a partial region of design data, wherein the template generation unit generates the template by applying, to a plurality of regions included in the partial region of the design data, image processing corresponding to the pattern formation state.
23. The image processing apparatus according to claim 22, wherein the template generation unit performs the image processing on the plurality of regions in accordance with information on the pattern density of the plurality of regions.
24. A computer program that causes an arithmetic device to generate a template for template matching based on selection of a partial region of design data, the program causing the arithmetic device to generate the template by applying, to a plurality of regions included in the partial region of the design data, image processing corresponding to the pattern formation state.
Priority Applications (2)
- US13/807,666 (filed 2011-05-13, priority 2010-06-29): US20130170757A1 — Method for creating template for patternmatching, and image processing apparatus
- JP2012522431A (filed 2011-05-13, priority 2010-06-29): JP5529965B2 — Method for creating a template for pattern matching, and image processing apparatus
Applications Claiming Priority (4)
- JP2010-147051, filed 2010-06-29
- JP2011-008374, filed 2011-01-19
Publications (1)
- WO2012001862A1, published 2012-01-05
Family
- Family ID: 45401612
Family Applications (1)
- PCT/JP2011/002660 (WO2012001862A1), priority date 2010-06-29, filed 2011-05-13
Country Status (3)
- US (1): US20130170757A1
- JP (2): JP5529965B2
- WO (1): WO2012001862A1
Families Citing this family (8)
- JP5953842B2 (2012-03-14): Image inspection method and inspection region setting method
- US9390885B2 (2013-05-09): Superposition measuring apparatus, superposition measuring method, and superposition measuring system
- EP3037878B1 (2014-12-23): Method of applying vertex based corrections to a semiconductor design
- US10163733B2 (2016-05-31): Method of extracting defects
- US10366674B1 (2016-12-27): Display calibration in electronic displays
- CN107452020B (2017-08-04): Anti-occlusion tracking method based on adaptive template matching
- JP2020148615A (2019-03-13): Reference image generation method and pattern inspection method
- JP7273748B2 (2020-02-28): Inspection apparatus, inspection method, and program
Citations (4)
- JP2000236007A (1999-02-17): Method for creating an automatic detection sequence file for a scanning electron microscope, and automatic length-measurement sequence method for a scanning electron microscope
- WO2005001593A2 (2003-06-27): Reference pattern extraction method and apparatus, pattern matching method and apparatus, position detection method and apparatus, and exposure method and apparatus
- JP2007334702A (2006-06-16): Template matching method and scanning electron microscope
- JP2009216398A (2008-03-07): Template creation method and image processing apparatus
Family Cites Families (11)
- US5649031A (1992-03-31): Image information processor for producing high-quality output image
- US6868175B1 (1999-08-26): Pattern inspection apparatus, pattern inspection method, and recording medium
- JP2005196678A (2004-01-09): Template matching method and target image region extraction apparatus
- JP2005258080A (2004-03-11): Layout data verification method, mask pattern verification method, and circuit operation verification method
- JP4769025B2 (2005-06-15): Imaging recipe creation apparatus and method for a scanning electron microscope, and semiconductor pattern shape evaluation apparatus
- JP4634289B2 (2005-11-25): Semiconductor pattern shape evaluation apparatus and shape evaluation method
- JP2007194419A (2006-01-19): Exposure processing method and semiconductor device manufacturing method
- US8019164B2 (2007-01-29): Apparatus, method and program product for matching with a template
- JP5203650B2 (2007-07-31): Recipe creation system and recipe creation method
- JP4659004B2 (2007-08-10): Circuit pattern inspection method and circuit pattern inspection system
- JP4951496B2 (2007-12-26): Image generation method and image generation apparatus
- 2011-05-13: JP application JP2012522431 filed (JP5529965B2, active)
- 2011-05-13: US application US13/807,666 filed (US20130170757A1, abandoned)
- 2011-05-13: PCT application PCT/JP2011/002660 filed (WO2012001862A1)
- 2014-04-17: JP application JP2014085119 filed (JP5775949B2, active)
Cited By (5)
- JPWO2014208216A1 (2013-06-25): Template creation apparatus for a sample observation device, and sample observation device
- WO2020027282A1 / JP2020021340A / JP7028099B2 (2018-08-02): Candidate region estimation apparatus, candidate region estimation method, and program
- WO2022080109A1 (2020-10-15): Template image creation method, template image creation system, and program
Also Published As
- JPWO2012001862A1, published 2013-08-22
- JP5529965B2, published 2014-06-25
- JP2014132504A, published 2014-07-17
- JP5775949B2, published 2015-09-09
- US20130170757A1, published 2013-07-04
Legal Events
- 121: EPO informed by WIPO that EP was designated in this application (ref. 11800339, EP, A1)
- WWE: WIPO information, entry into national phase (ref. 2012522431, JP)
- NENP: Non-entry into the national phase (country: DE)
- WWE: WIPO information, entry into national phase (ref. 13807666, US)
- 122: PCT application non-entry into the European phase (ref. 11800339, EP, A1)