WO2021038815A1 - Measurement system, method of generating a learning model used when performing image measurement of a semiconductor including a predetermined structure, and storage medium storing a program for causing a computer to execute processing for generating such a learning model - Google Patents
- Publication number
- WO2021038815A1 · PCT/JP2019/034050 · JP2019034050W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- region
- teacher data
- learning model
- measurement
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B15/00—Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/9501—Semiconductor wafers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/9501—Semiconductor wafers
- G01N21/9505—Wafer internal defects, e.g. microcracks
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/956—Inspecting patterns on the surface of objects
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03F—PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
- G03F7/00—Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
- G03F7/70—Microphotolithographic exposure; Apparatus therefor
- G03F7/70483—Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
- G03F7/70605—Workpiece metrology
- G03F7/70616—Monitoring the printed patterns
- G03F7/70633—Overlay, i.e. relative alignment between patterns printed by separate exposures in different layers, or in the same layer in multiple exposures or stitching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/72—Data preparation, e.g. statistical preprocessing of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B2210/00—Aspects not specifically covered by any group under G01B, e.g. of wheel alignment, caliper-like sensors
- G01B2210/56—Measuring geometric parameters of semiconductor structures, e.g. profile, critical dimensions or trench depth
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8883—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
Definitions
- The present disclosure relates to a measurement system, a method of generating a learning model used when performing image measurement of a semiconductor including a predetermined structure, and a storage medium storing a program for causing a computer to execute processing for generating such a learning model.
- Patterns manufactured by recent semiconductor processes have been miniaturized, and there is a demand to accurately superimpose patterns across multiple layers in an exposure apparatus, that is, to improve overlay accuracy.
- Patent Document 1 discloses a technique of extracting, by image processing, a plurality of luminance regions separated by luminance boundaries in an input image and performing overlay measurement from the positional relationship of the centers of gravity of those regions. Patent Document 2 discloses a technique of performing overlay measurement by referring to a design image such as a CAD image, or to a predicted image of the input image estimated from the design image, dividing the input image into regions pixel by pixel, and using the positional relationship of the centers of gravity of the divided regions.
- Patent Document 3 discloses a technique of training a machine learning model that infers the overlay amount (the amount of displacement of the position of the semiconductor structure subject to overlay measurement) from an image, using previously collected sample images of the overlay measurement target, and then measuring the overlay amount from an input image by referring to the trained machine learning model.
- The technique of Patent Document 1 requires manual adjustment of image-processing parameters for each input image, and because such adjustment requires know-how, the workers who can perform overlay measurement are limited to experts. The technique of Patent Document 2 cannot be operated when the design image is unavailable, for example because it has not been disclosed.
- The technique of Patent Document 3 has the problem that, because the process of measuring the overlay amount from the input image cannot be visually confirmed, it is difficult to analyze the cause when an unexpected overlay amount is measured.
- The present disclosure has been made in view of these circumstances and proposes a technique that enables measurement processing to be executed without image-processing parameter adjustment that requires know-how and without reference to design drawings that may be difficult to obtain.
- An embodiment of the present disclosure is a method of generating a learning model used when performing image measurement of a semiconductor including a predetermined structure. In the method, at least one processor generates teacher data by assigning labels, including at least one measurement target structure, to a region-divided image obtained from a sample image of the semiconductor, and generates the learning model from the region-divided image of the sample image and the teacher data based on a network structure composed of multiple layers. The learning model includes parameters for inferring the teacher data from the sample image.
- Brief description of the drawings:
- FIG. 1 is a diagram showing an example of the functional configuration from teacher creation to overlay measurement according to Example 1.
- A diagram showing an example of a sample image.
- A diagram showing an example of a region-divided image.
- A diagram showing a configuration example of the user interface, provided by the teacher creation unit 1, for creating teacher data 14 from a sample image 13.
- A diagram showing an example of the deep neural network structure 179 in the learning model 11.
- A diagram supplementing the geometric relationship between the neural network structure 179 and an image.
- A diagram showing an image 50, which is an example of the input image 12.
- A diagram showing an example of the output of the grouping unit 4.
- A flowchart explaining the details of the grouping process executed by the grouping unit 4.
- A table showing an example of the labels of the first and second measurement targets of overlay measurement.
- A flowchart explaining the details of the overlay measurement process executed by the overlay measurement unit 5.
- A diagram showing a configuration example of the template data 85.
- A table showing another example of the labels of the first and second measurement targets of overlay measurement.
- A diagram showing an example of the functional configuration from teacher creation to overlay measurement according to Example 2.
- A diagram showing an example of a set of sample images 30a and 30b and region-divided images (teacher data to which labels are assigned) 40a and 40b.
- A diagram showing an example of the functional configuration from teacher creation to overlay measurement according to Example 3.
- A diagram showing a structural example of the sample image 213.
- A diagram showing a structural example of the teacher data 214.
- A flowchart explaining the teacher creation process by the teacher creation unit 201.
- A diagram explaining an example of the correction by statistics in step S204.
- A diagram explaining another example of the correction by statistical processing in step S204.
- The present embodiment and each example relate to a measurement system that performs image measurement of a semiconductor having a predetermined structure (for example, a multilayer structure), and more specifically to a measurement system that performs overlay measurement, that is, measurement of the amount of displacement between layers when the semiconductor has a multilayer structure.
- the technique according to the present disclosure is not limited to overlay measurement, and can be widely applied to image measurement in general.
- the embodiment of the present disclosure may be implemented by software running on a general-purpose computer, or may be implemented by dedicated hardware or a combination of software and hardware.
- In the present disclosure, some information is described in "table" format, but this information need not be represented by a table data structure; it may be expressed by a list, a DB, a queue, or another data structure. Therefore, "table", "list", "DB", "queue", and the like may be referred to simply as "information" to indicate that they do not depend on the data structure.
- Embodiment: The present embodiment (and Examples 1 to 6) relates to, for example, a measurement system that performs image measurement of a semiconductor including a predetermined structure (for example, a multilayer structure).
- The measurement system refers to a learning model generated from a sample image of the semiconductor and teacher data generated from that sample image, generates a region-divided image from an input image (the measurement target) of a semiconductor having the predetermined structure, and performs image measurement using the region-divided image.
- The teacher data is an image in which a label, including the structure of the semiconductor in the sample image, is assigned to each pixel of the image.
- the training model includes parameters for inferring the teacher data from the sample image.
- FIG. 31 is a diagram showing a schematic configuration example of the measurement system 310 according to the present embodiment (common to each embodiment).
- The measurement system 310 corresponds to, for example, an overlay measurement system that executes overlay measurement, a dimension measurement system that measures dimensions such as contour extraction and hole shape in a semiconductor image, a defect pattern detection system that detects defect patterns, or a pattern matching system that searches for a matching position between an inferred design drawing and an actual design drawing.
- The measurement system 310 includes, for example, a main computer 191 including a main processor 190; an input/output device 192 that inputs instructions and data to the main computer 191 and outputs calculation results; an electron microscope that supplies images to be measured, or a server computer that accumulates electron microscope images, 193 (hereinafter, "electron microscope or the like 193"); a first sub-computer 191a including a first subprocessor 190a; and a second sub-computer 191b including a second subprocessor 190b.
- Each component is connected by a network (eg, LAN).
- In FIG. 31, two sub-computers 191a and 191b are provided, but all operations may instead be executed in the main computer 191, or one or more sub-computers may be provided to assist the main computer 191.
- The main computer 191 executes the teacher creation process and the learning model creation process (the processes corresponding to the teacher creation unit and the learning unit in each figure) of FIGS. 1, 14, 16, 22, 26, and 30 described later. The inference process and the measurement process (the processes corresponding to the region division unit, the grouping unit, and the measurement unit (overlay measurement unit) in each figure) may be processed by the main computer 191 alone or distributed across the first sub-computer 191a and the second sub-computer 191b.
- the first sub-computer 191a and the second sub-computer 191b can be configured so that only the inference processing and the measurement processing are executed, and the teacher creation processing and the learning model creation processing are not executed. Further, when a plurality of sub-computers are installed (for example, the first sub-computer 191a and the second sub-computer 191b), the inference processing and the measurement processing may be distributed among the sub-computers.
- the electron microscope or the like 193 acquires (images) a semiconductor pattern image formed on the wafer, and provides it to the main computer 191 and the sub computers 191a and 191b.
- When the electron microscope or the like 193 is a server computer, the server computer stores the semiconductor pattern images captured by the electron microscope in a storage device (for example, a hard disk drive (HDD)) and, in response to an instruction from the main computer 191, provides the corresponding image to the main computer 191 and the like.
- In Example 6, it is clarified that the technique of the present disclosure can be applied to measurement processing in general.
- FIG. 1 is a diagram showing an example of a functional configuration from teacher creation to overlay measurement according to the first embodiment.
- The functions of the teacher creation unit 1 and the learning unit 2 are realized by the main processor 190 of the main computer 191 reading the corresponding processing programs from a storage unit (not shown). The functions of the region division unit 3, the grouping unit 4, and the overlay measurement unit 5 are realized by the main processor 190 of the main computer 191, or the subprocessors 190a and 190b of the sub-computers 191a and 191b, reading the corresponding programs from storage units (not shown).
- the sample image 13 is a sample of an image to be measured by overlay measurement collected in advance.
- the teacher data 14 is prepared for each of the sample images 13 as a region-divided image in which a label including a structure in a semiconductor to be measured for overlay measurement is assigned to each pixel in the image.
- the teacher creation unit 1 creates the teacher data 14 from the sample image 13 and also provides a user interface for creating the teacher data 14.
- The learning model 11 consists of parameters, such as coefficients, of a machine learning model for obtaining a region-divided image from an image (for example, a sample image).
- the learning unit 2 calculates a learning model 11 that infers a region-divided image as close as possible to the teacher data 14 when the sample image 13 is input.
- the input image 12 is an image to be measured at the time of overlay measurement.
- the region division unit 3 infers the region division image from the input image 12 with reference to the learning model 11.
- the grouping unit 4 groups the measurement targets of overlay measurement in the region-divided image in units of small regions.
- the overlay measurement unit 5 performs overlay measurement from the position of a small area grouped by the grouping unit 4.
- the above functions of the teacher creation unit 1, the learning unit 2, the area division unit 3, the grouping unit 4, and the overlay measurement unit 5 can be realized by signal processing on an arbitrary computer.
- the sample image 13 is an image captured before the overlay measurement is performed, and is an image of a semiconductor sample to be measured or a sample whose appearance is close to that of the semiconductor sample to be measured.
- The sample images 13 can be collected by the electron microscope on which overlay measurement is operated, or by an electron microscope with similar image quality.
- FIG. 2 is a diagram showing an example of a sample image.
- Image 30 in FIG. 2 shows, for example, a part of a sample image.
- the image 30 includes the structure in the semiconductor to be measured by the overlay.
- the sample image is composed of one or more images similar to the image 30.
- the teacher data 14 is composed of region-divided images obtained from each of the images 30 in the sample image 13.
- FIG. 3 is a diagram showing an example of a region-divided image.
- In the region-divided image 40, each pixel of the image 30 is assigned label 41, label 42, or label 43, indicating the first measurement target, the second measurement target, or the background, respectively.
- The first and second measurement targets corresponding to labels 41 and 42 are overlay measurement targets and correspond to, for example, vias, trenches, and other structures in the semiconductor. Which structures in the semiconductor to use is determined in advance according to the overlay measurement operation.
- Additional labels may be assigned in the region-divided image 40. For example, in FIG. 3, a label 49 corresponding to an invalid region excluded by the learning unit 2 is assigned in the region-divided image 40.
- FIG. 4 is a diagram showing a configuration example of a user interface for creating teacher data 14 from the sample image 13 provided by the teacher creation unit 1.
- the user interface includes, for example, a main screen 90, an input screen 91, an input selection area 92, and an input pen 93.
- The operator selects an item from the radio buttons of the input selection area 92 and operates the input pen 93 on the image 30 (for example, fills the region to be labeled).
- The selected label is assigned to the area traced by the input pen.
- Labels 41, 42, and 43 are selected according to the first measurement target, second measurement target, and background radio button items, respectively.
- A predetermined color or gradation is assigned to the portion input by the input pen 93, according to the label selected in the input selection area 92.
- The input selection area 92 may include items, such as the invalid region 49, for which an additional label can be selected.
- The user interface of the input selection area 92 is an example; the character strings attached to the radio buttons may be changed, or a user interface other than radio buttons may be provided.
- The selection area 94 shows an example in which labels 41, 42, and 43 have been assigned to part of the image 30. The region-divided image 40 is created by assigning labels in the same way over the entire area of the image 30.
- In this way, the operator can easily create the teacher data 14 from the sample image 13 without the parameter adjustment that requires know-how and without reference to design drawings.
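In essence, the teacher data described above is a per-pixel label map the same size as the sample image. The following is a minimal sketch of that representation; the numeric label codes and the rectangular fill operation are illustrative assumptions (the actual tool fills arbitrary pen strokes):

```python
# Sketch: teacher data as a per-pixel label map (label codes are assumed).
LABEL_BG = 0          # background (label 43 in the text)
LABEL_TARGET_1 = 1    # first measurement target (label 41)
LABEL_TARGET_2 = 2    # second measurement target (label 42)
LABEL_INVALID = 255   # invalid region excluded from learning (label 49)

def new_label_map(height, width):
    """Create a label map initialized to background."""
    return [[LABEL_BG] * width for _ in range(height)]

def fill_rect(label_map, top, left, bottom, right, label):
    """Assign `label` to a rectangular pen stroke (illustrative only)."""
    for y in range(top, bottom):
        for x in range(left, right):
            label_map[y][x] = label

teacher = new_label_map(8, 8)
fill_rect(teacher, 1, 1, 4, 4, LABEL_TARGET_1)
fill_rect(teacher, 5, 5, 7, 7, LABEL_TARGET_2)
```

One such label map would be stored per sample image 13 to form the teacher data 14.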
- FIG. 5 is a diagram showing an example of a deep neural network structure 179 in the learning model 11.
- the deep neural network structure 179 can be composed of, for example, an input layer 170, an output layer 171 and a plurality of intermediate layers 172, 173, and 174.
- the image 30 is stored in the input layer 170.
- Data in the layers are aggregated from the input layer 170 to the intermediate layer 172, and from the intermediate layer 172 to the intermediate layer 173, by convolution operations with predetermined coefficient filters or by image reduction.
- Conversely, data in the layers are expanded from the intermediate layer 173 to the intermediate layer 174, and from the intermediate layer 174 to the output layer 171, by convolution operations with predetermined coefficient filters or by image enlargement.
- Such a network structure is generally called a convolutional neural network.
- the data in the output layer (final layer) 171 indicates the likelihood of each label in the region divided image 40.
- the area-divided image 40 can be obtained by assigning the label having the maximum likelihood to each pixel.
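The final step above, assigning each pixel the label with the maximum likelihood, can be sketched as follows. The array layout (one list of per-label scores per pixel) is an assumption for illustration; a real output layer would hold one likelihood map per label:

```python
def labels_from_likelihood(likelihood):
    """likelihood[y][x] is a list of per-label scores for that pixel;
    return the region-divided label map by taking the per-pixel arg-max."""
    return [
        [max(range(len(scores)), key=lambda k: scores[k]) for scores in row]
        for row in likelihood
    ]

# 1x2 image, 3 labels: pixel 0 favors label 2, pixel 1 favors label 0.
out = labels_from_likelihood([[[0.1, 0.2, 0.7], [0.8, 0.1, 0.1]]])
# out == [[2, 0]]
```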
- the learning model 11 corresponds to the coefficient of the filter in the intermediate layer.
- The deep neural network structure 179 is one example of the network structure in the learning model 11, and the number of intermediate layers is not limited to the three shown in FIG. 5. Additional structures, such as a bypass connection between the intermediate layers 172 and 174, or additional operations besides the filter operations, may also be adopted. When such additional structures or operations introduce additional parameters into the network structure, those parameters are also added to the learning model 11.
- FIG. 6 is a diagram for supplementarily explaining the geometrical relationship between the neural network structure 179 and the image.
- Consider the pixel 175 in the image 30 and the pixel 177 at the same coordinates in the region-divided image 40. In determining the label of pixel 177, a neighborhood of pixel 175 of a predetermined range called the receptive field 176 is involved (the receptive field is the image range involved in determining the label; its size is determined by the neural network structure 179).
- From this neighborhood, the data of the intermediate layers 172, 173, and 174 are obtained by convolution operations, image enlargement, and image reduction, and the label of pixel 177 is determined in the output layer 171.
- In practice, the labels of all pixels in the region-divided image 40 can be obtained more efficiently by parallel computation over the pixels of the image 30.
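Since the receptive field size is fixed by the network structure, it can be computed from layer hyperparameters. For a plain stack of convolution and resampling layers the standard recurrence is sketched below; the layer list is a hypothetical example, not the structure 179 itself:

```python
def receptive_field(layers):
    """Receptive field of a stack of (kernel, stride) layers.
    Standard recurrence: the field grows by (kernel - 1) times the
    product of the strides of all preceding layers."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Hypothetical stack: two 3x3 convs, a stride-2 reduction, one more 3x3 conv.
rf = receptive_field([(3, 1), (3, 1), (2, 2), (3, 1)])
# rf == 10: a 10x10 neighborhood of pixel 175 influences the label of pixel 177
```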
- The learning unit 2 refers to the sample image 13 and the teacher data 14 and calculates the parameters of the learning model 11 for inferring the region-divided image 40 when the image 30 is given. Specifically, the learning unit 2 infers a region-divided image 40 from an image 30 in the sample image 13, compares it with the region-divided image 40 corresponding to that image 30 in the teacher data 14, and calculates the learning model 11 that minimizes the difference between the two region-divided images 40. For example, the difference between the two region-divided images 40 is defined as the number of pixels, among all pixels, whose labels differ; the partial differential coefficient of this count with respect to each element of the learning model 11 (neural network structure 179) is obtained; and each element of the learning model 11 is updated by adding the partial differential coefficient multiplied by a negative predetermined coefficient (updated little by little so that the count decreases). This is performed sequentially over the images 30 in the sample image 13, although the method is not limited to this.
- When the corresponding region-divided image 40 in the teacher data 14 includes the label 49 (invalid region), that portion is excluded from the count of pixels with differing labels.
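The difference measure just described, a count of label disagreements with invalid-region pixels excluded, can be sketched as follows (the numeric code 255 for the invalid-region label 49 is an assumption):

```python
LABEL_INVALID = 255  # assumed numeric code for the invalid-region label 49

def label_disagreement(predicted, teacher):
    """Count pixels where predicted and teacher labels differ,
    skipping pixels the teacher data marks as invalid."""
    count = 0
    for pred_row, true_row in zip(predicted, teacher):
        for pred, true in zip(pred_row, true_row):
            if true != LABEL_INVALID and pred != true:
                count += 1
    return count

pred = [[0, 1], [2, 2]]
true = [[0, 1], [255, 1]]  # one invalid pixel, one genuine mismatch
diff = label_disagreement(pred, true)
# diff == 1: the invalid pixel does not count against the model
```

This count itself is not differentiable; in practice a smooth surrogate (for example per-pixel cross-entropy with the invalid pixels masked out) would be minimized instead.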
- When generating the learning model 11, the learning unit 2 may add composite images obtained by applying compositing processes to the images 30 in the sample image 13, such as adding random noise, enlarging or reducing, or applying geometric transformations such as horizontal or vertical flipping. By adding composite images, the learning unit 2 can obtain the learning model 11 from a larger number of images 30.
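The flipping and noise compositing mentioned above can be sketched as follows; the noise amplitude is an assumption, and note that geometric transforms must be applied identically to the image 30 and its teacher label map so the pair stays consistent:

```python
import random

def flip_horizontal(img):
    """Mirror each row (apply to image and teacher data alike)."""
    return [row[::-1] for row in img]

def flip_vertical(img):
    """Mirror the row order (apply to image and teacher data alike)."""
    return img[::-1]

def add_noise(img, amplitude=5, seed=0):
    """Add uniform random noise to pixel values, clamped to 8-bit range
    (applied to the image only, not to the label map)."""
    rng = random.Random(seed)
    return [
        [max(0, min(255, v + rng.randint(-amplitude, amplitude))) for v in row]
        for row in img
    ]

img = [[10, 20], [30, 40]]
augmented = [flip_horizontal(img), flip_vertical(img), add_noise(img)]
```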
- the input image 12 is an image taken at the time of overlay measurement.
- FIG. 7 is a diagram showing an image 50 which is an example of the input image 12.
- The region 59 in the image 50 of FIG. 7 is a region of the same size as the image 30 in the sample image 13 and includes the semiconductor structure to be overlay-measured. Outside the region 59 of the image 50, the structure to be overlay-measured appears periodically, as it does within the region 59.
- the region division unit 3 infers the region division image 60 in FIG. 7 from the input image 12 with reference to the learning model 11.
- the area 69 in the area-divided image 60 is an area having the same range as the area 59.
- here, inference means inputting data into the network structure of the learning model 11, performing calculations such as the convolution operations of each layer of the network structure (input layer 170, output layer 171, and intermediate layers 172, 173, and 174), and obtaining the calculation result.
- Each pixel of the region 69 in the region-divided image 60 is assigned one of the labels 41, 42, and 43 in the same manner as the region-divided image 40.
- the learning model 11 acquires the characteristic of inferring the region-divided image 40 corresponding to the image 30 in the process in which the learning unit 2 obtains the learning model 11. Therefore, an accurate label can be assigned to the area 69.
- the scale (criterion) of "similarity" here is that the images are similar in units of the receptive field 176 (a small area in the image 30).
- the structure in the semiconductor to be measured is periodic. Therefore, even if the images in the sample image 13 are smaller than the region-divided image 60, an image 30 satisfying the same conditions as the region-divided image 60 exists in the sample image 13, and an accurate label can be expected to be assigned.
- the grouping unit 4 groups the overlay measurement targets in the region-divided image 60 in small-region units by executing the process shown in the flowchart of FIG. 9. The details of the grouping process will be described later.
- the overlay measurement unit 5 performs overlay measurement from the grouped image 70 by executing the process shown in the flowchart of FIG. The details of the overlay measurement process will be described later.
- FIG. 9 is a flowchart for explaining the details of the grouping process executed by the grouping unit 4.
- the grouping process groups related small areas, according to the labels given to the predetermined area of the region-divided image 60, for each measurement target item specified in Table 80 of FIG. 10.
- the grouping unit 4 repeats the processes of steps S2 and S3 for each measurement target item specified in Table 80 of FIG. 10 (first measurement target and second measurement target).
- in step S2, the grouping unit 4 obtains a binary image in which the pixels of the target label in the region-divided image 60 are 1 and the other pixels are 0.
- the target labels are label 41 for the first measurement target and labels 41 and 42 for the second measurement target.
- the reason why a plurality of labels are specified for the second measurement target is that, in the image 50, the structure corresponding to the label 41 is on the front side of the structure corresponding to the label 42, so that in the region-divided image 60 a part of the label 42 is shielded by the label 41.
- in step S3, the grouping unit 4 groups the binary image obtained in step S2 into units of small areas.
- as a method of grouping in units of small areas, a method called labeling can be applied, in which pixels of value 1 in a binary image are grouped in units of connected areas.
- other methods capable of grouping in units of small areas can also be applied.
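As a minimal sketch of the labeling method just described (a standard connected-component pass, not the patent's own code; 4-connectivity is an assumption here):

```python
import numpy as np

def label_regions(binary):
    """Group value-1 pixels of a binary image into connected regions
    (4-connectivity), returning an id image (0 = background) and the count."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] == 1 and labels[sy, sx] == 0:
                next_id += 1                      # start a new small area
                labels[sy, sx] = next_id
                stack = [(sy, sx)]
                while stack:                      # flood fill the connected area
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and binary[ny, nx] == 1 and labels[ny, nx] == 0:
                            labels[ny, nx] = next_id
                            stack.append((ny, nx))
    return labels, next_id

b = np.array([[1, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1]])
lab, n = label_regions(b)
```

Library routines such as SciPy's `ndimage.label` provide the same operation.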
- FIG. 8 is a diagram showing an example of the output of the grouping unit 4.
- image 70 shows an example of the grouped image obtained by the grouping process (FIG. 9).
- the small regions 71a, 71b, 71c, and 71d are small regions corresponding to the first measurement target in Table 80.
- the small regions 72a, 72b, 72c, and 72d are small regions corresponding to the second measurement target in Table 80.
- the area 79 in the grouped image 70 is the same area as the area 59 (see FIG. 7). For example, in the case of a semiconductor pattern, a small region similar to the region 59 appears repeatedly outside the region 59 in the region divided image 60 (see FIG. 7).
- FIG. 11 is a flowchart for explaining the details of the overlay measurement process executed by the overlay measurement unit 5.
- FIG. 12 is a diagram showing a configuration example of template data 85.
- the template data 85 is composed of the X coordinate and the Y coordinate of each element from the first to the Nth element.
- the template data 85 is obtained from the X and Y coordinates of the centers of gravity of the small regions 71a, 71b, 71c, and 71d in a typical grouped image 70.
- the template data 85 may be obtained from a semiconductor design drawing or the like.
- in step S11, the center of gravity of all points in the template data 85 is aligned with the center of gravity of the small areas 71a and the like of the first measurement target, but the alignment is not limited to this method.
- Step S12 The overlay measurement unit 5 selects a small area 71a or the like corresponding to each element in the aligned template data 85.
- as a selection criterion, the small area 71a or the like whose center of gravity is closest to the element of the template data 85 can be chosen, but the selection is not limited to this.
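Steps S11 and S12 can be sketched together as follows (an illustrative sketch only; the function name is hypothetical, and the centroid-of-centroids alignment and nearest-centroid criterion are the ones named in the text):

```python
import numpy as np

def assign_regions(template_pts, region_centroids):
    """Align the template's center of gravity with that of the detected
    small-region centroids (step S11), then pick, for each template
    element, the nearest region centroid (step S12)."""
    t = np.asarray(template_pts, dtype=float)
    c = np.asarray(region_centroids, dtype=float)
    aligned = t - t.mean(axis=0) + c.mean(axis=0)   # translate the template
    picks = []
    for p in aligned:
        d = np.linalg.norm(c - p, axis=1)           # distance to every centroid
        picks.append(int(np.argmin(d)))
    return picks

template = [(0, 0), (10, 0), (0, 10), (10, 10)]
centroids = [(5, 5), (15, 5), (5, 15), (15, 15)]
picks = assign_regions(template, centroids)
```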
- Steps S13 to S18 The overlay measurement unit 5 repeatedly executes the processes of steps S14 to S17 for each small area selected in step S12.
- Position 1 is the representative position of the first measurement target.
- Position 1 is composed of two elements, X1 at the X coordinate and Y1 at the Y coordinate.
- the position 1 is calculated from the X coordinate and the Y coordinate of the position of the center of gravity of the small area 71a or the like.
- Step S15 The overlay measurement unit 5 selects, from the small areas of the second measurement target, the one whose overlay amount is to be measured against the small area 71a or the like.
- as this selection criterion, selecting the small area whose center of gravity is closest can be applied. In the case of FIG. 8, for example, the small area 72a is selected.
- Step S16 The overlay measurement unit 5 obtains the position 2 which is the representative position of the second measurement target small area (for example, the small area 72a) selected in step S15 by the same procedure as in step S14.
- Position 2 is composed of two elements, X2 at the X coordinate and Y2 at the Y coordinate.
- Step S17 The overlay measurement unit 5 calculates Dx and Dy, the displacement amounts in the X and Y coordinates, from positions 1 and 2 by the following Equations 1 and 2.
- Dx = X2 - X1 ... (Equation 1)
- Dy = Y2 - Y1 ... (Equation 2)
- Step S19 The overlay measurement unit 5 calculates the statistics of the displacement amounts Dx and Dy obtained based on Equations 1 and 2. The arithmetic mean can be applied when calculating the statistic, but it is not limited to this and may be the geometric mean or the median. The overlay measurement unit 5 uses the statistic of the displacement amounts obtained in step S19 as the overlay amount of the image 50.
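Steps S17 and S19 amount to subtracting representative positions pairwise and averaging; a minimal sketch (the function name is hypothetical, and the arithmetic mean is used as the statistic, per the text):

```python
import numpy as np

def overlay_amount(pos1_list, pos2_list):
    """For each pair of representative positions, compute the displacements
    Dx = X2 - X1 and Dy = Y2 - Y1 (Equations 1 and 2), then summarize
    them with an arithmetic mean (step S19)."""
    dx = [x2 - x1 for (x1, _), (x2, _) in zip(pos1_list, pos2_list)]
    dy = [y2 - y1 for (_, y1), (_, y2) in zip(pos1_list, pos2_list)]
    return float(np.mean(dx)), float(np.mean(dy))

p1 = [(10.0, 10.0), (30.0, 10.0), (10.0, 30.0)]  # positions 1 (first target)
p2 = [(12.0, 11.0), (32.0, 11.0), (12.0, 31.0)]  # positions 2 (second target)
ov = overlay_amount(p1, p2)
```

The median or geometric mean mentioned in the text would replace `np.mean` here.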
- a process is provided in which the learning unit 2 obtains the learning model 11 by using the teacher data 14 created from the sample image 13 in the teacher creating unit 1 in advance. Then, the area division unit 3 uses the area division image 60 obtained from the input image 12 with reference to the learning model 11, so that the overlay amount can be measured by the grouping unit 4 and the overlay measurement unit 5.
- since the intermediate processing data such as the region-divided image 60 and the grouped image 70 can be visualized, unlike Patent Document 3, the cause can be grasped by displaying the intermediate processing data on the screen when an unexpected overlay amount is measured.
- the region division unit 3 divides the input image 12 into regions with reference to the learning model 11, the above-mentioned first and second problems are solved.
- the region-divided image is data that can be visualized, it is easy for the operator to confirm it. Therefore, the third problem is also solved.
- the learning model 11 infers the region division image 40 in units of the receptive field 176.
- the input image 12 (image 60: see FIG. 7) generally reflects the structure in the semiconductor periodically. Therefore, the size of the images 30 in the sample image 13, to which the teacher data 14 is assigned by using the user interface of the main screen 90 in the teacher creation unit 1, can be smaller than the size of the image 60 in the input image 12. As a result, the man-hours required for the worker to assign the teacher data 14 can be reduced.
- the components can be changed.
- the learning model 11 in addition to the above-mentioned neural network structure, any machine learning model that infers the region-divided image 40 from the image 30 in units of the receptive field 176 can be applied.
- a linear discriminator that determines the label of pixel 177 from all the pixels of the receptive field 176 of the image 30 may be used.
- the coordinates X2 and Y2 of position 2 may be obtained from small areas of different labels.
- the overlay measurement unit 5 obtains the coordinate X2 from the small area combining the label 41 and the label 42, and the coordinate Y2 from a small area of the label 44 (a label assigned to a predetermined structure in the semiconductor, not shown in the example of the region-divided image 40 of FIG. 3). Since the vertical contour (contrast) of the structure in the semiconductor corresponding to the label 42 is unclear, an accurate Y2 may not be obtained from the small area combining the label 41 and the label 42.
- the accurate coordinate Y2 can be obtained by adding the label 44 (for example, the label 42b of FIG. 15 of the second embodiment), assigned to a structure whose vertical contrast (horizontal stripes) is clear. That is, when the vertical contrast of the structure in the semiconductor corresponding to the label 42 is unclear, the coordinate Y2 can be obtained accurately by changing the overlay measurement target to the newly assigned label 44.
- in this case, the teacher creation unit 1 adds the label 44 to the targets of label allocation, as compared with the case where Table 80 is referred to in step S16. Then, the region division unit 3 adds the label 44 to the inference targets, and the grouping unit 4 adds the label 44 to the grouping targets. Similarly, in step S14, the coordinates X1 and Y1 of position 1 may be obtained from small regions of different labels.
- step S11 is generally used in overlay measurement, but it may be omitted. In that case, in step S12, all of the small areas 71a and the like in the grouped image 70 are selected. Alternatively, in step S12, an additional selection criterion may be applied, such as excluding small areas 71a and the like that are as small as noise.
- FIG. 14 is a diagram showing a functional configuration example from teacher creation to overlay measurement according to the second embodiment.
- the sample image 113 is, for example, a set of images obtained by photographing the same portion of the semiconductor wafer a plurality of times under different imaging conditions.
- the imaging conditions include, but are not limited to, the acceleration voltage of the electron microscope, the imaging of the backscattered electron image and the secondary electron image, and the composition ratio when obtaining a composite image of both.
- FIG. 15 is a diagram showing an example of a set of sample images 30a and 30b and region-divided images (teacher data to which labels are assigned) 40a and 40b.
- the image 30a has a horizontally long structure in the region 31a, but the upper and lower contours (contrast in the vertical direction) are unclear.
- the image 30b includes an image of the region 31b at the same location as the region 31a. In the image 30b, the upper and lower contours (contrast in the vertical direction) have a clear structure.
- the teacher creation unit 1 allocates a label to a set of images in the sample image 113, creates teacher data 114, and provides a user interface for that purpose.
- the teacher data 40a and 40b in FIG. 15 are an example of the teacher data 114, and are a set of region-divided images in which the labels 41a, 42a, and 43a and the labels 42b and 42c are assigned to the images 30a and 30b, respectively.
- the learning unit 102 calculates the learning model 111 from the sample image 113 and the teacher data 114 (the image set 30a and 30b and the region-divided image set 40a and 40b).
- the learning model 111 includes a neural network structure for inferring the region-divided image 40a from the image 30a and a neural network structure for inferring the region-divided image 40b from the image 30b.
- These neural network structures may be two completely independent neural network structures, or may be a neural network structure that shares a part (common) of the intermediate layer 173 and the like.
- the input image 112 is a set of images taken under the same or similar shooting conditions as the sample image 113.
- the region division unit 103 outputs a set of region division images having the same configuration as the pair 40a and 40b of the region division images from the input image 112.
- the grouping unit 104 executes an operation for obtaining, from the set of region-divided images output by the region division unit 103, the small areas to which three types of labels are allocated: the label 41a, the labels 41a and 42a, and the label 42b.
- the overlay measurement unit 105 obtains the position 1 in step S14 shown in FIG. 11 from the label 41, obtains the X coordinate (X2) of the position 2 in step S16 from the labels 41a and 42a, and obtains the Y coordinate (Y2) from the label 42b.
- the overlay amount is measured as required.
- Example 3 discloses a technique for reducing the worker's label allocation work for teacher data by narrowing down the target of label allocation to a part of the sample image.
- FIG. 16 is a diagram showing a functional configuration example from teacher creation to overlay measurement according to the third embodiment. First, the outline of the functional configuration example according to the third embodiment will be described.
- the teacher creation unit 201 assigns labels to the image group 231, which is a subset of the sample image 213 (that is, a label is assigned to each image 30 in the image group 231), and obtains an intermediate learning model from the region-divided image group 241.
- the term "intermediate learning model" is used because it is a learning model for generating the region-divided image group 241 from the image group 231 (a subset of the sample image 213), and not the final learning model for generating a region-divided image from the input image 12.
- the teacher creation unit 201 subsequently infers the region-divided image group 243 from the remaining sample images 233 with reference to the intermediate learning model, and further executes a process of correcting the region-divided image group 243 or provides a user interface for correction.
- FIG. 17 is a diagram showing a configuration example of the sample image 213.
- FIG. 18 is a diagram showing a configuration example of teacher data 214.
- FIG. 19 is a flowchart for explaining the teacher creation process by the teacher creation unit 201.
- Step S201 As shown in FIG. 17, the sample image 213 is previously divided into a subset of the image group 231 and the image group 233.
- in step S201, the teacher creation unit 201 provides the operator with a user interface (main screen 90: see FIG. 4) for creating the region-divided image group 241 shown in FIG. 18, and assigns labels to the image group 231 in response to the operator's input.
- Step S202 The teacher creation unit 201 obtains an intermediate learning model (a learning model for generating the region-divided image group 241 from the image group 231) from the image group 231 and the region-divided image group 241 according to the same procedure as the learning unit 2.
- Step S203 Similar to the region division unit 3, the teacher creation unit 201 infers the region-divided image group 243 from the image group 233 with reference to the intermediate learning model obtained in step S202 (to be exact, the region-divided image group 243 is obtained by inferring the region-divided image 40 from each of the images 30 in the image group 233).
- Step S204 In most cases, the region-divided image group 243 contains erroneous labels, because it is difficult for the image group 231, which is a subset of the sample image, to completely cover the properties of all the images included in the image group 233. Therefore, the teacher creation unit 201 corrects the labels in the region-divided image group 243 by statistical processing. As a correction by statistical processing, for example, the mode of the labels can be taken over a subset of the region-divided image group 243 in which the same imaged portion in the semiconductor chip is repeatedly photographed.
- the images 32a, 32b to 32m (m is an arbitrary number) in the image group 232, a subset of the image group 233 shown in FIG. 17, are images obtained by repeatedly photographing the same portion of the semiconductor wafer.
- all the region-divided images 42a, 42b to 42m in the region-divided image group 242 inferred from the image group 232 are assigned the mode label by the correction that takes the mode.
- by the correction that takes the mode, it is possible to correct label changes of the region-divided image 40 caused by the image quality of the image 30 and superimposed noise when the same portion is repeatedly photographed.
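The per-pixel mode correction can be sketched as follows (illustrative only; `mode_correct` is a hypothetical name, and ties are broken toward the smaller label value here):

```python
import numpy as np

def mode_correct(label_images):
    """Per-pixel mode over region-divided images of the same imaged spot:
    each pixel is replaced by the most frequent label across the stack."""
    stack = np.stack(label_images)                 # shape (m, h, w)
    m, h, w = stack.shape
    out = np.empty((h, w), dtype=stack.dtype)
    for y in range(h):
        for x in range(w):
            vals, counts = np.unique(stack[:, y, x], return_counts=True)
            out[y, x] = vals[np.argmax(counts)]    # mode (smallest label on ties)
    return out

a = np.array([[1, 2], [3, 3]])
b = np.array([[1, 2], [9, 3]])   # one noisy pixel
c = np.array([[1, 7], [3, 3]])   # another noisy pixel
fixed = mode_correct([a, b, c])
```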
- the alignment may be performed in advance.
- the alignment can be performed by finding the amount of displacement of each of the images 32b to 32m with respect to the image 32a and translating each image in parallel by its displacement amount.
- the amount of displacement between the images can be obtained under the condition that the sum of the differences in brightness of each pixel of the images 32b and 32a is minimized when the image 32b is translated according to the displacement amount.
- the displacement amount may also be obtained under the condition that the number of pixels whose labels do not match between the region-divided images 42b and 42a is minimized when the region-divided image 42b is translated according to the displacement amount.
- the target for which the amount of displacement between images is obtained may be other than the first image 32a.
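The brightness-difference minimization just described can be sketched as a brute-force integer shift search (illustrative only; the mean absolute difference over the overlapping area is used here in place of the plain sum, and `max_shift` is an assumed search bound):

```python
import numpy as np

def best_shift(ref, img, max_shift=2):
    """Return the integer translation (dy, dx) to apply to `img` so it best
    matches `ref`, minimizing mean absolute brightness difference over
    the overlapping area."""
    best, best_err = (0, 0), float("inf")
    h, w = ref.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping windows of ref and the shifted img
            ys, xs = slice(max(0, dy), min(h, h + dy)), slice(max(0, dx), min(w, w + dx))
            ys2, xs2 = slice(max(0, -dy), min(h, h - dy)), slice(max(0, -dx), min(w, w - dx))
            err = np.abs(ref[ys, xs] - img[ys2, xs2]).mean()
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

ref = np.zeros((6, 6)); ref[2, 2] = 1.0
mov = np.zeros((6, 6)); mov[3, 2] = 1.0   # same pattern, one row lower
shift = best_shift(ref, mov)
```

Minimizing mismatched labels between region-divided images, as the text also allows, would replace the error term with a label-inequality count.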
- Step S205 The teacher creation unit 201 provides a user interface for the operator to confirm whether the labels assigned in the region-divided image group 243 are accurate.
- the user interface displays each region-divided image 40 constituting the region-divided image group 243.
- in the user interface of step S205, the images 30 in the image group 233 may be displayed side by side, or a blended image in which the region-divided image 40 is transparently overlaid on the image 30 may additionally be displayed.
- the user interface provided in step S205 may be provided with a function of correcting the label of the region-divided image 40 in the region-divided image group 243.
- the label correction function displays the region-divided image 40 or the blended image in the region-divided image group 243 on the input screen 91 (see FIG. 4), and allows the displayed label to be modified by operating the input pen 93 on the input screen 91.
- Step S206 The teacher creation unit 201 combines the area-divided image group 241 and the area-divided image group 243 and outputs the teacher data 214.
- as described above, the target of the label allocation work using the user interface (main screen 90) provided by the teacher creation unit 201 is narrowed down to the image group 231, which is a subset, making it possible to acquire the teacher data 214 without assigning labels to all the images 30 in the sample image 213.
- the inference result of the learning model 11 becomes more accurate as the number of samples in the sample image 13 increases.
- the above trade-off can be relaxed because the amount of label allocation work is reduced while the number of samples is kept large.
- the reproducibility of the overlay measurement is an index of the degree of variation in the overlay amount shown in the above equations 1 and 2 when the same portion of the semiconductor wafer is repeatedly photographed. Generally, 3 ⁇ , which is three times the standard deviation ⁇ , is used as an index.
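The 3σ index mentioned here is simply three times the standard deviation of repeated overlay measurements; a trivial sketch (function name hypothetical):

```python
import numpy as np

def reproducibility_3sigma(overlay_amounts):
    """3-sigma reproducibility: three times the standard deviation of the
    overlay amounts measured from repeated shots of the same spot."""
    return 3.0 * float(np.std(np.asarray(overlay_amounts)))

dx_repeats = [2.0, 2.1, 1.9, 2.0, 2.0]   # Dx from five repeated shots
r = reproducibility_3sigma(dx_repeats)
```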
- the reproducibility of the overlay measurement improves because, when the correction by statistical processing in step S204 is performed, the labels of the region-divided image group 242 and the like become identical for the image group 232 and the like in which the same portion is repeatedly photographed.
- the region-divided image group 241 created by using the user interface provided in step S201 may also be corrected by the statistical processing of step S204.
- step S204 or step S205 may be omitted. This is because step S204 alone has the effect of correcting the labels of the region-divided image group 243, and step S205 alone has the effect of confirming and correcting them.
- a statistic other than the mode may be applied, or additional processing may be performed other than taking the mode.
- for example, when the labels of a pixel vary widely across the repeated images, the label of the invalid area 49 may be assigned to that pixel.
- a plurality of similar partial regions in the image group 232 may be extracted.
- the partial region is a partial region in the image 30 such as the region 94 in FIG.
- a plurality of similar partial regions, i.e., subregions having the same dimensions, are extracted on the condition of high similarity from a single image such as the image 32a in the image group 232, or from a plurality of images such as the images 32a to 32m.
- the similarity can be judged by the correlation value of the pixel brightness of the two images 30 in the partial region, or by the ratio of pixels whose labels match between the two region-divided images 40 in the partial region. Since the semiconductor image targeted for overlay measurement is a repeating pattern, it is easy to find a plurality of similar partial regions.
- the correction by the statistical processing in step S204 can be a correction in units of small areas for the label in the area divided image group 243.
- the correction by the statistic will be described with reference to FIG. 20.
- FIG. 20 is a diagram for explaining an example of correction by the statistic in step S204.
- FIG. 20 shows an example of the grouped image 270 before correction, obtained by the grouping unit 4 from one of the region-divided images 40 in the region-divided image group 243, and an example of the grouped image 270' after correction.
- the small area 71h is translated into the small area 71h' by the correction based on the statistic.
- the amount of translation of the small area 71h can be set so that the displacement amount of the centers of gravity of the small areas 71h and 72h (the displacement amount obtained by the method of step S17) becomes consistent with the average value of the displacement amounts of the centers of gravity of the other small area pairs 71i and 72i, 71j and 72j, and 71k and 72k.
- all small areas such as the small area 71h may be uniformly translated so that the displacement amount statistic obtained in step S19 becomes the target value.
- the average value obtained when the statistic of the displacement amount obtained in step S19 is computed over each element in the subset can be used as the target value.
- the semiconductor wafer from which each of the images in the sample image 213 was photographed is manufactured with an artificial overlay amount (manufactured by shifting a predetermined layer in the multilayer semiconductor by the artificial overlay amount).
- the subset may be obtained from the images in the sample image 213 having the same grouping and artificial overlay amount as the image 270.
- the artificial overlay amount may be referred to as a design value of the overlay amount.
- FIG. 21 is a diagram illustrating another example of correction by statistical processing in step S204. FIG. 21 illustrates a reference (a reference for determining the movement amount) used when generating the corrected grouped image 270' from the uncorrected grouped image 270.
- the straight line 297 connects the ideal values of the V-axis 296 with respect to the U-axis 295 (for example, a straight line with an inclination of 1 and an intercept of 0, or a straight line whose inclination is 1 or a predetermined value and which passes through the center of gravity of the points 292a, 293b, and 293c).
- the point 294a, showing the X component of the overlay amount of the region-divided image 270, can be moved down to the point 292a on the straight line 297 to serve as the X component of the target value. That is, the vector 293a can be used as the translation amount Xb.
- when the correction by the statistical processing in step S204 is performed in units of small areas, some of the above-mentioned corrections may be combined. Further, in the correction in units of small areas, in addition to translation, a geometric deformation that changes the center of gravity of the small area 71h may be performed. For example, as a geometric deformation, the right half of the small region 71h may be removed, whereby the center of gravity of the small area 71h is moved to the left.
- the teacher creation unit 201 may display, on the main screen 90 or the like, the grouped images 270 and 270' before and after the correction, together with the corresponding images 30 and region-divided images 40, so that the correction by the statistical processing in step S204 can be confirmed.
- the intermediate learning model shown in step S202 may be obtained in several stages.
- for example, a first intermediate learning model is obtained from the image group 231 and the region-divided image group 241; the region-divided image group 242 is then obtained by inference on the image group 232 with reference to the first intermediate learning model; and a second intermediate learning model is obtained from the image group 231 and the region-divided image group 241 together with the image group 232 and the region-divided image group 242. Then, labels may be assigned to all the region-divided images 40 in the teacher data 214 by inferring the images in the image group 233 other than the image group 232 with reference to the second intermediate learning model.
- Example 4 will be described with reference to FIGS. 22 to 25.
- FIG. 22 is a diagram showing an example of functional configuration from teacher creation to overlay measurement in Example 4.
- the teacher creation unit 301 has a function to create, from each pixel in the region-divided image 40 created on the main screen 90 (see FIG. 4), a position information image that holds the displacement amount to the representative position of the small area 71a or the like (see FIG. 8) in the region-divided image 40. The teacher creation unit 301 then creates the teacher data 314 by adding the position information image to the teacher data 14. From the images 30 in the sample image 13, the learning unit 302 calculates a learning model 311 capable of inferring the region-divided image 40 and the position information image in the teacher data 314 as accurately as possible.
- the area division unit 303 infers the area division image 60 and the position information image from the input image 12 with reference to the learning model 311.
- the grouping unit 304 generates and outputs the grouping image 70 from the area division image 60 in the same procedure as the grouping unit 4.
- the overlay measurement unit 305 performs overlay measurement by using the position information of the position information image included in the small area 71a or the like in the grouping image output by the grouping unit 304.
- the teacher creation unit 301 creates the teacher data 14 from the sample image 13 in response to the operator's operation of the main screen 90, then adds the position information image 340 described below to the region-divided image 40 in the teacher data, and outputs the teacher data 314.
- the position information image 340 will be described with reference to FIG. 23.
- FIG. 23 is a diagram showing an example of the position information image 340.
- the image 370 is a grouping image obtained by the grouping unit 4 from the region division image 40 in the sample image 13.
- the grouped image 370 is obtained from the label of the first measurement target included in Table 80 (see FIG. 10).
- the position information image 340 is an image in which position information is added to each pixel in the small areas 371m, 371n, 371o, and 371p corresponding to the small areas 71m, 71n, 71o, and 71p in the grouped image 370.
- the range of the small area 371m is the same range as the small area 71m, or a range in the vicinity of the small area 71m obtained by applying dilation processing to the small area 71m or by taking a circumscribed rectangle of the small area 71m.
- the range of other small areas such as 371n is likewise the same as, or close to, the small area 71n and the like. Assuming that the coordinates of a certain pixel 342m in the small area 371m are (Xp, Yp) and the coordinates of the representative position 341m are (Xc, Yc), the displacement amount (Rx, Ry) obtained according to the following Equations 3 and 4 is assigned to the pixel 342m.
- this displacement amount (Rx, Ry) corresponds to the displacement from the pixel 342m to the representative position 341m.
- the representative position 341m is the coordinate of the center of gravity of the small area 371m.
- a displacement amount up to the representative position 341m is assigned to each pixel in the small area 371m as in the pixel 342m.
- Rx = Xc - Xp ... (Equation 3)
- Ry = Yc - Yp ... (Equation 4)
- the displacement amount from each pixel to the representative positions 341n, 341o, 341p of each small area is assigned to each pixel in the small area 371n, 371o, 371p in the position information image 340.
- the attributes of the invalid region excluded by the learning unit 302 in the calculation for obtaining the learning model 311 are assigned to the regions other than the small regions 371m, 371n, 371o, and 371p.
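Constructing such a position information image can be sketched as follows (illustrative only; the centroid is used as the representative position per the text, and NaN stands in for the invalid-region attribute mentioned above):

```python
import numpy as np

def position_info_image(labels):
    """For every pixel of each small region, store the displacement
    (Rx, Ry) = (Xc - Xp, Yc - Yp) to that region's centroid
    (Equations 3 and 4); background pixels keep an invalid marker (NaN)."""
    h, w = labels.shape
    info = np.full((h, w, 2), np.nan)          # invalid region by default
    for region_id in np.unique(labels):
        if region_id == 0:                     # 0 = background
            continue
        ys, xs = np.nonzero(labels == region_id)
        xc, yc = xs.mean(), ys.mean()          # centroid (Xc, Yc)
        info[ys, xs, 0] = xc - xs              # Rx per pixel
        info[ys, xs, 1] = yc - ys              # Ry per pixel
    return info

lab = np.zeros((4, 4), dtype=int)
lab[1:3, 1:3] = 1                              # one 2x2 small region
pi = position_info_image(lab)
```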
- the teacher creation unit 301 also gives the same position information image as the position information image 340 to the label of the second measurement target in Table 80.
- the learning unit 302 calculates a learning model 311 capable of inferring the region division image 40 and the position information image 340 in the teacher data 314 from the image 30 in the sample image 13 as accurately as possible.
- Independent neural network structures 179 can be assigned to each of the learning model that infers the region-divided image 40 from the image 30 and the learning model that infers the position information image 340 from the image 30.
- alternatively, the two learning models may share all or part of the layers of the neural network structure 179. For example, when the image 30 is input to the input layer 170, the region-divided image 40 may be output from a part of the output layer 171, and the position information image 340 may be output from another part of the output layer 171 or from another layer such as the intermediate layer 174.
- The learning unit 302 compares the region-divided image 40 and the position information image 340 inferred from the image 30 in the sample images 13 with the corresponding region-divided image 40 and position information image 340 in the teacher data 314, and optimizes the parameters in the learning model 311 so that the differences become small.
- The parameters in the learning model 311 are initialized with random numbers.
- The error between the two region-divided images 40 (error 1) is the number of pixels whose labels do not match.
- The error between the two position information images 340 (error 2) is the sum over all pixels of the absolute values of the differences in displacement.
- To each parameter in the learning model 311, the partial derivative of error 1 and error 2 with respect to that parameter, multiplied by a negative predetermined coefficient, is added in sequence. Optimization is achieved by repeating this process over the images 30 in the sample images 13, although the method is not limited to this.
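A minimal sketch of the two error terms and the parameter update described above (hypothetical helper names; a real implementation would obtain the gradients from the network, e.g. by backpropagation):

```python
import numpy as np

def error1(pred_labels, true_labels):
    """Error 1: number of pixels whose labels do not match."""
    return int(np.sum(pred_labels != true_labels))

def error2(pred_disp, true_disp, valid):
    """Error 2: sum of absolute displacement differences over valid pixels."""
    return float(np.sum(np.abs(pred_disp[valid] - true_disp[valid])))

def sgd_step(params, grads, lr=0.01):
    """Add to each parameter its partial derivative of (error1 + error2)
    multiplied by a negative predetermined coefficient (-lr)."""
    return [p - lr * g for p, g in zip(params, grads)]

# Toy check
pl = np.array([[0, 1], [1, 1]])
tl = np.array([[0, 1], [0, 1]])
e1 = error1(pl, tl)                                  # one mismatching pixel
pd = np.array([[0.5, 1.0]])
td = np.array([[0.0, 1.0]])
e2 = error2(pd, td, np.ones_like(pd, dtype=bool))    # |0.5| + |0.0|
```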
- FIG. 24 is a diagram showing an example of the position information image 360.
- The region division unit 303 refers to the learning model 311 and infers the region-divided image 60 and the position information image 360 shown in FIG. 24 from the input image 12 (that is, the image 50).
- The area 369 in FIG. 24 is the same area as the area 59 in the position information image 360.
- The pixel 342a in the small region 371a holds the inferred displacement (Rix, Riy) to the representative position 341a of the small region 371a.
- The small regions 371a, 371b, 371c, and 371d have the same ranges as the small regions 71a, 71b, 71c, and 71d obtained by the grouping unit 4 for the label of the first measurement target in Table 80.
- The representative positions 341b, 341c, and 341d can likewise be inferred from the pixels in the small regions 371b, 371c, and 371d by Equations 5 and 6 (their inferred values can be calculated).
- In the position information image 360, the portion outside the area 369 stores the same kind of position information as the pixel 342a.
- The region division unit 303 also outputs a position information image similar to the position information image 360 for the label of the second measurement target in Table 80.
- The overlay measurement unit 305 executes the overlay measurement process according to the flowchart shown in FIG. 25. Since the steps other than S314 and S316 are common to the flowchart executed by the overlay measurement unit 5 during the overlay measurement process (FIG. 11), their description is omitted. The following describes the case where the loop from step S13 to step S18 targets the small region 371a.
- In step S314, the overlay measurement unit 305 obtains the inferred value of the representative position 341a from each pixel (such as the pixel 342a) in the small area 371a of the position information image 360, using Equations 5 and 6 above.
- The overlay measurement unit 305 obtains the inferred value of the representative position 341a from each pixel, calculates a statistic of these values, and sets that statistic as position 1.
- The statistic can be the median, but another statistic such as the arithmetic mean may be used.
- In step S316, similarly to step S314, the overlay measurement unit 305 obtains position 2 (the statistic of the inferred values of the representative position) from the position information image for the small area obtained for the label of the second measurement target shown in Table 80.
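The per-pixel inference of the representative position (Equations 5 and 6) followed by the median statistic can be sketched as follows (illustrative names only; the median makes the result robust to a few corrupted pixels):

```python
import numpy as np

def inferred_position(disp_img, region_mask):
    """Infer the representative position from every pixel of a small
    region (Equations 5 and 6: Xic = Rix + Xip, Yic = Riy + Yip) and
    return the per-coordinate median as the robust statistic."""
    ys, xs = np.nonzero(region_mask)
    xic = disp_img[ys, xs, 0] + xs   # Equation 5
    yic = disp_img[ys, xs, 1] + ys   # Equation 6
    return float(np.median(xic)), float(np.median(yic))

# Toy example: three pixels all pointing at center (2.0, 2.0), one outlier
disp = np.zeros((4, 4, 2))
mask = np.zeros((4, 4), dtype=bool)
for (y, x) in [(1, 1), (1, 2), (2, 1)]:
    mask[y, x] = True
    disp[y, x] = (2.0 - x, 2.0 - y)
mask[3, 3] = True
disp[3, 3] = (5.0, 5.0)              # a corrupted vote
pos1 = inferred_position(disp, mask)
```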
- According to the fourth embodiment, the overlay amount can be measured using the position information image 360. Random noise may be superimposed on the input image 12, or its contrast may drop, degrading the image quality. In such a case, the labels in the region-divided image 60 become inaccurate, and the ranges of the small regions such as 71a in the grouped image 70 become inaccurate as well. Even so, accurate overlay measurement is possible with the technique disclosed in the fourth embodiment. For example, if the right half of the small area 71a is missing, the centroid of the small area 71a shifts to the left of its original position, so an accurate position 1 cannot be obtained in step S14.
- In step S314, by contrast, the representative position 341a can be calculated by Equations 5 and 6 from any pixel of the small region 371a in the position information image 370. Therefore, even if the right half of the small area 71a (that is, the small area 371a) is missing, an accurate position 1 can be calculated in step S314.
- The teacher creation unit 301 may correct the region-divided image 40 in the teacher data 14 by the statistical processing of step S204.
- This allows the centroids of the small regions such as 71a in the region-divided image 40 to be corrected so as to improve the reproducibility or sensitivity characteristics of the overlay measurement in the overlay measurement unit 5. Consequently, each pixel (such as 342m) in the position information image 370 of the teacher data 314, which is determined from the centroid of a small region (such as 71a) in the region-divided image 40, can also be corrected so as to improve the reproducibility or sensitivity characteristics of the overlay measurement.
- The grouping unit 304 may refer to the position information image 360 when obtaining the grouped image 70 from the region-divided image 60.
- For example, two small regions (such as the small regions 71m and 71n) may be merged into one small region, while
- the representative positions obtained by Equations 5 and 6 remain separated within that one small region (separated into the representative positions of 371m and 371n, which correspond to 71m and 71n in the position information image 360).
- In that case, the one small region may be divided by referring to the separation of the representative positions. Dividing a small region in this way is useful, for example, when a small region in the grouped image 70 becomes inaccurate because the input image 12 is less clear than the sample image 30.
- Example 5: The fifth embodiment discloses a technique for augmenting the teacher data and the sample images by generating region-divided images (corresponding to teacher data) from a limited amount of sample images and translating small regions within them to change the layout of the region-divided images and the sample images (that is, by synthesizing region-divided images and sample images).
- FIG. 26 is a diagram showing a functional configuration example from teacher creation to overlay measurement according to the fifth embodiment.
- The teacher creation unit 401 creates the teacher data 14 from the sample images 13 and provides a user interface for that purpose. In addition, the teacher creation unit 401 generates and outputs teacher data 414 in which the layout of the region-divided images 40 in the teacher data 14 has been changed. Further, the teacher creation unit 401 has a function of inferring the image 30 from a region-divided image 40 (hereinafter called the image inference function of the teacher creation unit 401), and outputs sample images 413 by inferring the image 30 corresponding to each region-divided image 40 in the teacher data 414.
- The learning unit 2 in the fifth embodiment treats the teacher data 14 and the teacher data 414, as well as the sample images 13 and the sample images 413, as data of the same quality, and computes the learning model 11 by the same procedure as in the first embodiment.
- FIG. 27 is a diagram showing an example of an image 430 inferred from the region divided image 440.
- The region-divided image 440 shows an example of a region-divided image created by an operator using the user interface of the teacher creation unit 1 included in the teacher creation unit 401, and is composed of the labels 41, 42, and 43.
- The small area 71q shows an example of a small area of the label 41 in the region-divided image, as obtained by the grouping unit 4.
- The image 430 is an image inferred from the region-divided image 440 by the image inference function of the teacher creation unit 401.
- The image inference function of the teacher creation unit 401 can be realized as follows: (a) as with the learning unit 2, sets of an arbitrary image 30 and its region-divided image 40 (such as the sample images 13 and the teacher data 14) are collected in advance, and (b) learning is performed to find parameters, in a structure similar to the neural network 179, that minimize the error between the image 30 inferred from one region-divided image 40 and the image that corresponds to that region-divided image 40 in the collected sets.
- The error between the image corresponding to the region-divided image 40 and the inferred image 30 can be obtained by summing the absolute values of the per-pixel luminance differences, but the method is not limited to this.
- The image inference function of the teacher creation unit 401 can also be realized by determining parameters in a network structure similar to the neural network structure 179 using, for example, a machine learning algorithm that excels at generating images from random numbers or symbols, known as a generative adversarial network.
- The inference may be performed in units of the receptive field 176, as in the network structure 179. That is, for each pixel of the region-divided image 40, the luminance of the pixel at the same coordinates in the image 30 is determined from the range of the receptive field 176 around that pixel in the region-divided image 40.
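As a toy illustration of this receptive-field-unit inference (purely hypothetical: the trained network is stood in for by a label-to-luminance lookup table averaged over the window), the luminance of each output pixel is determined only from the window of labels around the corresponding input pixel:

```python
import numpy as np

def infer_image(label_img, lut, rf=3):
    """Predict, for each pixel, the luminance at the same coordinates in
    the output image from the rf x rf receptive-field window of labels
    around it. Here the 'model' is a label->luminance lookup table
    averaged over the window; a trained network would replace it."""
    pad = rf // 2
    padded = np.pad(label_img, pad, mode='edge')
    h, w = label_img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            win = padded[y:y + rf, x:x + rf]
            out[y, x] = lut[win].mean()   # luminance from the window only
    return out

labels = np.zeros((3, 3), dtype=int)
labels[1, 1] = 1
lut = np.array([0.0, 90.0])               # label 0 -> dark, label 1 -> bright
img = infer_image(labels, lut)
```

Because each output pixel depends only on its local window, translating a labeled region translates the corresponding image content, which is the property exploited in the fifth embodiment.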
- FIG. 28 is a diagram showing an example in which the layout of the region-divided image 440 is changed by moving the small area 71q.
- The region-divided image 440r has a layout in which the small region 71q in the region-divided image 440 has been translated to the small region 71r. Comparing the image 430r inferred from the region-divided image 440r with the image 430 corresponding to the region-divided image 440, owing to the inference in units of the receptive field 176, it can be seen that the image content in the small region 431q corresponding to the small region 71q has moved to the small region 431r corresponding to the small region 71r.
- In other words, the image 430r is an image whose overlay amount is changed by a uniform amount relative to the image 430.
- According to the fifth embodiment, teacher data 414 can be added to the teacher data 14 by changing the layout. Furthermore, by changing the layout and using the image inference function of the teacher creation unit 401, sample images 413 can be added to the sample images 13. For example, even when the sample images 13 consist of images 30 with a uniform overlay amount, region-divided images 40 and images 30 with various overlay amounts can be added to the teacher data 14 and the sample images 13 by changing the layout. The operator therefore does not need to prepare many sample images with various layouts in advance, which saves effort in overlay measurement.
- FIG. 29 is a diagram showing an example in which occlusion occurs when the layout of the small area 71q in the region-divided image 440 is changed (an example in which the labels 43, 41, and 42 are determined to be on the front side in this order). As shown in FIG. 29, the range that overlaps the label 43 when the small region 71q in the region-divided image is translated to the small region 71s is deleted.
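A minimal sketch of such an occlusion-aware translation of one labeled small region (hypothetical helper, not the patented implementation; labels drawn in front simply win at the destination):

```python
import numpy as np

def translate_region(seg, label, dy, dx, front_labels):
    """Translate all pixels of `label` in segmentation map `seg` by
    (dy, dx); destination pixels already holding a label listed in
    front_labels (drawn in front) occlude the moved region and are kept.
    Vacated source pixels become background (0)."""
    out = seg.copy()
    ys, xs = np.nonzero(seg == label)
    out[ys, xs] = 0                        # clear the original placement
    h, w = seg.shape
    ny, nx = ys + dy, xs + dx
    keep = (0 <= ny) & (ny < h) & (0 <= nx) & (nx < w)
    ny, nx = ny[keep], nx[keep]
    occluded = np.isin(out[ny, nx], front_labels)
    out[ny[~occluded], nx[~occluded]] = label
    return out

seg = np.zeros((4, 4), dtype=int)
seg[1, 0:2] = 41                           # small region of label 41
seg[1, 3] = 43                             # label 43 is in front
moved = translate_region(seg, 41, 0, 2, front_labels=[43])
```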
- Example 6 describes an example in which the teacher creation process, the learning model creation process, and the area division process in the first to fifth embodiments are applied to measurement processes other than overlay measurement.
- FIG. 30 is a diagram showing a functional configuration example from the teacher creation to the image measurement inspection according to the sixth embodiment.
- the teacher creation unit 501 corresponds to any of the teacher creation units 1, 101, 201, 301, and 401 according to the first to fifth embodiments.
- the teacher data 514, the learning unit 502, the learning model 511, and the area dividing unit 503 correspond to the teacher data, the learning unit, the learning model, and the area dividing unit in any one of Examples 1 to 5 in this order.
- The image measurement inspection unit 505 uses the region-divided image 60 inferred from the input image 12 by the region division unit 503 to perform image inspection and measurement not limited to overlay measurement.
- Examples of the image inspection and measurement performed by the image measurement inspection unit 505 include contour extraction in semiconductor images, dimension measurement such as hole shapes, detection of defect patterns such as short-circuit defects, inference of design drawings from images, and pattern matching that finds the position at which an image matches an actual design drawing.
- the present invention is not limited to these, and can be applied to any application for performing image measurement using the region division image 60 inferred by the region division unit 503.
- the image measurement inspection unit 505 may supplementarily refer to the input image 12 and other data not shown in FIG. 30 for the purpose of correcting the region divided image 60 and the like.
- The sample images 13 and the input image 12 may be images obtained by imaging objects other than semiconductors.
- By changing the layout in the region-divided images 40 using the image inference function of the teacher creation unit 501, the sample images 413 and the teacher data 414 (the teacher data 14 and the teacher data 414 together form the teacher data 514) can be added to the sample images 13 and the teacher data 14.
- According to the sixth embodiment, it was shown that the techniques disclosed in the first to fifth embodiments can be applied not only to overlay measurement but also to systems in general that perform image measurement and image inspection using region-divided images.
- Each embodiment can also be realized by software program code.
- In that case, a storage medium on which the program code is recorded is provided to a system or device, and the computer (or CPU or MPU) of the system or device reads out the program code stored in the storage medium.
- The program code itself read from the storage medium realizes the functions of the above-described embodiments, and the program code itself and the storage medium storing it constitute the present disclosure.
- Storage media for supplying such program code include, for example, flexible disks, CD-ROMs, DVD-ROMs, hard disks, optical disks, magneto-optical disks, CD-Rs, magnetic tapes, non-volatile memory cards, and ROMs.
- Alternatively, the operating system (OS) running on the computer may perform part or all of the actual processing based on the instructions of the program code, and that processing may realize the functions of the above-described embodiments. Further, after the program code read from the storage medium is written into the memory of the computer, the CPU of the computer or the like may perform part or all of the actual processing based on the instructions of the program code, and that processing may realize the functions of the above-described embodiments.
- Further, the program code of software that realizes the functions of the embodiments may be distributed via a network and stored in storage means such as a hard disk or memory of the system or device, or in a storage medium such as a CD-RW or CD-R,
- and the computer (or CPU or MPU) of the system or device may read out and execute the program code stored in the storage means or the storage medium at the time of use.
Abstract
Description
The technique disclosed in Patent Document 2 also has the problem that it cannot be operated when the design image is unavailable, for example because it is not disclosed.
The description in this specification is merely a typical illustration and does not limit the claims or applications of the present disclosure in any sense.
The present embodiment (and Examples 1 to 6) relates, for example, to a measurement system that performs image measurement of a semiconductor including a predetermined structure (for example, a multilayer structure). The measurement system refers to a learning model generated on the basis of sample images of a semiconductor and teacher data generated from those sample images, generates a region-divided image from an input image (the measurement target) of a semiconductor having the predetermined structure, and performs image measurement using the region-divided image. Here, the teacher data is an image in which a label including a semiconductor structure in the sample image is assigned to each pixel of the image, and the learning model includes parameters for inferring the teacher data from the sample image. By using this learning model, the inference from sample image to teacher data is applied to the input image, so the measurement process can be executed without using design data of the input image.
FIG. 1 is a diagram showing an example of the functional configuration from teacher creation to overlay measurement according to the first embodiment. The functions of the teacher creation unit 1 and the learning unit 2 are realized by the main processor 190 of the main computer 191 reading the corresponding processing programs from a storage unit (not shown). The functions of the region division unit 3, the grouping unit 4, and the overlay measurement unit 5 are realized by the main processor 190 of the main computer 191, or by the sub-processors 190a and 190b of the sub-computers 191a and 191b, reading the corresponding programs from a storage unit (not shown).
First, an overview of the functional configuration from teacher creation to overlay measurement shown in FIG. 1 is described. The sample images 13 are samples, collected in advance, of the images targeted by overlay measurement. The teacher data 14 consists of region-divided images, prepared for each of the sample images 13, in which a label including the structure in the semiconductor to be measured by overlay measurement is assigned to each pixel of the image.
The details of each functional configuration of the first embodiment are described below. The sample images 13 are images captured before overlay measurement is put into operation, of either the semiconductor specimen to be measured or a specimen whose appearance in images is close to that specimen. The sample images 13 can be collected with the electron microscope that operates the overlay measurement, or with an electron microscope whose captured images have similar quality.
FIG. 9 is a flowchart explaining the details of the grouping process executed by the grouping unit 4. The grouping process groups related small regions according to the labels assigned to predetermined areas of the region-divided image 60, following the measurement target items specified in Table 80 of FIG. 10.
FIG. 11 is a flowchart explaining the details of the overlay measurement process executed by the overlay measurement unit 5.
The overlay measurement unit 5 aligns a template with the first measurement target. Here, the template consists of the X and Y coordinates of each element of the first measurement target, prepared in advance of the operation of overlay measurement. FIG. 12 is a diagram showing a configuration example of the template data 85. In FIG. 12, the template data 85 consists of the X and Y coordinates of the first to N-th elements. The template data 85 is obtained from the X and Y coordinates of the centroids of the small regions 71a, 71b, 71c, 71d, and so on in a representative grouped image 70. Alternatively, the template data 85 may be obtained from a design drawing of the semiconductor or the like.
The overlay measurement unit 5 selects the small region 71a or the like corresponding to each element in the aligned template data 85. The selection criterion can be that, among the small regions 71a and so on, the centroid is closest to the element of the template data 85, but is not limited to this.
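The nearest-centroid selection criterion described here can be sketched as follows (a minimal illustration with hypothetical names, assuming template elements and detected centroids are given as (x, y) pairs):

```python
import math

def match_template(template, centroids):
    """For each template element (x, y), pick the index of the detected
    small-region centroid closest to it (the selection criterion of
    step S12)."""
    picks = []
    for tx, ty in template:
        picks.append(min(range(len(centroids)),
                         key=lambda i: math.hypot(centroids[i][0] - tx,
                                                  centroids[i][1] - ty)))
    return picks

template = [(10.0, 10.0), (30.0, 10.0)]    # designed element positions
centroids = [(31.0, 9.0), (9.5, 11.0)]     # detected small-region centroids
picks = match_template(template, centroids)
```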
The overlay measurement unit 5 repeatedly executes the processing of steps S14 to S17 for each small region selected in step S12. The following description takes the small region 71a as an example.
The overlay measurement unit 5 calculates position 1, the representative position of the first measurement target. Position 1 consists of two elements, the X coordinate X1 and the Y coordinate Y1, and is calculated from the X and Y coordinates of the centroid of the small region 71a or the like.
The overlay measurement unit 5 selects, among the small regions of the second measurement target, the one whose overlay amount with respect to the small region 71a is to be measured. A criterion of selecting the one with the closest centroid can be applied. In the case of FIG. 8, for example, the small region 72a is selected.
By the same procedure as in step S14, the overlay measurement unit 5 obtains position 2, the representative position of the small region of the second measurement target selected in step S15 (for example, the small region 72a). Position 2 consists of two elements, the X coordinate X2 and the Y coordinate Y2.
From position 2 and position 1, the overlay measurement unit 5 calculates Dx and Dy, the displacements in the X and Y coordinates, by Equations 1 and 2 below.
Dx = X2 - X1 ... (Equation 1)
Dy = Y2 - Y1 ... (Equation 2)
The overlay measurement unit 5 calculates a statistic of the displacements Dx and Dy obtained with Equations 1 and 2. The arithmetic mean can be applied to calculate this statistic, but it is not limited to this; the geometric mean or the median may also be used. The overlay measurement unit 5 takes the statistic of the displacements obtained in step S19 as the overlay amount of the image 50.
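The computation of the overlay amount from the paired representative positions (Equations 1 and 2, followed by the arithmetic mean) can be sketched as follows (hypothetical helper name, not from the patent):

```python
def overlay_amount(pairs):
    """pairs: list of ((X1, Y1), (X2, Y2)) representative-position pairs,
    first and second measurement target. Computes Dx = X2 - X1 and
    Dy = Y2 - Y1 per pair (Equations 1 and 2) and returns their
    arithmetic means as the overlay amount."""
    dxs = [x2 - x1 for (x1, _), (x2, _) in pairs]
    dys = [y2 - y1 for (_, y1), (_, y2) in pairs]
    n = len(pairs)
    return sum(dxs) / n, sum(dys) / n

pairs = [((10.0, 10.0), (12.0, 9.0)),
         ((30.0, 10.0), (32.0, 9.0))]
ov = overlay_amount(pairs)
```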
According to the first embodiment, a process is provided in advance in which the learning unit 2 obtains the learning model 11 using the teacher data 14 created from the sample images 13 by the teacher creation unit 1. Then, by using the region-divided image 60 that the region division unit 3 obtains from the input image 12 with reference to the learning model 11, the overlay amount can be measured by the grouping unit 4 and the overlay measurement unit 5. Thus, unlike Patent Document 1, no parameter tuning requiring know-how is needed and, unlike Patent Document 2, no design data of the input image 12 is needed; accurate overlay measurement becomes possible by inferring the region-divided image 60. In addition, because the intermediate processing data, namely the region-divided image 60 and the grouped image 70, can be visualized, unlike Patent Document 3 the cause of an unexpected measured overlay amount can be grasped by displaying the intermediate data on screen. That is, since the region division unit 3 divides the input image 12 into regions with reference to the learning model 11, the first and second problems described above are solved. Also, since the region-divided image is data that can be visualized, it is easy for an operator to check, so the third problem is solved as well.
The components of the first embodiment described above can be modified. For example, besides the neural network structure described above, any machine learning model that infers the region-divided image 40 from the image 30 in units of the receptive field 176 can be applied as the learning model 11; for example, a linear discriminator that determines the label of the pixel 177 from all the pixels of the receptive field 176 of the image 30 may be used.
<Functional configuration example>
FIG. 14 is a diagram showing an example of the functional configuration from teacher creation to overlay measurement according to the second embodiment. In FIG. 14, the sample images 113 are, for example, sets of images obtained by imaging the same location on a semiconductor wafer multiple times under different imaging conditions. Here, the imaging conditions include, but are not limited to, the acceleration voltage of the electron microscope, whether a backscattered-electron image or a secondary-electron image is captured, and the blending ratio used when obtaining a composite of the two.
With the configuration described above, the second embodiment uses sets of images of the same location captured under multiple imaging conditions even when accurate overlay measurement is difficult under a single imaging condition. As a result, the labels inferred by the region division unit 103 become accurate, and the overlay measurement unit 105 can perform accurate overlay measurement. Furthermore, since the teacher creation unit 101 assigns labels to the clear portions of the sample images 113, it can accurately assign labels to small regions and create the teacher data 114.
FIG. 16 is a diagram showing an example of the functional configuration from teacher creation to overlay measurement according to the third embodiment. First, an overview of the functional configuration of the third embodiment is described.
The details of the teacher creation process are described with reference to FIGS. 17 to 19. FIG. 17 is a diagram showing a configuration example of the sample images 213. FIG. 18 is a diagram showing a configuration example of the teacher data 214. FIG. 19 is a flowchart explaining the teacher creation process by the teacher creation unit 201.
As shown in FIG. 17, the sample images 213 are divided in advance into the subsets of image group 231 and image group 233.
Following the same procedure as the learning unit 2, the teacher creation unit 201 obtains an intermediate learning model (a learning model for generating the region-divided image group 241 from the image group 231) from the image group 231 and the region-divided image group 241.
Like the region division unit 3, the teacher creation unit 201 refers to the intermediate learning model obtained in step S202 and infers the region-divided image group 243 from the image group 233 (precisely, the region-divided image group 243 is obtained by inferring a region-divided image 40 from each image 30 in the image group 233).
Since it is difficult for the image group 231, a subset of the sample images, to completely cover the properties of all images in the image group 233, in most cases the region-divided image group 243 contains incorrect labels. The teacher creation unit 201 therefore applies a correction by statistical processing to the labels in the region-divided image group 243. As a correction by statistical processing, for example, the mode of the labels can be taken within a subset of the region-divided image group 243 obtained by repeatedly imaging the same location in a semiconductor chip.
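The per-pixel label-mode correction over repeated captures of the same location can be sketched as follows (illustrative names only; a real implementation would operate on the inferred region-divided image group 243):

```python
import numpy as np

def mode_correct(label_stack):
    """label_stack: (N, H, W) integer label maps of the same location.
    Returns an (H, W) map holding, per pixel, the most frequent label
    across the N repeats (the statistical correction of step S204)."""
    stack = np.asarray(label_stack)
    n_labels = stack.max() + 1
    # count votes per label, then take the argmax per pixel
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three noisy label maps of the same 2x2 location
stack = [
    [[0, 1], [2, 2]],
    [[0, 1], [0, 2]],
    [[0, 2], [2, 2]],
]
corrected = mode_correct(stack)
```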
The teacher creation unit 201 provides a user interface for the operator to check whether the labels assigned in the region-divided image group 243 are accurate. This user interface displays each region-divided image 40 that makes up the region-divided image group 243. To make it easier to judge the appropriateness of the labels assigned to a region-divided image 40, the user interface of step S205 may additionally display the corresponding image 30 from the image group 233 side by side, or display a blended image in which the region-divided image 40 is overlaid transparently on the image 30. The user interface provided in step S205 may also have a function for correcting the labels of the region-divided images 40 in the region-divided image group 243. This label correction function displays a region-divided image 40 from the region-divided image group 243, or the blended image, on the input screen 91 (see FIG. 4) and allows the displayed labels to be corrected by operating the input pen 93 on the input screen 91.
The teacher creation unit 201 outputs the region-divided image group 241 and the region-divided image group 243 together as the teacher data 214.
According to the third embodiment, for the sample images 213 (that is, the image group 233), the work of assigning labels using the user interface (main screen 90) provided by the teacher creation unit 201 can be narrowed to the subset image group 231, while the teacher data 214 is still obtained with labels assigned to all images 30 in the sample images 213. In the first embodiment, the inference results of the learning model 11 become more accurate as the population of the sample images 13 grows; on the other hand, there was a trade-off in that the operator's workload for assigning labels to the teacher data 14 increases. According to the third embodiment, however, this trade-off can be resolved by reducing the label-assignment workload. In particular, overlay measurement usually targets structures that appear repeatedly in semiconductor images. Therefore, even if the population of the image group 231 is greatly narrowed, the accuracy of the inference results of step S203 can be expected to rarely drop to a level at which the correction by statistical processing in step S204 or the correction via the user interface in step S205 becomes difficult. The third embodiment is thus considered effective in reducing the workload.
(i) In the flowchart shown in FIG. 19, either step S204 or step S205 may be removed, because step S204 alone has the effect of correcting the labels of the region-divided image group 243, and step S205 alone has the effect of checking and correcting those labels.
The fourth embodiment is described with reference to FIGS. 22 to 25. FIG. 22 is a diagram showing an example of the functional configuration from teacher creation to overlay measurement of the fourth embodiment.
First, an overview of the functional configuration is described. In addition to the functions of the teacher creation unit 1, the teacher creation unit 301 has a function of creating a position information image that holds, for each pixel of the region-divided image 40 created on the main screen 90 (see FIG. 4), the displacement to the representative position of the small region 71a or the like (see FIG. 8) in the region-divided image 40. The teacher creation unit 301 then creates teacher data 314 by adding the position information images to the teacher data 14. The learning unit 302 computes a learning model 311 that can infer, as accurately as possible, the region-divided image 40 and the position information image in the teacher data 314 from the image 30 in the sample images 13.
The details of each functional configuration of the fourth embodiment are described below, except for the grouping unit 304, for which nothing beyond the overview applies.
Rx = Xc - Xp ... (Equation 3)
Ry = Yc - Yp ... (Equation 4)
The teacher creation unit 301 also attaches a position information image similar to the position information image 340 to the label of the second measurement target in Table 80.
Xic = Rix + Xip ... (Equation 5)
Yic = Riy + Yip ... (Equation 6)
The region division unit 303 also outputs a position information image similar to the position information image 360 for the label of the second measurement target in Table 80.
In FIG. 25, the steps other than S314 and S316 are common to the flowchart (FIG. 11) that the overlay measurement unit 5 executes during the overlay measurement process, so their description is omitted. The following describes the case where the loop from step S13 to step S18 targets the small region 371a.
According to the fourth embodiment, the overlay amount can be measured using the position information image 360. Image quality may degrade, for example when random noise is superimposed on the input image 12 or its contrast drops. In such a case, the labels in the region-divided image 60 become inaccurate and the ranges of the small regions such as 71a in the grouped image 70 become inaccurate, but with the technique disclosed in the fourth embodiment, accurate overlay measurement can still be performed. For example, if the right half of the small region 71a is missing, the centroid of the small region 71a shifts to the left of its original position, so an accurate position 1 cannot be obtained in step S14. In contrast, in step S314 the representative position 341a can be calculated by Equations 5 and 6 from any pixel of the small region 371a in the position information image 370. Therefore, even if the right half of the small region 71a (that is, the small region 371a) is missing, an accurate position 1 can be calculated in step S314.
(i) The teacher creation unit 301 may apply the correction by the statistical processing of step S204 to the region-divided images 40 in the teacher data 14. In this way, the centroids of the small regions such as 71a in the region-divided image 40 can be corrected so as to improve the reproducibility or sensitivity characteristics of the overlay measurement in the overlay measurement unit 5. Consequently, each pixel (such as 342m) in the position information image 370 of the teacher data 314, which is determined from the centroid of a small region (such as 71a) in the region-divided image 40, can also be corrected so as to improve the reproducibility or sensitivity characteristics of the overlay measurement.
The fifth embodiment discloses a technique for augmenting the teacher data and the sample images by generating region-divided images (corresponding to teacher data) from a limited amount of sample images and translating small regions in the region-divided images to change the layout of the region-divided images and the sample images (synthesizing region-divided images and sample images).
FIG. 26 is a diagram showing an example of the functional configuration from teacher creation to overlay measurement according to the fifth embodiment. Like the teacher creation unit 1, the teacher creation unit 401 creates the teacher data 14 from the sample images 13 and provides a user interface for that purpose. The teacher creation unit 401 also generates and outputs teacher data 414 in which the layout of the region-divided images 40 in the teacher data 14 has been changed. Further, the teacher creation unit 401 has a function of inferring the image 30 from a region-divided image 40 (hereinafter called the image inference function of the teacher creation unit 401), and outputs sample images 413 by inferring the image 30 corresponding to each region-divided image 40 in the teacher data 414. Since the functions of the learning unit 2, the region division unit 3, the grouping unit 4, and the overlay measurement unit 5 are the same as in the first embodiment, their description is omitted. That is, the learning unit 2 in the fifth embodiment treats the teacher data 14 and the teacher data 414, as well as the sample images 13 and the sample images 413, as data of the same quality, and computes the learning model 11 by the same procedure as in the first embodiment.
The processing by which the teacher creation unit 401 outputs the teacher data 414 and the sample images 413 is described in detail below with reference to FIGS. 27 to 29.
According to the fifth embodiment, the teacher data 414 can be added to the teacher data 14 by changing the layout. In addition, by changing the layout and using the image inference function of the teacher creation unit 401, the sample images 413 can be added to the sample images 13. For example, even when the sample images 13 consist of images 30 with a uniform overlay amount, region-divided images 40 and images 30 with various overlay amounts can be added to the teacher data 14 and the sample images 13 by changing the layout. The operator therefore does not need to prepare in advance many sample images with various layouts, which saves effort in overlay measurement.
(i) Besides the translation described above, any geometric deformation that changes the overlay amount, such as enlargement or reduction, can also be applied as a layout change.
The sixth embodiment describes the case in which the teacher creation process, the learning model creation process, and the region division process of the first to fifth embodiments are applied to measurement processes other than overlay measurement.
For example, because the learning model 511 performs inference in units of the receptive field 176, if a periodic pattern appears in the input image 12, the size of the images 30 in the sample images 13 used when the teacher creation unit 501 assigns the labels of the teacher data 14 can be smaller than the image 60 in the input image 12. This makes it possible to reduce the operator's workload in assigning the labels of the teacher data 14.
For example, by using sets of images captured under different imaging conditions, the teacher data 114 can be created accurately using the image, among the images 30a, 30b, and so on, in which the target structure appears clearly, and the inference of the region division unit 503 can be performed accurately.
For example, by narrowing the target of teacher-data assignment on the main screen 90 to the image group 231 within the sample images 213, the operator's workload can be reduced. Also, by executing steps S202 and S203 of FIG. 19 of the third embodiment, teacher data 514 (corresponding to the teacher data 214) can be obtained for all the remaining sample images 13 (corresponding to the sample images 213). Furthermore, by executing steps S204 and S205, the teacher data 514 (corresponding to the teacher data 214) can also be corrected.
Image measurement using the position information image 360 together with the region-divided image 60 obtained from the input image 12 becomes possible.
For example, by changing the layout in the region-divided images 40 using the image inference function of the teacher creation unit 501, the sample images 413 and the teacher data 414 (the teacher data 14 and the teacher data 414 together form the teacher data 514) can be added to the sample images 13 and the teacher data 14.
According to the sixth embodiment, it was shown that the techniques disclosed in the first to fifth embodiments can be applied not only to overlay measurement but also to systems in general that perform image measurement and image inspection using region-divided images.
Each embodiment can also be realized by software program code. In that case, a storage medium on which the program code is recorded is provided to a system or device, and the computer (or CPU or MPU) of the system or device reads out the program code stored in the storage medium. In that case, the program code itself read from the storage medium realizes the functions of the above-described embodiments, and the program code itself and the storage medium storing it constitute the present disclosure. Storage media for supplying such program code include, for example, flexible disks, CD-ROMs, DVD-ROMs, hard disks, optical disks, magneto-optical disks, CD-Rs, magnetic tapes, non-volatile memory cards, and ROMs.
2, 102, 302, 502: learning unit
3, 103, 303, 503: region division unit
4, 104, 304: grouping unit
5, 105, 305: overlay measurement unit
11, 111, 311, 511: learning model
12, 112: input image
13, 113, 213, 413: sample image
14, 114, 214, 314, 414, 514: teacher data
190: main processor
190a: first sub-processor
190b: second sub-processor
191: main computer
191a: first sub-computer
191b: second sub-computer
192: input/output device
193: electron microscope, etc.
505: image measurement inspection unit
Claims (19)
- 1. A measurement system that performs image measurement of a semiconductor including a predetermined structure, comprising: at least one processor that executes various processes related to the image measurement; and an output device that outputs a result of the image measurement, wherein the at least one processor executes: a process of generating teacher data from sample images of a semiconductor; a process of generating a learning model based on the sample images and the teacher data; a process of generating, based on the learning model, a region-divided image from an input image related to the semiconductor; a measurement process of performing image measurement using the region-divided image; and a process of outputting a result of the measurement process to the output device, wherein the teacher data is an image in which a label including a structure of the semiconductor in the sample image is assigned to each pixel of the image, and the learning model includes parameters for inferring the teacher data or the region-divided image from the sample image or the input image.
- 2. The measurement system according to claim 1, wherein the learning model is a machine learning model that refers to a neighborhood region of each pixel in the input image when determining the label assigned to each pixel.
- 3. The measurement system according to claim 1, wherein the learning model is a convolutional neural network.
- 4. The measurement system according to claim 1, wherein, in the process of generating the learning model, the at least one processor generates the parameters of the learning model from the sample images and the teacher data, which are smaller in size than the input image.
- 5. The measurement system according to claim 1, wherein the at least one processor further executes a process of dividing the region-divided image into small regions of smaller image size corresponding to the labels and grouping them by type of small region, and, as the measurement process, executes overlay measurement from the centroid of each of the grouped small regions.
- 6. The measurement system according to claim 1, wherein the sample images include sets of images obtained by imaging the same location on the semiconductor multiple times under different imaging conditions, and the at least one processor generates the teacher data from the sample images for each imaging condition and generates the learning model based on the teacher data generated for each imaging condition and the sample images.
- 7. The measurement system according to claim 6, wherein imaging under the different imaging conditions includes at least one of: imaging with different acceleration voltages, imaging different types of electron images, and changing the blending ratio used when generating a composite image of different types of electron images.
- 8. The measurement system according to claim 1, wherein the at least one processor divides the sample images into two or more sample image groups, generates first teacher data by assigning the labels to the images included in a first sample image group, generates an intermediate learning model based on the images of the first sample image group and the first teacher data, generates second teacher data by adding, to the first teacher data, teacher data generated by inference on the images included in the image groups other than the first sample image group based on the intermediate learning model, and generates, based on the sample images and the second teacher data, the learning model to be applied to the input image.
- 9. The measurement system according to claim 8, wherein the at least one processor applies a correction by statistical processing to the teacher data generated by inference on the images included in the image groups other than the first sample image group.
- 10. The measurement system according to claim 9, wherein the at least one processor applies the correction by the statistical processing to a plurality of images obtained by repeatedly imaging the same location of the semiconductor.
- 11. The measurement system according to claim 9, wherein the at least one processor extracts partial regions of high similarity in the sample images and applies the correction by the statistical processing to the extracted partial regions.
- 12. The measurement system according to claim 9, wherein the correction by the statistical processing is to apply translation or geometric deformation, in units of the small regions to which the labels are assigned, in the second teacher data.
- 13. The measurement system according to claim 5, wherein the teacher data includes a position information image indicating the displacement from each pixel to the representative position of the small region to which the label is assigned, and the at least one processor generates the region-divided image and the position information image of the input image based on the learning model including the position information image, and executes the overlay measurement using the position information image in the grouped small regions.
- 14. The measurement system according to claim 13, wherein the position information image indicates the displacement obtained using the teacher data corrected by statistical processing.
- 15. The measurement system according to claim 1, wherein the at least one processor further executes a process of changing the layout of the teacher data to generate modified teacher data and adding the modified teacher data to the teacher data before the layout change to obtain updated teacher data, and a process of adding images inferred from the modified teacher data to the sample images to obtain updated sample images, and the at least one processor generates the learning model based on the updated teacher data and the updated sample images.
- 16. The measurement system according to claim 15, wherein the at least one processor changes the layout of the teacher data in consideration of occlusion between labels included in the teacher data.
- 17. The measurement system according to claim 1, wherein the measurement process is an overlay measurement process, a dimension measurement process, a defect pattern detection process, or a pattern matching process for the semiconductor.
- 18. A method for generating a learning model to be used when performing image measurement of a semiconductor including a predetermined structure, comprising: generating, by at least one processor, teacher data by assigning a label including at least one structure to be measured to a region-divided image obtained from a sample image of a semiconductor; and generating, by the at least one processor, the learning model using the region-divided image of the sample image and the teacher data, based on a network structure composed of a plurality of layers, wherein the learning model includes parameters for inferring the teacher data or the region-divided image from the sample image or the input image.
- 19. A storage medium storing a program for causing a computer to execute a process of generating a learning model to be used when performing image measurement of a semiconductor including a predetermined structure, the program causing the computer to execute: a process of generating teacher data by assigning a label including at least one structure to be measured to a region-divided image obtained from a sample image of a semiconductor; and a process of generating the learning model using the region-divided image of the sample image and the teacher data, based on a network structure composed of a plurality of layers, wherein the learning model includes parameters for inferring the teacher data or the region-divided image from the sample image or the input image.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980099234.7A CN114270484A (zh) | 2019-08-30 | 2019-08-30 | 测量系统、生成在进行包含预定结构的半导体的图像测量时使用的学习模型的方法、以及存储用于使计算机执行生成在进行包含预定结构的半导体的图像测量时使用的学习模型的处理的程序的存储介质 |
PCT/JP2019/034050 WO2021038815A1 (ja) | 2019-08-30 | 2019-08-30 | 計測システム、所定の構造を含む半導体の画像計測を行う際に用いる学習モデルを生成する方法、およびコンピュータに、所定の構造を含む半導体の画像計測を行う際に用いる学習モデルを生成する処理を実行させるためのプログラムを格納する記憶媒体 |
KR1020227004040A KR20220029748A (ko) | 2019-08-30 | 2019-08-30 | 계측 시스템, 소정의 구조를 포함하는 반도체의 화상 계측을 행할 때 사용하는 학습 모델을 생성하는 방법, 및 컴퓨터에, 소정의 구조를 포함하는 반도체의 화상 계측을 행할 때 사용하는 학습 모델을 생성하는 처리를 실행시키기 위한 프로그램을 저장하는 기억 매체 |
US17/634,805 US20220277434A1 (en) | 2019-08-30 | 2019-08-30 | Measurement System, Method for Generating Learning Model to Be Used When Performing Image Measurement of Semiconductor Including Predetermined Structure, and Recording Medium for Storing Program for Causing Computer to Execute Processing for Generating Learning Model to Be Used When Performing Image Measurement of Semiconductor Including Predetermined Structure |
JP2021541913A JP7341241B2 (ja) | 2019-08-30 | 2019-08-30 | 計測システム、所定の構造を含む半導体の画像計測を行う際に用いる学習モデルを生成する方法、およびコンピュータに、所定の構造を含む半導体の画像計測を行う際に用いる学習モデルを生成する処理を実行させるためのプログラムを格納する記憶媒体 |
TW109122244A TWI766303B (zh) | 2019-08-30 | 2020-07-01 | 計測系統、產生在進行包含有特定之構造的半導體之畫像計測時所使用的學習模型之方法、以及儲存有用以使電腦實行產生在進行包含有特定之構造的半導體之畫像計測時所使用的學習模型之處理之程式的記錄媒體 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2019/034050 WO2021038815A1 (ja) | 2019-08-30 | 2019-08-30 | 計測システム、所定の構造を含む半導体の画像計測を行う際に用いる学習モデルを生成する方法、およびコンピュータに、所定の構造を含む半導体の画像計測を行う際に用いる学習モデルを生成する処理を実行させるためのプログラムを格納する記憶媒体 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021038815A1 true WO2021038815A1 (ja) | 2021-03-04 |
Family
ID=74683410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/034050 WO2021038815A1 (ja) | 2019-08-30 | 2019-08-30 | 計測システム、所定の構造を含む半導体の画像計測を行う際に用いる学習モデルを生成する方法、およびコンピュータに、所定の構造を含む半導体の画像計測を行う際に用いる学習モデルを生成する処理を実行させるためのプログラムを格納する記憶媒体 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20220277434A1 (ja) |
JP (1) | JP7341241B2 (ja) |
KR (1) | KR20220029748A (ja) |
CN (1) | CN114270484A (ja) |
TW (1) | TWI766303B (ja) |
WO (1) | WO2021038815A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023238384A1 (ja) * | 2022-06-10 | 2023-12-14 | 株式会社日立ハイテク | 試料観察装置および方法 |
JP7490094B2 (ja) | 2020-06-24 | 2024-05-24 | ケーエルエー コーポレイション | 機械学習を用いた半導体オーバーレイ測定 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112785575B (zh) * | 2021-01-25 | 2022-11-18 | 清华大学 | 一种图像处理的方法、装置和存储介质 |
US20230350394A1 (en) * | 2022-04-27 | 2023-11-02 | Applied Materials, Inc. | Run-to-run control at a manufacturing system using machine learning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018506168A (ja) * | 2014-12-03 | 2018-03-01 | KLA-Tencor Corporation | Automatic defect classification without sampling and feature selection |
JP2019110120A (ja) * | 2017-12-18 | 2019-07-04 | FEI Company | Method, apparatus, and system for remote deep learning for reconstruction and segmentation of microscope images |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7873585B2 (en) | 2007-08-31 | 2011-01-18 | Kla-Tencor Technologies Corporation | Apparatus and methods for predicting a semiconductor parameter across an area of a wafer |
US9530199B1 (en) | 2015-07-13 | 2016-12-27 | Applied Materials Israel Ltd | Technique for measuring overlay between layers of a multilayer structure |
KR102137454B1 (ko) | 2016-01-29 | 2020-07-24 | Hitachi High-Tech Corporation | Overlay error measurement device and computer program |
WO2019084411A1 (en) * | 2017-10-27 | 2019-05-02 | Butterfly Network, Inc. | QUALITY INDICATORS FOR COLLECTION AND AUTOMATED MEASUREMENT ON ULTRASONIC IMAGES |
2019
- 2019-08-30 WO PCT/JP2019/034050 patent/WO2021038815A1/ja active Application Filing
- 2019-08-30 US US17/634,805 patent/US20220277434A1/en active Pending
- 2019-08-30 KR KR1020227004040A patent/KR20220029748A/ko not_active Application Discontinuation
- 2019-08-30 CN CN201980099234.7A patent/CN114270484A/zh active Pending
- 2019-08-30 JP JP2021541913A patent/JP7341241B2/ja active Active
2020
- 2020-07-01 TW TW109122244A patent/TWI766303B/zh active
Also Published As
Publication number | Publication date |
---|---|
US20220277434A1 (en) | 2022-09-01 |
JPWO2021038815A1 (ja) | 2021-03-04 |
JP7341241B2 (ja) | 2023-09-08 |
CN114270484A (zh) | 2022-04-01 |
KR20220029748A (ko) | 2022-03-08 |
TWI766303B (zh) | 2022-06-01 |
TW202109339A (zh) | 2021-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021038815A1 (ja) | Measurement system, method for generating a learning model to be used when performing image measurement of a semiconductor including a predetermined structure, and storage medium storing a program for causing a computer to execute processing for generating a learning model to be used when performing image measurement of a semiconductor including a predetermined structure | |
JP7265592B2 (ja) | Technique for measuring overlay between layers of a multilayer structure | |
US20070116357A1 (en) | Method for point-of-interest attraction in digital images | |
EP1791087B1 (en) | Method for point-of-interest attraction in digital images | |
US7102649B2 (en) | Image filling method, apparatus and computer readable medium for reducing filling process in processing animation | |
US20180253861A1 (en) | Information processing apparatus, method and non-transitory computer-readable storage medium | |
US9626761B2 (en) | Sampling method and image processing apparatus of CS-RANSAC for estimating homography | |
CN100577103C (zh) | Image processing apparatus and method using two images |
JP6824845B2 (ja) | Image processing system, apparatus, method and program |
US20180064409A1 (en) | Simultaneously displaying medical images | |
CN110555860A (zh) | Method for labeling rib regions in medical images, electronic device and storage medium |
CN114399485A (zh) | Method for acquiring target images of uterine fibroids based on a residual network structure |
WO2018088055A1 (ja) | Image processing device, image processing method, image processing system and program |
JP2003087549A (ja) | Image composition device, image composition method, and computer-readable recording medium recording an image composition processing program |
CN112330787A (zh) | Image annotation method and device, storage medium and electronic device |
JP2005270635A (ja) | Image processing method and processing device |
Kawaguchi et al. | Image registration methods for contralateral subtraction of chest radiographs | |
CA2471168C (en) | Image filling method, apparatus and computer readable medium for reducing filling process in producing animation | |
US20230005136A1 (en) | Determining a location at which a given feature is represented in medical imaging data | |
JP2005149165A (ja) | Image composition method, program, recording medium, image composition device and system |
JP2023142112A (ja) | Image processing device and image processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19943792; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 2021541913; Country of ref document: JP |
ENP | Entry into the national phase | Ref document number: 20227004040; Country of ref document: KR; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 19943792; Country of ref document: EP; Kind code of ref document: A1 |