WO2023238384A1 - Device and method for observing sample - Google Patents

Device and method for observing sample

Info

Publication number
WO2023238384A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
learning image
learning
observing
estimated
Prior art date
Application number
PCT/JP2022/023459
Other languages
French (fr)
Japanese (ja)
Inventor
晟 伊藤
敦 宮本
直明 近藤
洋彦 木附
Original Assignee
株式会社日立ハイテク
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立ハイテク (Hitachi High-Tech Corporation)
Priority to PCT/JP2022/023459 (WO2023238384A1)
Priority to TW112118879 (TW202349338A)
Publication of WO2023238384A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 23/00: Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N 3/00 - G01N 17/00, G01N 21/00 or G01N 22/00
    • H: ELECTRICITY
    • H01: ELECTRIC ELEMENTS
    • H01L: SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 22/00: Testing or measuring during manufacture or treatment; Reliability measurements, i.e. testing of parts without further processing to modify the parts as such; Structural arrangements therefor

Definitions

  • the present invention relates to a method and apparatus for imaging a sample such as a semiconductor wafer using a charged particle microscope or the like to obtain an image for sample observation, and to a method and apparatus for estimating a higher-quality image from the captured image.
  • a device and a method are provided for estimating images whose image quality differs depending on the region when learning the correspondence between the first learning image and the second learning image in advance by a machine learning method.
  • a sample observation device is a device that takes a high-resolution image of the defect location on a wafer based on the defect location coordinates (coordinate information indicating the location of the defect on the sample) output by the inspection device, and outputs the image.
  • a sample observation device using a scanning electron microscope (SEM) (hereinafter referred to as review SEM) is widely used. Automation of observation work is desired on semiconductor mass production lines, and the review SEM is equipped with a function to perform automatic defect image collection processing (ADR: Automatic Defect Review), which automatically collects images at defect positions within a sample.
  • Patent Document 1 describes a method of estimating a high-magnification image from a low-magnification image by learning in advance the relationship between an image captured at low magnification and an image captured at high magnification.
  • One method is to estimate a high-quality image using a captured image as an input by learning the relationship between a captured image and a high-quality image in advance using a machine learning method.
  • the loss between an estimated image and a high-quality image is calculated using a single predefined loss function, and estimation processing parameters are updated to reduce the loss.
  • the desired image quality may vary from region to region, since the area an observer wants to examine differs from observer to observer. In this case, it is difficult to learn to estimate images with different image quality for each region using a single loss function.
  • learning uses a loss function Fi that evaluates the loss between the pixel group Pi of the second learning image and the pixel group Qi of the estimated image included in each region Ri according to a predetermined standard.
  • according to the present invention, the loss calculation method used in training the estimation engine can be changed for each region, which improves the performance of the image estimation engine.
  • FIG. 1 is a diagram showing an example of a learning processing sequence of an estimation engine according to the present invention.
  • FIG. 2 is a diagram for explaining details of the region division process of the embodiment shown in FIG. 1.
  • FIG. 3 is a diagram illustrating an image of a three-layer circuit pattern according to Example 1.
  • FIG. 4 is a diagram illustrating how, according to Example 1, the estimated image and the second learning image corresponding to the first learning image are divided into a defect region R1'' and a defect-free normal region R2'', and the estimation processing parameters are learned using loss function F1'' in R1'' and loss function F2'' in R2''.
  • FIG. 5 is a diagram illustrating a configuration in which, according to Example 1, labels are attached to each pattern and to the background using layout information acquired from design data, and a label image is acquired.
  • FIG. 6 is a diagram for explaining the loss function Fi according to Example 1.
  • FIG. 8 is a diagram showing an example of a neural network having a three-layer structure according to the present embodiment.
  • FIG. 9 is a diagram illustrating a first learning image captured with 1 added frame and a second learning image captured with 64 added frames, according to the present embodiment.
  • This embodiment is a sample observation method and apparatus in which a first learning image and a second learning image corresponding to the first learning image are acquired; estimation processing parameters of an estimation engine that estimates the second learning image from the first learning image are learned using the first and second learning images; and, in learning the estimation processing parameters, the estimated image estimated from the first learning image and the second learning image corresponding to that first learning image are divided into regions Ri (i = 1 to N, N: number of regions), and learning uses a loss function Fi that evaluates the loss between the pixel group Pi of the second learning image and the pixel group Qi of the estimated image included in each region Ri according to a predetermined standard.
  • in Example 1, a first learning image and a second learning image corresponding to the first learning image are acquired; estimation processing parameters of an estimation engine that estimates the second learning image from the first learning image are learned using the first and second learning images; and learning uses a loss function Fi that evaluates the loss between the pixel group Pi of the second learning image and the pixel group Qi of the estimated image included in each region Ri according to a predetermined criterion.
  • the estimation engine updates the estimation parameters based on the loss
  • the image quality of the image output by the estimation engine after learning changes depending on the loss function used during learning.
  • Conventional methods apply a single loss function to the entire image and learn the parameters of the estimation engine, making it difficult to learn to estimate images with different image quality depending on the region.
  • by changing the loss function for each region, it is possible to estimate images with different image quality for each region.
  • FIG. 1 shows an example of the learning processing sequence of the estimation engine according to this embodiment.
  • a sample is imaged and a plurality of pairs of a first learning image (100) and a second learning image (101) are acquired.
  • one or more of the image resolution, the number of added frames, and the focus position are changed so that the second learning image has higher quality than the first learning image. For example, as shown in FIG. 9, a first learning image (900) captured with 1 added frame and a second learning image (901) captured with 64 added frames may be used.
  • first learning image and the second learning image are aligned to obtain an aligned first learning image (103) and an aligned second learning image (106).
  • normalized correlation, pixel difference, or the like may be used as an evaluation value, and alignment may be performed based on the position where the evaluation value is maximum or minimum.
  • if the image resolutions of the two learning images differ, they may be equalized by linear interpolation or the like before alignment.
  • the aligned first learning image (103) is input to the estimation engine (104) to obtain an estimated image (105).
  • Region division processing (107) is applied to the aligned second learning image (106) to obtain regions R1 (108) to region RN (109).
  • for each region Ri (i = 1 to N, N: number of regions), a different loss function Fi is used to calculate the loss between the pixel group Pi of the aligned second learning image (106) and the pixel group Qi of the estimated image (105).
  • loss is calculated using loss function F1
  • loss is calculated using loss function FN different from F1.
  • the loss of the entire image is calculated (112) by adding up the losses (110, 111) for each region, and the estimation processing parameters of the estimation engine (104) are updated based on this loss.
  • estimation engine (104) such as a deep neural network.
  • a convolutional neural network described in Non-Patent Document 1 may be used.
  • a neural network having a three-layer structure as shown in FIG. 8 may be used.
  • Y is the input image
  • F1(Y) and F2(Y) are intermediate data
  • F(Y) is the estimation result.
  • the intermediate data and final results are calculated using Equations 1 to 3.
  • "*" represents a convolution operation.
  • W1 is n1 filters of size c0 × f1 × f1
  • c0 represents the number of channels of the input image
  • f1 represents the size of the spatial filter.
  • An n1-dimensional feature map is obtained by convolving the input image with the n1 filters of size c0 × f1 × f1.
  • B1 is an n1-dimensional vector and is the bias component corresponding to the n1 filters.
  • W2 is a filter of size n1 × f2 × f2
  • B2 is an n2-dimensional vector
  • W3 is a filter of size n2 × f3 × f3
  • B3 is a c3-dimensional vector.
  • the parameters to be adjusted through learning by the estimation engine (104) are W1, W2, W3, B1, B2, and B3.
  • there are various methods of region division, depending on the purpose.
  • Specific examples of region division methods include the following: (A1) division into edge regions and non-edge regions; (A2) division into upper-layer pattern regions and lower-layer pattern regions; (A3) division into defect regions and normal (non-defect) regions; (A4) labelling the design data in advance and dividing regions based on the labels. In this specification, (A1) to (A4) above are described as examples of region division, but region division is not limited to these; regions may be divided by any method according to the purpose or a user's specification.
  • a brightness-gradient image is obtained by applying a differential filter to the second learning image; using the brightness-gradient image and an edge-determination threshold, the first learning image, the corresponding estimated image, and the second learning image are divided into an edge region R1 and a non-edge region R2 of the circuit pattern (A1); and the estimation processing parameters are learned using loss function F1 in R1 and loss function F2 in R2.
  • a brightness gradient image (203) is obtained by applying a differential filter (202) to a second learning image (201).
  • Various methods can be used for the differential filter; for example, the sum of the absolute difference between the target pixel and the pixel to its right and the absolute difference between the target pixel and the pixel below it may be used as the filter output.
  • the edge determination threshold value (204) may be set by the user or may be set according to a predetermined rule.
  • This feature (3) will be supplementarily explained using an image taken of a three-layer circuit pattern shown in FIG. 3 as an example.
  • when a semiconductor wafer having a multilayer circuit pattern is imaged with a charged particle microscope, upper-layer patterns generally appear brighter and lower-layer patterns darker. The layer determination process can therefore separate the layers by converting the brightness of the second learning image (301) into N values based on the layer-determination threshold (302). By learning with a different loss function for each per-layer region, it becomes possible to learn to estimate images whose quality differs by layer, that is, images with high contrast in a specific layer. (4) In images for sample observation, when a defect exists in a sample, visibility of the defect region is important, and an image in which the defect has high contrast may be required. In regions without defects, it is important to estimate an image with little noise so that a defect is not erroneously recognized, and the items regarded as important in image estimation may differ depending on the presence or absence of a defect.
  • a first learning image and a second learning image are acquired based on defect coordinate information; a reference image that corresponds to the second learning image and contains no defect is acquired; a defect region in the second learning image is detected using the second learning image and the reference image; based on the detected region, the estimated image and the second learning image corresponding to the first learning image are divided into a defect region R1'' and a defect-free normal region R2''; and the estimation processing parameters are learned using loss function F1'' in R1'' and loss function F2'' in R2''.
  • in FIG. 4, a reference image (402) is obtained by imaging a region in which a circuit pattern similar to that of the second learning image (401) is formed; the second learning image and the reference image are compared to detect the defect region (403), and the second learning image is divided into a defect region R1'' (404) and a defect-free region R2'' (405).
  • the present invention further attaches labels to the design data, aligns either the first learning image or the second learning image with the design data, divides the second learning image into regions based on the alignment result and the labels, and learns the estimation processing parameters using a per-region loss function.
  • design data refers to layout data on the sample to be observed.
  • the design data is data in which edge information of a designed shape of a semiconductor circuit pattern is written as coordinate data.
  • This feature (5) will be supplemented using FIG. 5.
  • labels are attached to each pattern and background (502), and a label image is acquired (503).
  • three types of labels are provided.
  • the design data (501) and the second learning image (504) are matched (505) to obtain the position (506) corresponding to the second learning image.
  • regions R1''' to R3''' (508-510) are obtained by performing region division processing (507) on the second learning image (504), based on the design data (506) and the label image (503) corresponding to the second learning image.
  • the contour information of the pattern may be acquired by extracting edges from the second learning image, for example, and the corresponding position may be searched for by comparing it with design data.
  • the loss F1(P1, Q1) is calculated as a weighted sum of the element losses shown in the following formula:
  • F1(P1, Q1) = w11·f11(P1, Q1) + w12·f12(P1, Q1) + ... + w1M·f1M(P1, Q1)
  • the element loss function is a method of calculating the loss (element loss) corresponding to each element of image quality.
  • in the loss function Fi, for example, the element loss function for brightness is the absolute squared error (Pi - Qi)^2, and the element loss function for contrast is the absolute squared error of the brightness gradient (Pi' - Qi')^2, where Pi' is the brightness gradient of pixel group Pi and Qi' is the brightness gradient of pixel group Qi.
  • FIG. 7 shows an example of a Graphical User Interface (GUI) when implementing this embodiment.
  • Region division is performed on the second learning image (700), and the defect region (701), edge region (702), non-edge region (703), first-layer region (704), and second-layer region (705) are displayed. The thresholds used for region division can be set (706, 707). Each region can be configured (708, 710, 712), and the degree of importance of each element in each region can be set (709, 711, 713). By setting the weight (604) of each element loss function based on the importance set here, the settings are reflected in the loss.

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Power Engineering (AREA)
  • Analytical Chemistry (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

In the prior art, when the ideal image quality of an estimated image varies by region, it is difficult to perform learning with a single loss function such that images of different image quality are estimated for individual regions. In this method and device for observing a sample, a first learning image and a second learning image corresponding to the first learning image are acquired; estimation processing parameters for an estimation engine that estimates the second learning image from the first learning image are learned using the first and second learning images; during learning of the estimation processing parameters, the estimated image estimated from the first learning image and the second learning image corresponding to the first learning image are divided into regions Ri (i = 1 to N, where N is the number of regions); and the loss between the pixel group Pi of the second learning image and the pixel group Qi of the estimated image included in each region Ri is learned using a loss function Fi that performs evaluation by a predetermined standard.

Description

Sample observation device and method
 The present invention relates to a method and apparatus for imaging a sample such as a semiconductor wafer with a charged particle microscope or the like to obtain an image for sample observation, and to a method and apparatus for estimating a higher-quality image from the captured image. A device and a method are provided that, when the correspondence between a first learning image and a second learning image is learned in advance by a machine learning method, estimate images whose image quality differs from region to region.
 In semiconductor wafer manufacturing, it is important for profitability to start up the manufacturing process quickly and move to a high-yield mass production system early. For this purpose, various inspection, observation, and measurement devices are introduced into production lines.
 A sample observation device takes a high-resolution image of a defect position on a wafer based on the defect position coordinates (coordinate information indicating the position of the defect on the sample) output by an inspection device, and outputs the image. Sample observation devices using a scanning electron microscope (SEM), hereinafter referred to as review SEMs, are widely used. Automation of observation work is desired on semiconductor mass-production lines, and a review SEM is equipped with a function for automatic defect image collection (ADR: Automatic Defect Review), which automatically collects images at defect positions within a sample.
 There are many types of circuit pattern structures formed on semiconductor wafers, and the defects that occur vary in type and position, so it is important to capture and output high-quality images in which defects and circuit patterns are highly visible. For this reason, an image for sample observation is obtained by applying image processing techniques to the raw captured image, produced from the signal of the review SEM's detector, to increase visibility. As one way to improve visibility, many methods have been proposed that learn in advance the correspondence between images of different quality and, when an image of one quality is input, estimate an image of the other quality. For example, Japanese Patent Application Laid-Open No. 2018-137275 (Patent Document 1) describes a method of estimating a high-magnification image from a low-magnification image by learning in advance the relationship between images captured at low and high magnification.
Patent Document 1: Japanese Patent Application Laid-Open No. 2018-137275
 In a sample observation device, it is important to output images in which defects, circuit patterns, and the like are highly visible, and image processing is applied to captured images to increase their visibility. One such method learns the relationship between captured images and high-quality images in advance with a machine learning technique, and then estimates a high-quality image from a captured image given as input. In general, machine learning methods calculate the loss between the estimated image and the high-quality image with a single predefined loss function and update the estimation processing parameters so that the loss decreases. During observation, however, the region an observer wants to examine differs from observer to observer, so the required image quality may differ from region to region. In this case, it is difficult with a single loss function to learn to estimate images whose quality differs by region.
 To solve the above problem, the present invention provides a visual inspection method and a visual inspection system having the following features. That is, in a sample observation method and apparatus: a first learning image and a second learning image corresponding to the first learning image are acquired; estimation processing parameters of an estimation engine that estimates the second learning image from the first learning image are learned using the first learning image and the second learning image; in learning the estimation processing parameters, the estimated image estimated from the first learning image and the second learning image corresponding to that first learning image are divided into regions Ri (i = 1 to N, N: number of regions); and learning uses a loss function Fi that evaluates the loss between the pixel group Pi of the second learning image and the pixel group Qi of the estimated image included in each region Ri according to a predetermined standard.
 According to the present invention, the method of calculating the loss in training the estimation engine can be changed for each region, which improves the performance of the image estimation engine.
 FIG. 1 is a diagram showing an example of a learning processing sequence of an estimation engine according to the present invention.
 FIG. 2 is a diagram for explaining details of the region division process of the embodiment shown in FIG. 1.
 FIG. 3 is a diagram illustrating an image of a three-layer circuit pattern according to Example 1.
 FIG. 4 is a diagram illustrating how, according to Example 1, the estimated image and the second learning image corresponding to the first learning image are divided into a defect region R1'' and a defect-free normal region R2'', and the estimation processing parameters are learned using loss function F1'' in R1'' and loss function F2'' in R2''.
 FIG. 5 is a diagram showing a configuration in which, according to Example 1, labels are attached to each pattern and to the background using layout information acquired from design data, and a label image is acquired.
 FIG. 6 is a diagram for explaining the loss function Fi according to Example 1; the loss function Fi is defined as a weighted sum of element losses calculated by a plurality of element loss functions fij (j = 1 to M, M: number of element losses), and the weight wij of each element loss is changed for each region Ri of the second learning image.
 FIG. 7 is a diagram showing an example of a Graphical User Interface (GUI) according to the present embodiment.
 FIG. 8 is a diagram showing an example of a neural network having a three-layer structure according to the present embodiment.
 FIG. 9 is a diagram showing a first learning image captured with 1 added frame and a second learning image captured with 64 added frames, according to the present embodiment.
 Embodiments for implementing the present invention will be described below with reference to the drawings. The embodiments described below do not limit the claimed invention, and not all of the elements described in the embodiments, or their combinations, are necessarily essential to the solution of the invention.
 This embodiment is a sample observation method and apparatus in which a first learning image and a second learning image corresponding to the first learning image are acquired; estimation processing parameters of an estimation engine that estimates the second learning image from the first learning image are learned using the first and second learning images; in learning the estimation processing parameters, the estimated image estimated from the first learning image and the second learning image corresponding to that first learning image are divided into regions Ri (i = 1 to N, N: number of regions); and learning uses a loss function Fi that evaluates the loss between the pixel group Pi of the second learning image and the pixel group Qi of the estimated image included in each region Ri according to a predetermined standard.
(1)
 That is, in Example 1, a first learning image and a second learning image corresponding to the first learning image are acquired; estimation processing parameters of an estimation engine that estimates the second learning image from the first learning image are learned using the first and second learning images; in learning the estimation processing parameters, the estimated image estimated from the first learning image and the second learning image corresponding to that first learning image are divided into regions Ri (i = 1 to N, N: number of regions); and learning uses a loss function Fi that evaluates the loss between the pixel group Pi of the second learning image and the pixel group Qi of the estimated image included in each region Ri according to a predetermined criterion.
 To supplement this feature: since the estimation engine updates its parameters based on the loss, the image quality of the image output by the trained estimation engine depends on the loss function used during training. Conventional methods apply a single loss function to the entire image when learning the parameters of the estimation engine, which makes it difficult to learn to estimate images whose quality differs by region. In this embodiment, changing the loss function per region makes it possible to estimate images of different image quality for each region.
 The features of this embodiment are explained in more detail below.
 FIG. 1 shows an example of the learning processing sequence of the estimation engine according to this embodiment. In the figure, a sample is imaged and a plurality of pairs of a first learning image (100) and a second learning image (101) are acquired.
 When acquiring the first and second learning images, one or more of the image resolution, the number of added frames, and the focus position are changed so that the second learning image has higher quality than the first learning image. For example, as shown in FIG. 9, a first learning image (900) captured with 1 added frame and a second learning image (901) captured with 64 added frames may be used.
 Next, the first and second learning images are aligned to obtain an aligned first learning image (103) and an aligned second learning image (106). For the alignment, normalized correlation, pixel differences, or the like may be used as an evaluation value, and alignment may be performed at the position where the evaluation value is maximum or minimum. If the first and second learning images have different resolutions, the resolutions may be equalized by linear interpolation or the like before alignment.
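 To make the alignment step concrete, the following is a minimal sketch (Python with OpenCV; the helper name and the search-range parameter are hypothetical, and single-channel images of equal resolution are assumed) that uses normalized correlation as the evaluation value and shifts the first learning image to the best-matching position.

```python
import cv2
import numpy as np

def align_first_to_second(img1, img2, max_shift=16):
    """Align the first learning image to the second by maximizing
    normalized cross-correlation.

    img1, img2: 2-D arrays of the same resolution (resample first,
    e.g. with cv2.resize, if the resolutions differ).
    max_shift: largest displacement, in pixels, that is searched.
    """
    h, w = img2.shape
    # Crop the centre of img1 so it can slide within img2 by +/- max_shift.
    patch = img1[max_shift:h - max_shift, max_shift:w - max_shift].astype(np.float32)
    # Normalized correlation surface over all candidate displacements.
    ncc = cv2.matchTemplate(img2.astype(np.float32), patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(ncc)                  # position of the maximum
    dx, dy = best[0] - max_shift, best[1] - max_shift   # displacement of img1
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(img1, shift, (w, h))          # aligned first image
```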
 The aligned first learning image (103) is input to the estimation engine (104) to obtain an estimated image (105). Region division processing (107) is applied to the aligned second learning image (106) to obtain regions R1 (108) to RN (109). Next, for each region Ri (i = 1 to N, N: number of regions), a different loss function Fi is used to calculate the loss between the pixel group Pi of the aligned second learning image (106) and the pixel group Qi of the estimated image (105).
 That is, in region R1 the loss is calculated with loss function F1, and in region RN with a loss function FN different from F1. The loss of the entire image is calculated (112) by summing the per-region losses (110, 111), and the estimation processing parameters of the estimation engine (104) are updated based on this loss.
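 The per-region loss computation and summation can be sketched as follows (a minimal PyTorch example; the function and variable names are hypothetical, and the region masks and loss functions Fi are assumed to come from the region division step):

```python
import torch

def total_loss(estimated, target, region_masks, loss_fns):
    """Sum of per-region losses (steps 108 to 112 of FIG. 1).

    estimated: estimated image (105), tensor of shape (B, 1, H, W)
    target:    aligned second learning image (106), same shape
    region_masks: list of N boolean tensors of shape (B, 1, H, W),
                  one mask per region Ri
    loss_fns:  list of N callables, Fi(Pi, Qi) -> scalar tensor
    """
    loss = torch.zeros((), device=estimated.device)
    for mask, fi in zip(region_masks, loss_fns):
        p_i = target[mask]      # pixel group Pi of region Ri
        q_i = estimated[mask]   # pixel group Qi of region Ri
        loss = loss + fi(p_i, q_i)
    return loss

# One training step of the estimation engine (104) would then be roughly:
#   loss = total_loss(engine(x1_aligned), x2_aligned, masks, fns)
#   loss.backward(); optimizer.step()
```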
 Various existing machine learning techniques can be used as the estimation engine (104), for example a deep neural network.
 As one method using a deep neural network, the convolutional neural network described in Non-Patent Document 1 may be used. Specifically, a neural network with the three-layer structure shown in FIG. 8 may be used. Here, Y is the input image, F1(Y) and F2(Y) are intermediate data, and F(Y) is the estimation result. The intermediate data and the final result are calculated by Equations 1 to 3, where "*" denotes a convolution operation:
F1(Y) = max(0, W1*Y + B1)    (Equation 1)
F2(Y) = max(0, W2*F1(Y) + B2)    (Equation 2)
F(Y) = W3*F2(Y) + B3    (Equation 3)
 Here, W1 is n1 filters of size c0 × f1 × f1, where c0 is the number of channels of the input image and f1 is the spatial filter size; convolving the input image with the n1 filters of size c0 × f1 × f1 yields an n1-dimensional feature map. B1 is an n1-dimensional vector, the bias components corresponding to the n1 filters. Similarly, W2 is a filter of size n1 × f2 × f2, B2 is an n2-dimensional vector, W3 is a filter of size n2 × f3 × f3, and B3 is a c3-dimensional vector. Of these, c0 and c3 are determined by the numbers of channels of the first and second learning images; f1, f2, n1, and n2 are hyperparameters chosen by the user before the learning sequence, for example f1 = 9, f2 = 5, n1 = 128, and n2 = 64. The parameters adjusted by training the estimation engine (104) are W1, W2, W3, B1, B2, and B3.
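 Under these hyperparameters, the network of FIG. 8 and Equations 1 to 3 can be sketched as below (PyTorch; the text does not give f3 or the number of second-layer filters, so f3 = 5 and n2 second-layer output filters are assumptions, and the padding is added only to keep the output the same size as the input):

```python
import torch.nn as nn

class ThreeLayerEstimator(nn.Module):
    """F(Y) = W3*max(0, W2*max(0, W1*Y + B1) + B2) + B3 (Equations 1 to 3)."""

    def __init__(self, c0=1, c3=1, f1=9, f2=5, f3=5, n1=128, n2=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c0, n1, f1, padding=f1 // 2),  # W1, B1: n1 filters of c0 x f1 x f1
            nn.ReLU(),                               # max(0, .) of Equation 1
            nn.Conv2d(n1, n2, f2, padding=f2 // 2),  # W2, B2 (Equation 2)
            nn.ReLU(),                               # max(0, .) of Equation 2
            nn.Conv2d(n2, c3, f3, padding=f3 // 2),  # W3, B3 (Equation 3)
        )

    def forward(self, y):
        return self.net(y)  # estimation result F(Y)
```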
 There are various methods of region division, depending on the purpose. Specific examples include the following:
(A1) division into edge regions and non-edge regions
(A2) division into upper-layer pattern regions and lower-layer pattern regions
(A3) division into defect regions and normal (non-defect) regions
(A4) labelling the design data in advance and dividing regions based on the labels
 In this specification, (A1) to (A4) above are described as examples of region division, but region division is not limited to these; regions may be divided by any method according to the purpose or a user's specification.
(2)
 In images for sample observation, it is important in edge regions, such as the contours of circuit patterns, to estimate images with strong contrast and fine features; in flat parts without relief (non-edge regions), on the other hand, it is important that the estimated image has weak contrast and that fine elements such as noise are removed, so that they are not misrecognized as defects or circuit patterns.
 To address this, in this embodiment, in addition to feature (1) described above, the region division processing (107) obtains a brightness-gradient image by applying a differential filter to the second learning image; using the brightness-gradient image and an edge-determination threshold, the estimated image corresponding to the first learning image and the second learning image are divided into an edge region R1 and a non-edge region R2 of the circuit pattern (A1); and the estimation processing parameters are learned using loss function F1 in R1 and loss function F2 in R2.
 This feature is supplemented with reference to FIG. 2. In the figure, a brightness-gradient image (203) is obtained by applying a differential filter (202) to the second learning image (201). Various differential filters can be used; for example, the sum of the absolute difference between the target pixel and the pixel to its right and the absolute difference between the target pixel and the pixel below it may be used as the filter output. Next, edge determination processing (205) binarizes the brightness-gradient image (203) using the edge-determination threshold (204), dividing the image into an edge region R1 (206) and a non-edge region R2 (207). The edge-determination threshold (204) may be set by the user or according to a predetermined rule. With these features, learning is performed with different loss functions for the edge and non-edge regions, making it possible to learn to estimate images of different quality in each.
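 A minimal NumPy sketch of this division (the function name is hypothetical, a single-channel float image is assumed, and the differential filter is the right-neighbour/lower-neighbour variant described above):

```python
import numpy as np

def split_edge_regions(img2, threshold):
    """Divide an image into edge region R1 (206) and non-edge region R2 (207).

    img2: 2-D float array (the aligned second learning image, 201)
    threshold: edge-determination threshold (204)
    """
    grad = np.zeros_like(img2, dtype=np.float32)
    # Differential filter (202): |right-neighbour difference| plus
    # |lower-neighbour difference| at each pixel.
    grad[:, :-1] += np.abs(img2[:, 1:] - img2[:, :-1])
    grad[:-1, :] += np.abs(img2[1:, :] - img2[:-1, :])
    edge = grad >= threshold      # binarization (205)
    return edge, ~edge            # boolean masks for R1 and R2
```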
(3)
 In images for sample observation, when the sample has a multilayer circuit pattern, it may be necessary to estimate an image with high contrast in a specific layer. To address this, in this embodiment, in addition to the features described in (1) and (2), the region division processing (107) uses the second learning image and a layer-determination threshold to divide the estimated image corresponding to the first learning image and the second learning image into regions R1' of the first-layer pattern through RN' of the Nth-layer pattern, and the estimation processing parameters are learned using loss function Fi' in each Ri' (i = 1 to N, N: number of regions).
 This feature (3) is supplemented using the image of a three-layer circuit pattern shown in FIG. 3 as an example. By inputting the second learning image (301) and the layer-determination threshold (302) into the layer determination process (303), the image is divided into a first-layer pattern region R1' (304), a second-layer pattern region R2' (305), and a third-layer pattern region R3' (306).
 For example, when a semiconductor wafer having a multilayer circuit pattern is imaged with a charged particle microscope, upper-layer patterns generally appear brighter and lower-layer patterns darker. The layer determination process can therefore separate the layers by converting the brightness of the second learning image (301) into N values based on the layer-determination threshold (302). With this feature, by learning with a different loss function for each per-layer region, it becomes possible to learn to estimate images whose quality differs by layer, that is, images with high contrast in a specific layer.
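 A minimal NumPy sketch of this N-value conversion (the function name is hypothetical; ascending layer-determination thresholds are assumed):

```python
import numpy as np

def split_layer_regions(img2, layer_thresholds):
    """Divide an image into per-layer regions R1' to RN' by brightness.

    img2: 2-D array (the second learning image, 301)
    layer_thresholds: ascending layer-determination thresholds (302);
    len(layer_thresholds) + 1 brightness bins are produced. Since upper
    layers image brighter, the brightest bin corresponds to the top layer.
    """
    labels = np.digitize(img2, layer_thresholds)   # N-value conversion
    return [labels == i for i in range(len(layer_thresholds) + 1)]
```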
(4)
 In images for sample observation, when a defect exists in the sample, visibility of the defect region is important, and an image in which the defect has high contrast may be required. In regions without defects, it is important to estimate an image with little noise so that a defect is not erroneously recognized; thus, the items regarded as important in image estimation may differ depending on the presence or absence of a defect.
 To address this, in this embodiment, in addition to the features described above, a first learning image and a second learning image are acquired based on defect coordinate information; a reference image that corresponds to the second learning image and contains no defect is acquired; a defect region in the second learning image is detected using the second learning image and the reference image; based on the detected region, the estimated image corresponding to the first learning image and the second learning image are divided into a defect region R1'' and a defect-free normal region R2''; and the estimation processing parameters are learned using loss function F1'' in R1'' and loss function F2'' in R2''.
 This feature (4) is supplemented using FIG. 4. A reference image (402) is obtained by imaging a region in which a circuit pattern similar to that of the second learning image (401) is formed; the second learning image and the reference image are compared to detect the defect region (403), and the second learning image is divided into a defect region R1'' (404) and a defect-free region R2'' (405).
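 A minimal NumPy sketch of this defect/normal division (hypothetical names; the reference image is assumed to be already aligned with the second learning image, and a simple per-pixel difference threshold stands in for the detection step 403):

```python
import numpy as np

def split_defect_regions(img2, reference, threshold):
    """Divide an image into defect region R1'' (404) and normal region R2'' (405).

    img2: aligned second learning image (401)
    reference: defect-free image of the same circuit pattern (402)
    threshold: difference above which a pixel is treated as defective
    """
    diff = np.abs(img2.astype(np.float32) - reference.astype(np.float32))
    defect = diff >= threshold    # defect detection (403)
    return defect, ~defect        # boolean masks for R1'' and R2''
```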
 本特徴により欠陥領域R1’’と欠陥の無い領域R2’’において異なる損失関数を用いた学習を可能となり、欠陥の視認性が高く、欠陥の無い領域ではノイズの少ない画像を推定するように学習することが可能となる。
(5)
 In images for sample observation, it may be necessary to estimate an image with high contrast within a specific pattern. To address this, in addition to the features described above, the present invention attaches labels to the design data, aligns either the first learning image or the second learning image with the design data, divides the second learning image into regions Ri''' (i = 1 to N, N: number of regions) based on the alignment result and the labels, and learns the estimation processing parameters using loss function Fi''' in each region Ri'''.
 Here, design data means layout data of the sample to be observed. For example, if the sample is a semiconductor, the design data is data in which the edge information of the designed shapes of the semiconductor circuit patterns is written as coordinate data.
 This feature (5) is supplemented using FIG. 5. Using the layout information (501) acquired from the design data, labels are attached to each pattern and to the background (502), and a label image is acquired (503). In the example of FIG. 5, three types of labels are used. The design data (501) and the second learning image (504) are matched (505) to obtain the position (506) corresponding to the second learning image. Regions R1''' to R3''' (508-510) are obtained by performing region division processing (507) on the second learning image (504), based on the design data (506) and the label image (503) corresponding to the second learning image.
 In the corresponding-position acquisition process, the contour information of the pattern may be obtained, for example, by extracting edges from the second learning image, and the corresponding position may be searched for by comparison with the design data. With this feature, by learning with a different loss function for each region corresponding to a label attached to the design data, it becomes possible to learn to estimate an image in which a specific pattern has high contrast.
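 A minimal sketch of the label-based division of FIG. 5 (hypothetical names; the label image is assumed to have been rendered from the layout data, and the offset of the second learning image within the design data is assumed to come from the matching step):

```python
import numpy as np

def split_label_regions(label_img, offset, shape):
    """Region masks R1''', R2''', ... for the labels attached to the design data.

    label_img: integer label image (503) rendered from layout data (501)
    offset: (row, col) of the second learning image within the design
            data, found by the position matching step (505, 506)
    shape: (height, width) of the second learning image
    """
    top, left = offset
    h, w = shape
    window = np.asarray(label_img)[top:top + h, left:left + w]
    # One boolean mask per label value, e.g. R1''' to R3''' (508-510).
    return [window == k for k in range(int(window.max()) + 1)]
```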
(6)
 Image quality has many elements, such as brightness and contrast, and is difficult to express with a single loss function. Therefore, in this embodiment, in addition to the feature described in (1), the loss function Fi is defined as a weighted sum of element losses calculated by a plurality of element loss functions fij (j = 1 to M, M: number of element losses), and the weight wij of each element loss value is changed for each region Ri of the second learning image.
 This feature (6) is supplemented using FIG. 6. The explanation uses the loss F1(P1, Q1) as an example, but it applies to any loss Fi(Pi, Qi). With the estimated image (600) and the second learning image as input, the element loss functions f11 to f1M (602-603) are calculated. The loss F1(P1, Q1) is then calculated (605) from the element losses (602-603) and the weights w11 to w1M (604) of the element losses, which indicate how much importance is placed on each element loss.
 Here, the loss F1(P1, Q1) is calculated as the weighted sum of the element losses shown below:
F1(P1, Q1) = w11·f11(P1, Q1) + w12·f12(P1, Q1) + ... + w1M·f1M(P1, Q1)
 An element loss function is a method of calculating the loss (element loss) corresponding to one element of image quality. For example, in the loss function Fi, the element loss function for brightness is the absolute squared error (Pi - Qi)^2, and the element loss function for contrast is the absolute squared error of the brightness gradient (Pi' - Qi')^2, where Pi' is the brightness gradient of pixel group Pi and Qi' is the brightness gradient of pixel group Qi.
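 A minimal PyTorch sketch of this weighted element-loss combination (hypothetical names; the brightness-gradient operator for the contrast term is assumed to be supplied by the caller):

```python
import torch

def f_brightness(p, q):
    """Brightness element loss: absolute squared error (Pi - Qi)^2."""
    return torch.mean((p - q) ** 2)

def make_f_contrast(grad):
    """Contrast element loss: absolute squared error of the brightness
    gradients (Pi' - Qi')^2, where `grad` computes the gradient."""
    def f_contrast(p, q):
        return torch.mean((grad(p) - grad(q)) ** 2)
    return f_contrast

def loss_fi(p_i, q_i, element_fns, weights):
    """Fi(Pi, Qi) = wi1*fi1(Pi, Qi) + ... + wiM*fiM(Pi, Qi)  (FIG. 6)."""
    return sum(w * f(p_i, q_i) for w, f in zip(weights, element_fns))
```

 The weights wij would then be set separately for each region Ri, for example from the importance settings entered in the GUI of FIG. 7.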
 With the above features, by calculating and learning, for each region Ri, a weighted sum of element losses computed by a plurality of element loss functions, it is possible to learn to estimate an image that reflects multiple elements of image quality.
(7)
FIG. 7 shows an example of a Graphical User Interface (GUI) when implementing this embodiment.
 In this GUI, the determination thresholds used for region division can be set and the degree of importance of each region can be specified. Region division is performed on the second learning image (700), and the defect region (701), edge region (702), non-edge region (703), first-layer region (704), and second-layer region (705) are displayed. The thresholds used for region division can be set (706, 707). Each region can be configured (708, 710, 712), and the degree of importance of each element in each region can be set (709, 711, 713). By setting the weight (604) of each element loss function based on the importance set here, the settings are reflected in the loss.
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the embodiments and modifications described above have been explained in detail to make the present invention easy to understand, and the invention is not necessarily limited to configurations having all of the described elements.

Claims (8)

  1. A sample observation method and apparatus, wherein:
    a first learning image and a second learning image corresponding to the first learning image are acquired;
    estimation processing parameters of an estimation engine that estimates the second learning image from the first learning image are learned using the first learning image and the second learning image;
    in learning the estimation processing parameters, an estimated image estimated from the first learning image and the second learning image corresponding to that first learning image are divided into regions Ri (i = 1 to N, N: number of regions); and
    learning uses a loss function Fi that evaluates the loss between the pixel group Pi of the second learning image and the pixel group Qi of the estimated image included in each region Ri according to a predetermined standard.
  2. The sample observation method and apparatus according to claim 1, wherein,
    when the first learning image and the second learning image are captured, one or more of the image resolution, the number of added frames, and the focus position are varied, and the second learning image has higher quality than the first learning image.
  3. The sample observation method and apparatus according to claim 2, wherein
    the estimation engine uses a convolutional neural network, and
    the estimation processing parameters are updated by error backpropagation so that the loss calculated by the loss function decreases.
  4. The sample observation method and apparatus according to claim 3, wherein
    a brightness-gradient image is obtained by applying a differential filter to the second learning image; using the brightness-gradient image and an edge-determination threshold, the estimated image estimated from the first learning image and the second learning image are divided into an edge region R1 and a non-edge region R2 of the circuit pattern; and the estimation processing parameters are learned using loss function F1 in R1 and loss function F2 in R2.
  5. The sample observation method and apparatus according to claim 3, wherein,
    using the second learning image and a layer-determination threshold, the estimated image estimated from the first learning image and the second learning image are divided into regions R1' of the first-layer pattern through RN' of the Nth-layer pattern, and the estimation processing parameters are learned using loss function Fi' (i = 1 to N, N: number of regions) in each Ri'.
  6. The sample observation method and apparatus according to claim 3, wherein
    the first learning image and the second learning image are obtained based on defect coordinate information,
    a reference image corresponding to the second learning image and containing no defect is obtained,
    a defect region in the second learning image is detected using the second learning image and the reference image, and based on the detected region, the estimated image estimated from the first learning image and the second learning image are divided into a defect region R1'' and a defect-free normal region R2'', and the estimation processing parameters are learned using a loss function F1'' in R1'' and a loss function F2'' in R2''.
  7. The sample observation method and apparatus according to claim 3, wherein
    labels are assigned to design data,
    either the first learning image or the second learning image is aligned with the design data,
    the second learning image is divided into regions Ri''' (i = 1 to N, N: number of regions) based on the alignment result and the labels, and
    the estimation processing parameters are learned using a loss function Fi''' in each region Ri'''.
  8. The sample observation method and apparatus according to any one of claims 3 to 7, wherein
    the loss function Fi is defined as a weighted sum of element losses calculated by a plurality of element loss functions fij (j = 1 to M, M: number of element losses), with weights that differ for each region Ri of the second learning image.
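
The region-wise learning of claim 1 can be illustrated with a short sketch. The following assumes PyTorch; the names region_masks and loss_fns, and the use of boolean masks the same shape as the images, are illustrative assumptions rather than anything the claims specify.

```python
import torch

def region_wise_loss(estimated, target, region_masks, loss_fns):
    # For each region Ri, compare the pixel group Qi of the estimated image
    # with the pixel group Pi of the second learning image under that
    # region's own criterion Fi, then accumulate the per-region losses.
    total = estimated.new_zeros(())
    for mask, fn in zip(region_masks, loss_fns):
        total = total + fn(estimated[mask], target[mask])
    return total
```

Here loss_fns might pair, for example, torch.nn.functional.l1_loss in one region with mse_loss elsewhere; the claims leave the per-region criterion open.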
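A sketch of the parameter update of claim 3, again assuming PyTorch and reusing region_wise_loss from the sketch above. The small CNN is a stand-in, since the claims do not fix an architecture.

```python
import torch
import torch.nn as nn

# Stand-in estimation engine: a small convolutional network mapping the
# first (lower-quality) learning image to an estimate of the second image.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(first_image, second_image, region_masks, loss_fns):
    estimated = model(first_image)   # estimated image from the first learning image
    loss = region_wise_loss(estimated, second_image, region_masks, loss_fns)
    optimizer.zero_grad()
    loss.backward()                  # error backpropagation of the region-wise loss
    optimizer.step()                 # update the estimation processing parameters
    return loss.item()
```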
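For claim 4, the edge/non-edge division might look as follows, assuming NumPy and SciPy, with a Sobel operator standing in for the differential filter; edge_threshold corresponds to the edge determination threshold.

```python
import numpy as np
from scipy import ndimage

def edge_regions(second_image, edge_threshold):
    # Brightness gradient image of the second learning image.
    img = second_image.astype(float)
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    r1 = grad >= edge_threshold   # edge region R1 of the circuit pattern
    r2 = ~r1                      # non-edge region R2
    return r1, r2
```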
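Claim 5's division by layer can be sketched as brightness binning, on the assumption (made here, not in the claim) that each layer pattern images at a distinct brightness band; layer_thresholds plays the role of the layer determination threshold, with N-1 thresholds yielding the N layer regions.

```python
import numpy as np

def layer_regions(second_image, layer_thresholds):
    # digitize assigns each pixel to a brightness band, yielding the
    # regions R1' .. RN' of the first to N-th layer patterns.
    bands = np.digitize(second_image, np.sort(np.asarray(layer_thresholds)))
    return [bands == i for i in range(len(layer_thresholds) + 1)]
```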
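For claim 6, a simple absolute-difference comparison against the defect-free reference image can stand in for the defect detection step, which the claim does not pin down; defect_threshold is an assumed tuning parameter.

```python
import numpy as np

def defect_regions(second_image, reference_image, defect_threshold):
    # Pixels deviating strongly from the defect-free reference are treated
    # as the defect region R1''; the rest is the normal region R2''.
    diff = np.abs(second_image.astype(float) - reference_image.astype(float))
    r1 = diff >= defect_threshold
    r2 = ~r1
    return r1, r2
```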
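Claim 7 builds the regions from labeled design data. The sketch below assumes the design data has already been rasterized into a per-pixel label map and that alignment has produced a (row, column) offset; the alignment itself (for example, template matching) is outside the sketch, and all names here are illustrative.

```python
import numpy as np

def design_label_regions(label_map, offset, num_labels):
    # Shift the rasterized design-data labels by the alignment offset so
    # they overlay the learning image, then split into regions Ri''' by label.
    dy, dx = offset
    aligned = np.roll(label_map, shift=(dy, dx), axis=(0, 1))
    return [aligned == i for i in range(num_labels)]
```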
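Finally, claim 8's per-region weighted sum of element losses might be written as below, assuming PyTorch tensors; elem_fns and weights are illustrative, for example an L1 term and a gradient-difference term weighted more heavily in edge regions.

```python
def weighted_region_loss(estimated, target, region_masks, elem_fns, weights):
    # Fi = sum_j w[i][j] * f_ij(Qi, Pi): each region Ri has its own weight
    # vector over the shared element loss functions f_ij.
    total = estimated.new_zeros(())
    for i, mask in enumerate(region_masks):
        qi, pi = estimated[mask], target[mask]
        total = total + sum(w * f(qi, pi) for w, f in zip(weights[i], elem_fns))
    return total
```
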
PCT/JP2022/023459 2022-06-10 2022-06-10 Device and method for observing sample WO2023238384A1 (en)

Priority Applications (2)

Application Number   Publication           Priority Date   Filing Date   Title
PCT/JP2022/023459    WO2023238384A1 (en)   2022-06-10      2022-06-10    Device and method for observing sample
TW112118879A         TW202349338A (en)     2022-06-10      2023-05-22    Device and method for observing sample

Applications Claiming Priority (1)

Application Number   Publication           Priority Date   Filing Date   Title
PCT/JP2022/023459    WO2023238384A1 (en)   2022-06-10      2022-06-10    Device and method for observing sample

Publications (1)

Publication Number    Publication Date
WO2023238384A1 (en)   2023-12-14

Family

ID=89117809

Family Applications (1)

Application Number   Publication           Priority Date   Filing Date   Title
PCT/JP2022/023459    WO2023238384A1 (en)   2022-06-10      2022-06-10    Device and method for observing sample

Country Status (2)

Country Link
TW (1) TW202349338A (en)
WO (1) WO2023238384A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020113769A * 2017-02-20 2020-07-27 Hitachi High-Tech Corporation Image estimation method and system
JP2022515353A * 2018-12-31 2022-02-18 ASML Netherlands B.V. Fully automated SEM sampling system for improving electron beam images
WO2021038815A1 * 2019-08-30 2021-03-04 Hitachi High-Tech Corporation Measurement system, method for generating learning model to be used when performing image measurement of semiconductor including predetermined structure, and recording medium for storing program for causing computer to execute processing for generating learning model to be used when performing image measurement of semiconductor including predetermined structure

Also Published As

Publication number Publication date
TW202349338A (en) 2023-12-16

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 22945883
     Country of ref document: EP
     Kind code of ref document: A1