WO2024069701A1 - Model generating method and defect inspection system - Google Patents

Model generating method and defect inspection system

Info

Publication number
WO2024069701A1
Authority
WO
WIPO (PCT)
Application number
PCT/JP2022/035715
Other languages
French (fr)
Japanese (ja)
Inventor
大地 稲富
貴裕 浦野
淳二 山本
展明 広瀬
野央 波多
洋憲 櫻井
Original Assignee
株式会社日立ハイテク
Application filed by 株式会社日立ハイテク
Priority to PCT/JP2022/035715
Publication of WO2024069701A1

Definitions

  • The present invention relates to a technology that uses a deep learning model to classify defect candidates into defects of interest (DOI) and nuisance.
  • Semiconductor devices are manufactured by subjecting wafers made of silicon and other materials to multiple processes to form fine circuits.
  • visual inspections may be performed before and after each process in order to improve and stabilize yields.
  • Among the equipment used for visual inspection, equipment used for visual inspection of wafers after circuit patterns have been formed detects defects such as pattern defects or foreign objects based on reference images and inspection images obtained by shining lamp light, laser light, or electron beams on areas corresponding to two patterns that were originally formed to have the same shape. Specifically, the difference between the reference image and the inspection image is calculated, and areas where the difference is greater than a separately determined threshold value are detected as defect candidates.
  • In this type of inspection, the lower the threshold value, the smaller the defects that can be detected. However, lowering the threshold value will result in many false reports due to errors during image capture, roughness, minute differences in the pattern, or differences in brightness due to uneven film thickness. Such false reports are called nuisance.
  • In contrast, defects that customers want detected are called DOIs (Defects of Interest).
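The difference-and-threshold detection described above can be sketched in a few lines. This is an illustrative simplification (the function name and pixel values are assumptions for the sketch, not the device's actual algorithm):

```python
def detect_defect_candidates(reference, inspection, threshold):
    """Return the (row, col) positions where the inspection image differs
    from the reference image by more than the threshold."""
    candidates = []
    for r, (ref_row, ins_row) in enumerate(zip(reference, inspection)):
        for c, (ref_px, ins_px) in enumerate(zip(ref_row, ins_row)):
            if abs(ins_px - ref_px) > threshold:
                candidates.append((r, c))
    return candidates

reference = [[100] * 8 for _ in range(8)]
inspection = [row[:] for row in reference]
inspection[4][4] = 130   # a small defect: difference of 30
inspection[2][6] = 105   # harmless brightness variation: difference of 5

# A high threshold misses the defect; a moderate one catches it;
# a very low one also flags the harmless variation, i.e. nuisance.
strict = detect_defect_candidates(reference, inspection, threshold=50)
moderate = detect_defect_candidates(reference, inspection, threshold=20)
noisy = detect_defect_candidates(reference, inspection, threshold=3)
```

This reproduces the trade-off described above: lowering the threshold catches smaller defects but admits nuisance.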
  • Patent Document 1 discloses that in order to eliminate false reports due to patterns, a threshold value is set to detect defect candidates based on the signal variation calculated for each area in the chip.
  • Examples of technology for removing nuisance from defect candidates are disclosed in Patent Documents 2 and 3, for example. In these technologies, removal is achieved by designing features that identify the nuisance.
  • Designing such features requires several weeks to several months of work, so it takes time to respond to changes in what is to be removed.
  • applying deep learning methods to distinguish between DOI and Nuisance makes it easier to respond to changes in what is to be removed.
  • classification rules are automatically generated by teaching example images and training the deep learning model, eliminating the need to design features to identify Nuisance.
  • the image type is a combination of an image of the background pattern and an image of the defect candidate.
  • The model generation method is a method for generating a deep learning model that classifies defect candidate images into DOI and Nuisance. It includes: a first step of preparing in advance a plurality of deep learning models with different hyperparameters and reading the defect candidate images to be used for training; a second step of labeling some of the defect candidate images to create teaching images and evaluation images; a third step of updating the parameters of each of the plurality of deep learning models using the teaching images, evaluating the classification accuracy using the evaluation images, and selecting the best model with the highest classification accuracy from among the plurality of deep learning models; and a fourth step of classifying the defect candidate images that have not been labeled using the best model and selecting, based on the DOI-likeness according to the best model, a portion of the defect candidate images to be labeled. After the fourth step is executed, the second and third steps are executed again.
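As a rough sketch of the four-step loop above, the code below substitutes trivial threshold "models" for the deep learning models and a scalar brightness for each image; the parameter-update part of the third step is elided because the stand-in models are fixed. All names, values, and the deterministic sampling are assumptions for illustration only:

```python
class ThresholdModel:
    """Stand-in for one prepared model: its "hyperparameter" is a
    brightness threshold, and DOI-likeness is 1.0 above it."""
    def __init__(self, threshold):
        self.threshold = threshold
    def doi_likeness(self, x):
        return 1.0 if x > self.threshold else 0.0
    def accuracy(self, samples):
        return sum((self.doi_likeness(x) > 0.5) == (label == "DOI")
                   for x, label in samples) / len(samples)

def generate_model(images, labeler, models, rounds=3, batch=6):
    """Repeat: label a batch, split, pick the best model, then choose the
    next batch spread over that model's DOI-likeness."""
    labeled = []
    to_label = images[::max(1, len(images) // batch)][:batch]  # initial pick
    best = models[0]
    for _ in range(rounds):
        labeled += [(x, labeler(x)) for x in to_label]          # labeling
        teach, evaluate = labeled[::2], labeled[1::2]           # naive split
        best = max(models, key=lambda m: m.accuracy(evaluate))  # best model
        seen = {x for x, _ in labeled}
        unlabeled = sorted((x for x in images if x not in seen),
                           key=best.doi_likeness)
        if not unlabeled:
            break
        step = max(1, len(unlabeled) // batch)
        to_label = unlabeled[::step][:batch]   # cover the DOI-likeness spread
    return best

images = list(range(100))   # scalar brightness stands in for an image
models = [ThresholdModel(t) for t in (30, 50, 60, 80)]
best = generate_model(images, lambda x: "DOI" if x >= 60 else "Nuisance", models)
```

After a few rounds the selected model's threshold settles near the true DOI/Nuisance boundary even though only a small fraction of the images were ever labeled.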
  • FIG. 1 is a diagram illustrating an example of a configuration of a defect inspection device.
  • FIG. 2A is an example of a hardware configuration of an information processing device.
  • FIG. 2B is a diagram illustrating an example of a configuration of an image acquisition unit.
  • FIG. 3 is an example of an inspection target and an acquired image of the defect inspection device.
  • FIG. 4 is an outline of an inspection operation of the defect inspection device.
  • FIG. 5A is an example of a deep learning model generation process.
  • FIG. 5B is another example of a deep learning model generation process.
  • FIG. 6 is an example of a configuration of a defect inspection system.
  • FIG. 7 is an example of a timing chart of a deep learning model generation process.
  • FIG. 8A is an example of an operation screen.
  • FIG. 8B is an example of an operation screen.
  • FIG. 8C is an example of an operation screen.
  • FIG. 8D is an example of an operation screen.
  • FIG. 9 is an example of an operation screen.
  • the defect inspection device of this embodiment is a device that detects defects present on the surface of a sample based on a signal obtained by irradiating the sample with electromagnetic waves such as light or a charged particle beam such as an electron beam.
  • Examples of the defect inspection device of this embodiment include a bright-field inspection device that irradiates the sample with light and detects defects based on the reflected light, a dark-field inspection device that irradiates the sample with light and detects defects based on the scattered light, and an electron beam inspection device (including a device called a review SEM) that detects defects based on secondary electrons obtained by irradiating the sample with an electron beam.
  • the defect inspection device 100 of this embodiment is configured with an image acquisition unit 200, a control unit 102, a calculation unit 103, a storage unit 104, an input/output unit 105, and a communication unit 106 as main functional blocks.
  • the input/output unit 105 is connected to an input/output device 110.
  • the input/output device 110 includes input devices such as a keyboard and a pointing device, and a display device such as a display.
  • the communication unit 106 is connected to a model generation device 610.
  • the image acquisition unit 200 acquires inspection image data of the semiconductor wafer.
  • the calculation unit 103 extracts defect candidate images based on the feature amount of the image transferred from the image acquisition unit 200, performs nuisance removal processing by deep learning described later, and transmits data on the remaining defect candidates (hereinafter referred to as defect candidate data) to the control unit 102.
  • the defect candidate data includes, for example, image data, coordinates on the sample indicating the position where the image was acquired, evaluation value (DOI likeness), and other information.
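As an illustration, the defect candidate data described above might be held in a record like the following; the field names are assumptions for this sketch, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class DefectCandidate:
    """Illustrative container for one defect candidate (hypothetical names)."""
    image: list                    # image data (e.g. rows of pixel values)
    x: float                       # coordinates on the sample indicating
    y: float                       #   the position where the image was acquired
    doi_likeness: float = 0.0      # evaluation value (DOI-likeness)
    extras: dict = field(default_factory=dict)  # other information

candidate = DefectCandidate(image=[[0, 1], [1, 0]], x=12.5, y=47.0,
                            doi_likeness=0.83)
```

A record like this is what the calculation unit would pass to the control unit and, in turn, to the model generation device.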
  • the control unit 102 stores the received defect candidate data in the storage unit 104, transmits data of the received defect candidate data that can be used to create inspection conditions and check results to the input/output unit 105, and transmits data used to generate a model for nuisance removal described later to the communication unit 106.
  • the input/output unit 105 processes the received defect candidate data into a form that can be visually confirmed by humans and transmits it to the input/output device 110.
  • the input/output device 110 outputs the received defect candidate data on a screen.
  • the communication unit 106 transmits the defect candidate data to the model generation device 610.
  • The functional blocks of the control unit 102, the calculation unit 103, the storage unit 104, the input/output unit 105, and the communication unit 106 of the defect inspection device 100 are realized by the information processing device 101.
  • the information processing device 101 includes a processor (CPU) 121, a memory 122, a storage device 123, an input/output port 124, a network interface 125, and a bus 126 as shown in FIG. 2A.
  • the processor 121 functions as a functional unit (functional block) that provides a predetermined function by executing processing according to a program loaded in the memory 122.
  • the storage device 123 stores data and programs used in the functional unit.
  • As the storage device 123, a non-volatile storage medium such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive) is used.
  • the input/output port 124 is connected to the input/output device 110 and executes the exchange of signals between the information processing device 101 and the input/output device 110.
  • the network interface 125 enables communication with other information processing devices via a network.
  • the other information processing devices include a model generating device 610. These components of the information processing device 101 are communicatively connected to each other via a bus 126.
  • FIG. 2B shows an example of the configuration of the image acquisition unit 200 of the defect inspection device 100.
  • the image acquisition unit 200 is composed of a stage 210, an illumination optical system 220, a detection optical system 230, an image sensor 240, and a signal processing unit 250.
  • the sample 211 is an object to be inspected, such as a semiconductor wafer.
  • The stage 210 mounts the sample 211 and is capable of moving within the XY plane, rotating (θ), and moving in the Z direction.
  • the illumination optical system 220 irradiates the sample 211 with light 221.
  • reflected light 222 and scattered light 223 are generated from the sample 211.
  • the detection optical system 230 directs the reflected light 222 or scattered light 223 toward the imaging surface of the image sensor 240.
  • the image sensor 240 captures the scattered light 223.
  • the detection optical system 230 may include a spatial filter that cuts out light resulting from a pattern that is repeated at a constant cycle.
  • the image sensor 240 transmits an imaging signal to the signal processing unit 250.
  • the signal processing unit 250 processes the imaging signal received from the image sensor 240 to generate an observation image of the surface of the sample 211.
  • An observation image obtained by irradiating the sample 211 with light 221 obliquely from above and capturing the scattered light 223 is called a dark-field image.
  • Alternatively, a method may be used in which the observation image is obtained by capturing the reflected light 222 above the sample 211.
  • An observation image obtained by this method is called a bright-field image.
  • Figure 3 shows a schematic plan view of the sample 211.
  • the sample 211 is a patterned semiconductor wafer
  • a circuit pattern of a semiconductor chip is formed on the surface of the sample 211.
  • Each semiconductor chip before being separated from the semiconductor wafer is called a die.
  • the dies have the same circuit pattern.
  • an image of the entire surface of the semiconductor wafer is taken and divided into images of a predetermined size.
  • this divided image is called an observation image. Since the dies D301 to D304 in Figure 3 have the same circuit pattern, if there is no defect, the observation images of the regions P305 to P308 occupying the same coordinates on the die will be the same.
  • For example, if the observation image of region P305 among the observation images of regions P305 to P308 has a large brightness value, the observation image of region P305 is extracted as a defect candidate image.
  • The feature is not limited to brightness; multiple feature values can also be used.
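The die-to-die comparison just described can be illustrated with toy data. Here the pixelwise median over same-coordinate regions serves as the defect-free reference; that choice, and all the values, are assumptions for this sketch rather than the device's actual reference construction:

```python
from statistics import median

# Toy 2x2 "observation images" of regions P305-P308, which occupy the
# same coordinates on dies D301-D304 (gray levels are illustrative).
regions = {
    "P305": [[100, 100], [100, 180]],   # contains a bright defect
    "P306": [[100, 100], [100, 101]],
    "P307": [[ 99, 100], [100, 100]],
    "P308": [[100, 101], [100, 100]],
}

# Since the dies carry the same circuit pattern, the pixelwise median
# over the same-coordinate regions approximates a defect-free reference.
names = list(regions)
reference = [[median(regions[n][r][c] for n in names) for c in range(2)]
             for r in range(2)]

threshold = 20
candidates = [n for n in names
              if max(abs(regions[n][r][c] - reference[r][c])
                     for r in range(2) for c in range(2)) > threshold]
```

Only P305, whose brightness deviates strongly from the reference, is extracted as a defect candidate.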
  • FIG. 4 is a diagram showing an overview of the inspection operation of the defect inspection device 100.
  • the sample 211 is loaded into the defect inspection device 100 (S400).
  • the image acquisition unit 200 captures the observation image (S401).
  • the calculation unit 103 performs a feature calculation process on the acquired observation image, such as calculating the difference by comparing the brightness of the observation image of an area occupying the same coordinates on the die (S402), and extracts defect candidates by comparing with a separately calculated threshold value and performing an operation such as a filter process using the feature (S403).
  • nuisance is removed from the defect candidates extracted in step S403 to identify the defect (S404), and the inspection result is output (S405).
  • the defect inspection device 100 reads the defect candidate image acquired by executing the flow in FIG. 4 (S500). If multiple defect inspection devices each acquire defect candidate images, the defect candidate images from those defect inspection devices may be read.
  • the defect candidate image may be a defect candidate image extracted by inspection at normal inspection sensitivity, or a defect candidate image extracted by inspection with extremely increased inspection sensitivity. In the latter case, the defect candidate image extracted in step S403 will contain a large amount of Nuisance, but this embodiment includes a step (S404) of removing Nuisance using a deep learning model, so the inspection results are not degraded.
  • images are randomly selected from the loaded defect candidate images (S501).
  • Alternatively, images may be selected so that the image features of the selected defect candidate images are spread out. For example, when brightness is used as the feature, the defect candidate images are selected so that they include a wide range of defect candidate images from high brightness to low brightness.
  • the selected defect candidate images are displayed on the GUI of the input/output device 110 (S502), and a user operation is accepted (S503).
  • A label of "DOI" or "Nuisance" is assigned to each defect candidate image according to the user's operation, and classification is performed (S504).
  • It is then checked whether the number of labeled images has reached a prescribed number, which is set to, for example, 5 or more. If the prescribed number is not met, the process returns to the selection of images to be displayed (S501); if it is met, the defect candidate images are divided into training images and evaluation images (S506). This division is performed so that the proportions of each class (images labeled "DOI" and images labeled "Nuisance") contained in the training images and evaluation images are equal.
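The class-balanced division of S506 might look like the following sketch; the function name and data are illustrative:

```python
from collections import defaultdict

def divide_labeled_images(labeled_images, teach_fraction=0.5):
    """Divide (image, label) pairs into training and evaluation sets so
    that the DOI/Nuisance proportions in both sets are equal."""
    by_class = defaultdict(list)
    for item in labeled_images:
        by_class[item[1]].append(item)      # group by the label
    training, evaluation = [], []
    for group in by_class.values():
        cut = round(len(group) * teach_fraction)
        training += group[:cut]
        evaluation += group[cut:]
    return training, evaluation

labeled = ([(f"doi_{i}", "DOI") for i in range(4)]
           + [(f"nui_{i}", "Nuisance") for i in range(8)])
training, evaluation = divide_labeled_images(labeled)
```

Both halves keep the 1:2 DOI-to-Nuisance ratio of the labeled set, so the evaluation images measure the model under the same class balance it was trained on.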
  • a deep learning model for classifying DOI and Nuisance is trained.
  • the optimal network configuration of the deep learning model may differ for each image type.
  • multiple models are prepared in advance in the model generation device 610.
  • As the deep learning model, multiple CNNs (Convolutional Neural Networks) with different hyperparameters are prepared.
  • Multiple CNNs are trained using training images, and the CNN that provides the most accurate judgment is selected as the model for classifying DOI and Nuisance for that image type. This makes it possible to handle a variety of image types. The flow will be explained below.
  • the parameters of a number of models prepared in advance are updated (trained) using the training image (S507), and the evaluation image is classified using these models (S508).
  • the models prepared in advance may be models that have been previously trained using other test results, or may be untrained models, or may include both.
  • the model outputs, for example, the probability that an input image is DOI (hereinafter referred to as DOI-likeness), and a threshold value that can best classify the evaluation image into DOI and Nuisance is also recorded.
  • DOI-likeness may be determined based on the probability of Nuisance.
  • the model with the highest performance when classifying the evaluation image is selected as the best model (S509).
  • the accuracy of the model can be evaluated using the AUC (Area Under Curve) of the ROC curve as an index of the classification accuracy of the model.
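As a concrete illustration of the evaluation step, the AUC and the recorded classification threshold can be computed from DOI-likeness scores roughly as follows. This is a pure-Python sketch; the metric implementation actually used by the device is not specified in the text:

```python
def roc_auc(scores, labels):
    """AUC of the ROC curve via the Mann-Whitney formulation: the
    probability that a randomly chosen DOI scores higher than a randomly
    chosen Nuisance (ties count one half)."""
    doi = [s for s, l in zip(scores, labels) if l == "DOI"]
    nuisance = [s for s, l in zip(scores, labels) if l == "Nuisance"]
    wins = sum((d > n) + 0.5 * (d == n) for d in doi for n in nuisance)
    return wins / (len(doi) * len(nuisance))

def best_threshold(scores, labels):
    """Threshold on DOI-likeness that classifies the evaluation images
    into DOI and Nuisance with the fewest mistakes."""
    return max(sorted(set(scores)),
               key=lambda t: sum((s >= t) == (l == "DOI")
                                 for s, l in zip(scores, labels)))

scores = [0.9, 0.8, 0.4, 0.2]
labels = ["DOI", "DOI", "Nuisance", "Nuisance"]
```

Here the scores separate the classes perfectly, so the AUC is 1.0 and the recorded threshold falls at the lowest DOI score.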
  • the target value may be changed by the user.
  • Images are selected based on some criteria using DOI-likeness. Alternatively, images may be selected based on image features. In either case, images are selected to cover the spread of DOI-likeness or image features in all images and to avoid arbitrary bias. In deep learning, inference results for types of images that have not been learned are ambiguous. As described above, by selecting images to be used as training images so as to cover the entire variability of the image type, and repeating the process of training using the created training images, the accuracy of the model can be efficiently improved. After the selection of the display image (S514) is completed, the image is displayed (S502), an operation is accepted (S503), and a label is assigned (S504), and then the process returns to dividing the image data (S506).
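Selecting images to cover the spread of DOI-likeness can be sketched by binning the likeness range and sampling each bin. This is a hypothetical helper; the text does not prescribe a specific selection criterion:

```python
def select_images_to_label(images, doi_likeness, n_bins=5):
    """Pick images so the selection covers the whole spread of
    DOI-likeness: split [0, 1] into bins and take the first image from
    each non-empty bin, avoiding a bias toward either end."""
    bins = [[] for _ in range(n_bins)]
    for img in images:
        idx = min(int(doi_likeness[img] * n_bins), n_bins - 1)
        bins[idx].append(img)
    return [b[0] for b in bins if b]

likeness = {"a": 0.05, "b": 0.10, "c": 0.45, "d": 0.50, "e": 0.95}
chosen = select_images_to_label(list(likeness), likeness, n_bins=5)
```

The selection spans low, middle, and high likeness rather than clustering where the model is already confident, which is the coverage property described above.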
  • The best model is determined as the deep learning model that classifies defect candidate images of this image type into DOI and Nuisance. Furthermore, all images that have not been labeled may be classified using the best model (S511). Images whose DOI-likeness determined by the best model exceeds the threshold value recorded in step S508 may be output as DOI, and which defect candidate images have been determined to be DOI may be stored in a storage unit (S512).
  • the classification of DOI and Nuisance is performed by comparing the probability that the model outputs a DOI (DOI-likeness) with a threshold value, and the threshold value used is the value recorded in step S508, but the user may adjust the threshold value.
  • In step S520, the classification accuracy of the best model may be displayed on the operation screen, and the user may make a judgment by looking at the numerical value.
  • FIG. 6 shows an example of the system configuration of a defect inspection system 600 of this embodiment.
  • the defect inspection system 600 has one or more inspection devices 100-1 to 100-3, a model generation device 610, and an input/output device 620.
  • the input/output device 620 includes input devices such as a keyboard and a pointing device, and a display device such as a display.
  • the functional blocks of the communication unit 611, control unit 612, calculation unit 613, storage unit 614, and input/output unit 615 of the model generation device 610 are realized by an information processing device.
  • the hardware configuration of the information processing device is the same as that of the information processing device 101 shown in FIG. 2A, so repeated explanations will be omitted.
  • Defect candidate images are acquired using the inspection devices 100-1 to 100-3.
  • one inspection device or some or all of a group of inspection devices may be used.
  • the sample inspected by the inspection device may be, for example, the same semiconductor wafer, or different semiconductor wafers on which dies with the same circuit pattern are formed.
  • The inspection devices 100-1 to 100-3 transmit information on the defect candidates they have generated to the communication unit 611 of the model generation device 610 via a network.
  • Using the transmitted defect candidate images, the model generation device 610 generates a deep learning model that the inspection devices use in step S404 of defect inspection, following the flow shown in FIG. 5A or FIG. 5B.
  • the calculation unit 613 uses the received defect candidate image to train one or more untrained models or pre-trained models stored in the storage unit 614.
  • the storage unit 614 stores a plurality of deep learning models with different hyperparameters in advance.
  • the calculation unit 613 transmits the defect candidate image selected as a labeling target from among the unlabeled images to the input/output unit 615.
  • the input/output unit 615 displays the received defect candidate image on the display device of the input/output device 620 (S502).
  • the input device of the input/output device 620 accepts a user's operation and transmits a signal for labeling to the input/output unit 615.
  • the input/output unit 615 uses the received signal to label the defect candidate image and transmits the label to the calculation unit 613 and the storage unit 614.
  • the calculation unit 613 updates the parameters of the deep learning model using the received labeled image (S507). Each time the parameter update is completed, the calculation unit 613 may evaluate the model using the labeled image stored in the storage unit 614 and check the accuracy.
  • the control unit 612 transmits the best model at that time to each of the inspection devices 100-1 to 100-3 via the communication unit 611.
  • The timing for updating the model used by the inspection devices 100-1 to 100-3 in step S404 is arbitrary. For example, the update may be triggered by the results of an accuracy evaluation performed by the calculation unit 613 using the labeled images stored in the storage unit 614.
  • the inspection devices 100-1 to 100-3 use the received trained model to remove nuisance from the results of subsequent inspections and identify defects.
  • the received deep learning model may be used repeatedly.
  • FIG. 7 shows the user's work (labeling (S504)) and the processing of the model generating device 610 in the deep learning model generation processing shown in FIG. 5A or FIG. 5B, and the transition of the number of labeled images accompanying these processes.
  • the model generating device 610 updates the parameters of multiple deep learning models using labeled images (training images) obtained by the user's labeling work.
  • an inference operation 705 is executed in parallel with the user's second labeling operation 704 to select the next image to be labeled.
  • the inference operations 703 and 705 are inferences based on the same model, so the same inference results are obtained. However, since the image that was the subject of the second labeling operation 704 is not subject to inference in the inference operation 705, an image that has not yet been subjected to labeling processing can be selected as the image to be labeled by the inference operation 705.
  • the next image to be labeled selected by the inference operation 705 is displayed as soon as the user's second labeling operation 704 is completed, so that the user can move on to the third labeling operation 706.
  • the next parameter update operation 707 and inference operation 708 are started in parallel at the same time that the third labeling operation 706 is started.
  • FIGS. 8A-D show an example of the operation screen of this embodiment.
  • the time required for teaching can be shortened.
  • Images 801-804 to be labeled, corresponding label selection buttons 811-814, and an operation completion button group 830 are displayed in a pop-up window 800, which is the operation screen.
  • the pop-up window 800 may display an image switching control button group 820 shown in FIG. 8A.
  • the image display method adjustment button group 840 shown in FIG. 8B may be displayed.
  • The division boundary selection buttons 860-1 to 860-5 shown in FIG. 8D may be displayed. The operation of this operation screen will be described below with reference to FIGS. 8A to 8D.
  • The images to be labeled selected in steps S501 and S514 are displayed on the operation screen in order of their DOI-likeness.
  • the number of images to be displayed is about ten to several tens of images. If all images cannot be displayed at once in the pop-up window 800, a scroll operation is accepted so that the displayed images to be labeled can be switched.
  • Label selection buttons 811 to 814 corresponding to the images to be labeled 801 to 804 are arranged near the displayed images to be labeled, and the label to be assigned to each image to be labeled can be selected by clicking the mouse pointer 810 on the buttons.
  • The label selection buttons can be radio buttons as shown in FIG. 8A, where DOI is represented as "A" and Nuisance as "B". Alternatively, the image to which the opposite label is to be assigned can be clicked directly with the mouse pointer 810, or the buttons can be check boxes so that only DOI or Nuisance can be clicked.
  • As shown in FIG. 8C, a rectangular selection can be performed by dragging the mouse pointer 810, and the labels of the images inside the rectangular area can be inverted.
  • As shown in FIG. 8D, clicking one of the division boundary selection buttons 860-1 to 860-5 with the mouse pointer 810 may invert the labels of all images displayed to the left or right of that boundary. In this case, if an individual label selection button 811 to 814 is clicked before clicking a division boundary selection button 860-1 to 860-5, processing may be performed so that the label of that image is not inverted.
  • When an operation completion button 831 located in the group of operation completion buttons 830 is clicked with the mouse pointer 810, the label specified by the user is assigned to the displayed images 801-804 to be labeled, the images are saved, and the next images to be labeled are displayed. A teaching completion button 832 may also be placed in the group of operation completion buttons 830; clicking this button with the mouse pointer 810 may end the labeling process and allow the user to proceed to the next process.
  • defect candidate images 801-1 to 804-1 to be labeled and corresponding reference die images 801-2 to 804-2 may be displayed alternately.
  • the defect candidate images are grouped based on coordinate information on the die, and the reference die image is selected from the observation images of the area occupying the same coordinates on the die (see FIG. 3).
  • Image switching may be stopped by clicking an image switching on/off button 822 arranged in the image switching control button group 820 shown in FIG. 8A with the mouse pointer 810.
  • the displayed images may be switched at the time input into a switching speed adjustment box 821 arranged in the image switching control button group 820 or set by the button.
  • the image quality of the images to be labeled can be adjusted by the group of image display method adjustment buttons 840 shown in FIG. 8B.
  • the group of image display method adjustment buttons 840 may have a box 841 for setting the maximum brightness and a box 842 for setting the minimum brightness.
  • Brightness may be adjusted by setting pixels equal to or higher than the tone set as the maximum brightness as white and pixels equal to or lower than the tone set as the minimum brightness as black, and allocating pixel values of the tones in between evenly between the maximum tone and the minimum tone on the display of the input/output device 620.
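The brightness allocation just described corresponds to a simple linear level mapping, sketched below (the function name is illustrative):

```python
def display_levels(pixels, min_brightness, max_brightness, max_tone=255):
    """Map tones at or below min_brightness to black (0), tones at or
    above max_brightness to white (max_tone), and spread the tones in
    between evenly across the display range."""
    out = []
    for p in pixels:
        if p <= min_brightness:
            out.append(0)
        elif p >= max_brightness:
            out.append(max_tone)
        else:
            out.append(round((p - min_brightness) * max_tone
                             / (max_brightness - min_brightness)))
    return out
```

With the maximum set to 150 and the minimum to 50, a mid-gray defect candidate is stretched over the full display range, making faint contrast easier to judge during labeling.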
  • Figure 9 shows an example of the configuration of another operation screen.
  • In the left area, a group of labeling target images 911-1 to 911-4 inferred to have a high DOI-likeness is displayed, and in the right area 920, a group of labeling target images 921-1 to 921-4 inferred to have a low DOI-likeness is displayed.
  • Images that have been incorrectly classified can be selected and corrected by clicking, with the mouse pointer 810, the arrows 912-1 to 912-4 or arrows 922-1 to 922-4 that correspond to the images.
  • the present invention is not limited to the above-described embodiment, but includes various modifications.
  • the above-described embodiment has been described in detail to make the present invention easier to understand, and is not necessarily limited to having all of the configurations described.
  • an inspection device that detects defects on a semiconductor wafer on which a pattern is formed is used as an example for explanation, but nuisance can also be removed in an inspection device that detects defects on a semiconductor wafer on which no pattern is formed (hereinafter referred to as a surface inspection device).
  • a surface inspection device is a device that detects defects such as foreign matter and scratches on the surface by applying light such as a laser to the surface of a sample such as a wafer.
  • When inspecting the surface of a wafer using a surface inspection device, the sample is fixed on a rotating stage or a stage that moves in the XYZ directions using, for example, a vacuum chuck or a holding device, and light is applied to the sample surface while the sample is moved; the reflected light and scattered light generated at that time are observed. Foreign matter and scratches on the sample surface are detected by detecting the difference between when the surface of the sample is as expected (defect-free state) and when foreign matter or scratches are present.
  • the surface inspection device detects defects by comparing the signal waveforms obtained, and can remove nuisance using a trained model. For classifying the data to train the model, data obtained by converting waveform information into an image may be used, the waveform information may be used as is, coordinate information on the sample may be used, or a combination of these pieces of information may be used.
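One simple way to convert waveform information into an image, as mentioned above, is to stack successive scan lines into rows. This is an illustrative sketch; the actual conversion used by the device is not specified:

```python
def waveform_to_image(waveform, line_length):
    """Arrange a 1-D detector waveform into a 2-D image, one scan line
    (for example, one stage revolution) per row; leftover samples at the
    end that do not fill a full row are dropped."""
    rows = len(waveform) // line_length
    return [waveform[r * line_length:(r + 1) * line_length]
            for r in range(rows)]

image = waveform_to_image([3, 1, 4, 1, 5, 9, 2, 6], line_length=4)
```

Once the waveform is in image form, the same image-based training flow described earlier can be applied to it.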
  • For example, the model generation may be performed not by a model generation device connected to the defect inspection device but by model generation software in the information processing device of the defect inspection device.

Abstract

The present invention has: a first step for reading defect candidate images used for training deep learning models; a second step for assigning labels to a portion of the defect candidate images to create teaching images and evaluation images; a third step for, after updating the parameters of each of a plurality of the deep learning models using the teaching images, evaluating the classification accuracy using the evaluation images and selecting the best model with the highest classification accuracy among the plurality of deep learning models; and a fourth step for classifying defect candidate images that have not been assigned a label using the best model and selecting a portion of the defect candidate images for label assignment on the basis of the DOI-likeness according to the best model; the second step and the third step are executed again after execution of the fourth step.

Description

Model generating method and defect inspection system
The present invention relates to a technology that uses a deep learning model to classify defect candidates into defects of interest (DOI) and nuisance.
Semiconductor devices are manufactured by subjecting wafers made of silicon and other materials to multiple processes to form fine circuits. In the manufacturing process of semiconductor devices, visual inspections may be performed before and after each process in order to improve and stabilize yields. Among the equipment used for visual inspection, equipment used for visual inspection of wafers after circuit patterns have been formed detects defects such as pattern defects or foreign objects based on reference images and inspection images obtained by shining lamp light, laser light, or electron beams on areas corresponding to two patterns that were originally formed to have the same shape. Specifically, the difference between the reference image and the inspection image is calculated, and areas where the difference is greater than a separately determined threshold value are detected as defect candidates.
In this type of inspection, the lower the threshold, the smaller the defects that can be detected. However, lowering the threshold results in many false reports caused by errors during image acquisition, roughness, minute pattern differences, or brightness differences due to uneven film thickness. Such false reports are called nuisance. In contrast, defects that customers want detected are called DOIs (Defects of Interest).
Patent Document 1 discloses setting a threshold for detecting defect candidates based on the signal variation calculated for each area within the chip, in order to eliminate false reports caused by patterns. Examples of techniques for removing nuisance from defect candidates are disclosed in, for example, Patent Documents 2 and 3.
Patent Document 1: JP 2000-105203 A; Patent Document 2: JP 2012-112915 A; Patent Document 3: JP 2014-149177 A
In the conventional techniques disclosed in Patent Documents 2 and 3, the nuisance contained in the defect candidates is analyzed, and removal is achieved by designing features that identify the nuisance. Therefore, if the nuisance to be removed changes because the type of wafer to be inspected or the process changes, the features must be redesigned.
Designing features requires several weeks to several months of work, so responding to changes in the removal target takes time. On the other hand, applying a deep learning method to distinguish between DOI and nuisance makes it easier to respond to such changes: classification rules are generated automatically by training the deep learning model on example images, eliminating the need to design features for identifying nuisance.
However, when deep learning is used to classify defect candidate images into DOI and nuisance, an optimal deep learning model must be designed for each type of image to be classified in order to achieve high classification accuracy. Here, the image type is the combination of the background pattern image and the defect candidate image. In addition, several thousand labeled images must be prepared as training data for each class (here, DOI and nuisance), and preparing the training data takes time and effort.
A model generating method according to one embodiment of the present invention generates a deep learning model that classifies defect candidate images into DOI and nuisance. A plurality of deep learning models with different hyperparameters is prepared in advance, and the method includes: a first step of reading the defect candidate images used for training the deep learning models; a second step of labeling some of the defect candidate images to create teaching images and evaluation images; a third step of, after updating the parameters of each of the plurality of deep learning models using the teaching images, evaluating the classification accuracy using the evaluation images and selecting the best model with the highest classification accuracy from among the plurality of deep learning models; and a fourth step of classifying the unlabeled defect candidate images using the best model and selecting a portion of the defect candidate images to be labeled based on the DOI-likeness according to the best model. After the fourth step is executed, the second and third steps are executed again.
According to the present invention, even if the nuisance to be removed changes, a deep learning model capable of removing it can be obtained in a short time, so highly sensitive defect inspection can always be performed. Other problems and novel features will become apparent from the description of this specification and the accompanying drawings.
FIG. 1 shows an example of the configuration of the defect inspection device. FIG. 2A shows an example of the hardware configuration of the information processing device. FIG. 2B shows an example of the configuration of the image acquisition unit. FIG. 3 shows an example of the inspection target of the defect inspection device and the acquired images. FIG. 4 is an overview of the inspection operation of the defect inspection device. FIG. 5A shows an example of the deep learning model generation process. FIG. 5B shows another example of the deep learning model generation process. FIG. 6 shows an example of the configuration of the defect inspection system. FIG. 7 shows an example of a timing chart of the deep learning model generation process. The remaining drawings show examples of operation screens.
An embodiment of the present invention will now be described in detail with reference to the drawings. In all drawings used to explain the embodiment, the same members are in principle designated by the same reference numerals, and repeated explanations are omitted.
FIG. 1 shows an example of the configuration of the defect inspection device 100 of this embodiment. The defect inspection device of this embodiment detects defects present on the surface of a sample based on signals obtained by irradiating the sample with electromagnetic waves such as light or with a charged particle beam such as an electron beam. Examples include a bright-field inspection device that irradiates the sample with light and detects defects from the reflected light, a dark-field inspection device that irradiates the sample with light and detects defects from the scattered light, and an electron beam inspection device (including devices called review SEMs) that detects defects from the secondary electrons obtained by irradiating the sample with an electron beam. The defect inspection device 100 of this embodiment comprises, as its main functional blocks, an image acquisition unit 200, a control unit 102, a calculation unit 103, a storage unit 104, an input/output unit 105, and a communication unit 106. The input/output unit 105 is connected to an input/output device 110. The input/output device 110 includes input devices such as a keyboard and a pointing device, and a display device such as a display. The communication unit 106 is connected to a model generation device 610. The image acquisition unit 200 acquires inspection image data of the semiconductor wafer. The calculation unit 103 extracts defect candidate images based on the features of the images transferred from the image acquisition unit 200, performs nuisance removal processing by deep learning, described later, and transmits data on the remaining defect candidates (hereinafter, defect candidate data) to the control unit 102. The defect candidate data includes, for example, the image data as well as information such as the coordinates on the sample indicating the position where the image was acquired and an evaluation value (DOI-likeness). The control unit 102 stores the received defect candidate data in the storage unit 104, transmits the portion of that data usable for creating inspection conditions and checking results to the input/output unit 105, and transmits the data used for generating the nuisance removal model, described later, to the communication unit 106. The input/output unit 105 processes the received defect candidate data into a form that a person can visually check and transmits it to the input/output device 110, which displays it on its screen. The communication unit 106 transmits the defect candidate data to the model generation device 610.
The functional blocks of the defect inspection device 100 (the control unit 102, calculation unit 103, storage unit 104, input/output unit 105, and communication unit 106) are realized by the information processing device 101. As shown in FIG. 2A, the information processing device 101 includes a processor (CPU) 121, a memory 122, a storage device 123, an input/output port 124, a network interface 125, and a bus 126. The processor 121 functions as functional units (functional blocks) that provide predetermined functions by executing processing according to programs loaded into the memory 122. The storage device 123 stores the data and programs used by the functional units; a non-volatile storage medium such as an HDD (Hard Disk Drive) or SSD (Solid State Drive) is used for it. The input/output port 124 is connected to the input/output device 110 and exchanges signals between the information processing device 101 and the input/output device 110. The network interface 125 enables communication via a network with other information processing devices, which include the model generation device 610. These components of the information processing device 101 are communicatively connected to each other via the bus 126.
FIG. 2B shows an example of the configuration of the image acquisition unit 200 of the defect inspection device 100. Here, an optical inspection device that detects defects using light is shown. The image acquisition unit 200 comprises a stage 210, an illumination optical system 220, a detection optical system 230, an image sensor 240, and a signal processing unit 250. The sample 211 is an object to be inspected, such as a semiconductor wafer. The stage 210 carries the sample 211 and can move and rotate (θ) within the XY plane and move in the Z direction.
The illumination optical system 220 irradiates the sample 211 with light 221. When the light 221 strikes the sample 211, reflected light 222 and scattered light 223 are generated from the sample 211.
The detection optical system 230 directs the reflected light 222 or the scattered light 223 toward the imaging surface of the image sensor 240, and the image sensor 240 captures the scattered light 223. The detection optical system 230 may include a spatial filter that cuts out light arising from a pattern repeated at a constant period. The image sensor 240 transmits the imaging signal to the signal processing unit 250.
The signal processing unit 250 processes the imaging signal received from the image sensor 240 and generates an observation image of the surface of the sample 211. An observation image acquired by irradiating the sample 211 with the light 221 from diagonally above is called a dark-field image. Alternatively, an observation image may be obtained by capturing the reflected light 222 from above the sample 211; an observation image acquired in this way is called a bright-field image.
FIG. 3 is a plan view schematically showing the sample 211. When the sample 211 is a patterned semiconductor wafer, circuit patterns of semiconductor chips are formed on its surface. Each semiconductor chip before separation from the wafer is called a die, and the dies have the same circuit pattern as one another. The inspection method is not particularly limited; here, the entire surface of the semiconductor wafer is imaged and divided into images of a predetermined size, each of which is hereinafter called an observation image. Since the dies D301 to D304 in FIG. 3 have the same circuit pattern, if there is no defect, the observation images of the regions P305 to P308, which occupy the same coordinates on their respective dies, are identical. If, on the other hand, only the observation image of region P305 among them has a large brightness value, a foreign object, for example, may exist at the position of region P305, so the observation image of region P305 is extracted as a defect candidate image. Here, an example has been described in which defect candidate images are extracted using the difference in luminance (brightness) between observation images as the feature, but the feature is not limited to this, and a plurality of features can also be used.
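A minimal sketch of this die-to-die comparison, assuming a single brightness value per region and using the median across dies as the reference so that a defect present on only one die stands out. The function name and data layout are illustrative assumptions, not the device's actual processing.

```python
import statistics

def flag_outlier_dies(region_brightness, threshold):
    """Given the brightness of the same region (e.g. P305 to P308) on
    each die, flag dies whose brightness deviates from the median by
    more than the threshold; those regions become defect candidates."""
    reference = statistics.median(region_brightness)
    return [die for die, value in enumerate(region_brightness)
            if abs(value - reference) > threshold]
```

With brightness values [10, 11, 90, 10] and a threshold of 20, only the third die (index 2) is flagged, since the median reference is 10.5.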
FIG. 4 is an overview of the inspection operation of the defect inspection device 100. When the inspection starts, the sample 211 is loaded into the defect inspection device 100 (S400). Next, the image acquisition unit 200 captures observation images (S401). The calculation unit 103 then performs feature calculation processing on the acquired observation images, for example calculating the difference by comparing the brightness of observation images of regions occupying the same coordinates on their dies (S402), and extracts defect candidates by comparison with a separately calculated threshold and by operations such as filtering using the features (S403). In this embodiment, the deep learning model generated by the model generation device 610 is further used to remove nuisance from the defect candidates extracted in step S403 and identify the defects (S404), and the inspection result is output (S405).
An example of the deep learning model generation process is described using the flow in FIG. 5A. This process is executed by the model generation device 610. First, the defect candidate images acquired by the defect inspection device 100 through the flow in FIG. 4 are read (S500). If a plurality of defect inspection devices have each acquired defect candidate images, the defect candidate images from those devices may be read. The defect candidate images may be those extracted by inspection at normal inspection sensitivity, or those extracted by inspection with extremely increased sensitivity. In the latter case, the defect candidate images extracted in step S403 will contain a large amount of nuisance, but since this embodiment includes a step of removing nuisance using the deep learning model (S404), the inspection results are not degraded.
Next, about ten images are randomly selected from the loaded defect candidate images (S501). Alternatively, to make the model training described later more efficient, images may be selected so that the image features of the selected defect candidate images are spread out. For example, if the image feature is the brightness of the defect candidate image, the defect candidate images are selected so that they evenly cover the range from high-brightness to low-brightness images. The selected defect candidate images are displayed on the GUI of the input/output device 110 (S502), and user operations are accepted (S503). A label of "DOI" or "Nuisance" is then assigned to each defect candidate image according to the user's operations, classifying them (S504). It is then determined whether the number of images labeled "DOI" and the number labeled "Nuisance" have each reached a prescribed number (S505). The prescribed number is set to, for example, five or more. If it has not been reached, the process returns to the selection of images to display (S501); if it has, the defect candidate images are divided into teaching images and evaluation images (S506). This division is performed so that the proportions of the classes (images labeled "DOI" and images labeled "Nuisance") are equal between the teaching images and the evaluation images.
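The class-balanced division of step S506 can be sketched as a stratified split. This is a hypothetical sketch; the function name, the 50/50 split fraction, and the (image, label) tuple representation are assumptions.

```python
import random

def stratified_split(labeled_images, train_fraction=0.5, seed=0):
    """Split (image, label) pairs so each class appears in the teaching
    and evaluation sets in the same proportion (a step S506 analogue)."""
    rng = random.Random(seed)
    teaching, evaluation = [], []
    for label in ("DOI", "Nuisance"):
        group = [img for img, lab in labeled_images if lab == label]
        rng.shuffle(group)
        cut = int(len(group) * train_fraction)
        teaching += [(img, label) for img in group[:cut]]
        evaluation += [(img, label) for img in group[cut:]]
    return teaching, evaluation
```

Splitting each class separately, rather than shuffling the whole pool, is what guarantees the DOI/Nuisance ratio stays equal on both sides even for the small labeled sets used here.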
Next, the deep learning model for classifying DOI and nuisance is trained. As mentioned above, the optimal network configuration of the deep learning model may differ for each image type, so a plurality of models is prepared in the model generation device 610 in advance. For example, when a CNN (Convolutional Neural Network) is used for the judgment, a plurality of CNNs with different hyperparameters is prepared. The CNNs are trained using the teaching images, and the CNN that judges most accurately among them is selected as the model for classifying DOI and nuisance for that image type. This makes it possible to handle a variety of image types. The flow is described below.
First, the parameters of the plurality of models prepared in advance are updated (trained) using the teaching images (S507), and the evaluation images are classified using these models (S508). The models prepared in advance may be models pre-trained using other inspection results, untrained models, or a mixture of both. Since a model outputs, for example, the probability that an input image is a DOI (hereinafter, DOI-likeness), the threshold that best classifies the evaluation images into DOI and nuisance is recorded together with it. Note that the DOI-likeness may also be determined based on the probability of being nuisance. The model with the best performance in classifying the evaluation images is then selected as the best model (S509). For example, the AUC (Area Under Curve) of the ROC curve can be used as the index of a model's classification accuracy. It is then determined whether the classification accuracy (e.g., AUC) of the selected best model has reached a preset target value (S510). The target value may be changed by the user.
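Steps S507 to S509 amount to scoring each trained model on the evaluation set and keeping the one with the highest ROC AUC. A self-contained sketch follows, representing each model as a plain scoring function; the rank-based formula is the standard definition of AUC, but all names here are illustrative assumptions.

```python
def roc_auc(labels, scores):
    """AUC of the ROC curve: the probability that a randomly chosen DOI
    (label 1) receives a higher score than a randomly chosen nuisance
    (label 0); ties count as one half."""
    pos = [s for s, lab in zip(scores, labels) if lab == 1]
    neg = [s for s, lab in zip(scores, labels) if lab == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def select_best_model(models, eval_images, eval_labels):
    """Return the model whose DOI-likeness scores best separate the
    evaluation images (a step S509 analogue)."""
    return max(models,
               key=lambda m: roc_auc(eval_labels, [m(x) for x in eval_images]))
```

A model that ranks every DOI above every nuisance reaches AUC 1.0, while a constant-output model scores 0.5, so the `max` reliably prefers the discriminating model.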
If the target value has not been reached, all unlabeled images, i.e., those not yet classified as "DOI" or "Nuisance", are classified with the best model (S513), and several images are selected from each of the following: the images with the highest DOI-likeness (those at or near the maximum), the images whose DOI-likeness is closest to the threshold recorded in step S508 (those at or near the threshold), and the images with the lowest DOI-likeness (those at or near the minimum) (S514). The selection method is not limited to this; several images may instead be selected so that they are evenly spaced when sorted by the DOI-likeness determined by the best model. In other words, images are selected by some criterion based on the DOI-likeness. Alternatively, images may be selected based on image features. In either case, images are selected so as to cover the spread of the DOI-likeness or the image features across all images, without arbitrary bias. In deep learning, inference results for kinds of images that have never been learned are ambiguous. As described above, by selecting teaching images so as to cover the entire variability of the image type and repeating the process of training with the created teaching images, the accuracy of the model can be improved efficiently. After the selection of the images to display (S514) is completed, image display (S502), operation acceptance (S503), and label assignment (S504) are performed, and the process returns to the division of the image data (S506).
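One way to sketch the selection in step S514, assuming each unlabeled image already carries a DOI-likeness score from the best model. The function name and the per-group count k are assumptions for illustration.

```python
def select_for_labeling(images, doi_scores, threshold, k=3):
    """Pick the k least DOI-like images, the k whose DOI-likeness is
    closest to the decision threshold, and the k most DOI-like images
    (duplicates removed), as candidates for the next labeling round."""
    by_score = sorted(range(len(images)), key=lambda i: doi_scores[i])
    near_threshold = sorted(range(len(images)),
                            key=lambda i: abs(doi_scores[i] - threshold))
    picked = []
    for i in by_score[:k] + near_threshold[:k] + by_score[-k:]:
        if i not in picked:
            picked.append(i)
    return [images[i] for i in picked]
```

Sampling the two extremes and the region around the threshold covers both the confidently classified images and the ambiguous ones, which is where labeling helps the model most.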
On the other hand, if the classification accuracy of the best model has reached the preset target value, the best model is determined as the deep learning model that classifies this image type of defect candidate images into DOI and nuisance. Furthermore, all unlabeled images may be classified with the best model (S511). Images whose DOI-likeness as judged by the best model exceeds the threshold recorded in step S508 are output as DOI, and which defect candidate images were judged to be DOI may be stored in the storage unit (S512). The classification into DOI and nuisance is performed by comparing the DOI probability (DOI-likeness) output by the model with the threshold; the value recorded in step S508 is used as the threshold, but the user may adjust it.
Another example of the deep learning model generation process is described using the flow in FIG. 5B. Descriptions of the parts identical to the flow in FIG. 5A are omitted. The difference from the flow in FIG. 5A is that an accuracy judgment by the user (S520) is included. Even when the classification performance of the best model on the evaluation images is judged not to have reached the target, if the user, looking at the images during labeling (S504), feels that the classification accuracy has become high enough and performs an operation indicating completion, the process can proceed to the classification of all images by the best model (S511) and the output of the classification results (S512). In step S520, the classification accuracy of the best model may be displayed on the operation screen so that the user can make the judgment by looking at the value.
FIG. 6 shows an example of the system configuration of the defect inspection system 600 of this embodiment. The defect inspection system 600 has one or more inspection devices 100-1 to 100-3, a model generation device 610, and an input/output device 620. The input/output device 620 includes input devices such as a keyboard and a pointing device, and a display device such as a display. The functional blocks of the model generation device 610 (the communication unit 611, control unit 612, calculation unit 613, storage unit 614, and input/output unit 615) are realized by an information processing device. The hardware configuration of this information processing device is the same as that of the information processing device 101 shown in FIG. 2A, so a repeated description is omitted.
To generate a model, defect candidate images are acquired using the inspection devices 100-1 to 100-3. One inspection device, or some or all of a group of inspection devices, may be used for this. The samples inspected by the inspection devices may be, for example, the same semiconductor wafer, or different semiconductor wafers on which dies with the same circuit pattern are formed. The inspection devices 100-1 to 100-3 each transmit information on the defect candidates they have generated to the communication unit 611 of the model generation device 610 via a network. Using the transmitted defect candidate images, the model generation device 610 generates, following the flow shown in FIG. 5A or FIG. 5B, the deep learning model that the defect inspection device 100 uses in step S404 of the defect inspection.
The calculation unit 613 uses the received defect candidate images to train one or more untrained or pre-trained models stored in the storage unit 614. As described above, a plurality of deep learning models with different hyperparameters is stored in the storage unit 614 in advance. To obtain the labeled images needed for model training, the defect candidate images selected as labeling targets from among the unlabeled images are sent to the input/output unit 615, which displays them on the display device of the input/output device 620 (S502). The input device of the input/output device 620 accepts the user's operations and sends signals for labeling to the input/output unit 615, which uses the received signals to label the defect candidate images and sends them to the calculation unit 613 and the storage unit 614. The calculation unit 613 updates the parameters of the deep learning models using the received labeled images (S507). Each time a parameter update is completed, the calculation unit 613 may evaluate the models using the labeled images stored in the storage unit 614 and check their accuracy. When the user performs an operation to complete training (Yes in S510 or S520), the control unit 612 transmits the best model at that point to each of the inspection devices 100-1 to 100-3 via the communication unit 611.
 検査装置100-1~3が工程S404で使用するモデルを更新するタイミングは任意である。例えば、演算部613が記憶部614に保存されたラベル付き画像を用いて行う精度評価の結果をきっかけにして行ってもよい。検査装置100-1~3は受け取った学習済みモデルを用いて、その後実施する検査の結果からモデルを用いてNuisanceを除去して欠陥を特定する。受け取った深層学習モデルは何度も繰り返し使用してよい。 The timing for updating the model used by the inspection devices 100-1 to 100-3 in step S404 is arbitrary. For example, it may be triggered by the results of an accuracy evaluation performed by the calculation unit 613 using the labeled images stored in the memory unit 614. The inspection devices 100-1 to 100-3 use the received trained model to remove nuisance from the results of subsequent inspections and identify defects. The received deep learning model may be used repeatedly.
 図5Aまたは図5Bに示した深層学習モデル生成処理における、ユーザの作業(ラベル付与(S504))およびモデル生成装置610の処理と、これらの処理に伴うラベル付き画像件数の推移を図7に示す。深層学習モデル生成処理のフローにおいて説明した通り、モデル生成装置610は複数の深層学習モデルのパラメータを、ユーザのラベリング作業によって得られたラベル付き画像(教示用画像)を用いて更新する。パラメータの更新(S507)と、次の教示用画像を選出するために必要な全ラベル無し画像に対するDOIらしさの推論処理(S513)には時間がかかるため、それぞれの操作を直列に実施すると、ユーザの作業(S504)が終了してから、モデル生成装置610がこれらの処理を行って次のラベリング対象画像が表示される(S502)までに待ち時間が発生するおそれがある。このような待ち時間の発生を抑えるため、図7のタイミングチャートのようにユーザの作業とモデル生成装置の処理を並列に実行するとよい。 FIG. 7 shows the user's work (labeling (S504)) and the processing of the model generating device 610 in the deep learning model generation processing shown in FIG. 5A or FIG. 5B, and the transition of the number of labeled images accompanying these processes. As explained in the flow of the deep learning model generation processing, the model generating device 610 updates the parameters of multiple deep learning models using labeled images (training images) obtained by the user's labeling work. Since it takes time to update the parameters (S507) and to infer the DOI-likeness of all unlabeled images (S513) necessary to select the next training image, if each operation is performed in series, there is a risk of a waiting time occurring between the end of the user's work (S504) and the time when the model generating device 610 performs these processes and the next labeling target image is displayed (S502). In order to reduce the occurrence of such a waiting time, it is preferable to execute the user's work and the processing of the model generating device in parallel, as shown in the timing chart of FIG. 7.
 ユーザの第1のラベリング作業701が終了して、モデル生成装置610のパラメータ更新処理702と推論処理703が終わった後、ユーザの第2のラベリング作業704と平行して次のラベリング対象画像を選出するための推論処理705を実行する。推論処理703と推論処理705は同じモデルによる推論であるから推論結果は同一の結果が得られることになるが、推論処理705においては第2のラベリング作業704の対象となった画像は推論対象外となっていることから、推論処理705によりまだラベリング処理がされていない画像をラベリング対象画像として選択することができる。推論処理705によって選択した、次のラベリング対象画像をユーザの第2のラベリング作業704が終了次第表示し、第3のラベリング作業706に移れるようにする。また、第3のラベリング作業706が開始されると同時に、並行して次のパラメータ更新処理707と推論処理708を開始する。 After the user's first labeling operation 701 is completed and the model generating device 610 has completed the parameter update operation 702 and the inference operation 703, an inference operation 705 is executed in parallel with the user's second labeling operation 704 to select the next image to be labeled. The inference operations 703 and 705 are inferences based on the same model, so the same inference results are obtained. However, since the image that was the subject of the second labeling operation 704 is not subject to inference in the inference operation 705, an image that has not yet been subjected to labeling processing can be selected as the image to be labeled by the inference operation 705. The next image to be labeled selected by the inference operation 705 is displayed as soon as the user's second labeling operation 704 is completed, so that the user can move on to the third labeling operation 706. In addition, the next parameter update operation 707 and inference operation 708 are started in parallel at the same time that the third labeling operation 706 is started.
 このように、ラベル付与と学習・推論の繰り返しにおいて、最初の推論を2度行うことによってユーザによるラベリング作業と装置の学習・推論処理を並列して行うことが可能になる。これにより、モデル生成装置610の処理にかかる時間を隠蔽し、処理効率を向上させることができる。なお、最初の推論を2度行う代わりに、最初の推論については、2セットのラベリング作業の対象画像を選択するようにしてもよい。 In this way, by performing the initial inference twice in the repeated labeling and learning/inference, it becomes possible for the labeling work by the user and the learning/inference processing of the device to be performed in parallel. This makes it possible to hide the time required for the processing of the model generating device 610 and improve processing efficiency. Note that instead of performing the initial inference twice, two sets of target images for the labeling work may be selected for the initial inference.
 図8A~Dに本実施例の操作画面の一例を示す。操作画面は操作性の高いGUIとすることにより、教示にかかる時間を短縮することができる。操作画面であるポップアップウインドウ800内に、少なくとも数件のラベリング対象画像801~804及び対応して設けられるラベル選択用ボタン811~814と操作完了用ボタン群830とが表示される。ポップアップウインドウ800には、図8Aに示す画像切り替え制御ボタン群820を表示してもよい。または、図8Bに示す画像表示方法調整ボタン群840を表示してもよい。または、図8Dに示す分割境界選択ボタン860-1~5を表示してもよい。以下、この操作画面の動作について、図8A~Dを参照しながら説明する。 FIGS. 8A-D show an example of the operation screen of this embodiment. By making the operation screen a highly operable GUI, the time required for teaching can be shortened. At least several images 801-804 to be labeled and corresponding label selection buttons 811-814 and an operation completion button group 830 are displayed in a pop-up window 800, which is the operation screen. The pop-up window 800 may display an image switching control button group 820 shown in FIG. 8A. Alternatively, the image display method adjustment button group 840 shown in FIG. 8B may be displayed. Alternatively, the division boundary selection buttons 860-1-5 shown in FIG. 8D may be displayed. The operation of this operation screen will be described below with reference to FIG. 8A-D.
 操作画面には工程S501や工程S514において選択したラベリング対象画像をDOIらしさが高い順に並べて表示する。表示する画像の件数は十~数十枚程度とする。ポップアップウインドウ800内に1度に全ての画像を表示できない場合にはスクロール操作を受け付けるようにして、表示するラベリング対象画像を切り替えられるようにする。表示したラベリング対象画像801~804の近傍にはそれぞれに対応するラベル選択用ボタン811~814を配置し、マウスポインター810を合わせてクリックすることにより各ラベリング対象画像に付与するラベルを選択できるようにする。ラベル選択用ボタンは図8のようなラジオボタン(ここではDOIを「A」と表記し、Nuisanceを「B」と表記している)として、初期状態をDOIまたはNuisanceとしてその逆のラベルを付けたい画像に対してマウスポインター810でクリックを行うようにしてもよいし、チェックボックスとして、DOIまたはNuisanceのみに対してクリックを行うようにしてもよい。図8Cのようにマウスポインター810のドラッグ操作によって矩形選択を行い、矩形領域の内側にある画像のラベルを反転させてもよい。または、図8Dのように分割境界選択ボタン860-1~5のどれかをマウスポインター810でクリックすることによってその境界より左または右に表示された全画像のラベルを反転させてもよい。このとき、分割境界選択ボタン860-1~5をクリックする前に個別のラベル選択用ボタン811~814がクリックされていた場合には、その画像のラベルは反転させない処理を行ってもよい。 The images to be labeled selected in steps S501 and S514 are displayed on the operation screen in order of their DOI resemblance. The number of images to be displayed is about ten to several tens of images. If all images cannot be displayed at once in the pop-up window 800, a scroll operation is accepted so that the displayed images to be labeled can be switched. Label selection buttons 811 to 814 corresponding to the images to be labeled 801 to 804 are arranged near the displayed images to be labeled, and the label to be assigned to each image to be labeled can be selected by clicking the mouse pointer 810 on the buttons. The label selection buttons can be radio buttons as shown in FIG. 8 (here, DOI is represented as "A" and Nuisance as "B"), with the initial state being DOI or Nuisance, and the image to which the opposite label is to be assigned can be clicked with the mouse pointer 810, or they can be check boxes so that only DOI or Nuisance can be clicked. As shown in FIG. 8C, a rectangular selection can be performed by dragging the mouse pointer 810, and the labels of the images inside the rectangular area can be inverted. Alternatively, as shown in FIG. 
8D, clicking one of the division boundary selection buttons 860-1 to 860-5 with the mouse pointer 810 may invert the labels of all images displayed to the left or right of that boundary. In this case, if an individual label selection button 811 to 814 is clicked before clicking the division boundary selection button 860-1 to 860-5, processing may be performed that does not invert the labels of that image.
 操作完了用ボタン群830の中に配置される操作完了ボタン831をマウスポインター810でクリックすると、表示していたラベリング対象画像801~804にユーザが指定したラベルを付与して保存し、次のラベリング対象画像を表示させる。また、操作完了用ボタン群830の中には教示完了ボタン832を配置してもよい。このボタンをマウスポインター810でクリックすることで、ラベル付与工程を終了させて次の工程に進めるようにしてもよい。 When an operation completion button 831 located in the group of operation completion buttons 830 is clicked with the mouse pointer 810, a label specified by the user is assigned to the displayed images 801-804 to be labeled, they are saved, and the next image to be labeled is displayed. Also, a teaching completion button 832 may be placed in the group of operation completion buttons 830. Clicking this button with the mouse pointer 810 may end the labeling process and allow the user to proceed to the next process.
 図8Aのように、ラベリング対象である欠陥候補画像801-1~804-1と対応する参照ダイの画像801-2~804~2を交互に表示してもよい。欠陥候補画像はダイ上の座標情報に基づきグループ化されており、参照ダイの画像は、ダイ上の同一座標を占める領域の観察画像(図3参照)から選ばれる。図8Aに示される画像切り替え制御ボタン群820中に配置される画像切り替えオンオフボタン822をマウスポインター810でクリックすることで、画像の切り替えを停止できるようにしてもよい。また、画像切り替え制御ボタン群820に配置される切り替え速度調整ボックス821に入力された、あるいはボタンによって設定された時間ごとに表示する画像を切り替えるようにしてもよい。 As shown in FIG. 8A, defect candidate images 801-1 to 804-1 to be labeled and corresponding reference die images 801-2 to 804-2 may be displayed alternately. The defect candidate images are grouped based on coordinate information on the die, and the reference die image is selected from the observation images of the area occupying the same coordinates on the die (see FIG. 3). Image switching may be stopped by clicking an image switching on/off button 822 arranged in the image switching control button group 820 shown in FIG. 8A with the mouse pointer 810. Also, the displayed images may be switched at the time input into a switching speed adjustment box 821 arranged in the image switching control button group 820 or set by the button.
 また、ポップアップウインドウ800に表示するラベリング対象画像801~804の明度を調整できるとユーザの判定が容易になる。図8Bに示す画像表示方法調整ボタン群840によってラベリング対象画像の画質を調整可能とする。例えば、画像表示方法調整ボタン群840には、最大明度を設定するボックス841及び最小明度を設定するボックス842を配置する。最大明度として設定された諧調以上の画素を白、最小明度として設定された諧調以下の画素を黒とし、その間の諧調の画素値を入出力装置620のディスプレイの最大諧調から最小諧調までの間に均等に割り当てることで、明度の調整を行ってもよい。 Also, if the brightness of the images 801 to 804 to be labeled displayed in the pop-up window 800 can be adjusted, it becomes easier for the user to make a judgment. The image quality of the images to be labeled can be adjusted by the group of image display method adjustment buttons 840 shown in FIG. 8B. For example, the group of image display method adjustment buttons 840 may have a box 841 for setting the maximum brightness and a box 842 for setting the minimum brightness. Brightness may be adjusted by setting pixels equal to or higher than the tone set as the maximum brightness as white and pixels equal to or lower than the tone set as the minimum brightness as black, and allocating pixel values of the tones in between evenly between the maximum tone and the minimum tone on the display of the input/output device 620.
 なお、図8A~Dの例では、ラベリング対象画像を横方向に並べて表示しているが、縦方向に並べて表示することで構成してもよい。 Note that in the examples of Figures 8A to 8D, the images to be labeled are displayed side-by-side, but they may also be displayed side-by-side vertically.
 図9に別の操作画面の構成例を示す。ポップアップウインドウ900には、左側領域910にDOIらしさが高いと推論されたラベリング対象画像911-1~4のグループを表示し、右側領域920にDOIらしさが低いと推論されたラベリング対象画像921-1~4のグループを表示する。分類が誤っている画像については、画像に対応して設けられている矢印912-1~4または矢印922-1~4をマウスポインター810でクリックすることで選択訂正することができる。 Figure 9 shows an example of the configuration of another operation screen. In the left area 910 of the pop-up window 900, a group of labeling target images 911-1 to 4 that are inferred to have a high DOI-likeness are displayed, and in the right area 920, a group of labeling target images 921-1 to 4 that are inferred to have a low DOI-likeness are displayed. Images that have been incorrectly classified can be selected and corrected by clicking with the mouse pointer 810 on the arrows 912-1 to 4 or arrows 922-1 to 4 that correspond to the images.
 なお、本発明は上記した実施例に限定されるものではなく、様々な変形例が含まれる。例えば、上記した実施例は本発明を分かりやすくするために詳細に説明したものであり、必ずしも説明した全ての構成を備えるものに限定されるものではない。 The present invention is not limited to the above-described embodiment, but includes various modifications. For example, the above-described embodiment has been described in detail to make the present invention easier to understand, and is not necessarily limited to having all of the configurations described.
 例えば、実施例ではパターンが形成された半導体ウェーハ上の欠陥を検出する検査装置を例に説明を行ったが、パターンが形成されていない半導体ウェーハ上の欠陥を検出する検査装置(以下、表面検査装置という)についても同様にNuisanceの除去が可能である。表面検査装置は、例えばウェーハのような試料の表面に例えばレーザのような光をあてて表面の異物や傷などの欠陥を検出する装置である。表面検査装置によってウェーハの表面を検査する際には、試料を例えば真空チャックまたは保持装置を用いて、回転ステージ、またはXYZ方向に移動するステージの上に固定し、試料を移動させながら光を試料表面に当てて、その際に発生する反射光や散乱光を観察する。試料の表面が期待通りの出来(欠陥のない状態)である場合と異物や傷が存在する場合との差違を検出することで試料表面の異物や傷などを検出する。表面検査装置は得られる信号波形を比較することによって、欠陥検出を行うものであって、学習済みのモデルを用いてNuisance除去を行うことができる。モデルのトレーニングのためのクラス分類には波形情報を画像に変換したデータを用いてもよいし、波形情報をそのまま用いてもよいし、試料上の座標情報を用いてもよいし、これらの情報を組み合わせて用いてもよい。 For example, in the embodiment, an inspection device that detects defects on a semiconductor wafer on which a pattern is formed is used as an example for explanation, but nuisance can also be removed in an inspection device that detects defects on a semiconductor wafer on which no pattern is formed (hereinafter referred to as a surface inspection device). A surface inspection device is a device that detects defects such as foreign matter and scratches on the surface by applying light such as a laser to the surface of a sample such as a wafer. When inspecting the surface of a wafer using a surface inspection device, the sample is fixed on a rotating stage or a stage that moves in the XYZ directions using, for example, a vacuum chuck or a holding device, and light is applied to the sample surface while moving the sample, and the reflected light and scattered light generated at that time are observed. Foreign matter and scratches on the sample surface are detected by detecting the difference between when the surface of the sample is as expected (defect-free state) and when foreign matter or scratches are present. The surface inspection device detects defects by comparing the signal waveforms obtained, and can remove nuisance using a trained model. 
For classifying the data to train the model, data obtained by converting waveform information into an image may be used, the waveform information may be used as is, coordinate information on the sample may be used, or a combination of these pieces of information may be used.
 また、実施例ではNuisanceを除去するためのモデルの生成を欠陥検査装置に接続されるモデル生成装置により行う例を説明したが、欠陥検査装置の情報処理装置にモデル生成用のソフトウェアを搭載することによってモデルの生成を行ってもよい。 In the embodiment, an example was described in which the generation of a model for removing nuisance was performed by a model generation device connected to the defect inspection device, but the model may also be generated by installing model generation software in the information processing device of the defect inspection device.
100:欠陥検査装置、101:情報処理装置、102:制御部、103:演算部、104:記憶部、105:入出力部、106:通信部、110:入出力装置、121:プロセッサ(CPU)、122:メモリ、123:ストレージ装置、124:入出力ポート、125:ネットワークインタフェース、126:バス、200:画像取得部、210:ステージ、211:試料、220:照明光学系、221:光、222:反射光、223:散乱光、230:検出光学系、240:イメージセンサ、250:信号処理部、600:欠陥検査システム、610:モデル生成装置、611:通信部、612:制御部、613:演算部、614:記憶部、615:入出力部、620:入出力装置、701,704,706:ラベリング作業、702,707:パラメータ更新処理、703,705,708:推論処理、800,900:ポップアップウインドウ、801,802,803,804,911,921:ラベリング対象画像、810:マウスポインター、811,812,813,814:ラベル選択用ボタン、820:画像切り替え制御ボタン群、821:切り替え速度調整ボックス、822:画像切り替えオンオフボタン、830:操作完了用ボタン群、831:操作完了ボタン、832:教示完了ボタン、840:画像表示方法調整ボタン群、841,842:ボックス、832:教示完了ボタン、860:分割境界選択ボタン、910:左側領域、920:右側領域、912,922:矢印。 100: defect inspection device, 101: information processing device, 102: control unit, 103: calculation unit, 104: memory unit, 105: input/output unit, 106: communication unit, 110: input/output device, 121: processor (CPU), 122: memory, 123: storage device, 124: input/output port, 125: network interface, 126: bus, 200: image acquisition unit, 210: stage, 211: sample, 220: illumination optical system, 221: light, 222: reflected light, 223: scattered light, 230: detection optical system, 240: image sensor, 250: signal processing unit, 600: defect inspection system, 610: model generation device, 611: communication unit, 612: control unit, 613: calculation unit, 614: memory unit, 615: input/output unit, 620: input/output device, 701, 70 4, 706: labeling operation, 702, 707: parameter update processing, 703, 705, 708: inference processing, 800, 900: pop-up window, 801, 802, 803, 804, 911, 921: image to be labeled, 810: mouse pointer, 811, 812, 813, 814: label selection button, 820: image switching control button group, 821: switching speed adjustment box, 822: image switching on/off button, 830: operation completion button group, 831: operation completion button, 832: teaching completion button, 840: image display method adjustment button group, 841, 842: box, 832: teaching completion button, 860: division boundary selection button, 910: left side 
area, 920: right side area, 912, 922: arrows.

Claims (12)

  1.  欠陥候補画像をDOIとNuisanceとにクラス分類する深層学習モデルを生成するモデル生成方法であって、
     ハイパーパラメータの異なる複数の深層学習モデルをあらかじめ用意しておき、
     深層学習モデルのトレーニングに使用する欠陥候補画像を読み込む第1の工程と、
     前記欠陥候補画像の一部についてラベル付与を行って教示用画像及び評価用画像を作成する第2の工程と、
     前記教示用画像により前記複数の深層学習モデルのそれぞれについてパラメータ更新を行った後に、前記評価用画像を用いて分類精度を評価し、前記複数の深層学習モデルのうち最も分類精度の高いベストモデルを選出する第3の工程と、
     前記ベストモデルによりラベル付与を行っていない前記欠陥候補画像の分類を行い、前記ベストモデルによるDOIらしさに基づきラベル付与を行う前記欠陥候補画像の一部を選択する第4の工程を有し、
     前記第4の工程の実行後に、再度前記第2の工程及び前記第3の工程を実行するモデル生成方法。
    A model generation method for generating a deep learning model that classifies defect candidate images into DOI and Nuisance,
    Multiple deep learning models with different hyperparameters are prepared in advance,
    A first step of reading defect candidate images for use in training a deep learning model;
    a second step of labeling a portion of the defect candidate image to generate a teaching image and an evaluation image;
    a third step of evaluating classification accuracy using the evaluation image after updating parameters for each of the plurality of deep learning models using the teaching image, and selecting a best model having the highest classification accuracy from among the plurality of deep learning models;
    a fourth step of classifying the defect candidate images that have not been labeled by the best model, and selecting a portion of the defect candidate images to be labeled based on a DOI likelihood according to the best model;
    A model generating method comprising the steps of: executing the fourth step, and then executing the second step and the third step again.
  2.  請求項1において、
     前記第3の工程で選出された前記ベストモデルが所望の分類精度を満たさない場合には、前記第3の工程に続いて前記第4の工程を実行し、
     前記第3の工程で選出された前記ベストモデルが所望の分類精度を満たす場合には、前記ベストモデルを前記欠陥候補画像の画像種についてDOIとNuisanceとにクラス分類する深層学習モデルとするモデル生成方法。
    In claim 1,
    If the best model selected in the third step does not satisfy a desired classification accuracy, the fourth step is performed following the third step;
    A model generation method in which, if the best model selected in the third step satisfies the desired classification accuracy, the best model is a deep learning model that classifies the image type of the defect candidate image into DOI and Nuisance.
  3.  請求項1において、
     前記第3の工程において、前記ベストモデルが前記評価用画像をDOIとNuisanceとを分類するDOIらしさのしきい値を記録し、
     前記第4の工程においてラベル付与を行う前記欠陥候補画像を、前記ベストモデルによって評価されたDOIらしさが前記しきい値またはその近傍の画像、前記ベストモデルによって評価されたDOIらしさが最高値またはその近傍の画像、及び前記ベストモデルによって評価されたDOIらしさが最低値またはその近傍の画像が含まれるよう選択するモデル生成方法。
    In claim 1,
    In the third step, a threshold value of DOI-likeness at which the best model classifies the evaluation image into DOI and Nuisance is recorded;
    A model generation method in which the defect candidate images to be labeled in the fourth step are selected so as to include images whose DOI-likeness evaluated by the best model is at or near the threshold value, images whose DOI-likeness evaluated by the best model is at or near the highest value, and images whose DOI-likeness evaluated by the best model is at or near the lowest value.
  4.  請求項1において、
     前記第2乃至前記第4の工程を繰り返し実行する場合において、前記第3の工程と、前記第4の工程により選択された前記欠陥候補画像の一部についての前記第2の工程とを並列に実行するモデル生成方法。
    In claim 1,
    A model generation method in which, when the second to fourth steps are repeatedly executed, the third step and the second step are executed in parallel for a portion of the defect candidate images selected by the fourth step.
  5.  請求項1において、
     前記第4の工程の実行後における前記第2の工程において、前記欠陥候補画像の一部がDOIであるか、Nuisanceであるかをユーザに選択させるウインドウを表示装置に表示し、
     前記ウインドウに表示される欠陥候補画像は、前記ベストモデルによって評価されたDOIらしさの順に表示されるモデル生成方法。
    In claim 1,
    In the second step after the execution of the fourth step, a window is displayed on a display device to allow a user to select whether a part of the defect candidate image is a DOI or a Nuisance;
    A model generation method in which defect candidate images displayed in the window are displayed in order of DOI likelihood evaluated by the best model.
  6.  請求項5において、
     前記ウインドウには、前記ウインドウに表示される欠陥候補画像に対応して、当該欠陥候補画像がDOIであるか、Nuisanceであるかを選択するボタンが設けられるモデル生成方法。
    In claim 5,
    A model generating method in which the window is provided with a button for selecting whether the defect candidate image displayed in the window is a DOI or a Nuisance.
  7.  請求項6において、
     前記ウインドウには、前記ウインドウに表示される欠陥候補画像の間に分割境界選択ボタンが設けられ、前記分割境界選択ボタンのいずれかを指定することにより、当該分割境界選択ボタンによって区分される一方の欠陥候補画像がDOIであり、他方の欠陥候補画像がNuisanceであると選択されるモデル生成方法。
    In claim 6,
    A model generation method in which division boundary selection buttons are provided in the window between the defect candidate images displayed in the window, and by specifying one of the division boundary selection buttons, one of the defect candidate images divided by the division boundary selection button is selected as DOI, and the other defect candidate image is selected as Nuisance.
  8.  請求項1において、
     前記第4の工程の実行後における前記第2の工程において、前記欠陥候補画像の一部がDOIであるか、Nuisanceであるかをユーザに特定させるウインドウを表示装置に表示し、
     前記ウインドウに表示される欠陥候補画像は、前記ベストモデルによってDOIらしさが高いと評価されたグループと低いと評価されたグループとに分けて表示されるモデル生成方法。
    In claim 1,
    In the second step after the execution of the fourth step, a window is displayed on a display device to allow a user to specify whether a part of the defect candidate image is a DOI or a Nuisance;
    A model generation method in which the defect candidate images displayed in the window are divided into a group evaluated by the best model as having a high DOI likelihood and a group evaluated as having a low DOI likelihood.
  9.  欠陥検査装置と、前記欠陥検査装置と接続され、前記欠陥検査装置が取得した欠陥候補画像をDOIとNuisanceとにクラス分類する深層学習モデルを生成するモデル生成装置とを備えた欠陥検査システムであって、
     前記欠陥検査装置は、観察画像を取得する画像取得部と、前記観察画像の特徴量を算出して欠陥候補画像を抽出し、抽出された欠陥候補画像のうち前記深層学習モデルを用いてDOIと分類された欠陥候補画像を欠陥として特定する情報処理装置とを備え、
     前記モデル生成装置は、ハイパーパラメータの異なる複数の深層学習モデルがあらかじめ記憶される記憶部と、第1乃至第4の工程を実行する演算部とを備え、
     前記第1の工程は、深層学習モデルのトレーニングに使用する欠陥候補画像を読み込み、
     前記第2の工程は、前記欠陥候補画像の一部についてラベル付与を行って教示用画像及び評価用画像を作成し、
     前記第3の工程は、前記教示用画像により前記複数の深層学習モデルのそれぞれについてパラメータ更新を行った後に、前記評価用画像を用いて分類精度を評価し、前記複数の深層学習モデルのうち最も分類精度の高いベストモデルを選出し、
     前記第4の工程は、前記ベストモデルによりラベル付与を行っていない前記欠陥候補画像の分類を行い、前記ベストモデルによるDOIらしさに基づきラベル付与を行う前記欠陥候補画像の一部を選択し、
     前記第4の工程の実行後に、再度前記第2の工程及び前記第3の工程を実行することを特徴とする欠陥検査システム。
    A defect inspection system including a defect inspection device and a model generation device connected to the defect inspection device and configured to generate a deep learning model for classifying a defect candidate image acquired by the defect inspection device into a DOI and a Nuisance,
    The defect inspection apparatus includes an image acquisition unit that acquires an observation image, and an information processing device that calculates a feature amount of the observation image to extract a defect candidate image, and identifies, from among the extracted defect candidate images, a defect candidate image that is classified as a DOI using the deep learning model, as a defect;
    The model generation device includes a storage unit in which a plurality of deep learning models having different hyperparameters are stored in advance, and a calculation unit that executes first to fourth steps;
    The first step includes reading defect candidate images to be used for training a deep learning model;
    The second step is to label a part of the defect candidate image to generate a teaching image and an evaluation image;
    The third step includes updating parameters of each of the deep learning models using the training image, evaluating classification accuracy using the evaluation image, and selecting a best model having the highest classification accuracy from among the deep learning models;
    The fourth step classifies the defect candidate images that have not been labeled by the best model, and selects a portion of the defect candidate images to be labeled based on the DOI likelihood by the best model;
    A defect inspection system, comprising: a step of executing the second step and the third step again after the fourth step is executed.
  10.  請求項9において、
     前記第3の工程で選出された前記ベストモデルが所望の分類精度を満たさない場合には、前記演算部は前記第3の工程に続いて前記第4の工程を実行し、
     前記第3の工程で選出された前記ベストモデルが所望の分類精度を満たす場合には、前記ベストモデルが前記欠陥検査装置で使用される深層学習モデルとして決定されることを特徴とする欠陥検査システム。
    In claim 9,
    If the best model selected in the third step does not satisfy a desired classification accuracy, the calculation unit executes the fourth step following the third step,
    A defect inspection system characterized in that, if the best model selected in the third step satisfies the desired classification accuracy, the best model is determined as the deep learning model to be used in the defect inspection device.
  11.  請求項9において、
     前記欠陥候補画像は、背景となるパターンの像と欠陥候補の像とを含むことを特徴とする欠陥検査システム。
    In claim 9,
    A defect inspection system, wherein the defect candidate image includes an image of a background pattern and an image of the defect candidate.
  12.  請求項9において、
     前記画像取得部は、電磁波または荷電粒子線を検査対象試料に照射して得られる信号をもとに前記観察画像を取得することを特徴とする欠陥検査システム。
    In claim 9,
    A defect inspection system, characterized in that the image acquisition unit acquires the observed image based on a signal obtained by irradiating an inspection target sample with an electromagnetic wave or a charged particle beam.
PCT/JP2022/035715 2022-09-26 2022-09-26 Model generating method and defect inspection system WO2024069701A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/035715 WO2024069701A1 (en) 2022-09-26 2022-09-26 Model generating method and defect inspection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/035715 WO2024069701A1 (en) 2022-09-26 2022-09-26 Model generating method and defect inspection system

Publications (1)

Publication Number Publication Date
WO2024069701A1 true WO2024069701A1 (en) 2024-04-04

Family

ID=90476645

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/035715 WO2024069701A1 (en) 2022-09-26 2022-09-26 Model generating method and defect inspection system

Country Status (1)

Country Link
WO (1) WO2024069701A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011036846A1 (en) * 2009-09-28 2011-03-31 株式会社日立ハイテクノロジーズ Defect inspection device and defect inspection method
WO2020166076A1 (en) * 2019-02-15 2020-08-20 株式会社日立ハイテク Structure estimation system and structure estimation program
JP2020154602A (en) * 2019-03-19 2020-09-24 日本製鉄株式会社 Active learning method and active learning device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011036846A1 (en) * 2009-09-28 2011-03-31 株式会社日立ハイテクノロジーズ Defect inspection device and defect inspection method
WO2020166076A1 (en) * 2019-02-15 2020-08-20 株式会社日立ハイテク Structure estimation system and structure estimation program
JP2020154602A (en) * 2019-03-19 2020-09-24 日本製鉄株式会社 Active learning method and active learning device

Similar Documents

Publication Publication Date Title
JP7399235B2 (en) pattern inspection system
JP5662146B2 (en) Semiconductor device feature extraction, generation, visualization, and monitoring methods
KR102083706B1 (en) Adaptive sampling for semiconductor inspection recipe creation, defect review, and metrology
JP7200113B2 (en) Systems and methods for training and applying defect classifiers on wafers with deeply stacked layers
US8331651B2 (en) Method and apparatus for inspecting defect of pattern formed on semiconductor device
JP4616864B2 (en) Appearance inspection method and apparatus, and image processing evaluation system
JP5537282B2 (en) Defect inspection apparatus and defect inspection method
JP5543872B2 (en) Pattern inspection method and pattern inspection apparatus
US9311697B2 (en) Inspection method and device therefor
TWI643280B (en) Defect detection using structural information
JP5225297B2 (en) Method for recognizing array region in die formed on wafer, and setting method for such method
JP6078234B2 (en) Charged particle beam equipment
TWI631638B (en) Inspection recipe setup from reference image variation
TWI791806B (en) Mode selection for inspection
TW202105549A (en) Method of defect detection on a specimen and system thereof
US11686689B2 (en) Automatic optimization of an examination recipe
JP2009206453A (en) Manufacturing process monitoring system
TW201725381A (en) Determining one or more characteristics of a pattern of interest on a specimen
JP7169344B2 (en) Defect detection for transparent or translucent wafers
KR20180081820A (en) Registration and design in the die internal inspection Reduction of the noise caused by the peripheral part
KR20220012217A (en) Machine Learning-Based Classification of Defects in Semiconductor Specimens
TW202226027A (en) Deep generative models for optical or other mode selection
JP2001188906A (en) Method and device for automatic image calssification
WO2024069701A1 (en) Model generating method and defect inspection system
JP3752849B2 (en) Pattern defect inspection apparatus and pattern defect inspection method