WO2020050072A1 - Learning device, inference device, and trained model - Google Patents
- Publication number
- WO2020050072A1 (PCT/JP2019/033168)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- processing
- image data
- learning
- unit
- Prior art date
Classifications
- G06N3/08—Learning methods
- G06N20/00—Machine learning
- C23C14/54—Controlling or regulating the coating process
- C23C16/52—Controlling or regulating the coating process
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06T7/0004—Industrial image inspection
- G06V10/764—Image or video recognition using classification, e.g. of video objects
- G06V10/82—Image or video recognition using neural networks
- G06V10/32—Normalisation of the pattern dimensions
- H01L21/00—Processes or apparatus adapted for the manufacture or treatment of semiconductor or solid state devices or of parts thereof
- H01L21/02—Manufacture or treatment of semiconductor devices or of parts thereof
- H01L21/3065—Plasma etching; Reactive-ion etching
- H01L21/67011—Apparatus for manufacture or treatment
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
Definitions
- the present invention relates to a learning device, an inference device, and a learned model.
- the present disclosure improves the simulation accuracy of a manufacturing process.
- the learning device has, for example, the following configuration: an acquisition unit that acquires image data of an object and data relating to processing performed on the object; and a learning unit that inputs the image data of the object and the data relating to the processing to a learning model, and trains the learning model so that the output of the learning model approaches image data of the object after the processing.
- FIG. 1 is a diagram illustrating an example of the overall configuration of the simulation system.
- FIG. 2 is a diagram illustrating an example of a hardware configuration of each device included in the simulation system.
- FIG. 3 is a diagram illustrating an example of the learning data.
- FIG. 4 is a diagram illustrating an example of a functional configuration of a learning unit of the learning device according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of a functional configuration of a data shaping unit of the learning device according to the first embodiment.
- FIG. 6 is a diagram illustrating a specific example of a process performed by the data shaping unit of the learning device according to the first embodiment.
- FIG. 7 is a diagram illustrating a specific example of a process using the learning model for dry etching of the learning device according to the first embodiment.
- FIG. 8 is a flowchart illustrating the flow of the learning process.
- FIG. 9 is a diagram illustrating an example of a functional configuration of an execution unit of the inference apparatus.
- FIG. 10 is a diagram showing the simulation accuracy of the learned model for dry etching.
- FIG. 11 is a diagram showing the simulation accuracy of the learned model for deposition.
- FIG. 12 is a diagram illustrating an example of a functional configuration of a data shaping unit of the learning device according to the second embodiment.
- FIG. 13 is a diagram illustrating a specific example of a process performed by the learning model for dry etching of the learning apparatus according to the second embodiment.
- FIG. 14 is a diagram illustrating an example of a functional configuration of a learning unit of the learning device according to the third embodiment.
- FIG. 15 is a diagram illustrating an example of a functional configuration of a data shaping unit of the learning device according to the fourth embodiment.
- FIG. 16 is a diagram illustrating an application example of the inference apparatus.
- FIG. 1 is a diagram illustrating an example of the overall configuration of the simulation system.
- the simulation system 100 includes a learning device 120 and an inference device 130. Note that the various data and information used in the simulation system 100 are obtained from a semiconductor manufacturer or a semiconductor manufacturer's database.
- predetermined parameter data (described later in detail) is set in the semiconductor manufacturing apparatus 110, a plurality of unprocessed wafers (objects) are loaded, and processing corresponding to each manufacturing process (for example, dry etching or deposition) is performed.
- the measurement apparatus 111 generates, for example, pre-processing image data (two-dimensional image data) indicating a cross-sectional shape at each position of the pre-processing wafer.
- the measuring device 111 includes a scanning electron microscope (SEM), a length-measuring scanning electron microscope (CD-SEM), a transmission electron microscope (TEM), an atomic force microscope (AFM), and the like. Further, various metadata, such as the microscope magnification, are assumed to be associated with the pre-processing image data generated by the measuring device 111.
- the processed wafer is unloaded from the semiconductor manufacturing apparatus 110.
- the semiconductor manufacturing apparatus 110 measures the environment during processing when the unprocessed wafer is processed according to each manufacturing process, and stores the measurements as environment information.
- some of the processed wafers are transported to the measuring device 112, and their shape is measured at various positions. The measuring device 112 thereby generates, for example, post-processing image data (two-dimensional image data) indicating the cross-sectional shape at each position of the processed wafer.
- the measuring device 112 includes a scanning electron microscope (SEM), a length-measuring scanning electron microscope (CD-SEM), a transmission electron microscope (TEM), an atomic force microscope (AFM), and the like.
- the pre-processing image data generated by the measuring device 111, the parameter data set in the semiconductor manufacturing apparatus 110 together with the stored environment information, and the post-processing image data generated by the measuring device 112 are collected by the learning device 120 as learning data. The learning device 120 stores the collected learning data in the learning data storage unit 123.
- the parameter data set in the semiconductor manufacturing apparatus 110 and the stored environment information are arbitrary data relating to the processing performed when the semiconductor manufacturing apparatus 110 executes a process corresponding to a manufacturing process on an unprocessed wafer (object). By using such arbitrary data relating to a process as learning data, factors correlated with each event of the manufacturing process can be reflected in the machine learning.
- a data shaping program and a learning program are installed in the learning device 120, and the learning device 120 functions as the data shaping unit 121 and the learning unit 122 by executing the programs.
- the data shaping unit 121 is an example of a processing unit.
- the data shaping unit 121 reads the learning data stored in the learning data storage unit 123, and processes a part of the read learning data into a predetermined format suitable for input to the learning model by the learning unit 122.
- the learning unit 122 performs machine learning on the learning model using the read learning data (including the learning data processed by the data shaping unit 121), and generates a learned model of the semiconductor manufacturing process.
- the learned model generated by the learning unit 122 is provided to the inference device 130.
- a data shaping program and an execution program are installed in the inference device 130, and the inference device 130 functions as the data shaping unit 131 and the execution unit 132 by executing the programs.
- the data shaping unit 131 is an example of a processing unit.
- the data shaping unit 131 acquires the pre-processing image data generated by the measuring device 111, together with the parameter data and environment information input to the inference device 130. Further, the data shaping unit 131 processes the acquired parameter data and environment information into a predetermined format suitable for input to the learned model by the execution unit 132.
- the execution unit 132 inputs the pre-processing image data, together with the parameter data and environment information processed into the predetermined format by the data shaping unit 131, to the learned model and executes a simulation, thereby outputting (inferring) post-processing image data (the simulation result).
- the user of the inference apparatus 130 compares the post-processing image data output by the execution unit 132 through simulation with the learned model against the corresponding post-processing image data generated by the measurement apparatus 112, thereby verifying the learned model.
- the user of the inference apparatus 130 can calculate the simulation error of the learned model and verify the simulation accuracy.
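As an illustrative sketch (not part of the disclosure), the verification step can be expressed as a simple pixel-wise error measure; the image representation (2-D lists of pixel values) and the mean-absolute-error metric are assumptions for illustration:

```python
def simulation_error(simulated, measured):
    """Mean absolute pixel error between a simulated post-processing
    image and the measured post-processing image (both 2-D lists)."""
    if len(simulated) != len(measured):
        raise ValueError("image heights differ")
    total, count = 0.0, 0
    for row_s, row_m in zip(simulated, measured):
        if len(row_s) != len(row_m):
            raise ValueError("image widths differ")
        for ps, pm in zip(row_s, row_m):
            total += abs(ps - pm)
            count += 1
    return total / count

# Hypothetical 2x2 images: the error is the average pixel difference.
sim = [[0.9, 0.1], [0.2, 0.8]]
mea = [[1.0, 0.0], [0.0, 1.0]]
err = simulation_error(sim, mea)  # (0.1 + 0.1 + 0.2 + 0.2) / 4 = 0.15
```

A lower error for the learned model corresponds to higher simulation accuracy in the sense used above.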
- FIG. 2 is a diagram illustrating an example of a hardware configuration of each device included in the simulation system.
- FIG. 2 is a diagram illustrating an example of a hardware configuration of the learning device.
- the learning device 120 has a CPU (Central Processing Unit) 201 and a ROM (Read Only Memory) 202.
- the learning device 120 has a RAM (Random Access Memory) 203 and a GPU (Graphics Processing Unit) 204.
- the CPU 201 and the GPU 204, together with memories such as the ROM 202 and the RAM 203, form a so-called computer (also referred to as a processor, processing circuit, or processing circuitry).
- the learning device 120 further includes an auxiliary storage device 205, an operation device 206, a display device 207, an I / F (Interface) device 208, and a drive device 209.
- the hardware of the learning device 120 is mutually connected via a bus 210.
- the CPU 201 is an arithmetic device that executes various programs (for example, a data shaping program, a learning program, and the like) installed in the auxiliary storage device 205.
- the ROM 202 is a non-volatile memory and functions as a main storage device.
- the ROM 202 stores various programs, data, and the like necessary for the CPU 201 to execute various programs installed in the auxiliary storage device 205.
- the ROM 202 stores a boot program such as BIOS (Basic Input / Output System) and EFI (Extensible Firmware Interface).
- the RAM 203 is a volatile memory such as a DRAM (Dynamic Random Access Memory) or an SRAM (Static Random Access Memory), and functions as a main storage device.
- the RAM 203 provides a work area where various programs installed in the auxiliary storage device 205 are developed when the CPU 201 executes the programs.
- the GPU 204 is an arithmetic device for image processing.
- when the CPU 201 executes various programs, the GPU 204 performs high-speed operations on various image data by parallel processing.
- the auxiliary storage device 205 is a storage unit that stores various programs, various image data that is subjected to image processing by the GPU 204 when the various programs are executed by the CPU 201, and the like.
- the learning data storage unit 123 is realized in the auxiliary storage device 205.
- the operation device 206 is an input device used when the administrator of the learning device 120 inputs various instructions to the learning device 120.
- the display device 207 is a display device that displays the internal state of the learning device 120.
- the I / F device 208 is a connection device for connecting to and communicating with another device.
- the drive device 209 is a device for setting the recording medium 220.
- the recording medium 220 includes a medium that optically, electrically, or magnetically records information, such as a CD-ROM, a flexible disk, and a magneto-optical disk.
- the recording medium 220 may include a semiconductor memory such as a ROM and a flash memory that electrically records information.
- the various programs to be installed in the auxiliary storage device 205 are installed, for example, by setting the distributed recording medium 220 in the drive device 209 and having the drive device 209 read out the various programs recorded on the recording medium 220.
- various programs to be installed in the auxiliary storage device 205 may be installed by being downloaded via a network (not shown).
- FIG. 3 is a diagram illustrating an example of the learning data.
- the learning data 300 includes, as information items, "process", "job ID", "pre-processing image data", "parameter data", "environment information", and "post-processing image data".
- in "process", a name indicating a semiconductor manufacturing process is stored.
- the example of FIG. 3 shows a state in which two names “dry etching” and “deposition” are stored as “processes”.
- the "job ID" stores an identifier for identifying a job executed by the semiconductor manufacturing apparatus 110.
- FIG. 3 shows an example in which “PJ001” and “PJ002” are stored as “job ID” of dry etching. Further, the example of FIG. 3 shows a state where “PJ101” is stored as the “job ID” of the deposition.
- in "pre-processing image data", the file name of the pre-processing image data generated by the measuring device 111 is stored.
- the example indicates that pre-processing image data with the file name "shape data LD001" was generated by the measuring device 111 for one unprocessed wafer of the lot (wafer group) of the job.
- the “parameter data” stores parameters indicating predetermined processing conditions set when the pre-processing wafer is processed in the semiconductor manufacturing apparatus 110.
- the parameter data, such as "parameter 001_1", "parameter 001_2", "parameter 001_3", ..., includes:
- data set as set values in the semiconductor manufacturing apparatus 110, such as Pressure (pressure in the chamber), Power (power of the high-frequency power supply), Gas (gas flow rate), and Temperature (temperature in the chamber or on the wafer surface)
- data set as target values in the semiconductor manufacturing apparatus 110, such as CD (critical dimension), Depth, Taper (taper angle), Tilting (tilt angle), and Bowing
- information on the hardware configuration of the semiconductor manufacturing apparatus 110
- in "environment information", information indicating the environment during processing of the unprocessed wafer, measured when the semiconductor manufacturing apparatus 110 processes the wafer, is stored.
- the example indicates that environment information such as "environment data 001_1", "environment data 001_2", "environment data 001_3", ... was measured. The environment information includes, for example:
- Vpp (peak-to-peak potential difference)
- Vdc (DC self-bias voltage)
- OES (emission intensity obtained by optical emission spectroscopy)
- Reflect (reflected-wave power)
- data output from the semiconductor manufacturing apparatus 110 during processing (mainly data relating to current and voltage)
- Plasma density
- Ion energy
- Ion flux (ion flow rate)
- the "processed image data” stores the file name of the processed image data generated by the measuring device 112.
- FIG. 4 is a diagram illustrating an example of a functional configuration of a learning unit of the learning device according to the first embodiment.
- the learning unit 122 of the learning device 120 includes a learning model for dry etching 420, a learning model for deposition 421, a comparing unit 430, and a changing unit 440.
- the pre-processing image data, parameter data, and environment information of the learning data 300 stored in the learning data storage unit 123 are read by the data shaping unit 121 and input to the corresponding learning model.
- the parameter data and the environment information are processed into a predetermined format by the data shaping unit 121 and then input to the corresponding learning model.
- alternatively, the parameter data and the environment information may be processed into the predetermined format in advance; in that case, the data shaping unit 121 reads out the pre-processed data and inputs it to the corresponding learning model.
- the dry etching learning model 420 outputs a result, which is input to the comparison unit 430.
- similarly, the deposition learning model 421 outputs a result, which is input to the comparison unit 430.
- the change unit 440 updates the model parameters of the dry etching learning model 420 or the deposition learning model 421 based on the difference information notified from the comparison unit 430.
- the difference information used for updating the model parameters may be a square error or an absolute error.
- in this way, the learning unit 122 inputs the pre-processing image data, together with the parameter data and environment information processed into the predetermined format, to the learning model, and updates the model parameters by machine learning so that the output of the learning model approaches the post-processing image data.
- as a result, the learning unit 122 can reflect in the machine learning the post-processing image data in which the influence of each event of the semiconductor manufacturing process appears, and can also machine-learn the relationship between these events, the parameter data, and the environment information.
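As a toy illustration of this update rule (not the disclosed neural-network implementation), the learning model can be reduced to a single per-pixel weight, with the difference information taken as the squared error, one of the two options mentioned above; the data and learning rate are assumptions:

```python
def train_step(weight, inputs, target, lr=0.1):
    """One update: the model output (weight * input, per pixel) is
    compared with the post-processing target, and the model parameter
    is moved so the output approaches the target (squared-error loss)."""
    grad, n = 0.0, 0
    for x, t in zip(inputs, target):
        out = weight * x
        grad += 2.0 * (out - t) * x   # d/dw of (w*x - t)^2
        n += 1
    return weight - lr * grad / n

w = 0.0
xs = [1.0, 2.0, 3.0]   # hypothetical pre-processing pixel values
ts = [2.0, 4.0, 6.0]   # hypothetical post-processing targets (2 * input)
for _ in range(200):
    w = train_step(w, xs, ts)
# w converges toward 2.0, i.e. the output approaches the target data
```

In the actual learning unit 122, the comparison unit 430 and changing unit 440 play the roles of the loss computation and the parameter update, respectively.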
- FIG. 5 is a diagram illustrating an example of a functional configuration of a data shaping unit of the learning device according to the first embodiment.
- the data shaping unit 121 includes a shape data obtaining unit 501, a channel-specific data generating unit 502, a one-dimensional data obtaining unit 511, a one-dimensional data expanding unit 512, and a connecting unit 520.
- the shape data acquisition unit 501 reads the pre-processing image data of the learning data 300 from the learning data storage unit 123 and notifies the channel-specific data generation unit 502.
- the channel-specific data generation unit 502 is an example of a generation unit.
- the channel-specific data generation unit 502 acquires the pre-processing image data (here, represented by pixel values corresponding to the composition ratio (or content ratio) of each material) notified from the shape data acquisition unit 501. Further, the channel-specific data generation unit 502 generates, from the acquired pre-processing image data, image data of a plurality of channels according to the material types.
- the image data of the channel according to the type of the material is referred to as channel-specific data.
- the channel-specific data generating unit 502 generates, from the pre-processing image data, channel-specific data containing the air layer and four channel-specific data containing the respective layers of the four material types.
- the channel-specific data generating unit 502 notifies the linking unit 520 of the plurality of generated channel-specific data.
- in the example above, the channel-specific data generation unit 502 generates the channel-specific data, but the channel-specific data may instead be generated in advance; in that case, the channel-specific data generation unit 502 reads out the pre-generated channel-specific data and notifies the linking unit 520.
- the one-dimensional data acquisition unit 511 reads the parameter data and the environment information of the learning data 300 from the learning data storage unit 123 and notifies the one-dimensional data development unit 512.
- the one-dimensional data development unit 512 converts the parameter data and environment information notified from the one-dimensional data acquisition unit 511 into a predetermined format corresponding to the pre-processing image data (a two-dimensional array matching the vertical and horizontal size of the pre-processing image data).
- the parameter data is formed by one-dimensionally arranging numerical values of parameters such as "parameter 001_1", "parameter 001_2", "parameter 001_3", .... More specifically, the parameter data is configured by one-dimensionally arranging the numerical values of N types of parameters.
- the one-dimensional data development unit 512 extracts the values of the N types of parameters included in the parameter data one by one, and arranges each extracted value two-dimensionally according to the vertical and horizontal size of the pre-processing image data. As a result, the one-dimensional data development unit 512 generates N two-dimensionally arranged parameter data.
- the one-dimensional data development unit 512 notifies the linking unit 520 of the N sets of two-dimensionally arranged parameter data.
- the environment information is formed by one-dimensionally arranging the numerical values of each piece of environment data, such as “environment data 001_1”, “environment data 001_2”, “environment data 001_3”, and so on.
- more specifically, the environment information is configured by one-dimensionally arranging the numerical values of M types of environment data.
- the one-dimensional data development unit 512 extracts the numerical values of the M types of environment data included in the environment information one by one, and arranges each extracted value two-dimensionally according to the vertical size and the horizontal size of the pre-processing image data. As a result, the one-dimensional data development unit 512 generates M sets of two-dimensionally arranged environment information.
- the one-dimensional data development unit 512 notifies the linking unit 520 of the M sets of two-dimensionally arranged environment information.
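The two-dimensional expansion performed by the one-dimensional data development unit 512 amounts to broadcasting each scalar parameter or environment value into a plane matching the image size. A minimal sketch, with illustrative names and sizes:

```python
import numpy as np

def expand_to_2d(values, height, width):
    """Expand a one-dimensional sequence of N scalar values into N
    two-dimensional planes matching the pre-processing image size;
    each plane repeats a single value vertically and horizontally."""
    return np.stack([np.full((height, width), v, dtype=np.float32)
                     for v in values])

# Hypothetical numerical values for parameter 001_1 .. 001_3.
params = [0.5, 1.2, 3.0]
planes = expand_to_2d(params, height=4, width=4)
```

The same expansion would apply to the M environment data values.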
- the linking unit 520 links the N sets of two-dimensionally arranged parameter data and the M sets of two-dimensionally arranged environment information notified from the one-dimensional data development unit 512 to the plurality of channel-specific data notified from the channel-specific data generating unit 502 as new channels, thereby generating linked data.
- in this example the linking unit 520 generates the linked data, but the linked data may instead be generated in advance. In that case, the linking unit 520 reads out the linked data generated in advance and inputs it to the learning model.
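The linking performed by the linking unit 520 can then be sketched as a concatenation along the channel axis; the shapes below (5 material channels, N = 3 parameter planes, M = 2 environment planes) are hypothetical:

```python
import numpy as np

# Hypothetical shapes: 5 channel-specific data planes (air + 4 materials),
# N = 3 parameter planes and M = 2 environment planes, all H x W.
H, W = 4, 4
channel_data = np.zeros((5, H, W), dtype=np.float32)
param_planes = np.ones((3, H, W), dtype=np.float32)
env_planes = np.full((2, H, W), 2.0, dtype=np.float32)

# The linking unit stacks everything along the channel axis, so each
# parameter plane and environment plane becomes one new input channel.
linked = np.concatenate([channel_data, param_planes, env_planes], axis=0)
```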
- FIG. 6 is a diagram illustrating a specific example of processing by the data shaping unit.
- the pre-processing image data 600 includes an air layer, a material A layer, a material B layer, a material C layer, and a material D layer.
- the channel-specific data generation unit 502 generates channel-specific data 601, 602, 603, 604, and 605.
- the parameter data 610 includes, for example, the numerical values of the respective parameters (“parameter 001_1”, “parameter 001_2”, “parameter 001_3”, and so on) arranged one-dimensionally.
- similarly, the environment information 620 includes the numerical values of the respective pieces of environment data (“environment data 001_1”, “environment data 001_2”, “environment data 001_3”, and so on) arranged one-dimensionally.
- the one-dimensional data development unit 512 arranges parameter 001_1 two-dimensionally (the same value repeated vertically and horizontally) according to the vertical size and the horizontal size of the pre-processing image data 600. Similarly, it arranges parameter 001_2 and parameter 001_3 two-dimensionally according to the vertical size and the horizontal size of the pre-processing image data 600.
- likewise, the one-dimensional data development unit 512 arranges environment data 001_1, environment data 001_2, and environment data 001_3 two-dimensionally (the same value repeated vertically and horizontally) according to the vertical size and the horizontal size of the pre-processing image data 600.
- in this way, linked data 630 is generated.
- FIG. 7 is a diagram illustrating a specific example of a process using the learning model for dry etching of the learning device according to the first embodiment.
- a learning model (so-called UNET) based on a U-shaped convolutional neural network (CNN) is used as the learning model 420 for dry etching.
- the one-dimensional data development unit 512 of the data shaping unit 121 arranges the parameter data and the environment information two-dimensionally in order to convert the data input to UNET into image data form. Being able to input parameter data and environment information to UNET makes it possible to perform machine learning using factors correlated with each event of dry etching.
- FIG. 7 shows a state in which the linked data 630 is input to the learning model 420 for dry etching using UNET, and an output result 700 including a plurality of channel-specific data is output.
- FIG. 8 is a flowchart illustrating the flow of the learning process.
- step S801 the measurement apparatus 111 measures shapes at various positions on the unprocessed wafer before being processed by the semiconductor manufacturing apparatus 110, and generates pre-processing image data.
- step S802 the measurement apparatus 112 measures the shape of the processed wafer after being processed by the semiconductor manufacturing apparatus 110 at various positions, and generates processed image data.
- in step S803, the learning device 120 acquires the parameter data set in the semiconductor manufacturing apparatus 110 and the environment information measured during processing when the semiconductor manufacturing apparatus 110 performs the processing corresponding to each manufacturing process.
- in step S804, the learning device 120 stores the pre-processing image data generated by the measuring device 111, the post-processing image data generated by the measuring device 112, and the acquired parameter data and environment information in the learning data storage unit 123 as learning data.
- step S805 the data shaping unit 121 of the learning device 120 reads out the unprocessed image data, parameter data, and environment information from the learning data storage unit 123, and generates linked data.
- step S806 the learning unit 122 of the learning device 120 performs machine learning on the learning model using the connected data as input and the processed image data as output to generate a learned model.
- step S807 the learning unit 122 of the learning device 120 transmits the generated learned model to the inference device 130.
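The machine learning of step S806 can be illustrated in miniature. This is only a toy stand-in with synthetic data: the UNET is replaced by a single linear layer fitted by gradient descent so that its output approaches the target data, which mirrors the described objective but not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the stored learning data (step S804): flattened
# linked input data and flattened post-processing image data as targets.
X = rng.normal(size=(32, 10))       # 32 samples, 10 input features
true_W = rng.normal(size=(10, 4))
Y = X @ true_W                      # 4 output "pixels" per sample

# Step S806 in miniature: fit the model by gradient descent so that its
# output approaches the target data (UNET replaced by one linear layer).
W = np.zeros((10, 4))
lr = 0.01
for _ in range(5000):
    pred = X @ W
    W -= lr * (X.T @ (pred - Y)) / len(X)

initial_loss = float(np.mean(Y ** 2))        # loss before training (W = 0)
final_loss = float(np.mean((X @ W - Y) ** 2))
```

The comparison and update roles of the comparison unit 430 and the change unit 440 correspond to the loss term and the gradient step here.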
- FIG. 9 is a diagram illustrating an example of a functional configuration of the execution unit of the inference apparatus.
- the execution unit 132 of the inference apparatus 130 includes a learned model 920 for dry etching, a learned model 921 for deposition, and an output unit 930.
- when the linked data is input from the data shaping unit 131, the learned model for dry etching 920 executes a simulation. The learned model for dry etching 920 then notifies the output unit 930 of the output result produced by executing the simulation.
- similarly, when linked data is input, the learned model for deposition 921 executes a simulation and notifies the output unit 930 of the output result produced by executing the simulation.
- here, the pre-processing image data generated by the measuring device 111 is input, but arbitrary pre-processing image data can be input to the learned model for dry etching 920 and the learned model for deposition 921.
- suppose the user of the inference device 130 inputs the same parameter data as the parameter data set in the semiconductor manufacturing apparatus 110 and the same environment information as the stored environment information.
- in that case, the user of the inference device 130 can verify the simulation accuracy of the inference device 130.
- FIG. 10 is a diagram showing the simulation accuracy of the learned model for dry etching.
- FIG. 11 is a diagram showing the simulation accuracy of the learned model for deposition.
- the simulation accuracy can be improved even when compared with a general physical model (a model in which a semiconductor manufacturing process is identified based on physical laws).
- this is because, in the case of a physical model, events that cannot be represented by physical equations cannot be reflected in the simulation, whereas in the case of a learning model, each event that affects the post-processing image data can be reflected through machine learning. Further, in the case of the learning model according to the present embodiment, since factors (parameter data, environment information) correlated with each event of the semiconductor manufacturing process are input, the relationship between each event and those factors can be machine-learned.
- Each event that cannot be expressed by the physical equation in the case of dry etching includes, for example, an event in which the gas in the chamber becomes non-uniform. Alternatively, there is an event in which the etched particles adhere as deposition. In addition, in the case of deposition, for example, there is an event that particles adhere and then bounce once or more.
- the gas in the chamber is treated as being uniform during dry etching. Further, in a general physical model, particles are treated as being attached to a position where the particles first come into contact during deposition.
- in the case of a learning model, the post-processing image data in which the effects of these events appear can be reflected in machine learning, and the relationships between these events, the parameter data, and the environment information can be machine-learned. Therefore, in the case of a learned model, simulation accuracy can be improved as compared with a general physical model.
- in other words, with the learned model, it is possible to realize a simulation accuracy that cannot be achieved by a simulator based on a physical model.
- the simulation time can be reduced as compared with a simulator based on a physical model.
- as is clear from the above description, the learning device according to the first embodiment:
- acquires the parameter data set in the semiconductor manufacturing apparatus when processing the pre-processing wafer, and the environment information indicating the processing environment measured when processing the pre-processing wafer;
- acquires pre-processing image data, which is image data indicating the pre-processing shape of the pre-processing wafer processed in the semiconductor manufacturing apparatus;
- processes the acquired parameter data and environment information into the image data format;
- generates linked data by linking the processed parameter data and environment information to the pre-processing image data; and
- inputs the generated linked data to a learning model based on a U-shaped convolutional neural network, and performs machine learning so that the output result approaches the post-processing image data indicating the shape of the processed wafer.
- thus, according to the learning device, factors correlated with each event of the semiconductor manufacturing process can be reflected in machine learning, and a learned model that realizes highly accurate simulation can be generated.
- likewise, the inference device according to the first embodiment:
- acquires pre-processing image data, parameter data, and environment information;
- processes the acquired parameter data and environment information into the image data format by arranging them two-dimensionally according to the vertical size and the horizontal size of the acquired pre-processing image data;
- generates linked data by linking the processed parameter data and environment information to the pre-processing image data; and
- inputs the generated linked data to the learned model and executes a simulation.
- thus, according to the inference device, a simulation can be executed using a model machine-learned with factors correlated with each event of the semiconductor manufacturing process, and a highly accurate simulation can be realized.
- the simulation accuracy of the semiconductor manufacturing process can be improved.
- in the first embodiment described above, the parameter data and the environment information are processed into the image data format according to the vertical size and the horizontal size of the pre-processing image data, linked to the pre-processing image data, and input to the learning model (or the learned model).
- the method of processing the parameter data and the environment information and the method of inputting the processed parameter data and the environment information to the learning model (or the trained model) are not limited thereto.
- the processed parameter data and environment information may be configured to be input to each layer of the learning model (or the learned model).
- alternatively, the parameter data and the environment information may be processed into a predetermined format used when converting the value of each pixel of the image data convolved in each layer of the learning model (or the learned model).
- the second embodiment will be described focusing on differences from the first embodiment.
- FIG. 12 is a diagram illustrating an example of a functional configuration of a data shaping unit of the learning device according to the second embodiment.
- a difference from the functional configuration of the data shaping unit 121 illustrated in FIG. 5 is that the data shaping unit 1200 illustrated in FIG. 12 includes a connecting unit 1201 and a normalizing unit 1202.
- the linking unit 1201 links the plurality of channel-specific data notified from the channel-specific data generating unit 502 to generate linked data.
- the normalization unit 1202 normalizes the parameter data and the environment information notified from the one-dimensional data acquisition unit 511, and generates normalized parameter data and normalized environment information.
- FIG. 13 is a diagram illustrating a specific example of a process performed by the learning model for dry etching of the learning apparatus according to the second embodiment.
- the linked data 1310 generated by the linking unit 1201 of the data shaping unit 1200 is input to the learning model for dry etching 1300.
- in addition, the normalized parameter data and the normalized environment information generated by the normalization unit 1202 of the data shaping unit 1200 are input to the learning model for dry etching 1300.
- the learning model for dry etching 1300 includes, in addition to UNET, which is a CNN-based learning model, a neural network unit 1301, which is a fully-connected learning model.
- the neural network unit 1301 outputs coefficients in a predetermined format (for example, linear coefficients α and β) used to convert the value of each pixel of each image data subjected to convolution processing in each layer of UNET.
- the neural network unit 1301 has a function of processing the normalized parameter data and the normalized environment information into a predetermined format (for example, a linear coefficient format).
- the neural network unit 1301 outputs ( ⁇ 1 , ⁇ 1 ) to ( ⁇ 9 , ⁇ 9 ) as coefficients of a linear expression.
- the coefficients of the linear expression are input to each layer, one set for each channel-specific data; that is, a plurality of sets are input to each layer.
- the coefficients (α1, β1) to (α9, β9) of the linear expression can be regarded as indices indicating which image data is important among the channel-specific image data subjected to convolution processing in each layer of UNET. That is, the neural network unit 1301 calculates, based on the normalized parameter data and the normalized environment information, an index indicating the importance of each image data processed in each layer of the learning model.
- as a result, an output result 700 including a plurality of channel-specific data is output.
- the output result 700 is compared with the processed image data by the comparing unit 430, and difference information is calculated.
- the changing unit 440 updates the model parameters of the UNET and the model parameters of the neural network unit 1301 in the learning model for dry etching 1300 based on the difference information.
- in this way, according to the second embodiment, image data of high importance can be extracted in each layer of UNET based on the normalized parameter data and the normalized environment information.
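The per-channel linear conversion applied in each UNET layer can be sketched as follows: coefficient pairs (α, β) scale and shift each channel-specific feature map. In the embodiment the coefficients come from the neural network unit 1301; here they are hard-coded purely for illustration:

```python
import numpy as np

def modulate(feature_maps, alphas, betas):
    """Apply per-channel linear coefficients (alpha, beta) to channel-
    specific feature maps, i.e. alpha * x + beta for each channel. In
    the embodiment the coefficients come from the neural network unit
    1301; here they are hard-coded purely for illustration."""
    a = np.asarray(alphas, dtype=np.float32)[:, None, None]
    b = np.asarray(betas, dtype=np.float32)[:, None, None]
    return a * feature_maps + b

feats = np.ones((3, 2, 2), dtype=np.float32)   # 3 channels of 2x2 maps
out = modulate(feats, alphas=[2.0, 0.0, 1.0], betas=[0.5, 1.0, 0.0])
```

Note how a channel with α = 0 is effectively suppressed, which is the sense in which the coefficients act as importance indices.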
- as is clear from the above description, the learning device according to the second embodiment:
- acquires the parameter data set in the semiconductor manufacturing apparatus when processing the pre-processing wafer, and the environment information indicating the processing environment measured when processing the pre-processing wafer;
- acquires pre-processing image data, which is image data indicating the pre-processing shape of the pre-processing wafer processed in the semiconductor manufacturing apparatus; and
- normalizes the acquired parameter data and environment information, and processes them into the form of linear coefficients used when converting the value of each pixel of each image data subjected to convolution processing in each layer of the learning model.
- then, when the learning unit performs machine learning, the value of each pixel of the image data convolved in each layer is converted using the linear expression.
- thus, according to the learning device, factors correlated with each event of the semiconductor manufacturing process can be reflected in machine learning, and a learned model that realizes highly accurate simulation can be generated.
- the learning device has been described in the second embodiment, the same processing is performed in the inference device when the execution unit executes the simulation.
- FIG. 14 is a diagram illustrating an example of a functional configuration of a learning unit of the learning device according to the third embodiment.
- the internal configuration in the learning model is different from the functional configuration of the learning unit 122 illustrated in FIG.
- the internal structure of the learning model will be described using the learning model for dry etching 1410 as an example, but the learning model for deposition has the same internal structure.
- the learning model for dry etching 1410 of the learning unit 1400 includes a sigmoid function unit 1412 and a multiplication unit 1413 in addition to the UNET 1411.
- the sigmoid function unit 1412 is an example of a processing unit. As shown in FIG. 14, the sigmoid function unit 1412 applies a sigmoid function 1420 to the first output result output from the UNET 1411 and outputs a second output result 1421.
- the multiplication unit 1413 acquires the second output result 1421 from the sigmoid function unit 1412 and the pre-processing image data from the data shaping unit 121. The multiplication unit 1413 then multiplies the acquired pre-processing image data by the acquired second output result 1421 and notifies the comparison unit 430 of the resulting final output result 1422.
- here, when the learning model for dry etching 1410 is machine-learned, the UNET 1411 outputs image data indicating the scraping rate as the first output result.
- the scraping rate refers to a value of a change rate indicating how much each material layer included in the pre-processing image data has been cut in the post-processing image data.
- by performing machine learning, the scraping rate approaches a value obtained by dividing the post-processing image data by the pre-processing image data.
- the first output result output from the UNET 1411 in the process of machine learning takes an arbitrary value.
- on the other hand, the sigmoid function 1420 converts an arbitrary value into a value from 0 to 1.
- therefore, the sigmoid function unit 1412 can reflect the domain knowledge that the scraping rate falls within the range of 0 to 1 by converting the first output result into the second output result.
- image data indicating the adhesion rate is output as the first output result from UNET when the learning model for deposition is machine-learned.
- the adhesion rate refers to a value of a change rate indicating how much a thin film adheres to the layer of each material included in the image data before processing in the image data after processing.
- the adhesion rate approaches a value obtained by dividing the difference between the image data before processing and the image data after processing by the image data before processing.
- the first output result output from UNET in the process of machine learning takes an arbitrary value.
- on the other hand, the adhesion rate falls within the range of 0 to 1.
- the sigmoid function converts an arbitrary value into a value from 0 to 1.
- therefore, the sigmoid function unit can reflect this domain knowledge by converting the first output result into the second output result.
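The sigmoid-and-multiply structure described for the sigmoid function unit and the multiplication unit can be sketched as follows (the numerical values are illustrative). Squashing the raw output into [0, 1] guarantees that the predicted rate respects the stated domain knowledge:

```python
import numpy as np

def sigmoid(x):
    # Converts an arbitrary real value into a value between 0 and 1.
    return 1.0 / (1.0 + np.exp(-x))

# The first output of the UNET may take arbitrary values; squashing it
# with a sigmoid constrains the predicted change rate to [0, 1], which
# matches the domain knowledge about scraping / adhesion rates.
raw_output = np.array([[-3.0, 0.0], [2.0, 10.0]])   # first output result
rate = sigmoid(raw_output)                          # second output result
pre_image = np.full((2, 2), 4.0)                    # pre-processing data
final_output = rate * pre_image                     # multiplication unit
```

By construction the final output can never exceed the pre-processing image values, i.e. etching can only remove material.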
- the domain knowledge can be reflected in the machine learning, and the simulation accuracy can be further improved.
- in the above embodiments, the data shaping unit has been described as generating linked data whose vertical size and horizontal size match those of the pre-processing image data.
- the vertical size and the horizontal size of the concatenated data generated by the data shaping unit are arbitrary, and the concatenated data may be generated after compressing the image data before processing.
- the fourth embodiment will be described focusing on differences from the first to third embodiments.
- FIG. 15 is a diagram illustrating an example of a functional configuration of a data shaping unit of the learning device according to the fourth embodiment.
- reference numeral 15a in FIG. 15 denotes a data shaping unit 1510 in which a compression unit 1511 is added to the data shaping unit 121 of the learning device according to the first embodiment.
- the compression unit 1511 compresses the pre-processing image data acquired by the shape data acquisition unit 501. Specifically, the compression unit 1511 averages the pixel values of every n adjacent pixels, and the obtained average value is used as the pixel value of the single pixel into which those n pixels are combined.
- the compression unit 1511 can compress the unprocessed image data by a factor of 1 / n.
- at this time, the compression unit 1511 performs the compression processing so that the composition ratio (or content ratio) of the materials is maintained as much as possible before and after compression. Note that the compression ratio of the compression processing by the compression unit 1511 is not limited to an integer factor, and the compression unit 1511 can perform compression processing at an arbitrary compression ratio.
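The averaging compression can be sketched as block average pooling (assuming, for simplicity, that the image dimensions are multiples of n). Averaging preserves the overall mean of the image, which is in the spirit of maintaining the composition ratio before and after compression:

```python
import numpy as np

def compress(image, n):
    """Compress an H x W image by a factor of n in each dimension by
    averaging each n x n block into one pixel (H and W are assumed to
    be multiples of n for simplicity)."""
    h, w = image.shape
    return image.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

img = np.arange(16, dtype=np.float32).reshape(4, 4)
small = compress(img, 2)
```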
- 15b in FIG. 15 shows a data shaping unit 1520 obtained by adding a compression unit 1511 to the data shaping unit 1200 of the learning device according to the second embodiment.
- the compression unit 1511 of the data shaping unit 1520 has the same function as the compression unit 1511 of the data shaping unit 1510. Therefore, a detailed description is omitted here.
- the size of the concatenated data input to the learning units 122 and 1400 (or the execution unit 132) can be reduced.
- the learning time when the learning units 122 and 1400 perform the machine learning or the simulation time when the execution unit 132 executes the simulation can be reduced.
- the learning unit 122 is provided with the learning model for dry etching 420 and the learning model for deposition 421, and machine learning is performed separately using different learning data.
- dry etching and deposition may occur simultaneously.
- one learning model may be provided in the learning unit 122 so that machine learning is performed for a case where dry etching and deposition occur simultaneously.
- in this case, the learning unit 122 performs machine learning on the one learning model using learning data that includes pre-processing image data from before dry etching and deposition occur and post-processing image data from after they occur.
- this makes it possible to provide a simulator covering both dry etching and deposition as a single integrated model.
- the data shaping unit 121 processes both the parameter data and the environment information into a predetermined format and inputs the processed data to the corresponding learning model.
- the data shaping unit 121 may process only the parameter data into a predetermined format and input the processed parameter data to the corresponding learning model. That is, in performing the machine learning of the learning model in the learning unit 122, only the parameter data may be used without using the environment information.
- the data shaping unit 131 processes both parameter data and environment information into a predetermined format and inputs the processed data to the corresponding learned model.
- the data shaping unit 131 may process only the parameter data into a predetermined format and input the processed parameter data to the corresponding learned model. That is, when performing a simulation using the learned model in the execution unit 132, only the parameter data may be used without using the environment information.
- the image data before processing and the image data after processing are described as two-dimensional image data.
- the image data before processing and the image data after processing are not limited to two-dimensional image data, but may be three-dimensional image data (so-called voxel data).
- when the pre-processing image data is two-dimensional image data, the linked data has an array of (channel, vertical size, horizontal size), whereas when the pre-processing image data is three-dimensional image data, the linked data has an array of (channel, vertical size, horizontal size, depth size).
- alternatively, the two-dimensional image data may be transformed, or the three-dimensional image data may be transformed, before being handled.
- three-dimensional image data may be acquired, two-dimensional image data of a predetermined cross section may be generated, and input as pre-processing image data.
- three-dimensional image data may be generated based on two-dimensional image data of a continuous predetermined cross section and input as pre-processing image data.
- the method of generating the channel-specific data is not limited to this; the channel-specific data may be generated based on broader classifications such as Oxide, Silicon, Organics, and Nitride instead of each specific film type.
- in the above embodiments, the inference device 130 has been described as outputting the post-processing image data and ending the processing when the pre-processing image data, the parameter data, and the environment information are input.
- the configuration of the inference device 130 is not limited to this.
- for example, a configuration may be adopted in which the post-processing image data output by inputting the pre-processing image data, the parameter data, and the environment information is input again to the inference device 130 together with the corresponding parameter data and environment information.
- the inference device 130 can continuously output a change in shape.
- when the post-processing image data is input to the inference device 130 again, it is assumed that the corresponding parameter data and environment information can be changed arbitrarily.
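The iterative use of the inference device can be sketched as feeding each output back in as the next pre-processing image while varying the parameter data between steps. The `run_inference` stand-in below is purely hypothetical and is not the actual learned model:

```python
import numpy as np

def run_inference(pre_image, params):
    """Purely hypothetical stand-in for the learned model: it shrinks
    every layer by a parameter-controlled rate. The real inference
    device would run the trained UNET-based model instead."""
    return pre_image * (1.0 - params["rate"])

# Feed each output back in as the next pre-processing image, changing
# the parameter data between steps, to trace successive shape changes.
image = np.full((2, 2), 8.0)
history = [image]
for step_rate in [0.5, 0.25, 0.25]:     # parameters may vary per step
    image = run_inference(image, {"rate": step_rate})
    history.append(image)
```

The `history` list then records the continuous change in shape across processing steps, as described for the inference device 130.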
- the inference device 130 may be applied to, for example, a service that searches for an optimal recipe, optimal parameter data, and an optimal hardware configuration and provides them to semiconductor manufacturers.
- FIG. 16 is a diagram illustrating an application example of the inference apparatus, in which the inference apparatus 130 is applied to a service providing system 1600.
- the service providing system 1600 is connected to, for example, each office of a semiconductor manufacturer via a network 1640, and acquires image data before processing.
- the acquired image data before processing is stored in the data storage unit 1602.
- the inference apparatus 130 reads out the pre-processing image data from the data storage unit 1602 and executes the simulation while changing the parameter data and the environment information. Thereby, the user of the inference apparatus 130 can search for the optimal recipe, the optimal parameter data, or the optimal hardware configuration.
- the information providing apparatus 1601 provides the optimal recipe and the optimal parameter data searched by the user of the inference apparatus 130 to each office of the semiconductor maker.
- the service providing system 1600 can provide semiconductor manufacturers with optimal recipes and optimal parameter data.
- in the above embodiments, the pre-processing wafer has been described as the target object, but the target object is not limited to the pre-processing wafer and may be another object.
- the measuring device 111 (or the measuring device 112) generates the pre-processing image data (or the post-processing image data) has been described.
- however, the pre-processing image data (or the post-processing image data) is not limited to being generated by the measuring device 111 (or the measuring device 112).
- for example, the measuring device 111 (or the measuring device 112) may generate multidimensional measurement data indicating the shape of the target object, and the learning device 120 may generate the pre-processing image data (or the post-processing image data) based on the measurement data.
- the measurement data generated by the measuring device 111 includes, for example, data combining position information and film type information. Specifically, it includes data generated by a CD-SEM that combines position information and CD measurement data. Alternatively, it includes data combining a two-dimensional or three-dimensional shape generated by the X-ray or Raman method with information such as the film type. That is, the multidimensional measurement data indicating the shape of the target object is assumed to include various expression formats according to the type of measuring device.
- the learning device 120 and the inference device 130 are shown as separate bodies, but they may be integrally configured.
- the learning device 120 has been described as being configured by one computer, but may be configured by a plurality of computers.
- the inference apparatus 130 has been described as being configured by one computer, but may be configured by a plurality of computers.
- the learning device 120 and the inference device 130 have been described as being applied to the semiconductor manufacturing process.
- the learning device 120 and the inference device 130 may be applied to processes other than the semiconductor manufacturing process.
- the processes other than the semiconductor manufacturing process include manufacturing processes other than the semiconductor manufacturing process and non-manufacturing processes.
- in the above embodiments, the learning device 120 and the inference device 130 are realized by causing a general-purpose computer to execute various programs, but the realization of the learning device 120 and the inference device 130 is not limited to this.
- for example, they may be realized by a dedicated electronic circuit (that is, hardware) such as an IC (Integrated Circuit).
- a plurality of components may be realized by one electronic circuit, one component may be realized by a plurality of electronic circuits, or one component may be realized by one electronic circuit.
- the present invention is not limited to the configurations shown here; the configurations described in the above embodiments may be combined with other elements, for example. These aspects can be changed without departing from the spirit of the present invention and can be determined appropriately according to the application.
Abstract
Description
A learning device comprising: an acquisition unit that acquires image data of an object and data relating to processing applied to the object; and a learning unit that inputs the image data of the object and the data relating to the processing into a learning model, and trains the learning model so that the output of the learning model approaches image data of the object after the processing.
<Overall configuration of the simulation system>
First, the overall configuration of a simulation system that performs simulation of the semiconductor manufacturing process will be described. FIG. 1 is a diagram illustrating an example of the overall configuration of the simulation system. As shown in FIG. 1, the simulation system 100 includes a learning device 120 and an inference device 130. The various data and information used in the simulation system 100 are obtained from a semiconductor manufacturer or from a database of a semiconductor manufacturing equipment manufacturer, for example.
The user of the inference device 130 compares:
- the post-processing image data output by the execution unit 132 when the pre-processing image data, the parameter data, and the environmental information are input to the data shaping unit 131, with
- the post-processing image data generated when the pre-processing wafer is processed by the semiconductor manufacturing apparatus 110 and the post-processing wafer is measured by the measurement device 112.
In this way, the user of the inference device 130 can calculate the simulation error of the learned model and verify the simulation accuracy.
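The comparison above can be sketched as a pixelwise error calculation; a minimal sketch, assuming a mean-absolute-error metric (the patent does not specify which error metric is used):

```python
import numpy as np

def simulation_error(predicted: np.ndarray, measured: np.ndarray) -> float:
    """Mean absolute pixelwise error between the post-processing image data
    inferred by the execution unit and the post-processing image data
    obtained by measuring the actually processed wafer."""
    assert predicted.shape == measured.shape
    return float(np.mean(np.abs(predicted - measured)))
```

A smaller value indicates higher simulation accuracy of the learned model.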
Next, the hardware configuration of each device constituting the simulation system 100 (the learning device 120 and the inference device 130) will be described with reference to FIG. 2. FIG. 2 is a diagram illustrating an example of the hardware configuration of each device constituting the simulation system.
Next, the training data stored in the training data storage unit 123 will be described. FIG. 3 is a diagram illustrating an example of the training data. As shown in FIG. 3, the training data 300 includes the following items of information: "process", "job ID", "pre-processing image data", "parameter data", "environmental information", and "post-processing image data".
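One record of the training data 300 can be sketched as follows. The field names and values are illustrative assumptions; the patent only lists the information items in FIG. 3:

```python
# One training record, mirroring the items of training data 300 in FIG. 3.
# Values are hypothetical examples; images are referenced by file name.
record = {
    "process": "dry etching",          # or "deposition"
    "job_id": "J001",
    "pre_image": "shape_data_LD001",   # pre-processing image data
    "parameters": {"Pressure": 5.0, "Power": 500.0, "Gas": 100.0},
    "environment": {"Vpp": 1000.0, "Vdc": -50.0},
    "post_image": "shape_data_LD101",  # post-processing image data
}
```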
- data set as setting values in the semiconductor manufacturing apparatus 110, such as Pressure (pressure in the chamber), Power (power of the radio-frequency power supply), Gas (gas flow rate), and Temperature (temperature in the chamber or at the wafer surface),
- data set as target values in the semiconductor manufacturing apparatus 110, such as CD (Critical Dimension), Depth, Taper (taper angle), Tilting (tilt angle), and Bowing,
- information on the hardware configuration of the semiconductor manufacturing apparatus 110,
and the like are included.
- data output from the semiconductor manufacturing apparatus 110 during processing (mainly data relating to current and voltage), such as Vpp (potential difference), Vdc (DC self-bias voltage), OES (emission intensity measured by optical emission spectroscopy), and Reflect (reflected-wave power),
- data measured during processing (mainly data relating to light, as well as to temperature and pressure), such as Plasma density, Ion energy, and Ion flux (ion flow rate),
are included.
Next, the functional configuration of each unit of the learning device 120 (the data shaping unit 121 and the learning unit 122) will be described in detail.
First, the functional configuration of the learning unit 122 of the learning device 120 will be described in detail. FIG. 4 is a diagram illustrating an example of the functional configuration of the learning unit of the learning device according to the first embodiment. As shown in FIG. 4, the learning unit 122 of the learning device 120 includes a learning model for dry etching 420, a learning model for deposition 421, a comparison unit 430, and a change unit 440.
Next, the functional configuration of the data shaping unit 121 of the learning device 120 will be described in detail. FIG. 5 is a diagram illustrating an example of the functional configuration of the data shaping unit of the learning device according to the first embodiment. As shown in FIG. 5, the data shaping unit 121 includes a shape data acquisition unit 501, a per-channel data generation unit 502, a one-dimensional data acquisition unit 511, a one-dimensional data expansion unit 512, and a concatenation unit 520.
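The core idea of the data shaping unit, expanding each one-dimensional scalar (parameter data, environmental information) into a two-dimensional plane matching the image size and concatenating it to the pre-processing image data as extra channels, can be sketched as follows (function and variable names are illustrative assumptions):

```python
import numpy as np

def shape_inputs(pre_image: np.ndarray, params: list, env: list) -> np.ndarray:
    """Sketch of the data shaping unit 121: each scalar in the parameter
    data and the environmental information is expanded into a 2D plane
    whose vertical/horizontal sizes match the pre-processing image data,
    then all planes are concatenated to it along the channel axis."""
    c, h, w = pre_image.shape  # channels (one per material), height, width
    planes = [np.full((1, h, w), v, dtype=pre_image.dtype) for v in params + env]
    return np.concatenate([pre_image] + planes, axis=0)
```

For example, pre-processing image data with 5 material channels plus 3 parameters and 3 environmental values yields concatenated data with 11 channels of the same spatial size.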
Next, of the processing performed by the units of the learning device 120, specific examples of the processing by the data shaping unit 121 described above and of the processing by the learning model for dry etching 420 in the learning unit 122 will be described.
FIG. 6 is a diagram illustrating a specific example of the processing by the data shaping unit. In FIG. 6, the pre-processing image data 600 is, for example, the pre-processing image data with file name "shape data LD001".
Next, a specific example of the processing by the learning model for dry etching 420 in the learning unit 122 will be described. FIG. 7 is a diagram illustrating a specific example of the processing by the learning model for dry etching of the learning device according to the first embodiment. As shown in FIG. 7, the present embodiment uses, as the learning model for dry etching 420, a U-shaped convolutional neural network (CNN)-based learning model (a so-called UNET).
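The U-shaped data flow of such a model can be sketched structurally as follows. This is a structure-only sketch: a real UNET uses learned convolutions at each level, for which plain pooling and upsampling stand in here:

```python
import numpy as np

def unet_like(x: np.ndarray) -> np.ndarray:
    """Structure-only sketch of the U-shaped flow in the learning model for
    dry etching 420: an encoder path that halves the resolution, a decoder
    path that restores it, and a skip connection that concatenates encoder
    features into the decoder output."""
    c, h, w = x.shape
    # Encoder: 2x2 average pooling halves the spatial resolution.
    down = x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    # Bottleneck: identity stand-in for the learned transformation.
    mid = down
    # Decoder: nearest-neighbour upsampling restores the resolution.
    up = mid.repeat(2, axis=1).repeat(2, axis=2)
    # Skip connection: concatenate the encoder input with the decoder output.
    return np.concatenate([x, up], axis=0)
```

The skip connections are what let the model preserve fine spatial detail of the input shape while still using coarse, context-aggregating features.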
Next, the overall flow of the learning process will be described. FIG. 8 is a flowchart showing the flow of the learning process.
Next, the functional configuration of the inference device 130 will be described in detail. Of the units of the inference device 130 (the data shaping unit 131 and the execution unit 132), the functional configuration of the data shaping unit 131 is the same as that of the data shaping unit 121 of the learning device 120. A detailed description of the data shaping unit 131 is therefore omitted here, and the functional configuration of the execution unit 132 is described below.
As is clear from the above description, the learning device according to the first embodiment:
- acquires the parameter data set in the semiconductor manufacturing apparatus when processing the pre-processing wafer, and the environmental information, measured during that processing, indicating the environment while the pre-processing wafer is processed;
- acquires pre-processing image data, that is, image data indicating the shape, before processing, of the pre-processing wafer to be processed in the semiconductor manufacturing apparatus;
- processes the acquired parameter data and environmental information into the image data format by arranging them two-dimensionally in accordance with the vertical and horizontal sizes of the acquired pre-processing image data, and generates concatenated data by concatenating the processed parameter data and environmental information to the pre-processing image data;
- inputs the generated concatenated data to a U-shaped convolutional-neural-network-based learning model and performs machine learning so that the output result approaches the post-processing image data indicating the shape of the post-processing wafer.
Likewise, the inference device according to the first embodiment:
- acquires the pre-processing image data together with the parameter data and environmental information;
- processes the acquired parameter data and environmental information into the image data format by arranging them two-dimensionally in accordance with the vertical and horizontal sizes of the acquired pre-processing image data, and generates concatenated data by concatenating them to the pre-processing image data;
- inputs the generated concatenated data to the learned model and executes the simulation.
In the first embodiment described above, the parameter data and the environmental information are processed into the image data format in accordance with the vertical and horizontal sizes of the pre-processing image data, concatenated with the pre-processing image data, and input to the learning model (or the learned model).
First, the functional configuration of the data shaping unit of the learning device according to the second embodiment will be described in detail. FIG. 12 is a diagram illustrating an example of the functional configuration of the data shaping unit of the learning device according to the second embodiment. The data shaping unit 1200 shown in FIG. 12 differs from the functional configuration of the data shaping unit 121 shown in FIG. 5 in that it includes a concatenation unit 1201 and a normalization unit 1202.
Next, a specific example of the processing by the learning model for dry etching will be described. FIG. 13 is a diagram illustrating a specific example of the processing by the learning model for dry etching of the learning device according to the second embodiment.
As is clear from the above description, the learning device according to the second embodiment:
- acquires the parameter data set in the semiconductor manufacturing apparatus when processing the pre-processing wafer, and the environmental information, measured during that processing, indicating the environment while the pre-processing wafer is processed;
- acquires pre-processing image data, that is, image data indicating the shape, before processing, of the pre-processing wafer to be processed in the semiconductor manufacturing apparatus;
- normalizes the acquired parameter data and environmental information and processes them into the form of coefficients of a linear expression used to transform the value of each pixel of each image data convolved in each layer of the learning model;
- when the learning unit performs machine learning, transforms the value of each pixel of the image data convolved in each layer using that linear expression.
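The per-pixel linear transformation of convolved feature maps described above can be sketched as a per-channel affine modulation. How the normalized parameters are mapped to the coefficient pairs is an assumption here; in the patent this shaping is performed by the normalization unit 1202:

```python
import numpy as np

def normalize(values: np.ndarray) -> np.ndarray:
    """Scale parameter data / environmental information into [0, 1]."""
    lo, hi = values.min(), values.max()
    return (values - lo) / (hi - lo) if hi > lo else np.zeros_like(values)

def linear_modulation(feature_maps: np.ndarray, gamma: np.ndarray,
                      beta: np.ndarray) -> np.ndarray:
    """Transform the value of each pixel of each convolved feature map with
    a linear expression y = gamma * x + beta, one (gamma, beta) coefficient
    pair per channel."""
    return gamma[:, None, None] * feature_maps + beta[:, None, None]
```

Because the coefficients are derived from the parameter data and environmental information, every layer of the network is conditioned on the processing conditions rather than only the input layer.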
In the first and second embodiments described above, constraints specific to the semiconductor manufacturing process were not mentioned in connection with the machine learning performed by the learning unit. The semiconductor manufacturing process, however, has its own constraints, and reflecting them in the machine learning performed by the learning unit (that is, reflecting domain knowledge in that machine learning) can further improve the simulation accuracy. A third embodiment that reflects such domain knowledge is described below, focusing on the differences from the first and second embodiments.
FIG. 14 is a diagram illustrating an example of the functional configuration of the learning unit of the learning device according to the third embodiment. It differs from the functional configuration of the learning unit 122 shown in FIG. 4 in the internal configuration of the learning models. The internal configuration is described here using the learning model for dry etching 1410, but the learning model for deposition is assumed to have a similar internal configuration.
In the first to third embodiments described above, the data shaping unit generates concatenated data whose vertical and horizontal sizes correspond to those of the pre-processing image data. However, the vertical and horizontal sizes of the concatenated data generated by the data shaping unit are arbitrary, and the data shaping unit may be configured to compress the pre-processing image data before generating the concatenated data. A fourth embodiment is described below, focusing on the differences from the first to third embodiments.
FIG. 15 is a diagram illustrating an example of the functional configuration of the data shaping unit of the learning device according to the fourth embodiment. Of these, 15a in FIG. 15 shows a data shaping unit 1510 obtained by adding a compression unit 1511 to the data shaping unit 121 of the learning device according to the first embodiment.
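The compression step can be sketched as a simple spatial downscaling of the pre-processing image data before concatenation. Block averaging is an assumption here; the patent only states that the image data may be compressed, not how:

```python
import numpy as np

def compress(pre_image: np.ndarray, factor: int) -> np.ndarray:
    """Sketch of the compression unit 1511: reduce the vertical and
    horizontal sizes of the pre-processing image data by block averaging,
    before the concatenated data is generated from it."""
    c, h, w = pre_image.shape
    return pre_image.reshape(c, h // factor, factor,
                             w // factor, factor).mean(axis=(2, 4))
```

Compressing first shrinks the concatenated data and thus the computational cost of the learning model, at the price of spatial resolution in the simulated shape.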
In the first embodiment described above, the learning unit 122 is provided with the learning model for dry etching 420 and the learning model for deposition 421, which are trained separately by machine learning using different sets of training data.
110: semiconductor manufacturing apparatus
111: measurement device
112: measurement device
120: learning device
121: data shaping unit
122: learning unit
130: inference device
131: data shaping unit
132: execution unit
300: training data
420: learning model for dry etching
421: learning model for deposition
430: comparison unit
440: change unit
501: shape data acquisition unit
502: per-channel data generation unit
511: one-dimensional data acquisition unit
512: one-dimensional data expansion unit
520: concatenation unit
600: pre-processing image data
601-605: per-channel data
610: parameter data
611-613: parameter data arranged two-dimensionally
620: environmental information
621-623: environmental information arranged two-dimensionally
630: concatenated data
700: output result
920: learned model for dry etching
921: learned model for deposition
930: output unit
1200: data shaping unit
1201: concatenation unit
1300: learning model for dry etching
1301: neural network unit
1310: concatenated data
1400: learning unit
1510, 1520: data shaping units
1511: compression unit
Claims (21)
- An acquisition unit that acquires image data of an object and data relating to processing applied to the object; and
a learning unit that inputs the image data of the object and the data relating to the processing into a learning model, and trains the learning model so that the output of the learning model approaches image data of the object after the processing;
a learning device comprising the above. - Further comprising a processing unit that processes the data relating to the processing into a format corresponding to the image data of the object,
wherein the learning unit inputs the processed data to the learning model;
the learning device according to claim 1. - The image data of the object input to the learning model has a plurality of channels corresponding to the materials contained in the object, each channel having a value corresponding to the composition ratio or content ratio of the respective material;
the learning device according to claim 1 or 2. - The processing is processing according to a semiconductor manufacturing process;
the learning device according to any one of claims 1 to 3. - The data relating to the processing includes parameters indicating processing conditions under which a semiconductor manufacturing apparatus executes the processing according to the semiconductor manufacturing process;
the learning device according to claim 4. - The data relating to the processing includes environmental information measured when the semiconductor manufacturing apparatus executes the processing according to the semiconductor manufacturing process;
the learning device according to claim 4 or 5. - The parameters include at least one of setting values set in the semiconductor manufacturing apparatus and the hardware configuration of the semiconductor manufacturing apparatus;
the learning device according to claim 5. - The environmental information includes at least one of data relating to current, data relating to voltage, data relating to light, data relating to temperature, and data relating to pressure measured in the semiconductor manufacturing apparatus;
the learning device according to claim 6. - The processing unit processes the data relating to the processing into a two-dimensional array format corresponding to the vertical and horizontal sizes of the image data;
the learning device according to claim 2. - The learning unit comprises:
a comparison unit that compares the output of the learning model with the image data of the object after the processing; and
a change unit that updates model parameters of the learning model based on difference information obtained by the comparison performed by the comparison unit;
the learning device according to claim 1. - A storage unit that stores a learned model trained so that, when image data of a first object and data relating to a first processing applied to the first object are input, its output approaches image data of the first object after the first processing; and
an execution unit that inputs image data of a second object and data relating to a second processing into the learned model, and infers image data of the second object after the second processing;
an inference device comprising the above. - Further comprising a processing unit that processes the data relating to the second processing into a format corresponding to the image data of the second object,
wherein the execution unit inputs the processed data relating to the second processing into the learned model;
the inference device according to claim 11. - The image data of the second object input to the learned model has a plurality of channels corresponding to the materials contained in the second object, each channel having a value corresponding to the composition ratio or content ratio of the respective material;
the inference device according to claim 11 or 12. - The first processing and the second processing are processing according to a semiconductor manufacturing process;
the inference device according to any one of claims 11 to 13. - The data relating to the second processing includes parameters indicating processing conditions under which a semiconductor manufacturing apparatus executes the processing according to the semiconductor manufacturing process;
the inference device according to claim 14. - The data relating to the second processing includes environmental information measured when the semiconductor manufacturing apparatus executes the processing according to the semiconductor manufacturing process;
the inference device according to claim 14 or 15. - The parameters include at least one of setting values set in the semiconductor manufacturing apparatus and the hardware configuration of the semiconductor manufacturing apparatus;
the inference device according to claim 15. - The environmental information includes at least one of data relating to current, data relating to voltage, data relating to light, data relating to temperature, and data relating to pressure measured in the semiconductor manufacturing apparatus;
the inference device according to claim 16. - The processing unit processes the data relating to the second processing into a two-dimensional array format corresponding to the vertical and horizontal sizes of the image data of the second object;
the inference device according to claim 12. - The execution unit inputs the inferred image data of the second object after the second processing and data relating to a third processing into the learned model, and infers image data, after the third processing, of the second object after the second processing;
the inference device according to any one of claims 11 to 19. - A learned model trained so that, when image data of a first object and data relating to a first processing applied to the first object are input, its output approaches image data of the first object after the first processing,
the learned model inferring, when image data of a second object and data relating to a second processing are input, image data of the second object after the second processing,
for causing a computer to execute the processing.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980057338.1A CN112640038A (zh) | 2018-09-03 | 2019-08-23 | 学习装置、推断装置及学习完成模型 |
KR1020217006357A KR102541743B1 (ko) | 2018-09-03 | 2019-08-23 | 학습 장치, 추론 장치 및 학습 완료 모델 |
JP2020541139A JP7190495B2 (ja) | 2018-09-03 | 2019-08-23 | 推論方法、推論装置、モデルの生成方法及び学習装置 |
US17/189,608 US11922307B2 (en) | 2018-09-03 | 2021-03-02 | Learning device, inference device, and learned model |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-164931 | 2018-09-03 | ||
JP2018164931 | 2018-09-03 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/189,608 Continuation US11922307B2 (en) | 2018-09-03 | 2021-03-02 | Learning device, inference device, and learned model |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020050072A1 true WO2020050072A1 (ja) | 2020-03-12 |
Family
ID=69722532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/033168 WO2020050072A1 (ja) | 2018-09-03 | 2019-08-23 | 学習装置、推論装置及び学習済みモデル |
Country Status (6)
Country | Link |
---|---|
US (1) | US11922307B2 (ja) |
JP (1) | JP7190495B2 (ja) |
KR (1) | KR102541743B1 (ja) |
CN (1) | CN112640038A (ja) |
TW (1) | TWI803690B (ja) |
WO (1) | WO2020050072A1 (ja) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022145225A1 (ja) * | 2020-12-28 | 2022-07-07 | 東京エレクトロン株式会社 | パラメータ導出装置、パラメータ導出方法及びパラメータ導出プログラム |
WO2022180827A1 (ja) * | 2021-02-26 | 2022-09-01 | 日本電信電話株式会社 | 光学特性のai予測システム |
KR20230124638A (ko) | 2020-12-25 | 2023-08-25 | 도쿄엘렉트론가부시키가이샤 | 관리 시스템, 관리 방법 및 관리 프로그램 |
KR20230127251A (ko) | 2020-12-28 | 2023-08-31 | 도쿄엘렉트론가부시키가이샤 | 관리 장치, 예측 방법 및 예측 프로그램 |
JP7399783B2 (ja) | 2020-04-30 | 2023-12-18 | 株式会社Screenホールディングス | 基板処理装置、基板処理方法、学習用データの生成方法、学習方法、学習装置、学習済モデルの生成方法、および、学習済モデル |
JP7467292B2 (ja) | 2020-03-13 | 2024-04-15 | 東京エレクトロン株式会社 | 解析装置、解析方法及び解析プログラム |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW202115390A (zh) * | 2019-06-06 | 2021-04-16 | 日商東京威力科創股份有限公司 | 基板檢查裝置、基板檢查系統及基板檢查方法 |
EP4035127A4 (en) | 2019-09-24 | 2023-10-18 | Applied Materials, Inc. | INTERACTIVE TRAINING OF A MACHINE LEARNING MODEL FOR TISSUE SEGMENTATION |
US20230043803A1 (en) * | 2021-08-04 | 2023-02-09 | Theia Scientific, LLC | System and method for multi-modal microscopy |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06224126A (ja) * | 1993-01-25 | 1994-08-12 | Fuji Electric Co Ltd | 半導体製造装置の膜質予測装置 |
JPH11330449A (ja) * | 1998-05-20 | 1999-11-30 | Toshiba Corp | 半導体装置の製造方法、シミュレーション装置、シミュレーション方法、シミュレーションプログラムを記録した記録媒体、及びシミュレーション用データを記録した記録媒体 |
JP2004040004A (ja) * | 2002-07-08 | 2004-02-05 | Renesas Technology Corp | 配線設計データを利用した化学的機械的研磨方法、加工物の製造方法、およびデザインルール決定方法 |
JP2004153229A (ja) * | 2002-03-14 | 2004-05-27 | Nikon Corp | 加工形状の予測方法、加工条件の決定方法、加工量予測方法、加工形状予測システム、加工条件決定システム、加工システム、加工形状予測計算機プログラム、加工条件決定計算機プログラム、プログラム記録媒体、及び半導体デバイスの製造方法 |
JP2007227618A (ja) * | 2006-02-23 | 2007-09-06 | Hitachi High-Technologies Corp | 半導体プロセスモニタ方法およびそのシステム |
JP2011071296A (ja) * | 2009-09-25 | 2011-04-07 | Sharp Corp | 特性予測装置、特性予測方法、特性予測プログラムおよびプログラム記録媒体 |
JP2013518449A (ja) * | 2010-01-29 | 2013-05-20 | 東京エレクトロン株式会社 | 半導体製造ツールを自己学習及び自己改善するための方法及びシステム |
US20170194126A1 (en) * | 2015-12-31 | 2017-07-06 | Kla-Tencor Corporation | Hybrid inspectors |
JP2018049936A (ja) * | 2016-09-21 | 2018-03-29 | 株式会社日立製作所 | 探索装置および探索方法 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20020085688A (ko) | 2001-05-09 | 2002-11-16 | 학교법인 인하학원 | 반도체 식각 공정 모의 실험 해석기 및 해석 방법 |
US6819427B1 (en) | 2001-10-10 | 2004-11-16 | Advanced Micro Devices, Inc. | Apparatus of monitoring and optimizing the development of a photoresist material |
KR20040080742A (ko) | 2003-03-13 | 2004-09-20 | 원태영 | 식각 공정 시뮬레이션의 병렬 연산 구현 방법 |
JP2005202949A (ja) | 2003-12-01 | 2005-07-28 | Oscillated Recall Technology:Kk | 大域的エッチングシミュレータ |
US8722547B2 (en) | 2006-04-20 | 2014-05-13 | Applied Materials, Inc. | Etching high K dielectrics with high selectivity to oxide containing layers at elevated temperatures with BC13 based etch chemistries |
US9245714B2 (en) | 2012-10-01 | 2016-01-26 | Kla-Tencor Corporation | System and method for compressed data transmission in a maskless lithography system |
JP6173889B2 (ja) | 2013-11-28 | 2017-08-02 | ソニーセミコンダクタソリューションズ株式会社 | シミュレーション方法、シミュレーションプログラム、加工制御システム、シミュレータ、プロセス設計方法およびマスク設計方法 |
US10056304B2 (en) | 2014-11-19 | 2018-08-21 | Deca Technologies Inc | Automated optical inspection of unit specific patterning |
US9965901B2 (en) * | 2015-11-19 | 2018-05-08 | KLA—Tencor Corp. | Generating simulated images from design information |
CN108700818B (zh) | 2015-12-22 | 2020-10-16 | Asml荷兰有限公司 | 用于过程窗口表征的设备和方法 |
JP2017182129A (ja) * | 2016-03-28 | 2017-10-05 | ソニー株式会社 | 情報処理装置。 |
WO2018048575A1 (en) * | 2016-09-07 | 2018-03-15 | Elekta, Inc. | System and method for learning models of radiotherapy treatment plans to predict radiotherapy dose distributions |
US11580398B2 (en) * | 2016-10-14 | 2023-02-14 | KLA-Tenor Corp. | Diagnostic systems and methods for deep learning models configured for semiconductor applications |
US20210305070A1 (en) | 2017-10-17 | 2021-09-30 | Ulvac, Inc. | Object processing apparatus |
US10572697B2 (en) | 2018-04-06 | 2020-02-25 | Lam Research Corporation | Method of etch model calibration using optical scatterometry |
JP2020057172A (ja) * | 2018-10-01 | 2020-04-09 | 株式会社Preferred Networks | 学習装置、推論装置及び学習済みモデル |
JP2021089526A (ja) * | 2019-12-03 | 2021-06-10 | 株式会社Preferred Networks | 推定装置、訓練装置、推定方法、訓練方法、プログラム及び非一時的コンピュータ可読媒体 |
-
2019
- 2019-08-23 WO PCT/JP2019/033168 patent/WO2020050072A1/ja active Application Filing
- 2019-08-23 KR KR1020217006357A patent/KR102541743B1/ko active IP Right Grant
- 2019-08-23 JP JP2020541139A patent/JP7190495B2/ja active Active
- 2019-08-23 CN CN201980057338.1A patent/CN112640038A/zh active Pending
- 2019-09-02 TW TW108131481A patent/TWI803690B/zh active
-
2021
- 2021-03-02 US US17/189,608 patent/US11922307B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
KR20210038665A (ko) | 2021-04-07 |
JP7190495B2 (ja) | 2022-12-15 |
US20210209413A1 (en) | 2021-07-08 |
CN112640038A (zh) | 2021-04-09 |
US11922307B2 (en) | 2024-03-05 |
KR102541743B1 (ko) | 2023-06-13 |
TW202026959A (zh) | 2020-07-16 |
JPWO2020050072A1 (ja) | 2021-08-26 |
TWI803690B (zh) | 2023-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020050072A1 (ja) | 学習装置、推論装置及び学習済みモデル | |
US20210090244A1 (en) | Method and system for optimizing optical inspection of patterned structures | |
KR102120523B1 (ko) | 프로세스 유도된 왜곡 예측 및 오버레이 에러의 피드포워드 및 피드백 보정 | |
WO2020049974A1 (ja) | 学習装置、推論装置、学習モデルの生成方法及び推論方法 | |
US20200104708A1 (en) | Training apparatus, inference apparatus and computer readable storage medium | |
US10628935B2 (en) | Method and system for identifying defects of integrated circuits | |
CN110325843B (zh) | 引导式集成电路缺陷检测 | |
JPWO2020111258A1 (ja) | 仮想測定装置、仮想測定方法及び仮想測定プログラム | |
Mack et al. | Analytical linescan model for SEM metrology | |
TW202011110A (zh) | 用於計算光學模型模擬的特徵核心的方法 | |
JP6956806B2 (ja) | データ処理装置、データ処理方法及びプログラム | |
TW202123057A (zh) | 推論裝置、推論方法及推論程式 | |
Valade et al. | Tilted beam SEM, 3D metrology for industry | |
US20230369032A1 (en) | Etching processing apparatus, etching processing system, analysis apparatus, etching processing method, and storage medium | |
WO2022145225A1 (ja) | パラメータ導出装置、パラメータ導出方法及びパラメータ導出プログラム | |
Chu et al. | Overlay run-to-run control based on device structure measured overlay in DRAM HVM | |
TWI837123B (zh) | 光阻及蝕刻模型建立 | |
Nikitin | Use of mathematical modeling for measurements of nanodimensions in microelectronics | |
Fang et al. | A compact physical CD-SEM simulator for IC photolithography modeling applications | |
TW202230204A (zh) | 用於提取特徵向量以辨識圖案物件之特徵提取方法 | |
TW202336623A (zh) | 用於建立基於物理之模型之系統和方法 | |
TW202006817A (zh) | 光阻及蝕刻模型建立 | |
JP2002289842A (ja) | シミュレーション装置およびシミュレーション方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19856532 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020541139 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20217006357 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19856532 Country of ref document: EP Kind code of ref document: A1 |