WO2019132901A1 - Generating a machine learning model to place SRAFs - Google Patents
Generating a machine learning model to place SRAFs
- Publication number
- WO2019132901A1 (PCT/US2017/068599)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sraf
- placements
- exploratory
- training
- group
- Prior art date
Classifications
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03F—PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
- G03F1/00—Originals for photomechanical production of textured or patterned surfaces, e.g., masks, photo-masks, reticles; Mask blanks or pellicles therefor; Containers specially adapted therefor; Preparation thereof
- G03F1/36—Masks having proximity correction features; Preparation thereof, e.g. optical proximity correction [OPC] design processes
Definitions
- the present disclosure relates to the field of photolithography, and more specifically relates to generating a machine learning model to place sub-resolution enhancement features (SRAFs).
- the wavelength of the light source may be larger than the feature size of a design pattern to be transferred onto the wafer. This may produce image errors such as interference patterns, diffraction patterns, or the like, or combinations thereof, which may cause a different shape to be printed on the wafer than the shape of the design pattern.
- EPE edge placement error
- SRAFs may be used in Inverse Lithography Technology (ILT) or optical proximity correction (OPC). Techniques such as ILT that work backward from a mask may be associated with slower run time than OPC and/or a less optimal Process Window (PW). In other techniques with faster run time or better PW, SRAFs may be placed a priori. Machine learning may be used to assist SRAF placement in techniques in which SRAFs are placed a priori. Some known techniques for machine learning for SRAF generation may have too much variability (e.g., may too frequently produce different SRAF placements for a same geometry) and/or may result in domain mismatches (e.g., may too frequently produce different SRAF placements for different domains of a same layout).
- FIG. 1 is a block diagram of a system including a datastore of exploratory sub-resolution enhancement features (SRAF) placements and a computing device to generate, using information from the datastore, a machine learning model to select an SRAF placement for correction of a mask pattern to compensate for model predicted liabilities, according to various embodiments.
- FIG. 2 is a flow chart showing a process of generating, using exploratory SRAF placements, a machine learning model to select an SRAF placement for correction of a mask pattern to compensate for model predicted liabilities, according to various embodiments.
- FIG. 3 is a flow chart showing one process that may be performed by any data summarization module described herein, according to various embodiments.
- FIG. 4A is a flow chart showing another process that may be performed by any data summarization module described herein, according to various embodiments.
- FIG. 4B illustrates a block diagram showing the use of search rays, according to various embodiments.
- FIG. 5 illustrates a graph denoting the location of SRAFs with respect to a nearest main polygon in X and Y directions.
- FIG. 6 illustrates an example compute device that may employ the apparatuses and/or methods described herein, according to various embodiments.
- an apparatus to generate, using exploratory SRAF placements, a machine learning model to select an SRAF placement for a mask pattern to compensate for model predicted liabilities may include a processor to: select features based on a training data set, wherein the training data set includes information about a first group of the exploratory SRAF placements; perform a training of the machine learning model based on the selected features; responsive to the training of the model, generate an SRAF placement using the model; validate the SRAF placement using a simulation; and based on a result of the simulation, identify whether to further train the model based on a different training set including information about a second group of the exploratory SRAF placements, or not, prior to outputting the machine learning model.
- the phrase “A and/or B” means (A), (B), or (A and B).
- the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).
- circuitry may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
- Some techniques for placement of SRAFs have either focused on heuristic geometric rules or inverse lithography.
- the rule based approach may lead to extremely sub-optimal placement and hence large off-focus EPE errors for some patterns.
- the inverse lithography based approach may lead to large variations in SRAF placements at similar patterns, leading to uncontrollable large variations in EPE error.
- Some embodiments disclosed herein relate to SRAF placement using machine learning algorithms that are trained on a combination of data from inverse lithography technology (ILT) simulations, fab experiments, heuristic simulations that place SRAFs by trial and error, minimizing EPE across focus variation and reducing variability for similar features, or the like.
- Some embodiments may combine advantages gained by an OPC model based strategy with the uniformity of rule based approaches. Experiments based on these approaches have yielded an average of 2.5 nm lower CD (critical dimension) variation than ILT, with worst-case CD process variability also improving by 1.5 nm.
- a result of the validation may be used to identify whether to further train the machine learning model based on a different training set (e.g., an augmented data set including at least one different exploratory SRAF placement than the previous data set), or not.
- the machine learning model may be used to select an SRAF placement for a mask pattern to compensate for model predicted liabilities.
- the three stages may be repeated. For instance, data summarization may be performed to generate different data than data generated in a previous instance of data summarization.
- the model may be further trained based on the new data, and the further trained model may be validated. A result of the validation of the further trained model may be used to identify whether to perform yet further training of the model based on a yet another different training set, or not.
- Some embodiments may include pre-processing of a training data set, which may be based on a group of one or more SRAF placement strategies (e.g., each training configuration may utilize one or more SRAF placements).
- the pre-processing may be prior to training the ML model.
- Each training configuration may utilize an SRAF placement (of the group of SRAF placements) that yields the best lithographic quality within the group.
- a different group of SRAF placement strategies may be used to build the training data set for challenging patterning configurations.
- FIG. 1 is a block diagram of a system 100 including a datastore 110 to store exploratory sub-resolution enhancement features (SRAF) placements 1-N and a computing device 111 to generate, using information from the datastore 110, an ML model 129 to select an SRAF placement for correction of a mask pattern to compensate for model predicted liabilities, according to various embodiments.
- the computing device 111 may include a data summarization module 121 to receive one or more inputs 119 from the datastore 110, an ML calibration module 122 coupled to an output of the data summarization module 121, and an ML validation module 123 coupled to an output of the ML calibration module 122 and to generate the ML model 129.
- the data summarization module 121 may be configured to identify a first group of the SRAF placements l-N of the datastore 110.
- the data summarization module 121 may be configured to identify features to represent data of the SRAF placements of the first group.
- the data summarization module 121 may be configured to select one of the SRAF placements of the first group based on optimization of lithographic quality.
- the data summarization module 121 may generate an output (e.g., training data) based on the SRAF placement selection.
- the ML calibration module 122 may be configured to perform data modeling.
- the ML calibration module 122 may train an ML model based on the output of the data summarization module 121.
- the ML validation module 123 may be configured to validate the ML model using contour based checking techniques. In some embodiments, ML validation module 123 may be configured to generate an SRAF placement using the ML model. The ML validation module 123 may be configured to identify whether the generated SRAF placement is within error tolerances. In the case that the generated SRAF placement is not within the error tolerances, the ML validation module 123 may be configured to generate a signal 128 to cause the datastore 110 to be augmented. Augmentation may include addition of a new exploratory SRAF placement, removal of an exploratory SRAF placement, modification of a stored exploratory SRAF placement, or the like, or combinations thereof.
- the datastore 110 may include SRAF placements 1-M (not shown), and at a second time the SRAF placement N may be added to the datastore.
- the datastore 110 may include SRAF placements 1-N and an SRAF placement, say SRAF placement 1, may be removed, leaving exploratory SRAF placements 2-N in the datastore 110 at a second time.
- the original content (not shown) of the datastore 110 may be generated by any entity, such as the computing device 111, a different computing device (not shown) coupled to the datastore 110, and/or a user to identify an SRAF placement (say a user identified modification to an SRAF placement), and once generated, input into the datastore 110.
- the datastore 110 may be augmented one or more times by the same or different entity following any model validation described herein, e.g., additional SRAF placements may be generated and added to the datastore, an SRAF placement of the datastore may be removed from the datastore, a stored SRAF placement of the datastore may be modified, or the like, or combinations thereof.
- a datastore 110 may include a first group of the exploratory SRAF placements 1-N prior to one of these augmentations, and the datastore 110 may include a second group of the exploratory SRAF placements 1-N that is different than the first group following that augmentation.
- the data summarization module 121 may receive another input 119 identifying a second, different group of SRAF placements 1-N.
- the modules 121-123 may be configured to perform similar operations based on the input of the second, different group of SRAF placements 1-N.
- the ML model may be eligible to be used in production (e.g., fabrication).
- the ML validation module 123 may output a signal to indicate training complete and/or may retain the ML model 129.
- the signal may include the ML model 129 for storage in any datastore.
- the ML model may be used to generate an SRAF placement for correction of a mask pattern to compensate for model predicted liabilities.
- Photomask fabrication equipment (not shown) may fabricate a photomask based on the mask pattern.
- FIG. 2 is a flow chart showing a process 200 of generating, using exploratory SRAF placements, a machine learning model to select an SRAF placement for correction of a mask pattern to compensate for model predicted liabilities, according to various embodiments.
- the process 200 may be performed by any system and/or computing device described herein, such as the system 100 (FIG. 1).
- in block 201, the system 100 may select features based on a training data set that includes information about a current group of exploratory SRAF placements (e.g., of the datastore 110, FIG. 1).
- Block 201 may include any process for data summarization described herein, including any process of feature selection.
- feature selection may include the process 300 of FIG. 3.
- feature selection may include the process 400 of FIG. 4A.
- in block 202, the system 100 may perform a training of a machine learning model based on the selected features.
- Block 202 may include any process for data modeling described herein.
- the model may be trained using Variational Bayesian Gaussian Mixture Models (VBGMMs).
- the training may be based on a stochastic gradient descent (SGD) multi-class classifier.
- the model may be trained using decision trees and/or random forests.
- the system 100 may use the model to generate an SRAF placement to be evaluated.
- the system 100 may validate the SRAF placement (e.g., using a simulation). If the SRAF placement is within error tolerances in diamond 205, then in block 206 the system 100 may retain the ML model for use in production.
- the system 100 may identify a new group of exploratory SRAF placements.
- the process 200 may return to block 201.
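The loop through blocks 201-207 can be sketched as follows. This is a control-flow illustration only: every function body below is a numeric stand-in invented for the example, and `TOLERANCE`, the group contents, and the "model" dictionary are not from the disclosure.

```python
TOLERANCE = 0.5  # illustrative error tolerance for diamond 205

def select_features(group):
    # Block 201 stand-in: summarize a group of exploratory SRAF placements.
    return sum(group) / len(group)

def train_model(features, prev=None):
    # Block 202 stand-in "model": remembers what it was trained on.
    return {"bias": features, "predict": lambda f: f}

def simulate_placement(placement):
    # Block 204 stand-in simulation: error shrinks near a "good" value of 1.0.
    return abs(placement - 1.0)

def train_until_valid(groups):
    model, ok = None, False
    for group in groups:                           # block 207 supplies each new group
        features = select_features(group)          # block 201: feature selection
        model = train_model(features, prev=model)  # block 202: training
        placement = model["predict"](features)     # block 203: generate an SRAF placement
        error = simulate_placement(placement)      # block 204: validate via simulation
        if error <= TOLERANCE:                     # diamond 205
            ok = True                              # block 206: retain for production
            break
    return model, ok

model, ok = train_until_valid([[3.0, 5.0], [0.9, 1.3]])
```

Here the first group fails validation and the second passes, illustrating the diamond-205 branch back through block 207.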
- FIG. 3 is a flow chart showing one process 300 that may be performed by any data summarization module described herein, according to various embodiments.
- the data summarization module may perform the process responsive to receiving an input layout having a main polygon and one or more other polygons (e.g., around the main polygon). Each of the other polygons around the main polygon may be associated with one or more individual SRAFs of an SRAF placement.
- the data summarization module may identify features (e.g., one or more first values to represent characteristics of a polygon of one of the individual SRAFs of the input layout).
- the data summarization module may identify the features based on a predefined geometry, e.g., a rectangle, and the one or more first values may comprise length values and width values.
- the data summarization module may identify a target variable, e.g., a second value indicative of a location of the individual SRAF relative to the main polygon.
- the data summarization module may add the features and target variable to a training data table.
- a computing device may be configured to (e.g., a data summarization module of the computing device may be configured to), for each SRAF of one of the SRAF placements of the first group of SRAF placements, identify feature values based on a predefined geometry, identify a target variable value based on distance of the SRAF from a reference associated with a target pattern, and add the values to a row of the training data table.
- the data summarization module may use rectangularization of the SRAF shape to characterize it as a collection of rectangles (each specified by a length, a width, and a distance from a reference point of the target pattern).
- the training data table may be used to perform a training of a machine learning model.
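The per-SRAF row building just described (feature values from the predefined rectangle geometry, a distance-based target variable, then a row appended to the table) might look like the sketch below; the tuple layout, field names, and the Euclidean distance are illustrative assumptions, not details from the disclosure.

```python
def summarize(srafs, main_ref):
    """Build training rows from rectangularized SRAFs.

    srafs: iterable of (x, y, length, width) rectangles (hypothetical layout).
    main_ref: reference point associated with the target (main) pattern.
    """
    table = []
    for x, y, length, width in srafs:
        # Feature values based on the predefined geometry (a rectangle):
        row = {"length": length, "width": width}
        # Target variable: distance of the SRAF from the reference point.
        row["target_dist"] = ((x - main_ref[0]) ** 2 +
                              (y - main_ref[1]) ** 2) ** 0.5
        table.append(row)  # one row of the training data table per SRAF
    return table

rows = summarize([(3.0, 4.0, 60.0, 40.0)], main_ref=(0.0, 0.0))
```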
- FIG. 4A is a flow chart showing another process 400 that may be performed by any data summarization module described herein, according to various embodiments.
- Process 400 is similar to process 300, but does not require a predefined geometry and may be more suitable for some SRAF placements.
- a data summarization module may identify the features associated with search rays.
- FIG. 4B illustrates the use of first search rays, e.g., X direction search rays, and second search rays, e.g., Y direction search rays, to identify features associated with an individual SRAF 455.
- blocks 402, 403, 405, and diamond 404 may be similar to blocks 302, 303, 305, and diamond 304 (FIG. 3), respectively.
- FIG. 6 illustrates an example compute device 500 that may employ the apparatuses and/or methods described herein, according to various embodiments (for instance, any apparatus and/or method associated with any compute device or electronic device described earlier with respect to FIGS. 1-4B, including for instance any of the modules described with reference to FIG. 1).
- the example compute device 500 may include a number of components, such as one or more processors 504 (one shown) and at least one communication chip 506.
- the one or more processors 504 each may include one or more processor cores.
- the at least one communication chip 506 may be physically and electrically coupled to the one or more processors 504.
- the at least one communication chip 506 may be part of the one or more processors 504.
- compute device 500 may include printed circuit board (PCB) 502.
- the one or more processors 504 and the at least one communication chip 506 may be disposed thereon.
- compute device 500 may include other components that may or may not be physically and electrically coupled to the PCB 502. These other components include, but are not limited to, a memory controller (not shown), volatile memory (e.g., dynamic random access memory (DRAM) 520), non-volatile memory such as flash memory 522, hardware accelerator 524, an I/O controller (not shown), a digital signal processor (not shown), a crypto processor (not shown), a graphics processor 530, one or more antenna 528, a display (not shown), a touch screen display 532, a touch screen controller 546, a battery 536, an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device 540, a compass 542, an accelerometer (not shown), a gyroscope (not shown), a speaker 550, and a mass storage device (such as a hard disk drive, a solid state drive, compact disk (CD), digital versatile disk (DVD)) (not shown), and so forth.
- the one or more processor 504, DRAM 520, flash memory 522, and/or a storage device may include associated firmware (not shown) storing programming instructions configured to enable compute device 500, in response to execution of the programming instructions by one or more processor 504, to perform methods described herein such as machine learning techniques for identifying an SRAF placement (such as generating, using exploratory SRAF placements, a machine learning model to select an SRAF placement for a mask pattern to compensate for liabilities predicted by modeling).
- these aspects may additionally or alternatively be implemented using hardware separate from the one or more processor 504, flash memory 522, or a storage device, such as hardware accelerator 524 (which may be a Field Programmable Gate Array (FPGA)).
- the at least one communication chip 506 may enable wired and/or wireless communications.
- wireless and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
- the at least one communication chip 506 may implement any of a number of wireless standards or protocols, including but not limited to IEEE 802.20, Long Term Evolution (LTE), LTE Advanced (LTE-A), General Packet Radio Service (GPRS), Evolution Data Optimized (Ev-DO), Evolved High Speed Packet Access (HSPA+), Evolved High Speed Downlink Packet Access (HSDPA+), Evolved High Speed Uplink Packet Access (HSUPA+), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond.
- the at least one communication chip 506 may include a plurality of communication chips 506. For instance, a first communication chip 506 may be dedicated to shorter range wireless communications, and a second communication chip 506 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.
- the compute device 500 may be a component of a laptop, a netbook, a notebook, an ultrabook, a smartphone, a computing tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, a digital camera, an appliance, a portable music player, and/or a digital video recorder.
- the compute device 500 may be any other electronic device that processes data.
- the computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non- exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission media such as those supporting the Internet or an intranet, or a magnetic storage device.
- the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
- a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- the computer- usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave.
- the computer usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc.
- Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- Some embodiments may apply techniques disclosed herein to several other aspects of OPC correction algorithms, using data accumulated through experience as well as ILT.
- Some example embodiments may include learning mask shapes, pre-biasing for different features, learning correction damping rate (MEEF (mask error enhancement factor) of OPC correctors) for different features, SMO (source mask optimization), or the like, or combinations thereof.
- MEEF mask error enhancement factor
- Any process described in this section may be performed by any data summarization module described herein, e.g., data summarization module 121 (FIG. 1) and/or in any process of selecting features (e.g., block 201 of FIG. 2).
- model based inverse lithography may be used to obtain predicted SRAFs. Additional SRAFs that are known, from prior learning for the process, to play a vital role in enabling the printing process may also be generated.
- this data may be converted into feature-based tabular data.
- Feature selection may depend on the design rules for the layer.
- various measurements may be performed for each of the SRAFs in the training set.
- the common features that may be measured may include:
- the first listed feature may include continuous variables that may need further processing, and these may correspond to SRAF properties to be modeled.
- the next three listed features may also represent SRAF properties that help define the target variable.
- the remaining features may be feature variables, e.g., the independent variables in our model.
- the last three features in the list may depend on local SRAF placement density.
- the ML model validation phase may include two sub-phases. In the first sub-phase, a simpler model may be used (without using these features) for one or more SRAF placements. Then, in the second sub-phase, using the SRAFs from the first sub-phase, SRAF density related features (e.g., the last three features in the list) may be collected. The model may then be used to predict the SRAFs.
- These features may define various aspects of the geometric neighborhood of SRAFs. The feature set is not limited to the above list. Some embodiments may utilize more advanced features for complex design rules, including features like local geometric and image map hashes.
- an additional categorical feature may be used, e.g., the type of SRAF, with three levels: corner SRAFs, horizontal side SRAFs, and vertical side SRAFs.
- some systems may first collect the X and Y distance of SRAFs from main features, then may convert this into categorical data. This may be done to ensure that the modeling problem is a classification problem rather than a regression problem.
- the conversion of X, Y data to the categorical data may be done in an unsupervised learning setup.
- the continuous X and Y distance variables measured above (the first bullet in the list) may be converted to binned values using the following scheme, based on Variational Bayesian Gaussian Mixture Models (VBGMMs).
- FIG. 5 illustrates a graph 599 resulting from the use of such a model in one example.
- the ellipses show the center of different types of extracted SRAFs.
- The “SizeX” and “SizeY” axis variables denote the location of SRAFs with respect to the nearest region of the main polygon in X and Y directions, respectively.
- the spread of the different clusters may be controlled through VBGMM hyperparameters.
- the wide spread of SRAF placement may be categorized into nineteen types of SRAFs, as denoted in the figure by ellipses.
- each of them may be given a unique ID.
- the ML model (e.g., the output ML model once all training is complete) may be configured to predict the ID of the SRAF type, and the SRAF may be placed by the ML model based on the centroid of the ID.
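A minimal way to realize this VBGMM binning is scikit-learn's `BayesianGaussianMixture`. The library, the synthetic distance data, and the parameter values below are all assumptions for illustration; the disclosure does not name an implementation.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for measured (SizeX, SizeY) SRAF-to-main-polygon offsets:
# two tight groups of exploratory placements.
xy = np.vstack([rng.normal([60.0, 0.0], 2.0, size=(50, 2)),
                rng.normal([0.0, 60.0], 2.0, size=(50, 2))])

# A small weight_concentration_prior lets the variational fit prune unused
# components, so n_components is only an upper bound on the SRAF-type count.
vbgmm = BayesianGaussianMixture(n_components=8,
                                weight_concentration_prior=1e-3,
                                random_state=0).fit(xy)
ids = vbgmm.predict(xy)       # unique SRAF-type ID for each placement
centroids = vbgmm.means_      # an SRAF is later placed at the centroid of its ID
```

Each placement thus receives a discrete ID, and the centroid table supplies the coordinates used when the model later places an SRAF of a predicted type.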
- Training / Validation: For the selected layout(s), for every unique polygon-SRAF pair, there may be different features that describe the surroundings (the feature variables described above) and a response variable that defines the type of the SRAF placed, i.e., the ID of the SRAF determined using the VBGMM clustering. In some embodiments, this data set may include tens of millions of entries (e.g., rows). Some embodiments may use a 10-fold cross validation to evaluate and regularize ML models.
- wherever SRAF placement is desired, all the feature variables may be evaluated. This may later be used to identify a prediction and placement of the SRAF.
- Any process described in this section may be performed by any ML calibration module described herein, e.g., ML calibration module 122 (FIG. 1) and/or in any process of performing a training (e.g., block 202 of FIG. 2).
- Some embodiments may use a stochastic gradient descent (SGD) multi-class classifier to build a model to find a mapping between feature variables and the type of SRAF (the SRAF ID).
- SGD models may be highly efficient for large data sets in terms of training and prediction.
- SGD models may be partially trained (e.g., when new training data is identified, the model may be updated). This may yield highly parallelizable implementations of such models.
- SGD models may be highly sensitive to feature scaling. Therefore, before training/prediction with these models, some systems may standardize the features to mean 0 and variance 1. Furthermore, the same scaling function may be applied to both the training and testing data sets.
- a modified Huber loss function may be a smooth loss function that is tolerant to outliers and may support a prediction of a probability of different SRAF IDs rather than exact ID.
- Different hyperparameters, such as the learning rate and the number of iterations, may be optimized using grid search and 10-fold cross validation.
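The calibration steps above (standardize to mean 0 / variance 1, fit an SGD multi-class classifier with a modified Huber loss, reuse the same scaler on test data) can be sketched with scikit-learn; the library choice and all data and parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Toy feature table: rows of geometric feature variables; labels are SRAF IDs.
X = np.vstack([rng.normal(0.0, 1.0, (40, 4)), rng.normal(5.0, 1.0, (40, 4))])
y = np.array([0] * 40 + [1] * 40)

scaler = StandardScaler().fit(X)            # SGD is sensitive to feature scale
clf = SGDClassifier(loss="modified_huber",  # smooth, outlier-tolerant loss
                    max_iter=1000, random_state=0)
clf.fit(scaler.transform(X), y)

# The same scaling function is applied to new (testing) data.
X_new = rng.normal(5.0, 1.0, (5, 4))
pred = clf.predict(scaler.transform(X_new))
# modified_huber also supports probabilities over SRAF IDs, per the text above.
proba = clf.predict_proba(scaler.transform(X_new))
```

`SGDClassifier` also exposes `partial_fit`, which matches the partial-training remark above: the model can be updated in place when new training data is identified.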
- SRAF generation may be done in two phases.
- in the first phase, the SRAF placement may be performed using the simple model.
- in the second phase, the SRAF density related features may be collected from the result of the first step, and then the full model may be used to place SRAFs.
- the SRAF placement may be performed using a protocol.
- an exhaustive set of feature parameters may be created to represent a design space.
- the selected model may then be used to predict the corresponding SRAF ID.
- Each of these SRAF IDs may then be replaced by the corresponding centroid positions.
- the combination of feature space and predicted SRAF locations may define the rule with which SRAF placement should be done in any new database.
- Some SRAF placements may utilize rule-based SRAFs.
- Some embodiments may use any data summarization, ML calibration, and ML model validation described herein, and may use search rays for feature identification.
- the features may not make any assumption on the topology of the develop check critical dimensions (DCCD) target.
- the features may be distances between the “center” point (located on a DCCD target edge) and other DCCD edges. Further details are provided in the subsections entitled Features.
- the ML model may only try to predict the distance between the "center" point (on the DCCD target) and the closest predicted SRAF.
- features that represent the DCCD pattern around the point may be extracted.
- the system may trace rays in x and y directions around the center point and calculate the orthogonal distance between the center point and other DCCD target edges. In some embodiments, calculation may be based on the spacing to the three closest edges (three in the outer side, and three in the inner side). Some embodiments may utilize three rays in the x direction and three rays in the y direction, as illustrated in FIG. 4B.
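The ray tracing described above can be sketched in one dimension; a real implementation would trace rays in both x and y as illustrated in FIG. 4B, and would intersect actual edge geometry rather than a precomputed coordinate list. The function name, the sentinel padding, and the edge representation are assumptions for illustration.

```python
def ray_features(center_x, edge_xs, n=3):
    """Distances along a horizontal search ray from the center point to
    the n closest DCCD edges on each side (outer side, then inner side).
    `edge_xs` holds the x coordinates of the (vertical) edges the ray
    crosses at the center point's y coordinate."""
    right = sorted(x - center_x for x in edge_xs if x > center_x)[:n]
    left = sorted(center_x - x for x in edge_xs if x < center_x)[:n]

    def pad(ds):
        # Pad with a sentinel when fewer than n edges exist on a side.
        return ds + [float("inf")] * (n - len(ds))

    return pad(right) + pad(left)
```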
- in some embodiments, the ML model may attempt to predict the distance between the center point and the closest SRAF in the direction orthogonal to the DCCD target.
- the SRAF distance may be predicted at intervals (say, every few nanometers) along the edges of the DCCD target.
- in the SRAF generation step, if the spacing between center points is small enough, the combination of all the generated SRAFs is a non-rectangular shape that is not compliant with the MRC rules. Some embodiments may use rectangularization to characterize this shape.
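One way rectangularization could work is to merge runs of adjacent center points whose predicted distances agree (within a quantum) into single axis-aligned rectangles. This is a hypothetical sketch, not the disclosed algorithm; the binning scheme and the `(start, end, distance)` output format are assumptions.

```python
def rectangularize(distances, step, quantum):
    """Merge consecutive center points (spaced `step` apart along a
    DCCD edge) whose predicted SRAF distances fall into the same
    `quantum`-sized bin into one rectangle, turning the jagged union of
    per-point SRAF fragments into MRC-friendly axis-aligned boxes."""
    rects, start = [], 0
    bins = [d // quantum for d in distances]
    for i in range(1, len(distances) + 1):
        if i == len(distances) or bins[i] != bins[start]:
            rects.append((start * step, (i - 1) * step, distances[start]))
            start = i
    return rects
```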
- Some embodiments may utilize decision trees for ML model calibration.
- the ML model may be associated with a hierarchy of two decision trees.
- the first decision tree in the hierarchy may predict the existence of SRAFs.
- the second decision tree may predict the distance between the SRAF and the DCCD target.
- Some embodiments may utilize random forests (e.g., an ensemble of decision trees) to prevent overfitting the data.
- the ML model may be interpreted as a rule-based SRAF placement algorithm with highly complex rules.
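The two-tree hierarchy above can be sketched with decision stumps (one-split trees) standing in for the trained trees or forests. Everything below is illustrative: `fit_stump`, `SrafPlacer`, and the single-feature input are assumptions made to keep the sketch self-contained.

```python
def fit_stump(xs, ys):
    """Fit a one-split decision stump (the simplest decision tree) by
    exhaustive threshold search; a stand-in for a full tree or forest."""
    best_t, best_err = xs[0], len(xs) + 1
    for t in sorted(set(xs)):
        err = sum((x >= t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return lambda x, t=best_t: x >= t

class SrafPlacer:
    """Hierarchy from the text: the first tree predicts whether an SRAF
    exists at a center point; only if it does is the second tree asked
    for the SRAF-to-DCCD-target distance."""
    def __init__(self, exists, distance):
        self.exists = exists        # classifier: feature -> bool
        self.distance = distance    # regressor: feature -> distance

    def place(self, x):
        return self.distance(x) if self.exists(x) else None
```

Replacing each stump with an ensemble of trees (a random forest) would give the overfitting resistance noted above without changing this control flow.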
- Target pattern (e.g., a wafer target pattern) - information representing the desired pattern to be printed on the wafer.
- Mask pattern - a pattern that is to be used to create a photomask for optical lithography.
- OPC model - a numerical model developed to predict the wafer target pattern given a mask pattern.
- Model based OPC - modifying a given input mask pattern to mitigate wafer or patterning liabilities as predicted by one or more OPC models.
- Model predicted liabilities - a model may be used for specific process and optic considerations, such as to predict differences between the wafer target pattern and the mask pattern prior to fabrication. Predicted differences that exceed tolerances (such as CD differences) may require model based OPC or other manipulation of the mask pattern, e.g., addition of SRAFs to the mask pattern, removal of SRAFs from the mask pattern, changes to SRAFs in the mask pattern, or the like, or combinations thereof.
- Example 1 is an apparatus for generating a machine learning model to place SRAFs.
- the apparatus may include a processor to generate, using exploratory SRAF placements, a machine learning model to select an SRAF placement for a mask pattern to compensate for model predicted liabilities, the processor further to: select features based on a training data set, wherein the training data set includes information about a first group of the exploratory SRAF placements; perform a training of the machine learning model based on the selected features; responsive to the training of the machine learning model, generate an SRAF placement using the machine learning model; validate the SRAF placement using a simulation; and based on a result of the simulation, identify whether to further train the machine learning model based on a different training set including information about a second group of the exploratory SRAF placements, or not, prior to outputting the machine learning model.
- Example 2 includes the subject matter of example 1 (or any other example described herein), further comprising wherein the training data set comprises a training data table, the processor further to, for each SRAF of one of the SRAF placements of the first group of SRAF placements: identify feature values based on a predefined geometry; identify a target variable value based on a distance of the SRAF from a reference associated with a target pattern; and add the values to a row of the training data table.
- Example 3 includes the subject matter of any of examples 1-2 (or any other example described herein), wherein the predefined geometry comprises a rectangle and the feature values comprise length values and width values.
- Example 4 includes the subject matter of any of examples 1-3 (or any other example described herein), wherein the training data set comprises a training data table, the processor further to, for each SRAF of one of the SRAF placements of the first group of SRAF placements: identify a feature value associated with a search ray; identify a target variable value based on a distance of the SRAF from a reference associated with a target pattern; and add the values to a row of the training data table.
- Example 5 includes the subject matter of any of examples 1-4 (or any other example described herein), wherein identify the feature value associated with the search ray further comprises: identify a length of a distance from a reference point associated with the target pattern to the respective one of the SRAFs, wherein the distance comprises a length of the search ray.
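The training-table construction recited in examples 2 and 4 (one row per SRAF: feature values plus a target variable based on the SRAF's distance from a reference on the target pattern) can be sketched as below. The two callables are hypothetical helpers, not names from the disclosure: `features_of` stands for either the predefined-geometry or the search-ray feature extraction.

```python
def build_training_table(placements, features_of, distance_to_reference):
    """Build a training data table with one row per SRAF: the feature
    values followed by the target variable value (the SRAF's distance
    from a reference associated with the target pattern)."""
    table = []
    for placement in placements:      # a group of exploratory placements
        for sraf in placement:
            row = list(features_of(sraf)) + [distance_to_reference(sraf)]
            table.append(row)
    return table
```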
- Example 6 is a system for generating a machine learning model to place SRAFs.
- the system may include a datastore to retain exploratory SRAF placements; and a computing device to generate a machine learning (ML) model to select an SRAF placement for a mask pattern to compensate for model predicted liabilities, the computing device comprising: a data summarization module to access the database responsive to each request from the computing device, to identify an SRAF placement based on each access to the database, and to provide an output to an ML calibration module based on each identified SRAF placement; the ML calibration module to perform a training of a current instance of the ML model responsive to a receipt of data from the data summarization module, and to output the trained current instance of the ML model to an ML validation module; and the ML validation module to identify whether an SRAF placement generated using the trained current instance of the ML model is within error tolerances; wherein responsive to not being within the error tolerances, the computing device is further to augment the database and send a next request to the data summarization module for a next access to the database following augmentation; and wherein responsive to being within the error tolerances, the generated ML model comprises the trained current instance of the ML model.
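The summarize/train/validate/augment loop of this system can be sketched as a simple control flow. Every callable below is a hypothetical stand-in for the corresponding module (data summarization, ML calibration, ML validation, database augmentation); none of these names come from the disclosure, and `max_rounds` is an assumed safety bound.

```python
def generate_model(database, summarize, train, placement_error,
                   tolerance, augment, max_rounds=10):
    """Summarize a group of exploratory SRAF placements from the
    database, train the current model instance, validate the placement
    it generates, and either output the model (within tolerance) or
    augment the database and repeat with the next group."""
    model = None
    for _ in range(max_rounds):
        model = train(model, summarize(database))
        if placement_error(model) <= tolerance:
            return model    # trained current instance becomes the output
        database = augment(database)
    return model
```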
- Example 7 includes the subject matter of example 6 (or any other example described herein), wherein the ML calibration module is further to perform training using one or more Variational Bayesian Gaussian Mixture Models (VBGMMs) that are trained based on at least one stochastic gradient descent (SGD) multi-class classifier.
- Example 8 includes the subject matter of any of examples 6-7 (or any other example described herein), wherein the ML calibration module is further to perform training using decision trees or random forests.
- Example 9 includes the subject matter of any of examples 6-8 (or any other example described herein), wherein the data summarization module is further to identify feature values for each SRAF of the identified SRAF placement based on a predefined geometry.
- Example 10 includes the subject matter of any of examples 6-9 (or any other example described herein), wherein the data summarization module is further to identify feature values for each SRAF of the identified SRAF placement based on a search ray.
- Example 11 is a method of generating, using exploratory SRAF placements, a machine learning model to select an SRAF placement for a mask pattern to compensate for model predicted liabilities, the method comprising: selecting features based on a training data set, wherein the training data set includes information about a first group of the exploratory SRAF placements; performing a training of the machine learning model based on the selected features; responsive to the training of the machine learning model, generating an SRAF placement using the machine learning model; validating the SRAF placement using a simulation; and identifying whether to further train the machine learning model based on a different training set including information about a second group of the exploratory SRAF placements, or not, based on a result of the simulation.
- Example 12 includes the subject matter of example 11 (or any other example described herein), wherein the first group of the exploratory SRAF placements comprises more than one of the exploratory SRAF placements.
- Example 13 includes the subject matter of any of examples 11-12 (or any other example described herein), wherein the second group of the exploratory SRAF placements comprises more than one of the exploratory SRAF placements.
- Example 14 includes the subject matter of any of examples 11-13 (or any other example described herein), wherein the second group of exploratory SRAF placements comprises at least one of the exploratory SRAF placements of the first group of exploratory SRAF placements.
- Example 15 includes the subject matter of any of examples 11-14 (or any other example described herein), wherein the second group of exploratory SRAF placements is different than the first group of exploratory SRAF placements, and wherein the second group of exploratory SRAF placements includes all of the exploratory SRAF placements of the first group of exploratory SRAF placements.
- Example 16 includes the subject matter of any of examples 11-15 (or any other example described herein), wherein at least one of the exploratory SRAF placements is generated from inverse lithography (ILT) simulations or heuristic simulations.
- Example 17 includes the subject matter of any of examples 11-16 (or any other example described herein), wherein the first group of exploratory SRAF placements includes at least one exploratory SRAF placement generated from one or more inverse lithography (ILT) simulations and at least one exploratory SRAF placement generated from one or more heuristic simulations.
- Example 18 includes the subject matter of any of examples 11-17 (or any other example described herein), wherein the training data set comprises a training data table, and the method further comprises, for each SRAF of one of the SRAF placements of the first group of SRAF placements: identifying feature values based on a predefined geometry; identifying a target variable value based on a distance of the SRAF from a reference associated with a target pattern; and adding the values to a row of the training data table.
- Example 19 includes the subject matter of any of examples 11-18 (or any other example described herein), wherein the training data set comprises a training data table, and the method further comprises, for each SRAF of one of the SRAF placements of the first group of SRAF placements: identifying a feature value associated with a search ray; identifying a target variable value based on a distance of the SRAF from a reference associated with a target pattern; and adding the values to a row of the training data table.
- Example 20 is a machine readable medium storing instructions that, when executed, cause the machine to perform the steps of any of examples 11-19 (or any other example described herein).
- Example 21 is a system for generating a machine learning model to place SRAFs.
- the system may include photomask fabrication equipment to form a photomask based on a mask pattern; and a computer readable media having instructions for generating, using exploratory SRAF placements, a machine learning model to select an SRAF placement for the mask pattern to compensate for model predicted liabilities, wherein the instructions when executed, cause a processor to: select features based on a training data set, wherein the training data set includes information about a first group of the exploratory SRAF placements; perform a training of the machine learning model based on the selected features; responsive to the training of the machine learning model, generate an SRAF placement using the machine learning model; validate the SRAF placement using a simulation; and based on a result of the simulation, identify whether to further train the machine learning model based on a different training set including information about a second group of the exploratory SRAF placements, or not, prior to outputting the machine learning model.
- Example 22 includes the subject matter of example 21 (or any other example described herein), wherein the training data set comprises a training data table, and the instructions are further to cause the processor to, for each SRAF of one of the SRAF placements of the first group of SRAF placements: identify feature values based on a predefined geometry; identify a target variable value based on a distance of the SRAF from a reference associated with a target pattern; and add the values to a row of the training data table.
- Example 23 includes the subject matter of any of examples 21-22 (or any other example described herein), wherein the predefined geometry comprises a rectangle and the feature values comprise length values and width values.
- Example 24 includes the subject matter of any of examples 21-23 (or any other example described herein), wherein the training data set comprises a training data table, and the instructions are further to cause the processor to, for each SRAF of one of the SRAF placements of the first group of SRAF placements: identify a feature value associated with a search ray; identify a target variable value based on a distance of the SRAF from a reference associated with a target pattern; and add the values to a row of the training data table.
- Example 25 includes the subject matter of any of examples 21-24 (or any other example described herein), wherein identify the feature value associated with the search ray further comprises: identify a length of a distance from a reference point associated with the target pattern to the respective one of the SRAFs, wherein the distance comprises a length of the search ray.
- Example 26 is an apparatus for generating, using exploratory SRAF placements, a machine learning model to select an SRAF placement for a mask pattern to compensate for model predicted liabilities, the apparatus comprising: means for selecting features based on a training data set, wherein the training data set includes information about a first group of the exploratory SRAF placements; means for performing a training of the machine learning model based on the selected features; means for generating an SRAF placement using the machine learning model responsive to the training of the machine learning model; means for validating the SRAF placement using a simulation; and means for identifying whether to further train the machine learning model based on a different training set including information about a second group of the exploratory SRAF placements, or not, based on a result of the simulation.
- Example 27 includes the subject matter of example 26 (or any other example described herein), wherein at least one of the exploratory SRAF placements is generated from inverse lithography (ILT) simulations.
- Example 28 includes the subject matter of any of examples 26-27 (or any other example described herein), wherein at least one of the exploratory SRAF placements is generated from heuristic simulations.
- Example 29 includes the subject matter of any of examples 26-28 (or any other example described herein), wherein the first group of exploratory SRAF placements includes at least one exploratory SRAF placement generated from one or more inverse lithography (ILT) simulations and at least one exploratory SRAF placement generated from one or more heuristic simulations.
- Example 30 includes the subject matter of any of examples 26-29 (or any other example described herein), wherein the training data set comprises a training data table, and the apparatus further comprises: means for identifying, for each SRAF of one of the SRAF placements of the first group of SRAF placements, feature values based on a predefined geometry; means for identifying a target variable value based on a distance of the SRAF from a reference associated with a target pattern; and means for adding the values to a row of the training data table.
- Example 31 includes the subject matter of any of examples 26-30 (or any other example described herein), wherein the training data set comprises a training data table, and the apparatus further comprises: means for identifying, for each SRAF of one of the SRAF placements of the first group of SRAF placements, a feature value associated with a search ray; means for identifying a target variable value based on a distance of the SRAF from a reference associated with a target pattern; and means for adding the values to a row of the training data table.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
Apparatuses, methods, and storage media associated with generating a machine learning model to place sub-resolution enhancement features (SRAFs) are disclosed. An apparatus may generate, using exploratory SRAF placements, a machine learning model to select an SRAF placement for a mask pattern to compensate for model predicted liabilities, and may include a processor to: select features based on a training data set, the training data set including information about a first group of the exploratory SRAF placements; perform a training of the model based on the selected features; responsive to the training of the model, generate an SRAF placement using the model; validate the SRAF placement using a simulation; and, based on a result of the simulation, identify whether or not to further train the model based on a different training set including information about a second group of the exploratory SRAF placements, prior to outputting the model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2017/068599 WO2019132901A1 (fr) | 2017-12-27 | 2017-12-27 | Génération d'un modèle d'apprentissage automatique pour placer des sraf |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2017/068599 WO2019132901A1 (fr) | 2017-12-27 | 2017-12-27 | Génération d'un modèle d'apprentissage automatique pour placer des sraf |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019132901A1 true WO2019132901A1 (fr) | 2019-07-04 |
Family
ID=67068012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2017/068599 WO2019132901A1 (fr) | 2017-12-27 | 2017-12-27 | Génération d'un modèle d'apprentissage automatique pour placer des sraf |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019132901A1 (fr) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130031518A1 (en) * | 2011-07-26 | 2013-01-31 | Juan Andres Torres Robles | Hybrid Hotspot Detection |
US20140358830A1 (en) * | 2013-05-30 | 2014-12-04 | Synopsys, Inc. | Lithographic hotspot detection using multiple machine learning kernels |
US20150213374A1 (en) * | 2014-01-24 | 2015-07-30 | International Business Machines Corporation | Detecting hotspots using machine learning on diffraction patterns |
WO2017171890A1 (fr) * | 2016-04-02 | 2017-10-05 | Intel Corporation | Systèmes, procédés et appareils pour réduire une erreur de modèle d'opc par l'intermédiaire d'un algorithme d'apprentissage automatique |
-
2017
- 2017-12-27 WO PCT/US2017/068599 patent/WO2019132901A1/fr active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130031518A1 (en) * | 2011-07-26 | 2013-01-31 | Juan Andres Torres Robles | Hybrid Hotspot Detection |
US20140358830A1 (en) * | 2013-05-30 | 2014-12-04 | Synopsys, Inc. | Lithographic hotspot detection using multiple machine learning kernels |
US20150213374A1 (en) * | 2014-01-24 | 2015-07-30 | International Business Machines Corporation | Detecting hotspots using machine learning on diffraction patterns |
WO2017171890A1 (fr) * | 2016-04-02 | 2017-10-05 | Intel Corporation | Systèmes, procédés et appareils pour réduire une erreur de modèle d'opc par l'intermédiaire d'un algorithme d'apprentissage automatique |
Non-Patent Citations (1)
Title |
---|
XU, X. ET AL.: "A machine learning based framework for sub-resolution assist feature generation", INTERNATIONAL SYMPOSIUM ON PHYSICAL DESIGN, 6 March 2016 (2016-03-06), pages 161 - 168, XP058079862 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210240907A1 (en) | Method and Apparatus for Integrated Circuit Mask Patterning | |
US8762912B2 (en) | Tiered schematic-driven layout synchronization in electronic design automation | |
US8423941B2 (en) | Structural migration of integrated circuit layout | |
US11562118B2 (en) | Hard-to-fix (HTF) design rule check (DRC) violations prediction | |
US20150339430A1 (en) | Virtual hierarchical layer usage | |
KR20220050980A (ko) | 집적 회로들을 위한 신경망 기반 마스크 합성 | |
US9898567B2 (en) | Automatic layout modification tool with non-uniform grids | |
US11481536B2 (en) | Method and system for fixing violation of layout | |
US20220335197A1 (en) | Post-Routing Congestion Optimization | |
KR20220041117A (ko) | 인공 신경망에 의해 예측된 고장 모드들에 기초한 레티클 향상 기법 레시피들의 적용 | |
US12093629B2 (en) | Method of manufacturing semiconductor device and system for same | |
US20180196349A1 (en) | Lithography Model Calibration Via Genetic Algorithms with Adaptive Deterministic Crowding and Dynamic Niching | |
CN115470741A (zh) | 用于光源掩模协同优化的方法、电子设备和存储介质 | |
US20140282299A1 (en) | Method and apparatus for performing optical proximity and photomask correction | |
WO2019132901A1 (fr) | Génération d'un modèle d'apprentissage automatique pour placer des sraf | |
Wang et al. | Optimization of self-aligned double patterning (SADP)-compliant layout designs using pattern matching for 10nm technology nodes and beyond | |
Asthana et al. | OPC recipe optimization using genetic algorithm | |
Verma et al. | Pattern-based pre-OPC operation to improve model-based OPC runtime | |
US20120127442A1 (en) | Determining lithographic set point using optical proximity correction verification simulation | |
US11651135B2 (en) | Dose optimization techniques for mask synthesis tools | |
US11657207B2 (en) | Wafer sensitivity determination and communication | |
US10839133B1 (en) | Circuit layout similarity metric for semiconductor testsite coverage | |
US11972191B2 (en) | System and method for providing enhanced net pruning | |
Duan et al. | Design technology co-optimization for 14/10nm metal1 double patterning layer | |
US20220035241A1 (en) | Dose information generation and communication for lithography manufacturing systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17936845 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17936845 Country of ref document: EP Kind code of ref document: A1 |