CN111310407A - Method for designing optimal feature vector of reverse photoetching based on machine learning - Google Patents

Method for designing optimal feature vector of reverse photoetching based on machine learning

Info

Publication number
CN111310407A
CN111310407A CN202010083839.1A CN202010083839A CN111310407A CN 111310407 A CN111310407 A CN 111310407A CN 202010083839 A CN202010083839 A CN 202010083839A CN 111310407 A CN111310407 A CN 111310407A
Authority
CN
China
Prior art keywords
training
neural network
value
network model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010083839.1A
Other languages
Chinese (zh)
Inventor
时雪龙
赵宇航
陈寿面
李琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai IC R&D Center Co Ltd
Original Assignee
Shanghai IC R&D Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai IC R&D Center Co Ltd filed Critical Shanghai IC R&D Center Co Ltd
Priority to CN202010083839.1A priority Critical patent/CN111310407A/en
Publication of CN111310407A publication Critical patent/CN111310407A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)

Abstract

An optimal feature vector design method for a machine-learning-based inverse lithography solution comprises: dividing a design target pattern into a plurality of grid cells; calculating a set of feature functions {Ki(x, y)}, i = 1, 2, … N1, according to the imaging conditions; establishing a neural network model and selecting the training samples and verification samples to be included in training; calculating a signal set {Si(x, y)} for each grid cell using the feature function set {Ki(x, y)}; and taking the value of the rigorous inverse lithography solution at the corresponding position as the target value for neural network training. During training, different input dimensions N1, numbers of hidden layers N2 and numbers of neurons per hidden layer M1, M2, … MN2 are used; training is performed with the training samples and verified with the verification samples until a neural network model is found whose combination of input dimension N1, number of hidden layers N2 and numbers of neurons per hidden layer M1, M2, … MN2 is satisfactory. The design method of the invention therefore removes the need for a feature extraction layer in the neural network, which simplifies the network architecture and shortens the training time.

Description

Method for designing optimal feature vector of reverse photoetching based on machine learning
Technical Field
The invention belongs to the field of integrated circuit manufacturing, and particularly relates to a method for designing an optimal feature vector for inverse lithography based on machine learning.
Background
Computational lithography plays a vital role in the semiconductor industry. As semiconductor technology nodes shrink to 14 nm and below, lithography gradually approaches its physical limit. Source mask optimization (SMO), as a novel resolution enhancement technique, can significantly improve the overlapping process window of semiconductor lithography at limiting dimensions and effectively extend the life cycle of existing lithography technology. SMO is not only an important component of 193 nm immersion lithography, but will also be an essential technology in EUV lithography.
The basic principle of source mask co-optimization simulation is similar to model-based proximity effect correction. The edges of the mask pattern are moved and their deviation from the target pattern on the wafer, i.e., the edge placement error, is calculated. Perturbations of exposure dose, focus and pattern size on the mask are deliberately introduced into the model during optimization, and the edge placement errors of the image on the wafer caused by these perturbations are calculated. Both the merit function and the optimization are based on edge placement errors. The result of source mask co-optimization includes not only a pixelated source but also proximity effect corrections to the input design.
After source mask co-optimization, inverse lithography has become the final frontier of computational lithography. However, inverse lithography requires enormous computing hardware resources and long computation times, and a rigorous full-chip inverse lithography solution is still impractical. Since the 3D mask effects of extreme ultraviolet (EUV) masks are more pronounced than those of immersion lithography masks, attempting a rigorous full-chip inverse lithography solution for EUV becomes even more computationally intensive and difficult.
Inverse lithography technology (ILT) is considered the next-generation resolution enhancement technology for 45 nm, 32 nm and even 22 nm lithography. One very promising way to overcome this obstacle is to leverage maturing machine learning techniques based on neural network structures in computational lithography, in particular the deep convolutional neural network (DCNN), to obtain an inverse lithography (ILT) solution much faster than the rigorous inverse lithography computation.
However, in a DCNN, in order to extract feature vectors with sufficient resolution and near-complete representation capability, the feature vector extraction layers are very complex and lack real physical meaning. Moreover, training such a DCNN requires a large number of well-balanced samples, which makes training difficult and time-consuming.
Disclosure of Invention
In order to achieve the above purpose, the technical solution of the invention is as follows:
an optimal feature vector design method for reverse photoetching solution based on machine learning is used for predicting/calculating the value of the reverse photoetching solution; the method comprises the following steps:
step S1: dividing a design target pattern into N grid cells, wherein the size of the grid cells is determined by an imaging condition;
step S2: calculating a set of feature functions {Ki(x, y)}, i = 1, 2, … N1, according to the imaging conditions; wherein the feature function set {Ki(x, y)} is an optimal set of optical scales for measuring the surrounding environment of any grid cell in the design target pattern; the value of N1 is related to the completeness required in representing the surroundings of a grid cell, and N1 is the number of optical scales Ki(x, y);
step S3: establishing a neural network model, wherein the neural network model comprises an input layer, hidden layers and an output layer; the dimension of the input layer is equal to N1, there are N2 hidden layers in total, and the numbers of neurons of the hidden layers are M1, M2, … MN2, which may be all the same, partially the same or all different;
step S4: training the neural network model with training samples and verification samples, wherein the training samples and the verification samples are selected from part of the design target pattern; the feature function set {Ki(x, y)} is used to calculate a signal set {Si(x, y)} for each grid cell as the neural network model input of that grid cell; the signal set {Si(x, y)}, also called the feature vector, characterizes the surroundings of the grid cell in the target pattern; the value of the rigorous inverse lithography solution at the corresponding position is taken as the target value for neural network training, i.e., for the same partial target pattern a rigorous inverse lithography algorithm is used to generate an optimal mask image as the original training target image for neural network training;
step S5: in training the neural network model, different input dimensions N1, numbers of hidden layers N2 and numbers of neurons per hidden layer M1, M2, … MN2 are used; training is performed with the training samples and verified with the verification samples until a neural network model is found whose combination of input dimension N1, number of hidden layers N2 and numbers of neurons per hidden layer M1, M2, … MN2 is satisfactory; wherein a satisfactory combination means that, for each grid cell in the training set and the verification set, the error between the predicted value of the neural network model and the value of the rigorous inverse lithography solution is less than or equal to a predefined error specification;
step S6: in the application stage, a designed wafer pattern is divided into grid cells, the {Si(x, y)} values are calculated for each grid cell, and the {Si(x, y)} values are input into the trained neural network model to obtain the predicted value of the inverse lithography solution.
Further, the inverse lithography solution is a machine learning based inverse lithography solution, a machine learning based optical proximity correction, or a machine learning based lithography hotspot detection solution.
Further, when the training samples are selected in step S4, the original training target image for neural network training is first divided into first-class regions and second-class regions; the first-class regions are regions in which the information useful for model training does not repeat significantly, and the second-class regions are regions in which the information useful for model training repeats significantly.
Further, the step of dividing the original training target image for neural network training into first-class regions and second-class regions comprises:
finding the maximum intensity value in the original training target image for neural network training;
determining the intensity threshold for selecting seed pixels by multiplying the maximum intensity value found by a coefficient;
creating an auxiliary image of the same size as the original training target image, with its intensity values initially set to zero;
finding the pixel positions in the original training target image whose intensity values are greater than the seed threshold;
setting the intensity values at those pixel positions to a predetermined value in the auxiliary image, so that a number of islands are formed in the auxiliary image;
and iterating image-growing morphological operations on the islands several times to finally form the first-class regions in the original training target image, and taking the remaining regions of the original training target image as the second-class regions.
Further, in the training process of step S5, a batch normalization technique is adopted, and dynamic adaptive sample weighting is used to improve the training quality.
Further, in the step S5, the neural network model is trained by using a stochastic gradient descent method.
Further, the imaging conditions include exposure wavelength, numerical aperture, and mode and setting of exposure illumination.
Further, the N1 is less than or equal to 200.
Further, the N2 is less than or equal to 6.
Further, the method further includes step S7: obtaining an image of the inverse lithography solution from its values; identifying the main pattern regions of the original design in the inverse lithography image; and, in the remaining regions, locating the assist feature regions by a predefined intensity threshold to determine the optimal positions of the full-chip sub-resolution assist features.
According to the above technical solution, the machine-learning-based optimal feature vector design method for inverse lithography of the present invention, being built on physically meaningful feature vectors, eliminates the need for the feature vector extraction layers of a DCNN; only the mapping function layers need to be constructed, which greatly simplifies the required neural network and greatly shortens its training time. Such a neural network architecture can accelerate the generation of full-chip sub-resolution assist features (SRAFs).
Drawings
FIG. 1 is a schematic flow chart of the process from a design pattern to its rigorous inverse lithography solution
FIG. 2 is a flowchart of a method for designing an optimal inverse lithography feature vector based on machine learning according to an embodiment of the present invention
FIG. 3 is a schematic diagram of a design target pattern divided into N1 grid cells according to an embodiment of the present invention
FIG. 4 is a schematic diagram illustrating, in an embodiment of the present invention, the use of the set of measurement values {Si(x, y)} under the set of feature functions {Ki(x, y)} as the input feature vector of neural-network-based inverse lithography
FIG. 5 is a schematic diagram illustrating an original training target image divided into first-class regions and second-class regions according to an embodiment of the present invention
FIG. 6 is a schematic diagram illustrating the specific steps of dividing the original training target image for neural network training into first-class regions and second-class regions according to an embodiment of the present invention
FIG. 7 is a schematic diagram illustrating the training of a neural network model according to an embodiment of the present invention
FIGS. 8-14 are schematic diagrams of inverse lithography solution values predicted by the trained neural network model in an embodiment of the present invention
FIG. 15 is a schematic diagram illustrating the determination of the optimal positions of full-chip sub-resolution assist features (SRAFs) according to an embodiment of the present invention
Detailed Description
The following description of the present invention will be made in detail with reference to the accompanying drawings 1 to 15.
It should be noted that the optimal feature vector design method for a machine-learning-based inverse lithography solution according to the present invention is used for predicting the value of the inverse lithography solution. The optimal feature vector design method can be used in computational lithography for immersion lithography (inverse lithography, optical proximity correction and lithography hotspot detection) and in computational lithography for EUV lithography (inverse lithography, optical proximity correction and lithography hotspot detection).
Referring to FIG. 1, FIG. 1 is an idealized schematic diagram of the process from a design pattern to its rigorous inverse lithography solution. As shown in FIG. 1, the design target pattern is on the left and the ideal inverse lithography solution of the mask is on the right. It is clear to those skilled in the art that all machine-learning-based computational lithography techniques, including machine-learning-based inverse lithography, need to solve the problem of how to characterize the environment near a point: the response at a point (x, y) depends only on the neighbouring environment within its range of influence, and this is essentially a feature vector design problem.
Referring to fig. 2, fig. 2 is a schematic flowchart of the method for designing an optimal feature vector for machine-learning inverse lithography according to the present invention. As shown, the method comprises the following steps:
step S1: dividing a design target pattern into N grid cells, wherein the size of the grid cells is determined by the imaging conditions; the imaging conditions include the exposure wavelength, the numerical aperture, and the mode and settings of the exposure illumination.
It is clear to the skilled person that if the surrounding environment is simply divided into small cells and the geometric weights of the pattern in each cell are used as feature vector elements, the total number of elements is:
total number of elements = ((2 × range of influence) / unit step size)²
If the range of influence is assumed to be 1000 nm per side and the unit step size is 10 nm, the total number of elements is (2000/10)² = 40000. Such a simple feature vector design is inefficient.
In an embodiment of the present invention, the N grid cells are generally square, with the side length determined by the imaging conditions and proportional to λ/(NA(1 + σmax)); where λ is the exposure wavelength, NA is the numerical aperture, and σmax is the maximum angle of incidence of the exposure illumination. For example, in an embodiment of the present invention, the wafer design may be divided into grid cells of 5 nm × 5 nm each (as shown in FIG. 3), the grid cell size being determined by the imaging conditions, i.e., the exposure wavelength and illumination conditions.
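As a rough illustration of the scales involved (not part of the patent text), the short Python sketch below evaluates λ/(NA(1 + σmax)) and reproduces the naive element count from the estimate above; the 193 nm immersion parameter values are illustrative assumptions only.

```python
# Illustrative sketch; wavelength, NA and sigma_max are assumed values,
# not taken from the patent text.
wavelength_nm = 193.0   # ArF immersion exposure wavelength (assumption)
na = 1.35               # numerical aperture (assumption)
sigma_max = 0.9         # maximum illumination angle / sigma (assumption)

coherence_scale_nm = wavelength_nm / (na * (1.0 + sigma_max))
print(f"lambda/(NA*(1+sigma_max)) = {coherence_scale_nm:.1f} nm")   # ~75 nm

# Naive geometric feature vector from the estimate above:
# 1000 nm range of influence per side, 10 nm unit step.
influence_nm, step_nm = 1000.0, 10.0
naive_elements = int((2 * influence_nm / step_nm) ** 2)
print(f"naive feature vector length = {naive_elements}")            # 40000
```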
In addition, designing effective feature vectors while achieving optimal resolution, sufficiency, and efficiency requires consideration of the symmetry characteristics of the problem (imaging system).
Step S2: calculating a set of feature functions {Ki(x, y)}, i = 1, 2, … N1, according to the imaging conditions; wherein the feature function set {Ki(x, y)} is an optimal set of optical scales for measuring the surroundings of any grid cell in the design target pattern; the value of N1 is related to the completeness required in representing the surroundings of a grid cell, and N1 is the number of optical scales Ki(x, y).
Step S3: establishing a neural network model, wherein the neural network model comprises an input layer, hidden layers and an output layer; the dimension of the input layer is equal to N1, there are N2 hidden layers, and the numbers of neurons of the hidden layers are M1, M2, … MN2, which may be all the same, partially the same or all different.
In the embodiment of the invention, the established neural network model is characterized by N1, the N2 hidden layers, and the numbers of neurons M1, M2, … MN2 contained in each layer. That is, in embodiments of the present invention, how many layers the neural network model includes (i.e., N2) and how many neurons each layer includes (i.e., M1, M2, … MN2) may be reset based on the predicted values of the inverse lithography solution that are obtained. Preferably, the value of N1 is less than or equal to 140, the value of N2 is 5, and the number of neurons in the first layer of the neural network model is the same as N1 (as shown in fig. 5).
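As a minimal sketch (not part of the patent text), the fully connected network described above, with an N1-dimensional input, N2 hidden layers and one output per grid cell, could be written in PyTorch as follows; the hidden-layer widths and the choice of ELU are illustrative assumptions consistent with the preferences mentioned later in this description.

```python
# Minimal PyTorch sketch of the fully connected mapping network described
# above (no feature-extraction layers). Hidden-layer widths are assumptions.
import torch.nn as nn

def build_model(n1=140, hidden_sizes=(140, 100, 80, 60, 40)):
    layers, in_features = [], n1
    for width in hidden_sizes:                 # N2 = len(hidden_sizes) hidden layers
        layers += [nn.Linear(in_features, width), nn.ELU()]
        in_features = width
    layers.append(nn.Linear(in_features, 1))   # predicted inverse lithography value
    return nn.Sequential(*layers)

model = build_model()                          # input dimension N1 = 140, N2 = 5
```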
Step S4: training the neural network model with training samples and verification samples, wherein the training samples and the verification samples are selected from part of the design target pattern; the feature function set {Ki(x, y)} is then used to calculate a signal set {Si(x, y)} for each grid cell as the neural network model input of that grid cell; the signal set {Si(x, y)}, also called the feature vector, characterizes the surroundings of the grid cell in the target pattern; the value of the rigorous inverse lithography solution at the corresponding position is taken as the target value for neural network training, i.e., for the same partial target pattern a rigorous inverse lithography algorithm is used to generate an optimal mask image as the original training target image for neural network training.
In particular, computing the set of signal values for each grid cell may begin with an imaging equation.
The Hopkins partially coherent illumination imaging equation (1) is as follows:

$$I(x,y)=\iiiint \gamma(x_2-x_1,\;y_2-y_1)\,P(x-x_1,\;y-y_1)\,P^{*}(x-x_2,\;y-y_2)\,M(x_1,y_1)\,M^{*}(x_2,y_2)\,dx_1\,dy_1\,dx_2\,dy_2 \quad (1)$$

In this formula, γ(x2−x1, y2−y1) is the mutual coherence coefficient between the points (x1, y1) and (x2, y2) in the object plane (i.e., the mask plane), determined by the illumination; P(x−x1, y−y1) is the impulse response function of the optical imaging system, determined by the pupil function of the optical system. More specifically, it is the complex amplitude produced at the point (x, y) in the image plane by a unit-amplitude, zero-phase point source at (x1, y1) in the object plane. M(x1, y1) is the complex transfer function of the mask at the point (x1, y1) in the object plane. A starred variable denotes the conjugate of the original variable; for example, P* is the conjugate of P and M* is the conjugate of M. According to Mercer's theorem, equation (1) can be converted into the simpler form

$$I(x,y)=\sum_{i}\alpha_i\,\bigl|S_i(x,y)\bigr|^{2},\qquad S_i(x,y)=K_i(x,y)\otimes M(x,y) \quad (2)$$

where Ki(x, y) ⊗ M(x, y) denotes the convolution of the feature function Ki(x, y) with the mask transfer function M(x, y), and αi and {Ki} are the eigenvalues and eigenfunctions of the following equations:

$$\iint W(x_1',y_1';\,x_2',y_2')\,K_i(x_2',y_2')\,dx_2'\,dy_2'=\alpha_i\,K_i(x_1',y_1') \quad (3a)$$

$$W(x_1',y_1';\,x_2',y_2')=\gamma(x_2'-x_1',\;y_2'-y_1')\,P(x_1',y_1')\,P^{*}(x_2',y_2') \quad (3b)$$
The significance of the above equation (2) is that it indicates that the partially coherent imaging system can be decomposed into a series of coherent imaging systems, and the coherent imaging systems are independent of each other. The above method has proven to be the best method, commonly referred to as optimal coherence decomposition, although there are other methods that can decompose a partially coherent imaging system into a series of coherent imaging systems.
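To make the decomposition concrete, the toy numpy sketch below discretizes the kernel W of equations (3a)-(3b) on a small one-dimensional grid and takes its leading eigenpairs as (αi, Ki); the pupil and mutual coherence functions used here are simplified stand-ins, not the real two-dimensional optical model.

```python
# Toy 1-D illustration of the optimal coherent decomposition (Eqs. (2)-(3)).
# gamma and P below are simplified stand-ins for the mutual coherence and
# pupil functions; a real implementation works on 2-D pupil/illumination data.
import numpy as np

n = 64
x = np.linspace(-1.0, 1.0, n)

P = (np.abs(x) <= 0.5).astype(complex)             # toy pupil (ideal low-pass)
gamma = np.sinc(4.0 * (x[:, None] - x[None, :]))   # toy mutual coherence gamma(x2-x1)

W = gamma * P[:, None] * np.conj(P)[None, :]       # discretized W(x1, x2), Hermitian
alphas, K = np.linalg.eigh(W)                      # eigenvalues / eigenfunctions

order = np.argsort(alphas)[::-1]                   # sort by decreasing eigenvalue
alphas, K = alphas[order], K[:, order]

N1 = 8                                             # keep the N1 leading kernels K_i
kernels = [K[:, i] for i in range(N1)]
print("leading eigenvalues alpha_i:", np.round(alphas[:N1], 4))
```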
Referring to fig. 4, fig. 4 is a schematic diagram illustrating, in an embodiment of the present invention, the use of the set of measurement values {Si(x, y)} under the feature function set {Ki(x, y)} as the input feature vector of neural-network-based inverse lithography. As shown, the feature function set {Ki(x, y)} is the optimal set of optical scales characterizing the environment in the vicinity of a point under the given imaging conditions, and the set of measurement values {Si(x, y)} under the feature functions {Ki(x, y)}, i.e., the signal value set calculated for each grid cell, can be used as the input feature vector of neural-network-based inverse lithography. During the neural network training phase, the signal value set (S1, S2, … SN1) is the feature vector, which is the input of the forward neural network.
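A minimal sketch of this feature-vector computation is given below: the rasterized target pattern is convolved with each kernel Ki and the results are stacked, so that every grid cell receives an N1-dimensional signal vector {Si}. The array shapes, kernel sizes and the random stand-in data are assumptions for illustration.

```python
# Minimal sketch: build the {S_i(x, y)} feature vectors of all grid cells by
# convolving the rasterized target pattern with each kernel K_i.
import numpy as np
from scipy.signal import fftconvolve

def feature_vectors(target, kernels2d):
    """target: 2-D raster of the design; kernels2d: list of 2-D K_i arrays."""
    signals = [fftconvolve(target, k, mode="same") for k in kernels2d]  # S_i(x, y)
    return np.stack(signals, axis=-1)     # (H, W, N1): one feature vector per cell

# Toy usage with random stand-in data (assumption, for illustration only).
rng = np.random.default_rng(0)
target = (rng.random((256, 256)) > 0.7).astype(float)
kernels2d = [rng.standard_normal((31, 31)) for _ in range(8)]
features = feature_vectors(target, kernels2d)
print(features.shape)                     # (256, 256, 8)
```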
Typically, many training target images contain considerable areas in which the information useful for model training repeats significantly. If these regions are all included in model training, the training samples will be biased, which is detrimental to model accuracy. Therefore, regions in which the useful information does not repeat significantly are selected first, such as the white regions shown in panel B of FIG. 5. From the black regions shown in panel B of FIG. 5, only a small portion is selected for training; for example, less than or equal to 10% of the samples in the black regions may be selected. In this way the model training samples are better balanced.
Specifically, when the training samples are selected in step S4, the original training target image for neural network training (panel A in FIG. 5) is first divided into first-class regions and second-class regions; the first-class regions are regions in which the information useful for model training does not repeat significantly (e.g., the white regions in panel B of FIG. 5), and the second-class regions (e.g., the black regions in panel B of FIG. 5) are regions in which the useful information repeats significantly.
In an embodiment of the present invention, the step of dividing the original training target image for neural network training into first-class regions and second-class regions comprises the following steps (a brief sketch of these steps in code follows the list):
finding the maximum intensity value in the original training target image for neural network training (shown as the left image in FIG. 6);
determining the intensity threshold for selecting seed pixels by multiplying the maximum intensity value found by a coefficient, for example 0.05;
creating an auxiliary image of the same size as the original training target image, with its intensity values initially set to zero;
finding the pixel positions in the original training target image whose intensity values are greater than the seed threshold;
setting the intensity values at those pixel positions to a predetermined value (e.g., 1.0) in the auxiliary image, so that a number of islands are formed in the auxiliary image (as shown in the middle image of FIG. 6);
and iterating image-growing morphological operations on the islands several times to finally form the first-class regions in the original training target image, and taking the remaining regions of the original training target image as the second-class regions (as shown in the right image of FIG. 6).
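The sketch below follows the region-partition steps listed above, using scipy.ndimage for the image-growing (dilation) operation; the seed coefficient, island value and number of dilation iterations are illustrative assumptions.

```python
# Minimal sketch of the region partition above. The seed coefficient (0.05),
# island value (1.0) and number of dilation iterations are assumptions.
import numpy as np
from scipy.ndimage import binary_dilation

def partition_regions(target_image, seed_coeff=0.05, grow_iterations=20):
    seed_threshold = seed_coeff * target_image.max()   # seed intensity threshold
    aux = np.zeros_like(target_image)                  # auxiliary image, all zeros
    aux[target_image > seed_threshold] = 1.0           # islands at seed pixels
    first_class = binary_dilation(aux > 0, iterations=grow_iterations)  # grow islands
    second_class = ~first_class                        # remaining region
    return first_class, second_class
```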
Step S5: in training the neural network model, different input dimensions N1, numbers of hidden layers N2 and numbers of neurons per hidden layer M1, M2, … MN2 are used; training is performed with the training samples and verified with the verification samples until a neural network model is found whose combination of input dimension N1, number of hidden layers N2 and numbers of neurons per hidden layer M1, M2, … MN2 is satisfactory. A satisfactory combination means that, for each grid cell in the training set and the verification set, the error between the predicted value of the neural network model and the value of the rigorous inverse lithography solution is less than or equal to a predefined error specification, for example 10%.
In an embodiment of the invention, the training target is the image of the rigorous inverse lithography solution, and the training is divided into two phases. In the first phase, the {Si(x, y)} of each training sub-batch are selected randomly, and the stochastic gradient descent (SGD) method can be used to train the network. Gradient descent, one of the most commonly used optimization algorithms in machine learning, has three forms: batch gradient descent, stochastic gradient descent, and mini-batch gradient descent. Among them, mini-batch gradient descent is also commonly used to train deep learning models.
Next, in the second phase, the {Si(x, y)} of all training sub-batches and the rigorous inverse lithography images are taken as a single batch, and the neural network model is trained to generate the rigorous inverse lithography images corresponding to these {Si(x, y)}.
Preferably, once a good neural network model has been determined as a combination of input dimension N1, number of hidden layers N2 and numbers of neurons per hidden layer M1, M2, … MN2, a He initialization strategy can be adopted to configure the initial values of the network parameters during training, and an ELU activation function, a ReLU activation function, a tanh activation function or another activation function is used as the activation function of the network. In addition, the optimization algorithm for training the neural network model can be the Adam optimization algorithm, and the objective function to be optimized can be the MSE (mean square error).
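A minimal training-setup sketch consistent with the options named above (He initialization, ELU activations, Adam optimizer, MSE objective) is given below, reusing build_model from the earlier sketch; the learning rate and the use of the 'relu' gain for He initialization are assumptions.

```python
# Minimal PyTorch sketch of the training setup described above. Reuses
# build_model from the earlier sketch; the learning rate is an assumption.
import torch
import torch.nn as nn

def he_init(module):
    if isinstance(module, nn.Linear):
        # He (Kaiming) initialization; 'relu' gain used as an approximation for ELU
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        nn.init.zeros_(module.bias)

model = build_model()
model.apply(he_init)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(features, targets):
    """features: (batch, N1) feature vectors; targets: (batch, 1) rigorous ILT values."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```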
In the training process of the embodiment of the invention, the trained neural network model can be verified, and whether it meets the design requirements is judged from the verification result. If the design requirements are not met, the input dimension N1, the number of hidden layers N2 and the numbers of neurons per hidden layer M1, M2, … MN2 are adjusted to establish a corrected neural network model, and the training step is executed again; if the design requirements are met, the neural network model can be taken as the trained neural network model. Meeting the design requirements means that, for each grid cell in the training set and the verification set, the error between the predicted value of the neural network model and the value of the rigorous inverse lithography solution is less than or equal to a predefined error specification.
In addition, batch normalization can be adopted in the training process, and dynamic adaptive sample weighting can be used to improve the training quality.
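The patent text does not define the dynamic adaptive weighting scheme; purely as an illustration of the idea, the fragment below re-weights each sample's squared error by its relative magnitude so that poorly predicted grid cells contribute more to the next update.

```python
# Illustrative (assumed) dynamic sample weighting; the actual scheme is not
# specified in this description.
import torch

def weighted_mse(pred, target):
    err = (pred - target) ** 2
    weights = (err / (err.mean() + 1e-12)).detach()   # larger weight for larger error
    return (weights * err).mean()
```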
In summary, the present invention makes full use of mature machine learning techniques based on neural network structures, including the deep convolutional neural network (DCNN), to obtain an inverse lithography technology (ILT) solution much faster than the rigorous inverse lithography calculation.
In the embodiment of the invention, once the neural network model is trained, the designed wafer pattern can be divided into grid cells in the application stage, the {Si(x, y)} values can be calculated for each grid cell, and the {Si(x, y)} values can be input into the trained neural network model to obtain the predicted value of the inverse lithography solution.
Step S6: in the application stage, a designed wafer pattern is divided into grid cells, the {Si(x, y)} values are calculated for each grid cell, and the {Si(x, y)} values are input into the trained neural network model to obtain the predicted value of the inverse lithography solution.
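A minimal sketch of this application stage is shown below, reusing feature_vectors and the trained model from the earlier sketches; the array shapes and function names are assumptions for illustration.

```python
# Minimal sketch of step S6: compute {S_i} for every grid cell of the design
# and let the trained network predict the inverse lithography value per cell.
import torch

def predict_ilt(design_raster, kernels2d, model):
    feats = feature_vectors(design_raster, kernels2d)   # (H, W, N1), earlier sketch
    flat = torch.as_tensor(feats, dtype=torch.float32).reshape(-1, feats.shape[-1])
    with torch.no_grad():
        values = model(flat).reshape(feats.shape[0], feats.shape[1])
    return values.numpy()                               # predicted ILT image
```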
Referring to FIGS. 8-14, FIGS. 8-14 are schematic diagrams of inverse lithography solution values predicted by the trained neural network model in the embodiment of the present invention. As shown, these figures illustrate the results of the machine-learning-based inverse lithography of the present invention using physically optimal feature vectors.
It should be noted that the machine-learning-based neural network structure can accelerate the implementation of full-chip inverse lithography. From the inverse lithography solution, the optimal positions of full-chip sub-resolution assist features (SRAFs) can be determined, where the advanced lithography process window is maximized.
Referring to fig. 15, fig. 15 is a schematic diagram of determining the optimal positions of full-chip sub-resolution assist features (SRAFs) according to an embodiment of the present invention. Specifically, the method may further include step S7: obtaining an image of the inverse lithography solution from its values; identifying the main pattern regions of the original design in the inverse lithography image; and, in the remaining regions, locating the assist feature regions by a predefined intensity threshold to determine the optimal positions of the full-chip sub-resolution assist features.
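A minimal sketch of step S7 is given below; the intensity threshold value is an assumption, and the main-pattern mask is taken as given.

```python
# Minimal sketch of step S7 (assumed threshold value): pixels of the predicted
# inverse lithography image above the threshold and outside the main-pattern
# region are kept as candidate SRAF locations.
import numpy as np

def sraf_candidates(ilt_image, main_pattern_mask, threshold=0.3):
    return (ilt_image > threshold) & (~main_pattern_mask)   # boolean SRAF seed map
```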
The above description covers only preferred embodiments of the present invention, and these embodiments are not intended to limit the scope of the invention; therefore, all equivalent structural changes made using the contents of the description and drawings of the present invention shall fall within the scope of the invention.

Claims (10)

1. An optimal feature vector design method for an inverse lithography solution based on machine learning, used for predicting/calculating the value of the inverse lithography solution; characterized in that the method comprises the following steps:
step S1: dividing a design target pattern into N grid cells, wherein the size of the grid cells is determined by an imaging condition;
step S2: computing a set of feature functions { K) from imaging conditionsi(x, y) }, i ═ 1,2, … N1; wherein the set of feature functions { K }i(x, y) is an optimal set of optical scales for measuring the surrounding environment of any one grid cell in the design target pattern; the value of the N1 is related to the requirement of completeness of the surrounding environment of the representation grid unit, and the N1 is the number of the optical scales Ki (x, y);
step S3: establishing a neural network model, wherein the neural network model comprises an input layer, hidden layers and an output layer; the dimension of the input layer is equal to N1, there are N2 hidden layers, and the numbers of neurons of the hidden layers are M1, M2, … MN2, which may be all the same, partially the same or all different;
step S4: training the neural network model with training samples and verification samples, wherein the training samples and the verification samples are randomly selected parts of the design target pattern; the feature function set {Ki(x, y)} is used to calculate a signal set {Si(x, y)} for each grid cell as the neural network model input of that grid cell; the signal set {Si(x, y)}, also called the feature vector, characterizes the surroundings of the grid cell in the target pattern; the value of the rigorous inverse lithography solution at the corresponding position is taken as the target value for neural network training, i.e., for the same partial target pattern a rigorous inverse lithography algorithm is used to generate an optimal mask image as the original training target image for neural network training;
step S5: in training the neural network model, different input dimensions N1, numbers of hidden layers N2 and numbers of neurons per hidden layer M1, M2, … MN2 are used; training is performed with the training samples and verified with the verification samples until a neural network model is found whose combination of input dimension N1, number of hidden layers N2 and numbers of neurons per hidden layer M1, M2, … MN2 is satisfactory; wherein a satisfactory combination means that, for each grid cell in the training set and the verification set, the error between the predicted value of the neural network model and the value of the rigorous inverse lithography solution is less than or equal to a predefined error specification;
step S6: in the application stage, a designed wafer pattern is divided into grid cells, the {Si(x, y)} values are calculated for each grid cell, and the {Si(x, y)} values are input into the trained neural network model to obtain the predicted value of the inverse lithography solution.
2. The method of claim 1, wherein the inverse lithography solution is a machine learning based inverse lithography solution, a machine learning based optical proximity correction, or a machine learning based lithography hotspot detection solution.
3. The method according to claim 1, wherein when the training samples are selected in step S4, the original training target image for neural network training is first divided into first-class regions and second-class regions; the first-class regions are regions in which the information useful for model training does not repeat significantly, and the second-class regions are regions in which the information useful for model training repeats significantly.
4. The method of claim 3, wherein the step of dividing the original training target image for neural network training into first-class regions and second-class regions comprises:
finding the maximum intensity value in the original training target image for neural network training;
determining the intensity threshold for selecting seed pixels by multiplying the maximum intensity value found by a coefficient;
creating an auxiliary image of the same size as the original training target image, with its intensity values initially set to zero;
finding the pixel positions in the original training target image whose intensity values are greater than the seed threshold;
setting the intensity values at those pixel positions to a predetermined value in the auxiliary image, so that a number of islands are formed in the auxiliary image;
and iterating image-growing morphological operations on the islands several times to finally form the first-class regions in the original training target image, and taking the remaining regions of the original training target image as the second-class regions.
5. The method according to claim 1, wherein in the training process of step S5, a batch normalization technique is adopted, and dynamic adaptive sample weighting is adopted to improve training quality.
6. The method according to claim 1, wherein in the step S5, the neural network model is trained by a stochastic gradient descent method.
7. The method of claim 1, wherein the imaging conditions include exposure wavelength, numerical aperture, and mode and setting of exposure illumination.
8. The method of claim 1, wherein N1 is less than or equal to 200.
9. The method of claim 1, wherein N2 is equal to or less than 6.
10. The method according to claim 1, further comprising step S7: obtaining an image of the inverse lithography solution from its values; identifying the main pattern regions of the original design in the inverse lithography image; and, in the remaining regions, locating the assist feature regions by a predefined intensity threshold to determine the optimal positions of the full-chip sub-resolution assist features.
CN202010083839.1A 2020-02-10 2020-02-10 Method for designing optimal feature vector of reverse photoetching based on machine learning Withdrawn CN111310407A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010083839.1A CN111310407A (en) 2020-02-10 2020-02-10 Method for designing optimal feature vector of reverse photoetching based on machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010083839.1A CN111310407A (en) 2020-02-10 2020-02-10 Method for designing optimal feature vector of reverse photoetching based on machine learning

Publications (1)

Publication Number Publication Date
CN111310407A true CN111310407A (en) 2020-06-19

Family

ID=71156363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010083839.1A Withdrawn CN111310407A (en) 2020-02-10 2020-02-10 Method for designing optimal feature vector of reverse photoetching based on machine learning

Country Status (1)

Country Link
CN (1) CN111310407A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985611A (en) * 2020-07-21 2020-11-24 上海集成电路研发中心有限公司 Computing method based on physical characteristic diagram and DCNN machine learning reverse photoetching solution
CN112485976A (en) * 2020-12-11 2021-03-12 上海集成电路装备材料产业创新中心有限公司 Method for determining optical proximity correction photoetching target pattern based on reverse etching model
CN112541545A (en) * 2020-12-11 2021-03-23 上海集成电路装备材料产业创新中心有限公司 Method for predicting CDSEM image after etching process based on machine learning
CN112561873A (en) * 2020-12-11 2021-03-26 上海集成电路装备材料产业创新中心有限公司 CDSEM image virtual measurement method based on machine learning
CN112578646A (en) * 2020-12-11 2021-03-30 上海集成电路装备材料产业创新中心有限公司 Offline photoetching process stability control method based on image
CN113589644A (en) * 2021-07-15 2021-11-02 中国科学院上海光学精密机械研究所 Curve type reverse photoetching method based on sub-resolution auxiliary graph seed insertion
CN113674235A (en) * 2021-08-15 2021-11-19 上海立芯软件科技有限公司 Low-cost photoetching hotspot detection method based on active entropy sampling and model calibration
CN114925605A (en) * 2022-05-16 2022-08-19 北京华大九天科技股份有限公司 Method for selecting training data in integrated circuit design

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080077907A1 (en) * 2006-09-21 2008-03-27 Kulkami Anand P Neural network-based system and methods for performing optical proximity correction
CN107908071A (en) * 2017-11-28 2018-04-13 上海集成电路研发中心有限公司 A kind of optical adjacent correction method based on neural network model
CN109976087A (en) * 2017-12-27 2019-07-05 上海集成电路研发中心有限公司 The generation method of mask pattern model and the optimization method of mask pattern

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080077907A1 (en) * 2006-09-21 2008-03-27 Kulkami Anand P Neural network-based system and methods for performing optical proximity correction
CN107908071A (en) * 2017-11-28 2018-04-13 上海集成电路研发中心有限公司 A kind of optical adjacent correction method based on neural network model
CN109976087A (en) * 2017-12-27 2019-07-05 上海集成电路研发中心有限公司 The generation method of mask pattern model and the optimization method of mask pattern

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XUELONG SHI et al.: "Optimal feature vector design for computational lithography" *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985611A (en) * 2020-07-21 2020-11-24 上海集成电路研发中心有限公司 Computing method based on physical characteristic diagram and DCNN machine learning reverse photoetching solution
WO2022016802A1 (en) * 2020-07-21 2022-01-27 上海集成电路研发中心有限公司 Physical feature map- and dcnn-based computation method for machine learning-based inverse lithography technology solution
CN112485976A (en) * 2020-12-11 2021-03-12 上海集成电路装备材料产业创新中心有限公司 Method for determining optical proximity correction photoetching target pattern based on reverse etching model
CN112541545A (en) * 2020-12-11 2021-03-23 上海集成电路装备材料产业创新中心有限公司 Method for predicting CDSEM image after etching process based on machine learning
CN112561873A (en) * 2020-12-11 2021-03-26 上海集成电路装备材料产业创新中心有限公司 CDSEM image virtual measurement method based on machine learning
CN112578646A (en) * 2020-12-11 2021-03-30 上海集成电路装备材料产业创新中心有限公司 Offline photoetching process stability control method based on image
CN112541545B (en) * 2020-12-11 2022-09-02 上海集成电路装备材料产业创新中心有限公司 Method for predicting CDSEM image after etching process based on machine learning
CN112561873B (en) * 2020-12-11 2022-11-25 上海集成电路装备材料产业创新中心有限公司 CDSEM image virtual measurement method based on machine learning
CN113589644A (en) * 2021-07-15 2021-11-02 中国科学院上海光学精密机械研究所 Curve type reverse photoetching method based on sub-resolution auxiliary graph seed insertion
CN113674235A (en) * 2021-08-15 2021-11-19 上海立芯软件科技有限公司 Low-cost photoetching hotspot detection method based on active entropy sampling and model calibration
CN113674235B (en) * 2021-08-15 2023-10-10 上海立芯软件科技有限公司 Low-cost photoetching hot spot detection method based on active entropy sampling and model calibration
CN114925605A (en) * 2022-05-16 2022-08-19 北京华大九天科技股份有限公司 Method for selecting training data in integrated circuit design

Similar Documents

Publication Publication Date Title
CN111310407A (en) Method for designing optimal feature vector of reverse photoetching based on machine learning
CN107908071B (en) Optical proximity correction method based on neural network model
CN111627799B (en) Method for manufacturing semiconductor element
CN107797391B (en) Optical proximity correction method
US8732625B2 (en) Methods for performing model-based lithography guided layout design
US7882480B2 (en) System and method for model-based sub-resolution assist feature generation
CN108535952B (en) Computational lithography method based on model-driven convolutional neural network
WO2019162346A1 (en) Methods for training machine learning model for computation lithography
CN109976087B (en) Method for generating mask pattern model and method for optimizing mask pattern
US7328424B2 (en) Method for determining a matrix of transmission cross coefficients in an optical proximity correction of mask layouts
CN111581907B (en) Hessian-Free photoetching mask optimization method and device and electronic equipment
TW201007383A (en) Illumination optimization
CN108490735A (en) The method, apparatus and computer-readable medium that full chip mask pattern generates
CN108228981B (en) OPC model generation method based on neural network and experimental pattern prediction method
US20160162623A1 (en) Method, computer readable storage medium and computer system for creating a layout of a photomask
US9779186B2 (en) Methods for performing model-based lithography guided layout design
CN113759657A (en) Optical proximity correction method
CN110426914A (en) A kind of modification method and electronic equipment of Sub-resolution assist features
CN113238460B (en) Deep learning-based optical proximity correction method for extreme ultraviolet
CN110597023B (en) Photoetching process resolution enhancement method and device based on multi-objective optimization
CN112578644B (en) Self-adaptive full-chip light source optimization method and system
Pang et al. Optimization from design rules, source and mask, to full chip with a single computational lithography framework: level-set-methods-based inverse lithography technology (ILT)
Luo et al. SVM based layout retargeting for fast and regularized inverse lithography
US20180196349A1 (en) Lithography Model Calibration Via Genetic Algorithms with Adaptive Deterministic Crowding and Dynamic Niching
CN111985611A (en) Computing method based on physical characteristic diagram and DCNN machine learning reverse photoetching solution

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200619

WW01 Invention patent application withdrawn after publication