CN109976087A - The generation method of mask pattern model and the optimization method of mask pattern - Google Patents
- Publication number
- CN109976087A CN109976087A CN201711444124.9A CN201711444124A CN109976087A CN 109976087 A CN109976087 A CN 109976087A CN 201711444124 A CN201711444124 A CN 201711444124A CN 109976087 A CN109976087 A CN 109976087A
- Authority
- CN
- China
- Prior art keywords
- mask pattern
- imaging signal
- neural network
- model
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03F—PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
- G03F1/00—Originals for photomechanical production of textured or patterned surfaces, e.g., masks, photo-masks, reticles; Mask blanks or pellicles therefor; Containers specially adapted therefor; Preparation thereof
- G03F1/36—Masks having proximity correction features; Preparation thereof, e.g. optical proximity correction [OPC] design processes
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03F—PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
- G03F1/00—Originals for photomechanical production of textured or patterned surfaces, e.g., masks, photo-masks, reticles; Mask blanks or pellicles therefor; Containers specially adapted therefor; Preparation thereof
- G03F1/68—Preparation processes not covered by groups G03F1/20 - G03F1/50
- G03F1/76—Patterning of masks by imaging
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)
Abstract
The present invention provides a method for generating a mask pattern model and a method for optimizing a mask pattern. The method for generating a mask pattern model comprises: S11: calculating an eigenfunction group under set lithography process conditions, the eigenfunction group comprising n eigenfunctions; S12: obtaining a set of imaging signal values at each defined position of each test pattern, the set comprising n imaging signal values, each calculated from the convolution of an eigenfunction in the eigenfunction group with the optical transmission function; S13: using the set of imaging signal values at each defined position of each test pattern as the input of a neural network model; S14: calculating a continuous-tone mask pattern for each test pattern and using it as the training target for the output of the neural network model; S15: training the parameters of the neural network model; S16: using the trained neural network model as the mask pattern model. The method provided by the invention can obtain an optimal mask pattern.
Description
Technical Field
The present invention relates to the field of semiconductor technologies, and in particular, to a method for generating a mask pattern model and a method for optimizing a mask pattern.
Background
For decades, feature sizes on semiconductor chips have been shrinking in order to improve chip performance and to reduce power consumption and chip area. To achieve these ever-decreasing feature sizes, the semiconductor industry has made tremendous advances in both lithography and resolution enhancement. However, as EUV technology has progressed slowly, the role of computational lithography in current immersion-based lithography is becoming increasingly important, and maintaining the lithography process window requires ever more complex computational lithography solutions. Such mask optimization algorithms can be implemented either in the spatial domain, such as level-set-based algorithms, or in the frequency domain, such as source-mask co-optimization algorithms. However, these rigorous mask optimization algorithms cannot be applied to a full chip because the computation time is too long. The lithography industry therefore needs a solution that provides inverse-lithography-quality results, including the placement of scattering-bar assist patterns and OPC of the main design patterns, while also being computationally fast. The present invention proposes such a method based on machine learning; the key of the invention lies in the design of the model input vector.
Disclosure of Invention
In order to overcome the shortcomings of the prior art, the invention provides a method for generating a mask pattern model and a method for optimizing a mask pattern, so as to obtain an optimal mask pattern.
According to an aspect of the present invention, there is provided a method of generating a mask pattern model, comprising: S11: calculating an eigenfunction group under set lithography process conditions, wherein the eigenfunction group comprises n eigenfunctions and n is an integer greater than 0; S12: acquiring a set of imaging signal values at each defined position of each test pattern, the set comprising n imaging signal values, each calculated from the convolution of an eigenfunction in the eigenfunction group with the optical transmission function; S13: using the set of imaging signal values at each defined position of each test pattern as an input to a neural network model; S14: calculating a continuous-tone mask pattern for each test pattern and using it as the training target for the output of the neural network model; S15: training the parameters of the neural network model; S16: taking the trained neural network model as the mask pattern model.
Optionally, n is less than or equal to 10.
Optionally, the neural network model comprises at least one hidden layer.
Optionally, the neural network model comprises an input layer, a hidden layer and an output layer; the input layer comprises N+1 input units, the hidden layer comprises N hidden units, and the output layer comprises one output unit,
wherein the input of the first input unit is 1, the inputs of the 2nd to (N+1)-th input units are the respective imaging signal values in the set of imaging signal values, the value of the output unit is the output of the neural network model, and N is an integer greater than 0.
Optionally, the value of the output unit of the neural network model is calculated according to the following formula:

output = f( Σi ωi hi ), i = 1, …, N

where ωi is the synaptic connection weight between the output unit and the i-th hidden unit, f is the transfer function, and the value hi of the i-th hidden unit is calculated according to the following formula:

hi = f( Σj wji Sj ), the sum running over all N+1 input units

where wji is the synaptic connection weight between the j-th input unit and the i-th hidden unit, and Sj is the value of the j-th input unit.
Optionally, each imaging signal value in the set of imaging signal values is calculated according to the following formula:

Si(x, y) = Φi(x, y) ⊗ M(x, y)

where Si(x, y) is the i-th imaging signal value at the defined position (x, y), Φi(x, y) is the i-th eigenfunction of the eigenfunction group, M(x, y) is the optical transmission function of the photomask, ⊗ denotes convolution, and i is an integer from 1 to n.
Optionally, step S15 comprises: training the parameters of the neural network model using a back-propagation algorithm.
Optionally, the continuous tone mask pattern is generated according to an inverse lithography algorithm.
According to still another aspect of the present invention, there is also provided a method of optimizing a mask pattern, comprising: S21: obtaining a set of imaging signal values at each defined position of an initial mask pattern from the convolution of each eigenfunction in the eigenfunction group with the optical transfer function, evaluated at that position, the set comprising n imaging signal values, n being an integer greater than 0; S22: using the set of imaging signal values at each defined position of the initial mask pattern as the input to a mask pattern model generated by the method described above; S23: generating an optimized mask pattern from the output of the mask pattern model.
Optionally, the initial mask pattern comprises: an auxiliary pattern or a main pattern and an auxiliary pattern.
Compared with the prior art, the advantage of the invention is that an imaging signal set, obtained from the eigenfunction group under set lithography process conditions, is used as an "optical scale" to measure the environment around a point; this imaging signal set serves as the input for machine learning, and a neural network model realizes a non-iterative OPC solution. Furthermore, the continuous-tone mask pattern obtained by an inverse lithography solution is used as the training target of the neural network output, so that the model attains the quality of the inverse lithography solution while being much faster.
Drawings
The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 shows a flow chart of a method of generating a mask pattern model according to an embodiment of the invention.
Fig. 2 and 3 show schematic views of a geometric scale of the mask pattern.
Fig. 4 shows a schematic diagram of eigenfunctions of an independent coherent imaging system.
FIG. 5 shows a schematic diagram of eigenfunctions defining a position for an embodiment of the invention.
FIG. 6 shows a schematic diagram of a neural network model of an embodiment of the present invention.
Fig. 7 shows a flowchart of an optimization method of a mask pattern according to an embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In order to provide the optimal position for each segment in a non-iterative manner, and to make the provided OPC solution optimal from the perspective of the lithography process window, the inventors first considered using a geometric description of the layout as the input feature vector of the neural network model to obtain the optimal displacements. This geometric description can be explained with reference to figs. 2 and 3. As shown in fig. 2, the reticle plane is divided into a number of cells (e.g., a 5 nm × 5 nm cell 201), and the value {1 or 0} in each cell 201 depends on how much of the layout pattern 202 falls within that cell. In other embodiments, the input feature vectors may be generated by sampling along equidistant concentric circles 203, as shown in fig. 3.
However, these feature vectors based on purely geometric descriptions are inefficient to process: 1) the number of elements in such an input feature vector is very large (around 200 or more), which leads to low computational efficiency; 2) with a simple geometric description as the input feature vector, the model has a large number of free parameters, so more training data are needed to reach a given accuracy; 3) with a simple geometric description as the input feature vector, the input-to-output mapping function becomes more nonlinear and non-monotonic, which requires a more complex neural network model to capture the essence of the mapping and also reduces the model's ability to generalize.
To solve these problems of the geometric scale, the invention adopts imaging signals under an optical scale as the input for model training.
In particular, starting from the theory of optical imaging under partially coherent illumination, the intensity of the image of the mask under given imaging conditions can be expressed as

I(x, y) = Σi αi |Φi(x, y) ⊗ M(x, y)|²  (1)

where M(x, y) is the light-transmission function of the mask, and {αi} and {Φi} are the set of eigenvalues and the set of eigenfunctions of the following equation.
∫∫ W(x1', y1'; x2', y2') Φi(x2', y2') dx2' dy2' = αi Φi(x1', y1')  (2)
where W(x1', y1'; x2', y2') is calculated according to the following formula:

W(x1', y1'; x2', y2') = γ(x2' − x1', y2' − y1') K(x1', y1') K*(x2', y2')  (3)

where γ(x2 − x1, y2 − y1) is the cross-correlation factor between two points (x1, y1) and (x2, y2) in the object plane (i.e., the mask plane), determined by the illumination conditions; K(x1', y1') is the impulse response function of the optical imaging system, and K* is the complex conjugate of K.
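Equations (2) and (3) define an eigenproblem that, in practice, is solved numerically: sample W on a grid and take its leading eigenpairs. The following is a minimal NumPy sketch of this idea; the Gaussian cross-correlation γ and the sinc impulse response K are illustrative stand-ins, not the patent's actual imaging conditions.

```python
import numpy as np

# 1-D analogue of the mutual-coherence kernel of equation (3):
# W(x1, x2) = gamma(x2 - x1) * K(x1) * conj(K(x2)).
n = 64
x = np.linspace(-1.0, 1.0, n)
gamma = np.exp(-((x[None, :] - x[:, None]) ** 2) / 0.1)  # stand-in cross-correlation factor
K = np.sinc(4 * x)                                        # stand-in impulse response (real)
W = gamma * np.outer(K, K)                                # real K, so K* = K

# Hermitian eigendecomposition (equation (2)): eigenvalues alpha_i,
# eigenfunctions Phi_i sampled on the grid.
alpha, Phi = np.linalg.eigh(W)
order = np.argsort(alpha)[::-1]          # sort by decreasing eigenvalue
alpha, Phi = alpha[order], Phi[:, order]

# The contribution of each coherent system falls off rapidly (cf. fig. 4),
# so only the first few eigenfunctions need be kept as "optical scales".
print(alpha[:6] / alpha[0])
```

Since W is the Schur product of two positive-semidefinite kernels, its eigenvalues are non-negative, and the rapid decay of the leading ratios mirrors the behavior sketched in fig. 4.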
Equation (1) above shows that a partially coherent imaging system can be decomposed into a series of coherent imaging systems that are independent of each other, i.e., their optical phase relationships change randomly over time. Although there are many ways to perform such a decomposition, the method above has been proven optimal and is commonly referred to as the optimal coherent decomposition. Its greatest advantage is that the contribution of each coherent imaging system, characterized by its eigenfunction, decreases rapidly during the imaging process, as can be seen from the corresponding eigenvalues (see fig. 4).
We now define the eigenfunctions {Φi} of these coherent imaging systems as a set of "optical scales" that measure the environment around a point; the environment of the point is then described by its set of light-field signals {S1, S2, …, Sn}:

Si(x, y) = Φi(x, y) ⊗ M(x, y)

In the present invention, we use the imaging signal set {S1, S2, …, Sn} as the input vector of our learning machine; the number n of elements needed is estimated to be 10 or fewer.
Of course, the number of input vector elements depends on the accuracy required to describe the environment near a point. The decisive advantage of using an "optical scale" is three-fold: 1) the descriptive accuracy of the environment near a point can be controlled by the number of optical scales, and under specified optical imaging conditions about 10 optical scales already achieve very high accuracy; 2) with the signal set measured by the optical scales, the mapping function from the lithography target layer to the mask layer becomes monotonic and smooth; 3) the symmetry of the imaging system is automatically encoded into the input vector design, making the design of the learning machine simpler.
Based on the above principle, the present invention provides a method for generating a mask pattern model, as shown in fig. 1. Fig. 1 shows a total of 6 steps:
step S11: and calculating an intrinsic function group under the set photoetching process condition, wherein the intrinsic function group comprises n intrinsic functions, and n is an integer greater than 0.
Optionally, the eigenfunction group consists of the eigenfunctions of the first n coherent imaging systems obtained by decomposing the partially coherent imaging system. Referring to fig. 4, which shows a schematic diagram of the eigenfunctions of the independent coherent imaging systems, the importance of each eigenfunction (i.e., imaging kernel) decreases rapidly with its index. As shown in fig. 5, the eigenfunction group at defined position 301 of the pattern 300 includes 4 eigenfunctions 302 (i.e., the first to fourth eigenfunctions).
Step S12: acquiring a set of imaging signal values at each defined position of each test pattern, the set comprising n imaging signal values, each calculated from the convolution of an eigenfunction in the eigenfunction group with the optical transfer function.
In some embodiments, each imaging signal value of the set at each defined position of a test pattern is calculated according to the formula:

Si(x, y) = Φi(x, y) ⊗ M(x, y)

where Si(x, y) is the i-th imaging signal value at the defined position (x, y), Φi(x, y) is the i-th eigenfunction of the eigenfunction group, M(x, y) is the optical transmission function of the photomask, and i is an integer from 1 to n.
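Step S12 therefore amounts to convolving the mask transmission function with each retained eigenfunction and reading the result at the defined positions. A hedged sketch follows; the toy binary mask and the Gaussian kernels are stand-ins for the real M and Φi, and the function name `imaging_signals` is an assumption.

```python
import numpy as np
from scipy.signal import fftconvolve

def imaging_signals(mask, kernels, positions):
    """Return the n-element signal set {S_1..S_n} at each defined position.

    mask      -- 2-D array M(x, y), the mask transmission function
    kernels   -- list of 2-D eigenfunction arrays Phi_i
    positions -- list of (row, col) defined positions
    """
    convs = [fftconvolve(mask, phi, mode="same") for phi in kernels]  # Phi_i (x) M
    # one n-vector of signal values per defined position
    return [np.array([c[r, col] for c in convs]) for (r, col) in positions]

# Toy example: 32x32 binary mask, 3 stand-in Gaussian kernels
mask = np.zeros((32, 32))
mask[12:20, 12:20] = 1.0
yy, xx = np.mgrid[-8:9, -8:9]
kernels = [np.exp(-(xx**2 + yy**2) / (2.0 * s**2)) for s in (1.0, 2.0, 3.0)]
vecs = imaging_signals(mask, kernels, [(16, 16), (10, 16)])
print(vecs[0].shape)   # one 3-element signal vector per position
```

Each returned vector is exactly the n-element input that step S13 feeds to the neural network model.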
Step S13: the set of imaging signal values at each defined position of each test pattern is used as an input to a neural network model.
Step S14: a continuous tone mask pattern is calculated for each test pattern and used as a training target for the output of the neural network model.
Alternatively, the continuous tone mask pattern is generated using an inverse lithography algorithm.
Step S15: training parameters of the neural network model.
In various embodiments of the present invention, the neural network model includes at least one hidden layer.
In the following, one hidden layer is taken as an example. As shown in fig. 6, the neural network model 400 includes an input layer, a hidden layer, and an output layer; the input layer comprises N+1 input units 401, the hidden layer comprises N hidden units 402, and the output layer comprises one output unit 403.
The value of the first input unit S0 is 1 (providing the bias for the hidden units); the inputs of the 2nd to (N+1)-th input units are the respective imaging signal values of the set; the value of the output unit is the output of the neural network model; and N is an integer greater than 0. Optionally, N is an integer from 3 to 5.
In the neural network model 400 shown in fig. 6, the value of the output unit 403 is calculated according to the following formula:

output = f( Σi ωi hi ), i = 1, …, N

where ωi is the synaptic connection weight between the output unit and the i-th hidden unit, f is the transfer function, and the value hi of the i-th hidden unit is calculated according to the following formula:

hi = f( Σj wji Sj ), the sum running over all N+1 input units

where wji is the synaptic connection weight between the j-th input unit and the i-th hidden unit, and Sj is the value of the j-th input unit.
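The hidden-unit and output-unit computations above describe a standard one-hidden-layer perceptron. A minimal forward-pass sketch follows; the sigmoid choice for the transfer function f and the random toy values are assumptions, since the patent does not specify them.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(S, w, omega):
    """S: input values S_0..S_N with S_0 = 1 (bias unit);
    w[j, i]: synaptic weight w_ji from input unit j to hidden unit i;
    omega[i]: synaptic weight from hidden unit i to the output unit."""
    h = sigmoid(S @ w)           # h_i = f(sum_j w_ji * S_j)
    return sigmoid(h @ omega)    # output = f(sum_i omega_i * h_i)

rng = np.random.default_rng(0)
N = 4                                             # hidden units (3-5 per the patent)
S = np.concatenate(([1.0], rng.normal(size=N)))   # bias + n = 4 imaging signals
w = rng.normal(size=(N + 1, N))
omega = rng.normal(size=N)
y = forward(S, w, omega)
print(y)
```

With this structure, one forward pass per defined position yields the continuous-tone mask value, which is what makes the OPC solution non-iterative.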
In a preferred embodiment of the invention, the parameters of the neural network model may be trained using a back-propagation algorithm.
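For such a small network, back-propagation reduces to two chain-rule steps. The sketch below is one possible update rule under assumed choices (squared-error loss, sigmoid transfer function, a single toy training target); the patent only names back-propagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(S, target, w, omega, lr=0.5):
    """One back-propagation update for the one-hidden-layer network."""
    h = sigmoid(S @ w)                       # hidden values h_i
    y = sigmoid(h @ omega)                   # network output
    delta_out = (y - target) * y * (1 - y)   # error signal at the output unit
    grad_omega = delta_out * h               # dL/d(omega_i)
    delta_hid = delta_out * omega * h * (1 - h)  # error propagated to hidden units
    grad_w = np.outer(S, delta_hid)          # dL/d(w_ji)
    return w - lr * grad_w, omega - lr * grad_omega

rng = np.random.default_rng(1)
N = 4
S = np.concatenate(([1.0], rng.normal(size=N)))  # bias + imaging signals
w, omega = rng.normal(size=(N + 1, N)), rng.normal(size=N)
for _ in range(500):                             # fit one toy target value
    w, omega = train_step(S, 0.8, w, omega)
print(sigmoid(sigmoid(S @ w) @ omega))           # converges toward the 0.8 target
```

In practice the update would of course loop over the full set of (signal vector, continuous-tone value) pairs extracted from the test patterns, not a single sample.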
Step S16: and taking the trained neural network model as the mask pattern model.
In various embodiments of the present invention, the test patterns may be pre-designed patterns or patterns selected from a product chip. The test-pattern set may contain several hundred training patterns and several hundred patterns for cross-validation.
After the model is trained, a generalization test is required. If the model error is large on some patterns, those patterns are included in the next round of training. This iterative process may be repeated several times until the generalization ability of the trained model is satisfactory. Once the model is trained and its generalization capability is verified, it can be used for full-chip pattern correction.
After the mask pattern model is generated through the above steps, the present invention further provides a method for optimizing a mask pattern, as shown in fig. 7. Fig. 7 shows a total of 3 steps:
step S21: obtaining a set of imaging signal values at each defined position of the initial mask pattern from convolution values of each eigenfunction of the set of eigenfunctions at each defined position of the initial mask pattern and the optical transfer function, the set of imaging signal values comprising n imaging signal values, n being an integer larger than 0.
Optionally, the initial mask pattern may be an auxiliary pattern. In other embodiments, the initial mask pattern may be a main pattern together with an auxiliary pattern.
Step S22: the set of imaging signal values at each defined position of the initial mask pattern is taken as an input to a mask pattern model, which is generated by the method as described above.
Step S23: an optimized mask pattern is generated from the output of the mask pattern model.
Compared with the prior art, the advantage of the invention is that an imaging signal set, obtained from the eigenfunction group under set lithography process conditions, is used as an "optical scale" to measure the environment around a point; this imaging signal set serves as the input for machine learning, and a neural network model realizes a non-iterative OPC solution. Furthermore, the continuous-tone mask pattern obtained by an inverse lithography solution is used as the training target of the neural network output, so that the model attains the quality of the inverse lithography solution while being much faster.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims (10)
1. A method of generating a mask pattern model, comprising:
s11: calculating an eigenfunction group under set lithography process conditions, wherein the eigenfunction group comprises n eigenfunctions and n is an integer greater than 0;
s12: acquiring a set of imaging signal values at each defined position of each test pattern, the set of imaging signal values comprising n imaging signal values, each of the imaging signal values being calculated based on a convolution value of an eigenfunction and an optical transfer function in the set of eigenfunctions;
s13: using the set of imaging signal values at each defined position of each test pattern as an input to a neural network model;
s14: calculating continuous tone mask patterns of each test pattern, and taking the continuous tone mask patterns as a training target of the output of the neural network model;
s15: training parameters of the neural network model;
s16: and taking the trained neural network model as the mask pattern model.
2. The method of generating a mask pattern model according to claim 1, wherein n is 10 or less.
3. The method of generating a mask pattern model of claim 1, wherein the neural network model comprises at least one hidden layer.
4. The method of generating a mask pattern model according to claim 1, wherein the neural network model comprises an input layer, a hidden layer and an output layer, the input layer comprising N+1 input units, the hidden layer comprising N hidden units, and the output layer comprising one output unit,
wherein the input of the first input unit is 1, the inputs of the 2nd to (N+1)-th input units are the respective imaging signal values in the set of imaging signal values, the value of the output unit is the output of the neural network model, and N is an integer greater than 0.
5. The method for generating a mask pattern model according to claim 4, wherein the value of the output unit of the neural network model is calculated according to the following formula:

output = f( Σi ωi hi ), i = 1, …, N

where ωi is the synaptic connection weight between the output unit and the i-th hidden unit, f is the transfer function, and the value hi of the i-th hidden unit is calculated according to the following formula:

hi = f( Σj wji Sj ), the sum running over all N+1 input units

where wji is the synaptic connection weight between the j-th input unit and the i-th hidden unit, and Sj is the value of the j-th input unit.
6. A method of generating a mask pattern model according to any one of claims 1 to 5, wherein each imaging signal value of the set of imaging signal values is calculated according to the formula:

Si(x, y) = Φi(x, y) ⊗ M(x, y)

where Si(x, y) is the i-th imaging signal value at the defined position (x, y), Φi(x, y) is the i-th eigenfunction of the eigenfunction group, M(x, y) is the optical transmission function of the photomask, and i is an integer from 1 to n.
7. The method of generating a mask pattern model according to any one of claims 1 to 5, wherein the step S15 comprises:
training the parameters of the neural network model using a back-propagation algorithm.
8. The method of generating a mask pattern model according to any one of claims 1 to 5, wherein the continuous tone mask pattern is generated according to an inverse lithography algorithm.
9. A method for optimizing a mask pattern, comprising:
s21: obtaining a set of imaging signal values at each defined position of the initial mask pattern from convolution values of each eigenfunction of the set of eigenfunctions at each defined position of the initial mask pattern and the optical transfer function, the set of imaging signal values comprising n imaging signal values, n being an integer greater than 0;
s22: taking the set of imaging signal values at each defined position of an initial mask pattern as input to a mask pattern model generated by the method of any one of claims 1 to 8;
s23: an optimized mask pattern is generated from the output of the mask pattern model.
10. The method of optimizing a mask pattern of claim 9, wherein the initial mask pattern comprises: an auxiliary pattern or a main pattern and an auxiliary pattern.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711444124.9A CN109976087B (en) | 2017-12-27 | 2017-12-27 | Method for generating mask pattern model and method for optimizing mask pattern |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109976087A true CN109976087A (en) | 2019-07-05 |
CN109976087B CN109976087B (en) | 2022-08-23 |
Family
ID=67072041
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711444124.9A Active CN109976087B (en) | 2017-12-27 | 2017-12-27 | Method for generating mask pattern model and method for optimizing mask pattern |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109976087B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111310407A (en) * | 2020-02-10 | 2020-06-19 | 上海集成电路研发中心有限公司 | Method for designing optimal feature vector of reverse photoetching based on machine learning |
CN111538213A (en) * | 2020-04-27 | 2020-08-14 | 湖南大学 | Electron beam proximity effect correction method based on neural network |
CN111985611A (en) * | 2020-07-21 | 2020-11-24 | 上海集成电路研发中心有限公司 | Computing method based on physical characteristic diagram and DCNN machine learning reverse photoetching solution |
CN112485976A (en) * | 2020-12-11 | 2021-03-12 | 上海集成电路装备材料产业创新中心有限公司 | Method for determining optical proximity correction photoetching target pattern based on reverse etching model |
CN112578646A (en) * | 2020-12-11 | 2021-03-30 | 上海集成电路装备材料产业创新中心有限公司 | Offline photoetching process stability control method based on image |
CN114200768A (en) * | 2021-12-23 | 2022-03-18 | 中国科学院光电技术研究所 | Super-resolution lithography reverse optical proximity effect correction method based on level set algorithm |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1661479A (en) * | 2003-11-05 | 2005-08-31 | ASML MaskTools Ltd. | Eigen decomposition based OPC model |
JP2008122929A (en) * | 2006-10-20 | 2008-05-29 | Toshiba Corp | Method for creating simulation model |
CN104865788A (en) * | 2015-06-07 | 2015-08-26 | Shanghai Huahong Grace Semiconductor Manufacturing Corp. | Lithography layout OPC (optical proximity correction) method |
TW201734825A (en) * | 2015-12-31 | 2017-10-01 | KLA-Tencor Corp. | Accelerated training of a machine learning based model for semiconductor applications |
CN106796668A (en) * | 2016-03-16 | 2017-05-31 | Hong Kong Applied Science and Technology Research Institute Co., Ltd. | Method and system for bit-depth reduction in artificial neural networks |
CN106777829A (en) * | 2017-02-06 | 2017-05-31 | Shenzhen Jingyuan Information Technology Co., Ltd. | Optimization method for integrated circuit mask design and computer-readable storage medium |
Non-Patent Citations (1)
Title |
---|
Ju Kongliang et al., "Research on distortion correction of mask images in a mask-projection rapid prototyping system based on BP neural network", Machinery Manufacturing (《机械制造》) *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111310407A (en) * | 2020-02-10 | 2020-06-19 | Shanghai IC R&D Center Co., Ltd. | Method for designing optimal feature vector for inverse lithography based on machine learning |
CN111538213A (en) * | 2020-04-27 | 2020-08-14 | Hunan University | Electron beam proximity effect correction method based on neural network |
CN111538213B (en) * | 2020-04-27 | 2021-04-27 | Hunan University | Electron beam proximity effect correction method based on neural network |
CN111985611A (en) * | 2020-07-21 | 2020-11-24 | Shanghai IC R&D Center Co., Ltd. | Computing method for machine learning inverse lithography solution based on physical feature maps and DCNN |
WO2022016802A1 (en) * | 2020-07-21 | 2022-01-27 | 上海集成电路研发中心有限公司 | Physical feature map- and dcnn-based computation method for machine learning-based inverse lithography technology solution |
CN112485976A (en) * | 2020-12-11 | 2021-03-12 | Shanghai IC Equipment & Material Industry Innovation Center Co., Ltd. | Method for determining lithography target pattern for optical proximity correction based on inverse etch model |
CN112578646A (en) * | 2020-12-11 | 2021-03-30 | Shanghai IC Equipment & Material Industry Innovation Center Co., Ltd. | Image-based offline lithography process stability control method |
CN114200768A (en) * | 2021-12-23 | 2022-03-18 | Institute of Optics and Electronics, Chinese Academy of Sciences | Inverse optical proximity correction method for super-resolution lithography based on level set algorithm |
CN114200768B (en) * | 2021-12-23 | 2023-05-26 | Institute of Optics and Electronics, Chinese Academy of Sciences | Inverse optical proximity correction method for super-resolution lithography based on level set algorithm |
US12085846B2 (en) | 2021-12-23 | 2024-09-10 | The Institute Of Optics And Electronics, The Chinese Academy Of Sciences | Method for inverse optical proximity correction of super-resolution lithography based on level set algorithm |
Also Published As
Publication number | Publication date |
---|---|
CN109976087B (en) | 2022-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109976087B (en) | Method for generating mask pattern model and method for optimizing mask pattern | |
CN107797391B (en) | Optical proximity correction method | |
CN107908071B (en) | Optical proximity correction method based on neural network model | |
CN111310407A (en) | Method for designing optimal feature vector for inverse lithography based on machine learning | |
US7882480B2 (en) | System and method for model-based sub-resolution assist feature generation | |
CN102692814B (en) | Source-mask hybrid optimization method based on Abbe vector imaging model | |
CN108535952B (en) | Computational lithography method based on model-driven convolutional neural network | |
US11061318B2 (en) | Lithography model calibration | |
US20100203430A1 (en) | Methods for performing model-based lithography guided layout design | |
CN113759657B (en) | Optical proximity correction method | |
WO2022016802A1 (en) | Physical feature map- and DCNN-based computation method for machine learning-based inverse lithography technology solution | |
CN104133348B (en) | Adaptive source optimization method for lithography systems | |
WO2020154978A1 (en) | Hessian-free lithography mask optimization method and apparatus, and electronic device | |
US9779186B2 (en) | Methods for performing model-based lithography guided layout design | |
CN110426914A (en) | Correction method for sub-resolution assist features and electronic device | |
CN117313640B (en) | Training method, device, equipment and storage medium for lithography mask generation model | |
CN108228981A (en) | Neural-network-based OPC model generation method and test pattern prediction method | |
CN102998896B (en) | Basic module-based mask main body graph optimization method | |
CN107479335A (en) | Fast optical imaging calculation method based on decomposition of the source cross-correlation function | |
Shi et al. | Fast and accurate machine learning inverse lithography using physics based feature maps and specially designed DCNN | |
CN114326329A (en) | Photoetching mask optimization method based on residual error network | |
WO2020154979A1 (en) | Photolithography mask optimization method and apparatus for pattern and image joint optimization, and electronic device | |
Lv et al. | Mask-filtering-based inverse lithography | |
CN116720479B (en) | Mask generation model training method, mask generation method and device and storage medium | |
NL2034667B1 (en) | Computer-implemented method based on fast mask near-field calculation by using cycle-consistent adversarial network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||