CN108665060A - Integrated neural network for computational lithography - Google Patents
Integrated neural network for computational lithography
- Publication number
- CN108665060A (application CN201810600924.3A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- lithography
- vector
- computation
- conjugate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03F—PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
- G03F7/00—Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
- G03F7/70—Microphotolithographic exposure; Apparatus therefor
- G03F7/70483—Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
- G03F7/70491—Information management, e.g. software; Active and passive control, e.g. details of controlling exposure processes or exposure tool monitoring processes
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)
Abstract
The invention discloses an integrated neural network for computational lithography, comprising a conjugate neural network and a feedforward neural network, the output of the conjugate neural network being connected to the input of the feedforward neural network. The conjugate neural network is used to extract the feature vectors for computational lithography and to input the extracted feature vectors into the feedforward neural network, the conjugate neural network extracting the feature vectors according to Y_j = Σ_i W_ij · X_i and Z_j = Y_j · Y_j^*, where Z_j is the extracted feature vector, W_ij are the parameters of the conjugate neural network, X_i is the neighboring environment of the i-th point on the mask, and Y_j^* is the complex conjugate of Y_j. The invention provides an integrated neural network for computational lithography in which a conjugate convolutional neural network structure for extracting feature vectors is combined with a feedforward neural network, forming an integrated neural network that can be used to learn any kind of computational lithography.
Description
Technical field
The present invention relates to the field of integrated circuits, and in particular to an integrated neural network for computational lithography.
Background art
In the constant pursuit of higher semiconductor chip performance, lower power consumption and smaller chip area, the minimum feature pitch and minimum feature size of semiconductor chips must shrink accordingly. To support this never-ending trend, the semiconductor industry has had to develop lithography tools, such as scanners, with ever shorter exposure wavelengths and ever higher numerical apertures (NA) in order to achieve high optical resolution. The semiconductor industry advanced successfully along this road up to the 14 nm technology node; however, the industry has found that continuing to push hardware (scanner) technology along this road has become extremely difficult, as can be seen from the slow development of EUV technology.
As a remedy, the development and application of computational lithography has enabled the semiconductor industry to keep moving forward. Computational lithography includes source-mask co-optimization, advanced OPC models, model-based assist pattern generation, inverse lithography technology, and so on. Most computational lithography techniques are very expensive to compute when applied to a full chip.
To alleviate this problem, the industry has proposed deep convolutional neural network (DCNN) architectures to learn the generation of inverse lithography patterns, in particular assist patterns. DCNN architectures are powerful and general learning machines; however, they need large amounts of training data, because the DCNN is required to automatically extract feature-detection kernels from the input training data. Such DCNN architectures should therefore be reserved for cases in which no prior knowledge is available for the design of the network architecture itself or of the input feature vectors. As a result, using a deep convolutional neural network architecture for computational lithography learning is a complicated and time-consuming procedure that cannot guarantee the efficiency required by current production.
For computational lithography, everything starts from optical imaging, and the structure of the optical imaging equations is well established. Modern production therefore urgently needs an integrated neural network that can learn computational lithography efficiently.
Summary of the invention
The technical problem to be solved by the invention is to provide an integrated neural network for computational lithography, in which a conjugate convolutional neural network structure for extracting feature vectors is combined with a feedforward neural network to form an integrated neural network that can be used to learn any kind of computational lithography.
To achieve the above goal, the present invention adopts the following technical solution: an integrated neural network for computational lithography, the integrated neural network comprising a conjugate neural network and a feedforward neural network, the output of the conjugate neural network being connected to the input of the feedforward neural network; the conjugate neural network is used to extract the feature vectors for computational lithography and to input the extracted feature vectors into the feedforward neural network, wherein the conjugate neural network extracts the feature vectors for computational lithography according to Y_j = Σ_i W_ij · X_i and Z_j = Y_j · Y_j^*, where Z_j is the extracted feature vector, W_ij are the parameters of the conjugate neural network, X_i is the neighboring environment of the i-th point on the mask, and Y_j^* is the complex conjugate of Y_j.
Further, the feature vector for computational lithography is a vector related to the light intensity.
Further, X_i describes the neighboring environment of the i-th point on the mask either as a vector in real space or as a vector based on spatial frequencies.
Further, when X_i is a vector in real space, the number of real-space vector elements to be input when extracting the feature vectors for computational lithography is (a/b)^2, where a is the range of optical interaction in the lithography machine, this range being divided into sub-units of equal size, and b is the size of a sub-unit within the optical interaction range of the lithography machine.
Further, the size of a sub-unit within the optical interaction range is b = λ/(2·NA·(1+σ_max)), where NA is the numerical aperture of the lithography machine, σ_max is a parameter related to the maximum angle of the exposure illumination light on the mask, and λ is the exposure wavelength of the lithography machine.
Further, the input value of a real-space vector element is Value_cell = t_bg · Area_cell + (t_f − t_bg) · Area_geo_in_cell, where t_bg is the complex transmission value of the mask background, t_f is the complex transmission value of the patterns on the mask, Area_cell is the area of the sub-unit, and Area_geo_in_cell is the area of the mask pattern inside the sub-unit.
Further, when X_i is a vector based on spatial frequencies, the number M^2 of spatial-frequency vector elements to be input when extracting the feature vectors for computational lithography is calculated from the following formula: sqrt(n_x^2 + n_y^2) · λ/P ≤ NA·(1+σ_max);
where M^2 is the total number of all (n_x, n_y) pairs satisfying the above formula, n_x and n_y are the diffraction orders of the imaging system, NA is the numerical aperture of the lithography machine, σ_max is a parameter related to the maximum angle of the exposure illumination light on the mask, λ is the exposure wavelength of the lithography machine, and P = 2 × (radius of the optical interaction range in the optical image + width of the safety band), the safety band being arranged around the periphery of the optical interaction range to guarantee the calculation accuracy of the mask pattern within the optical interaction range.
Further, the input values of the vector based on spatial frequencies are the values of the Fourier transform of the mask pattern, taken over the optical interaction range plus the safety band, sampled on the grid points with spacing λ/P.
Further, the feedforward neural network structure has 3 or 4 hidden layers.
The beneficial effects of the present invention are as follows: the conjugate neural network first extracts the feature vectors for computational lithography, and the extracted feature vectors are then input into the feedforward neural network, so that the output of the conjugate neural network and the input of the feedforward neural network are combined to form an integrated neural network. The integrated neural network formed by the present invention can be used for any kind of computational lithography learning, and the learning process is simple and fast, which greatly improves the efficiency of computational lithography and reduces its complexity.
Description of the drawings
Figure 1 is a structure diagram of the conjugate neural network of the present invention.
Figure 2 illustrates the construction of the vector describing the neighboring environment of a point on the mask according to the present invention.
Figure 3 is a schematic diagram of the sampling of the input vector in spatial-frequency space when the input vector is based on spatial frequencies.
Figure 4 is a structural schematic diagram of the integrated neural network formed by the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The integrated neural network for computational lithography provided by the invention comprises a conjugate neural network and a feedforward neural network, the output of the conjugate neural network being connected to the input of the feedforward neural network. The conjugate neural network is used to extract the feature vectors for computational lithography and to input the extracted feature vectors into the feedforward neural network, where the conjugate neural network extracts the feature vectors according to Y_j = Σ_i W_ij · X_i and Z_j = Y_j · Y_j^*, where Z_j is the extracted feature vector, W_ij are the parameters of the conjugate neural network, X_i is the neighboring environment of the i-th point on the mask, and Y_j^* is the complex conjugate of Y_j.
It is well known that any computational lithography task starts from the intensity distribution function. It can therefore be assumed that the feature vectors for machine-learning-based computational lithography should be vectors related to the light intensity. In order to extract such intensity-related vectors from the input geometry, a conjugate neural network is used. The structure of the conjugate neural network provided by the invention is shown in Figure 1: the input vector X_i is the neighboring environment of the i-th point on the mask, and the output values are the feature vectors Z_m for computational lithography.
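As an illustration of the feature-extraction step just described, the following Python/NumPy sketch implements the conjugate layer Z_j = |Σ_i W_ij · X_i|^2 for a single mask point; the vector sizes, the random complex weights and the function name conjugate_layer are illustrative assumptions and are not specified by the patent.

```python
import numpy as np

def conjugate_layer(X, W):
    """Conjugate feature extraction: Y_j = sum_i W_ij * X_i, then Z_j = Y_j * conj(Y_j).

    X : complex vector of length N describing the neighboring environment of one mask point
    W : complex parameter matrix of shape (M, N), one row per extracted feature
    Returns Z, a real, non-negative (intensity-like) feature vector of length M.
    """
    Y = W @ X                      # linear combination Y_j = sum_i W_ij * X_i
    Z = (Y * np.conj(Y)).real      # Z_j = Y_j * Y_j^* = |Y_j|^2
    return Z

# Illustrative usage with random complex inputs and weights (assumed sizes)
rng = np.random.default_rng(0)
N, M = 1024, 64
X = rng.normal(size=N) + 1j * rng.normal(size=N)
W = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
Z = conjugate_layer(X, W)
print(Z.shape, bool(Z.min() >= 0))   # (64,) True -- intensity-like features are non-negative
```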
In the present invention, X_i describes the neighboring environment of the i-th point on the mask either as a vector in real space or as a vector based on spatial frequencies. The two cases are introduced separately below:
If it is decided to describe the neighboring environment of a point using real-space quantities, the optical interaction range must first be estimated, and the region within the optical interaction range is then divided into sub-units, as shown in Figure 2. The interaction range of the light depends on the imaging conditions, in particular on the degree of spatial coherence under the given illumination condition. In this case, the number of real-space vector elements to be input when extracting the feature vectors for computational lithography is (a/b)^2, where a is the range of optical interaction in the lithography machine, this range being divided into sub-units of equal size, and b is the size of a sub-unit within the optical interaction range of the lithography machine. The size of a sub-unit within the optical interaction range is b = λ/(2·NA·(1+σ_max)), where NA is the numerical aperture of the lithography machine, σ_max is a parameter related to the maximum angle of the exposure illumination light on the mask, and λ is the exposure wavelength of the lithography machine. The input value of a real-space vector element is Value_cell = t_bg · Area_cell + (t_f − t_bg) · Area_geo_in_cell, where t_bg is the complex transmission value of the mask background, t_f is the complex transmission value of the pattern, Area_cell is the area of the sub-unit, and Area_geo_in_cell is the area of the mask pattern inside the sub-unit.
This is illustrated below using an immersion lithography machine: the numerical aperture of the machine is NA = 1.35, the parameter related to the maximum angle of the exposure illumination light on the mask is σ_max = 0.95, the exposure wavelength of the machine is λ = 193 nm, and the optical interaction range of the machine is a = 1500 nm. Since a lithography scanner is an imaging system, the maximum spatial frequency of the light field that can pass through the scanner is NA·(1+σ_max), so the size of a sub-unit within the optical interaction range is b = λ/(2·NA·(1+σ_max)) ≈ 36.7 nm. Accordingly, the number of real-space vector elements to be input is (a/b)^2, and the value of each sub-unit is determined by the following equation: Value_cell = t_bg · Area_cell + (t_f − t_bg) · Area_geo_in_cell, where t_bg is the complex transmission value of the mask background, t_f is the complex transmission value of the pattern, Area_cell is the area of the sub-unit, and Area_geo_in_cell is the area of the mask pattern inside the sub-unit.
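For the immersion example above (NA = 1.35, σ_max = 0.95, λ = 193 nm, a = 1500 nm), a minimal Python sketch of the real-space input construction is given below; the Nyquist sub-unit size b = λ/(2·NA·(1+σ_max)), the (a/b)^2 element count and the attenuated-PSM transmission values are assumptions used only for illustration.

```python
import numpy as np

# Immersion-scanner parameters from the example in the description
NA, sigma_max, wavelength, a = 1.35, 0.95, 193.0, 1500.0       # wavelength and range in nm

# Sub-unit size from Nyquist sampling of the maximum spatial frequency NA*(1+sigma_max)
b = wavelength / (2.0 * NA * (1.0 + sigma_max))
cells_per_side = int(np.ceil(a / b))
n_inputs = cells_per_side ** 2                                  # (a/b)^2 real-space vector elements
print(f"sub-unit size b ~ {b:.1f} nm, real-space input elements ~ {n_inputs}")

# Value of one sub-unit: Value_cell = t_bg*Area_cell + (t_f - t_bg)*Area_geo_in_cell
def cell_value(t_bg, t_f, area_cell, area_geo_in_cell):
    return t_bg * area_cell + (t_f - t_bg) * area_geo_in_cell

t_bg = 1.0 + 0.0j                       # complex transmission of the mask background (assumed clear field)
t_f = 0.06 * np.exp(1j * np.pi)         # complex transmission of the pattern (assumed attenuated PSM)
area_cell = b * b                       # nm^2
area_geo_in_cell = 0.4 * area_cell      # made-up case: the pattern covers 40% of this sub-unit
print(cell_value(t_bg, t_f, area_cell, area_geo_in_cell))
```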
If it is decided to use an input vector based on spatial frequencies, the number of elements of the input vector can be estimated in spatial-frequency space as shown in Figure 3. In the present invention, the maximum spatial frequency that can pass through the imaging system is NA·(1+σ_max), as indicated by the radius of the circle in Figure 3. In this case, the number M^2 of spatial-frequency vector elements to be input when extracting the feature vectors for computational lithography is calculated from the following formula: sqrt(n_x^2 + n_y^2) · λ/P ≤ NA·(1+σ_max);
where M^2 is the total number of all (n_x, n_y) pairs satisfying the above formula, n_x and n_y are the diffraction orders of the imaging system, NA is the numerical aperture of the lithography machine, σ_max is a parameter related to the maximum angle of the exposure illumination light on the mask, λ is the exposure wavelength of the lithography machine, and P = 2 × (radius of the optical interaction range in the optical image + width of the safety band), the safety band being arranged around the periphery of the optical interaction range to guarantee the calculation accuracy of the mask pattern within the optical interaction range.
The input values of the vector based on spatial frequencies are the values of the Fourier transform of the pattern, taken over the optical interaction range plus a certain safety band, sampled on the grid points with spacing λ/P; the complex transmission information of the mask must be taken into account when performing the Fourier transform.
This is illustrated below using an immersion lithography machine: the numerical aperture of the machine is NA = 1.35, the parameter related to the maximum angle of the exposure illumination light on the mask is σ_max = 0.95, the exposure wavelength of the machine is λ = 193 nm, and the optical interaction range of the machine is a = 1500 nm. In the present invention, the maximum spatial frequency that can pass through the imaging system is NA·(1+σ_max), as indicated by the radius of the circle in Figure 3. The diffraction orders (n_x, n_y) of the imaging system must satisfy the following equation: sqrt(n_x^2 + n_y^2) · λ/P ≤ NA·(1+σ_max). For NA = 1.35, λ = 193 nm and σ_max = 0.95, the required total number of input vector elements is about 5250. The input values of the vector based on spatial frequencies are the values of the Fourier transform of the pattern, taken over the optical interaction range plus a certain safety band, sampled on the grid points with spacing λ/P. It is worth noting that when spatial-frequency information is used to describe the neighboring environment, the complex transmission information of the mask must be taken into account when performing the Fourier transform.
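The element count quoted above can be reproduced with the short Python sketch below, which counts the diffraction orders (n_x, n_y) satisfying sqrt(n_x^2 + n_y^2) · λ/P ≤ NA·(1+σ_max); the value P = 3000 nm (twice the sum of the interaction radius and the safety band width) is an assumption chosen only to arrive at roughly 5250 elements, since the width of the safety band is not stated here.

```python
import math

NA, sigma_max, wavelength = 1.35, 0.95, 193.0   # immersion example from the description
P = 3000.0                                      # nm, P = 2*(interaction radius + safety band), assumed value

f_max = NA * (1.0 + sigma_max)                  # maximum normalized spatial frequency passed by the scanner
n_max = int(f_max * P / wavelength)             # largest diffraction order along one axis

count = 0
for nx in range(-n_max, n_max + 1):
    for ny in range(-n_max, n_max + 1):
        if math.hypot(nx, ny) * wavelength / P <= f_max:
            count += 1                          # (nx, ny) lies inside the NA*(1+sigma_max) circle

print(count)                                    # roughly 5200-5300 spatial-frequency input elements
```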
After the conjugate convolutional neural network has been used for feature extraction, a feedforward (back-propagation) neural network structure with 3 or 4 hidden layers is used to approximate any nonlinear function of interest to the user for computational lithography; the resulting integrated neural network, shown in Figure 4, can be used for any kind of computational lithography learning. Once the integrated neural network of the present invention has been formed, it can be used to perform computational lithography learning.
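A compact sketch of how the conjugate feature-extraction layer and a feedforward network with three hidden layers could be combined into the integrated network of Figure 4 is given below (forward pass only); the layer widths, the ReLU activation, the random initialization and the single scalar output are illustrative assumptions rather than values specified by the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

class IntegratedLithoNet:
    """Conjugate feature extractor followed by a 3-hidden-layer feedforward network (forward pass only)."""

    def __init__(self, n_in, n_feat, hidden=(128, 64, 32), n_out=1):
        # Complex weights of the conjugate layer (to be learned in a full implementation)
        self.W = rng.normal(size=(n_feat, n_in)) + 1j * rng.normal(size=(n_feat, n_in))
        # Real weights and biases of the feedforward layers
        dims = (n_feat,) + hidden + (n_out,)
        self.layers = [(rng.normal(scale=0.1, size=(dims[k + 1], dims[k])), np.zeros(dims[k + 1]))
                       for k in range(len(dims) - 1)]

    def forward(self, X):
        Y = self.W @ X                        # Y_j = sum_i W_ij * X_i
        h = (Y * np.conj(Y)).real             # Z_j = Y_j * Y_j^*, intensity-like features
        for k, (Wk, bk) in enumerate(self.layers):
            h = Wk @ h + bk
            if k < len(self.layers) - 1:
                h = np.maximum(h, 0.0)        # ReLU on hidden layers (assumed nonlinearity)
        return h                              # approximates the lithography quantity of interest

net = IntegratedLithoNet(n_in=1024, n_feat=64)
X = rng.normal(size=1024) + 1j * rng.normal(size=1024)   # neighborhood vector of one mask point
print(net.forward(X).shape)                   # (1,)
```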
The above is merely a preferred embodiment of the present invention; the embodiment is not intended to limit the scope of patent protection of the present invention, and any equivalent structural changes made using the contents of the description and accompanying drawings of the present invention shall likewise fall within the scope of protection of the appended claims of the present invention.
Claims (9)
1. An integrated neural network for computational lithography, characterized in that the integrated neural network comprises a conjugate neural network and a feedforward neural network, and the output of the conjugate neural network is connected to the input of the feedforward neural network; the conjugate neural network is used to extract the feature vectors for computational lithography and to input the extracted feature vectors into the feedforward neural network, wherein the conjugate neural network extracts the feature vectors for computational lithography according to Y_j = Σ_i W_ij · X_i and Z_j = Y_j · Y_j^*, where Z_j is the extracted feature vector, W_ij are the parameters of the conjugate neural network, X_i is the neighboring environment of the i-th point on the mask, and Y_j^* is the complex conjugate of Y_j.
2. The integrated neural network for computational lithography according to claim 1, characterized in that the feature vector for computational lithography is a vector related to the light intensity.
3. The integrated neural network for computational lithography according to claim 1, characterized in that X_i describes the neighboring environment of the i-th point on the mask either as a vector in real space or as a vector based on spatial frequencies.
4. The integrated neural network for computational lithography according to claim 3, characterized in that when X_i is a vector in real space, the number of real-space vector elements to be input when extracting the feature vectors for computational lithography is (a/b)^2, where a is the range of optical interaction in the lithography machine, this range being divided into sub-units of equal size, and b is the size of a sub-unit within the optical interaction range of the lithography machine.
5. The integrated neural network for computational lithography according to claim 4, characterized in that the size of a sub-unit within the optical interaction range is b = λ/(2·NA·(1+σ_max)), where NA is the numerical aperture of the lithography machine, σ_max is a parameter related to the maximum angle of the exposure illumination light on the mask, and λ is the exposure wavelength of the lithography machine.
6. The integrated neural network for computational lithography according to claim 4, characterized in that the input value of a real-space vector element is Value_cell = t_bg · Area_cell + (t_f − t_bg) · Area_geo_in_cell, where t_bg is the complex transmission value of the mask background, t_f is the complex transmission value of the pattern on the mask, Area_cell is the area of the sub-unit, and Area_geo_in_cell is the area of the mask pattern inside the sub-unit.
7. The integrated neural network for computational lithography according to claim 3, characterized in that when X_i is a vector based on spatial frequencies, the number M^2 of spatial-frequency vector elements to be input when extracting the feature vectors for computational lithography is calculated from the following formula: sqrt(n_x^2 + n_y^2) · λ/P ≤ NA·(1+σ_max);
where M^2 is the total number of all (n_x, n_y) pairs satisfying the above formula, n_x and n_y are the diffraction orders of the imaging system, NA is the numerical aperture of the lithography machine, σ_max is a parameter related to the maximum angle of the exposure illumination light on the mask, λ is the exposure wavelength of the lithography machine, and P = 2 × (radius of the optical interaction range in the optical image + width of the safety band), the safety band being arranged around the periphery of the optical interaction range to guarantee the calculation accuracy of the mask pattern within the optical interaction range.
8. The integrated neural network for computational lithography according to claim 7, characterized in that the input values of the vector based on spatial frequencies are the values of the Fourier transform of the mask pattern, taken over the safety band and the optical interaction range, sampled on the grid points with spacing λ/P.
9. The integrated neural network for computational lithography according to claim 1, characterized in that the feedforward neural network structure has 3 or 4 hidden layers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810600924.3A CN108665060B (en) | 2018-06-12 | 2018-06-12 | Integrated neural network for computational lithography |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810600924.3A CN108665060B (en) | 2018-06-12 | 2018-06-12 | Integrated neural network for computational lithography |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108665060A true CN108665060A (en) | 2018-10-16 |
CN108665060B CN108665060B (en) | 2022-04-01 |
Family
ID=63774679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810600924.3A Active CN108665060B (en) | 2018-06-12 | 2018-06-12 | Integrated neural network for computational lithography |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108665060B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109143796A (en) * | 2018-10-26 | 2019-01-04 | 中国科学院微电子研究所 | Method and device for determining photoetching light source and method and device for training model |
CN110187609A (en) * | 2019-06-05 | 2019-08-30 | 北京理工大学 | A kind of deep learning method calculating photoetching |
CN111985611A (en) * | 2020-07-21 | 2020-11-24 | 上海集成电路研发中心有限公司 | Computing method based on physical characteristic diagram and DCNN machine learning reverse photoetching solution |
US20220309645A1 (en) * | 2019-06-13 | 2022-09-29 | Asml Netherlands B.V. | Metrology Method and Method for Training a Data Structure for Use in Metrology |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080077907A1 (en) * | 2006-09-21 | 2008-03-27 | Kulkami Anand P | Neural network-based system and methods for performing optical proximity correction |
JP2008268265A (en) * | 2007-04-16 | 2008-11-06 | Fujitsu Microelectronics Ltd | Verification method and verification device |
CN104865788A (en) * | 2015-06-07 | 2015-08-26 | 上海华虹宏力半导体制造有限公司 | Photoetching layout OPC (Optical Proximity Correction) method |
CN107797391A (en) * | 2017-11-03 | 2018-03-13 | 上海集成电路研发中心有限公司 | Optical adjacent correction method |
CN107908071A (en) * | 2017-11-28 | 2018-04-13 | 上海集成电路研发中心有限公司 | A kind of optical adjacent correction method based on neural network model |
-
2018
- 2018-06-12 CN CN201810600924.3A patent/CN108665060B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080077907A1 (en) * | 2006-09-21 | 2008-03-27 | Kulkami Anand P | Neural network-based system and methods for performing optical proximity correction |
JP2008268265A (en) * | 2007-04-16 | 2008-11-06 | Fujitsu Microelectronics Ltd | Verification method and verification device |
CN104865788A (en) * | 2015-06-07 | 2015-08-26 | 上海华虹宏力半导体制造有限公司 | Photoetching layout OPC (Optical Proximity Correction) method |
CN107797391A (en) * | 2017-11-03 | 2018-03-13 | 上海集成电路研发中心有限公司 | Optical adjacent correction method |
CN107908071A (en) * | 2017-11-28 | 2018-04-13 | 上海集成电路研发中心有限公司 | A kind of optical adjacent correction method based on neural network model |
Non-Patent Citations (2)
Title |
---|
KYOUNG-AH JEON et al.: "Process Proximity Correction by Neural Networks", Japanese Journal of Applied Physics *
JIANG SHUYU et al.: "Dynamic scheduling method for wafer lithography process flow based on Kohonen neural network", Journal of Shanghai Jiao Tong University *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109143796A (en) * | 2018-10-26 | 2019-01-04 | 中国科学院微电子研究所 | Method and device for determining photoetching light source and method and device for training model |
CN109143796B (en) * | 2018-10-26 | 2021-02-12 | 中国科学院微电子研究所 | Method and device for determining photoetching light source and method and device for training model |
CN110187609A (en) * | 2019-06-05 | 2019-08-30 | 北京理工大学 | A kind of deep learning method calculating photoetching |
US20220309645A1 (en) * | 2019-06-13 | 2022-09-29 | Asml Netherlands B.V. | Metrology Method and Method for Training a Data Structure for Use in Metrology |
CN111985611A (en) * | 2020-07-21 | 2020-11-24 | 上海集成电路研发中心有限公司 | Computing method based on physical characteristic diagram and DCNN machine learning reverse photoetching solution |
WO2022016802A1 (en) * | 2020-07-21 | 2022-01-27 | 上海集成电路研发中心有限公司 | Physical feature map- and dcnn-based computation method for machine learning-based inverse lithography technology solution |
Also Published As
Publication number | Publication date |
---|---|
CN108665060B (en) | 2022-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108665060A (en) | Integrated neural network for computational lithography | |
Ma et al. | Pixel-based OPC optimization based on conjugate gradients | |
CN102692814B (en) | Light source-mask mixed optimizing method based on Abbe vector imaging model | |
CN104865788B (en) | A kind of lithography layout OPC method | |
Ma et al. | Lithographic source optimization based on adaptive projection compressive sensing | |
CN107908071A (en) | A kind of optical adjacent correction method based on neural network model | |
CN110426914A (en) | A kind of modification method and electronic equipment of Sub-resolution assist features | |
CN110187609A (en) | A kind of deep learning method calculating photoetching | |
Ma et al. | Optimization of lithography source illumination arrays using diffraction subspaces | |
Ma et al. | Fast lithography aerial image calculation method based on machine learning | |
CN103365071B (en) | The optical adjacent correction method of mask plate | |
Ma et al. | Fast inverse lithography based on dual-channel model-driven deep learning | |
Cecil et al. | Establishing fast, practical, full-chip ILT flows using machine learning | |
Ma et al. | Fast pixel-based optical proximity correction based on nonparametric kernel regression | |
CN107621757A (en) | A kind of intersection transmission function quick decomposition method based on indicator function | |
CN108614390B (en) | Light source mask optimization method adopting compressed sensing technology | |
CN116720479B (en) | Mask generation model training method, mask generation method and device and storage medium | |
Farzipour et al. | Traffic sign recognition using local vision transformer | |
Gong et al. | Fast aerial image simulations for partially coherent systems by transmission cross coefficient decomposition with analytical kernels | |
NL2034667B1 (en) | Computer-implemented method based on fast mask near-field calculation by using cycle-consistent adversarial network | |
CN115712227A (en) | Optical proximity effect correction method and device based on evanescent wave field strong attenuation characteristic modulation mode | |
CN109683447A (en) | A kind of determination method and device of source mask collaboration optimization primary light source | |
Nagai et al. | Acquisition of characteristic TTSP graph patterns by genetic programming | |
Pan et al. | Efficient informatics-based source and mask optimization for optical lithography | |
US9857676B2 (en) | Method and program product for designing source and mask for lithography |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |