CN108665060A - An integrated neural network for computational lithography - Google Patents
An integrated neural network for computational lithography
- Publication number
- CN108665060A CN108665060A CN201810600924.3A CN201810600924A CN108665060A CN 108665060 A CN108665060 A CN 108665060A CN 201810600924 A CN201810600924 A CN 201810600924A CN 108665060 A CN108665060 A CN 108665060A
- Authority
- CN
- China
- Prior art keywords
- neural network
- lithography
- vector
- conjugate
- integrated
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03F—PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
- G03F7/00—Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
- G03F7/70—Microphotolithographic exposure; Apparatus therefor
- G03F7/70483—Information management; Active and passive control; Testing; Wafer monitoring, e.g. pattern monitoring
- G03F7/70491—Information management, e.g. software; Active and passive control, e.g. details of controlling exposure processes or exposure tool monitoring processes
Abstract
The invention discloses an integrated neural network for computational lithography comprising a conjugate neural network and a feedforward neural network, with the output of the conjugate neural network connected to the input of the feedforward neural network. The conjugate neural network extracts the feature vectors for computational lithography and feeds the extracted feature vectors into the feedforward neural network. The extraction rule of the conjugate neural network is: Yj = ∑i Wij Xi, Zj = Yj·Yj*, where Zj is the extracted feature vector, Wij are the parameters of the conjugate neural network, Xi describes the neighborhood of the i-th point on the mask, and Yj* is the complex conjugate of Yj. By combining the conjugate convolutional neural network structure used for feature extraction with a feedforward neural network, the integrated neural network provided by the invention can be used for any type of computational lithography learning.
Description
Technical Field
The invention relates to the field of integrated circuits, and in particular to an integrated neural network for computational lithography.
Background
In the continued pursuit of higher performance, lower power consumption, and smaller chip area, the minimum feature pitch and minimum feature size of semiconductor chips must shrink accordingly. To support this relentless trend, the semiconductor industry has had to develop lithography tools (scanners) with ever shorter exposure wavelengths and ever higher numerical apertures (NA) to achieve high optical resolution. The industry successfully followed this path up to the 14 nm technology node; beyond it, however, pushing hardware (scanner) technology further along this path has proven very difficult, as the slow development of EUV technology shows.
As a remedy, the development and application of computational lithography has allowed the semiconductor industry to keep moving forward. Computational lithography techniques include source-mask co-optimization, advanced OPC models, model-based assist feature generation, and inverse lithography technology. Most of these techniques are computationally expensive when applied at full-chip scale.
To alleviate this problem, deep convolutional neural network (DCNN) architectures have been proposed to learn inverse lithography, in particular assist feature generation. A DCNN is a powerful and general learning machine, but it requires a large amount of training data, because its feature-detection kernels must be extracted automatically from that data. Such an architecture is appropriate only when no prior knowledge is available about the network architecture itself or the design of the input feature vectors. As a result, computational lithography learning with a DCNN is complex and time-consuming and cannot guarantee the efficiency required by current production.
For computational lithography, everything starts with optical imaging, and the structure of the optical imaging equations is well established. Modern production therefore urgently needs an integrated neural network for efficient computational lithography learning.
Summary of the Invention
The technical problem addressed by the invention is to provide an integrated neural network for computational lithography that combines a conjugate convolutional neural network structure used for feature extraction with a feedforward neural network, forming an integrated neural network usable for any type of computational lithography learning.
To achieve the above object, the invention adopts the following technical solution: an integrated neural network for computational lithography, comprising a conjugate neural network and a feedforward neural network, the output of the conjugate neural network being connected to the input of the feedforward neural network. The conjugate neural network extracts the feature vectors for computational lithography and feeds them into the feedforward neural network; the extraction rule is Yj = ∑i Wij Xi, Zj = Yj·Yj*, where Zj is the extracted feature vector, Wij are the parameters of the conjugate neural network, Xi is the neighborhood of the i-th point on the mask, and Yj* is the complex conjugate of Yj.
Further, the feature vector for computational lithography is a vector related to the light intensity.
Further, Xi is either a real-space vector or a spatial-frequency-based vector describing the neighborhood of the i-th point on the mask.
Further, when Xi is a real-space vector, the number of real-space input elements required for feature extraction is N = (a/b)², where a is the optical interaction range of the lithography tool (the region within this range being divided into sub-cells of equal size) and b is the size of a sub-cell within the optical interaction range.
Further, the sub-cell size within the optical interaction range is b = λ/(2·NA·(1 + σmax)), where NA is the numerical aperture of the lithography tool, σmax is a parameter related to the maximum angle at which the exposure illumination strikes the mask, and λ is the exposure wavelength of the lithography tool.
Further, the input value of each real-space element is Valuecell = tbg·Areacell + (tf − tbg)·Areageo_in_cell, where tbg is the complex transmission of the mask background, tf is the complex transmission of the pattern on the mask, Areacell is the area of the sub-cell, and Areageo_in_cell is the area of the mask pattern inside the sub-cell.
Further, when Xi is a spatial-frequency-based vector, the number M² of input elements required for feature extraction is the total number of diffraction-order pairs (nx, ny) satisfying √(nx² + ny²)·λ/P ≤ NA·(1 + σmax), where nx and ny are the diffraction orders of the imaging system, NA is the numerical aperture of the lithography tool, σmax is a parameter related to the maximum angle at which the exposure illumination strikes the mask, λ is the exposure wavelength, and P = 2 × (radius of the optical interaction range in optical imaging + width of the guard band); the guard band surrounds the optical interaction range and guarantees the accuracy of the computation for the mask pattern within the optical interaction range.
Further, the input values of the spatial-frequency-based vector are the values of the Fourier transform of the mask pattern within the optical interaction range plus the guard band, sampled on grid points spaced λ/P apart.
Further, the feedforward neural network has 3 or 4 hidden layers.
The beneficial effects of the invention are as follows: the conjugate neural network first extracts the feature vectors for computational lithography, and the extracted feature vectors are then fed into the feedforward neural network, joining the output of the conjugate network to the input of the feedforward network to form an integrated neural network. The integrated neural network so formed can be used for any type of computational lithography learning; the learning procedure is simple and fast, which greatly improves the efficiency of computational lithography and reduces its complexity.
Description of the Drawings
Figure 1 is a structural diagram of the conjugate neural network of the invention.
Figure 2 shows the construction of the vector describing the neighborhood of a point on the mask.
Figure 3 is a schematic of the sampling of the input vector in spatial-frequency space when the input vector is spatial-frequency based.
Figure 4 is a structural diagram of the integrated neural network formed by the invention.
Detailed Description
To make the object, technical solution, and advantages of the invention clearer, specific embodiments of the invention are described in further detail below with reference to the drawings.
The invention provides an integrated neural network for computational lithography comprising a conjugate neural network and a feedforward neural network, with the output of the conjugate neural network connected to the input of the feedforward neural network. The conjugate neural network extracts the feature vectors for computational lithography and feeds them into the feedforward neural network; the extraction rule is Yj = ∑i Wij Xi, Zj = Yj·Yj*, where Zj is the extracted feature vector, Wij are the parameters of the conjugate neural network, Xi is the neighborhood of the i-th point on the mask, and Yj* is the complex conjugate of Yj.
As is well known, all computational lithography starts from the light-intensity distribution function. It can therefore be assumed that the feature vectors for machine-learning-based computational lithography should be intensity-related vectors. To extract such intensity-related vectors from the input geometry, a conjugate neural network is used. Its structure is shown in Figure 1: the input vector Xi is the neighborhood of the i-th point on the mask, and the output is the feature vector Zm used for computational lithography.
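The extraction rule above — a complex linear map followed by multiplication with its own conjugate, giving an intensity-like real value — can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patent's implementation; the shapes and values chosen are arbitrary assumptions:

```python
import numpy as np

def conjugate_features(X, W):
    """Conjugate feature extraction: Y_j = sum_i W[j, i] * X[i], Z_j = Y_j * conj(Y_j).

    X : complex neighborhood vector of one mask point, shape (N,)
    W : complex conjugate-network parameters, shape (M, N)
    Returns the real, non-negative feature vector Z of shape (M,).
    """
    Y = W @ X                    # Y_j = sum_i W_ij X_i
    Z = (Y * np.conj(Y)).real    # Z_j = |Y_j|^2: intensity-like, hence real
    return Z

# Tiny example with arbitrary values:
X = np.array([1 + 1j, 2 + 0j])
W = np.array([[1 + 0j, 0 + 1j]])   # one output feature
Z = conjugate_features(X, W)       # Y = (1+1j) + 2j = 1+3j, so Z = 1 + 9 = 10
```

Because Zj is a squared modulus, it is automatically real and non-negative, which matches the assumption that the extracted features should be intensity-related.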
In the invention, Xi is either a real-space vector or a spatial-frequency-based vector describing the neighborhood of the i-th point on the mask. The two cases are described in turn below:
If real-space quantities are used to describe the neighborhood of a point, the optical interaction range must first be estimated, and the region within it divided into sub-cells, as shown in Figure 2. The optical interaction range depends on the imaging conditions, in particular on the degree of spatial coherence under the given illumination. The number of real-space input elements required for feature extraction is then N = (a/b)², where a is the optical interaction range of the lithography tool (the region within this range being divided into sub-cells of equal size) and b is the sub-cell size within that range, given by b = λ/(2·NA·(1 + σmax)), with NA the numerical aperture, σmax a parameter related to the maximum angle at which the exposure illumination strikes the mask, and λ the exposure wavelength. The input value of each element is Valuecell = tbg·Areacell + (tf − tbg)·Areageo_in_cell, where tbg is the complex transmission of the mask background, tf is the complex transmission of the pattern, Areacell is the area of the sub-cell, and Areageo_in_cell is the area of the mask pattern inside the sub-cell.
This is illustrated with an immersion lithography tool with numerical aperture NA = 1.35, σmax = 0.95, exposure wavelength λ = 193 nm, and optical interaction range a = 1500 nm. Since a lithography scanner is an imaging system, the maximum spatial frequency of the optical field that can pass through it is NA·(1 + σmax), so the sub-cell size is b = λ/(2·NA·(1 + σmax)) ≈ 36.7 nm, and the number of real-space input elements required is (a/b)² ≈ 1674. The value of each sub-cell is determined by Valuecell = tbg·Areacell + (tf − tbg)·Areageo_in_cell, where tbg is the complex transmission of the mask background, tf is the complex transmission of the pattern, Areacell is the area of the sub-cell, and Areageo_in_cell is the area of the mask pattern inside the sub-cell.
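The real-space bookkeeping of this embodiment can be checked with a short script. The helper names are ours; only the formulas b = λ/(2·NA·(1 + σmax)), N = (a/b)², the cell-value equation, and the NA = 1.35, σmax = 0.95, λ = 193 nm, a = 1500 nm figures come from the text:

```python
def subunit_size(wavelength, na, sigma_max):
    # b = lambda / (2 * NA * (1 + sigma_max)): the Nyquist spacing for the
    # highest spatial frequency NA * (1 + sigma_max) passed by the scanner.
    return wavelength / (2.0 * na * (1.0 + sigma_max))

def num_real_space_elements(interaction_range, cell_size):
    # N = (a / b)^2 equal-size sub-cells tile the square interaction region.
    return (interaction_range / cell_size) ** 2

def cell_value(t_bg, t_f, area_cell, area_geo_in_cell):
    # Value_cell = t_bg * Area_cell + (t_f - t_bg) * Area_geo_in_cell
    return t_bg * area_cell + (t_f - t_bg) * area_geo_in_cell

# Immersion-tool example from the text:
b = subunit_size(193.0, 1.35, 0.95)       # sub-cell size in nm, ~36.7
n = num_real_space_elements(1500.0, b)    # ~1674 input elements
```

Note that `cell_value` reduces to tf·Areacell for a cell fully covered by pattern and to tbg·Areacell for an empty cell, as the equation requires.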
If a spatial-frequency-based input vector is used instead, the number of its elements can be estimated in the spatial-frequency space shown in Figure 3; the maximum spatial frequency that can pass through the imaging system is NA·(1 + σmax), indicated by the circle radius in Figure 3. The number M² of input elements is the total number of diffraction-order pairs (nx, ny) satisfying √(nx² + ny²)·λ/P ≤ NA·(1 + σmax), where nx and ny are the diffraction orders of the imaging system, NA is the numerical aperture, σmax is a parameter related to the maximum angle at which the exposure illumination strikes the mask, λ is the exposure wavelength, and P = 2 × (radius of the optical interaction range in optical imaging + width of the guard band); the guard band surrounds the optical interaction range and guarantees the accuracy of the computation for the mask pattern within the optical interaction range.
The input values of the spatial-frequency-based vector are the values of the Fourier transform of the pattern within the optical interaction range plus a certain guard band, sampled on grid points spaced λ/P apart; the complex transmission of the mask must be taken into account when performing the Fourier transform.
Again taking the immersion lithography tool with NA = 1.35, σmax = 0.95, λ = 193 nm, and optical interaction range a = 1500 nm: the maximum spatial frequency that can pass through the imaging system is NA·(1 + σmax), shown as the circle radius in Figure 3, and the diffraction orders (nx, ny) that can pass through the imaging system must satisfy √(nx² + ny²)·λ/P ≤ NA·(1 + σmax). For NA = 1.35, λ = 193 nm, and σmax = 0.95, the required total number of input vector elements is about 5250. The input values are the values of the Fourier transform of the pattern within the optical interaction range plus a certain guard band, on grid points spaced λ/P apart. It is worth noting that when spatial-frequency information is used to describe the neighborhood, the complex transmission of the mask must be taken into account in the Fourier transform.
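The quoted element count can be reproduced by enumerating the integer diffraction-order pairs inside the circle of Figure 3. The guard-band width is not given numerically, so P = 3000 nm below is our assumption (a 1500 nm interaction-range radius with a negligible guard band); with it, the count comes out near the stated 5250:

```python
import math

def num_diffraction_orders(na, sigma_max, wavelength, period):
    # Count integer pairs (nx, ny) with sqrt(nx^2 + ny^2) * wavelength / period
    # <= NA * (1 + sigma_max), i.e. lattice points inside the Figure-3 circle.
    r = na * (1.0 + sigma_max) * period / wavelength  # circle radius in order units
    n_max = int(math.floor(r))
    return sum(
        1
        for nx in range(-n_max, n_max + 1)
        for ny in range(-n_max, n_max + 1)
        if nx * nx + ny * ny <= r * r
    )

count = num_diffraction_orders(1.35, 0.95, 193.0, 3000.0)  # ~5250 for this tool
```

The count grows as π·(NA·(1 + σmax)·P/λ)², so widening the guard band (larger P) increases the input-vector length quadratically.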
After the conjugate convolutional neural network has been used for feature extraction, a feedforward neural network with 3 or 4 hidden layers is used to approximate any nonlinear function of interest for computational lithography. The resulting integrated neural network, shown in Figure 4, can be used for any type of computational lithography learning; once the integrated neural network of the invention has been formed, it can be applied to computational lithography learning.
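A minimal end-to-end sketch of such an integrated network — the conjugate feature layer feeding a feedforward net with three hidden layers — might look as follows. Layer widths, random initialization, and the ReLU activation are our illustrative assumptions; the patent fixes only the overall structure (conjugate feature extraction followed by 3 or 4 hidden layers):

```python
import numpy as np

class IntegratedLithoNet:
    """Conjugate feature layer followed by a feedforward net (Figure 4 structure)."""

    def __init__(self, n_in, n_feat, hidden=(64, 64, 64), n_out=1, seed=0):
        rng = np.random.default_rng(seed)
        # Complex conjugate-layer parameters W_ij (untrained, random here).
        self.W = (rng.standard_normal((n_feat, n_in))
                  + 1j * rng.standard_normal((n_feat, n_in)))
        # Three hidden layers plus one output layer for the feedforward part.
        sizes = (n_feat,) + tuple(hidden) + (n_out,)
        self.layers = [(0.1 * rng.standard_normal((m, n)), np.zeros(m))
                       for n, m in zip(sizes[:-1], sizes[1:])]

    def forward(self, x):
        y = self.W @ x                  # Y_j = sum_i W_ij X_i
        h = (y * np.conj(y)).real       # Z_j = Y_j * conj(Y_j): real features
        for k, (wk, bk) in enumerate(self.layers):
            h = wk @ h + bk
            if k < len(self.layers) - 1:
                h = np.maximum(h, 0.0)  # ReLU on hidden layers only
        return h

net = IntegratedLithoNet(n_in=16, n_feat=8)
out = net.forward(np.ones(16, dtype=complex))  # one scalar prediction, shape (1,)
```

In practice the two parts would be trained jointly on lithography data; here the forward pass only illustrates how the conjugate output plugs into the feedforward input.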
The above are only preferred embodiments of the invention and are not intended to limit the scope of its patent protection; all equivalent structural changes made using the description and drawings of the invention likewise fall within the protection scope of the appended claims.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810600924.3A CN108665060B (en) | 2018-06-12 | 2018-06-12 | Integrated neural network for computational lithography |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810600924.3A CN108665060B (en) | 2018-06-12 | 2018-06-12 | Integrated neural network for computational lithography |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108665060A true CN108665060A (en) | 2018-10-16 |
CN108665060B CN108665060B (en) | 2022-04-01 |
Family
ID=63774679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810600924.3A Active CN108665060B (en) | 2018-06-12 | 2018-06-12 | Integrated neural network for computational lithography |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108665060B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109143796A (en) * | 2018-10-26 | 2019-01-04 | 中国科学院微电子研究所 | Method and device for determining photoetching light source and method and device for training model |
CN110187609A (en) * | 2019-06-05 | 2019-08-30 | 北京理工大学 | A Deep Learning Approach to Computational Lithography |
CN111985611A (en) * | 2020-07-21 | 2020-11-24 | 上海集成电路研发中心有限公司 | Calculation method of reverse lithography solution based on physical feature map and DCNN machine learning |
US20220309645A1 (en) * | 2019-06-13 | 2022-09-29 | Asml Netherlands B.V. | Metrology Method and Method for Training a Data Structure for Use in Metrology |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080077907A1 (en) * | 2006-09-21 | 2008-03-27 | Kulkami Anand P | Neural network-based system and methods for performing optical proximity correction |
JP2008268265A (en) * | 2007-04-16 | 2008-11-06 | Fujitsu Microelectronics Ltd | Verification method and verification device |
CN104865788A (en) * | 2015-06-07 | 2015-08-26 | 上海华虹宏力半导体制造有限公司 | Photoetching layout OPC (Optical Proximity Correction) method |
CN107797391A (en) * | 2017-11-03 | 2018-03-13 | 上海集成电路研发中心有限公司 | Optical adjacent correction method |
CN107908071A (en) * | 2017-11-28 | 2018-04-13 | 上海集成电路研发中心有限公司 | A kind of optical adjacent correction method based on neural network model |
- 2018-06-12: CN application CN201810600924.3A filed; granted as CN108665060B (active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080077907A1 (en) * | 2006-09-21 | 2008-03-27 | Kulkami Anand P | Neural network-based system and methods for performing optical proximity correction |
JP2008268265A (en) * | 2007-04-16 | 2008-11-06 | Fujitsu Microelectronics Ltd | Verification method and verification device |
CN104865788A (en) * | 2015-06-07 | 2015-08-26 | 上海华虹宏力半导体制造有限公司 | Photoetching layout OPC (Optical Proximity Correction) method |
CN107797391A (en) * | 2017-11-03 | 2018-03-13 | 上海集成电路研发中心有限公司 | Optical adjacent correction method |
CN107908071A (en) * | 2017-11-28 | 2018-04-13 | 上海集成电路研发中心有限公司 | A kind of optical adjacent correction method based on neural network model |
Non-Patent Citations (2)
Title |
---|
KYOUNG-AH JEON et al.: "Process Proximity Correction by Neural Networks", Japanese Journal of Applied Physics |
JIANG Shuyu et al.: "Dynamic scheduling method for wafer lithography flow based on Kohonen neural network", Journal of Shanghai Jiao Tong University |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109143796A (en) * | 2018-10-26 | 2019-01-04 | 中国科学院微电子研究所 | Method and device for determining photoetching light source and method and device for training model |
CN109143796B (en) * | 2018-10-26 | 2021-02-12 | 中国科学院微电子研究所 | Method and device for determining photoetching light source and method and device for training model |
CN110187609A (en) * | 2019-06-05 | 2019-08-30 | 北京理工大学 | A Deep Learning Approach to Computational Lithography |
US20220309645A1 (en) * | 2019-06-13 | 2022-09-29 | Asml Netherlands B.V. | Metrology Method and Method for Training a Data Structure for Use in Metrology |
CN111985611A (en) * | 2020-07-21 | 2020-11-24 | 上海集成电路研发中心有限公司 | Calculation method of reverse lithography solution based on physical feature map and DCNN machine learning |
WO2022016802A1 (en) * | 2020-07-21 | 2022-01-27 | 上海集成电路研发中心有限公司 | Physical feature map- and dcnn-based computation method for machine learning-based inverse lithography technology solution |
Also Published As
Publication number | Publication date |
---|---|
CN108665060B (en) | 2022-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yang et al. | GAN-OPC: Mask optimization with lithography-guided generative adversarial nets | |
US11748549B2 (en) | Method and apparatus for integrated circuit mask patterning | |
CN107908071B (en) | Optical proximity correction method based on neural network model | |
CN108665060A (en) | A kind of integrated neural network for calculating photoetching | |
Peng et al. | Gradient-based source and mask optimization in optical lithography | |
US20160154925A1 (en) | Method for Integrated Circuit Mask Patterning | |
CN104865788B (en) | A kind of lithography layout OPC method | |
CN110187609B (en) | Deep learning method for calculating photoetching | |
CN101388049B (en) | An Extractive Hierarchical Processing Method for Optical Proximity Correction | |
CN111310407A (en) | Method for designing optimal feature vector of reverse photoetching based on machine learning | |
CN108228981A (en) | The Forecasting Methodology of OPC model generation method and experimental pattern based on neural network | |
Sun et al. | Efficient ILT via multi-level lithography simulation | |
CN114326329B (en) | Photoetching mask optimization method based on residual error network | |
Cecil et al. | Establishing fast, practical, full-chip ILT flows using machine learning | |
CN111985611A (en) | Calculation method of reverse lithography solution based on physical feature map and DCNN machine learning | |
CN107169566A (en) | Dynamic neural network model training method and device | |
CN113238460B (en) | A Deep Learning-Based Optical Proximity Correction Method for Extreme Ultraviolet | |
Lin et al. | Fast mask near-field calculation using fully convolution network | |
Pearman et al. | Fast all-angle Mask 3D for ILT patterning | |
Ma et al. | Informational lithography approach based on source and mask optimization | |
US20070011648A1 (en) | Fast systems and methods for calculating electromagnetic fields near photomasks | |
US11899374B2 (en) | Method for determining an electromagnetic field associated with a computational lithography mask model | |
US10025177B2 (en) | Efficient way to creating process window enhanced photomask layout | |
Ma et al. | Nonlinear compressive inverse lithography aided by low-rank regularization | |
CN108614390A (en) | A kind of source mask optimization method using compressed sensing technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||