CN107908071A - Optical proximity correction method based on a neural network model - Google Patents

Optical proximity correction method based on a neural network model

Info

Publication number
CN107908071A
CN107908071A (application CN201711216779.0A)
Authority
CN
China
Prior art keywords
neural network
network model
training
pattern
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711216779.0A
Other languages
Chinese (zh)
Other versions
CN107908071B (en)
Inventor
时雪龙
赵宇航
陈寿面
李铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai IC R&D Center Co Ltd
Original Assignee
Shanghai Integrated Circuit Research and Development Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Integrated Circuit Research and Development Center Co Ltd filed Critical Shanghai Integrated Circuit Research and Development Center Co Ltd
Priority to CN201711216779.0A priority Critical patent/CN107908071B/en
Publication of CN107908071A publication Critical patent/CN107908071A/en
Application granted granted Critical
Publication of CN107908071B publication Critical patent/CN107908071B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F1/00 Originals for photomechanical production of textured or patterned surfaces, e.g., masks, photo-masks, reticles; Mask blanks or pellicles therefor; Containers specially adapted therefor; Preparation thereof
    • G03F1/36 Masks having proximity correction features; Preparation thereof, e.g. optical proximity correction [OPC] design processes

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)

Abstract

The invention discloses an optical proximity correction method based on a neural network model, comprising the following steps. S01: training the neural network model, which includes selecting M test patterns on a training photomask; respectively obtaining the target patterns corresponding to the M test patterns; simulating the intensity-like function Ĩ(x, y) with a known perceptron neural network; and training the perceptron neural network with the intensity-like function and the target patterns to obtain the neural network model. S02: realizing optical proximity correction with the trained neural network model: obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the trained model, and cutting it with the cutting threshold θ to generate a photomask containing the target pattern. S03: performing lithography with the photomask containing the target pattern as the mask plate after optical proximity correction. The disclosed optical proximity correction method takes into account both the corrected image quality and a fast implementation speed.

Description

Optical proximity correction method based on neural network model
Technical Field
The invention relates to the field of optical proximity correction, in particular to an optical proximity correction method based on a neural network model.
Background
Optical Proximity Correction (OPC) has become an indispensable tool in semiconductor manufacturing processes. Its purpose is to make the pattern realized on the chip as consistent as possible with the lithography target pattern by correcting the lithography mask pattern. OPC consists of several key steps, such as placement of the lithography target pattern, generation of assist patterns, and correction of the main pattern. The lithography target pattern is often different from the original design pattern because of the bias introduced by etching or the requirements of the lithography process window. Assist patterns are used to enlarge the lithography process window of sparsely designed patterns, and their placement rules are often derived from lithography simulations. The main pattern is corrected by dividing the edges of the original design pattern into small segments and placing one or more evaluation points on each segment.
As the OPC correction iterations progress, the OPC engine simulates the edge placement error of each segment during each iteration to determine the correction direction and correction amount for the next iteration. The simulation requires a well-calibrated OPC model. From a lithography-process-window perspective, current OPC engines generally provide only a sub-optimal OPC solution, since their corrections focus only on the edge placement error of each segment and do not optimize the lithography process window. A main-pattern edge segment may have multiple correction schemes that all meet a similar edge-placement-error tolerance, yet yield different lithography process windows. At advanced nodes such as 14 nm, 10 nm, 7 nm and beyond, the interaction between adjacent segments becomes stronger because of the more spatially coherent illumination conditions used in the lithography process. A sketch of this conventional segment-based correction loop follows.
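For illustration, a minimal sketch of the segment-based feedback loop described above, assuming a hypothetical `simulate_epe` routine that stands in for the calibrated OPC model (the gain, tolerance and sign convention are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def opc_iterate(segment_offsets, simulate_epe, gain=0.5, tol=0.5, max_iter=20):
    """Move each edge segment against its simulated edge placement error (EPE)."""
    for _ in range(max_iter):
        epe = simulate_epe(segment_offsets)       # one EPE value (nm) per segment
        if np.max(np.abs(epe)) < tol:             # all segments within tolerance
            break
        segment_offsets = segment_offsets - gain * epe  # damped feedback correction
    return segment_offsets
```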
To overcome the inherent drawbacks of conventional OPC algorithms, the industry has developed more advanced OPC solution engines of increasing complexity, from segment-interaction-matrix solvers to inverse lithography solutions. The segment-interaction-matrix solution mainly considers the interaction of adjacent segments, while the inverse lithography solution fully considers the optimization of the lithography process window. There are several methods for inverse lithography, such as level-set based methods, pixel-optimization based methods, and mask optimization methods. All inverse lithography approaches incur a large increase in computation time, so full-chip implementations of inverse lithography solutions remain impractical. Therefore, an OPC algorithm that provided the quality of an inverse-lithography OPC solution, in terms of both assist-pattern placement and the correction of main-pattern edge segments, while being computationally fast, would be highly desirable in the industry.
Disclosure of Invention
The invention aims to provide an optical proximity correction method based on a neural network model which takes into account both the corrected image quality and a fast implementation speed.
In order to achieve the purpose, the invention adopts the following technical scheme:
an optical proximity correction method based on a neural network model comprises the following steps:
s01: training a neural network model, specifically comprising the following steps:
s0101: selecting M test patterns on a training photomask;
s0102: respectively obtaining the target patterns corresponding to the M test patterns by an inverse lithography method;
s0103: simulating the intensity-like function Ĩ(x, y) of the training photomask with a known perceptron neural network;
S0104: training the perceptron neural network with the intensity-like function and the target patterns of the training photomask to obtain the optimal model parameters, including the cutting threshold θ, and obtaining the neural network model from the optimal model parameters;
s02: realizing optical proximity correction with the trained neural network model:
s0201: obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the trained neural network model;
S0202: cutting the intensity-like function with the cutting threshold θ to generate a photomask containing the target pattern;
s03: performing lithography with the photomask containing the target pattern as the mask plate after optical proximity correction.
Further, the neural network model is a linear neural network model, and the perceptron neural network is a single-hidden-layer perceptron neural network with determined parameters.
Further, in step S0103, the specific method of simulating the intensity-like function Ĩ(x, y) of the training photomask with the known perceptron neural network is:

Ĩ(x, y) = Σ_{j=1..R} ω_j · σ( Σ_i w_{i,j} · S_i(x, y) + p_{j0} ) + q_0,   S_i = (K_i ⊗ M)(x, y),

wherein w_{i,j}, ω_j, p_{j0}, q_0 are the parameters of the hidden-layer perceptron neural network, σ is the node activation function, S_i is the intrinsic imaging signal value at the training-photomask grid points, M is the mask pattern, and the filters K_i run over K_i^polygon, the mask filter of the i-th polygon; K_i^Vedge, the mask three-dimensional filter of the i-th vertical edge; K_i^Hedge, the mask three-dimensional filter of the i-th horizontal edge; and K_i^corner, the mask three-dimensional filter of the i-th corner.
Further, the cost function for training the perceptron neural network in step S0104 is:

E = Σ_{m=1..M} { μ_m^main · Σ_{(x,y)∈main} [ Ĩ_m(x, y) − Z_m(x, y) ]² + μ_m^assist · Σ_{(x,y)∈assist} [ Ĩ_m(x, y) − Z_m(x, y) ]² },

wherein w_{i,j}', ω_j', p_{j0}', q_0' are the model parameters of the linear neural network model that minimize E, μ_m^main is the weight of the m-th training pattern for the main pattern, μ_m^assist is the weight of the m-th training pattern for the assist pattern, and Z_m is the target pattern corresponding to the m-th test pattern on the training photomask.
Further, in step S0201, the specific method of obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the neural network model is:

Ĩ(x, y) = Σ_{j=1..R} ω_j' · σ( Σ_i w_{i,j}' · S_i(x, y) + p_{j0}' ) + q_0',   S_i = (K_i ⊗ M)(x, y),

wherein w_{i,j}', ω_j', p_{j0}', q_0' are the model parameters of the linear neural network model, S_i is the intrinsic imaging signal value at the grid points of the photomask to be processed, and the filters K_i run over K_i^polygon, the mask filter of the i-th polygon; K_i^Vedge, the mask three-dimensional filter of the i-th vertical edge; K_i^Hedge, the mask three-dimensional filter of the i-th horizontal edge; and K_i^corner, the mask three-dimensional filter of the i-th corner.
Further, the neural network model is a quadratic neural network model, and the perceptron neural network is a multi-layer perceptron neural network with determined parameters.
Further, in step S0103, the specific method of simulating the intensity-like function Ĩ(x, y) of the training photomask with the known perceptron neural network is:

Ĩ(x, y) = Σ_{k=1..R} w_k · y_k + z_0, with y_k = σ( Σ_i u_{i,k} · (V_{i,k} ⊗ t)² + p_{k0} ),

wherein u_{i,k}, w_k, p_{k0}, z_0 are the parameters of the multi-layer perceptron neural network, V_{i,k} is the i-th convolution kernel of the k-th node at the training-photomask grid points, and t is the light field corresponding to V_{i,k}.
Further, the cost function for training the perceptron neural network in step S0104 is:

E = Σ_{m=1..M} { μ_m^main · Σ_{(x,y)∈main} [ Ĩ_m(x, y) − Z_m(x, y) ]² + μ_m^assist · Σ_{(x,y)∈assist} [ Ĩ_m(x, y) − Z_m(x, y) ]² },

wherein w_k', p_{k0}', z_0' are the model parameters of the quadratic neural network model, {V_{1,k}', V_{2,k}', …, V_{N,k}'} is the convolution kernel set optimized by the quadratic neural network model, {u_{1,k}', u_{2,k}', …, u_{N,k}'} are the weights corresponding to each convolution kernel in the quadratic neural network model, μ_m^main is the weight of the m-th training pattern for the main pattern, μ_m^assist is the weight of the m-th training pattern for the assist pattern, and Z_m is the target pattern corresponding to the m-th test pattern on the training photomask.
Furthermore, in the process of training the perceptron neural network, the following constraints are added to the model parameters:

u_{i,k}' > 0;

Σ_i u_{i,k}' = 1, k = 1, 2, …, R.
Further, in step S0201, the specific method of obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the neural network model is:

Ĩ(x, y) = Σ_{k=1..R} w_k' · y_k + z_0', with y_k = σ( Σ_i u_{i,k}' · (V_{i,k}' ⊗ t)² + p_{k0}' ),

wherein w_k', p_{k0}', z_0' are the model parameters of the quadratic neural network model, {V_{1,k}', V_{2,k}', …, V_{N,k}'} is the convolution kernel set optimized by the quadratic neural network model, {u_{1,k}', u_{2,k}', …, u_{N,k}'} are the weights corresponding to each convolution kernel, and t is the light field corresponding to V_{i,k}'.
The beneficial effects of the invention are as follows: the optical proximity correction method based on a neural network model provided by the invention delivers the quality of an inverse-lithography OPC solution, in terms of both the positions of the assist patterns and the correction of the main-pattern edge segments, while its computation speed is greatly improved over the inverse lithography calculation.
Drawings
FIG. 1 is a flow chart of the optical proximity correction method based on a neural network model according to the present invention.
FIG. 2 shows the calculation relationship of the intrinsic imaging signal values at the photomask grid points in embodiment 1.
FIG. 3 is a structural diagram of the linear neural network model in embodiment 1.
FIG. 4 is a schematic view of the division of the photomask plane into small cells in embodiment 2.
FIG. 5 is a structural diagram of the quadratic neural network model in embodiment 2.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following detailed description of the embodiments of the present invention is provided with reference to the accompanying drawings.
As shown in fig. 1, the optical proximity correction method based on a neural network model provided by the present invention includes the following steps (the overall flow is also sketched in code after the steps):
s01: training the neural network model, specifically comprising the following steps:
s0101: selecting M test patterns on a training photomask;
s0102: respectively obtaining the target patterns corresponding to the M test patterns by an inverse lithography method;
s0103: simulating the intensity-like function Ĩ(x, y) of the training photomask with a known perceptron neural network;
S0104: training the perceptron neural network with the intensity-like function and the target patterns of the training photomask to obtain the optimal model parameters, including the cutting threshold θ, and obtaining the neural network model from the optimal model parameters;
s02: realizing optical proximity correction with the trained neural network model:
s0201: obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the trained neural network model;
S0202: cutting the intensity-like function with the cutting threshold θ to generate a photomask containing the target pattern;
s03: performing lithography with the photomask containing the target pattern as the mask plate after optical proximity correction.
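A minimal end-to-end sketch of the S01-S03 flow above, assuming a generic scikit-learn-style regressor; `extract_features` and `ilt_target` are hypothetical placeholders for the intrinsic-signal feature extraction (S0103) and the inverse-lithography target generation (S0102), neither of which is specified at this level of the description:

```python
import numpy as np

def train_model(regressor, train_masks, extract_features, ilt_target):
    # S0101-S0104: fit the intensity-like function to the ILT targets Z_m.
    X = np.vstack([extract_features(m) for m in train_masks])        # (points, features)
    y = np.concatenate([ilt_target(m).ravel() for m in train_masks]) # target values
    regressor.fit(X, y)
    return regressor

def correct_mask(regressor, mask, extract_features, theta):
    # S0201: predict the intensity-like function on the mask grid.
    i_tilde = regressor.predict(extract_features(mask)).reshape(mask.shape)
    # S0202: cut at the trained threshold theta; 1 marks the pattern region.
    return (i_tilde > theta).astype(np.uint8)
```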
The neural network model in the invention may be a linear neural network model or a quadratic neural network model. When the type of the neural network model differs, the specific training method and calculation steps also differ; the two cases are introduced in the following two embodiments.
Embodiment 1
When the neural network model is a linear neural network model, the desired lithographic mask pattern, i.e., the corrected pattern of the assist patterns and the main pattern, can be viewed as the contour obtained by cutting a continuous intensity-like function Ĩ(x, y) at a threshold θ. This intensity-like function can be derived from the optical image intensity function I(x, y) of the lithography target pattern by a fixed nonlinear mapping mechanism. Clearly, Ĩ(x, y) depends not only on I(x, y) at the point (x, y) but also on the gray-scale distribution of I(x, y) around that point. The most efficient way to encode the gray-scale distribution of I(x, y) around the point (x, y) is to use the set of intrinsic imaging signal values at (x, y). To accurately describe the optical image intensity, including the three-dimensional effects of the lithographic mask, the set of intrinsic imaging signals should include information for the polygon geometry, the vertical edges, the horizontal edges and the corners, as shown in fig. 2, wherein K_i^polygon is the mask filter of the i-th polygon, K_i^Vedge is the mask three-dimensional filter of the i-th vertical edge, K_i^Hedge is the mask three-dimensional filter of the i-th horizontal edge, and K_i^corner is the mask three-dimensional filter of the i-th corner.
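A sketch of assembling such an intrinsic-signal feature vector on the mask grid; the Gaussian kernels here are stand-ins for the polygon, vertical-edge, horizontal-edge and corner filters of fig. 2, which the text does not specify numerically:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(sigma, size=21):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def intrinsic_signals(mask, kernels):
    """mask: 2D 0/1 array; kernels: list of 2D filters K_i. Returns (H, W, N)."""
    return np.stack([fftconvolve(mask, k, mode="same") for k in kernels], axis=-1)

# Placeholder filter bank (assumption): a few Gaussian scales instead of the
# actual polygon/edge/corner mask 3D filters.
kernels = [gaussian_kernel(s) for s in (1.0, 2.0, 4.0, 8.0)]
```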
The optical proximity correction method based on the neural network model provided by the embodiment comprises the following steps:
s01: training a neural network model, specifically comprising the following steps:
s0101: m test patterns are selected on the training mask.
S0102: the target patterns corresponding to the M test patterns are respectively obtained by an inverse lithography method.
Wherein any point on the lithographic mask plane can only take the value 1 or 0. For clear-tone lithographic mask types, the pattern-defining areas are dark and take 0 as their value; for dark-tone lithographic mask types, the pattern-defining areas are clear and take 1 as their value. Since a neural network model cannot model a discontinuous function, we first convolve the mask pattern computed by inverse lithography with a reasonable Gaussian function, thereby smoothing the convex and concave corners. This operation can also be achieved by directly replacing the convex or concave corners with an arc of appropriate radius, which can be set at around 30 nm to 40 nm. The output of this operation is a smoothed version of the binary-valued mask function; Z_m denotes the target pattern obtained by the inverse lithography engine for the m-th training pattern.
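A minimal sketch of this corner-smoothing step, assuming 1 nm grid units and a heuristic mapping from the 30-40 nm corner radius to the Gaussian width (both are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_ilt_target(binary_mask, corner_radius_nm=35.0, grid_nm=1.0):
    """Round the corners of a binary ILT mask by Gaussian convolution."""
    sigma = corner_radius_nm / (2.0 * grid_nm)       # heuristic width (assumption)
    return gaussian_filter(binary_mask.astype(float), sigma)  # smooth Z_m in [0, 1]
```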
S0103: the intensity-like function Ĩ(x, y) of the training photomask is simulated with the known perceptron neural network.
The structure of the perceptron neural network is shown in fig. 3; it is essentially a single-hidden-layer perceptron neural network. The number of nodes in the hidden layer is denoted by R. If the number of hidden nodes is too small, the neural network model lacks the capacity to learn the behavior of the inverse lithography calculation, so its accuracy is insufficient; if the number of hidden nodes is too large, the model may overfit and become unstable. Therefore, the number of hidden nodes R is determined through training experiments, and is estimated to be below 10. Because of the high information content of the intrinsic imaging signals, which can be seen from the rapid drop of the eigenvalues of the TCC matrix, the number of elements of the input feature vector is estimated to be between 10 and 15, whereas feature vectors based on purely geometric descriptions require 50 to several hundred elements.
The mathematical calculation relationship is as follows:

Ĩ(x, y) = Σ_{j=1..R} ω_j · σ( Σ_i w_{i,j} · S_i(x, y) + p_{j0} ) + q_0,   S_i = (K_i ⊗ M)(x, y),

wherein w_{i,j}, ω_j, p_{j0}, q_0 are the parameters of the known perceptron neural network model, σ is the node activation function, S_i is the intrinsic imaging signal value at the grid points of the training photomask, M is the mask pattern, and the filters K_i run over K_i^polygon, the mask filter of the i-th polygon; K_i^Vedge, the mask three-dimensional filter of the i-th vertical edge; K_i^Hedge, the mask three-dimensional filter of the i-th horizontal edge; and K_i^corner, the mask three-dimensional filter of the i-th corner.
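A minimal sketch of this single-hidden-layer perceptron, with a sigmoid assumed for the activation σ (the patent does not name the activation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def intensity_like(S, W, p, omega, q0):
    """S: (..., N) intrinsic signals; W: (N, R) weights; p: (R,) biases;
    omega: (R,) output weights; q0: output bias. Returns I~ per grid point."""
    hidden = sigmoid(S @ W + p)      # R hidden nodes (R estimated below 10)
    return hidden @ omega + q0       # intensity-like function at each point
```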
S0104: training the perceptron neural network with the intensity-like function and the target patterns to obtain the optimal model parameters, including the cutting threshold θ, and obtaining the neural network model from the optimal model parameters.
Wherein the cost function for training the neural network model is as follows:

E = Σ_{m=1..M} { μ_m^main · Σ_{(x,y)∈main} [ Ĩ_m(x, y) − Z_m(x, y) ]² + μ_m^assist · Σ_{(x,y)∈assist} [ Ĩ_m(x, y) − Z_m(x, y) ]² },

wherein w_{i,j}', ω_j', p_{j0}', q_0' are the model parameters of the linear neural network model that minimize E, μ_m^main is the weight of the m-th training pattern for the main pattern, μ_m^assist is the weight of the m-th training pattern for the assist pattern, and Z_m is the target pattern corresponding to the m-th test pattern on the training photomask. The trained model needs to be verified on another set of test patterns to confirm its generality.
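A sketch of this weighted cost, assuming the main and assist regions of each training pattern are supplied as boolean masks (a bookkeeping detail the text does not spell out):

```python
import numpy as np

def opc_cost(i_tilde_list, Z_list, main_masks, assist_masks, mu_main, mu_assist):
    """Sum of per-pattern weighted squared errors over main and assist regions."""
    cost = 0.0
    for i_t, Z, rm, ra, wm, wa in zip(i_tilde_list, Z_list, main_masks,
                                      assist_masks, mu_main, mu_assist):
        err2 = (i_t - Z) ** 2
        cost += wm * err2[rm].sum() + wa * err2[ra].sum()
    return cost
```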
S02: realizing optical proximity correction with the trained neural network model:
s0201: obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the trained neural network model (the inference steps are tied together in the sketch after step S03).
Wherein

Ĩ(x, y) = Σ_{j=1..R} ω_j' · σ( Σ_i w_{i,j}' · S_i(x, y) + p_{j0}' ) + q_0',   S_i = (K_i ⊗ M)(x, y),

wherein w_{i,j}', ω_j', p_{j0}', q_0' are the model parameters of the linear neural network model, S_i is the intrinsic imaging signal value at the grid points of the photomask to be processed, and the filters K_i run over K_i^polygon, the mask filter of the i-th polygon; K_i^Vedge, the mask three-dimensional filter of the i-th vertical edge; K_i^Hedge, the mask three-dimensional filter of the i-th horizontal edge; and K_i^corner, the mask three-dimensional filter of the i-th corner.
S0202: cutting the intensity-like function with the cutting threshold θ to generate a photomask containing the target pattern;
s03: performing lithography with the photomask containing the target pattern as the mask plate after optical proximity correction.
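A sketch tying the embodiment-1 inference steps together, reusing `intrinsic_signals` and `intensity_like` from the sketches above; `theta` stands for the trained cutting threshold θ and the parameters are assumed to come from step S0104:

```python
def correct_photomask(mask, kernels, W, p, omega, q0, theta):
    S = intrinsic_signals(mask, kernels)          # S0201: eigen-signal features
    i_tilde = intensity_like(S, W, p, omega, q0)  # S0201: intensity-like function
    return (i_tilde > theta).astype("uint8")      # S0202: cut at threshold theta
```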
Embodiment 2
When the neural network model is a quadratic neural network model, as shown in FIG. 4, the lithographic mask plane is divided into small cells; let t(i, j) and t(m, n) be the light fields just behind small cell (i, j) and small cell (m, n) of the lithographic mask. Since chemical resists respond to the light intensity, not to the field itself, we can assume that the target pattern calculated by inverse lithography is the contour obtained by cutting a continuous intensity-like function Ĩ(x, y) at a threshold θ. This function depends only on all the pair values {t(i, j)·t(m, n)}, but the function itself is unknown.
The pair values {t(i, j)·t(m, n)} are defined around the point (x, y). Since all lithographic mask types currently in use have only 0-degree or 180-degree phase, the t(i, j) are real numbers. One way to explore this unknown function is to use a multi-layer perceptron neural network model with the set {t(1, 1)·t(1, 1), t(1, 1)·t(1, 2), …, t(N, N)·t(N, N)} as the feature vector.
The optical proximity correction method based on the neural network model provided by the embodiment comprises the following steps:
s01: training a quadratic neural network model, which comprises the following specific steps:
s0101: selecting M test patterns on a training photomask;
s0102: respectively obtaining the target patterns corresponding to the M test patterns by an inverse lithography method.
S0103: the intensity-like function Ĩ(x, y) is simulated with the known perceptron neural network.
Wherein: the structure of the known perceptron neural network is shown in fig. 5, and the specific expression is as follows:

Ĩ(x, y) = Z = Σ_{k=1..R} w_k · y_k + z_0, with y_k = σ( Σ_i u_{i,k} · (V_{i,k} ⊗ t)² + p_{k0} ).

This expression arises from writing the intensity-like function as a quadratic form Σ t(i, j) · A_{(i,j),(m,n)} · t(m, n) in the cell light fields. Due to the reciprocity principle of light interaction, the matrix A must be symmetric; {u_{i,k}} and {V_{i,k}} are therefore its eigenvalues and eigenvectors. If each eigenvector V_{i,k} is rearranged from a one-dimensional vector into a two-dimensional matrix, each term (V_{i,k} ⊗ t)² is effectively a two-dimensional convolution operation.
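A minimal sketch of this quadratic network, with a sigmoid assumed for σ and scipy convolutions standing in for the V_{i,k} ⊗ t operations:

```python
import numpy as np
from scipy.signal import fftconvolve

def quadratic_intensity(t_field, V_kernels, u, w, p0, z0):
    """t_field: 2D near field behind the mask; V_kernels[k][i]: 2D kernel V_ik;
    u[k][i]: kernel weights; w[k], p0[k], z0: node and output parameters."""
    y = []
    for Vk, uk, pk in zip(V_kernels, u, p0):
        s = sum(ui * fftconvolve(t_field, Vi, mode="same") ** 2
                for ui, Vi in zip(uk, Vk))          # sum_i u_ik (V_ik (x) t)^2
        y.append(1.0 / (1.0 + np.exp(-(s + pk))))   # node output y_k
    return sum(wk * yk for wk, yk in zip(w, y)) + z0
```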
S0104: training the perceptron neural network with the intensity-like function and the target patterns to obtain the optimal model parameters, including the cutting threshold θ, and obtaining the neural network model from the optimal model parameters.
The cost function has the same weighted form as in embodiment 1, wherein w_k', p_{k0}', z_0' are the model parameters of the quadratic neural network model, {V_{1,k}', V_{2,k}', …, V_{N,k}'} is the convolution kernel set optimized by the quadratic neural network model, {u_{1,k}', u_{2,k}', …, u_{N,k}'} are the weights corresponding to each convolution kernel in the quadratic neural network model, μ_m^main is the weight of the m-th training pattern for the main pattern, μ_m^assist is the weight of the m-th training pattern for the assist pattern, and Z_m is the target pattern corresponding to the m-th test pattern on the training photomask. The equality constraints below can be added to the cost function as penalty terms, converting the problem into a nonlinear unconstrained optimization problem, which can be solved with gradient methods.
Wherein, in the training process, the following constraints are added to the model parameters:
the convolution kernel is required to be orthonormal;
the convolution kernel is required to be orthonormal;
u i,k ’>0;
i u i,k ’=1,k=1,2,…R。
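A minimal sketch of enforcing these constraints by projection after each gradient step; the patent itself instead folds the equality constraints into the cost function as penalty terms, so projection is an alternative illustrative choice:

```python
import numpy as np

def project_weights(u):
    """u: (N, R) kernel weights; enforce u_ik > 0 and column sums equal to 1."""
    u = np.clip(u, 1e-8, None)           # u_ik > 0
    return u / u.sum(axis=0)             # sum_i u_ik = 1 for k = 1..R

def orthonormalize(V):
    """V: (D, N) kernels as columns; QR factorization gives orthonormal kernels."""
    Q, _ = np.linalg.qr(V)
    return Q
```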
s02: realizing optical proximity correction with the trained neural network model:
s0201: obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the trained neural network model. The trained mapping in this embodiment is the same for different photomasks to be processed. The specific method of obtaining the intensity-like function of the photomask to be processed with the quadratic neural network model adopts the following algorithm:

Ĩ(x, y) = z = Σ_{k=1..R} w_k' · y_k + z_0', with y_k = σ( Σ_i u_{i,k}' · (V_{i,k}' ⊗ t)² + p_{k0}' ),

wherein w_k', p_{k0}', z_0' are the model parameters of the quadratic neural network model, {V_{1,k}', V_{2,k}', …, V_{N,k}'} is the convolution kernel set optimized by the quadratic neural network model, {u_{1,k}', u_{2,k}', …, u_{N,k}'} are the weights corresponding to each convolution kernel, and t is the light field corresponding to V_{i,k}', specifically the near field of the light just behind the mask.
S0202: cutting the intensity-like function with the cutting threshold θ to generate a photomask containing the target pattern;
s03: performing lithography with the photomask containing the target pattern as the mask plate after optical proximity correction.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention; any equivalent structural changes made using the contents of the specification and drawings of the present invention shall be included in the scope of the appended claims.

Claims (10)

1. An optical proximity correction method based on a neural network model is characterized by comprising the following steps:
s01: training a neural network model, specifically comprising the following steps:
s0101: selecting M test patterns on a training photomask;
s0102: respectively obtaining the target patterns corresponding to the M test patterns by an inverse lithography method;
s0103: simulating the intensity-like function Ĩ(x, y) of the training photomask with a known perceptron neural network;
S0104: training the perceptron neural network with the intensity-like function and the target patterns of the training photomask to obtain the optimal model parameters, including the cutting threshold θ, and obtaining the neural network model from the optimal model parameters;
s02: realizing optical proximity correction with the trained neural network model:
s0201: obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the trained neural network model;
S0202: cutting the intensity-like function with the cutting threshold θ to generate a photomask containing the target pattern;
s03: performing lithography with the photomask containing the target pattern as the mask plate after optical proximity correction.
2. The optical proximity correction method based on a neural network model according to claim 1, wherein the neural network model is a linear neural network model, and the perceptron neural network is a single-hidden-layer perceptron neural network with determined parameters.
3. The method of claim 2, wherein in step S0103 the specific method of simulating the intensity-like function Ĩ(x, y) of the training photomask with the known perceptron neural network is:

Ĩ(x, y) = Σ_{j=1..R} ω_j · σ( Σ_i w_{i,j} · S_i(x, y) + p_{j0} ) + q_0,   S_i = (K_i ⊗ M)(x, y),

wherein w_{i,j}, ω_j, p_{j0}, q_0 are the parameters of the hidden-layer perceptron neural network, S_i is the intrinsic imaging signal value at the training-photomask grid points, and the filters K_i run over K_i^polygon, the mask filter of the i-th polygon; K_i^Vedge, the mask three-dimensional filter of the i-th vertical edge; K_i^Hedge, the mask three-dimensional filter of the i-th horizontal edge; and K_i^corner, the mask three-dimensional filter of the i-th corner.
4. The method of claim 3, wherein the cost function for training the perceptron neural network in step S0104 is:

E = Σ_{m=1..M} { μ_m^main · Σ_{(x,y)∈main} [ Ĩ_m(x, y) − Z_m(x, y) ]² + μ_m^assist · Σ_{(x,y)∈assist} [ Ĩ_m(x, y) − Z_m(x, y) ]² },

wherein w_{i,j}', ω_j', p_{j0}', q_0' are the model parameters of the linear neural network model, μ_m^main is the weight of the m-th training pattern for the main pattern, μ_m^assist is the weight of the m-th training pattern for the assist pattern, and Z_m is the target pattern corresponding to the m-th test pattern on the training photomask.
5. The method of claim 4, wherein in step S0201 the specific method of obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the neural network model is:

Ĩ(x, y) = Σ_{j=1..R} ω_j' · σ( Σ_i w_{i,j}' · S_i(x, y) + p_{j0}' ) + q_0',   S_i = (K_i ⊗ M)(x, y),

wherein w_{i,j}', ω_j', p_{j0}', q_0' are the model parameters of the linear neural network model, S_i is the intrinsic imaging signal value at the grid points of the photomask to be processed, and the filters K_i run over K_i^polygon, the mask filter of the i-th polygon; K_i^Vedge, the mask three-dimensional filter of the i-th vertical edge; K_i^Hedge, the mask three-dimensional filter of the i-th horizontal edge; and K_i^corner, the mask three-dimensional filter of the i-th corner.
6. The optical proximity correction method based on a neural network model of claim 1, wherein the neural network model is a quadratic neural network model, and the perceptron neural network is a multi-layer perceptron neural network with determined parameters.
7. The method of claim 6, wherein in step S0103 the specific method of simulating the intensity-like function Ĩ(x, y) of the training photomask with the known perceptron neural network is:

Ĩ(x, y) = Σ_{k=1..R} w_k · y_k + z_0, with y_k = σ( Σ_i u_{i,k} · (V_{i,k} ⊗ t)² + p_{k0} ),

wherein u_{i,k}, w_k, p_{k0}, z_0 are the parameters of the multi-layer perceptron neural network, V_{i,k} is the i-th convolution kernel of the k-th node at the training-photomask grid points, and t is the light field corresponding to V_{i,k}.
8. The method of claim 7, wherein the cost function for training the perceptron neural network in step S0104 is:

E = Σ_{m=1..M} { μ_m^main · Σ_{(x,y)∈main} [ Ĩ_m(x, y) − Z_m(x, y) ]² + μ_m^assist · Σ_{(x,y)∈assist} [ Ĩ_m(x, y) − Z_m(x, y) ]² },

wherein w_k', p_{k0}', z_0' are the model parameters of the quadratic neural network model, {V_{1,k}', V_{2,k}', …, V_{N,k}'} is the convolution kernel set optimized by the quadratic neural network model, {u_{1,k}', u_{2,k}', …, u_{N,k}'} are the weights corresponding to each convolution kernel in the quadratic neural network model, μ_m^main is the weight of the m-th training pattern for the main pattern, μ_m^assist is the weight of the m-th training pattern for the assist pattern, and Z_m is the target pattern corresponding to the m-th test pattern on the training photomask.
9. The method of claim 8, wherein the following constraints are added to the model parameters during the training of the perceptron neural network:

u_{i,k}' > 0;

Σ_i u_{i,k}' = 1, k = 1, 2, …, R.
10. The method of claim 8, wherein in step S0201 the specific method of obtaining the intensity-like function Ĩ(x, y) of the photomask to be processed with the neural network model is:

Ĩ(x, y) = z = Σ_{k=1..R} w_k' · y_k + z_0', with y_k = σ( Σ_i u_{i,k}' · (V_{i,k}' ⊗ t)² + p_{k0}' ),

wherein w_k', p_{k0}', z_0' are the model parameters of the quadratic neural network model, {V_{1,k}', V_{2,k}', …, V_{N,k}'} is the convolution kernel set optimized by the quadratic neural network model, {u_{1,k}', u_{2,k}', …, u_{N,k}'} are the weights corresponding to each convolution kernel in the quadratic neural network model, and t is the light field corresponding to V_{i,k}'.
CN201711216779.0A 2017-11-28 2017-11-28 Optical proximity correction method based on neural network model Active CN107908071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711216779.0A CN107908071B (en) 2017-11-28 2017-11-28 Optical proximity correction method based on neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711216779.0A CN107908071B (en) 2017-11-28 2017-11-28 Optical proximity correction method based on neural network model

Publications (2)

Publication Number Publication Date
CN107908071A true CN107908071A (en) 2018-04-13
CN107908071B CN107908071B (en) 2021-01-29

Family

ID=61848973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711216779.0A Active CN107908071B (en) 2017-11-28 2017-11-28 Optical proximity correction method based on neural network model

Country Status (1)

Country Link
CN (1) CN107908071B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030086081A1 (en) * 1998-09-17 2003-05-08 Applied Materials, Inc. Reticle design inspection system
CN1661479A (en) * 2003-11-05 2005-08-31 ASML MaskTools Ltd Eigen decomposition based OPC model
CN101320400A (en) * 2008-07-16 2008-12-10 桂林电子科技大学 Optimization design method of micro-electron packaging device based on artificial neural network
CN107076681A (en) * 2014-10-14 2017-08-18 科磊股份有限公司 For responding measurement based on image and the signal for the overlay measurement for scattering art
WO2017171891A1 (en) * 2016-04-02 2017-10-05 Intel Corporation Systems, methods, and apparatuses for modeling reticle compensation for post lithography processing using machine learning algorithms
CN106777829A (en) * 2017-02-06 2017-05-31 深圳晶源信息技术有限公司 A kind of optimization method and computer-readable storage medium of integrated circuit mask design

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875141A (en) * 2018-05-24 2018-11-23 上海集成电路研发中心有限公司 A method of the full mask focusing parameter of chip is determined based on neural network model
CN108665060A (en) * 2018-06-12 2018-10-16 上海集成电路研发中心有限公司 A kind of integrated neural network for calculating photoetching
CN108665060B (en) * 2018-06-12 2022-04-01 上海集成电路研发中心有限公司 Integrated neural network for computational lithography
CN113454532A (en) * 2019-02-21 2021-09-28 Asml荷兰有限公司 Method of training a machine learning model to determine optical proximity correction of a mask
WO2020187578A1 (en) * 2019-03-21 2020-09-24 Asml Netherlands B.V. Training method for machine learning assisted optical proximity error correction
CN113614638A (en) * 2019-03-21 2021-11-05 Asml荷兰有限公司 Training method for machine learning assisted optical proximity effect error correction
US11815820B2 (en) 2019-03-21 2023-11-14 Asml Netherlands B.V. Training method for machine learning assisted optical proximity error correction
CN111310407A (en) * 2020-02-10 2020-06-19 上海集成电路研发中心有限公司 Method for designing optimal feature vector of reverse photoetching based on machine learning
CN111538213A (en) * 2020-04-27 2020-08-14 湖南大学 Electron beam proximity effect correction method based on neural network
CN111538213B (en) * 2020-04-27 2021-04-27 湖南大学 Electron beam proximity effect correction method based on neural network
CN113759657B (en) * 2020-06-03 2024-05-03 中芯国际集成电路制造(上海)有限公司 Optical proximity correction method
CN113759657A (en) * 2020-06-03 2021-12-07 中芯国际集成电路制造(上海)有限公司 Optical proximity correction method
CN111985611A (en) * 2020-07-21 2020-11-24 上海集成电路研发中心有限公司 Computing method based on physical characteristic diagram and DCNN machine learning reverse photoetching solution
WO2022016802A1 (en) * 2020-07-21 2022-01-27 上海集成电路研发中心有限公司 Physical feature map- and dcnn-based computation method for machine learning-based inverse lithography technology solution
CN112485976A (en) * 2020-12-11 2021-03-12 上海集成电路装备材料产业创新中心有限公司 Method for determining optical proximity correction photoetching target pattern based on reverse etching model
CN112578646A (en) * 2020-12-11 2021-03-30 上海集成电路装备材料产业创新中心有限公司 Offline photoetching process stability control method based on image
CN114200768B (en) * 2021-12-23 2023-05-26 中国科学院光电技术研究所 Super-resolution photoetching reverse optical proximity effect correction method based on level set algorithm
CN114200768A (en) * 2021-12-23 2022-03-18 中国科学院光电技术研究所 Super-resolution lithography reverse optical proximity effect correction method based on level set algorithm
US12085846B2 (en) 2021-12-23 2024-09-10 The Institute Of Optics And Electronics, The Chinese Academy Of Sciences Method for inverse optical proximity correction of super-resolution lithography based on level set algorithm
WO2023137622A1 (en) * 2022-01-19 2023-07-27 华为技术有限公司 Method and apparatus for determining size of wafer pattern, device, medium and program product
CN114815496A (en) * 2022-04-08 2022-07-29 中国科学院光电技术研究所 Pixel optical proximity effect correction method and system applied to super-resolution lithography
CN114815496B (en) * 2022-04-08 2023-07-21 中国科学院光电技术研究所 Pixelated optical proximity effect correction method and system applied to super-resolution lithography
CN115509082A (en) * 2022-11-09 2022-12-23 华芯程(杭州)科技有限公司 Training method and device of optical proximity correction model and optical proximity correction method
CN118331002A (en) * 2024-06-13 2024-07-12 浙江大学 SRAF (feature extraction and adaptive feature) insertion rule construction method and system based on reverse photoetching technology

Also Published As

Publication number Publication date
CN107908071B (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN107908071B (en) Optical proximity correction method based on neural network model
CN110678961B (en) Simulating near field images in optical lithography
US8732625B2 (en) Methods for performing model-based lithography guided layout design
Jia et al. Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis
US10248028B2 (en) Source optimization for image fidelity and throughput
TWI398721B (en) Systems, masks, and methods for photolithography
US11061318B2 (en) Lithography model calibration
US8849008B2 (en) Determining calibration parameters for a lithographic process
TWI396055B (en) Multivariable solver for optical proximity correction
US7650587B2 (en) Local coloring for hierarchical OPC
US7000208B2 (en) Repetition recognition using segments
CN108535952B (en) Computational lithography method based on model-driven convolutional neural network
US20070198963A1 (en) Calculation system for inverse masks
US7328424B2 (en) Method for determining a matrix of transmission cross coefficients in an optical proximity correction of mask layouts
KR20050043713A (en) Eigen decomposition based opc model
CN111310407A (en) Method for designing optimal feature vector of reverse photoetching based on machine learning
US20070011648A1 (en) Fast systems and methods for calculating electromagnetic fields near photomasks
US20230375916A1 (en) Inverse lithography and machine learning for mask synthesis
CN110426914A (en) A kind of modification method and electronic equipment of Sub-resolution assist features
US8498469B2 (en) Full-field mask error enhancement function
CN102998896B (en) Basic module-based mask main body graph optimization method
US20200096876A1 (en) Dose Map Optimization for Mask Making
US8073288B2 (en) Rendering a mask using coarse mask representation
US8201110B1 (en) Optical proximity correction using regression
Zheng et al. LithoBench: Benchmarking AI computational lithography for semiconductor manufacturing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant