CN107908071B - Optical proximity correction method based on neural network model - Google Patents

Optical proximity correction method based on neural network model Download PDF

Info

Publication number
CN107908071B
CN107908071B CN201711216779.0A CN201711216779A CN107908071B CN 107908071 B CN107908071 B CN 107908071B CN 201711216779 A CN201711216779 A CN 201711216779A CN 107908071 B CN107908071 B CN 107908071B
Authority
CN
China
Prior art keywords
neural network
network model
training
pattern
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711216779.0A
Other languages
Chinese (zh)
Other versions
CN107908071A (en
Inventor
时雪龙
赵宇航
陈寿面
李铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai IC R&D Center Co Ltd
Original Assignee
Shanghai IC R&D Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai IC R&D Center Co Ltd filed Critical Shanghai IC R&D Center Co Ltd
Priority to CN201711216779.0A priority Critical patent/CN107908071B/en
Publication of CN107908071A publication Critical patent/CN107908071A/en
Application granted granted Critical
Publication of CN107908071B publication Critical patent/CN107908071B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03FPHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F1/00Originals for photomechanical production of textured or patterned surfaces, e.g., masks, photo-masks, reticles; Mask blanks or pellicles therefor; Containers specially adapted therefor; Preparation thereof
    • G03F1/36Masks having proximity correction features; Preparation thereof, e.g. optical proximity correction [OPC] design processes

Abstract

The invention discloses an optical proximity correction method based on a neural network model, which comprises the following steps: s01: training a neural network model, including selecting M test patterns on a training reticle; respectively obtaining the M test chartsTarget patterns corresponding to the patterns; simulation of class intensity function using known perceptual neural networks
Figure DDA0001485678150000011
Training the perception neural network by using a class intensity function and a target pattern to obtain a neural network model; s02: and (3) realizing optical proximity correction by utilizing the trained neural network model: the method comprises the step of obtaining the intensity-like function of the photomask to be processed by using the obtained neural network model
Figure DDA0001485678150000012
Cutting the above class intensity function with a cutting threshold
Figure DDA0001485678150000013
Generating a photomask containing a target pattern; and photoetching by using the photomask containing the target pattern as a mask plate after optical proximity correction. The optical proximity correction method disclosed by the invention simultaneously considers the corrected image quality and the faster realization speed.

Description

Optical proximity correction method based on neural network model
Technical Field
The invention relates to the field of optical proximity correction, in particular to an optical proximity correction method based on a neural network model.
Background
Optical Proximity Correction (OPC) has become an indispensable tool in semiconductor manufacturing processes. Its purpose is to make the pattern realized on the chip as consistent as possible with the target pattern of lithography by lithography mask pattern correction. OPC consists of several key steps, such as target pattern placement for lithography, generation of auxiliary patterns, and correction of main patterns. The target pattern of lithography is often different from the original design pattern due to the bias introduced by the etch or the requirements of the lithographic process window. The assist patterns are lithography process windows used to enhance the sparse design patterns, and their placement rules are often derived from lithography simulations. The host pattern is corrected by dividing the original design pattern edge into small segments and placing one or more evaluation points on each small segment.
As OPC correction iterations progress, the OPC engine simulates the edge position error of each line segment during each iteration to determine the correction direction and correction amount for its next iteration. The simulation requires a well calibrated OPC model. From a lithography process window perspective, current OPC engines currently in use only provide a suboptimal OPC solution because current OPC engine corrections only focus on edge placement errors for each line segment, without regard to optimizing the lithography process window. However, the main pattern edge line segments may have multiple correction schemes that all achieve similar edge placement error tolerance requirements, but the lithography process windows may be different. At advanced nodes, such as 14nm, 10nm, 7nm, and beyond, the interaction between adjacent line segments becomes stronger because of the more spatially coherent light illumination conditions used in the lithography process.
To overcome the inherent drawbacks of conventional OPC algorithms, the industry has developed more advanced OPC solution engines with increased complexity, from line segment interaction matrix solving to inverse lithography solutions. The line segment interaction matrix solution primarily considers the interaction of adjacent line segments, while the inverse lithography solution fully considers the optimization of the lithography process window. There are several methods for reverse lithography, such as level set-based methods, pixel optimization-based methods, and mask optimization methods. All reverse lithography approaches have a large increase in computation time, and therefore, full-chip implementations of reverse lithography solutions remain impractical. Therefore, if an OPC algorithm were available that provided both the quality of the inverse lithographic OPC solution, both in terms of the location of the assist pattern and the correction solution for the edge line segments of the main pattern, while being computationally fast, such an OPC algorithm would be desirable in the industry.
Disclosure of Invention
The invention aims to provide an optical proximity correction method based on a neural network model, which simultaneously considers the corrected image quality and the faster realization speed.
In order to achieve the purpose, the invention adopts the following technical scheme:
an optical proximity correction method based on a neural network model comprises the following steps:
s01: training a neural network model, specifically comprising the following steps:
s0101: selecting M test patterns on a training light cover;
s0102: respectively obtaining target patterns corresponding to the M test patterns by adopting a reverse photoetching method;
s0103: simulation of class intensity function of training mask using known perceptual neural network
Figure BDA0001485678130000021
S0104: training the perception neural network by using a class intensity function and a target pattern of a training photomask to obtain optimal model parameters including a cutting threshold value, and obtaining a neural network model by using the optimal model parameters;
s02: and (3) realizing optical proximity correction by utilizing the trained neural network model:
s0201: obtaining the intensity-like function of the photomask to be processed by using the obtained neural network model
Figure BDA0001485678130000022
S0202: cutting the above class intensity function with a cutting threshold
Figure BDA0001485678130000023
Generating a photomask containing a target pattern;
s03: and photoetching by using the photomask containing the target pattern as a mask plate after optical proximity correction.
Further, the neural network model is a linear neural network model, and the perceptive neural network is a hidden layer perceptive neural network determined by parameters.
Further, in step S0103, the known perceptual neural network is used to simulate the class intensity function of the training mask
Figure BDA0001485678130000024
The specific method comprises the following steps:
Figure BDA0001485678130000025
wherein, wi,j、ωv、pj0、q0Is a parameter of the hidden layer perception neural network, SiFor training intrinsic imaging signal values at reticle grid points, wherein
Figure BDA0001485678130000026
Ki polygonMask Filter for ith polygon, Ki VedgeMasking three-dimensional filters for the ith vertical edge, Ki HedgeAs a masked three-dimensional filter for the ith horizontal edge, Ki CornerThe mask three-dimensional filter for the ith corner.
Further, the cost function for training the perceptive neural network in step S0104 is:
Figure BDA0001485678130000027
wherein, wi,j,,ωv,,pj0,,q0Being a model parameter, μ, of a linear neural network modelm mainIs the weight of the mth training pattern of the main pattern; mu.sm assistIs the weight of the mth training pattern of the auxiliary pattern, ZmThe test pattern is a target pattern corresponding to the mth test pattern on the training photomask.
Further, in step S0201, the intensity-like function of the photomask to be processed is obtained by using the neural network model
Figure BDA0001485678130000028
The specific method comprises the following steps:
Figure BDA0001485678130000029
wherein, wi,j,,ωv,,pj0,,q0Is the model parameter of the linear neural network model, and Si is the intrinsic imaging signal value on the grid point of the photomask to be processed, wherein
Figure BDA0001485678130000031
Ki polygonMask Filter for ith polygon, Ki VedgeMasking three-dimensional filters for the ith vertical edge, Ki HedgeAs a masked three-dimensional filter for the ith horizontal edge, Ki CornerThe mask three-dimensional filter for the ith corner.
Further, the neural network is a quadratic neural network, and the perceptive neural network is a multi-layer perceptron neural network with determined parameters.
Further, in step S0103, the known perceptual neural network is used to simulate the class intensity function of the training mask
Figure BDA0001485678130000032
The specific method comprises the following steps:
Figure BDA0001485678130000033
Figure BDA0001485678130000034
wherein u isi,k、wk、pk0、z0Is a parameter of the multi-layer perceptive neural network, Vi,kFor training the ith convolution kernel of the kth node at the reticle grid point, t is Vi,kTo the corresponding light field.
Further, the cost function for training the perceptive neural network in step S0104 is:
Figure BDA0001485678130000035
wherein, wk’、pk0’、z0' model parameters for quadratic neural network model, { V1.K’,V2.K’,……VN.K' I is a convolution kernel set optimized by a quadratic neural network model, { u1.K’,u2.K’,……uN.K' } is the weight value, mu, corresponding to each convolution kernel in the quadratic neural network modelm mainIs the weight of the mth training pattern of the main pattern; mu.sm assistIs the weight of the mth training pattern of the auxiliary pattern, ZmThe test pattern is a target pattern corresponding to the mth test pattern on the training photomask.
Further, in the process of training the perception neural network, the following constraints are added to the model parameters,
Figure BDA0001485678130000036
Figure BDA0001485678130000037
ui,k’>0;
iui,k’=1,k=1,2,…R。
further, in step S0201, the intensity-like function of the photomask to be processed is obtained by using the neural network model
Figure BDA0001485678130000038
The specific method comprises the following steps:
Figure BDA0001485678130000039
Figure BDA00014856781300000310
wherein, wk’、pk0’、z0' model parameters for quadratic neural network model, { V1.K’,V2.K’,……VN.K' } is a convolution kernel set optimized by a quadratic neural network model, { u }1.K’,u2.K’,……uN.K' is the weight corresponding to each convolution kernel in the quadratic neural network model, and t is Vi,kTo the corresponding light field.
The invention has the beneficial effects that: the optical proximity correction method based on the neural network model provided by the invention not only provides the quality of a reverse photoetching OPC solution, but also can give consideration to the positions of the auxiliary patterns and the positions of the edge line segments of the main pattern, and meanwhile, the calculation speed is greatly improved compared with that of reverse photoetching calculation.
Drawings
FIG. 1 is a flow chart of an optical proximity correction method based on a neural network model according to the present invention.
FIG. 2 is a graph showing the calculation of intrinsic imaging signal values at reticle points in example 1.
Fig. 3 is a structural diagram of a linear neural network model in embodiment 1.
Fig. 4 is a schematic view of the division of the mask into individual facets in example 2.
Fig. 5 is a structural diagram of a quadratic neural network model in embodiment 2.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the optical proximity correction method based on a neural network model provided by the present invention includes the following steps:
s01: training a linear neural network model, specifically comprising the following steps:
s0101: selecting M test patterns on a training light cover;
s0102: respectively obtaining target patterns corresponding to the M test patterns by adopting a reverse photoetching method;
s0103: simulation of class intensity function of training mask using known perceptual neural network
Figure BDA0001485678130000041
S0104: training the perception neural network by using a class intensity function and a target pattern of a training photomask to obtain optimal model parameters including a cutting threshold value, and obtaining a neural network model by using the optimal model parameters;
s02: and (3) realizing optical proximity correction by utilizing the trained neural network model:
s0201: obtaining the intensity-like function of the photomask to be processed by using the obtained neural network model
Figure BDA0001485678130000042
S0202: cutting the above class intensity function with a cutting threshold
Figure BDA0001485678130000043
Generating a photomask containing a target pattern;
s03: and photoetching by using the photomask containing the target pattern as a mask plate after optical proximity correction.
The neural network model in the invention can comprise a linear neural network model and a quadratic neural network model, when the types of the neural network models are different, the specific training method and the calculation steps are also different, and the following two embodiments are introduced respectively:
example 1
When the neural network model is a linear neural network model, the desired lithographic mask pattern, i.e., the corrected patterns of the auxiliary pattern and the main pattern, can be viewed as a threshold by cutting a continuous intensity-like function
Figure BDA0001485678130000044
The resulting profile. This intensity-like function can be derived from the optical image intensity function I (x, y) of the lithographic target pattern by a fixed non-linear mapping mechanism. It is clear that,
Figure BDA0001485678130000051
not only rely onI (x, y) and depends on the grey scale distribution of the optical image intensity function I (x, y) around the (x, y) point. The most efficient way to encode the gray-scale distribution of the optical image intensity function I (x, y) around the (x, y) point is to use a set of values of the intrinsic imaging signal at the point (x, y). To accurately describe the optical image intensities including the three-dimensional effects of a lithography mask, the set of intrinsic imaging signals should include the characteristic signals of the three-dimensional filters of the mask for polygonal geometries, vertical edges, horizontal edges and corners, as shown in FIG. 2. Wherein
Figure BDA0001485678130000052
Figure BDA0001485678130000053
Ki polygonMask Filter for ith polygon, Ki VedgeMasking three-dimensional filters for the ith vertical edge, Ki HedgeAs a masked three-dimensional filter for the ith horizontal edge, Ki CornerThe mask three-dimensional filter for the ith corner.
The optical proximity correction method based on the neural network model provided by the embodiment comprises the following steps:
s01: training a neural network model, specifically comprising the following steps:
s0101: m test patterns are selected on the training mask.
S0102: and respectively obtaining target patterns corresponding to the M test patterns by adopting a reverse photoetching method.
Wherein any point on the lithographic mask plane can only take a value of 1 or 0. For clear tone lithographic mask types, the pattern definition areas are dark, using 0 as their value; for dark tone lithographic mask types, the pattern defining areas are clear, using 1 as their value. Since the neural network model cannot model the function of the discontinuity, we will first convolve the mask pattern computed from the inverse lithography with a reasonable gaussian function, thus smoothing the convex and concave corners. This operation can also be performed by direct replacement of convex or concave by an appropriate radius arcThe angle is realized, and the radius can be set to be about 30 nanometers to 40 nanometers. The output of this operation is a function of the binary value, in ZMTo represent the reverse lithography engine for the mth training pattern to obtain the target pattern.
S0103: simulation of class intensity function of training mask using known perceptual neural network
Figure BDA0001485678130000054
The structure of the perceptive neural network is shown in fig. 3, which is basically a hidden perceptron neural network. The number of nodes in the hidden layer is denoted by R. If the number of nodes in the hidden layer is too small, the neural network model has insufficient ability in learning the behavior of the reverse photoetching calculation, so that the accuracy of the neural network model is insufficient, and if the number of nodes in the hidden layer is too large, the neural network model has the possibility of overfitting, so that the neural network model is unstable. Therefore, the number of nodes, R, in the hidden layer will be determined through training experiments. The number of nodes in the hidden layer is estimated to be less than 10. Due to the high information-containing capability of the intrinsic image signal, which can be seen from the rapid drop of the eigenvalues of the TCC matrix, the number of elements of the estimated input eigenvector should be between 10 and 15, whereas the number of elements of the input eigenvector ranges from 50 to several hundred based on purely geometrically described eigenvectors.
The mathematical calculation relationship is as follows:
Figure BDA0001485678130000061
Figure BDA0001485678130000062
Figure BDA0001485678130000063
Figure BDA0001485678130000064
wherein, wi,j、ωv、pj0、q0For the parameters of the known perceptual neural network model, Si is the intrinsic imaging signal value at the grid points of the training reticle,
Figure BDA0001485678130000065
Figure BDA0001485678130000066
Ki polygonmask Filter for ith polygon, Ki VedgeMasking three-dimensional filters for the ith vertical edge, Ki HedgeAs a masked three-dimensional filter for the ith horizontal edge, Ki CornerThe mask three-dimensional filter for the ith corner.
S0104: training the perception neural network by using a class intensity function and a target pattern to obtain optimal model parameters including a cutting threshold value, and obtaining a neural network model by using the optimal model parameters;
wherein, the cost function for training the neural network model is as follows:
Figure BDA0001485678130000067
wherein, wi,j’,ωv’,pj0’,q0' is the model parameter, mu, of a linear neural network modelm mainIs the weight of the mth training pattern of the main pattern; mu.sm assistIs the weight of the mth training pattern of the auxiliary pattern, ZmThe test pattern is a target pattern corresponding to the mth test pattern on the training photomask. The trained model needs to use another set of test patterns to verify the generality of the model.
S02: and (3) realizing optical proximity correction by utilizing the trained neural network model:
s0201: obtaining the intensity-like function of the photomask to be processed by using the obtained neural network model
Figure BDA00014856781300000614
Wherein the content of the first and second substances,
Figure BDA0001485678130000068
Figure BDA0001485678130000069
Figure BDA00014856781300000610
Figure BDA00014856781300000611
wherein, wi,j’,ωv’,pj0’,q0Model parameters, S, for a linear neural network modeliFor the intrinsic imaging signal values at the grid points of the reticle to be processed, wherein
Figure BDA00014856781300000612
Figure BDA00014856781300000613
Ki polygonMask Filter for ith polygon, Ki VedgeMasking three-dimensional filters for the ith vertical edge, Ki HedgeAs a masked three-dimensional filter for the ith horizontal edge, Ki CornerThe mask three-dimensional filter for the ith corner.
S0202: cutting the above class intensity function with a cutting threshold
Figure BDA0001485678130000071
Generating a photomask to be processed;
Figure BDA0001485678130000072
Figure BDA0001485678130000073
s03: and photoetching by using the photomask containing the target pattern as a mask plate after optical proximity correction.
Example 2
When the neural network model is a quadratic neural network model, as shown in FIG. 4, the lithographic mask plane is divided into small cells, assuming that t (i, j) and t (m, n) are the light fields behind the small cell (i, j) and small cell (m, n) lithographic mask. Since the reaction of chemical resists is light intensity, not the field itself, we can assume that the target pattern calculated from reverse lithography can be determined by cutting a continuous intensity-like function
Figure BDA0001485678130000074
The resulting profile. . This function depends only on all the pair values { t (i, j), t (m, n) }, but this function itself is unknown:
Figure BDA0001485678130000075
the pair value { t (i, j), t (m, n) } is defined around the point (x, y). Since all lithographic mask types currently in use have only 0 phase or 180 phase, { t (i, j), t (m, n) } is a real number. One way to explore this unknown function is to use a multi-layer perceptron neural network model with the set { t (1,1) × t (1,1), t (1,1) × t (1,2), … … t (N, N) × t (N, N) } as feature vectors.
The optical proximity correction method based on the neural network model provided by the embodiment comprises the following steps:
s01: training a quadratic neural network model, which comprises the following specific steps:
s0101: selecting M test patterns on a training light cover;
s0102: and respectively obtaining target patterns corresponding to the M test patterns by adopting a reverse photoetching method.
S0103: simulation of class intensity function using known perceptual neural networks
Figure BDA0001485678130000076
Wherein: the structure of the known perceptive neural network is shown in fig. 5, and the specific expression is as follows:
Figure BDA0001485678130000077
Z=∑wkyk
Figure BDA0001485678130000078
Figure BDA0001485678130000079
Figure BDA00014856781300000710
Figure BDA00014856781300000711
due to the reciprocal principle of light interaction, the matrix must be symmetrical, and therefore,
Figure BDA0001485678130000081
if we will be
Figure BDA0001485678130000082
And
Figure BDA0001485678130000083
from a one-dimensional vector array to a two-dimensional matrix, equation (11) is effectively a two-dimensional convolution operation.
{uikAnd { V }i,kAre the eigenvalues and eigenvectors of the matrix.
S0104: and training the perception neural network by using the class intensity function and the target pattern to obtain optimal model parameters including a cutting threshold value, and obtaining a neural network model by using the optimal model parameters.
Figure BDA0001485678130000084
Wherein, wk’、pk0’、z0' model parameters for quadratic neural network model, { V1.K’,V2.K’,……VN.K' } is a convolution kernel set optimized by a quadratic neural network model, { u }1.K’,u2.K’,……uN.K' } is the weight value, mu, corresponding to each convolution kernel in the quadratic neural network modelm mainIs the weight of the mth training pattern of the main pattern; mu.sm assistIs the weight of the mth training pattern of the auxiliary pattern, ZmThe test pattern is a target pattern corresponding to the mth test pattern on the training photomask. The equation can use equation constraint as a cost item to construct a new cost function, so that the problem can be converted into a nonlinear unconstrained optimization problem. Gradient methods can be used to solve such optimization problems.
Wherein, in the training process, the following constraints are added to the model parameters:
Figure BDA0001485678130000085
the convolution kernel is required to be orthonormal;
Figure BDA0001485678130000086
the convolution kernel is required to be orthonormal;
ui,k’>0;
iui,k’=1,k=1,2,…R。
s02: and (3) realizing optical proximity correction by utilizing the trained neural network model:
s0201: obtaining the intensity-like function of the photomask to be processed by using the obtained neural network model
Figure BDA0001485678130000087
The intensity-like function in this embodiment is the same for different reticles to be processed. Using quadratic neural network model to obtain the intensity-like function of the photomask to be processed
Figure BDA0001485678130000088
The specific method adopts the following algorithm:
Figure BDA0001485678130000089
Figure BDA00014856781300000810
Figure BDA00014856781300000811
z=∑wk’ykl
Figure BDA0001485678130000091
wherein wk’、pk0’、z0' model parameters for quadratic neural network model, { V1.K’,V2.K’,……VN.K' } is a convolution kernel set optimized by a quadratic neural network model, { u }1.K’,u2.K’,……uN.K'} is the weight corresponding to each convolution kernel in the quadratic neural network model, and t' is Vi,k' light field corresponding thereto, in particular Vi,k' near field of light behind mask.
S0202: cutting the above class intensity function with a cutting threshold
Figure BDA0001485678130000092
Generating a photomask containing a target pattern;
s03: and photoetching by using the photomask containing the target pattern as a mask plate after optical proximity correction.
The above description is only a preferred embodiment of the present invention, and the embodiment is not intended to limit the scope of the present invention, so that all equivalent structural changes made by using the contents of the specification and the drawings of the present invention should be included in the scope of the appended claims.

Claims (10)

1. An optical proximity correction method based on a neural network model is characterized by comprising the following steps:
s01: training a neural network model, specifically comprising the following steps:
s0101: selecting M test patterns on a training light cover;
s0102: respectively obtaining target patterns corresponding to the M test patterns by adopting a reverse photoetching method;
s0103: simulation of class intensity function of training mask using known perceptual neural network
Figure FDA0002749927270000011
The intensity-like function is derived from an optical image intensity function of the target pattern by a fixed non-linear mapping mechanism;
s0104: training the perception neural network by using a class intensity function and a target pattern of a training photomask to obtain optimal model parameters including a cutting threshold value, and obtaining a neural network model by using the optimal model parameters;
s02: and (3) realizing optical proximity correction by utilizing the trained neural network model:
s0201: obtaining the intensity-like function of the photomask to be processed by using the obtained neural network model
Figure FDA0002749927270000012
S0202: cutting the above class intensity function with a cutting threshold
Figure FDA0002749927270000013
Generating a photomask containing a target pattern;
s03: and photoetching by using the photomask containing the target pattern as a mask plate after optical proximity correction.
2. The optical proximity correction method based on the neural network model according to claim 1, wherein the neural network model is a linear neural network model, and the perceptive neural network is a parameter-determined hidden layer perceptive neural network.
3. The method of claim 2, wherein in step S0103, the known perceptual neural network is used to simulate the intensity-like function of the training mask
Figure FDA0002749927270000014
The specific method comprises the following steps:
Figure FDA0002749927270000015
wherein, wi,j、ωv、pj0、q0Is a parameter of the hidden layer perception neural network, SiFor training intrinsic imaging signal values on reticle grid points, R is the number of nodes in the hidden layer, where
Figure FDA0002749927270000016
Ki polygonMask Filter for ith polygon, Ki VedgeMasking three-dimensional filters for the ith vertical edge, Ki HedgeAs a masked three-dimensional filter for the ith horizontal edge, Ki CornerThe mask three-dimensional filter for the ith corner.
4. The method of claim 3, wherein the cost function for training the perceptive neural network in step S0104 is:
Figure FDA0002749927270000017
wherein, wi,j’,ωv’,pj0’,q0' is the model parameter, mu, of a linear neural network modelm mainIs the weight of the mth training pattern of the main pattern; mu.sm assistIs the weight of the mth training pattern of the auxiliary pattern, ZmThe test pattern is a target pattern corresponding to the mth test pattern on the training photomask.
5. The method of claim 4, wherein the step S0201 of using the neural network model to obtain the intensity-like function of the photomask to be processed is performed
Figure FDA0002749927270000021
The specific method comprises the following steps:
Figure FDA0002749927270000022
Figure FDA0002749927270000023
wherein, wi,j’,ωv’,pj0’,q0' is a model parameter of a linear neural network model, Si is an intrinsic imaging signal value on a grid point of a photomask to be processed, wherein
Figure FDA0002749927270000024
Ki polygonMask Filter for ith polygon, Ki VedgeMasking three-dimensional filters for the ith vertical edge, Ki HedgeAs a masked three-dimensional filter for the ith horizontal edge, Ki CornerThe mask three-dimensional filter for the ith corner.
6. The optical proximity correction method according to claim 1, wherein the neural network model is a quadratic neural network model, and the perceptive neural network is a multi-layer perceptron neural network with parameter determination.
7. The method of claim 6, wherein in step S0103, the known perceptual neural network is used to simulate the intensity-like function of the training mask
Figure FDA0002749927270000025
The specific method comprises the following steps:
Figure FDA0002749927270000026
wherein u isi,k、wk、pk0、z0Is a parameter of the multi-layer perceptive neural network, Vi,kFor training the ith convolution kernel of the kth node at the reticle grid point, t is Vi,kTo the corresponding light field.
8. The method of claim 7, wherein the cost function for training the perceptive neural network in step S0104 is:
Figure FDA0002749927270000027
wherein, wk’、pk0’、z0’Model parameters for a quadratic neural network model, { V1.K’,V2.K’,……VN.K’The method is a convolution kernel set optimized by a quadratic neural network model, u1.K’,u2.K’,……uN.K’Is the weight value, mu, corresponding to each convolution kernel in the quadratic neural network modelm mainIs the weight of the mth training pattern of the main pattern; mu.sm assistIs the weight of the mth training pattern of the auxiliary pattern,Zmthe test pattern is a target pattern corresponding to the mth test pattern on the training photomask.
9. The method of claim 8, wherein the following constraints are added to the model parameters during the training of the perceptual neural network,
Figure FDA0002749927270000031
Figure FDA0002749927270000032
ui,k’>0;
iui,k’=1,k=1,2,…R。
10. the method of claim 8, wherein the step S0201 of using the neural network model to obtain the intensity-like function of the photomask to be processed is performed
Figure FDA0002749927270000033
The specific method comprises the following steps:
Figure FDA0002749927270000034
Figure FDA0002749927270000035
wherein, wk’、pk0’、z0’Model parameters for a quadratic neural network model, { V1.K’,V2.K’,……VN.K’The method is a convolution kernel set optimized by a quadratic neural network model, u1.K’,u2.K’,……uN.K’The weights corresponding to each convolution kernel in the quadratic neural network model are denoted by t' as Vi,k’To the corresponding light field.
CN201711216779.0A 2017-11-28 2017-11-28 Optical proximity correction method based on neural network model Active CN107908071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711216779.0A CN107908071B (en) 2017-11-28 2017-11-28 Optical proximity correction method based on neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711216779.0A CN107908071B (en) 2017-11-28 2017-11-28 Optical proximity correction method based on neural network model

Publications (2)

Publication Number Publication Date
CN107908071A CN107908071A (en) 2018-04-13
CN107908071B true CN107908071B (en) 2021-01-29

Family

ID=61848973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711216779.0A Active CN107908071B (en) 2017-11-28 2017-11-28 Optical proximity correction method based on neural network model

Country Status (1)

Country Link
CN (1) CN107908071B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875141B (en) * 2018-05-24 2022-08-19 上海集成电路研发中心有限公司 Method for determining chip full-mask focusing parameters based on neural network model
CN108665060B (en) * 2018-06-12 2022-04-01 上海集成电路研发中心有限公司 Integrated neural network for computational lithography
WO2020169303A1 (en) * 2019-02-21 2020-08-27 Asml Netherlands B.V. Method for training machine learning model to determine optical proximity correction for mask
CN113614638A (en) * 2019-03-21 2021-11-05 Asml荷兰有限公司 Training method for machine learning assisted optical proximity effect error correction
CN111310407A (en) * 2020-02-10 2020-06-19 上海集成电路研发中心有限公司 Method for designing optimal feature vector of reverse photoetching based on machine learning
CN111538213B (en) * 2020-04-27 2021-04-27 湖南大学 Electron beam proximity effect correction method based on neural network
CN113759657A (en) * 2020-06-03 2021-12-07 中芯国际集成电路制造(上海)有限公司 Optical proximity correction method
CN111985611A (en) * 2020-07-21 2020-11-24 上海集成电路研发中心有限公司 Computing method based on physical characteristic diagram and DCNN machine learning reverse photoetching solution
CN112485976B (en) * 2020-12-11 2022-11-01 上海集成电路装备材料产业创新中心有限公司 Method for determining optical proximity correction photoetching target pattern based on reverse etching model
CN112578646B (en) * 2020-12-11 2022-10-14 上海集成电路装备材料产业创新中心有限公司 Offline photoetching process stability control method based on image
CN114200768B (en) * 2021-12-23 2023-05-26 中国科学院光电技术研究所 Super-resolution photoetching reverse optical proximity effect correction method based on level set algorithm
CN116802556A (en) * 2022-01-19 2023-09-22 华为技术有限公司 Method, apparatus, device, medium and program product for determining wafer pattern dimensions
CN114815496B (en) * 2022-04-08 2023-07-21 中国科学院光电技术研究所 Pixelated optical proximity effect correction method and system applied to super-resolution lithography
CN115509082B (en) * 2022-11-09 2023-04-07 华芯程(杭州)科技有限公司 Training method and device of optical proximity correction model and optical proximity correction method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107076681A (en) * 2014-10-14 2017-08-18 科磊股份有限公司 For responding measurement based on image and the signal for the overlay measurement for scattering art

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6466314B1 (en) * 1998-09-17 2002-10-15 Applied Materials, Inc. Reticle design inspection system
US7398508B2 (en) * 2003-11-05 2008-07-08 Asml Masktooks B.V. Eigen decomposition based OPC model
CN101320400B (en) * 2008-07-16 2010-04-21 桂林电子科技大学 Optimization design method of micro-electron packaging device based on artificial neural network
WO2017171891A1 (en) * 2016-04-02 2017-10-05 Intel Corporation Systems, methods, and apparatuses for modeling reticle compensation for post lithography processing using machine learning algorithms
CN106777829B (en) * 2017-02-06 2019-04-12 深圳晶源信息技术有限公司 A kind of optimization method and computer-readable storage medium of integrated circuit mask design

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107076681A (en) * 2014-10-14 2017-08-18 科磊股份有限公司 For responding measurement based on image and the signal for the overlay measurement for scattering art

Also Published As

Publication number Publication date
CN107908071A (en) 2018-04-13

Similar Documents

Publication Publication Date Title
CN107908071B (en) Optical proximity correction method based on neural network model
CN110678961B (en) Simulating near field images in optical lithography
US8732625B2 (en) Methods for performing model-based lithography guided layout design
TWI398721B (en) Systems, masks, and methods for photolithography
Jia et al. Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis
US8849008B2 (en) Determining calibration parameters for a lithographic process
US6918104B2 (en) Dissection of printed edges from a fabrication layout for correcting proximity effects
US6792590B1 (en) Dissection of edges with projection points in a fabrication layout for correcting proximity effects
US6539521B1 (en) Dissection of corners in a fabrication layout for correcting proximity effects
US11061318B2 (en) Lithography model calibration
JP5078543B2 (en) Local coloring for hierarchical OPC
KR20050043713A (en) Eigen decomposition based opc model
US7328424B2 (en) Method for determining a matrix of transmission cross coefficients in an optical proximity correction of mask layouts
US20070011648A1 (en) Fast systems and methods for calculating electromagnetic fields near photomasks
CN111310407A (en) Method for designing optimal feature vector of reverse photoetching based on machine learning
CN110426914A (en) A kind of modification method and electronic equipment of Sub-resolution assist features
US8073288B2 (en) Rendering a mask using coarse mask representation
CN110244523B (en) Integrated photoetching method and photoetching system
CN117313640A (en) Training method, device, equipment and storage medium for lithography mask generation model
US10120963B1 (en) Figurative models calibrated to correct errors in process models
Choy et al. A robust computational algorithm for inverse photomask synthesis in optical projection lithography
US7944545B2 (en) High contrast lithographic masks
US20130125070A1 (en) OPC Checking and Classification
US20240045321A1 (en) Optical proximity correction method using neural jacobian matrix and method of manufacturing mask by using the optical proximity correction method
Jia et al. Robustness enhancement in optical lithography: from pixelated mask optimization to pixelated source-mask optimization

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant