CN113238460A - Deep learning-based optical proximity correction method for extreme ultraviolet - Google Patents


Info

Publication number
CN113238460A
Authority
CN
China
Prior art keywords
mask
module
inversion module
deep learning
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110412412.6A
Other languages
Chinese (zh)
Other versions
CN113238460B (English)
Inventor
肖理业 (Xiao Liye)
赵乐一 (Zhao Leyi)
易俊男 (Yi Junnan)
Other inventors have requested that their names not be disclosed
Current Assignee
Xiamen University
Original Assignee
Xiamen University
Priority date
Filing date
Publication date
Application filed by Xiamen University
Priority to CN202110412412.6A
Publication of CN113238460A
Application granted
Publication of CN113238460B
Expired - Fee Related
Anticipated expiration

Classifications

    • G03F 7/70441 — Optical proximity correction [OPC]
    • G03F 7/705 — Modelling or simulating from physical phenomena up to complete wafer processes or whole workflow in wafer productions
    • G03F 7/70508 — Data handling in all parts of the microlithographic apparatus, e.g. handling pattern data for addressable masks or data transfer to or from different components within the exposure apparatus
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods


Abstract

The invention provides a deep-learning-based optical proximity correction (OPC) method for extreme ultraviolet (EUV) lithography, comprising a forward module and an inversion module. The forward module rapidly and accurately maps a mask to the near field on a plane above the mask stack, and the inversion module rapidly and accurately maps a target image to a corrected mask. Compared with traditional full-wave simulation, the forward module greatly improves computational efficiency, in both running time and required memory; and, unlike time-consuming iterative OPC methods, the trained inversion module produces the corrected mask directly from the input target image.

Description

Deep learning-based optical proximity correction method for extreme ultraviolet
Technical Field
The invention relates to the field of electromagnetic inversion methods, in particular to an Optical Proximity Correction (OPC) method for Extreme Ultraviolet (EUV) based on deep learning.
Background
In recent years, EUV lithography at an illumination wavelength of 13.5 nm has received increasing attention as the most advanced lithography technique after deep-ultraviolet lithography, since it supports integrated-circuit critical dimensions below 22 nm. However, optical proximity effects and mask shadowing effects inevitably degrade the imaging performance of EUV lithography systems.
OPC has attracted increasing attention because it improves imaging uniformity in lithography systems. OPC compensates for imaging distortion by pre-distorting the mask so that the printed image converges to the target pattern. In general, OPC methods are classified into rule-based OPC and model-based OPC.
Rule-based OPC is simple and heuristic to implement, but cannot handle technology nodes below 90 nm. Model-based OPC searches for a globally optimal solution based on a physical or mathematical model of the OPC framework, and achieves smaller lithographic resolution limits than rule-based OPC.
Model-based OPC can be further classified into edge-based OPC (EBOPC) and pixel-based OPC (PBOPC). In working principle, EBOPC breaks the edges of the mask into several segments from which the optimal solution is derived, while PBOPC breaks the mask into pixels and optimizes their binary values. Compared with EBOPC, PBOPC has a higher degree of optimization freedom and can handle nodes smaller than 45 nm. A series of PBOPC methods has therefore been developed to follow the process variations of advanced lithography systems. However, model-based OPC consumes a large amount of CPU time over many iterations, owing to repeated calls to the lithography simulation and mask-imaging correction procedures. Machine-learning-based methods are generally more computationally efficient than traditional computational methods. Their main idea is to establish a mapping relationship by training a neural network model grounded in the physics of optical diffraction; once trained, the model immediately produces the corresponding output for any new input. The document "Machine Learning (ML)-based discrete optionals" describes basic machine-learning OPC, including support vector machines and neural networks; through a discussion of learning parameters and the preparation of a compact learning data set, a guided technique is proposed that avoids the over-fitting problem. In "Litho-Aware Machine Learning for Hotspot Detection", an OPC based on a neural-network classifier, used as a mask-bias model, is proposed; compared with the most advanced regression-based machine-learning OPC methods, it reduces the mask-bias prediction error and the training time by 29% and 80%, respectively.
The literature "Optical proximity correction with hierarchical Bayes model" proposes an OPC regression model based on a hierarchical Bayesian model, used to reduce the number of iterations in the OPC process. To reduce the running time and mask complexity of PBOPC, an OPC model based on machine learning is proposed in "A fast and manufacture-friendly optical proximity correction based on machine learning". The proposed model was tested on the metal and polysilicon layers of a 45 nm technology node. Simulation results show that the model effectively reduces the running time of PBOPC software and improves mask manufacturability, but at the cost of reduced imaging fidelity.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a deep-learning-based optical proximity correction method for extreme ultraviolet, offering a new perspective for EUV lithography, in particular lithography at small critical dimensions. Compared with other OPC methods, the main contributions of the proposed OPC method are: first, the modeling efficiency is greatly improved; second, very large-scale patterns can be generated at little computational cost (in both time and required computer memory); third, for critical dimensions of 3 nm or less, the forward module quickly and accurately outputs the corresponding near-field distribution, and the proposed inversion module quickly generates a corrected mask that improves the quality of imaging on the wafer.
The invention adopts the following technical scheme:
the extreme ultraviolet EUV structure is composed of a mask group and a multilayer Bragg reflector group, and 6-degree incident plane waves are selected as EUV rays. The mask stack includes a mask pattern over the multilayer bragg reflector stack. The multilayer Bragg reflector consists of 40 double-layer Si-Mo layers and can effectively reflect the energy of 13.5nm plane waves at 6-degree incidence. At the same time, the masked areas covered by the absorber absorb most of the EUV light, while the uncovered areas reflect most of the EUV light into the optical projector. The optical projector is used for projecting the layout pattern of the mask onto the wafer. After the photoresist is developed, the layout pattern will be printed on the wafer.
An optical proximity correction method for extreme ultraviolet based on deep learning specifically comprises the following steps:
The forward-module training samples were generated using the Wavenology EM software developed by Wave Computation Technologies, Inc., which employs the spectral-element spectral-integration (SESI) method well suited to this problem. The forward module has 320 training samples in total, divided into eight groups of 40 samples each; 40 test samples are used for validation. The mask size of each training sample is 128 nm × 128 nm, discretized into 256 × 256 pixels of 0.5 nm × 0.5 nm each.
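As a concrete illustration of the discretization described above (the helper name and the rectangle geometry are hypothetical, not from the patent), a 128 nm × 128 nm mask sampled at 0.5 nm per pixel yields a 256 × 256 binary image:

```python
import numpy as np

def rasterize_rect(mask_nm=128.0, pixel_nm=0.5, rect=(32.0, 32.0, 96.0, 96.0)):
    """Rasterize an axis-aligned rectangular absorber (x0, y0, x1, y1 in nm)
    onto a binary pixel grid, as used for the forward-module samples."""
    n = int(round(mask_nm / pixel_nm))                 # 256 pixels per side
    img = np.zeros((n, n), dtype=np.uint8)
    x0, y0, x1, y1 = (int(round(v / pixel_nm)) for v in rect)
    img[y0:y1, x0:x1] = 1                              # absorber-covered region
    return img

img = rasterize_rect()                                 # 64 nm x 64 nm absorber
```

Any polygonal mask pattern can be binarized the same way; the rectangle is just the simplest sample shape.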
The forward module is designed as follows: its input is the mask pattern, i.e. a binary image, and its output is the near field on a plane above the mask stack; the near field on the plane 1 nm above the mask stack is set as the output. The forward module comprises two connected U-Nets, each trained for 2000 iterations, with learning rates of 1×10⁻⁴ and 5×10⁻⁵, respectively. Mean squared error (MSE) is defined as the loss function of both U-Nets.
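The cascade-plus-MSE arrangement can be sketched as follows; the two `unet_*` functions are trivial placeholders standing in for the trained networks (the patent does not disclose their layer configurations), while the iteration counts and learning rates are the ones stated above:

```python
import numpy as np

ITERATIONS = 2000                  # per U-Net, as stated in the text
LEARNING_RATES = (1e-4, 5e-5)      # first and second U-Net, respectively

def mse(pred, target):
    """Mean-squared-error loss used for both U-Nets in the forward module."""
    return float(np.mean((pred - target) ** 2))

# Hypothetical stand-ins for the two cascaded U-Nets: the first maps the
# binary mask toward a coarse field estimate, the second refines it.
def unet_coarse(mask):
    return mask.astype(np.float64) * 0.5   # placeholder, not a real network

def unet_refine(field):
    return field * 2.0                     # placeholder, not a real network

mask = np.zeros((256, 256))
mask[64:192, 64:192] = 1.0
near_field = unet_refine(unet_coarse(mask))   # cascade: mask -> near field
loss = mse(near_field, mask)                  # driven toward 0 during training
```

During real training each U-Net's weights would be updated for `ITERATIONS` steps with its own learning rate; here the placeholders simply cancel, so the loss is exactly zero.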
Generate the inversion-module training samples. Before the training process, the key to constructing the inversion module is to build the required training data set, and a two-step OPC model is established to generate these samples: a side-length threshold is set, each side length of a sample is compared with the threshold, and transformations that enlarge the corresponding region and add protrusions or recesses to the boundary are applied to the sample to produce training samples. Samples with small mask error are selected to construct the inversion module's training data set: 400 samples in total for training and 50 additional test samples for validation. The mask size of each inversion-module training sample is also 128 nm × 128 nm, discretized into 256 × 256 pixels.
Design the inversion module: its input is the ideal image desired on the wafer, and its output is the corrected mask. The inversion module is also built on U-Net and converts the input single-channel binary image into the output single-channel binary image. Four samples from the test set that never appeared in the training data set are used to evaluate the inversion module's performance; each has a mask size of 128 nm × 128 nm, and the masks output by the inversion module are fed into the forward module to compute the corresponding field distributions.
Establishing a model combining a forward modeling module and an inversion module, which specifically comprises the following steps:
the inversion module inputs the expected image on the wafer, and the corresponding output is the corrected mask; and then, the obtained mask is used as the input of the forward modeling module after training is finished, and the corresponding image on the wafer can be obtained.
As can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
First, the forward module was tested with samples of different types and scales. Comparison with full-wave simulation results shows that the forward module accurately maps mask patterns of different critical dimensions, as well as patterns of different types, to their corresponding near fields; meanwhile, compared with full-wave simulation (e.g. SESI), the forward module greatly reduces the required CPU time and memory, improving computational efficiency.
Second, samples of different scales and types were evaluated with the designed inversion module. Test results on samples with different critical dimensions show that, unlike traditional iterative methods, the proposed inversion module outputs the corrected mask directly with high computational efficiency, and the corrected mask enhances imaging uniformity, especially for samples with critical dimensions below 3 nm.
The method provided by the invention, including the forward modeling module and the inversion module, can be used as a reliable and effective OPC tool, especially in an EUV system with a small critical dimension.
The above and other objects, advantages and features of the present invention will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
FIG. 1 is a structural diagram of EUV in an embodiment of the present invention;
FIG. 2 is a flow chart of the deep learning based optical proximity correction method for EUV in accordance with the present invention;
FIG. 3 is a schematic diagram of the OPC model used to generate training samples for the inversion module according to the present invention; wherein (b), (e) and (h) show the first step of enlarging the original domains shown in (a), (d) and (g), respectively; (c) shows treatments of amplitudes h1, h2, h3 and h4 applied at the midpoints of the four sides of (b); (f) shows treatments of amplitudes h5, h6, h7 and h8 at the midpoints of the four sides of (e); and (i) shows outward protrusions of amplitudes h9, h10, h11 and h12 at the midpoints of the four sides of (h).
FIG. 4 is a block diagram of an inversion module according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the convergence of the loss function during training;
FIG. 6 shows tests of four samples of different scales and types on the proposed inversion module; the first column shows the targets (original masks); the second column shows the corrected masks output by the proposed inversion module; the third column shows the images obtained from the forward module with the corrected masks; the fourth column shows the SESI images with the corrected masks; and the fifth column shows the SESI images with the original masks;
FIG. 7 shows the proposed inversion module evaluated with a very large mask of size 6400 nm × 6400 nm; wherein (a) shows the target pattern and (b) shows the corresponding predicted mask produced by the inversion module.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to FIG. 1, the extreme ultraviolet (EUV) structure of the invention consists of a mask stack and a multilayer Bragg reflector stack, and a plane wave at 6° incidence is selected as the EUV illumination. The mask stack comprises a mask pattern over the multilayer Bragg reflector stack. The multilayer Bragg reflector consists of 40 Si/Mo bilayers and effectively reflects the energy of a 13.5 nm plane wave at 6° incidence. The mask areas covered by the absorber absorb most of the EUV light, while the uncovered areas reflect most of it into the optical projector, which projects the layout pattern of the mask onto the wafer. After the photoresist is developed, the layout pattern is printed on the wafer.
Referring to fig. 2, the optical proximity correction method for extreme ultraviolet based on deep learning of the present invention specifically includes:
s101, establishing a deep learning method-based model of the OPC method for EUV.
First, the forward module is designed. Its input is the mask pattern, i.e. a binary image, and its output is the near field on a plane above the mask stack; the near field on the plane 1 nm above the mask stack is set as the output. The two connected U-Nets are each trained for 2000 iterations, with learning rates of 1×10⁻⁴ and 5×10⁻⁵, respectively. MSE is defined as the loss function of both U-Nets.
Secondly, the inversion module is designed. Its input is the ideal image on the wafer, and the corresponding output is the corrected mask. The inversion module is also built on U-Net; its structure is shown in FIG. 4, where the gray vertical boxes represent multi-channel images, their height and width indicating the image size and number of channels, respectively, and the operations are indicated by arrows of different colors and directions. On this basis, the input single-channel binary image is converted into the output single-channel binary image. Four samples from the test set that never appeared in the training data set are used to evaluate the inversion module's performance; each has a mask size of 128 nm × 128 nm, and the masks output by the inversion module are fed into the forward module to compute the corresponding field distributions.
Finally, establishing a model combining the forward modeling module and the inversion module, which specifically comprises the following steps:
the inversion module inputs are the desired imaging on the wafer and the corresponding outputs are the corrected mask. And then, the obtained mask is used as the input of the forward modeling module after training is finished, and the corresponding image on the wafer can be obtained.
And S102, establishing a data set of the forward modeling module and the inversion module.
The forward-module training samples were generated using the Wavenology EM software developed by Wave Computation Technologies, Inc. The forward module has 320 training samples in total, divided into eight groups of 40 samples each; 40 test samples were used for validation. The mask size of each training sample is 128 nm × 128 nm, discretized into 256 × 256 pixels of 0.5 nm × 0.5 nm each.
Generate the inversion-module training samples: before the training process, the key to constructing the inversion module is to build the required training data set, and a two-step OPC model is established to generate the samples. As shown in FIGS. 3(a)-3(c), taking a rectangle as an example, the first step enlarges the domain of the original rectangle by m1 and m2 in the two directions. Then, recesses or protrusions of amplitudes h1, h2, h3 and h4 are formed at the midpoints of the four sides. If the width of a side is less than the threshold w, the corresponding adjacent side is protruded; w is the threshold that determines whether a side is recessed or protruded, and the invention sets w = 3 nm for rectangles. As shown in FIG. 3(d), if L3 is less than w, the region is first enlarged by m3 and m4 in the two directions, and protrusions h5 and h7 then appear on the corresponding adjacent sides, as shown in FIG. 3(f). Conversely, if the length of a side is greater than w, the corresponding side is recessed, as shown in FIG. 3(c). As shown in FIG. 3(g), if L5 is greater than w, the region is enlarged by m5 and m6 in the two directions, and recesses h9 and h11 are then formed on the corresponding adjacent sides, as shown in FIG. 3(i). Finally, samples with small mask error are selected to construct the inversion module's training data set: 400 samples in total for training and 50 additional test samples for validation. The mask size of each inversion-module training sample is also 128 nm × 128 nm, discretized into 256 × 256 pixels.
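A minimal sketch of this two-step sample generator, assuming a single rectangle and illustrative amplitudes (only the w = 3 nm threshold rule and the enlarge-then-perturb order come from the text; pixel sizes, amplitudes, and helper names are hypothetical):

```python
import numpy as np

PIXEL_NM = 0.5
W_NM = 3.0                      # side-length threshold w from the patent

def perturb_rectangle(img, y0, x0, y1, x1, m=4, h=2):
    """Step 1: enlarge the rectangle by m pixels in each direction.
    Step 2: at the top side's midpoint, add a recess if the side is longer
    than w (else a protrusion on the adjacent region), with amplitude h."""
    out = img.copy()
    out[y0 - m:y1 + m, x0 - m:x1 + m] = 1          # step 1: enlarge domain
    side_nm = (x1 - x0) * PIXEL_NM
    cx = (x0 + x1) // 2                            # midpoint of the top side
    if side_nm > W_NM:                             # long side -> recess
        out[y0 - m:y0 - m + h, cx - h:cx + h] = 0  # notch cut into the edge
    else:                                          # short side -> protrusion
        out[y0 - m - h:y0 - m, cx - h:cx + h] = 1  # bump added outside edge
    return out

img = np.zeros((256, 256), dtype=np.uint8)
img[100:156, 100:156] = 1                          # 28 nm x 28 nm rectangle
sample = perturb_rectangle(img, 100, 100, 156, 156)
```

A full generator would apply independent amplitudes h1-h4 at all four midpoints and sweep them over a range; only one side is perturbed here to keep the sketch short.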
S103, training and verifying the inversion module by using the established data set of the inversion module; and taking the mask output by the inversion module as the input of the trained forward module to train and test the forward module.
All calculations were performed on a workstation with an Intel i9-10940X 3.30 GHz CPU, 256 GB RAM, and an NVIDIA GeForce RTX 3090 GPU.
As described above, the forward module has 320 training samples in total, divided into eight groups of 40 samples each; 40 test samples were used for validation. The mask size of each training sample is 128 nm × 128 nm, discretized into 256 × 256 pixels of 0.5 nm × 0.5 nm each.
For the inversion module, 400 samples in total were selected for training, and 50 additional test samples were used for validation. FIG. 5 records the convergence of the loss function during training. The inversion-module training samples are also 128 nm × 128 nm, discretized into 256 × 256 pixels.
Specifically, for the inversion module, four samples from the test set that never appeared in the training data set were used to evaluate its performance. The mask size of each sample is 128 nm × 128 nm, and the masks output by the inversion module are fed into the forward module to compute the corresponding field distributions. To describe the deviation between the output image and the original desired image (target), the mask error is defined as

    Misfit = ‖m_p − m_τ‖₂ / ‖m_τ‖₂

where m_p is the matrix of the predicted binary image and m_τ is the matrix of the desired binary image.
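A NumPy sketch of this mask-error computation (the relative-L2 normalization shown is an assumption, since the patent gives the formula only as an image; the example arrays are hypothetical):

```python
import numpy as np

def mask_misfit(m_p, m_t):
    """Relative L2 error between predicted (m_p) and target (m_t) binary
    images -- one plausible reading of the patent's mask-error metric."""
    m_p = np.asarray(m_p, dtype=np.float64)
    m_t = np.asarray(m_t, dtype=np.float64)
    return float(np.linalg.norm(m_p - m_t) / np.linalg.norm(m_t))

target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0                 # 64x64-pixel target pattern
pred = target.copy()
pred[96:100, 96:160] = 0.0                   # 4-row band mispredicted
err = mask_misfit(pred, target)              # sqrt(256)/sqrt(4096) = 0.25
```

Values on the order of 0.06-0.18, as reported in Table 1, correspond to small disagreement regions relative to the pattern size under this normalization.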
As critical dimensions decrease, the imaging performance of the lithography system faces significant challenges. Tests #1-4 (corresponding to the first, second, third and fourth rows of FIG. 6, respectively) have critical dimensions of 4 nm, 7 nm, 4 nm and 2.5 nm, with corresponding electrical sizes of 0.3λ, 0.52λ, 0.3λ and 0.19λ, respectively.
The targets of tests #1-4 are shown in FIG. 6, where tests #1-3 are based on samples (i), (e) and (h) of FIG. 3, respectively, and test #4 differs substantially from the training samples. The mask errors, as defined in the equation above, between the targets of tests #1-4 and the corresponding images on the wafer are listed in Table 1, where Misfit1 is the error between the forward module's image and the corresponding target, and Misfit2 is the error between the image obtained from SESI and the corresponding target.
TABLE 1. Errors between the images and the corresponding targets

          Test #1   Test #2   Test #3   Test #4
Misfit1   0.0632    0.0721    0.0757    0.0948
Misfit2   0.0698    0.0682    0.0762    0.0872
Misfit3   0.1342    0.1698    0.1593    0.1839
For comparison, the original target was used directly as the mask, and the corresponding SESI image is shown in FIG. 6; the mask error between target and image, Misfit3, is listed in Table 1. The mask provided by the proposed inversion module achieves smaller errors than using the original target as the mask. The proposed inversion module was then examined with a very large mask: as shown in FIG. 7, the target size is 6400 nm × 6400 nm, and the predicted output mask is also shown in FIG. 7. The CPU time and required computational memory are recorded in Table 2. Unlike traditional iterative methods, the inversion module outputs the corrected mask directly; although it requires additional GPU computation, it is highly computationally efficient across different targets.
TABLE 2. Run time and required memory of the inversion module

          Size                Run time   Memory
Test #1   128 nm × 128 nm     32 ms      0.42 GB
Test #2   128 nm × 128 nm     32 ms      0.42 GB
Test #3   128 nm × 128 nm     32 ms      0.42 GB
Test #4   128 nm × 128 nm     32 ms      0.42 GB
Test #5   6400 nm × 6400 nm   51 s       9.8 GB
As shown in FIGS. 6-7 and Tables 1-2, tests #1-4 verify that the corrected mask enhances imaging uniformity, and test #5 further verifies the computational efficiency on an oversized mask. The inversion module of the proposed deep-learning-based OPC method for EUV can therefore serve as an effective OPC tool, especially for small critical dimensions.
The test results show that the proposed deep-learning-based OPC method for EUV improves the imaging performance of EUV systems, especially for critical dimensions below 3 nm. First, the forward module was tested with samples of different types and scales; comparison with full-wave simulation shows that it accurately maps mask patterns of different critical dimensions, as well as patterns of different types, to the corresponding near fields, while greatly improving computational efficiency (required CPU time and memory) compared with full-wave simulation (e.g. SESI). The constructed inversion module was evaluated on samples of different scales and types; unlike traditional iterative methods, it outputs the corrected mask directly with high computational efficiency, and the corrected mask enhances imaging uniformity, especially for critical dimensions below 3 nm. The proposed method, comprising the forward and inversion modules, can serve as a reliable and effective OPC tool, especially in EUV systems with small critical dimensions.
The above examples are only used to further illustrate the deep learning-based optical proximity correction method for extreme ultraviolet, but the present invention is not limited to the above examples, and any simple modification, equivalent change and modification made to the above examples according to the technical spirit of the present invention fall within the scope of the technical solution of the present invention.

Claims (7)

1. An optical proximity correction method for extreme ultraviolet based on deep learning, comprising:
designing a forward modeling module, wherein the input of the forward modeling module is a mask, namely a binary image, and the output of the forward modeling module is a far and near field on a plane above a mask stack; the forward modeling module comprises two connected U-nets, and the mean square error is defined as a loss function of the two U-nets;
designing an inversion module, wherein the input of the inversion module is expected imaging on a wafer, and the output of the inversion module is a corrected mask; the inversion module is also constructed on the basis of U-Net, and on the basis, the input single-channel binary image is converted into the output single-channel binary image;
combining the designed forward modeling module with the designed inversion module, specifically, taking the corrected mask output by the inversion module as the input of the trained forward modeling module to obtain the corresponding image on the wafer.
2. The deep learning-based optical proximity correction method for extreme ultraviolet as claimed in claim 1, wherein the near field on the plane 1 nm above the mask stack is set as the output of the forward module; the number of iterations of each of the two connected U-Nets is 2000, and their learning rates are 1×10⁻⁴ and 5×10⁻⁵, respectively.
3. The deep learning-based optical proximity correction method for extreme ultraviolet as claimed in claim 1, wherein the method for establishing the sample set of the forward module comprises:
a sample set of the forward modeling module is generated by the spectral element spectral integration (SESI) method; the mask size of the samples is 128 nm × 128 nm, discretized into 256 × 256 pixels, each pixel measuring 0.5 nm × 0.5 nm.
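The discretization stated in this claim (a 128 nm × 128 nm mask on a 256 × 256 grid, i.e. 0.5 nm × 0.5 nm per pixel) can be checked with a short sketch; the `rect_mask` helper is hypothetical, added only to show how a pattern rasterizes onto this grid:

```python
import numpy as np

# Grid stated in the claim: 128 nm x 128 nm mask, 256 x 256 pixels.
MASK_SIZE_NM = 128.0
GRID = 256
PIXEL_NM = MASK_SIZE_NM / GRID  # 0.5 nm per pixel

def rect_mask(x_nm, y_nm, w_nm, h_nm):
    """Rasterize an axis-aligned rectangular feature (hypothetical
    helper, for illustration) onto the 256 x 256 binary grid."""
    m = np.zeros((GRID, GRID), dtype=np.uint8)
    i0, i1 = int(y_nm / PIXEL_NM), int((y_nm + h_nm) / PIXEL_NM)
    j0, j1 = int(x_nm / PIXEL_NM), int((x_nm + w_nm) / PIXEL_NM)
    m[i0:i1, j0:j1] = 1
    return m

m = rect_mask(32, 32, 64, 64)  # a 64 nm square centered on the mask
print(PIXEL_NM)                # 0.5
print(m.shape)                 # (256, 256)
print(int(m.sum()))            # 128 * 128 = 16384 pixels
```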
4. The deep learning-based optical proximity correction method for extreme ultraviolet as claimed in claim 3, wherein the sample set of the forward module comprises 320 training samples and 40 test samples; the 320 training samples are divided into eight groups of 40 samples each for training.
5. The deep learning-based optical proximity correction method for extreme ultraviolet as claimed in claim 1, wherein the method for establishing the sample set of the inversion module comprises:
establishing an OPC model, wherein the OPC model is used for generating a sample set of an inversion module; the method specifically comprises the following steps:
setting a side-length threshold, comparing the side length of each sample with the threshold, and transforming the sample by enlarging the corresponding areas and adding boundary bulges or depressions, so as to generate the sample set; samples with small mask error are selected to construct the training data set of the inversion module; the mask size of the training samples of the inversion module is 128 nm × 128 nm, discretized into 256 × 256 pixels.
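As a rough, hypothetical illustration of the boundary bulge/depression transformation used to generate inversion-module samples (the claim's side-length-threshold rule and area enlargement are not reproduced here; the helper name and bump geometry are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_edge(mask, bump_w=4, bump_d=3, protrude=True):
    """Add a rectangular bump (bulge) or notch (depression) at a
    random position on the top edge of the foreground region -- a
    simplified stand-in for the claimed boundary transformation."""
    out = mask.copy()
    rows, cols = np.nonzero(mask)
    top = rows.min()
    c = int(rng.integers(cols.min(), cols.max() - bump_w + 1))
    if protrude:
        out[top - bump_d:top, c:c + bump_w] = 1   # bulge outward
    else:
        out[top:top + bump_d, c:c + bump_w] = 0   # depression inward
    return out

# 10 x 16 rectangular feature; each transform changes exactly 12 pixels.
mask = np.zeros((32, 32), dtype=np.uint8)
mask[10:20, 8:24] = 1
bumped = perturb_edge(mask, protrude=True)
notched = perturb_edge(mask, protrude=False)
print(int(mask.sum()), int(bumped.sum()), int(notched.sum()))  # 160 172 148
```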
6. The deep learning-based optical proximity correction method for extreme ultraviolet as claimed in claim 5, wherein the sample set of the inversion module comprises 400 training samples and 50 test samples; the performance of the inversion module is evaluated with four test samples that never appear among the training samples, each with a mask size of 128 nm × 128 nm; the masks output by the inversion module are input into the forward module, and the corresponding field distributions are calculated.
7. The deep learning-based optical proximity correction method for extreme ultraviolet as claimed in claim 5, wherein the mask error is defined as the deviation between the output image and the original desired image, specifically expressed as:
Error = ‖m_p − m_τ‖₂²
wherein m_p is the matrix of the predicted binary image, and m_τ is the matrix of the desired binary image.
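Assuming the mask error is the squared-L2 deviation between the two binary matrices (for binary images this equals the number of mismatched pixels; the exact norm in the original formula image is not reproduced here), a minimal sketch:

```python
import numpy as np

def mask_error(m_p: np.ndarray, m_tau: np.ndarray) -> float:
    """Deviation between the predicted binary image m_p and the
    desired binary image m_tau, taken here as the squared L2 norm
    of their difference (an assumption; for binary images this is
    the count of mismatched pixels)."""
    return float(np.sum((m_p.astype(float) - m_tau.astype(float)) ** 2))

# Two 4x4 binary images differing in exactly 2 pixels.
m_tau = np.zeros((4, 4), dtype=int)
m_p = m_tau.copy()
m_p[0, 0] = 1
m_p[3, 3] = 1
print(mask_error(m_p, m_tau))  # 2.0
```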
CN202110412412.6A 2021-04-16 2021-04-16 Deep learning-based optical proximity correction method for extreme ultraviolet Expired - Fee Related CN113238460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110412412.6A CN113238460B (en) 2021-04-16 2021-04-16 Deep learning-based optical proximity correction method for extreme ultraviolet


Publications (2)

Publication Number Publication Date
CN113238460A true CN113238460A (en) 2021-08-10
CN113238460B CN113238460B (en) 2022-02-11

Family

ID=77128376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110412412.6A Expired - Fee Related CN113238460B (en) 2021-04-16 2021-04-16 Deep learning-based optical proximity correction method for extreme ultraviolet

Country Status (1)

Country Link
CN (1) CN113238460B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090119635A1 (en) * 2006-12-04 2009-05-07 Kazuhiro Takahata Mask pattern correction method for manufacture of semiconductor integrated circuit device
CN105824187A (en) * 2015-01-06 2016-08-03 中芯国际集成电路制造(上海)有限公司 Optical proximity correction method
CN108828896A (en) * 2018-05-31 2018-11-16 中国科学院微电子研究所 Add the application of the method and this method of Sub-resolution assist features
US20190004504A1 (en) * 2017-06-30 2019-01-03 Kla-Tencor Corporation Systems and methods for predicting defects and critical dimension using deep learning in the semiconductor manufacturing process
CN110187609A (en) * 2019-06-05 2019-08-30 北京理工大学 A kind of deep learning method calculating photoetching
CN110517241A (en) * 2019-08-23 2019-11-29 吉林大学第一医院 Method based on the full-automatic stomach fat quantitative analysis of NMR imaging IDEAL-IQ sequence
CN110692017A (en) * 2017-05-26 2020-01-14 Asml荷兰有限公司 Machine learning based assist feature placement
WO2020187578A1 (en) * 2019-03-21 2020-09-24 Asml Netherlands B.V. Training method for machine learning assisted optical proximity error correction
WO2020200993A1 (en) * 2019-04-04 2020-10-08 Asml Netherlands B.V. Method and apparatus for predicting substrate image
CN111886606A (en) * 2018-02-23 2020-11-03 Asml荷兰有限公司 Deep learning for semantic segmentation of patterns
WO2021028228A1 (en) * 2019-08-13 2021-02-18 Asml Netherlands B.V. Method for training machine learning model for improving patterning process


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114326328A (en) * 2022-01-10 2022-04-12 厦门大学 Deep learning-based simulation method for extreme ultraviolet lithography
CN114898169A (en) * 2022-03-10 2022-08-12 武汉大学 Photoetching OPC database establishing method based on deep learning
CN114898169B (en) * 2022-03-10 2024-04-12 武汉大学 Deep learning-based photoetching OPC database establishment method
WO2023241267A1 (en) * 2022-06-14 2023-12-21 腾讯科技(深圳)有限公司 Training method and apparatus for lithographic-mask generation model, and device and storage medium

Also Published As

Publication number Publication date
CN113238460B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN113238460B (en) Deep learning-based optical proximity correction method for extreme ultraviolet
US11561477B2 (en) Training methods for machine learning assisted optical proximity error correction
JP7362744B2 (en) Method and device for selecting layout patterns
US20200279065A1 (en) Modeling of a design in reticle enhancement technology
US7328424B2 (en) Method for determining a matrix of transmission cross coefficients in an optical proximity correction of mask layouts
US11726402B2 (en) Method and system for layout enhancement based on inter-cell correlation
CN107908071A (en) A kind of optical adjacent correction method based on neural network model
US11620425B2 (en) Methods for modeling of a design in reticle enhancement technology
CN108228981B (en) OPC model generation method based on neural network and experimental pattern prediction method
CN110426914B (en) Correction method of sub-resolution auxiliary graph and electronic equipment
CN111310407A (en) Method for designing optimal feature vector of reverse photoetching based on machine learning
US11874597B2 (en) Stochastic optical proximity corrections
CN115981115B (en) Optical proximity correction method, optical proximity correction device, computer equipment and storage medium
Pang et al. Optimization from design rules, source and mask, to full chip with a single computational lithography framework: level-set-methods-based inverse lithography technology (ILT)
CN111985611A (en) Computing method based on physical characteristic diagram and DCNN machine learning reverse photoetching solution
US20220128899A1 (en) Methods and systems to determine shapes for semiconductor or flat panel display fabrication
Ma et al. A fast and manufacture-friendly optical proximity correction based on machine learning
CN114326328A (en) Deep learning-based simulation method for extreme ultraviolet lithography
US11314171B2 (en) Lithography improvement based on defect probability distributions and critical dimension variations
US20230118656A1 (en) Machine learning based model builder and its applications for pattern transferring in semiconductor manufacturing
Xiao et al. A novel optical proximity correction (OPC) system based on deep learning method for the extreme ultraviolet (EUV) lithography
CN114326329A (en) Photoetching mask optimization method based on residual error network
Shi et al. Physics based feature vector design: a critical step towards machine learning based inverse lithography
US10642160B2 (en) Self-aligned quadruple patterning pitch walking solution
US20240086607A1 (en) Modeling of a design in reticle enhancement technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220211