CN117031871A - Optical proximity correction method and system based on cascade network and photolithography method - Google Patents


Info

Publication number
CN117031871A
Authority
CN
China
Prior art keywords
training
network
cascade network
optical proximity
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311144751.6A
Other languages
Chinese (zh)
Inventor
罗先刚
孔维杰
张舒行
尹格
赵泽宇
王长涛
Current Assignee
Institute of Optics and Electronics of CAS
Original Assignee
Institute of Optics and Electronics of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Optics and Electronics of CAS filed Critical Institute of Optics and Electronics of CAS
Priority to CN202311144751.6A
Publication of CN117031871A
Legal status: Pending


Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F1/00 Originals for photomechanical production of textured or patterned surfaces, e.g. masks, photo-masks, reticles; Mask blanks or pellicles therefor; Containers specially adapted therefor; Preparation thereof
    • G03F1/36 Masks having proximity correction features; Preparation thereof, e.g. optical proximity correction [OPC] design processes
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F7/00 Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F7/70 Microphotolithographic exposure; Apparatus therefor
    • G03F7/70425 Imaging strategies, e.g. for increasing throughput or resolution, printing product fields larger than the image field or compensating lithography- or non-lithography errors, e.g. proximity correction, mix-and-match, stitching or double patterning
    • G03F7/70433 Layout for increasing efficiency or for compensating imaging errors, e.g. layout of exposure fields for reducing focus errors; Use of mask features for increasing efficiency or for compensating imaging errors
    • G03F7/70441 Optical proximity correction [OPC]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Preparing Plates And Mask In Photomechanical Process (AREA)

Abstract

The disclosure provides an optical proximity effect correction method based on a cascade network, comprising the following steps: performing optical proximity effect correction on the design pattern to obtain known data pairs, each consisting of design pattern data and corresponding optimized mask pattern data, and splitting the known data pairs into a training set and a verification set; inputting the design pattern data of the training set into a cascade network in batches to obtain training-related predicted pattern data; calculating a training loss value and updating the weight parameters of the cascade network; inputting the design pattern data of the verification set into the updated cascade network to obtain verification-related predicted pattern data; calculating a training loss value and an intersection-over-union (IoU) value; judging whether the training loss value and the IoU value meet corresponding preset conditions, or whether the current number of training rounds reaches the preset number; if so, the cascade network with the currently updated weight parameters is the trained cascade network; and inputting the design pattern to be optimized into the trained cascade network to obtain predicted and corrected mask pattern data.

Description

Optical proximity correction method and system based on cascade network and photolithography method
Technical Field
The present disclosure relates to the field of integrated circuit technology, and in particular, to a cascade network-based optical proximity correction method, system, electronic device, computer-readable storage medium, program product, and photolithography method.
Background
Moore's law in the large-scale integrated circuit industry suggests that the chip industry is rapidly evolving towards smaller nanoscale features. As feature sizes shrink, the distortion of the lithographic result in the photoresist relative to the design mask becomes more serious. This distortion of the lithographic imaging result, caused by the interference and diffraction of light waves, is called the optical proximity effect (Optical Proximity Effect, OPE), and it degrades chip manufacturing yield. The effect can now be compensated by modifying the pattern on the mask, a method known as optical proximity correction (Optical Proximity Correction, OPC).
In addition to traditional projection lithography, surface plasmon lithography (Surface Plasmon Lithography, SPL) has in recent years shown the advantages of an easily manufactured light source, super-resolution imaging capability, and high throughput in the era of large-scale integrated circuits, and is a potential replacement for complex and expensive projection lithography. For both lithography techniques, rule-based optical proximity correction (Rule-based Optical Proximity Correction, RB-OPC) has limited applicability, while model-based optical proximity correction (Model-based Optical Proximity Correction, MB-OPC) and inverse lithography technology (Inverse Lithography Technology, ILT) suffer from high computational cost and cannot be applied to large-size lithography layouts.
Therefore, an efficient, widely applicable, and robust optical proximity correction method has become an urgent need in the field of photolithography.
Disclosure of Invention
First, the technical problem to be solved
In view of the above problems, the present disclosure provides a cascade-network-based optical proximity correction method, system, electronic device, computer-readable storage medium, program product, and photolithography method, to solve the technical problem that conventional methods struggle to deliver efficient, fast, and widely applicable optical proximity correction.
(II) technical scheme
The first aspect of the present disclosure provides a cascade-network-based optical proximity effect correction method, including: S1, performing optical proximity effect correction on design patterns and carrying out pixelation processing to obtain known data pairs, each consisting of design pattern data and corresponding optimized mask pattern data, and splitting the known data pairs into a training set and a verification set; S2, inputting the design pattern data of the training set into a cascade network in batches to obtain training-related predicted pattern data, the cascade network being formed by connecting a Swin Transformer variant network and a Unet convolution network in series; S3, for each batch in turn, calculating a first training loss value from the training-related predicted pattern data and the corresponding optimized mask pattern data of the training set, and updating the weight parameters of the cascade network according to the first training loss value until an updated cascade network is obtained; S4, inputting the design pattern data of the verification set into the updated cascade network to obtain verification-related predicted pattern data; S5, calculating a second training loss value and an intersection-over-union (IoU) value from the verification-related predicted pattern data and the corresponding optimized mask pattern data of the verification set; S6, judging whether the second training loss value and the IoU value reach corresponding preset thresholds, or whether the current number of training rounds reaches the preset number; if not, repeating steps S2 to S6; if so, the currently updated cascade network is the trained cascade network; S7, inputting the design pattern to be optimized into the trained cascade network to obtain predicted and corrected mask pattern data, completing the optical proximity effect correction.
According to an embodiment of the present disclosure, performing optical proximity effect correction on the design pattern in S1 includes: performing optical proximity effect correction on the design pattern to obtain an optimized mask pattern; and pixelating the design pattern and the optimized mask pattern respectively to obtain design pattern data and corresponding optimized mask pattern data, which together form a known data pair. The design pattern to be optimized in S7 is the whole or most of the design patterns of the large-size mask layout to be optimized, while the design patterns in S1 are a small subset selected from the design patterns obtained by splitting the large-size mask layout to be optimized with a rasterization method.
According to an embodiment of the present disclosure, the method of performing optical proximity correction in S1 includes any one of a rule-based optical proximity correction method, a model-based optical proximity correction method, and inverse lithography technology.
According to an embodiment of the present disclosure, before S2 the method further includes: S20, constructing the cascade network, formed by connecting a Swin Transformer variant network and a Unet convolution network in series. The Swin Transformer variant network has a U-shaped structure divided into a downsampling end and an upsampling end; the downsampling end comprises at least partition-tile, linear-embedding, Swin Transformer, and merge-tile blocks, and the upsampling end comprises at least Swin Transformer, tile-expansion, and linear-mapping blocks. The Unet convolution network comprises an encoding end and a decoding end.
In accordance with an embodiment of the present disclosure, the tile-expansion method at the upsampling end of the Swin Transformer variant network includes doubling the size of a downsampled feature map using a bilinear interpolation algorithm.
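The tile-expansion step can be illustrated with a small sketch. The helper below is a hypothetical numpy implementation of 2× bilinear upsampling, not the patent's code; the endpoint-preserving (align_corners=True) coordinate convention is my assumption, since the patent does not specify one.

```python
import numpy as np

def upsample2x_bilinear(x):
    """Double the spatial size of a 2-D feature map by bilinear interpolation
    (align_corners=True convention: corner values are preserved)."""
    h, w = x.shape
    rows = np.linspace(0, h - 1, 2 * h)   # source row coordinate of each output row
    cols = np.linspace(0, w - 1, 2 * w)   # source column coordinate of each output column
    r0 = np.floor(rows).astype(int); r1 = np.minimum(r0 + 1, h - 1)
    c0 = np.floor(cols).astype(int); c1 = np.minimum(c0 + 1, w - 1)
    fr = (rows - r0)[:, None]             # fractional row offsets
    fc = (cols - c0)[None, :]             # fractional column offsets
    top = x[np.ix_(r0, c0)] * (1 - fc) + x[np.ix_(r0, c1)] * fc
    bot = x[np.ix_(r1, c0)] * (1 - fc) + x[np.ix_(r1, c1)] * fc
    return top * (1 - fr) + bot * fr

fmap = np.array([[0.0, 1.0], [2.0, 3.0]])
up = upsample2x_bilinear(fmap)            # shape (4, 4); corner values 0.0 and 3.0 preserved
```

In a real network this would run per channel on the downsampled feature maps; frameworks provide the same operation built in.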
According to an embodiment of the present disclosure, S2 includes: inputting the design pattern data of the training set into the Swin Transformer variant network of the cascade network in batches to obtain training-related rough predicted pattern data; and inputting the training-related rough predicted pattern data into the Unet convolution network of the cascade network to obtain the training-related predicted pattern data.
According to an embodiment of the present disclosure, S3 includes: for each batch in turn, calculating the first training loss value Loss from the training-related predicted pattern data and the corresponding optimized mask pattern data of the training set by the following formula:

$$\mathrm{Loss} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{MSE}\left(M_{\mathrm{pred}}^{(i)}(x,y),\ M_{\mathrm{opt}}^{(i)}(x,y)\right)$$

where $M_{\mathrm{pred}}(x,y)$ is the training-related predicted pattern data, $M_{\mathrm{opt}}(x,y)$ is the corresponding optimized mask pattern data of the training set, MSE denotes the mean square error, and $n$ is the batch size; back-propagating the first training loss value to obtain the gradients of the weight parameters of the cascade network; and updating the weight parameters of the cascade network according to the gradients until the updated cascade network is obtained.
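As a concrete illustration, the batch loss can be sketched in numpy as follows. This is a minimal sketch, not the patent's implementation; the (n, H, W) tensor shape is an assumption, while the MSE-per-sample-averaged-over-the-batch reduction follows the description above.

```python
import numpy as np

def first_training_loss(m_pred, m_opt):
    """Loss = (1/n) * sum_i MSE(M_pred_i, M_opt_i) over a batch of n pattern pairs.
    m_pred, m_opt: arrays of shape (n, H, W)."""
    n = m_pred.shape[0]
    per_sample_mse = np.mean((m_pred - m_opt) ** 2, axis=(1, 2))  # MSE of each pattern pair
    return per_sample_mse.sum() / n

pred = np.zeros((4, 8, 8))
opt = np.ones((4, 8, 8))
loss = first_training_loss(pred, opt)   # every pixel off by 1 -> loss == 1.0
```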
According to an embodiment of the present disclosure, S5 includes: calculating the intersection-over-union value IoU by:

$$\mathrm{IoU} = \frac{1}{k+1}\sum_{i=0}^{k}\frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}}$$

where $k+1$ is the number of matrix-element value classes; $p_{ij}$ is the number of matrix elements that should be classified as value $i$ but are predicted as $j$; $p_{ii}$ is the number of elements of value $i$ predicted as $i$; $p_{jj}$ is the number of elements of value $j$ predicted as $j$; and $p_{ji}$ is the number of elements that should be classified as value $j$ but are predicted as $i$. The corresponding preset conditions in S6 include: the second training loss value is less than or equal to the first preset threshold, and the IoU value is greater than or equal to the second preset threshold.
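For binarized mask data (k + 1 = 2 classes: background 0 and pattern 1), the class-averaged IoU reduces to counting per-class intersections and unions. The sketch below is an illustrative numpy implementation of that formula, not the patent's code:

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean intersection-over-union: average over classes c of
    p_cc / (sum_j p_cj + sum_j p_jc - p_cc)."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))   # p_cc
        union = np.sum((pred == c) | (target == c))   # row sum + column sum - p_cc
        if union > 0:                                 # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
iou = mean_iou(a, b)   # class 1: 1/2, class 0: 2/3 -> mean = 7/12
```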
A second aspect of the present disclosure provides a lithography method, comprising: obtaining predicted and corrected mask pattern data by the cascade-network-based optical proximity effect correction method described above, outputting the predicted and corrected mask pattern, and performing projection lithography or surface plasmon super-resolution lithography according to the predicted and corrected mask pattern.
A third aspect of the present disclosure provides a cascade-network-based optical proximity correction system, comprising: an optical proximity effect correction module for performing optical proximity effect correction on design patterns and pixelating them to obtain known data pairs, each consisting of design pattern data and corresponding optimized mask pattern data, and for splitting the known data pairs into a training set and a verification set; a training set processing module for inputting the design pattern data of the training set into the cascade network in batches to obtain training-related predicted pattern data, the cascade network being formed by connecting a Swin Transformer variant network and a Unet convolution network in series; a first calculation module for calculating, for each batch in turn, a first training loss value from the training-related predicted pattern data and the corresponding optimized mask pattern data of the training set, and updating the weight parameters of the cascade network according to the first training loss value until an updated cascade network is obtained; a verification set processing module for inputting the design pattern data of the verification set into the updated cascade network to obtain verification-related predicted pattern data; a second calculation module for calculating a second training loss value and an intersection-over-union (IoU) value from the verification-related predicted pattern data and the corresponding optimized mask pattern data of the verification set; a judging module for judging whether the second training loss value and the IoU value meet corresponding preset conditions, or whether the current number of training rounds reaches the preset number, repeating the training if not, and taking the cascade network with the currently updated weight parameters as the trained cascade network if so; and a prediction module for inputting the design pattern to be optimized into the trained cascade network to obtain predicted and corrected mask pattern data, completing the optical proximity effect correction.
A fourth aspect of the present disclosure provides an electronic device, comprising: a processor; a memory storing a computer executable program that, when executed by a processor, causes the processor to perform a cascade network-based optical proximity effect correction method as described above.
A fifth aspect of the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a cascade network-based optical proximity correction method as described above.
A sixth aspect of the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements a cascade network based optical proximity correction method according to the above.
(III) beneficial effects
According to the cascade-network-based optical proximity effect correction method, system, electronic device, computer-readable storage medium, program product, and photolithography method of the present disclosure, the mapping from the design pattern to the optimized mask pattern is accurately fitted by training the cascade network. The cascade network combines the strength of the Swin Transformer variant network in global prediction with the strength of the Unet convolution network in local prediction, so both the overall and the local features of the pattern are fitted better. The trained cascade network can efficiently perform optical proximity effect correction prediction on an input lithography layout pattern to obtain predicted and corrected mask pattern data. The method greatly improves correction efficiency while maintaining high accuracy, and can correct large-size layout patterns without being limited by the size of the design pattern.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIG. 1 schematically illustrates a flow chart of a cascade-network-based optical proximity correction method in accordance with an embodiment of the present disclosure;
FIG. 2 schematically shows the surface plasmon lithography structure used in an embodiment of the present disclosure;
FIG. 3 schematically illustrates a partial schematic diagram of a design graph split using a rasterization method in accordance with an embodiment of the present disclosure;
FIG. 4 schematically illustrates a cascade network architecture diagram in accordance with an embodiment of the present disclosure;
FIG. 5 schematically illustrates the Swin Transformer variant network structure in accordance with an embodiment of the present disclosure;
FIG. 6 schematically illustrates the bilinear-interpolation tile-expansion operation at the upsampling end of the Swin Transformer variant network, in accordance with an embodiment of the present disclosure;
FIG. 7 schematically illustrates a schematic diagram of a Unet convolutional network structure in accordance with an embodiment of the present disclosure;
FIG. 8 schematically illustrates the results for one design pattern of the verification set before and after predictive correction by the cascade network in accordance with an embodiment of the present disclosure;
FIG. 9 schematically illustrates a schematic of results before and after predictive correction of a design pattern other than a known data pair in accordance with an embodiment of the present disclosure;
FIG. 10 schematically illustrates the IoU variation curve and loss function curve in accordance with an embodiment of the present disclosure;
FIG. 11 schematically illustrates a block diagram of a cascade-network-based optical proximity correction system in accordance with an embodiment of the present disclosure;
FIG. 12 schematically shows a block diagram of an electronic device adapted to implement the method described above, according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon, the computer program product being for use by or in connection with an instruction execution system.
With the development of deep learning in recent years, network structures suitable for image fitting have emerged. The Swin Transformer variant network is a multi-head self-attention (Multi-head Self-attention, MSA) network; its advantage is that each element of the output refers to the data of the other elements, which improves the rationality of global prediction. For the optical proximity effect, the Swin Transformer variant network can fit the coupling between patterns within a mask pattern in a lithography system, so it is well suited to optical proximity effect correction. The Unet convolution network is a U-shaped network composed of convolution layers whose kernels are 3 × 3 or 1 × 1; such small kernels are fast to compute but have a small receptive field, while the Resnet structure in the network allows a deeper architecture, so the output of such a network performs better on local detail.
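The receptive-field point can be made concrete: a single 3 × 3 convolution lets each output element see only a 3 × 3 neighbourhood of the input. The sketch below is an illustrative direct implementation (the 'valid' padding and stride 1 are my assumptions), not part of the patent:

```python
import numpy as np

def conv2d_3x3(img, kernel):
    """Direct 3x3 convolution (cross-correlation), 'valid' padding, stride 1.
    Each output pixel depends only on a 3x3 window of the input."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
avg = conv2d_3x3(img, np.full((3, 3), 1 / 9))   # 3x3 mean filter; output is 3x3
```

Stacking such layers (as Unet does) grows the receptive field only gradually, which is why the self-attention stage is relied on for global coupling.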
In the present disclosure, for convenience of explanation, only the design pattern, the optimized mask pattern, and the predicted corrected mask pattern are referred to as patterns, and the calculation process and the imaging process in the optical proximity correction are referred to as data, and it is understood that the data in the process can output corresponding patterns.
The disclosure provides an optical proximity correction method based on a cascade network, as shown in FIG. 1, comprising the following steps: S1, performing optical proximity effect correction on design patterns and carrying out pixelation processing to obtain known data pairs, each consisting of design pattern data and corresponding optimized mask pattern data, and splitting the known data pairs into a training set and a verification set; S2, inputting the design pattern data of the training set into a cascade network in batches to obtain training-related predicted pattern data, the cascade network being formed by connecting a Swin Transformer variant network and a Unet convolution network in series; S3, for each batch in turn, calculating a first training loss value from the training-related predicted pattern data and the corresponding optimized mask pattern data of the training set, and updating the weight parameters of the cascade network according to the first training loss value until an updated cascade network is obtained; S4, inputting the design pattern data of the verification set into the updated cascade network to obtain verification-related predicted pattern data; S5, calculating a second training loss value and an intersection-over-union (IoU) value from the verification-related predicted pattern data and the corresponding optimized mask pattern data of the verification set; S6, judging whether the second training loss value and the IoU value meet corresponding preset conditions, or whether the current number of training rounds reaches the preset number; if not, repeating steps S2 to S6; if so, the currently updated cascade network is the trained cascade network; S7, inputting the design pattern to be optimized into the trained cascade network to obtain predicted and corrected mask pattern data, completing the optical proximity effect correction.
In the method, a number of design patterns are first split out of the large-size lithography layout to be optimized by a rasterization method, and some of them are selected for optical proximity effect correction to obtain the corresponding optimized mask patterns. The design patterns and optimized mask patterns are converted into matrix data to form known data pairs, which are split into a training set and a verification set. In each round of training, the design pattern data of the training set are input into the constructed cascade network to obtain training-related predicted pattern data; a first training loss value is calculated from the training-related predicted pattern data and the optimized mask pattern data of the training set, and the weight parameters of the cascade network are updated according to it. The design pattern data of the verification set are then input into the cascade network with updated weight parameters to obtain verification-related predicted pattern data, and a second training loss value and an IoU value are calculated from the optimized mask pattern data of the verification set and the verification-related predicted pattern data. After several rounds of training, when the second training loss value and the IoU value meet the corresponding preset conditions, or the preset number of training rounds is reached, training ends and the cascade network parameter values are saved, giving the trained cascade network.
The design pattern data of the lithography layout to be optimized that lie outside the known data pairs are then input into the trained cascade network to obtain the predicted and corrected mask pattern data, completing the optical proximity effect correction; finally, the predicted and corrected mask pattern data are converted into the predicted and corrected mask pattern.
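The overall training-then-inference loop can be sketched end to end. Everything below is a stand-in skeleton: `opc_correct` and `cascade_forward` are stub placeholders for the real OPC generator and the Swin-Transformer-variant + Unet cascade, and the thresholds and array sizes are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def opc_correct(design):          # stub for RB-OPC / MB-OPC / ILT (step S1)
    return design.copy()

def cascade_forward(design):      # stub for Swin Transformer variant -> Unet
    return design.copy()

# S1: build known data pairs and split them 7:3
designs = [(rng.random((32, 32)) > 0.5).astype(float) for _ in range(10)]
pairs = [(d, opc_correct(d)) for d in designs]
train, val = pairs[:7], pairs[7:]

loss_thresh, iou_thresh, max_epochs = 1e-3, 0.95, 50
for epoch in range(max_epochs):
    # S2/S3: forward the training batches and update weights (omitted in the stub)
    preds = [cascade_forward(d) for d, _ in val]                              # S4
    loss = np.mean([np.mean((p - m) ** 2) for p, (_, m) in zip(preds, val)])  # S5
    ious = [np.sum((p > .5) & (m > .5)) / np.sum((p > .5) | (m > .5))
            for p, (_, m) in zip(preds, val)]
    if loss <= loss_thresh and np.mean(ious) >= iou_thresh:                   # S6
        break                     # trained network: run S7 on the full layout
```

With the identity stubs the stopping condition is met immediately; in a real run the weight update in S2/S3 drives the loss and IoU toward the thresholds.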
By connecting the Swin Transformer variant network and the Unet convolution network in series, the cascade-network-based optical proximity effect correction method combines the advantages of both, uses supervised learning to find the mapping rule from design pattern data to optimized mask pattern data, and obtains the predicted and corrected mask pattern faster and more accurately. In addition, the method is not limited by the size of the design pattern, so optical proximity effect correction of large-size lithography layout patterns can be realized.
On the basis of the above embodiment, the optical proximity effect correction of the design pattern in S1 includes: performing optical proximity effect correction on the design pattern to obtain an optimized mask pattern; and pixelating the design pattern and the optimized mask pattern respectively to obtain design pattern data and corresponding optimized mask pattern data, which together form a known data pair. The design pattern to be optimized in S7 is the whole or most of the design patterns of the large-size mask layout to be optimized, while the design patterns in S1 are a small subset selected from the design patterns obtained by splitting the large-size mask layout to be optimized with a rasterization method.
The large-size lithography layout to be optimized is split by a rasterization method into a number of design patterns, some of which are selected and corrected by an optical proximity effect correction method to obtain the corresponding optimized mask patterns. The design patterns and their corresponding optimized mask patterns are binarized and pixelated into binary matrix data, giving the design pattern data and optimized mask pattern data, respectively; these are then paired into known data pairs, and the known data pairs are divided into a training set and a verification set according to a certain number ratio, i.e. the ratio of the number of data pairs in the training set to that in the verification set. For example, a number ratio of 7:3 means there are seven known data pairs in the training set for every three in the verification set. The ratio may also be chosen according to the actual situation; typically the training set should contain more known data pairs than the verification set.
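A minimal sketch of the binarization/pixelation and the 7:3 split follows; the array sizes, the 0.5 threshold, and the random stand-in patterns are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(42)

def pixelate_binarize(pattern, threshold=0.5):
    """Turn a rasterized gray-scale pattern into a binary matrix of 0s and 1s."""
    return (pattern >= threshold).astype(np.uint8)

# ten known data pairs: (design pattern data, optimized mask pattern data)
pairs = [(pixelate_binarize(rng.random((64, 64))),
          pixelate_binarize(rng.random((64, 64)))) for _ in range(10)]

n_train = round(len(pairs) * 7 / 10)   # 7:3 number ratio
train_set, val_set = pairs[:n_train], pairs[n_train:]
```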
On the basis of the above-described embodiments, the method of performing optical proximity correction in S1 includes any one of a rule-based optical proximity correction method, a model-based optical proximity correction method, and inverse lithography technology.
The optical proximity correction method is used to obtain the corresponding optimized mask patterns, which helps the subsequent cascade network learn the mapping rule from design pattern data to optimized mask pattern data.
On the basis of the above embodiment, before S2 the method further includes: S20, constructing the cascade network, formed by connecting a Swin Transformer variant network and a Unet convolution network in series. The Swin Transformer variant network has a U-shaped structure divided into a downsampling end and an upsampling end; the downsampling end comprises at least partition-tile, linear-embedding, Swin Transformer, and merge-tile blocks, and the upsampling end comprises at least Swin Transformer, tile-expansion, and linear-mapping blocks. The Unet convolution network comprises an encoding end and a decoding end.
The method includes initializing the weight parameters of the Swin Transformer variant network and of the Unet convolutional network; together they constitute the weight parameters of the cascade network, where a weight parameter means any variable in the cascade network that can be updated according to a gradient.
Unlike a conventional Swin Transformer network, the Swin Transformer variant network has a U-shaped structure, i.e., its input and output tensors are equal in every dimension, and the network can be divided into a downsampling end and an upsampling end. The downsampling end mainly consists of patch partition blocks, linear embedding blocks, Swin Transformer modules, and patch merging blocks arranged in a certain order; its function is to complete four downsampling stages and thereby realize global feature extraction from the data pairs. The upsampling end consists of Swin Transformer modules, patch expanding blocks, and linear mapping blocks arranged in a certain order; its function is to complete the upsampling with the help of skip links so as to realize global feature prediction.
Specifically, patch partition is the name of an operation that converts matrix element values into patches, merging every 4 adjacent matrix element values into one patch. Linear embedding is the name of an operation that applies layer normalization to the patches along the second dimension. The Swin Transformer module is a module containing a multi-head self-attention algorithm, and is the original module of the common Swin Transformer network. Patch merging is the name of a dimension-transformation operation on the patches; the transformation method is not unique, including but not limited to grouped convolution, tensor stretching, and tensor transposition, so long as the length of the 2nd dimension of the output tensor is 2 times that of the input tensor and the lengths of the 3rd and 4th dimensions are 1/2 those of the input tensor. Linear mapping is the name of a dimension-transformation operation on tensors; the transformation method is likewise not unique, including but not limited to grouped convolution, tensor stretching, and tensor transposition, so long as the length of the 1st dimension of the output tensor is unchanged, the length of the 2nd dimension is 1, and the lengths of the 3rd and 4th dimensions are 256.
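The dimension constraint on patch merging can be illustrated with a minimal pure-Python sketch over a nested-list tensor of shape (batch, channel, height, width); since the text states that the transform is not unique, the particular way of combining the four neighbors below is an arbitrary illustrative choice:

```python
def patch_merging(x):
    """Toy patch merging: combine each 2x2 neighborhood so that the
    channel (2nd) dimension doubles while the spatial (3rd and 4th)
    dimensions halve, as the merging block above requires."""
    b, c, h, w = len(x), len(x[0]), len(x[0][0]), len(x[0][0][0])
    out = []
    for bi in range(b):
        channels = []
        # First c output channels: sum of top-left and bottom-right elements.
        for ci in range(c):
            channels.append([[x[bi][ci][2 * i][2 * j] + x[bi][ci][2 * i + 1][2 * j + 1]
                              for j in range(w // 2)] for i in range(h // 2)])
        # Next c output channels: sum of top-right and bottom-left elements.
        for ci in range(c):
            channels.append([[x[bi][ci][2 * i][2 * j + 1] + x[bi][ci][2 * i + 1][2 * j]
                              for j in range(w // 2)] for i in range(h // 2)])
        out.append(channels)
    return out

def shape(x):
    """Shape of a 4-level nested-list tensor."""
    return (len(x), len(x[0]), len(x[0][0]), len(x[0][0][0]))

# A (2, 1, 8, 8) zero tensor merges to shape (2, 2, 4, 4).
x = [[[[0.0] * 8 for _ in range(8)] for _ in range(1)] for _ in range(2)]
y = patch_merging(x)
```

The real patch merging in a Swin Transformer concatenates the four neighbors and reduces channels with a linear layer; only the output shape constraint is modeled here.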
The above Unet convolutional network is similar to a conventional Unet convolutional network and can be divided into an encoding end and a decoding end. The body of the encoding end is a ResNet structure composed of several groups of convolutional layers, which realizes feature extraction from the output data of the Swin Transformer variant network; the decoding end is composed of channel-merging operations and convolutional layers alternating in a certain order, thereby realizing locally refined prediction.
On the basis of the above embodiments, the upsampling method of the patch expanding block at the upsampling end of the Swin Transformer variant network includes upsampling the downsampled feature map by a factor of two using a bilinear interpolation algorithm.
The patch expanding block at the upsampling end uses a bilinear interpolation algorithm for upsampling. This algorithm is introduced specifically to match the U-shaped structure of the Swin Transformer variant network, with the technical effect that the input and output tensors are equal in size in every dimension.
On the basis of the above embodiment, S2 includes: inputting the design pattern data in the training set, in batches, into the Swin Transformer variant network of the cascade network to obtain training-related coarse predicted pattern data; and inputting the training-related coarse predicted pattern data into the Unet convolutional network of the cascade network to obtain the training-related predicted pattern data.
The design pattern data and the optimized mask pattern data in the training set are split into a plurality of batches. For each batch, the design pattern data in the training set are input to the Swin Transformer variant network of the cascade network, whose output is the training-related coarse predicted pattern data; the training-related coarse predicted pattern data are then input to the Unet convolutional network, whose output is the training-related predicted pattern data.
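A minimal sketch of the batch splitting described above (the generator name is illustrative; in the real flow each yielded batch of design matrices would be passed through the Swin Transformer variant network and then the Unet convolutional network):

```python
def iter_batches(pairs, batch_size=8):
    """Yield successive batches of (design, optimized-mask) data pairs."""
    for start in range(0, len(pairs), batch_size):
        batch = pairs[start:start + batch_size]
        # Separate the design matrices (network input) from the
        # optimized mask matrices (training target).
        yield [d for d, _ in batch], [m for _, m in batch]

# Toy check: 20 pairs with batch size 8 give batches of sizes 8, 8 and 4.
pairs = [(i, -i) for i in range(20)]
batch_sizes = [len(designs) for designs, _ in iter_batches(pairs)]
```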
On the basis of the above embodiment, S3 includes: for each batch, calculating the first training loss value Loss from the training-related predicted pattern data and the optimized mask pattern data corresponding to the training set by the following formula:

Loss = (1/n) · Σ_{i=1}^{n} MSE(M_pred^(i)(x, y), M_opt^(i)(x, y))

where M_pred(x, y) is the training-related predicted pattern data, M_opt(x, y) is the optimized mask pattern data corresponding to the training set, MSE denotes the mean square error, and n is the batch size. The first training loss value is back-propagated to obtain the gradients of the weight parameters of the cascade network, and the weight parameters of the cascade network are updated according to the gradients until the updated cascade network is obtained.
For each batch, the mean square error (MSE) operation is performed on its training-related predicted pattern data M_pred(x, y) and the optimized mask pattern data M_opt(x, y) corresponding to that batch to obtain the first training loss value; the first training loss value is back-propagated to solve for the gradients of the cascade network weight parameters, and the weight parameters are updated by gradient descent.
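The first training loss of a batch can be sketched as the plain mean square error over all matrix elements; the toy 2×2 matrices below are illustrative:

```python
def mse_loss(pred, target):
    """Mean square error between a batch of predicted pattern matrices
    and the corresponding optimized mask pattern matrices."""
    total, n = 0.0, 0
    for p_mat, t_mat in zip(pred, target):
        for p_row, t_row in zip(p_mat, t_mat):
            for p, t in zip(p_row, t_row):
                total += (p - t) ** 2
                n += 1
    return total / n

# One predicted matrix vs. one optimized mask matrix.
pred = [[[0.9, 0.1], [0.0, 1.0]]]
target = [[[1.0, 0.0], [0.0, 1.0]]]
loss = mse_loss(pred, target)  # (0.1^2 + 0.1^2 + 0 + 0) / 4 = 0.005
```

In the disclosure this scalar would then be back-propagated through both networks; automatic differentiation is outside the scope of this sketch.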
On the basis of the above embodiment, S5 includes: calculating the union intersection value IoU by:

IoU = (1/(k+1)) · Σ_{i=0}^{k} [ p_ii / (Σ_{j=0}^{k} p_ij + Σ_{j=0}^{k} p_ji − p_ii) ]

where k+1 is the number of matrix element value classes; p_ij denotes the number of matrix elements that should be classified as value i but are predicted as value j, p_ii the number of matrix elements that should be classified as value i and are predicted as i, p_jj the number of matrix elements that should be classified as value j and are predicted as j, and p_ji the number of matrix elements that should be classified as value j but are predicted as i.
The design pattern data in the verification set are input into the cascade network with updated weight parameters to obtain the verification-related predicted pattern data; a loss operation on the verification-related predicted pattern data and the optimized mask pattern data corresponding to the verification set yields the second training loss value, and an Intersection over Union (IoU) operation yields the IoU value.
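Under the IoU definition above with k = 1 (binary mask data), the union intersection value can be sketched in pure Python; the 1×4 masks are illustrative:

```python
def mean_iou(pred, target, k=1):
    """Union intersection value for matrices whose elements take k+1
    classes; p[i][j] counts elements of true class i predicted as j."""
    p = [[0] * (k + 1) for _ in range(k + 1)]
    for p_row, t_row in zip(pred, target):
        for pv, tv in zip(p_row, t_row):
            p[tv][pv] += 1
    total = 0.0
    for i in range(k + 1):
        # union = row sum + column sum - diagonal, per the formula above.
        union = sum(p[i]) + sum(p[j][i] for j in range(k + 1)) - p[i][i]
        total += p[i][i] / union if union else 1.0
    return total / (k + 1)

target = [[0, 0, 1, 1]]
pred = [[0, 1, 1, 1]]
iou = mean_iou(pred, target)  # class 0: 1/2, class 1: 2/3, mean 7/12
```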
When the IoU value is smaller than the second preset threshold or the second training loss value is larger than the first preset threshold, steps S2 to S6 are performed in a loop. When the IoU value is greater than or equal to the second preset threshold and the second training loss value is less than or equal to the first preset threshold, or the number of repeated training rounds reaches the preset number, the loop is exited. The weight parameters of the current cascade network are saved, and the cascade network with the updated weight parameters is the trained cascade network.
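The loop-exit condition can be sketched as a single predicate; the threshold values below are placeholders for the first and second preset thresholds and the preset training times, not values specified by the disclosure:

```python
def should_stop(iou, loss, round_no,
                iou_threshold=0.9, loss_threshold=110.0, max_rounds=100):
    """Exit the training loop when the IoU value reaches the second preset
    threshold and the second training loss value falls to the first preset
    threshold, or when the preset number of training rounds is reached."""
    converged = iou >= iou_threshold and loss <= loss_threshold
    return converged or round_no >= max_rounds

keep_going = not should_stop(iou=0.85, loss=200.0, round_no=10)
stop_now = should_stop(iou=0.92, loss=90.0, round_no=10)
```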
The design pattern data, other than the known data pairs, in the lithography design layout to be optimized are input into the trained cascade network to obtain the predicted and corrected mask pattern data, completing the optical proximity correction; further, the predicted and corrected mask pattern data are converted into the predicted and corrected mask pattern.
The present disclosure uses a cascade network consisting of a Swin Transformer variant network and a Unet convolutional network for optical proximity correction. The U-shaped structure of the Swin Transformer variant network allows the network's predictions both to conform to local prediction rules and to adapt to global prediction rules, improving the ability to extract global features. The Unet convolutional network, whose convolutional receptive field confines it to local attributes, is better suited to predicting local pixels. After the optimized mask patterns are obtained by applying an optical proximity correction method to the selected design patterns and the design pattern data and optimized mask pattern data are taken as known data pairs, the Swin Transformer variant network and the Unet convolutional network, working in a serial, complementary fashion over multiple rounds of training, can better fit the mapping rule between the design patterns and the optimized mask patterns, thereby improving the accuracy of the predicted and corrected mask patterns. By applying the fitted mapping rule to the other design patterns, optical proximity correction of a large lithography layout can be realized quickly.
The present disclosure also provides a lithography method, comprising: obtaining predicted and corrected mask pattern data using the above cascade-network-based optical proximity correction method, outputting the predicted and corrected mask pattern, and performing projection lithography or surface plasmon super-resolution lithography according to the predicted and corrected mask pattern.
The method outputs the predicted and corrected mask pattern based on the cascade-network-based optical proximity correction method and performs lithography according to that pattern; it can be applied to projection lithography or surface plasmon lithography. Operations S1 to S7 are the same as described above and are not repeated here.
By training the cascade network, the disclosed method accurately fits the mapping relation from design pattern data to optimized mask pattern data, and the trained cascade network can efficiently perform optical-proximity-correction prediction on other uncorrected design patterns to obtain corrected mask patterns. On the one hand, the cascade network combines the advantage of the Swin Transformer variant network in global prediction with the advantage of the Unet convolutional network in local prediction, so it can better fit both the global and the local characteristics of a pattern; on the other hand, to satisfy the requirement that the coarse predicted pattern data have the same dimensions as the design pattern data, the Swin Transformer variant network is designed with a U-shaped structure, which preserves the global prediction capability of the Swin Transformer network while meeting the requirement of equal input and output dimensions.
The present disclosure is further illustrated by the following detailed embodiment, which describes the cascade-network-based optical proximity correction method specifically. However, the following embodiment is merely illustrative of the present disclosure, and the scope of the present disclosure is not limited thereto.
Specifically, as shown in fig. 1, the method of the present embodiment includes the following steps:
step S01: splitting a large-size photoetching layout to be optimized by a rasterization method to obtain a plurality of design patterns, and selecting part of the design patterns M from the plurality of design patterns ori (x, y) obtaining an optimized mask pattern M after optical proximity correction by an optical proximity correction method opt (x,y)。
The optical proximity correction method used in this embodiment is inverse lithography technology, and the lithography model in this embodiment is a surface plasmon lithography model. The pixel sizes of the design pattern and the optimized mask pattern in this step are the same.
Fig. 2 is a schematic view of a surface plasmon lithography structure. The surface plasmon lithography structure 201 includes a mask substrate (SiO2), a mask pattern layer (Cr), an air spacer layer (Air), a metal layer (Ag), a photoresist layer (Pr), a metal reflective layer (Ag), and a substrate (SiO2). In this embodiment, the thickness of the mask pattern layer is 50 nm, the thickness of the air spacer layer is 20 nm, the thickness of the metal reflective layer is 70 nm, and the thickness of the photoresist layer is 40 nm.
Fig. 3 is a schematic diagram of some of the design patterns split from the large-size lithography layout by the rasterization method; the split size depends on the situation. In this embodiment, the large-size lithography layout measures 2.5 mm × 2.5 mm, and each design pattern measures 1 μm × 1 μm.
Step S02: binarizing the design pattern and the optimized mask pattern and storing the two in a matrix form to form design pattern data and optimized mask pattern data respectively, pairing the two to form known data pairs, and using 7: the number ratio of 3 splits the known data pairs into training and validation sets. The matrix dimensions of the design pattern data and the optimized mask pattern data in this embodiment are 256×256; corresponding to step S1.
Step S03: and constructing a cascade network, wherein the network is formed by serially connecting a SwinTransformer variant network and a Unet convolutional network, and initializing weight parameters of the SwinTransformer variant network and the Unet convolutional network, wherein the weight parameters of the SwinTransformer variant network and the Unet convolutional network jointly form the weight parameters of the cascade network, and the weight parameters refer to all variables which can be updated according to gradients in the network.
Fig. 4 is a schematic diagram of the cascade network, formed by connecting a Swin Transformer variant network and a Unet convolutional network in series. The Swin Transformer variant network is responsible for global prediction on the input design pattern data, so that the prediction of each matrix element value is coupled, to varying degrees, with all other elements; its output is the coarse predicted pattern data. The Unet convolutional network performs local prediction correction on the coarse predicted pattern data through its convolutional layers, and its output is the predicted and corrected mask pattern data.
In this embodiment, the design pattern matrix data are input to the Swin Transformer variant network as tensors of dimensions (8, 1, 256, 256), and the output, the coarse predicted pattern data, is a tensor of dimensions (8, 1, 256, 256). The coarse predicted pattern data are input to the Unet convolutional network, whose output, a tensor of dimensions (8, 1, 256, 256), is the predicted and corrected mask pattern data.
Fig. 5 is a schematic diagram of the Swin Transformer variant network structure, which is divided into a downsampling end and an upsampling end. Inside the left dashed box is the downsampling end, composed of patch partition blocks, linear embedding blocks, Swin Transformer modules, and patch merging blocks; the output of each stage's patch merging block is called a downsampled feature map, and the downsampling end realizes the functions of downsampling and global feature extraction. Inside the right dashed box is the upsampling end, composed of a linear mapping block, patch expanding blocks, and Swin Transformer modules; by merging the downsampled feature maps of the same level via skip links (which copy and transfer the feature maps), it realizes the functions of upsampling and global feature prediction. The patch expanding method in the upsampling end includes, but is not limited to, upsampling the downsampled feature map by a factor of two using a bilinear interpolation algorithm.
Fig. 6 is a schematic diagram of the bilinear interpolation algorithm used for patch expanding at the upsampling end of the Swin Transformer variant network; this algorithm is introduced specifically to match the network's U-shaped structure. Q11, Q12, Q21, Q22 denote the coordinates of four adjacent matrix elements; with the coordinates (x, y) of the inserted element P defined, interpolation in the X direction gives the element values at R1 and R2, and interpolation of these two values in the Y direction gives the element value at P. The algorithm formula is as follows:

f(R1) ≈ f(Q11) · (x2 − x)/(x2 − x1) + f(Q21) · (x − x1)/(x2 − x1)
f(R2) ≈ f(Q12) · (x2 − x)/(x2 − x1) + f(Q22) · (x − x1)/(x2 − x1)
f(P) ≈ f(R1) · (y2 − y)/(y2 − y1) + f(R2) · (y − y1)/(y2 − y1)
where f (·) represents the element value at a point. The coordinates of the custom insert element P in this embodiment can be expressed asThe tensor dimension of the input feature map of any stage of image block expansion can be expressed as (8,W/2, h/2,2C), the tensor dimension of the output feature map is (8, w, h, c), the output result can be combined with the feature map matrix of the downsampling end of the peer in the fourth dimension direction, the input feature map matrix can enter the Swin transform module or the linear mapping according to different stages of the network, the working principle of the Swin transform module of the upsampling end is the same as that of the input feature map matrix of the downsampling end, when the result of image expansion is input into the linear embedding, the feature map Zhang Liangbian is the dimension (8, w, h, 1) by using the convolution kernel of 1×1, the result is transposed into the dimension (8, 1, w, h) after the dimension is transposed, and finally the result of the Swin transform variant network is obtained after the activation function, and the result dimension is (8, 1, w, h).
The above Unet convolutional network is similar to a conventional Unet convolutional network and can be divided into an encoding end and a decoding end. The body of the encoding end is a ResNet structure composed of several groups of convolutional layers, which realizes feature extraction from the output data of the Swin Transformer variant network. The decoding end is formed by channel-merging operations and convolutional layers alternating in a certain order, so that locally refined prediction can be realized.
Fig. 7 is a schematic diagram of the Unet convolutional network, which can be divided into an encoding end and a decoding end. Inside the left dashed box is the encoding end, composed of a ResNet network, which realizes the functions of downsampling and feature extraction. Inside the right dashed box is the decoding end, which realizes the functions of upsampling and locally refined prediction. In this embodiment, the coarse predicted pattern data of dimensions (8, 1, 256, 256) enter the encoding end of the Unet convolutional network, and the decoding end finally outputs the predicted and corrected mask pattern data of dimensions (8, 1, 256, 256).
In this step, when training the cascade network, the number of pixel classes is set to 2, the learning-rate schedule is set to Adam-optimized gradient descent, the batch size of each iteration round is set to 8, the maximum number of iterations is set to 100, and the iteration precision is set to 0.0001.
Step S04: the design graphic data and the optimized mask graphic data in the training set are divided into a plurality of batches, and for each batch, the design graphic data in the training set is input to a Swin transform variant network in the cascade network and output as training-related coarse prediction graphic data. Inputting the training-related rough prediction graph data into a Unet convolution network in a cascade network, and outputting the training-related rough prediction graph data as training-related prediction graph data; corresponding to step S2.
Step S05: for each batch, training related predicted graph data M is sequentially carried out pred (x, y) optimized mask pattern data M corresponding to the lot opt And (x, y) performing the following Mean Square Error (MSE) loss operation to obtain a first training loss value, then performing back propagation on the training loss value to obtain a gradient of the cascade network weight parameter, and updating the cascade network weight parameter according to the gradient. N=10 in this embodiment. Corresponding to step S3.
Step S06: and (3) executing step S04 on the design graphic data in the verification set, inputting the design graphic data into a cascade network with updated weight parameters in step S05 to obtain prediction graphic data related to verification, calculating the sum of mean square errors of the prediction graphic data related to verification and the optimization mask graphic data corresponding to the verification set to obtain a second training loss value, and simultaneously performing union cross operation (Intersection over Union, ioU) to obtain a IoU value. In the present embodiment, k=1 is set. Corresponding to steps S4 to S5.
Step S07: s04 to S07 are repeatedly executed, and the number of repetitions is called training round. As the training round increases, the second training loss value in S06 tends to decrease; ioU values tend to rise. The second training loss value versus training round curve is referred to as the loss function curve, and the IoU value versus training round curve is referred to as the IoU curve. In this embodiment, the IoU value is monitored in real time during each round of training, and when the IoU value is continuously greater than or equal to 90% in the last 25 rounds and the second training loss value is continuously less than or equal to 110 in the 25 rounds, the network training is terminated; the network training may also be terminated after the training round reaches the target round. Because the IoU value and the second training loss value in the training process fluctuate, the influence of fluctuation on network output can be reduced by taking the result of 25 continuous rounds, so that the judgment is more accurate. Corresponding to step S6.
Step S08: and saving the weight parameter values of the cascade network, wherein the cascade network formed by the parameter values is a trained cascade network.
Step S09: the design patterns except the known data pairs are firstly converted into the design pattern data of a binarization matrix, and then the design pattern data is input into a cascade network trained in the step S08 to obtain predicted and corrected mask pattern data, and the predicted and corrected mask pattern data is converted into a predicted and corrected mask pattern. Corresponding to step S7.
Fig. 8 shows the prediction result for one of the design patterns in the verification set under the cascade network, where 801 denotes the design pattern, 802 the optimized mask pattern obtained by inverse lithography technology, and 803 the verification-set prediction pattern obtained by the trained cascade network; the IoU value is 86.38%.
Fig. 9 shows the optimization results for design patterns outside the known data pairs, where 901 denotes the design pattern, 902 the corrected mask pattern obtained by inverse lithography technology, and 903 the predicted and corrected mask pattern obtained by the disclosed method; the IoU value between 902 and 903 is 86.56%, comparable to the verification-set result, which indicates that the disclosed method applies well to different design patterns.
Fig. 10 shows the IoU curve and the loss function curve obtained in step S06. 1001 is the IoU curve, its ordinate being the IoU value and its abscissa the training round. 1002 is the loss function curve, its ordinate being the second training loss value and its abscissa the training round. The trends of the two curves indicate that the network converges.
The cascade-network-based optical proximity correction method effectively combines the Swin Transformer variant network and the Unet convolutional network, exploiting the advantages of both to train the cascade network in a supervised manner to find the mapping rule from design patterns to optimized mask patterns. Using this rule, the correction efficiency can be greatly improved while maintaining high accuracy, and optical proximity correction of large-size lithography layouts can be realized.
Fig. 11 schematically illustrates a block diagram of a cascading network-based optical proximity correction system in accordance with an embodiment of the present disclosure.
As shown in fig. 11, the correction system 1100 includes: an optical proximity correction module 1110, a training set processing module 1120, a first calculation module 1130, a verification set processing module 1140, a second calculation module 1150, a judgment module 1160, and a prediction module 1170.
The optical proximity correction module 1110 is configured to perform optical proximity correction on the design patterns and to perform pixelation processing to obtain known data pairs composed of design pattern data and the corresponding optimized mask pattern data, and to split the known data pairs into a training set and a verification set. According to an embodiment of the present disclosure, the optical proximity correction module 1110 may be used, for example, to perform step S1 described above with reference to fig. 1, which is not repeated here.
The training set processing module 1120 is configured to input design graphic data in a training set into a cascade network in batches to obtain predicted graphic data related to training; the cascade network is formed by serially connecting a Swin Transformer variant network with a Unet convolution network. The training set processing module 1120 may be used, for example, to perform the step S2 described above with reference to fig. 1 according to an embodiment of the present disclosure, and will not be described herein.
The first calculating module 1130 is configured to sequentially calculate, for each batch, a first training loss value according to the predicted pattern data related to training and the optimized mask pattern data corresponding to the training set, and update the weight parameter of the cascade network according to the first training loss value until an updated cascade network is obtained. The first computing module 1130 may be used, for example, to perform the step S3 described above with reference to fig. 1 according to an embodiment of the present disclosure, which is not described herein.
The verification set processing module 1140 is configured to input design graphic data in the verification set into the updated cascade network, to obtain prediction graphic data related to verification. The verification set processing module 1140 may be used, for example, to perform step S4 described above with reference to fig. 1, according to an embodiment of the present disclosure, which is not described herein.
And a second calculation module 1150 for calculating a second training loss value and a union intersection value according to the verification-related prediction pattern data and the optimized mask pattern data corresponding to the verification set. The second computing module 1150 may be used, for example, to perform the step S5 described above with reference to fig. 1 according to an embodiment of the present disclosure, and will not be described herein.
A judging module 1160, configured to judge whether the second training loss value and the union intersection value meet corresponding preset conditions or whether the current training frequency reaches the preset training frequency; if not, repeating training; if yes, the cascade network updated currently is the trained cascade network. The determining module 1160 may be used, for example, to perform the step S6 described above with reference to fig. 1 according to an embodiment of the present disclosure, which is not described herein.
And the prediction module 1170 is used for inputting the design graph to be optimized into the trained cascade network to obtain predicted and corrected mask graph data, and finishing optical proximity effect correction. The prediction module 1170 may be used, for example, to perform the step S7 described above with reference to fig. 1, according to an embodiment of the present disclosure, which is not described herein.
It should be noted that any number of modules, sub-modules, units, sub-units, or at least some of the functionality of any number of the modules, sub-modules, units, or sub-units may be implemented in one module according to embodiments of the present disclosure. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented as split into multiple modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or encapsulates the circuit, or in any one of or a suitable combination of three of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules, which when executed, may perform the corresponding functions.
For example, any of the optical proximity correction module 1110, the training set processing module 1120, the first calculation module 1130, the verification set processing module 1140, the second calculation module 1150, the judgment module 1160, and the prediction module 1170 may be combined in one module to be implemented, or any of the modules may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the optical proximity correction module 1110, the training set processing module 1120, the first computation module 1130, the verification set processing module 1140, the second computation module 1150, the determination module 1160, and the prediction module 1170 may be implemented, at least in part, as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging the circuit, or in any one of or a suitable combination of any of the three implementations of software, hardware, and firmware. Alternatively, at least one of the optical proximity correction module 1110, the training set processing module 1120, the first calculation module 1130, the verification set processing module 1140, the second calculation module 1150, the judgment module 1160, and the prediction module 1170 may be at least partially implemented as a computer program module, which when executed, may perform the corresponding functions.
Fig. 12 schematically shows a block diagram of an electronic device adapted to implement the method described above, according to an embodiment of the disclosure. The electronic device shown in fig. 12 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 12, the electronic device 1200 described in the present embodiment includes a processor 1201, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1202 or a program loaded from a storage section 1208 into a Random Access Memory (RAM) 1203. The processor 1201 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)). The processor 1201 may also include on-board memory for caching purposes. The processor 1201 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the disclosure.
In the RAM 1203, various programs and data required for the operation of the system 1200 are stored. The processor 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204. The processor 1201 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 1202 and/or RAM 1203. Note that the program may be stored in one or more memories other than the ROM 1202 and the RAM 1203. The processor 1201 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in one or more memories.
According to an embodiment of the disclosure, the electronic device 1200 may also include an input/output (I/O) interface 1205, which is also connected to the bus 1204. The system 1200 may also include one or more of the following components connected to the I/O interface 1205: an input section 1206 including a keyboard, a mouse, and the like; an output section 1207 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 1208 including a hard disk and the like; and a communication section 1209 including a network interface card such as a LAN card or a modem. The communication section 1209 performs communication processing via a network such as the Internet. A drive 1210 is also connected to the I/O interface 1205 as needed. A removable medium 1211, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1210 as needed, so that a computer program read therefrom is installed into the storage section 1208 as needed.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 1209, and/or installed from the removable media 1211. The above-described functions defined in the system of the embodiments of the present disclosure are performed when the computer program is executed by the processor 1201. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs that, when executed, implement a cascade network-based optical proximity effect correction method according to an embodiment of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include the ROM 1202 and/or the RAM 1203 and/or one or more memories other than the ROM 1202 and the RAM 1203 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. The program code, when executed in a computer system, causes the computer system to implement the cascade network-based optical proximity correction method provided by embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 1201. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, and downloaded and installed via the communication section 1209 and/or installed from the removable medium 1211. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, C, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
It should be noted that each functional module in the embodiments of the present disclosure may be integrated into one processing module, or each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence, or the part of it that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined and/or integrated in various ways, even if such combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined and/or integrated in various ways without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. The scope of the disclosure should therefore not be limited to the above-described embodiments, but should be determined by the appended claims and their equivalents.

Claims (13)

1. An optical proximity correction method based on a cascade network is characterized by comprising the following steps:
S1, carrying out optical proximity effect correction on a design pattern, and carrying out pixelation processing to obtain a known data pair consisting of design pattern data and corresponding optimized mask pattern data; splitting the known data pair into a training set and a verification set;
S2, inputting the design graphic data in the training set into a cascade network in batches to obtain predicted graphic data related to training; the cascade network is formed by serially connecting a Swin Transformer variant network and a Unet convolution network;
S3, for each batch, calculating a first training loss value according to the predicted graph data related to training and the optimized mask graph data corresponding to the training set in sequence, and updating the weight parameters of the cascade network according to the first training loss value until an updated cascade network is obtained;
S4, inputting the design graphic data in the verification set into the updated cascade network to obtain prediction graphic data related to verification;
S5, calculating a second training loss value and a union intersection value according to the prediction graph data related to verification and the optimized mask graph data corresponding to the verification set;
S6, judging whether the second training loss value and the union intersection value meet corresponding preset conditions, or whether the current number of training iterations reaches the preset number of training iterations; if not, repeating the steps S2 to S6 for training; if yes, the currently updated cascade network is the trained cascade network;
S7, inputting the design graph to be optimized into the trained cascade network to obtain predicted and corrected mask graph data, and finishing optical proximity effect correction.
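Outside the claim language, steps S2 to S6 amount to a standard train/validate loop with a dual stopping criterion (a loss threshold plus an IoU threshold, capped by a maximum number of epochs). The minimal runnable sketch below uses a single scalar weight as a stand-in for the Swin Transformer + Unet cascade; all function names, the toy model, and the threshold values are illustrative assumptions, not from the patent:

```python
def evaluate(w, pairs):
    """S4/S5 stand-in: mean MSE loss and IoU of 0.5-thresholded outputs."""
    losses, inter, union = [], 0, 0
    for x, y in pairs:
        pred = [w * xi for xi in x]
        losses.append(sum((p - yi) ** 2 for p, yi in zip(pred, y)) / len(x))
        pb = [p > 0.5 for p in pred]   # binarize prediction
        yb = [yi > 0.5 for yi in y]    # binarize target mask
        inter += sum(a and b for a, b in zip(pb, yb))
        union += sum(a or b for a, b in zip(pb, yb))
    return sum(losses) / len(losses), (inter / union if union else 1.0)

def train_until_converged(train_pairs, val_pairs, *, loss_thresh=0.01,
                          iou_thresh=0.9, max_epochs=100, lr=0.5):
    """S2-S6 stand-in: per-batch MSE gradient updates, then validation;
    stop when both preset conditions hold or the epoch cap is reached."""
    w = 0.0  # the 'cascade network' collapsed to one weight parameter
    for _ in range(max_epochs):
        for x, y in train_pairs:              # S2/S3: batch update
            pred = [w * xi for xi in x]
            grad = sum(2 * (p - yi) * xi
                       for p, yi, xi in zip(pred, y, x)) / len(x)
            w -= lr * grad
        val_loss, iou = evaluate(w, val_pairs)  # S4/S5: validate
        if val_loss <= loss_thresh and iou >= iou_thresh:  # S6: stop test
            break
    return w
```

The dual criterion mirrors S6: training repeats until the validation loss and union intersection value both satisfy their preset conditions, or the preset number of training iterations is exhausted.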
2. The optical proximity correction method based on the cascade network according to claim 1, wherein the performing optical proximity correction on the design pattern in S1 includes:
performing optical proximity effect correction according to the design pattern to obtain an optimized mask pattern;
respectively carrying out pixelation processing on the design pattern and the optimized mask pattern to obtain design pattern data and corresponding optimized mask pattern data, wherein the design pattern data and the corresponding optimized mask pattern data form a known data pair;
the design pattern to be optimized in step S7 is all or most of the design pattern of the large-size mask layout to be optimized, whereas the design pattern in step S1 is a small portion selected from the plurality of design patterns obtained by splitting the large-size mask layout to be optimized by a rasterization method.
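Claim 2 distinguishes the small training patterns, cut from the large layout by rasterization-style splitting, from the full layout corrected in S7. A minimal illustration of such tiling on a pixelated layout (the tile size and the zero-padding policy for edge tiles are assumptions for illustration, not from the patent):

```python
import numpy as np

def split_layout(layout, tile):
    """Split a pixelated mask layout into tile x tile design patterns,
    in the spirit of the rasterization splitting described in claim 2.
    Edge tiles are zero-padded to the full tile size."""
    h, w = layout.shape
    tiles = []
    for y in range(0, h, tile):          # row-major sweep over the layout
        for x in range(0, w, tile):
            patch = layout[y:y + tile, x:x + tile]
            padded = np.zeros((tile, tile), dtype=layout.dtype)
            padded[:patch.shape[0], :patch.shape[1]] = patch
            tiles.append(padded)
    return tiles
```

A training subset would then be sampled from these tiles, while inference (S7) runs on the whole layout or large portions of it.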
3. The cascade network-based optical proximity correction method of claim 1, wherein the optical proximity correction method in S1 includes any one of a rule-based optical proximity correction method, a model-based optical proximity correction method, and a reverse photolithography technique.
4. The optical proximity correction method based on the cascade network according to claim 1, wherein the step S2 further comprises:
S20, constructing a cascade network, wherein the cascade network is formed by serially connecting a Swin Transformer variant network and a Unet convolution network;
the Swin Transformer variant network has a U-shaped structure and is divided into a downsampling end and an upsampling end; the downsampling end at least comprises a partition block, a linear embedding block, a Swin Transformer module, and a merging block, and the upsampling end at least comprises a Swin Transformer module, a block expansion block, and a linear mapping block; the Unet convolution network comprises an encoding end and a decoding end.
5. The cascade network-based optical proximity correction method of claim 4, wherein the block expansion in the upsampling end of the Swin Transformer variant network includes double upsampling of the downsampled feature map using a bilinear interpolation algorithm.
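Claim 5's block expansion doubles the feature-map resolution by bilinear interpolation. Below is a self-contained numpy sketch of 2x bilinear upsampling of a 2-D feature map; the align-corners sampling convention chosen here is an assumption, since the claim does not specify one:

```python
import numpy as np

def bilinear_upsample_2x(feat):
    """Double the spatial size of a 2-D feature map by bilinear interpolation.

    Uses the align_corners=True convention: output rows/cols sample the
    input coordinate grid linspace(0, H-1, 2H) x linspace(0, W-1, 2W).
    """
    h, w = feat.shape
    ys = np.linspace(0, h - 1, 2 * h)          # fractional source rows
    xs = np.linspace(0, w - 1, 2 * w)          # fractional source cols
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                    # vertical blend weights
    wx = (xs - x0)[None, :]                    # horizontal blend weights
    top = feat[np.ix_(y0, x0)] * (1 - wx) + feat[np.ix_(y0, x1)] * wx
    bot = feat[np.ix_(y1, x0)] * (1 - wx) + feat[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

In a real network layer this would be applied per channel (and typically followed by a learned projection), but the interpolation arithmetic is as above.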
6. The optical proximity correction method based on the cascade network according to claim 1, wherein S2 comprises:
inputting the design graphic data in the training set into a Swin Transformer variant network of a cascade network in batches to obtain training-related rough prediction graphic data;
and inputting the training-related rough prediction graph data into a Unet convolution network in the cascade network to obtain training-related prediction graph data.
7. The optical proximity correction method based on the cascade network according to claim 1, wherein the S3 includes:
for each batch, calculating a first training Loss value Loss according to the predicted graph data related to training and the optimized mask graph data corresponding to the training set, in sequence, by the following formula:
Loss = (1/n) · Σ_{i=1}^{n} MSE(M_pred(x, y), M_opt(x, y))
wherein M_pred(x, y) is the predicted graph data related to training, M_opt(x, y) is the optimized mask graph data corresponding to the training set, MSE represents the mean square error, and n is the batch size;
back-propagating the first training loss value to obtain gradients of the cascade network weight parameters;
and updating the weight parameters of the cascade network according to the gradient until the updated cascade network is obtained.
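The formula image for claim 7 is not reproduced in this text, but the variable list (M_pred, M_opt, MSE, batch size n) describes a batch-averaged mean square error. A minimal sketch under that reading, with illustrative names and a (n, H, W) array layout assumed:

```python
import numpy as np

def batch_mse_loss(m_pred, m_opt):
    """Claim 7's first training Loss, read off the variable definitions:
    the mean square error between predicted pattern data M_pred(x, y) and
    optimized mask pattern data M_opt(x, y), averaged over the n samples
    of a batch. Inputs: arrays of shape (n, H, W)."""
    n = m_pred.shape[0]
    per_sample_mse = ((m_pred - m_opt) ** 2).reshape(n, -1).mean(axis=1)
    return float(per_sample_mse.sum() / n)  # (1/n) * sum of per-sample MSEs
```

In training, this scalar would be back-propagated to obtain the gradients of the cascade network's weight parameters, as the claim continues.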
8. The optical proximity correction method based on the cascade network according to claim 1, wherein S5 comprises:
the union intersection value IoU is calculated by the following formula:
IoU = (1/(k+1)) · Σ_{i=0}^{k} p_ii / (Σ_{j=0}^{k} p_ij + Σ_{j=0}^{k} p_ji − p_ii)
wherein k+1 represents the number of matrix element value classes; p_ij represents the number of matrix elements that should be classified as value i but are predicted as j; p_ii represents the number of matrix elements that should be classified as value i and are predicted as i; p_jj represents the number of matrix elements that should be classified as value j and are predicted as j; and p_ji represents the number of matrix elements that should be classified as value j but are predicted as i;
the corresponding preset conditions in S6 include:
the second training loss value is less than or equal to a first preset threshold and the union crossing value is greater than or equal to a second preset threshold.
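The textual variable definitions in claim 8 (counts p_ij read from a confusion matrix over k+1 classes) match the standard mean intersection-over-union computation; the formula image itself is not reproduced in this text, so the sketch below follows those definitions, with illustrative names and integer class-label arrays assumed:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Union intersection value per claim 8's variable definitions: with
    k+1 = num_classes and p_ij the count of elements of true class i
    predicted as class j, average p_ii / (sum_j p_ij + sum_j p_ji - p_ii)
    over the classes that actually occur."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        conf[t, p] += 1                       # rows: true i, cols: predicted j
    ious = []
    for i in range(num_classes):
        inter = conf[i, i]                    # p_ii
        union = conf[i, :].sum() + conf[:, i].sum() - inter
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

For the binary masks in this method (k+1 = 2: background and mask pixels), the check in S6 passes when this value meets the second preset threshold.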
9. A lithographic method, comprising: obtaining predicted and corrected mask pattern data by using the optical proximity effect correction method based on the cascade network according to any one of claims 1 to 8, outputting a predicted and corrected mask pattern, and performing projection lithography or surface plasmon super-resolution lithography according to the predicted and corrected mask pattern.
10. An optical proximity correction system based on a cascade network, comprising:
the optical proximity effect correction module is used for carrying out optical proximity effect correction on the design pattern and carrying out pixelation processing to obtain a known data pair consisting of the design pattern data and the corresponding optimized mask pattern data; splitting the known data pair into a training set and a verification set;
The training set processing module is used for inputting the design graphic data in the training set into the cascade network in batches to obtain predicted graphic data related to training; the cascade network is formed by serially connecting a Swin Transformer variant network and a Unet convolution network;
the first calculation module is used for calculating a first training loss value according to the predicted graph data related to training and the optimized mask graph data corresponding to the training set in sequence for each batch, and updating the weight parameters of the cascade network according to the first training loss value until an updated cascade network is obtained;
the verification set processing module is used for inputting the design graphic data in the verification set into the updated cascade network to obtain prediction graphic data related to verification;
a second calculation module, configured to calculate a second training loss value and a union intersection value according to the prediction pattern data related to verification and the optimized mask pattern data corresponding to the verification set;
the judging module is used for judging whether the second training loss value and the union intersection value meet corresponding preset conditions, or whether the current number of training iterations reaches the preset number of training iterations; if not, repeating training; if yes, the cascade network after the current weight parameter update is the trained cascade network;
And the prediction module is used for inputting the design graph to be optimized into the trained cascade network to obtain predicted and corrected mask graph data, and finishing optical proximity effect correction.
11. An electronic device, comprising:
a processor;
a memory storing a computer executable program that, when executed by the processor, causes the processor to perform the cascade network-based optical proximity effect correction method of any one of claims 1 to 8.
12. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the cascade network-based optical proximity effect correction method according to any one of claims 1 to 8.
13. A computer program product comprising a computer program which, when executed by a processor, implements the cascade network-based optical proximity correction method according to any one of claims 1 to 8.
CN202311144751.6A 2023-09-05 2023-09-05 Optical proximity correction method and system based on cascade network and photoetching method Pending CN117031871A (en)

Priority Applications (1)

Application Number: CN202311144751.6A — Priority Date: 2023-09-05 — Filing Date: 2023-09-05 — Title: Optical proximity correction method and system based on cascade network and photoetching method

Publications (1)

Publication Number: CN117031871A — Publication Date: 2023-11-10

Family

ID=88630040



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination