CN110347017B - Overlay error extraction method based on optical diffraction - Google Patents


Info

Publication number: CN110347017B (application CN201910581646.6A; first published as CN110347017A)
Authority: CN (China)
Prior art keywords: overlay, error, optical, overlay error, neural network
Legal status: Active (assumed; Google has not performed a legal analysis)
Inventors: Shi Yating (石雅婷), Li Kuangyi (李旷逸), Chen Xiuguo (陈修国), Liu Shiyuan (刘世元)
Assignee (current and original): Huazhong University of Science and Technology
Other languages: Chinese (zh)

Classifications

    • G03F 7/7085: Microphotolithographic exposure apparatus; detection arrangements (e.g. detectors of apparatus alignment, possibly mounted on wafers, exposure dose, photo-cleaning flux, stray light, thermal load)
    • G03F 9/7088: Registration or positioning for microlithography; alignment mark detection (e.g. TTR, TTL, off-axis detection, array detector, video detection)
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods

Abstract

The invention belongs to the field of photolithography and discloses an overlay error extraction method based on optical diffraction, comprising the following steps: (1) determining the overlay mark structure and the optical constants of the materials; (2) establishing a forward optical characteristic model of the overlay mark; (3) generating a training set based on the optical characteristic model; (4) determining the neural network structure; (5) training the neural network; and (6) extracting the overlay error. Compared with existing overlay error extraction methods, the proposed method does not depend on the optical resolution or on an empirical linear relation, can measure the overlay error in a single direction based on one mark unit, requires a smaller overlay mark area, and can extract the overlay error from more complicated nonlinear overlay optical characteristic quantities; the extraction process is fast, accurate and robust.

Description

Overlay error extraction method based on optical diffraction
Technical Field
The invention belongs to the field of photolithography, and particularly relates to an overlay error extraction method based on optical diffraction.
Background
With the rapid development of integrated circuit (IC) processes, the critical dimension (CD) has shrunk to the 7 nm node. Photolithography is the most critical process in IC manufacturing; its main performance indexes include overlay accuracy, substrate size, resolution, light-source wavelength and the like. Overlay accuracy refers to the alignment accuracy between the pattern of the current lithography layer and that of the previous process layer. Generally, the allowable overlay error is 1/3 to 1/5 of the critical dimension, so at advanced IC manufacturing nodes, fast, accurate and stable measurement and evaluation of the overlay error is key to guaranteeing semiconductor device performance.
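As a back-of-the-envelope illustration of that tolerance rule (a sketch only; the 7 nm CD is used here as an example node, and the 1/3 to 1/5 rule is the one quoted above):

```python
# Overlay error budget: commonly quoted as 1/3 to 1/5 of the critical dimension (CD).
def overlay_budget(cd_nm):
    """Return the (loose, tight) overlay error tolerance in nm for a given CD."""
    return cd_nm / 3.0, cd_nm / 5.0

loose, tight = overlay_budget(7.0)  # example: 7 nm node
print(f"allowed overlay error: {tight:.2f} nm to {loose:.2f} nm")
```

At a 7 nm CD this gives a budget of roughly 1.4 nm to 2.3 nm, which motivates the sub-nanometer repeatability figures reported later in this document.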
Haiyong Gao et al., in the document "Comprehensive study of Diffraction-Based Overlay and Image-Based Overlay measurements on programmed Overlay errors", concluded that overlay error measurement methods are largely divided into image-based overlay measurement (IBO) and diffraction-based overlay measurement (DBO). For two lithography process layers that carry an overlay relation, both methods require overlay marks designed at the same positions in the two layers; the overlay metrology equipment locates and measures the marks and extracts the overlay error from the measured information. The lithography system then determines from the obtained overlay error whether the semiconductor device is qualified. Generally, the overlay mark is placed at the edge of the exposed area (the scribe line), but as the scribe line shrinks at advanced manufacturing nodes, the overlay mark size must be controlled more tightly.
Typical IBO overlay marks and the method for determining the overlay errors (εX, εY) are shown in FIG. 4. Young-Nam Kim et al., in the document "Device based in-chip critical dimension and overlay metrology", indicate that the IBO overlay mark size is typically several tens of microns, two orders of magnitude larger than the semiconductor devices in the integrated circuit, so the overlay error obtained by the IBO method may not reflect the true overlay error of the upper and lower aligned layers in the device. Waia mentions in the document "Advanced lithography theory and application of very large scale integrated circuits" that the IBO method is more severely affected by the chemical mechanical polishing (CMP) process than the DBO method, and that the method is limited by the optical resolution.
In the document "Evaluating Diffraction-Based Overlay", Jie Li et al. mention that the DBO method mainly comprises eDBO (empirical DBO) and mDBO (model-based DBO). Chinese patent CN103472004B indicates that the mDBO method must solve complex partial differential equations in real time to construct the overlay mark optical model, and therefore can hardly meet the time requirement of in-situ overlay error measurement.
The eDBO method extracts the overlay error based on the local linear relationship of the overlay optical characterization curve (FIG. 2), where curve 201 in FIG. 2 is the ideal curve and curve 202 is the actual curve, shifted by an amount that cannot be predicted in advance. A typical eDBO overlay mark 301 is shown in FIGS. 3 (a) and (b) and comprises four cells: the cells labeled 302 and 303 measure the overlay error in the X direction, and the cells labeled 304 and 305 measure the overlay error in the Y direction. Taking cell 302 as an example, its cross-sectional structure is shown in FIG. 3 (b), where εX represents the overlay error value in the X direction. The method mentioned in Chinese patent CN103454861B has the advantage of high measurement speed. This method is also mentioned in Chinese patents CN103454861B and CN200510091733.1 and in US patents US7173699B2, US7477405B2, US7428060B2 and US6985232B2.
The key of the eDBO method lies in the local, approximately linear relation of the overlay optical characterization curve and in the sensitivity of that linear relation, both of which are determined by the overlay mark structure, the overlay mark material and the measurement conditions. However, under the influence of complicated IC manufacturing processes, overlay marks can hardly guarantee the ideal designed topography and measurement conditions. In addition, other asymmetry factors of the overlay mark shift the overlay characterization curve: the ideal curve 201 is used for calculation, but the actual curve 202 is shifted with respect to it, which severely degrades the overlay error extraction accuracy μ.
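For context, the linear extraction that eDBO-style methods rely on can be sketched as follows. This uses the common two-biased-cell scheme from the general DBO literature (programmed biases +d and -d, asymmetry signals A± = K(ε ± d)); it is an illustrative assumption, not the exact scheme of this patent:

```python
def edbo_extract(a_plus, a_minus, d):
    """Overlay error from the asymmetry signals of two cells with programmed
    biases +d and -d, assuming the linear model A± = K*(eps ± d); the unknown
    sensitivity K cancels in the ratio."""
    return d * (a_plus + a_minus) / (a_plus - a_minus)

# Simulated signals: sensitivity K = 0.8, true overlay error 2.0 nm, bias 20 nm.
K, eps_true, d = 0.8, 2.0, 20.0
a_p, a_m = K * (eps_true + d), K * (eps_true - d)
print(edbo_extract(a_p, a_m, d))  # recovers 2.0
```

The extraction is exact only while the linear model holds; a curve shift such as 202 versus 201 breaks this assumption, which is precisely the weakness discussed above.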
Therefore, there is a need for an overlay error extraction method with simpler operation and higher efficiency and accuracy.
Disclosure of Invention
Aiming at the above defects or improvement requirements of the prior art, the invention provides an overlay error extraction method based on optical diffraction. Based on the diffraction characteristics of gratings and combined with the overlay mark structure, a training set is generated by a forward model and used to train a neural network to learn the mapping between the overlay error value and the optical characteristic quantity; the trained neural network can then quickly, accurately and robustly extract the overlay error value from the optical characteristic quantity of the overlay mark under test.
To achieve the above object, according to an aspect of the present invention, there is provided an overlay error extraction method based on optical diffraction, including the steps of:
(1) determining an overlay mark structure and an optical constant of a material:
the overlay mark comprises two overlay mark units, used respectively for extracting the overlay errors in the X and Y directions; the two units have the same cross-sectional structure, comprising, from top to bottom as the first to fourth layers, a mask grating layer, a mask film layer, a mixed layer of the lithography grating and film material, and an oxide film layer; the optical constants of each layer of material comprise the refractive index n and the extinction coefficient k; the overlay mark structure parameters include: the common period Λ of the mask grating and the lithography grating, the line widths CD1 and CD2 of the mask grating and the lithography grating, the heights H1, H2, H3 and H4 of the first to fourth layers of the overlay mark, the left and right sidewall angles LSWA and RSWA of the lithography grating, and the theoretical value ε of the overlay error;
(2) establishing a forward optical characteristic model of the overlay mark according to the overlay mark structure parameters and material optical constants determined in step (1), combined with preset measurement conditions, and obtaining the optical characteristic quantity of the overlay error;
(3) according to the overlay mark structure parameters determined in step (1) and the forward optical characteristic model established in step (2), randomly sampling the overlay mark structure parameters within specified deviation ranges to generate a number of training samples containing the optical characteristic quantities and the theoretical overlay error values;
(4) determining the neural network structure, taking the optical characteristic quantity as the input layer of the neural network and the overlay error as its output layer; the output of the output layer is the extracted overlay error value ε′;
(5) establishing a loss function representing the deviation between the extracted overlay error value ε′ and the theoretical overlay error value; inputting the optical characteristic quantities of all training samples into the neural network once and obtaining the corresponding outputs constitutes one iteration; iteration stops when a specified number of iterations is reached, when the loss function value reaches a preset range, or when the loss function value stabilizes, giving the trained neural network;
(6) inputting the optical characteristic quantity obtained by the actual measurement of the overlay mark to be measured into the trained neural network, and extracting the overlay error.
Further, the optical characterization quantity in step (2) is any one of the reflectivity, the ellipsometric parameters, or the Mueller matrix; the measurement conditions include the incidence angle θ, the azimuth angle φ, the wavelength λ, the incident-light electric field vector E, and the polarization angle Ψ.
Further, the optical characteristic quantity obtained in step (2) is any one of the reflectivity, the ellipsometric parameters and the Mueller matrix.
Further, the optical characteristic quantity obtained in step (2) is a Mueller matrix, in the form of a one-dimensional overlay error optical characteristic spectrum obtained by varying a single measurement condition, a two-dimensional overlay error optical characteristic matrix obtained by varying two measurement conditions, or high-dimensional data obtained by varying several measurement conditions.
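The data forms in this claim can be sketched as array shapes; the forward_model below is a hypothetical scalar stand-in, not the patent's rigorous Maxwell solver:

```python
# Organizing characteristic data by the number of varied measurement conditions.
def forward_model(wavelength_nm, theta_deg):
    # Hypothetical stand-in returning one scalar response; a real model would
    # return Mueller-matrix elements computed by RCWA/FEM/BEM/FDTD.
    return wavelength_nm * 1e-3 + theta_deg * 1e-2

wavelengths = [300 + 5 * i for i in range(25)]   # 300-420 nm grid
angles = [45 + i for i in range(11)]             # 45-55 deg grid

# 1-D spectrum: vary lambda only, fix theta.
spectrum_1d = [forward_model(w, 65.0) for w in wavelengths]
# 2-D matrix: vary both lambda and theta.
matrix_2d = [[forward_model(w, t) for t in angles] for w in wavelengths]

print(len(spectrum_1d), len(matrix_2d), len(matrix_2d[0]))
```

Varying a third condition (e.g. azimuth) would add one more nested dimension, giving the high-dimensional data the claim mentions.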
Further, the network structure established in step (4) is a fully connected network, a convolutional neural network or a recurrent neural network.
Further, the network structure established in step (4) is a fully connected network, expressed as:

\varepsilon' = \sum_{j=1}^{q} h_j \cdot \mathrm{LR}\Big( \sum_{i=1}^{N} w_{i,j} M_i + b_j \Big)

where M_i represents the ith data point in the Mueller spectrum, w_{i,j} the connection weight between the ith input and the jth neuron, and b_j and h_j the bias of the jth neuron and its connection weight to the output layer, respectively; LR denotes the LeakyReLU activation function.
Further, before training, w_{i,j}, h_j and b_j are randomly initialized within the range ±0.001.
Further, the loss function in the step (5) is a mean square error loss function, a cross entropy loss function or an exponential loss function.
Further, the loss function in step (5) is the mean square error loss function, expressed as:

\mathrm{MSE} = \frac{1}{N_s} \sum_{n=1}^{N_s} \big( \varepsilon_n - \varepsilon'_n \big)^2

where ε_n and ε′_n represent the theoretical and extracted overlay error values of the nth training sample, respectively, and N_s is the number of training samples.
In general, compared with the prior art, the above technical solution contemplated by the present invention can obtain the following beneficial effects:
(1) compared with the existing overlay error extraction method, the method provided by the invention does not depend on the optical resolution and the empirical linear relation, can realize the overlay error measurement in a single direction based on one unit, has smaller overlay mark area, and can extract the overlay error from more complicated nonlinear overlay optical characterization quantities.
(2) The method extracts quickly and robustly, and can resist the influence of certain overlay mark morphology parameter variations (such as sidewall angles and process errors of film thickness) on the overlay error measurement result. The method is therefore suitable for in-situ measurement at advanced IC manufacturing nodes.
(3) Compared with traditional overlay error extraction methods, the proposed method is neither limited by the optical resolution nor dependent on an empirical linear relation, and can measure the overlay error in one direction with a single mark unit, so the overall overlay mark size is small.
(4) The method considers the process deviations of the overlay mark when building the training samples, so the trained neural network model still extracts the overlay error quickly, accurately and robustly even when the overlay mark topography is not ideal.
Drawings
FIG. 1 is a process flow of an overlay error extraction method according to a preferred embodiment of the present invention;
FIG. 2 is the overlay error optical characteristic curve of the prior-art eDBO method;
FIGS. 3 (a) and (b) are top layout and cross-sectional views, respectively, of an exemplary eDBO process overlay mark;
FIG. 4 is a schematic illustration of prior art IBO process overlay marks;
FIG. 5 is a fully-connected neural network architecture in accordance with a preferred embodiment of the present invention;
FIGS. 6 (a) and (b) are top layout views of one-dimensional overlay marks used to extract overlay errors in X and Y directions, respectively, in a preferred embodiment of the present invention;
FIG. 6 (c) is a cross-sectional view of a one-dimensional overlay mark according to a preferred embodiment of the present invention;
FIG. 7 is a schematic diagram of overlay error measurement conditions and a schematic diagram of a training sample according to a preferred embodiment of the present invention;
FIGS. 8 (a) and (b) are a schematic perspective view and an alignment error diagram of a two-dimensional periodic alignment mark according to a preferred embodiment of the present invention, respectively;
FIG. 9 is an iterative decreasing curve of the MSE loss function during the neural network training process in accordance with the preferred embodiment of the present invention;
FIG. 10 is a diagram illustrating the overlay error extraction accuracy μ according to a preferred embodiment of the present invention;
FIG. 11 is a schematic diagram of overlay error extraction timing according to a preferred embodiment of the present invention;
fig. 12 is a schematic diagram of overlay error extraction repeatability precision σ according to the preferred embodiment of the present invention.
The same reference numbers will be used throughout the drawings to refer to the same or like elements or structures, wherein:
201 - ideal curve, 202 - actual curve, 301 - overlay mark, 302-305 - the four overlay mark units of an eDBO mark, 401 - input layer, 402 - intermediate layer, 403 - output layer, 601 - one-dimensional overlay error optical characterization spectrum, 602 - two-dimensional overlay error optical characterization matrix.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The main principle of the invention is as follows. First, the topography of the overlay mark, i.e. the structural shape of each layer (e.g. thin film, grating, two-dimensional periodic structure), is determined according to the semiconductor process, and the optical constants (refractive index n and extinction coefficient k) of all materials in each layer are measured with an instrument (or obtained directly from reference data). Once the mark structure and materials are determined, a forward optical characteristic model of the overlay mark is constructed by a method such as rigorous coupled-wave analysis (RCWA), the finite element method (FEM), the boundary element method (BEM) or the finite-difference time-domain method (FDTD). To make the method suitable for complex semiconductor processes, non-ideal overlay mark topography must be considered. Specifically, the invention generates training samples for neural network learning within a certain variation range of the overlay mark topography, each sample comprising the overlay error optical characteristic quantity and the theoretical overlay error value. Then a suitable neural network structure and loss function are determined, and an optimization algorithm iteratively updates the network parameters to minimize the loss function, completing the training of the neural network. Finally, to verify the performance of the trained network, a test set containing random noise is generated with the forward model, and overlay error extraction on this noisy test set evaluates the repeatability precision and accuracy of the overlay error measurement.
Both the one-dimensional overlay mark structure (FIG. 6) and the two-dimensional overlay mark structure (FIG. 8) are suitable for the method of the invention. The following describes the overlay error extraction method in detail, taking the one-dimensional overlay mark structure as an example; FIG. 1 shows the main flow, with detailed steps as follows:
(1) determining overlay mark structure and material optical constant
In this embodiment, the overlay mark includes two overlay mark units, shown in FIGS. 6 (a) and (b), for extracting the overlay errors in the X and Y directions, respectively. The two units have the same cross-sectional structure, shown in FIG. 6 (c). The overlay mark comprises five layers from top to bottom: the first layer is a photoresist grating; the second is a Si3N4 film; the third is a mixed layer of grating material and filling material, where the grating material is Si and the filling material is Si3N4; the fourth is a SiO2 film; and the bottom layer is a Si substrate. The material optical constants include the refractive index n and extinction coefficient k, which can be characterized with an ellipsometer or other measuring device, or obtained directly from reference data on material optical constants.
It should be noted that the key of the invention lies in the layered structure and the grating dimensions and morphology; the specific materials and material parameters can be selected according to the actual situation. Preferably, the materials of the overlay mark layers are the same as those of the corresponding layers of the target overlay device, which facilitates synchronous processing and lets the mark objectively reflect the actual overlay error of the target device.
As shown in FIG. 6 (c), the cross-sectional structure of this embodiment is characterized by 10 parameters: Λ is the grating period (the periods of the two grating layers are equal); CD1 and CD2 are the line widths of the two grating layers; H1, H2, H3 and H4 are the heights of the overlay mark layers; LSWA and RSWA are the left and right sidewall angles of the Si grating; and ε is the overlay error. Since the cross-sectional structures in the X and Y directions are the same, the overlay error is calculated in the same way in both directions; the method is therefore universal, and the directions are not specially distinguished below.
(2) Establishing a positive optical characteristic model of overlay mark
Constructing the forward optical characteristic model of the overlay mark requires solving complex partial differential equations (such as the Maxwell equations or the Helmholtz equation) for the overlay mark structure parameters, material optical constants and preset measurement conditions determined in step (1), to obtain the final overlay error optical characteristic quantity. The measurement conditions in FIG. 7 are as follows: θ, φ and λ denote the incidence angle, azimuth angle and wavelength of the measurement, respectively, and E and Ψ denote the incident-light electric field vector and polarization angle, respectively.
For the overlay mark structure set in step (1) and the known measurement conditions, the Maxwell equations or the Helmholtz equation are solved by analytical or numerical modeling techniques, including rigorous coupled-wave analysis (RCWA), the finite element method (FEM), the boundary element method (BEM) and the finite-difference time-domain method (FDTD). The optical characteristic quantity may be the reflectivity measured by a reflectometer, the ellipsometric parameters measured by an ellipsometer, or the Mueller matrix measured by a Mueller matrix ellipsometer.
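Although the full forward model needs a rigorous solver, part of its structure is elementary; for instance, the grating equation alone already determines which diffraction orders propagate in air for a given period Λ, wavelength λ and incidence angle θ (a self-contained sketch with example values, not taken from the patent):

```python
import math

def propagating_orders(period_nm, wavelength_nm, theta_deg):
    """Diffraction orders m that propagate in air for a 1-D grating, i.e. those
    with |sin(theta) + m*lambda/period| <= 1 (the grating equation)."""
    s = math.sin(math.radians(theta_deg))
    return [m for m in range(-20, 21)
            if abs(s + m * wavelength_nm / period_nm) <= 1.0]

# Example: 800 nm period, 400 nm wavelength, 65 deg incidence.
print(propagating_orders(800.0, 400.0, 65.0))  # [-3, -2, -1, 0]
```

A rigorous solver such as RCWA then computes the complex amplitude of each of these orders, from which reflectivity, ellipsometric parameters or Mueller matrices follow.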
(3) Generating a training set based on a forward optical property model
After the construction method of the overlay mark optical forward characteristic model is determined, a training sample needs to be generated, wherein the training sample comprises an overlay error optical characteristic quantity and an overlay error theoretical value. As shown in fig. 7, the alignment error optical characteristic quantity may be in the form of a one-dimensional alignment error optical characteristic spectrum 601 obtained by changing a single measurement condition, a two-dimensional alignment error optical characteristic matrix 602 obtained by changing two measurement conditions, or even high-dimensional data (not shown) obtained by changing a plurality of measurement conditions.
In this embodiment, the overlay error optical characteristic quantity is a one-dimensional overlay error optical characteristic spectrum, specifically a one-dimensional Mueller matrix spectrum with λ from 300 nm to 420 nm; the other measurement conditions θ, φ and Ψ are 65°, 90° and 0°, respectively.
After the form of the overlay error optical characteristic quantity is determined, to further improve accuracy, this embodiment also considers the influence of overlay mark manufacturing process errors on the mark topography, and generates the training set accordingly. Specifically, the 10 morphology parameters described in step (1) are randomly varied within the ranges shown in Table 1 to simulate random process errors, generating 1000 training samples. The simulated error ranges can be chosen according to the actual error conditions or empirical values, and are not limited to Table 1.
TABLE 1 Overlay mark parameter training-set ranges [the table is reproduced as an image in the original document]
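The random sampling of step (3) can be sketched as below; since Table 1 appears as an image in the original, the nominal values and deviation ranges used here are placeholders, not the patent's values:

```python
import random

# Hypothetical nominal values (nm or deg) and half-widths of the uniform
# deviation ranges for the 10 morphology parameters of step (1); placeholders
# standing in for the actual Table 1 values.
NOMINAL = {"Lambda": 800.0, "CD1": 400.0, "CD2": 400.0,
           "H1": 100.0, "H2": 50.0, "H3": 150.0, "H4": 30.0,
           "LSWA": 90.0, "RSWA": 90.0, "eps": 0.0}
DEVIATION = {"Lambda": 0.0, "CD1": 5.0, "CD2": 5.0,
             "H1": 5.0, "H2": 5.0, "H3": 5.0, "H4": 5.0,
             "LSWA": 2.0, "RSWA": 2.0, "eps": 20.0}

def sample_mark(rng):
    """Draw one overlay-mark morphology by uniform sampling within the ranges."""
    return {k: NOMINAL[k] + rng.uniform(-DEVIATION[k], DEVIATION[k])
            for k in NOMINAL}

rng = random.Random(0)
training_marks = [sample_mark(rng) for _ in range(1000)]  # 1000 samples
print(len(training_marks))
```

Each sampled morphology would then be pushed through the forward model to obtain the paired (optical characteristic quantity, theoretical ε) training sample.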
(4) Determining neural network structure
The neural network used for overlay error extraction can be a fully connected neural network, a convolutional neural network, a recurrent neural network or another network; its basic principle is to establish an accurate functional relationship between the input and output parameters by adaptively and iteratively updating the network parameters, so that the target parameter can be obtained directly from the inputs.
Taking the fully connected neural network as an example, the invention provides a specific method with high operation efficiency and accuracy.
In this embodiment, the fully connected network structure is shown in FIG. 5 and comprises an input layer 401, an intermediate layer 402 and an output layer 403. The input data of the input layer 401 is a one-dimensional Mueller matrix spectrum of length N = 75 (each Mueller matrix corresponds to one wavelength λ; 75 wavelengths correspond to 75 Mueller matrices, so each Mueller matrix element has 75 values, i.e. the Mueller matrix spectrum length is 75; the 75 wavelengths can also be drawn randomly within the given wavelength range). The number of neurons in the intermediate layer 402 is q = 50, marked q1 to q50 in the figure; the activation function of each neuron is LeakyReLU (abbreviated LR). The output layer 403 gives the overlay error value. The spectrum length, the number of intermediate layers and the number of neurons are all adjustable; the specific values are given only for the subsequent tests of operation efficiency and extraction accuracy and are not limiting. In addition, the input data may use the spectra corresponding to all Mueller matrix elements, or the spectra of one or more elements with higher sensitivity to measurement-condition changes, obtaining accurate results with less input data.
Therefore, the overlay error extracted by the fully connected neural network can be expressed by equation (1):

\varepsilon' = \sum_{j=1}^{q} h_j \cdot \mathrm{LR}\Big( \sum_{i=1}^{N} w_{i,j} M_i + b_j \Big)    (1)

where M_i represents the ith data point in the Mueller spectrum, w_{i,j} the connection weight between the ith input and the jth neuron, and b_j and h_j the bias of the jth neuron and its connection weight to the output layer, respectively.
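Equation (1) maps directly to code; a minimal pure-Python sketch with random (untrained) toy weights:

```python
import random

def leaky_relu(x, slope=0.01):
    """LeakyReLU (LR) activation."""
    return x if x > 0 else slope * x

def fc_forward(m, w, b, h):
    """Evaluate eq. (1): eps' = sum_j h[j] * LR(sum_i w[i][j]*m[i] + b[j])."""
    return sum(
        h[j] * leaky_relu(sum(w[i][j] * m[i] for i in range(len(m))) + b[j])
        for j in range(len(b))
    )

rng = random.Random(42)
N, q = 75, 50  # spectrum length and hidden-neuron count from the embodiment
w = [[rng.uniform(-0.001, 0.001) for _ in range(q)] for _ in range(N)]
b = [rng.uniform(-0.001, 0.001) for _ in range(q)]
h = [rng.uniform(-0.001, 0.001) for _ in range(q)]
spectrum = [rng.uniform(-1.0, 1.0) for _ in range(N)]  # stand-in Mueller data
print(fc_forward(spectrum, w, b, h))
```

With a single hidden layer of 50 neurons, one evaluation costs only N*q multiply-adds, which is why the extraction time reported later is in the millisecond range.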
(5) Neural network training
To measure the overlay error extraction performance of the neural network, this embodiment defines the mean square error (MSE) loss function as the evaluation index, calculated as:

\mathrm{MSE} = \frac{1}{N_s} \sum_{n=1}^{N_s} \big( \varepsilon_n - \varepsilon'_n \big)^2    (2)

where ε_n and ε′_n represent the theoretical overlay error value and the value extracted by the neural network for the nth training sample, respectively, and N_s is the number of training samples. As shown in step (4), the MSE depends on the weights w_{i,j} and h_j and the biases b_j; that is, the ability of the method to extract the overlay error depends on a suitable choice of these parameters.
To obtain the optimal combination of w_{i,j}, h_j and b_j, these parameters are first randomly initialized within ±0.001 and then iteratively adjusted to minimize the MSE loss function. The iterative optimization can use the Adam algorithm or other common optimization algorithms, with little influence on the final extraction result. Inputting all training samples into the fully connected network once and obtaining the corresponding outputs completes one iteration.
In this embodiment, to prevent overfitting, training is terminated after 5000 iterations; the evolution of the MSE is plotted in fig. 9 (in practice, training may simply be terminated once the MSE has converged stably, and is not limited to 5000 iterations). It can be seen that the MSE loss function drops to the 10⁻⁴ level, so the trained network has good overlay error extraction performance on the training samples.
(6) Overlay error extraction test
To quickly verify the effectiveness of the method, this embodiment randomly generates 100 test samples according to the method of step (3) to form a test set that simulates the optical characteristic quantities measured in an actual lithography process, and inputs them into the fully-connected neural network trained in step (5) for a simulated overlay error extraction test.
It can be understood that in an actual on-line test, the measured optical characteristic quantity of the overlay mark under test is input directly into the trained fully-connected neural network.
Fig. 10 shows the overlay error extraction accuracy μ of the method over the 100 test samples. It can be seen that the method still extracts the overlay error accurately even when the overlay mark morphology parameters are not ideal (e.g., asymmetric sidewall angles, linewidth and line-height fluctuations).
Fig. 11 shows the extraction time for a single overlay error value over the 100 test samples, which is on the order of 10⁻³ s, so the method meets the speed requirement for in-situ overlay error measurement. In addition, to verify the robustness of the method, Gaussian noise at the one-thousandth level is introduced into the Mueller spectra of the test samples, and the repeatability precision σ of the overlay error extraction is computed.
Fig. 12 shows the repeatability precision after introducing one-thousandth Gaussian noise into the 100 test samples; σ is on the order of 10⁻² nm, indicating good robustness of the overlay error extraction.
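A hedged sketch of this repeatability check (the network parameters below are arbitrary placeholders rather than the trained values of step (5), and the noise is assumed to be Gaussian with standard deviation equal to one thousandth of the mean spectrum magnitude):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

rng = np.random.default_rng(1)
N, q = 75, 50

# Placeholder network parameters; in practice these come from step (5).
w = rng.normal(scale=0.1, size=(N, q))
b = rng.normal(scale=0.1, size=q)
h = rng.normal(scale=0.1, size=q)

def extract(M):
    return float(h @ leaky_relu(M @ w + b))   # equation (1)

M = rng.normal(size=N)                        # one test spectrum
noise_sd = 1e-3 * float(np.abs(M).mean())     # assumed "one thousandth" level
repeats = [extract(M + rng.normal(scale=noise_sd, size=N))
           for _ in range(100)]
sigma = float(np.std(repeats))                # repeatability precision
```

The statistic σ is simply the standard deviation of the extracted values over repeated noisy measurements of the same mark.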
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. An overlay error extraction method based on optical diffraction is characterized by comprising the following steps:
(1) determining an overlay mark structure and an optical constant of a material:
the overlay mark comprises two overlay mark units, used respectively for extracting the overlay errors in the X direction and the Y direction; the two units have the same cross-sectional structure, comprising, from the first layer to the fourth layer from top to bottom, a mask grating layer, a mask film layer, a mixed layer of the lithography grating and film material, and an oxide film layer; the optical constants of each layer of material comprise a refractive index n and an extinction coefficient k; the overlay mark structure parameters comprise:
the period Λ of the mask grating and the lithography grating, the linewidths CD_1 and CD_2 of the mask grating and the lithography grating, the heights H_1, H_2, H_3 and H_4 of the first to fourth layers of the overlay mark, the left and right sidewall angles LSWA and RSWA of the lithography grating, and the overlay error theoretical value;
(2) establishing a forward optical characteristic model of the overlay mark according to the overlay mark structure parameters and material optical constants determined in step (1), combined with preset measurement conditions, and obtaining the optical characteristic quantity of the overlay error;
(3) according to the overlay mark structure parameters determined in step (1) and the forward optical characteristic model established in step (2), taking random values of the overlay mark structure parameters within a specified deviation range to generate a plurality of training samples containing optical characteristic quantities and overlay error theoretical values;
(4) determining a neural network structure, taking the optical characteristic quantity as the input layer of the neural network and the overlay error as the output layer of the neural network; the output result of the output layer is the extracted value ε′ of the overlay error;
(5) establishing a loss function characterizing the deviation between the extracted value ε′ of the overlay error and the theoretical value of the overlay error; inputting the optical characteristic quantities of all training samples into the neural network once and obtaining the corresponding outputs is regarded as one iteration; the iteration stops when a specified number of iterations is reached, the loss function value reaches a preset range, or the loss function value becomes stable, yielding the trained neural network;
(6) inputting the optical characteristic quantity obtained by the actual measurement of the overlay mark to be measured into the trained neural network, and extracting the overlay error.
2. The overlay error extraction method based on optical diffraction as claimed in claim 1, wherein the optical characteristic quantity in step (2) is any one of reflectivity, ellipsometric parameters, or the Mueller matrix; the measurement conditions comprise the incident angle θ, the azimuth angle φ, the wavelength λ, the incident light electric field vector E, and the polarization angle Ψ.
3. The overlay error extraction method based on optical diffraction as claimed in claim 2, wherein the optical characteristic quantity obtained in step (2) is the Mueller matrix, in the form of a one-dimensional overlay error optical characteristic spectrum obtained by varying a single measurement condition, a two-dimensional overlay error optical characteristic matrix obtained by varying two measurement conditions, or higher-dimensional data obtained by varying a plurality of measurement conditions.
4. The overlay error extraction method based on optical diffraction as claimed in claim 1, wherein the network structure established in step (4) is a fully-connected network, a convolutional neural network or a recurrent neural network.
5. The overlay error extraction method based on optical diffraction as claimed in claim 4, wherein the network structure established in step (4) is a fully-connected network, expressed as:
ε′ = Σ_{j=1}^{q} h_j · LR( Σ_{i=1}^{N} w_{i,j} · M_i + b_j )
wherein M_i represents the ith data point in the Mueller spectrum, w_{i,j} represents the connection weight between the ith input and the jth neuron, b_j and h_j respectively represent the bias of the jth neuron and the connection weight between that neuron and the output layer, and LR is the activation function of each neuron.
6. The overlay error extraction method based on optical diffraction as claimed in claim 5, wherein before training, w_{i,j}, h_j and b_j are first randomly initialized within a range of ±0.001.
7. The overlay error extraction method based on optical diffraction as claimed in any one of claims 1 to 6, wherein the loss function in step (5) is a mean square error loss function, a cross entropy loss function or an exponential loss function.
8. The overlay error extraction method based on optical diffraction as claimed in claim 7, wherein the loss function in step (5) is the mean square error loss function, expressed as:
MSE = (1/X_0) · Σ_{n=1}^{X_0} (ε_n − ε′_n)²
wherein ε_n and ε′_n respectively represent the overlay error theoretical value and the extracted value for the nth training sample, MSE is the mean square error, and X_0 is the total number of training samples.
CN201910581646.6A 2019-06-30 2019-06-30 Overlay error extraction method based on optical diffraction Active CN110347017B (en)

Publications (2)

Publication Number Publication Date
CN110347017A CN110347017A (en) 2019-10-18
CN110347017B true CN110347017B (en) 2020-09-08





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant