CN115457299B - Matching method of a sensor chip projection lithography machine - Google Patents
Matching method of a sensor chip projection lithography machine
- Publication number
- CN115457299B CN115457299B CN202211416778.1A CN202211416778A CN115457299B CN 115457299 B CN115457299 B CN 115457299B CN 202211416778 A CN202211416778 A CN 202211416778A CN 115457299 B CN115457299 B CN 115457299B
- Authority
- CN
- China
- Prior art keywords
- ternary
- sample
- neural network
- convolutional neural
- pictures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/763—Non-hierarchical techniques, e.g. based on statistics of modelling distributions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The invention discloses a matching method for a sensor chip projection lithography machine. Based on deep learning, a convolutional neural network that accurately expresses the mapping from the picture source domain to the target domain is trained with a triplet loss function on a pre-labeled layout data set; at the same time, a process-window term is introduced into the objective function, which ensures that the key patterns screened after model training are not over-fitted and exhibit a good process window. With the trained convolutional neural network, the invention obtains the clustering result and the key-pattern group of the selected patterns to be optimized, simplifies the complex screening and classification rules, and realizes end-to-end key-pattern screening. In addition, for sensor chip patterns with different characteristics, the invention effectively improves the working efficiency of the projection lithography machine.
Description
Technical Field
The invention belongs to the technical field of lithography resolution enhancement, and in particular relates to a matching method for a sensor chip projection lithography machine.
Background
The lithography machine is the core equipment for producing and manufacturing large-scale integrated circuits, and its resolution directly affects the precision it can achieve. As a key means of improving lithography resolution, computational lithography effectively pushes chips toward higher integration. Source-mask optimization (SMO) is an important branch of computational lithography. Jointly optimizing the light source and the mask pattern effectively improves lithography resolution and the process window; compared with optimizing the source or the mask alone (i.e., source-optimization and optical-proximity-correction technologies), joint optimization has more degrees of freedom and a more pronounced effect on resolution and process window. Greater optimization freedom, however, brings greater complexity, and hence higher computational cost and time consumption. Therefore, to balance optimization quality against cost, source-mask optimization is generally not performed on the whole chip: pattern pictures are screened over the full chip with a pattern-screening technique, and full-chip source-mask optimization is carried out with the screened key patterns. The light source obtained from SMO is then used as the illumination condition to apply optical proximity correction and sub-resolution assist features to the whole mask pattern. In this flow, screening key patterns for full-chip source-mask optimization effectively improves optimization efficiency and reduces optimization cost.
The key-pattern screening technique based on pattern diffraction-spectrum analysis proposed by ASML of the Netherlands (see prior art 1) clusters and screens patterns under a set of fixed rules by extracting features of the diffraction spectrum, thereby obtaining a key-pattern group. However, this method does not establish an optimal link from the clustering result to the process window. The graph-clustering-based SMO key-pattern screening method proposed by Lai et al. of IBM maps all pictures into a feature space under a specific representation and clusters them there, selecting representative patterns as the key patterns for SMO. This method can effectively screen key patterns for SMO, but the number of clusters must be preset, and the representation used in the mapping cannot update model parameters, so there is considerable room for improvement. Neither of these two currently popular methods is optimal for key-pattern screening.
Disclosure of Invention
The invention aims to overcome the problems of the above methods and provide a matching method for a sensor chip projection lithography machine that screens key patterns better, realizes a deep-learning representation of the mapping from pictures to the embedding hypersphere, and exhibits a better process window in the lithography scenario.
To achieve the above object, the invention adopts the following technical scheme:
A matching method of a sensor chip projection lithography machine comprises the following steps:
step 1, acquiring and preprocessing a layout data set;
step 2, training a deep-learning model;
step 3, clustering and screening the layout data set with the trained deep-learning model before source-mask optimization to obtain key patterns;
and step 4, performing source-mask optimization on the key patterns.
Further, the step 1 specifically includes:
step 1.1, obtain a labeled layout data set from a mask screening method and a layout standard library, and place pictures with the same label under the same path; this is the sample set required to train the model, D = {x_1, x_2, …, x_n}, whose n samples are divided into k disjoint groups {C_l | l = 1, 2, …, k}, where C_l is the label of the sample;
and step 1.2, preprocess the pictures of the layout data set so that they meet the input-size requirement of the convolutional neural network.
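A minimal sketch of step 1.2. The patent only requires that pictures match the CNN input size (224 × 224, single channel, per step 2.2); the grayscale conversion and [0, 1] scaling below are assumptions, and the function name is illustrative:

```python
import numpy as np
from PIL import Image

def preprocess_layout_picture(path: str, size: int = 224) -> np.ndarray:
    """Load a layout picture, convert to one channel, resize to the CNN input size."""
    img = Image.open(path).convert("L")   # single-channel grayscale
    img = img.resize((size, size))        # match the network's input size
    return np.asarray(img, dtype=np.float32) / 255.0  # assumed [0, 1] scaling
```

The same routine would be reused in step 3.1 on the slices cut from the full-chip layout.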
Further, the step 2 specifically includes:
step 2.1, when triplets are first screened, randomly select a sample as the anchor, randomly select a sample under the same label as the anchor as the positive sample, and randomly select a sample under a different label as the negative sample; for subsequent selections, select the next triplet most effective for optimizing the model according to the triplet screening principle;
step 2.2, input the preprocessed triplet of three 224 × 224 single-channel pictures into the convolutional neural network; the final fully connected layer of the network produces a vector on the embedding hypersphere, which is embedded as a feature vector using L2 regularization and denoted f(x_i); the dimension of f(x_i) is M, a user-defined variable;
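The L2 embedding of step 2.2 can be sketched as follows (the fully connected layer itself is outside this snippet; only the projection of its output onto the unit hypersphere is shown, and the function name is an assumption):

```python
import numpy as np

def l2_embed(z: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """L2-normalize each row of the FC-layer output z, giving unit-norm f(x_i)."""
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    return z / np.maximum(norms, eps)   # eps guards against all-zero rows
```

Each row of the result has unit L2 norm, so all embeddings lie on an M-dimensional unit hypersphere and the squared Euclidean distances used by the triplet loss stay bounded.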
step 2.3, for the convolutional neural network taking triplets as input, select the triplet loss function L_triplet; at the same time, in the design of the system objective function, add a bias term to the triplet loss, namely the imaging-simulation process-window result of the key pattern, where the process window is the depth of focus (DOF);
step 2.4, update the model parameters of the convolutional neural network and the fully connected layer in turn with the back-propagation algorithm; if the number of training rounds is a multiple of N, return to step 2.2, otherwise keep using the original triplets;
and step 2.5, save the trained model and the last-updated feature vectors.
Further, the triplet screening method in step 2.1 and the system objective-function design in step 2.3 are specified as follows:
the construction and screening of triplets uses an offline global mode: a sample is randomly chosen as the anchor x_a, and the same-label sample at the largest Euclidean distance from the anchor is chosen as the positive sample of the triplet, i.e.

x_p = argmax_{x ∈ C(x_a), x ≠ x_a} ‖f(x_a) − f(x)‖₂²,

where argmax denotes the parameter value at which the function attains its maximum and ‖·‖₂² denotes the squared L2 norm; the different-label sample at the smallest Euclidean distance from the anchor is chosen as the negative sample, i.e.

x_n = argmin_{x ∉ C(x_a)} ‖f(x_a) − f(x)‖₂²,

where argmin denotes the parameter value at which the function attains its minimum, and f(x_a), f(x_p), f(x_n) are the outputs of the triplet samples through the model; (x_a, x_p, x_n) forms a triplet. A new set of triplets is re-determined through the updated network every fixed number of training rounds;
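The offline hard-triplet screening above can be sketched in NumPy; here `embeddings` holds the model outputs f(x) for the whole sample set and `labels` the group labels C_l (function and variable names are illustrative):

```python
import numpy as np

def mine_hard_triplet(embeddings: np.ndarray, labels: np.ndarray, anchor: int):
    """Return (anchor, positive, negative) indices of one hard triplet."""
    d2 = np.sum((embeddings - embeddings[anchor]) ** 2, axis=1)  # squared L2
    same = np.flatnonzero(labels == labels[anchor])
    same = same[same != anchor]                 # positives exclude the anchor
    diff = np.flatnonzero(labels != labels[anchor])
    positive = same[np.argmax(d2[same])]        # farthest same-label sample
    negative = diff[np.argmin(d2[diff])]        # nearest different-label sample
    return anchor, int(positive), int(negative)
```

Because the mining is offline and global, the distances are computed from a full forward pass of the current network, not from the mini-batch being trained.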
the system objective function is expressed as: objective function = L_triplet + regularization term;
the loss function L_triplet is expressed as:

L_triplet = Σ_i [ ‖f(x_a^i) − f(x_p^i)‖₂² − ‖f(x_a^i) − f(x_n^i)‖₂² + α ]₊,

where [x]₊ = max{0, x}, α is the boundary (margin) value, (x_a^i, x_p^i, x_n^i) is the i-th triplet input to the convolutional neural network, and f(x_a^i), f(x_p^i), f(x_n^i) are its outputs;
the regularization term is the process-window bias described above, i.e. the imaging-simulation depth-of-focus (DOF) result of the screened key patterns; the final objective function is therefore expressed as:

objective function = L_triplet + regularization term.
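A numeric sketch of the triplet loss term above (the DOF regularization term depends on a lithography imaging simulator and is left out here; the default margin value is illustrative):

```python
import numpy as np

def triplet_loss(fa: np.ndarray, fp: np.ndarray, fn: np.ndarray,
                 alpha: float = 0.2) -> float:
    """Sum over triplets of [‖fa−fp‖² − ‖fa−fn‖² + α]+ with margin α."""
    gap = (np.sum((fa - fp) ** 2, axis=1)
           - np.sum((fa - fn) ** 2, axis=1) + alpha)
    return float(np.sum(np.maximum(gap, 0.0)))   # [x]+ = max{0, x}
```

The loss is zero once every negative is at least α farther (in squared distance) from the anchor than the positive, which is what drives same-label pictures together on the hypersphere.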
further, the step 3 specifically includes:
step 3.1, cut the full-chip layout picture to be optimized into a number of small slices, and preprocess the slice images into the input format of the convolutional neural network;
step 3.2, input the picture set to be screened into the convolutional neural network to obtain the corresponding feature vectors;
step 3.3, complete clustering of the embeddings on the hypersphere with a density clustering algorithm, thereby grouping pictures of the same type into clusters;
and step 3.4, determine the center of each group in the embedding space, and select the pattern of each group closest to its center as a key pattern for source-mask joint optimization.
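Steps 3.3–3.4 can be sketched as follows, taking each group's center as the mean of its embeddings (an assumption; the text only says "determine the center", and the function name is illustrative):

```python
import numpy as np

def select_key_patterns(embeddings: np.ndarray, cluster_labels: np.ndarray) -> dict:
    """Map each cluster label to the index of the sample nearest its center."""
    keys = {}
    for c in np.unique(cluster_labels):
        if c == -1:                        # DBSCAN noise points form no group
            continue
        idx = np.flatnonzero(cluster_labels == c)
        center = embeddings[idx].mean(axis=0)           # group center
        d2 = np.sum((embeddings[idx] - center) ** 2, axis=1)
        keys[int(c)] = int(idx[np.argmin(d2)])          # closest to the center
    return keys
```

The returned indices identify one representative picture per cluster, forming the key-pattern group handed to source-mask optimization.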
Further, the density-based clustering algorithm mentioned in step 3 specifically includes:
(1) Set the neighborhood parameters (ε, MinPts), where ε is the radius of the neighborhood and MinPts is the minimum number of sample points within that neighborhood for a point to be a core object;
(2) Find the core objects among all pictures from the neighborhood parameters (ε, MinPts), where ε is set with reference to the margin α in the triplet loss;
(3) Starting from any core object, find all density-reachable samples to generate a cluster, until every core object has been visited.
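Steps (1)–(3) can be sketched with scikit-learn's DBSCAN (an assumed implementation choice; the document names the DBSCAN algorithm in FIG. 5 and only fixes the (ε, MinPts) semantics):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_embeddings(embeddings: np.ndarray,
                       eps: float = 0.2, min_pts: int = 3) -> np.ndarray:
    """Density-cluster embedded pictures; label -1 marks noise points."""
    return DBSCAN(eps=eps, min_samples=min_pts).fit_predict(embeddings)
```

Note that DBSCAN derives the number of clusters from the data, which is why, unlike the prior-art clustering method, no class count needs to be preset.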
Compared with the prior art, the invention has the following beneficial effects:
(1) The neural network performs better than traditional algorithms on the clustering and screening problems.
(2) Compared with traditional source-to-target-domain transformations, the method has a more accurate mapping relation, and the number of classes does not need to be determined in advance before clustering.
(3) Compared with existing methods, the model screened by this method yields a better process window in source-mask optimization.
Drawings
FIG. 1 is a schematic diagram of the triplet training process of the present invention;
FIG. 2 is a schematic diagram of the triplet selection principle of the present invention;
FIG. 3 is a schematic diagram of the overall model structure of the matching method of the sensor chip projection lithography machine according to the present invention;
FIG. 4 is a schematic diagram of the specific structure of the convolutional neural network used by the present invention;
FIG. 5 is a flow chart of the density clustering (DBSCAN) algorithm of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention. In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The matching method of a sensor chip projection lithography machine according to the invention specifically comprises the following steps:
step 1, acquisition and preprocessing of the integrated circuit layout data set, specifically comprising:
step 1.1, obtain a labeled layout data set from a traditional mask screening method and a layout standard library, and place pictures with the same label under the same path; this gives the sample set D = {x_1, x_2, …, x_n}, whose n samples are divided into k disjoint groups {C_l | l = 1, 2, …, k}, where C_l is the label of the sample;
and step 1.2, preprocess the pictures of the layout data set so that they meet the input-size requirement of the convolutional neural network.
Step 2, as shown in fig. 3, the deep learning model training process specifically includes:
step 2.1, as shown in FIG. 1 and FIG. 2, when a triplet is selected for the first time, randomly select a sample as the anchor, randomly select a same-label sample as the positive, and randomly select a different-label sample as the negative. For subsequent selections, select the new triplet most effective for optimizing the model according to the triplet screening principle;
step 2.2, input the preprocessed triplet of three 224 × 224 single-channel pictures into the convolutional neural network shown in FIG. 4; the final fully connected layer of the network produces a vector on the embedding hypersphere, which is embedded as a feature vector using L2 regularization and denoted f(x_i); the dimension of f(x_i) is M, a user-defined variable;
step 2.3, for the convolutional neural network taking triplets as input, select the triplet loss function L_triplet; at the same time, in the design of the system objective function, add a bias term to the triplet loss, namely the imaging-simulation process-window result of the key pattern, where the process window is the depth of focus (DOF);
step 2.4, update the model parameters of the convolutional neural network and the fully connected layer in turn with the back-propagation algorithm; if the number of training rounds is a multiple of N, return to step 2.2, otherwise keep using the original triplets;
and step 2.5, save the trained model and the last-updated feature vectors.
Further, the triplet screening method mentioned in step 2.1 and the system objective-function design in step 2.3 are specified as follows:
the construction and screening of triplets uses an offline global mode: a sample is randomly chosen as the anchor x_a, and the same-label sample at the largest Euclidean distance from the anchor is chosen as the positive sample of the triplet, i.e.

x_p = argmax_{x ∈ C(x_a), x ≠ x_a} ‖f(x_a) − f(x)‖₂²,

where argmax denotes the parameter value at which the function attains its maximum and ‖·‖₂² denotes the squared L2 norm; the different-label sample at the smallest Euclidean distance from the anchor is chosen as the negative sample, i.e.

x_n = argmin_{x ∉ C(x_a)} ‖f(x_a) − f(x)‖₂²,

where argmin denotes the parameter value at which the function attains its minimum, and f(x_a), f(x_p), f(x_n) are the outputs of the triplet samples through the model; (x_a, x_p, x_n) forms a triplet. A new set of triplets is re-determined through the updated network every fixed number of training rounds.
The design of the system objective function is expressed as: objective function = L_triplet + regularization term;
the loss function L_triplet is expressed as:

L_triplet = Σ_i [ ‖f(x_a^i) − f(x_p^i)‖₂² − ‖f(x_a^i) − f(x_n^i)‖₂² + α ]₊,

where [x]₊ = max{0, x}, α is the boundary (margin) value, (x_a^i, x_p^i, x_n^i) is the i-th triplet input to the convolutional neural network, and f(x_a^i), f(x_p^i), f(x_n^i) are its outputs;
the regularization term is the process-window bias described above, i.e. the imaging-simulation depth-of-focus (DOF) result of the screened key patterns; the final objective function is therefore expressed as:

objective function = L_triplet + regularization term.
Step 3, clustering and screening the layout data set with the trained deep-learning model to obtain key patterns, specifically comprising:
step 3.1, cut the full-chip layout picture to be optimized into a number of small slices, and preprocess the slice images into the input format of the convolutional neural network;
step 3.2, input the pictures to be screened into the convolutional neural network to obtain the corresponding feature vectors;
step 3.3, complete clustering of the embeddings on the hypersphere with a density clustering algorithm, thereby grouping pictures of the same type into clusters;
and step 3.4, determine the center of each group in the embedding space, and select the pattern of each group closest to its center as a key pattern for source-mask joint optimization.
Further, as shown in fig. 5, the density-based clustering algorithm mentioned in step 3 specifically includes:
(1) Set the neighborhood parameters (ε, MinPts), where ε is the radius of the neighborhood and MinPts is the minimum number of sample points within that neighborhood for a point to be a core object;
(2) Find the core objects among all pictures from the neighborhood parameters (ε, MinPts), where ε is set with reference to the margin α in the triplet loss;
(3) Starting from any core object, find all density-reachable samples to generate a cluster, until every core object has been visited.
And step 4, perform source-mask optimization on the key patterns.
For the training and test sets, the matching method of this embodiment derives binary pictures from the open FreePDK45 standard cell library. The FreePDK45 cell set contains almost all commonly used cell patterns, such as AND, MUX, etc. Of the 850 pictures, 800 randomly selected ones were used for training, and the remaining 50 served as the test set. The data set was labeled and classified with the pattern-diffraction-spectrum-based classification method, and samples under the same label were placed in the same directory.
Specifically, the matching method of the sensor chip projection lithography machine of this embodiment comprises the following steps:
step 1, acquiring and preprocessing a layout data set, which specifically comprises the following steps:
step 1.1, preprocess the obtained layout data set, scaling the pictures to the same proportion while ensuring their size meets the input requirement of the subsequent convolutional neural network;
and step 1.2, randomly divide the 850 pictures into a training set of 800 pictures and a test set of 50 pictures.
Step 2, the deep learning model training process specifically comprises the following steps:
step 2.1, initializing parameters of the VGG network;
step 2.2, detached from the training state, input the training set into the VGG network with the latest parameters; in the offline state, randomly select 128 pictures as the anchors of the triplets, and select the positive and negative sample of each triplet according to the hard-positive and hard-negative rules above, using the network outputs f(x) of the samples. After selection, a batch of size 128 is formed for the next several rounds of training;
step 2.3, in the first 50 training rounds, select the triplet loss function L_triplet as the training objective; in the following 50 rounds, select the triplet loss plus the regularization term as the training objective, where the regularization term is obtained by applying the screening method of step 3 to get a key-pattern group and performing source-mask optimization on that group;
and step 2.4, update the parameters of the convolutional neural network through the designed training objective and the back-propagation (BP) algorithm to optimize the clustering model; if the set round for updating the triplets is reached, return to step 2.2, otherwise return to step 2.3. If the total number of training rounds reaches 100, go to step 3.
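The schedule in steps 2.2–2.4 amounts to the control flow below (a sketch only: the re-mining period is an illustrative assumption, and the bookkeeping entries stand in for the real mining and parameter-update calls):

```python
def training_schedule(total_rounds: int = 100, remine_every: int = 10) -> list:
    """Log (round, triplets re-mined?, objective) for the embodiment's schedule."""
    log = []
    for rnd in range(total_rounds):
        remined = (rnd % remine_every == 0)   # offline triplet re-selection
        # first 50 rounds: triplet loss only; last 50: add the DOF regularizer
        objective = "triplet" if rnd < 50 else "triplet+regularizer"
        log.append((rnd, remined, objective))
    return log
```

This makes explicit that triplets are reused between re-mining rounds, while the objective switches exactly once, at round 50 of the 100-round run.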
step 3.1, input the test set of pictures into the network to obtain the corresponding embeddings. According to the set neighborhood parameters (ε, MinPts), screen out all core objects that have at least MinPts samples within Euclidean distance ε;
step 3.2, group each core object and its density-reachable samples into one cluster, and continue in turn until all samples are clustered;
and step 3.3, determine the center of each group's pictures in the embedding space, and in each cluster select the sample picture closest to that center as the key pattern.
And step 4, perform source-mask optimization on the screened key patterns.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (3)
1. A matching method of a sensor chip projection lithography machine is characterized by comprising the following steps:
step 1, acquiring and preprocessing a layout data set;
step 2, carrying out deep learning model training, specifically comprising:
step 2.1, when triplets are first screened, randomly select a sample as the anchor, randomly select a sample under the same label as the anchor as the positive sample, and randomly select a sample under a different label as the negative sample; for subsequent selections, select the next triplet most effective for optimizing the model according to the triplet screening principle;
step 2.2, input the preprocessed triplet of three 224 × 224 single-channel pictures into the convolutional neural network; the final fully connected layer of the network produces a vector on the embedding hypersphere, which is embedded as a feature vector using L2 regularization and denoted f(x_i); the dimension of f(x_i) is M, a user-defined variable;
step 2.3, for the convolutional neural network taking triplets as input, select the triplet loss function L_triplet; at the same time, in the design of the system objective function, add a bias term to the triplet loss, namely the imaging-simulation process-window result of the key pattern, where the process window is the depth of focus (DOF);
step 2.4, update the model parameters of the convolutional neural network and the fully connected layer in turn with the back-propagation algorithm; if the number of training rounds is a multiple of N, return to step 2.2, otherwise keep using the original triplets;
step 2.5, save the trained model and the last-updated feature vectors;
step 3, before source-mask optimization, cluster and screen the layout data set with the trained deep-learning model to obtain key patterns, specifically comprising:
step 3.1, cut the full-chip layout picture to be optimized into a number of small slices, and preprocess the slice images into the input format of the convolutional neural network;
step 3.2, input the picture set to be screened into the convolutional neural network to obtain the corresponding feature vectors;
step 3.3, complete clustering of the embeddings on the hypersphere with a density clustering algorithm, thereby grouping pictures of the same type into clusters;
step 3.4, determine the center of each group in the embedding space, and select the pattern of each group closest to its center as a key pattern for source-mask joint optimization;
the density-based clustering algorithm mentioned in the step 3 specifically includes:
(1) Set the neighborhood parameters (ε, MinPts), where ε is the radius of the neighborhood and MinPts is the minimum number of sample points within that neighborhood for a point to be a core object;
(2) Find the core objects among all pictures from the neighborhood parameters (ε, MinPts), where ε is set with reference to the margin α in the triplet loss;
(3) Starting from any core object, find all density-reachable samples to generate a cluster, until every core object has been visited;
and step 4, perform source-mask optimization on the key patterns.
2. The matching method of the sensor chip projection lithography machine according to claim 1, wherein the step 1 specifically comprises:
step 1.1, obtaining a labeled layout data set according to the mask screening method and the layout standard library, placing pictures with the same label under the same path, and using them as the sample set required for training the model, D = {x_1, x_2, …, x_n}; the n samples are divided into k disjoint groups {C_l | l = 1, 2, …, k}, where C_l is the label of the sample;
step 1.2, preprocessing the pictures of the layout data set so that they meet the input size requirements of the convolutional neural network.
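Steps 1.1 and 1.2 might be sketched as below; the label names, image sizes, and the crop/zero-pad rule are assumptions, since the patent does not specify the preprocessing details:

```python
import numpy as np

def to_input(img, size=64):
    """Top-left crop or zero-pad a 2-D layout image to the network input size."""
    h, w = img.shape
    out = np.zeros((size, size), dtype=img.dtype)
    ch, cw = min(h, size), min(w, size)
    out[:ch, :cw] = img[:ch, :cw]
    return out

# Step 1.1: samples grouped by label, as if read from per-label directories
# (the label names and array contents are illustrative only).
dataset = {"label_A": [np.ones((50, 70)), np.ones((64, 64))],
           "label_B": [np.ones((80, 80))]}

# Step 1.2: preprocess every picture to the network's input size.
batch = {lab: [to_input(im) for im in ims] for lab, ims in dataset.items()}
```

After this pass every sample has a uniform shape, so it can be stacked into training batches for the convolutional network of step 2.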
3. The matching method of the sensor chip projection lithography machine according to claim 2, wherein the ternary group screening method of step 2.1 and the system objective function design of step 2.3 specifically comprise:
the construction and screening of the ternary groups use an offline global mode: a sample x_a is randomly selected as the anchor, and the sample with the same label and the largest Euclidean distance from the anchor is selected as the positive sample of the ternary group, namely x_p = argmax_{x_i ∈ C_a} ‖f(x_a) − f(x_i)‖₂², where argmax denotes the parameter value at which the function attains its maximum and ‖·‖₂² denotes the square of the L2 norm; the sample with a different label and the smallest Euclidean distance from the anchor is selected as the negative sample of the ternary group, namely x_n = argmin_{x_j ∉ C_a} ‖f(x_a) − f(x_j)‖₂², where argmin denotes the parameter value at which the function attains its minimum, and f(·) denotes the output of a sample through the model; (x_a, x_p, x_n) forms a ternary group; every fixed number of training rounds, new ternary groups are re-determined through the updated network;
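The offline global mining rule above (argmax over same-label distances, argmin over different-label distances) can be sketched as follows for a single anchor; the feature vectors are illustrative:

```python
import numpy as np

def mine_triplet(feats, labels, a):
    """Offline global mining for anchor index a:
    positive = same-label sample farthest from the anchor (argmax),
    negative = different-label sample nearest to the anchor (argmin)."""
    d = ((feats - feats[a]) ** 2).sum(axis=1)          # squared L2 distances
    same = (labels == labels[a]) & (np.arange(len(feats)) != a)
    diff = labels != labels[a]
    p = np.flatnonzero(same)[np.argmax(d[same])]
    n = np.flatnonzero(diff)[np.argmin(d[diff])]
    return a, p, n

feats = np.array([[0.0, 0], [1, 0], [3, 0], [0.5, 0], [5, 0]])
labels = np.array([0, 0, 1, 0, 1])
triplet = mine_triplet(feats, labels, a=0)  # (anchor, positive, negative)
```

For anchor 0, sample 1 is the farthest same-label point and sample 2 the nearest different-label point, so the mined ternary group is (0, 1, 2).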
the system objective function is expressed as: objective function = L_triplet + regularization term;
the loss function L_triplet is expressed as:
L_triplet = Σ_(x_a, x_p, x_n) [ ‖f(x_a) − f(x_p)‖₂² − ‖f(x_a) − f(x_n)‖₂² + α ]_+
where [x]_+ = max{0, x}, α is the margin value, (x_a, x_p, x_n) is a ternary group input to the convolutional neural network, and (f(x_a), f(x_p), f(x_n)) is the corresponding output of the input ternary group;
the regularization term is expressed as:
the final objective function is expressed as:
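A sketch of the objective under the stated definitions: the triplet term follows the loss formula above, while the regularization term (whose formula is not reproduced in this text) is replaced by an assumed plain L2 weight-decay stand-in with a hypothetical coefficient `lam`:

```python
import numpy as np

def triplet_loss(fa, fp, fn, alpha=0.2):
    """Sum over triplets of [||fa - fp||^2 - ||fa - fn||^2 + alpha]_+ ."""
    d_pos = ((fa - fp) ** 2).sum(axis=1)
    d_neg = ((fa - fn) ** 2).sum(axis=1)
    return np.maximum(0.0, d_pos - d_neg + alpha).sum()

def objective(fa, fp, fn, weights, lam=1e-3, alpha=0.2):
    """objective = L_triplet + regularization (assumed L2 weight decay)."""
    reg = sum((w ** 2).sum() for w in weights)
    return triplet_loss(fa, fp, fn, alpha) + lam * reg

# One triplet of embeddings: positive close to the anchor, negative far away.
fa = np.array([[0.0, 0.0]])
fp = np.array([[0.1, 0.0]])
fn = np.array([[1.0, 0.0]])
loss = triplet_loss(fa, fp, fn, alpha=0.2)  # margin satisfied, hinge is zero
```

When the negative is already more than the margin farther than the positive, the hinge term vanishes; moving the negative close to the anchor makes the loss positive, which is what drives the embedding apart during training.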
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211416778.1A CN115457299B (en) | 2022-11-14 | 2022-11-14 | Matching method of sensor chip projection photoetching machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115457299A CN115457299A (en) | 2022-12-09 |
CN115457299B true CN115457299B (en) | 2023-03-31 |
Family
ID=84295472
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211416778.1A Active CN115457299B (en) | 2022-11-14 | 2022-11-14 | Matching method of sensor chip projection photoetching machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115457299B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116306452B (en) * | 2023-05-17 | 2023-08-08 | 华芯程(杭州)科技有限公司 | Photoresist parameter acquisition method and device and electronic equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113469236A (en) * | 2021-06-25 | 2021-10-01 | 江苏大学 | Deep clustering image recognition system and method for self-label learning |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7194085B2 (en) * | 2000-03-22 | 2007-03-20 | Semiconductor Energy Laboratory Co., Ltd. | Electronic device |
JP5825470B2 (en) * | 2011-05-16 | 2015-12-02 | 株式会社ブイ・テクノロジー | Exposure apparatus and shading plate |
CN108399428B (en) * | 2018-02-09 | 2020-04-10 | 哈尔滨工业大学深圳研究生院 | Triple loss function design method based on trace ratio criterion |
EP4030349A1 (en) * | 2021-01-18 | 2022-07-20 | Siemens Aktiengesellschaft | Neuromorphic hardware for processing a knowledge graph represented by observed triple statements and method for training a learning component |
CN113435545A (en) * | 2021-08-14 | 2021-09-24 | 北京达佳互联信息技术有限公司 | Training method and device of image processing model |
CN114491592A (en) * | 2022-01-21 | 2022-05-13 | 清华大学 | Encrypted image information acquisition device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||