CN117314861A - Method for detecting, identifying and aligning silicon wafer overlay pattern - Google Patents


Info

Publication number
CN117314861A
CN117314861A (application number CN202311272140.XA)
Authority
CN
China
Prior art keywords
image
region
svm
silicon wafer
label
Prior art date
Legal status
Pending
Application number
CN202311272140.XA
Other languages
Chinese (zh)
Inventor
徐锋
杨瑞琳
李艳丽
胡松
陈天宝
黄思洁
王蔓菁
Current Assignee
Institute of Optics and Electronics of CAS
Southwest University of Science and Technology
Original Assignee
Institute of Optics and Electronics of CAS
Southwest University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Institute of Optics and Electronics of CAS and Southwest University of Science and Technology
Priority to CN202311272140.XA
Publication of CN117314861A
Legal status: Pending


Classifications

    • G06T 7/0004 — Industrial image inspection
    • G03F 7/70491 — Information management, e.g. software; active and passive control of exposure processes
    • G03F 7/70616 — Workpiece metrology; monitoring the printed patterns
    • G03F 9/7003 — Alignment type or strategy, e.g. leveling, global alignment
    • G03F 9/708 — Alignment marks and their environment; mark formation
    • G03F 9/7088 — Alignment mark detection, e.g. TTR, TTL, off-axis detection, array detector, video detection
    • G06N 20/10 — Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06V 10/25 — Determination of region of interest [ROI] or volume of interest [VOI]
    • G06V 10/26 — Segmentation of patterns in the image field
    • G06V 10/50 — Feature extraction using histograms, e.g. histogram of oriented gradients [HOG]
    • G06V 10/751 — Template matching; comparing pixel values with positional relevance
    • G06V 10/761 — Proximity, similarity or dissimilarity measures
    • G06V 10/764 — Recognition using classification, e.g. machine learning
    • G06V 10/774 — Generating sets of training patterns; bootstrap methods
    • G06T 2207/30148 — Semiconductor; IC; wafer


Abstract

The invention provides a method for detecting, identifying and aligning silicon wafer overlay patterns. First, image segmentation is used to automatically locate regions of interest and generate an SVM data set. Second, a conventional feature-extraction method is used to extract salient image features, which are used both to train a support vector machine (SVM) and to obtain regions of interest for SVM model prediction. Then, Hu shape-invariant moments are used for feature-point similarity matching to verify the SVM recognition results and eliminate mispredictions. Finally, the angle and position offsets are determined and alignment is performed. The method can automatically detect the silicon wafer alignment pattern and achieve alignment.

Description

Method for detecting, identifying and aligning silicon wafer overlay pattern
Technical Field
The invention relates to a method for detecting, identifying and aligning silicon wafer overlay patterns. It belongs to the field of image processing and concerns machine learning and image recognition and classification methods, used in lithography equipment to adaptively select the overlay alignment template and achieve overlay alignment.
Background
The lithography machine is one of the key process tools in microelectronic manufacturing and plays a vital role in modern technology. Photolithography is one of the basic processes for manufacturing integrated circuits (ICs): an optical projection system transfers patterns onto silicon wafers to form fine electronic components and circuit structures, and its development is of great significance for technological progress and economic development. Factors influencing the lithography process include the photoresist material, the light source, the mask and the overlay alignment accuracy. Overlay alignment is one of the three core technologies of a lithography system and an important component of it: based on the alignment marks on the mask, it ensures positional alignment between the mask and the alignment marks on the sample to be exposed, guarantees the precision and accuracy of the lithographic pattern, and is key to continuously improving the performance and pattern quality of the lithography system.
Patent document CN202010834499.1, published on December 11, 2020, discloses an automatic alignment method: first, a picture of the mask cross mark and an aligned picture are taken; second, the cross mark on the silicon wafer is extracted and processed in a computer; finally, the positional difference between the mask and the silicon wafer is obtained by image analysis to achieve alignment. However, this method requires a dedicated cross alignment mark to be designed. Zhang Shaoyu et al., in the conference report "Lithography alignment method based on image rotation matching", use a gray-scale-based template matching method to achieve image alignment for different marks, but this implementation is limited by the initial generation and selection of the mark templates. When the alignment mark changes, or the user designs a different mark for each alignment, the alignment template must be redesigned and regenerated, which hinders automated production with the lithography machine.
To solve these problems, the invention provides a method for detecting, identifying and aligning silicon wafer overlay patterns. It acquires the training set automatically, without manually labelling data or knowing the label content, trains an SVM model, and uses Hu invariant moments to verify and judge the SVM recognition results. When the equipment or the mask is replaced, the alignment template does not need to be replaced manually: the template is selected adaptively by detection, and automatic alignment is achieved.
Disclosure of Invention
The invention provides a method for detecting, identifying and aligning a silicon wafer overlay pattern. First, the image is preprocessed to strengthen its features, marked template images are acquired automatically by image segmentation, and these template images are used as training files. Second, HOG features of the images are extracted with a conventional feature-extraction method to generate feature vectors, a detection and recognition model is trained by machine learning, and regions of interest of the image to be predicted are acquired for SVM detection and recognition, which returns a label value. Then, to eliminate the influence of other patterns on the prediction, the label value returned by the SVM is verified by a second screening step based on Hu shape-invariant moment feature matching, ensuring the accuracy of detection and recognition. Finally, template matching alignment is completed, the angle and position offsets are acquired, and the equipment alignment is adjusted. The main aims of the method are to detect and identify the silicon wafer alignment template, lay the foundation for selecting the corresponding template to realise image alignment, and obtain the angle and position offset values for alignment adjustment of the equipment.
The method for detecting, identifying and aligning the silicon wafer overlay pattern is characterized by comprising the following steps:
step S1: automatically generate the silicon wafer template data set: segment regions of interest from the exposed image on the silicon wafer, select the regions that meet the requirements, and expand them into the data set;
step S2: extract the HOG features of the data set, generate feature vectors, and train the corresponding SVM model with the SVM method;
step S3: preprocess the image to be predicted, remove the image background noise, and obtain the region of interest;
step S4: extract the HOG features of the region of interest, feed them to the SVM model for prediction, and return the prediction label;
step S5: when pattern shapes differ, their HOG gradient features are similar only with small probability, but in that case the SVM can misdetect. According to the SVM prediction label, a suitable image is therefore selected and matched against the region using Hu shape-invariant moment similarity to verify the SVM recognition result. The SVM prediction and the Hu similarity match are combined to return a final detection and recognition label value, and the image with the same label value is selected as the alignment template. If, after scanning the full image, no region conforms to both the SVM and Hu shape features, some other region is extracted at random for the subsequent silicon wafer template matching alignment;
step S6: perform template matching alignment and acquire the angle and position offsets from a single image: the angle is the rotational offset of the detected shape relative to the template, and the position is the offset of the detected shape's centre point relative to the centre point of the full image. The equipment is then adjusted according to these offsets. Because the selected alignment template is an unknown pattern from an unknown region of the image, the correct position of the final alignment result within the image cannot be determined, so the template's own angle and the centre point of the whole image are chosen as the evaluation criteria.
Further, in the step S1, the generation of the silicon wafer template data set includes the following steps:
step S11: preprocessing such as denoising and filtering is applied to the image;
step S12: regions possibly containing a pattern are acquired by image segmentation; if a region meets the judgement condition it is saved and its upper-left corner coordinates are recorded; otherwise it is discarded and the remaining regions are examined;
step S13: according to the acquired upper-left corner coordinates of mark 1, pictures of size m×n are cropped;
step S14: the data set is expanded and saved using rotation and translation, with label 1.
Further, in step S12, the judgement condition is that the number of pixels Q in the region must fall within a set range: if it falls outside the range, the region is discarded and the remaining pattern regions are examined; if it falls within the range, the region is saved. According to the characteristics of the exposure image, two or more independent patterns are necessarily present in the captured image.
Further, in step S13, every sample has the same size m×n; the specific values of m and n are determined from the sample image, and the cropped m×n picture differs from the original image only by the region-cropping operation.
Further, in step S14, the number of sample classes is greater than 2; that is, the data sets for labels 2, 3 and so on are generated in the same way as the data set for label 1. The data are used for SVM training, contain positive and negative samples, and form a multi-class problem over several sample types.
Further, in step S3, the image to be predicted is preprocessed to obtain the region of interest, facilitating the subsequent SVM model prediction; the method of cropping the region of interest, and its size, are consistent with the cropping used to generate the data set.
Further, in step S5, returning the final detection and recognition label value comprises the following steps:
step S51: according to the label value returned by SVM detection and recognition, select the corresponding label image from the database;
step S52: perform image preprocessing on the corresponding label image and the corresponding region of interest;
step S53: extract the Hu shape-invariant moment features of the processed images;
step S54: perform Hu invariant moment similarity matching for recognition verification. A score of 1 means the match is fully consistent; when the score is greater than or equal to 0.9, the matching result is considered consistent with the SVM prediction, the label result is retained, and the image with the same label value is selected as the alignment template. When the prediction results are inconsistent, the method returns to step S1 and executes through step S13, and the selected region is used as the alignment template.
The invention discloses a method for detecting, identifying and aligning a silicon wafer overlay pattern, which mainly has the following beneficial effects:
1. the training set for SVM training can be generated automatically by software, without manually marking labels or knowing the label content;
2. the SVM and Hu shape-invariant moment feature matching are combined to accurately detect and identify the template type, preparing for the next alignment step of the lithography machine; the method suits lithography alignment equipment whose alignment marks change, and paves the way for automating lithography equipment.
Drawings
FIG. 1 is a flow chart of the overall detection and recognition method of the present invention.
FIG. 2 is a flow chart of autonomous generation of a dataset in accordance with the present invention.
FIG. 3 is a flowchart of the identification verification of the present invention.
FIG. 4 is a diagram showing the recognition result of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in FIG. 1, the overall flow of the method for detecting, identifying and aligning silicon wafer overlay patterns comprises the following steps.
Step S1: automatically generate the silicon wafer template data set. Regions of interest are segmented from the exposed image on the silicon wafer using an image segmentation method, images possibly containing a shape are cropped, regions meeting the requirements are selected, and the data are expanded by rotation, translation and similar methods, automatically generating the silicon wafer template data set. The ratio of training set to test set is 7:3.
Step S2: extract the HOG features of the data set and generate feature vectors. Each data-set picture yields a 1×3780 feature vector; the feature vectors of pictures with the same label are assembled into an n×3780 training file, where n is the number of data-set pictures under that label, and the numbers of pictures for different labels are equal. A linear SVM kernel is selected and the corresponding SVM model is trained. The model's recognition accuracy is then tested on the test set: if the detection accuracy on the test set is not one hundred percent, the method returns to step S1; otherwise it continues to step S3. The content of the test set is consistent with the training set — the images cropped in step S1 include labels 1, 2, 3 and so on — but it contains fewer images than the training set. The accuracy is calculated with formula (1):
$P = \dfrac{r_1 + r_2 + r_3}{a_1 + a_2 + a_3} \times 100\%$ (1)
where P is the accuracy, a1 is the total number of label-1 samples in the test set (similarly a2 and a3), and r1 is the number of label-1 samples correctly predicted by the SVM on the test set (similarly r2 and r3). When the number of classes increases or decreases, the formula for P changes accordingly.
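The accuracy computation of formula (1) can be sketched in a few lines of Python; the per-label counts below are illustrative values, not data from the patent:

```python
def overall_accuracy(correct, totals):
    """Multi-class accuracy as in formula (1): correctly predicted samples
    summed over all labels, divided by the total number of test samples."""
    assert len(correct) == len(totals), "one count pair per label"
    return sum(correct) / sum(totals)

# Hypothetical test-set counts for labels 1..3 (illustrative only).
p = overall_accuracy(correct=[28, 30, 29], totals=[30, 30, 30])
print(f"P = {p:.2%}")  # the :.2% format applies the x100% of formula (1)
```

Adding or removing a class simply adds or removes a pair of counts, matching the remark that the formula for P changes with the number of classes.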
The HOG algorithm first computes the gradients of the image, which capture its edge and texture information. The image is divided into a series of equally sized cells, each containing a set of pixels. For each pixel in a cell, the gradient direction is computed — commonly by convolving the image with a Sobel operator — and projected into a histogram. Each column of the histogram represents a range of gradient directions, and the gradient strength in each direction within the cell is accumulated. Several adjacent cells form a block, and the histograms within each block are normalised to increase robustness to illumination variation. The normalised histograms of all blocks are concatenated to form the final HOG feature vector.
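The cell/block pipeline above can be sketched in NumPy. This is a simplified illustration (hard bin assignment, L2 block normalisation) rather than the patent's implementation; with the common parameters of 8×8-pixel cells, 9 orientation bins and 2×2-cell blocks, a 64×128 window yields exactly the 1×3780 vector mentioned in step S2:

```python
import numpy as np

def hog_features(img, cell=8, bins=9, block=2):
    """Minimal HOG sketch. For a 64x128 (width x height) window:
    15 x 7 block positions x 4 cells x 9 bins = 3780 features."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # [-1, 0, 1] horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]        # vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation, [0, 180)
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):                            # per-cell orientation histograms
        for j in range(cw):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            b = bin_idx[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()
    feats = []
    for i in range(ch - block + 1):                # 2x2-cell blocks, stride one cell
        for j in range(cw - block + 1):
            v = hist[i:i + block, j:j + block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))  # illumination robustness
    return np.concatenate(feats)
```

In practice a library implementation (e.g. OpenCV's `HOGDescriptor` or scikit-image's `hog`) would normally be used; the sketch only shows where the 3780 dimensions come from.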
Step S3: first, a region of size m1×n1 is cropped from the full image, where m1 is smaller than the length m of the original image and n1 is smaller than its width n; the starting coordinates of the crop in the original image to be detected are (x, y), and the cropped region contains a shape the customer expects to use for alignment. The cropped image to be predicted is filtered, dilated and eroded to remove background noise, which facilitates image segmentation and cropping of the region of interest. The method of cropping the region of interest is consistent with the one used to generate the data set, but no data expansion or storage is needed. This yields the image to be sent to the SVM model for prediction.
Step S4: SVM model prediction mirrors SVM model training: the HOG features of the region of interest are extracted and fed to the SVM model, which returns a prediction label. When several patterns are present in the predicted image, several prediction label values are returned.
Step S5: when pattern shapes differ, their HOG gradient features are similar only with small probability, but in that case the SVM can misdetect. According to the SVM prediction label, a suitable image is selected and matched against the region using Hu shape-invariant moment similarity to verify the SVM recognition result. The SVM prediction and the Hu shape-invariant moment similarity match are combined to return a final detection and recognition label value, and the image with the same label value is selected as the alignment template. If, after scanning the full image, no region conforms to both the SVM and Hu shape features, some other region is extracted at random for the subsequent silicon wafer template matching alignment.
Step S6: template matching alignment is completed, and the angle and position offsets are acquired from a single image. The angle is the rotational offset of the detected shape relative to the template, and the position is the offset of the detected shape's centre point relative to the centre of the full image. Because the selected template is an unknown pattern from an unknown region, its exact position in the image cannot be determined, so the template's own angle and the centre point of the whole image serve as the evaluation criteria, and the equipment is adjusted according to the computed angle and position offsets.
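A minimal sketch of the step-S6 evaluation, assuming the detected shape is available as a binary mask: the orientation is estimated here from second-order central moments (one possible alternative to minimum-bounding-rectangle fitting), and the position offset is the centroid's displacement from the full-image centre. This is an illustration, not the patent's implementation:

```python
import numpy as np

def angle_and_offset(binary_img):
    """Return (orientation in degrees, (dx, dy) centroid offset from the
    image centre) for the foreground shape in a binary mask."""
    ys, xs = np.nonzero(binary_img)
    x0, y0 = xs.mean(), ys.mean()                  # shape centroid
    mu20 = ((xs - x0) ** 2).mean()                 # second-order central moments
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    theta = 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))  # principal axis
    h, w = binary_img.shape
    dx = x0 - (w - 1) / 2.0                        # offset from full-image centre
    dy = y0 - (h - 1) / 2.0
    return theta, (dx, dy)
```

The rotational offset of the detected shape relative to the template would then be the difference between the two estimated angles.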
SVM model training requires a data set. FIG. 2 shows the flow chart for autonomously generating the data set, which comprises the following steps.
Step S11: first, a rough region is cropped from the overall image and preprocessed by denoising, filtering and similar operations, removing background noise and small targets that would affect subsequent operations.
Step S12: the upper-left corner coordinates of a possible mark 1 are acquired using image segmentation. Connected-component analysis is applied within the cropped area to find candidate shape regions, and shapes whose pixel count is below Q are excluded. The value of Q can be set from the pixel count of each overlay pattern in the full image, for example Q = 20: a location with fewer than Q pixels is treated as background noise, while one with more than Q pixels is considered to contain a shape and is retained. Q must not be too large, however, because regions where detection failed, or where an oversized chip fills the whole image, must also be excluded — no distinct shape can be acquired from such regions.
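The step-S12 judgement can be sketched with a plain breadth-first connected-component search; the `q_min`/`q_max` thresholds below are illustrative stand-ins for the pixel-count range described above:

```python
from collections import deque

def find_regions(grid, q_min=20, q_max=5000):
    """4-connected component search with a pixel-count gate: components
    smaller than q_min are treated as background noise, and q_max guards
    against a single oversized region (e.g. a chip filling the frame)."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and not seen[sy][sx]:
                pix, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:                       # flood-fill one component
                    y, x = queue.popleft()
                    pix.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if q_min <= len(pix) <= q_max:     # the step-S12 pixel-count gate
                    top = min(y for y, _ in pix)
                    left = min(x for _, x in pix)
                    regions.append({'top_left': (top, left), 'area': len(pix)})
    return regions
```

A production implementation would more likely use a library routine such as OpenCV's `connectedComponentsWithStats`; the sketch only illustrates the area-based judgement.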
Step S13: the centroid of the shape is acquired, and its upper-left corner coordinates are obtained from the length and width of the minimum bounding rectangle. According to the acquired upper-left corner coordinates (x, y) of mark 1, a picture of size m×n is cropped. The upper-left corner is then moved one pixel left along the x-axis (towards decreasing values) and an m×n picture is cropped at (x−1, y); the y coordinate is then moved one pixel up (towards decreasing values) and an m×n picture is cropped at (x, y−1); and so on. Each cropped image must fully contain the shape, and the amount by which the upper-left corner is moved is determined by the actual situation.
Step S14: the data set is expanded and saved using rotation and translation, with label 1.
Every sample has the same size m×n, with the specific values of m and n determined by the resolution of the sample image; the cropped m×n picture differs from the original image only by the region-cropping operation. The number of sample classes is 20, and about 5 classes can be added or removed from this basis. Since the number of classes is greater than 2, SVM training involves not just positive and negative samples but a multi-class problem over several sample types; the data sets for labels 2, 3 and so on are generated in the same way as the data set for label 1. The data set is divided into training and test sets in a 7:3 ratio.
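Steps S13–S14 — shifted m×n crops around the detected corner, expanded by rotation — can be sketched as below. The one-pixel shift list and the use of 180° rotations (which preserve an m×n shape even when m ≠ n) are illustrative choices, not the patent's exact expansion scheme:

```python
import numpy as np

def make_samples(image, top_left, m, n,
                 shifts=((0, 0), (-1, 0), (0, -1)), rotations=(0, 2)):
    """Crop m x n windows at the detected top-left corner and at one-pixel
    shifts of it, then expand the set with 180-degree rotations (np.rot90
    with k=2). Square crops (m == n) would also allow k in (1, 3)."""
    samples = []
    for dy, dx in shifts:
        y, x = top_left[0] + dy, top_left[1] + dx
        # Skip shifts that would push the crop outside the image.
        if 0 <= y and 0 <= x and y + m <= image.shape[0] and x + n <= image.shape[1]:
            crop = image[y:y + m, x:x + n]
            for k in rotations:
                samples.append(np.rot90(crop, k))  # rotation-based expansion
    return samples
```

Each returned array would be saved under the same label (e.g. label 1), mirroring the "expand and save with label 1" description of step S14.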
As shown in FIG. 3, the secondary-decision flow chart of the invention comprises the following steps.
Step S51: according to the label value returned by SVM detection and identification, select from the database the overlay graphic template graph whose label matches, as the corresponding label graph.
Step S52: perform image preprocessing operations, such as denoising and filtering, on the corresponding label graph and the corresponding region of interest. Search for connected regions, separate each found shape region from the background region, and fill them white and black respectively, so that only one shape region remains in each figure.
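The connected-region search and fill of step S52 can be sketched with a plain BFS labelling; this is a stand-in for whatever segmentation the implementation actually uses, keeping only the largest foreground region so that exactly one shape remains:

```python
import numpy as np
from collections import deque

def largest_region_mask(binary):
    """Keep only the largest connected foreground region (step S52).

    The retained shape is filled white (1) and everything else black
    (0), so exactly one shape region remains in the output.  Uses
    4-connectivity BFS labelling; a sketch, not the patent's code.
    """
    h, w = binary.shape
    seen = np.zeros((h, w), bool)
    best = []
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                q, comp = deque([(i, j)]), []
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = np.zeros((h, w), np.uint8)
    for y, x in best:
        out[y, x] = 1
    return out
```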
Step S53: and extracting HU shape invariant moment characteristics of the processed image.
Based on the Hu shape invariant moment method, 7 invariant moment features can be extracted:
$H_1 = \eta_{20} + \eta_{02}$ (2)
$H_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$ (3)
$H_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$ (4)
$H_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$ (5)
$H_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big]$ (6)
$H_6 = (\eta_{20} - \eta_{02})\big[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$ (7)
$H_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big]$ (8)
The central moments of each order are normalized as follows:
$\eta_{pq} = \dfrac{\mu_{pq}}{\mu_{00}^{(p+q)/2 + 1}}$ (9)
In formula (9), p and q are the orders of the moment in x and y, and $\mu_{pq}$ are the central moments of the respective orders, defined as:
$\mu_{pq} = \sum_{x}\sum_{y} (x - x_0)^p (y - y_0)^q \,\mathrm{Img}(x, y)$ (10)
In formula (10), x0 and y0 are the centroid coordinates, calculated from the origin moments as follows:
$x_0 = \dfrac{m_{10}}{m_{00}}$ (11)
$y_0 = \dfrac{m_{01}}{m_{00}}$ (12)
In the above formulas, $m_{pq}$ is the origin moment of order p+q, defined as:
$m_{pq} = \sum_{x=1}^{M}\sum_{y=1}^{N} x^p y^q \,\mathrm{Img}(x, y)$ (13)
in the formula (13): m, N represent discrete images Img (x, y) of m×n size.
The invariant moment of the region of interest HU is defined as SH [ i ], and the invariant moment of the template map HU is defined as TH [ i ].
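Equations (2)-(13) can be transcribed directly into NumPy. The sketch below is such a transcription for illustration, not the patent's implementation; the test exploits the fact that the Hu invariants are unchanged under rotation:

```python
import numpy as np

def hu_moments(img):
    """Compute the 7 Hu invariants of eqs. (2)-(13) for image Img(x, y).

    x runs over columns and y over rows; returns the vector H[0..6].
    A straightforward, unoptimised transcription of the formulas.
    """
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m = lambda p, q: (x**p * y**q * img).sum()                 # eq (13)
    m00 = m(0, 0)
    x0, y0 = m(1, 0) / m00, m(0, 1) / m00                      # eqs (11)-(12)
    mu = lambda p, q: ((x - x0)**p * (y - y0)**q * img).sum()  # eq (10)
    eta = lambda p, q: mu(p, q) / m00**((p + q) / 2 + 1)       # eq (9)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h = np.empty(7)
    h[0] = n20 + n02
    h[1] = (n20 - n02)**2 + 4 * n11**2
    h[2] = (n30 - 3*n12)**2 + (3*n21 - n03)**2
    h[3] = (n30 + n12)**2 + (n21 + n03)**2
    h[4] = ((n30 - 3*n12)*(n30 + n12)*((n30 + n12)**2 - 3*(n21 + n03)**2)
            + (3*n21 - n03)*(n21 + n03)*(3*(n30 + n12)**2 - (n21 + n03)**2))
    h[5] = ((n20 - n02)*((n30 + n12)**2 - (n21 + n03)**2)
            + 4*n11*(n30 + n12)*(n21 + n03))
    h[6] = ((3*n21 - n03)*(n30 + n12)*((n30 + n12)**2 - 3*(n21 + n03)**2)
            - (n30 - 3*n12)*(n21 + n03)*(3*(n30 + n12)**2 - (n21 + n03)**2))
    return h
```

Applied to the region of interest and to the template map, this yields the vectors SH[i] and TH[i] used in step S54.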
Step S54: perform HU invariant moment similarity matching. The score is 1 when the shapes are completely consistent; when the score is greater than or equal to 0.9, the matching result is considered consistent with the SVM prediction result, and the graph corresponding to the current label result is selected as the alignment template. When the prediction results are inconsistent, return to step S1, execute through step S13, and use the selected region as the alignment template.
The extracted HU invariant moment values span extreme orders of magnitude, which makes image matching calculation inconvenient, so a logarithm is used to convert each value into a two-digit integer range that is convenient to compute. The calculation formula is:
$M[i] = -\operatorname{sgn}(H[i]) \cdot \log_{10}\lvert H[i] \rvert$ (14)
wherein the sign of M [ i ] is opposite to that of H [ i ].
The similarity matching calculation formula is:
$\mathrm{Score} = \dfrac{dSigmaST}{\sqrt{dSigmaS \cdot dSigmaT}}$ (15)
wherein dSigmaST is the sum of the products of the absolute values of the log-transformed HU invariant moments of the template map and of the region of interest, denoted TM[i] and SM[i] respectively, and is calculated as:
$dSigmaST = \sum_{i=1}^{7} \lvert SM[i] \cdot TM[i] \rvert$ (16)
dSigmaS is the sum of squares of the log-transformed HU invariant moment values of the region of interest, calculated as:
$dSigmaS = \sum_{i=1}^{7} SM[i]^2$ (17)
dSigmaT is the sum of squares of the log-transformed HU invariant moment values of the template map, calculated as:
$dSigmaT = \sum_{i=1}^{7} TM[i]^2$ (18).
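Combining the log transform (14) with the similarity score (15)-(18) gives a short matching routine. The way dSigmaST, dSigmaS and dSigmaT combine into the score is inferred from the surrounding definitions (a normalised-correlation form that yields 1 for identical shapes); the small epsilon guarding log(0) is an added safeguard, not from the patent:

```python
import numpy as np

def log_transform(h):
    """Eq (14): M[i] = -sgn(H[i]) * log10|H[i]|, opposite in sign to H[i]."""
    h = np.asarray(h, float)
    return -np.sign(h) * np.log10(np.abs(h) + 1e-30)  # eps guards log(0)

def hu_similarity(sh, th):
    """Eqs (15)-(18): similarity between region and template Hu vectors.

    sh: HU invariant moments of the region of interest (SH[i]);
    th: HU invariant moments of the template map (TH[i]).
    Returns 1.0 for identical shapes; a score >= 0.9 counts as a
    match confirming the SVM label (step S54).
    """
    sm, tm = log_transform(sh), log_transform(th)
    d_sigma_st = np.sum(np.abs(sm * tm))                 # eq (16)
    d_sigma_s = np.sum(sm * sm)                          # eq (17)
    d_sigma_t = np.sum(tm * tm)                          # eq (18)
    return d_sigma_st / np.sqrt(d_sigma_s * d_sigma_t)   # eq (15)
```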

Claims (7)

1. A method for detecting, identifying and aligning the overlay pattern of a silicon wafer, characterized by comprising the following steps:
step S1: automatically generating a silicon wafer template data set, dividing the region of interest according to the exposed image on the silicon wafer, selecting the region meeting the requirement, and expanding the region into the data set;
step S2: extracting HOG features of the data set, generating feature vectors, and training out corresponding SVM models by using an SVM method;
step S3: image preprocessing is carried out on an image to be predicted, image background noise is removed, and an interested region is obtained;
step S4: extracting HOG characteristics of the region of interest, inputting SVM model prediction, and returning a prediction result label;
step S5: when the shapes of two graphs differ, their HOG gradient features are similar only with small probability, so the SVM can occasionally misdetect; according to the SVM prediction result label, a suitable image is selected and matched with the region based on HU shape invariant moment similarity to verify the SVM recognition result; the SVM prediction and the HU shape invariant moment similarity match are combined to return a final detection and identification label value, and an image with the same label value is selected as the alignment template; when no region conforming to both the SVM and HU shape features is found after detecting the whole graph, other regions are randomly extracted for the subsequent silicon wafer template matching alignment;
step S6: and (3) performing template matching alignment, and acquiring angle and position offset through a single graph, wherein the angle is the rotation offset of the detection shape relative to the template, the position is the offset of the detection shape center point relative to the full graph center point, and the actual debugging of the equipment is performed according to the angle and the position offset.
2. The method for detecting, identifying and aligning a pattern of a silicon wafer overlay according to claim 1, wherein the method comprises the following steps: in the step S1, a silicon wafer template data set is automatically generated, which includes the following steps:
step S11: performing preprocessing operations such as denoising and filtering on the image;
step S12: acquiring a region with a possible graph by using an image segmentation method, if the region meets the judgment condition, storing the region, acquiring the upper left corner coordinate information of the region, if the region does not meet the judgment condition, discarding the region, and continuing the judgment of other regions;
step S13: according to the obtained upper left corner coordinates of the mark 1, capturing pictures with m x n sizes;
step S14: expanding the data set using rotation and translation methods and storing it, with storage label 1.
3. The method for detecting, identifying and aligning a pattern of a silicon wafer overlay according to claim 2, wherein: in the step S12, the judgment condition is that the number Q of pixels in the area should be within a range; if Q exceeds the range, the area is abandoned and the other graphic areas continue to be judged; if Q meets the condition, the area is saved; according to the characteristics of the exposure image, two or more independent graphics are necessarily present in the photographed image.
4. The method for detecting, identifying and aligning a pattern of a silicon wafer overlay according to claim 2, wherein the method comprises the following steps: in step S13, the size of each sample data is consistent to m×n, and the specific value of m, n is determined according to the sample image, and compared with the original image, the truncated m×n image only performs region clipping operation.
5. The method for detecting, identifying and aligning a pattern of a silicon wafer overlay according to claim 2, wherein: in the step S14, the number of sample classes is greater than 2, that is, SVM training involves not only positive and negative samples but a multi-classification problem over a plurality of samples, and the data sets for the other labels, such as labels 2 and 3, are generated in the same way as the data set for label 1.
6. The method for detecting, identifying and aligning a pattern of a silicon wafer overlay according to claim 1, wherein the method comprises the following steps: in the step S3, preprocessing is performed on the image to be predicted, and the region of interest is obtained, so that the subsequent SVM model prediction is facilitated, and the method for cutting the region of interest is consistent with the cutting method for generating the data set, and the cutting size is also consistent.
7. The method for detecting, identifying and aligning a pattern of a silicon wafer overlay according to claim 1, wherein the method comprises the following steps: in the step S5, similarity matching screening is performed by using HU shape invariant moment, including the following steps:
step S51: selecting a corresponding label graph from a database according to a label value result returned by SVM detection and identification;
step S52: performing image preprocessing operation on the corresponding label graph and the corresponding region of interest;
step S53: extracting HU shape invariant moment characteristics of the processed image;
step S54: matching the HU invariant moment similarity and performing identification verification; when the match is consistent with the SVM prediction result, the label result is retained and an image with the same label value is selected as the alignment template; when the prediction results are inconsistent, returning to step S1, executing through step S13, and using the selected region as the alignment template.
CN202311272140.XA 2023-09-28 2023-09-28 Method for detecting, identifying and aligning silicon wafer overlay pattern Pending CN117314861A (en)

Publications (1)

Publication Number Publication Date
CN117314861A true CN117314861A (en) 2023-12-29

Family

ID=89261642



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974601A (en) * 2024-02-01 2024-05-03 广东工业大学 Method and system for detecting surface defects of silicon wafer based on template matching


Similar Documents

Publication Publication Date Title
Duan et al. Corner proposal network for anchor-free, two-stage object detection
Zhang et al. A new algorithm for character segmentation of license plate
CN111414934A (en) Pointer type meter reading automatic identification method based on fast R-CNN and U-Net
CN109740606B (en) Image identification method and device
CN112639396B (en) Dimension measuring apparatus, dimension measuring method, and semiconductor manufacturing system
CN108038435A (en) A kind of feature extraction and method for tracking target based on convolutional neural networks
CN106874901B (en) Driving license identification method and device
CN117314861A (en) Method for detecting, identifying and aligning silicon wafer overlay pattern
CN111242050A (en) Automatic change detection method for remote sensing image in large-scale complex scene
CN103854278A (en) Printed circuit board image registration method based on shape context of mass center of communicated region
CN115690670A (en) Intelligent identification method and system for wafer defects
CN111965197A (en) Defect classification method based on multi-feature fusion
CN110288040B (en) Image similarity judging method and device based on topology verification
Kyaw et al. License plate recognition of Myanmar vehicle number plates a critical review
CN116596875A (en) Wafer defect detection method and device, electronic equipment and storage medium
Kalina et al. Application of template matching for optical character recognition
CN115311293B (en) Rapid matching method for printed matter pattern
CN117315578A (en) Monitoring method and system for rust area expansion by combining classification network
CN109191489B (en) Method and system for detecting and tracking aircraft landing marks
CN113822836A (en) Method of marking an image
CN110765993A (en) SEM image measuring method based on AI algorithm
Heitzler et al. A modular process to improve the georeferencing of the Siegfried map
Luo et al. FPC surface defect detection based on improved Faster R-CNN with decoupled RPN
Tao et al. A hybrid approach to detection and recognition of dashboard information in real-time
CN114187294B (en) Regular wafer positioning method based on prior information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination