CN113538392A - Wafer detection method, wafer detection equipment and storage medium - Google Patents

Wafer detection method, wafer detection equipment and storage medium

Info

Publication number
CN113538392A
Authority
CN
China
Prior art keywords
wafer
image
classification model
defect
features
Prior art date
Legal status
Granted
Application number
CN202110842583.2A
Other languages
Chinese (zh)
Other versions
CN113538392B (en)
Inventor
石强
Current Assignee
Yangtze Memory Technologies Co Ltd
Original Assignee
Yangtze Memory Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Yangtze Memory Technologies Co Ltd
Priority to CN202110842583.2A
Publication of CN113538392A
Application granted
Publication of CN113538392B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06T 5/70
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation

Abstract

The application provides a wafer detection method, which comprises the following steps: acquiring a first image of a wafer, the first image including overall features representing defect information of the wafer; determining a second image of the wafer based on the first image, the second image including detail features representing defect information of the wafer; fusing the overall features and the detail features to generate fused features; and detecting the fused features through a wafer defect classification model. The method can improve the accuracy of wafer defect detection to a certain extent and reduce labor cost.

Description

Wafer detection method, wafer detection equipment and storage medium
Technical Field
The present disclosure relates to the field of semiconductors, and more particularly, to a method, an apparatus, and a storage medium for detecting defects of a wafer.
Background
As memory devices develop, the requirements on integration density keep increasing, so the feature size of the wafer is continuously shrinking, and wafer defect detection has become an important means of improving wafer yield. Memory fabrication involves many complicated process steps, and after each step, especially after etching and deposition, defects on the wafer surface need to be detected to prevent defective wafers from flowing into subsequent processes and degrading the electrical performance of the wafer.
At present, wafer defects are mainly detected using an Automatic Defect Classification (ADC) system combined with machine learning. The ADC system has two main modules: a detection module and a classification module. The detection module scans the wafer surface to determine the coordinates of regions where defects may exist, and then obtains images of the defect regions with a Scanning Electron Microscope (SEM); the classification module builds a classification model using machine learning to classify the defects. However, while the defect detection performance of current ADC systems is high, the accuracy of defect classification is poor. The classification results therefore need to be confirmed by manual inspection, and large numbers of defect samples need to be labeled by professionals, so improving the accuracy of wafer inspection while reducing labor input is an urgent problem to be solved.
Disclosure of Invention
Embodiments of the present application provide a method and system for inspecting a wafer, which can at least partially solve the above problems in the prior art.
According to an aspect of an embodiment of the present application, there is provided a method for detecting a wafer, which may include: acquiring a first image of a wafer, wherein the first image comprises an overall characteristic representing defect information of the wafer; determining a second image of the wafer based on the first image, the second image including detail features representing defect information of the wafer; fusing the overall features and the detail features to generate fused features; and detecting the fused features through a wafer defect classification model.
In one embodiment of the present application, the step of determining the second image of the wafer based on the first image may comprise: locating a defect feature region in the first image; and intercepting the area of the located defect feature to obtain a target image as the second image.
In one embodiment of the present application, the step of determining the second image of the wafer based on the first image may include: positioning a detail area of a defect feature in the target image; and intercepting the located detail area to obtain a detail image as the second image.
In an embodiment of the application, the step of intercepting the located region of the defect feature to obtain the target image as the second image may include: extracting a depth feature map of the first image of the wafer based on a depth convolutional neural network; determining an average of features in each location channel on the depth feature map and an average of features in the global channels of the depth feature map; confirming a region of the depth feature map, wherein the average value of the features in the position channel is larger than the average value of the features in the whole channel, as a target defect region; and intercepting the image of the target defect area as the target image.
In one embodiment of the present application, after confirming, as the target defect region, a region in the depth feature map in which the average value of the features in the position channel is greater than the average value of the features in the overall channel, the method may include: determining the minimum bounding rectangle of the target defect region and confirming the coordinates of the minimum bounding rectangle; determining the position coordinates of the target defect region in the first image of the wafer through deconvolution; and intercepting the first image of the wafer according to the position coordinates to obtain the target image.
In an embodiment of the application, the step of intercepting the located detail area to obtain a detail image as the second image may include: extracting a depth feature map of the target image based on a depth convolutional neural network; determining an average value of the features of each position channel on the depth feature map of the target image; selecting a sliding window to carry out convolution on the target image, and confirming at least one activation window according to the sliding window, wherein the average value of the characteristics of the depth feature map channel of the activation window is larger than the average value of the characteristics of each position channel on the depth feature map; and intercepting the target image corresponding to the at least one activation window as the detail image.
In an embodiment of the present application, the step of selecting a sliding window to convolve the target image, and after confirming an activation window according to the sliding window, intercepting the located detail area to obtain a detail image may further include: selecting the area of the at least one activation window as a detail defect area of the wafer defect image in a non-maximum suppression mode; determining the position coordinates of the detail defect area in the target image of the wafer through deconvolution; and intercepting an image of the detail defect area as the detail image.
In an embodiment of the present application, the detecting the fused features by using a wafer defect classification model may include: and classifying the defects of the wafer to determine the defect type of the wafer.
In an embodiment of the present application, after detecting the fused feature through a wafer defect classification model, the method may include: and outputting a detection result, wherein the detection result comprises the defect type of the wafer and the confidence corresponding to the defect type of the wafer.
In one embodiment of the present application, the wafer defect classification model may be obtained by training. The wafer defect classification model comprises a first classification model and a second classification model, and the training samples of the first classification model and the training samples of the second classification model are different.
In one embodiment of the present application, the method further includes a step of training the wafer defect classification model separately, which may include: dividing the coarse-grained image and the fine-grained image of the wafer into a pure sample and a noise sample; and inputting the pure samples into the first classification model, inputting the noise samples into the second classification model, and respectively training the first classification model and the second classification model.
In one embodiment of the present application, the clean sample may include the coarse-grained image or the fine-grained image of the identified defect type for testing and verification of the wafer defect classification model; the noise sample may include the coarse-grained image or the fine-grained image of the defect type to be confirmed for training of a classification model of the wafer defect.
In an embodiment of the present application, the training the first classification model and the second classification model respectively further includes performing hybrid training on the first classification model and the second classification model, and may include: dividing the noise samples into labeled and unlabeled datasets; extracting a depth feature map of the labeled data set and the unlabeled data set using the first classification model and the second classification model; fusing the depth feature maps of the same sample extracted by the first classification model and the second classification model; inputting the fused depth feature map into a classifier to obtain a detection result of the noise sample; and confirming the overall loss of the wafer defect classification model according to the detection result of the noise sample, and finishing the training of the wafer defect classification model.
In one embodiment of the present application, the step of dividing the noise sample into a labeled data set and an unlabeled data set may include: inputting the coarse-grained image or the fine-grained image in the noise sample to the first classification model and the second classification model; and dividing the noise sample into a marked data set and an unmarked data set according to the predicted defect types and the confidence degrees of the first classification model and the second classification model, wherein the confidence degree of the marked data set is greater than a set value, and the confidence degree of the unmarked data set is less than the set value.
In an embodiment of the present application, before fusing the depth feature maps extracted from the same sample by the first classification model and the second classification model, the method may further include: inputting the depth feature maps into a fully connected layer to obtain one-dimensional depth feature maps.
In one embodiment of the present application, determining the overall loss of the wafer defect classification model according to the defect types and probabilities of the images may include: linearly combining the defect types and probabilities of the labeled data set obtained by the first classification model and the second classification model to obtain a collaborative fine-tuning loss; merging the defect types and probabilities of the unlabeled data set estimated by the first classification model and the second classification model as a collaborative estimation loss; regularizing the first classification model and the second classification model to obtain the regularization losses of the first classification model and the second classification model; and fusing the collaborative fine-tuning loss, the collaborative estimation loss and the regularization loss as the overall loss of the fine-grained wafer defect classification model.
In an embodiment of the application, classifying the defects of the image according to the depth feature maps with the first classification model and the second classification model, and fusing the classification results to obtain the detection result of the noise sample, may include: processing the depth feature maps of the labeled data set and the unlabeled data set with a fully connected layer to obtain one-dimensional depth feature maps; and fusing the features of the one-dimensional depth feature maps and inputting the fused features into a classifier to obtain the detection result of the wafer.
In an embodiment of the present application, the detection result may include a confidence level corresponding to the defect type of the wafer.
Another aspect of the present disclosure provides a system for detecting a wafer, which may include: a memory for storing program instructions; and a processor in communication with the memory for executing the program instructions to implement the method of any of the above.
In another aspect, the present application provides an apparatus for inspecting a wafer, which may include the above wafer inspection system.
In one embodiment of the present application, the detection apparatus may further include: and the detection device is used for acquiring the image of the wafer.
In one embodiment of the present application, the detection device may be configured to acquire a first image of the wafer and/or a second image of the wafer.
In one embodiment of the present application, the detection device may include at least one of: computers, servers, cell phones, smart phones, wearable devices, wafer processing devices.
Yet another aspect of the present application provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any of the above.
According to the wafer detection method and the wafer detection equipment of the present application, surface defects of the wafer can be detected with attention to both the overall defect features and the detail defect features of the wafer surface; by jointly analyzing the overall defect features and the detail defect features during wafer inspection, the accuracy of wafer defect type classification can be improved to a certain extent. In addition, because the model is trained with both a labeled data set and an unlabeled data set, training of the wafer defect classification model can be completed with fewer labeled samples, which reduces labor input to a certain extent and improves wafer inspection efficiency.
Drawings
Other features, objects, and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings. Wherein:
FIG. 1 is a schematic flow chart illustrating a wafer inspection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating a process for obtaining a target image according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart illustrating a process of obtaining a detail image according to an embodiment of the present application;
FIG. 4A is a schematic diagram of a coarse-grained image of a wafer according to an embodiment of the present disclosure;
FIG. 4B is a schematic diagram of a target image of a wafer according to one embodiment of the present disclosure;
FIG. 4C is a schematic view of a detail image of a wafer according to an embodiment of the present disclosure;
FIG. 5A is a schematic flow chart illustrating separate training of a first classification model and a second classification model according to an embodiment of the present application;
FIG. 5B is a schematic flow chart illustrating hybrid training of a first classification model and a second classification model according to an embodiment of the present application;
FIG. 6 is a schematic view of an inspection system for wafers according to one embodiment of the present disclosure; and
fig. 7 is a schematic view of an inspection apparatus for a wafer according to an embodiment of the present disclosure.
Detailed Description
Inspecting wafer-surface defects is an important link in the memory production process. Memory fabrication involves many complicated process steps, and defects on the wafer surface need to be detected after each step, especially after etching and deposition, to prevent defective wafers from flowing into subsequent processes and degrading the electrical performance of the wafer. The inventor found that, in the conventional technology, wafer defect detection is mainly performed by an Automatic Defect Classification (ADC) system combined with machine learning, and the accuracy of the defect classification is poor. Through creative work, the inventor further found that the conventional technology mainly uses machine learning to extract image features of the defects and then feeds the combined features into a classifier to obtain a classification result. This approach relies mainly on shallow image features, such as the size, shape, position, color, smoothness, texture complexity and contour of the defect region, and if the image background is complex, defects with small areas are easily ignored or misclassified. Commonly used classifiers include Support Vector Machines (SVMs), neural networks and decision trees, and the classification results often need further manual confirmation. A common machine learning method is deep learning based on convolutional neural networks; however, because the condition of wafer-surface defects and the chip manufacturing process are complex, defects are detected layer by layer, defect types are numerous, the scanned sizes of defects differ among electron-microscope images, and professionals are required to label large numbers of defect samples. Improving the accuracy of wafer inspection while reducing labor input is therefore an urgent problem to be solved.
Based on this, the embodiments of the present application provide a method, a system and an apparatus for wafer inspection, and a non-transitory computer-readable storage medium storing computer instructions. Defects of the wafer, such as defects on the wafer surface, may be detected. The method attends to both the overall defect features and the detail defect features of the wafer, and by jointly analyzing the two during wafer inspection, the accuracy of wafer defect type classification can be improved to a certain extent. Moreover, because the model is trained with both a labeled data set and an unlabeled data set, training of the wafer defect classification model can be completed with fewer labeled samples, which reduces labor input to a certain extent and improves wafer inspection efficiency.
For a better understanding of the present application, various aspects of the present application will be described in more detail with reference to the accompanying drawings. It should be understood that the detailed description is merely illustrative of exemplary embodiments of the present application and does not limit the scope of the present application in any way. Like reference numerals refer to like elements throughout the specification. The expression "and/or" includes any and all combinations of one or more of the associated listed items.
In the drawings, the size, dimension and shape of elements have been slightly adjusted for convenience of explanation. The figures are purely schematic and not drawn to scale. As used herein, the terms "approximately", "about" and the like are used as terms of approximation and not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by one of ordinary skill in the art. In addition, in the present application, the order in which the processes of the respective steps are described does not necessarily indicate the order in which the processes occur in actual operation, unless explicitly defined otherwise or inferable from the context.
It will be further understood that terms such as "comprising," "including," "having," and/or "containing," when used in this specification, are open-ended and not closed-ended, and specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof. Furthermore, when a statement such as "at least one of" appears after a list of listed features, it modifies the entire list of features rather than just individual elements in the list. Furthermore, when describing embodiments of the present application, the use of "may" means "one or more embodiments of the present application." Also, the term "exemplary" is intended to refer to an example or illustration.
Unless otherwise defined, all terms (including engineering and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In addition, the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a flowchart illustrating a wafer inspection method according to an embodiment of the present disclosure. As shown in fig. 1, the present application provides a wafer inspection method 1000, comprising:
step S110: acquiring a first image of the wafer, wherein the first image comprises an overall characteristic representing defect information of the wafer;
step S120: determining a second image of the wafer based on the first image of the wafer, wherein the second image comprises detail characteristics representing defect information of the wafer;
step S130: fusing the overall features and the detail features to generate fused features;
step S140: and detecting the fused features through a wafer defect classification model.
The specific processes of the steps of the above-described inspection method 1000 will be described in detail with reference to FIGS. 2 to 5.
Step S110: acquiring a first image of the wafer, wherein the first image comprises an overall characteristic representing defect information of the wafer;
first, a first image of a wafer can be obtained by scanning the wafer, and the first image includes an overall feature representing defect information of the wafer. For example, the wafer may be scanned by a Scanning Electron Microscope (SEM) to obtain a first image of the wafer. The first image may include a coarse-grained image of the wafer, and in this embodiment, the first image is taken as the coarse-grained image of the wafer. The coarse-grained image of the wafer may include the global characteristics of the wafer, which may include characteristic information such as the location, shape, color, etc. of the defect region. The defects of the wafer may include particles (liquid, solid particles, etc.), defects such as surface scratches, irregular connections, etc. The wafer may be a wafer after any process step, and the image of the wafer is obtained by an image acquisition device, where the image acquisition device may be a camera, a Wafer Inspection System (WIS) equipped with a camera, and the like, and the image acquisition device is not limited in this application, and the coarse-grained image of the wafer may be as shown in fig. 4A.
Step S120: determining a second image of the wafer based on the first image of the wafer, wherein the second image includes detail features representing defect information of the wafer;
because the size of the defect area in the scanned image of the wafer is different, in order to make the defect type determination of the wafer more accurate, a second image of the wafer needs to be acquired, and the second image may include detailed features representing the wafer. The second image may include a fine-grained image of the wafer. In this embodiment, the second image is a fine-grained image of a wafer as an example. The fine-grained image of the wafer contains detailed features representing defect information of the wafer. Fine-grained image classification may also be referred to as sub-category image classification, which may aim to more finely sub-classify large categories of coarse-grained images. Firstly, a defect feature area in a coarse-grained image of a wafer is positioned, and a target image is obtained as a fine-grained image by intercepting the positioned defect feature area. A fine-grained image of the wafer is acquired over the coarse-grained image of the wafer based on an attention mechanism, and the fine-grained image of the wafer may include a target image that may include features representing defect information of the wafer. The target image of the wafer, taken with the coarse-grained image of FIG. 4A as an example, can be seen in FIG. 4B. Fig. 2 is a schematic flowchart of a process for acquiring a target image according to an embodiment of the present application, and as shown in fig. 2, specific steps of acquiring the target image are as follows:
first, in step S1211, a depth feature map of a coarse-grained image of a wafer may be extracted based on a depth convolutional neural network. The coarse-grained image of the wafer is convoluted by convolution kernels to obtain a plurality of depth feature maps, each convolution kernel can extract specific features, and different convolution kernels can extract different features. For example, one convolution kernel is used to extract the contour feature of the defect region in the coarse-grained image, another convolution kernel is used to extract the gray feature of the defect region in the coarse-grained image, another convolution kernel is used to extract the position feature of the defect region in the coarse-grained image, and so on. It will be understood by those skilled in the art that the above-mentioned extracted features are exemplary, and the application is not limited to the extracted depth feature map.
Next, in step S1212, the average value of the features in each position channel on the depth feature map and the average value of the features over all channels of the depth feature map are calculated. After features are extracted from the coarse-grained image of the wafer by the deep convolutional neural network, the average value at each position across channels in the depth feature map and the average value over the whole depth feature map are calculated. For example, if each depth feature map of the wafer is an x × y map, it contains x × y positions and corresponds to one channel; the values at the same position in different channels are averaged, as is the whole set of channels of the depth feature map.
Then, in step S1213, a region in the depth feature map in which the average value of the features in the position channel is larger than the average value of the features over all channels is confirmed as the target defect region; and in step S1214, an image of the target defect region is intercepted as the target image. In one embodiment, the average value of the features in each position channel of the depth feature map of the wafer may be compared with the average value over the whole depth feature map; a region in which the per-position average is greater than the overall average is taken as the target defect region. A bounding rectangle is then obtained for this region and its coordinates are confirmed, the coordinates of the corresponding target defect region in the coarse-grained image of the wafer are calculated by deconvolution, the corresponding part of the coarse-grained image may be intercepted according to these coordinates, and the intercepted image is used as the target image of the wafer. The method of determining the bounding rectangle of the target defect region and confirming its coordinates may include: finding the minimum bounding rectangle of the target defect region and confirming the coordinates of the minimum bounding rectangle. The minimum bounding rectangle of the target defect region can be determined by Non-Maximum Suppression (NMS). This facilitates framing the exact location of the target defect.
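For illustration only, the localization of steps S1211 to S1214 can be sketched in code. This is a minimal sketch assuming a PyTorch-style convolutional backbone; the identifiers backbone and locate_target_region, and the use of the backbone's total stride instead of an explicit deconvolution for mapping feature-map coordinates back to image coordinates, are assumptions of the sketch and are not specified by this embodiment.

```python
import torch

def locate_target_region(coarse_image, backbone):
    """Sketch of steps S1211-S1214: locate the target defect region in a
    coarse-grained wafer image by comparing per-position channel averages of a
    depth feature map with the global average. `backbone` is any convolutional
    feature extractor (an assumption; no specific network is prescribed here);
    `coarse_image` is a (1, C, H, W) tensor."""
    feat = backbone(coarse_image)                 # (1, K, h, w) depth feature map
    pos_mean = feat.mean(dim=1, keepdim=True)     # average over channels at each position
    global_mean = feat.mean()                     # average over all positions and channels
    mask = pos_mean[0, 0] > global_mean           # positions whose channel average exceeds the global average

    # Minimum bounding rectangle of the activated region on the feature map.
    ys, xs = torch.nonzero(mask, as_tuple=True)
    y0, y1 = ys.min().item(), ys.max().item()
    x0, x1 = xs.min().item(), xs.max().item()

    # Map feature-map coordinates back to image coordinates. This embodiment
    # uses deconvolution; the backbone's total stride is used here as a
    # simplifying assumption.
    stride_h = coarse_image.shape[2] // feat.shape[2]
    stride_w = coarse_image.shape[3] // feat.shape[3]
    target_image = coarse_image[:, :,
                                y0 * stride_h:(y1 + 1) * stride_h,
                                x0 * stride_w:(x1 + 1) * stride_w]
    return target_image
```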
In the classification of wafer defects, some defects differ only slightly in appearance. To further improve the accuracy of the wafer defect classification model, after the target image of the wafer is obtained, the target image may be further processed according to the attention mechanism to obtain a detail image, and the defect type of the wafer is further confirmed through the detail image. The fine-grained image of the wafer may also include a detail image; for example, FIG. 4C is a detail image taken from the target image of FIG. 4B.
Fig. 3 is a schematic flowchart of a process for acquiring a detail image according to an embodiment of the present application, and as shown in fig. 3, specific steps of acquiring the detail image are as follows:
first, in step S1221, a depth feature map of the target image may be extracted based on a depth convolution neural network. After the target image of the wafer is subjected to convolution kernel convolution, a plurality of depth feature maps can be obtained, wherein the depth feature map of the target image can comprise feature information such as the position, the shape and the color of a defect area of the target image.
Next, in step S1222, the average value of the features at each position channel on the depth feature map of the target image is calculated. After features are extracted from the target image of the wafer by the deep convolutional neural network, the feature average of each position channel in the depth feature map of the target image is calculated. For example, if the depth feature map of the target image is an m × n map, it contains m × n positions and each depth feature map corresponds to one channel; the values at the same position in different channels are averaged.
Then, in step S1223, a sliding window is selected to convolve the target image, and at least one activation window is confirmed according to the sliding window, where the average value of the features in the depth-feature-map channels of the activation window is greater than the average value of the features of the channels. A sliding window is selected to slide over the target image, and the detail region of the wafer defect region in the target image is confirmed based on the attention mechanism. Windows of different sizes and aspect ratios can be set to slide over the whole target image; for example, the size of a sliding window is 3 to 11 and the number of sliding windows is 3 to 7. The sliding window needs to traverse all positions of the target image, and the average value of the features of the sliding window's channel in the depth feature map is calculated; if this average is larger than a certain threshold, the sliding window can be taken as an activation window. The threshold may be the average of all pixels in the image. The sizes of the sliding windows are, for example, 3 × 3, 6 × 6 and 9 × 9, but those skilled in the art will recognize that these sizes are illustrative and the size of the sliding window in the present application is not limited thereto. Because the sliding window traverses the target image with a certain step size, the activation windows may be somewhat redundant; a fixed number of window regions can subsequently be selected as the detail defect regions of the wafer defect image by non-maximum suppression, reducing the influence of the redundancy on the detection performance of the wafer defect model. Those skilled in the art will appreciate that the number and size of the sliding windows are merely illustrative and are not limited thereto in the present application.
Then, in step S1224, the target image corresponding to at least one active window is intercepted as a detail image. After the positions and the number of the activated windows are confirmed, the position coordinates of the detail defect areas in the target image of the wafer can be calculated through deconvolution, and at least one corresponding target image is further intercepted to serve as the detail image of the wafer.
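For illustration only, steps S1221 to S1224 can be sketched as follows. The sketch assumes a PyTorch backbone and uses torchvision's non-maximum suppression; the identifiers select_detail_regions, window_sizes and max_regions, and the stride-based coordinate mapping in place of deconvolution, are assumptions of the sketch rather than requirements of this embodiment.

```python
import torch
from torchvision.ops import nms

def select_detail_regions(target_image, backbone, window_sizes=(3, 6, 9),
                          iou_thresh=0.3, max_regions=4):
    """Sketch of steps S1221-S1224: score sliding windows on the depth feature
    map of the target image and keep the activated, non-redundant ones as
    detail regions. Window sizes, the IoU threshold and max_regions are
    illustrative assumptions."""
    feat = backbone(target_image)                  # (1, K, h, w) depth feature map
    pos_mean = feat.mean(dim=1)[0]                 # (h, w) per-position channel average
    threshold = pos_mean.mean()                    # average over all positions

    boxes, scores = [], []
    h, w = pos_mean.shape
    for k in window_sizes:                         # windows of several sizes traverse the map
        for y in range(0, h - k + 1):
            for x in range(0, w - k + 1):
                score = pos_mean[y:y + k, x:x + k].mean()
                if score > threshold:              # activated window
                    boxes.append([x, y, x + k, y + k])
                    scores.append(score)
    if not boxes:
        return []

    boxes = torch.tensor(boxes, dtype=torch.float32)
    scores = torch.stack(scores)
    keep = nms(boxes, scores, iou_thresh)[:max_regions]   # remove redundant windows

    # Map window coordinates back to the target image (stride-based simplification;
    # this embodiment calculates the coordinates through deconvolution).
    stride = target_image.shape[2] // feat.shape[2]
    detail_images = []
    for i in keep:
        x0, y0, x1, y1 = (boxes[i] * stride).long().tolist()
        detail_images.append(target_image[:, :, y0:y1, x0:x1])
    return detail_images
```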
Step S130: fusing the overall features and the detail features to generate fused features;
and fusing the extracted coarse-grained image features of the wafer with the fine-grained image features of the wafer, and outputting the fused features to a classifier in a wafer defect classification model for further judgment. For example, the feature representing the defect information in the coarse-grained image and the fine-grained image may be fused, the coarse-grained image feature of the extracted wafer and the fine-grained image feature of the wafer may be concatenated, and the fused feature may include the size, shape, position, color, smoothness, texture complexity, contour, and the like of the image defect region.
Step S140: and detecting the fused features through a wafer defect classification model.
By combining the coarse-grained image and the fine-grained image of the wafer, the coarse-grained image features of the wafer are extracted and fused with the fine-grained image features, and the fused features are detected with the trained wafer defect classification model. In this way, attention is paid not only to the overall image of the wafer defects but also to the detail differences between defects, so that jointly analyzing the overall defect features and the detail defect features during wafer inspection can improve the accuracy of wafer defect type classification to a certain extent. After the fused features are detected by the trained wafer defect classification model, a more accurate wafer defect classification result can be output; for example, the classification result of the wafer defect and the corresponding confidence can be output.
In one embodiment of the present application, the higher the confidence, the more reliable the classification result. For example, if a defect image is classified into a certain defect class with a confidence of 0.99, the classification is considered correct. If the obtained confidence is lower than the gating confidence threshold, the wafer defect classification model does not classify the image, and the coarse-grained image and/or fine-grained image whose confidence is lower than the threshold is placed in an undefined class. For example, if the gating confidence threshold is 0.6, the wafer defect classification model does not classify wafer defects whose confidence is lower than 0.6, but places the corresponding coarse-grained image and/or fine-grained image in an undefined class.
Of course, the wafer inspection method is not limited to the type of wafer. The wafer detection method can be adopted for wafers with defects and wafers without defects. Even if the wafer has no defects, the wafer detection method can be adopted for detection, and the detection result and the corresponding confidence coefficient are output. For example, when the wafer itself has no defect, the wafer inspection method can be used to output the inspection result of "no defect on wafer" and the corresponding confidence.
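For illustration, the confidence gating described above can be sketched as follows; the function name gate_prediction, the use of a tensor of softmax probabilities as input, and the label "unclassified" are assumptions of the sketch, while the example threshold of 0.6 follows the text above.

```python
def gate_prediction(probabilities, class_names, gate_threshold=0.6):
    """Sketch of the confidence gating described above: if the highest class
    probability is below the gating threshold, the image is placed in an
    undefined ("unclassified") category instead of being assigned a defect
    type. `probabilities` is assumed to be a 1-D tensor of softmax outputs."""
    confidence, idx = probabilities.max(dim=-1)
    if confidence.item() < gate_threshold:
        return "unclassified", confidence.item()
    return class_names[idx.item()], confidence.item()
```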
Since the training samples for training the wafer defect classification model in step S140 are mixed with noise samples, the noise samples need to be distinguished. Based on this, a first classification model and a second classification model are introduced here to distinguish noise samples. Fig. 5A is a schematic flowchart of a process of individually training a first classification model and a second classification model according to an embodiment of the present application, and as shown in fig. 5A, specific steps of individually training the first classification model and the second classification model are as follows:
first, in step S1410, the coarse-grained image and the fine-grained image of the wafer are divided into a clean sample and a noise sample. Marking the defect types of the coarse grain image and the fine grain image of the wafer, and calibrating a part of the coarse grain image and the fine grain image by experts or experienced engineers in the field to be used as a pure sample for subsequent training and verification of a wafer defect classification model; the other part of the coarse-grained image and the fine-grained image which are not marked or marked by a front-line operator can be used as noise samples, and due to the fact that the level of the front-line operator has a certain difference, false marking may exist, and therefore the noise samples which are not marked or marked by the front-line operator can be used for subsequent training of the wafer defect classification model. Therefore, the wafer defect classification model can be trained through a small amount of pure samples and a certain amount of noise samples, so that the labor input is reduced, the accuracy of wafer defect type classification is improved to a certain extent, and the production cost is reduced.
Then, in step S1420, the clean samples are input to the first classification model, the noise samples are input to the second classification model, and the first classification model and the second classification model are trained respectively. The clean samples are input into the first classification model, the noise samples are input into the second classification model, and because the samples input by the first classification model and the second classification model are different, the parameters contained in the trained first classification model and the trained second classification model are different, and the same or different results can be obtained in the subsequent wafer detection. The first classification model and the second classification model are introduced, so that the interference of the noise sample on the wafer defect classification can be effectively eliminated on the whole. In consideration of the fact that the modeling method based on data driving is adopted in the embodiment and a large number of noise samples exist in the defect classification task, the method can effectively reduce the workload of manually verifying the samples under the condition of ensuring the classification performance of the model in the model training process.
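A minimal sketch of the separate training of steps S1410 and S1420 follows, assuming PyTorch-style models, data loaders and optimizers and a cross-entropy objective; all of these are assumptions of the sketch, since this embodiment does not prescribe a particular framework or loss function.

```python
import torch.nn.functional as F

def pretrain_separately(model_a, model_b, clean_loader, noisy_loader,
                        optimizer_a, optimizer_b, epochs=1):
    """Sketch of steps S1410-S1420: the first classification model is trained
    on the clean (expert-labeled) samples and the second on the noise samples,
    so the two models end up with different parameters."""
    for _ in range(epochs):
        for (x_clean, y_clean), (x_noisy, y_noisy) in zip(clean_loader, noisy_loader):
            loss_a = F.cross_entropy(model_a(x_clean), y_clean)   # first model, clean samples
            optimizer_a.zero_grad()
            loss_a.backward()
            optimizer_a.step()

            loss_b = F.cross_entropy(model_b(x_noisy), y_noisy)   # second model, noise samples
            optimizer_b.zero_grad()
            loss_b.backward()
            optimizer_b.step()
```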
Fig. 5B is a schematic flowchart of a process of hybrid training a first classification model and a second classification model according to an embodiment of the present application, and as shown in fig. 5B, specific steps of the hybrid training of the first classification model and the second classification model are as follows:
first, in step S1421, noise samples are divided into labeled and unlabeled data sets. The obtained wafer image is a noise sample and comprises a coarse grain image and a fine grain image of the wafer, the coarse grain image and the fine grain image of the wafer are input into a first classification model and a second classification model, and the first classification model and the second classification model predict the defect type of the wafer to obtain the predicted defect type and the confidence coefficient of the predicted defect type. And then, the sample distribution can be mixed by using a Gaussian mixture model, so that the real distribution of the wafer defect model is simulated as much as possible, and the marked data set and the unmarked data set can be distinguished conveniently. The labeled data set and the unlabeled data set are partitioned according to the confidence. The confidence of the labeled data set is greater than the set value and the confidence of the unlabeled data set is less than the set value. The set value can be adjusted according to the experience of the engineer, for example, the set value is 0.8, that is, the confidence of the defect type of the wafer is greater than 0.8, which can be regarded as labeled data, and the confidence of the defect type of the wafer is less than 0.8, which can be regarded as unlabeled data.
Next, in step S1422, the depth feature maps of the labeled data set and the unlabeled data set are extracted using the first classification model and the second classification model; in step S1423, the depth feature maps of the same sample extracted by the first classification model and the second classification model are fused; and in step S1424, the fused depth feature map is input into a classifier to obtain the detection result of the noise sample. The same noise sample is input into the first classification model and the second classification model, each of which extracts a depth feature map from it; the depth feature map extracted by the first classification model and that extracted by the second classification model are fused, the fused depth feature map is input into the classifier, and the classifier judges it to obtain the detection result of the noise sample. Before fusion, the depth feature maps extracted by the two models for the same sample can be input into a fully connected layer to obtain one-dimensional depth feature maps, which simplifies subsequent data processing.
Then, in step S1425, the overall loss of the wafer defect classification model is determined according to the detection result of the noise samples, and the hybrid training of the first classification model and the second classification model is completed. The training effect of the wafer defect classification model can be confirmed through the overall loss: the smaller the overall loss, the higher the accuracy of the defect types the model predicts for the samples, so the training effect can be judged from the overall loss. The overall loss of the wafer defect classification model includes a collaborative fine-tuning loss, a collaborative estimation loss and a regularization loss. The collaborative estimation loss is obtained by merging the prediction results of the first classification model and the second classification model on the unlabeled data set; the collaborative fine-tuning loss is obtained by linearly combining the actual labels and the predicted labels of the first and second classification models on the labeled data set, and the parameters of the two models are trained by evaluating this linear combination. The training process may also include regularizing the first classification model and the second classification model to obtain their regularization losses; regularization can prevent overfitting in wafer defect type classification.
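A minimal sketch of the overall loss of step S1425 follows, assuming cross entropy for the collaborative fine-tuning loss, a mean-squared agreement term for the collaborative estimation loss and an L2 weight penalty for the regularization loss; these specific forms and the weights alpha, lambda_u and lambda_r are assumptions of the sketch rather than requirements of this embodiment.

```python
import torch.nn.functional as F

def overall_loss(logits_a_lab, logits_b_lab, targets_lab,
                 logits_a_unlab, logits_b_unlab,
                 model_a, model_b,
                 alpha=0.5, lambda_u=1.0, lambda_r=1e-4):
    """Sketch of step S1425: fuse a collaborative fine-tuning loss on the
    labeled set, a collaborative estimation loss on the unlabeled set and a
    regularization loss into one overall objective."""
    # Collaborative fine-tuning loss: linear combination of the two models'
    # supervised losses on the labeled data.
    loss_tune = (alpha * F.cross_entropy(logits_a_lab, targets_lab)
                 + (1 - alpha) * F.cross_entropy(logits_b_lab, targets_lab))

    # Collaborative estimation loss: the two models' merged predictions on the
    # unlabeled data are encouraged to agree.
    loss_est = F.mse_loss(logits_a_unlab.softmax(-1), logits_b_unlab.softmax(-1))

    # Regularization loss on both models' parameters, preventing overfitting.
    loss_reg = (sum((p ** 2).sum() for p in model_a.parameters())
                + sum((p ** 2).sum() for p in model_b.parameters()))

    return loss_tune + lambda_u * loss_est + lambda_r * loss_reg
```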
In the wafer detection method in one embodiment of the application, not only the whole image of the wafer defects is concerned, but also the detail difference between the defects is concerned in a mode of combining the coarse-grained image and the fine-grained image of the wafer; the training of the wafer defect classification model can be completed through a small amount of pure samples and a certain amount of noise samples, so that the labor input is reduced, the accuracy of the wafer defect type classification is improved to a certain extent, and the production cost is reduced.
According to the embodiment of the application, the application also provides a wafer detection system, a wafer detection device and a readable storage medium. The wafer detection equipment also comprises a processing machine table and a wafer detection system, wherein the wafer detection system is arranged on the processing machine table and used for detecting the wafer.
Fig. 6 is a schematic diagram of an inspection system for wafers according to an embodiment of the present disclosure. The system is intended to represent hardware devices provided in various forms of detection apparatus, such as hardware devices provided in a digital computer. The detection device may represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The detection apparatus may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, wafer processing equipment, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the wafer inspection system includes: one or more processors 601, memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 601 is taken as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the wafer inspection method provided by the present application. A non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the inspection method for a wafer provided by the present application.
Memory 602, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules. The processor 601 executes various functional applications of the server and data processing by executing non-transitory software programs, instructions and modules stored in the memory 602, namely, implements the method for wafer inspection in the above method embodiments.
The memory 602 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for controlling quality, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 may include memory remotely located from the processor 601, and these remote memories may be connected to the inspection equipment for the wafer through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The inspection system for a wafer may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus for controlling quality, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or other input device. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen. In an embodiment of the present application, the input device 603 may further include a detection device for acquiring an image of the wafer, and the acquired image may include a coarse-grained image and a fine-grained image of the wafer.
Fig. 7 is a schematic view of an inspection apparatus for a wafer according to an embodiment of the present disclosure. The detection apparatus 701 may comprise the detection system of any of the embodiments described above, and the detection apparatus 701 may represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The detection apparatus may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, wafer processing equipment, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a server of a distributed system or a server incorporating a blockchain. The server can also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology.
It should be understood that the various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited in this respect as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The objects, technical solutions, and advantageous effects of the present application have been described in further detail with reference to the above embodiments. It should be understood that the above description presents only specific embodiments of the present application and is not intended to limit it. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (24)

1. A method for detecting a wafer, comprising:
acquiring a first image of the wafer, wherein the first image comprises overall features representing defect information of the wafer;
determining a second image of the wafer based on the first image, the second image comprising detail features representing defect information of the wafer;
fusing the overall features and the detail features to generate fused features; and
detecting the fused features through a wafer defect classification model.
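As a purely illustrative aid (not the claimed implementation), the sketch below shows one way the flow of claim 1 could look in PyTorch: two small convolutional encoders stand in for the overall and detail feature extractors, concatenation stands in for the fusion step, and a linear layer stands in for the wafer defect classification model. The layer sizes, image sizes, and number of defect types are assumptions of the sketch.

```python
# Hypothetical sketch of claim 1: fuse overall and detail features, then classify.
# Backbone choices, feature sizes, and concatenation-based fusion are assumptions.
import torch
import torch.nn as nn

class WaferDefectClassifier(nn.Module):
    def __init__(self, num_defect_types: int = 8):
        super().__init__()
        # Overall branch: encodes the first (whole-wafer) image.
        self.overall_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Detail branch: encodes the second (cropped defect-region) image.
        self.detail_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64 + 64, num_defect_types)

    def forward(self, first_image: torch.Tensor, second_image: torch.Tensor):
        overall = self.overall_encoder(first_image)   # overall features
        detail = self.detail_encoder(second_image)    # detail features
        fused = torch.cat([overall, detail], dim=1)   # fused features
        return self.classifier(fused)                 # defect-type logits

# Example usage with dummy data.
model = WaferDefectClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 96, 96))
print(logits.shape)  # torch.Size([2, 8])
```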
2. The method of claim 1, wherein determining the second image of the wafer based on the first image comprises:
locating a defect feature region in the first image; and
cropping the located defect feature region to obtain a target image as the second image.
3. The method of claim 2, wherein determining the second image of the wafer based on the first image comprises:
locating a detail region of a defect feature in the target image; and
cropping the located detail region to obtain a detail image as the second image.
4. The method of claim 2, wherein cropping the located defect feature region to obtain the target image as the second image comprises:
extracting a depth feature map of the first image of the wafer using a deep convolutional neural network;
determining an average of the features in each position channel of the depth feature map and an average of the features over all channels of the depth feature map;
determining, as a target defect region, a region of the depth feature map in which the average of the features in the position channel is greater than the average of the features over all channels; and
cropping the image of the target defect region as the target image.
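The following NumPy sketch illustrates one plausible reading of the comparison in claim 4: the per-position average of the depth feature map is compared against the global average over all channels to mark a target defect region. The feature-map shape and the interpretation of the two averages are assumptions of the sketch.

```python
# Hypothetical sketch of claim 4: mark positions of the depth feature map whose
# per-position channel average exceeds the global average over all channels.
import numpy as np

def target_defect_mask(feature_map: np.ndarray) -> np.ndarray:
    """feature_map: (C, H, W) depth feature map extracted by a CNN."""
    per_position_mean = feature_map.mean(axis=0)   # (H, W): average over channels at each position
    global_mean = feature_map.mean()               # scalar: average over all channels and positions
    return per_position_mean > global_mean         # boolean target defect region

fmap = np.random.rand(64, 14, 14)
mask = target_defect_mask(fmap)
print(mask.shape, mask.sum())  # (14, 14) and the number of activated positions
```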
5. The method of claim 4, wherein, after determining, as the target defect region, the region of the depth feature map in which the average of the features in the position channel is greater than the average of the features over all channels, the method comprises:
determining a minimum bounding rectangle of the target defect region and determining the coordinates of the minimum bounding rectangle;
determining the position coordinates of the target defect region in the first image of the wafer through deconvolution; and
cropping the first image of the wafer according to the position coordinates to serve as the target image.
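A minimal sketch of claim 5, with one openly stated simplification: a fixed feature-map stride stands in for the claimed deconvolution when mapping coordinates back to the first image. The stride value and image sizes are assumptions.

```python
# Hypothetical sketch of claim 5: take the minimum bounding rectangle of the
# target defect mask and map its coordinates back to first-image pixels.
import numpy as np

def min_bounding_rectangle(mask: np.ndarray):
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()  # x0, y0, x1, y1 on the feature map

def crop_target_image(first_image: np.ndarray, mask: np.ndarray, stride: int = 16):
    x0, y0, x1, y1 = min_bounding_rectangle(mask)
    # Scale feature-map coordinates to image coordinates (fixed stride is an assumption
    # standing in for the deconvolution step of the claim).
    return first_image[y0 * stride:(y1 + 1) * stride, x0 * stride:(x1 + 1) * stride]

image = np.random.rand(224, 224, 3)
mask = np.zeros((14, 14), dtype=bool)
mask[4:7, 5:9] = True
target = crop_target_image(image, mask)
print(target.shape)  # (48, 64, 3)
```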
6. The method of claim 3, wherein cropping the located detail region to obtain the detail image as the second image comprises:
extracting a depth feature map of the target image using a deep convolutional neural network;
determining an average of the features in each position channel of the depth feature map of the target image;
convolving the target image with a sliding window, and determining at least one activation window from the sliding window, wherein the average of the features in the depth feature map channels of the activation window is greater than the average of the features in the position channels of the depth feature map; and
cropping the portion of the target image corresponding to the at least one activation window as the detail image.
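One possible illustration of claim 6: the sketch slides a window over the per-position channel averages of the depth feature map (rather than performing an explicit convolution of the target image) and keeps windows whose mean activation exceeds the mean of the position-channel averages. The window size, stride, and thresholding choice are assumptions.

```python
# Hypothetical sketch of claim 6: slide a window over the depth feature map of the
# target image and keep windows whose mean activation exceeds the mean of the
# per-position channel averages.
import numpy as np

def activation_windows(feature_map: np.ndarray, win: int = 3, stride: int = 1):
    per_position_mean = feature_map.mean(axis=0)   # (H, W) average over channels
    threshold = per_position_mean.mean()           # mean of the position-channel averages
    h, w = per_position_mean.shape
    windows = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if per_position_mean[y:y + win, x:x + win].mean() > threshold:
                windows.append((x, y, x + win, y + win))  # activation window box
    return windows

fmap = np.random.rand(64, 14, 14)
print(len(activation_windows(fmap)), "activation windows found")
```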
7. The method of claim 6, wherein, after convolving the target image with the sliding window and determining the activation window from the sliding window, cropping the located detail region to obtain the detail image further comprises:
selecting, by non-maximum suppression, the region of the at least one activation window as a detail defect region of the wafer defect image;
determining the position coordinates of the detail defect region in the target image of the wafer through deconvolution; and
cropping an image of the detail defect region as the detail image.
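The non-maximum suppression step of claim 7 might look like the sketch below, with window mean activations used as scores; the 0.5 IoU threshold and the score choice are assumptions.

```python
# Hypothetical sketch of claim 7: reduce overlapping activation windows with
# non-maximum suppression, keeping the highest-scoring windows as detail defect regions.
import numpy as np

def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def nms(boxes, scores, iou_threshold: float = 0.5):
    """boxes: list of (x0, y0, x1, y1); scores: matching activation means."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        ious = np.array([iou(boxes[i], boxes[j]) for j in rest])
        order = rest[ious < iou_threshold]
    return [boxes[i] for i in keep]

boxes = [(0, 0, 4, 4), (0, 1, 4, 5), (8, 8, 11, 11)]
print(nms(boxes, scores=[0.9, 0.8, 0.7]))  # suppresses the overlapping second window
```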
8. The method of claim 1, wherein detecting the fused features through a wafer defect classification model comprises:
classifying the defects of the wafer to determine the defect type of the wafer.
9. The method of claim 1, wherein after detecting the fused features by a wafer defect classification model, the method comprises:
outputting a detection result, wherein the detection result comprises the defect type of the wafer and a confidence corresponding to the defect type of the wafer.
10. The method of claim 1, wherein the wafer defect classification model comprises a first classification model and a second classification model, and wherein the first classification model and the second classification model are trained on different training samples.
11. The method of claim 10, further comprising separately training the wafer defect classification model, comprising:
dividing the first image and the second image of the wafer into clean samples and noise samples; and
inputting the clean samples into the first classification model, inputting the noise samples into the second classification model, and training the first classification model and the second classification model separately.
12. The method of claim 11, wherein the clean samples comprise first or second images with identified defect types and are used for testing and validating the wafer defect classification model; and the noise samples comprise first or second images whose defect types are to be confirmed and are used for training the wafer defect classification model.
13. The method of claim 11, wherein separately training the first classification model and the second classification model further comprises mixed training of the first classification model and the second classification model, comprising:
dividing the noise samples into labeled and unlabeled datasets;
extracting depth feature maps of the labeled dataset and the unlabeled dataset using the first classification model and the second classification model;
fusing the depth feature maps of the same sample extracted by the first classification model and the second classification model;
inputting the fused depth feature map into a classifier to obtain detection results of the noise samples; and
determining the overall loss of the wafer defect classification model according to the detection results of the noise samples, and completing the training of the wafer defect classification model.
14. The method of claim 13, wherein dividing the noise samples into labeled and unlabeled datasets comprises:
inputting the first image or the second image of the noise samples to the first classification model and the second classification model; and
dividing the noise samples into the labeled dataset and the unlabeled dataset according to the defect types and confidences predicted by the first classification model and the second classification model, wherein the confidence of the labeled dataset is greater than a set value and the confidence of the unlabeled dataset is less than the set value.
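A hedged sketch of the split in claim 14: each noise sample goes to the labeled dataset when both classification models predict the same defect type with confidence above the set value, and to the unlabeled dataset otherwise. The agreement requirement and the 0.9 set value are assumptions of the sketch, since the claim only fixes the confidence comparison.

```python
# Hypothetical sketch of claim 14: confidence-based split of noise samples.
import torch
import torch.nn.functional as F

def split_noise_samples(model_a, model_b, images, set_value: float = 0.9):
    labeled, unlabeled = [], []
    with torch.no_grad():
        probs_a = F.softmax(model_a(images), dim=1)
        probs_b = F.softmax(model_b(images), dim=1)
    conf_a, pred_a = probs_a.max(dim=1)
    conf_b, pred_b = probs_b.max(dim=1)
    for i in range(images.size(0)):
        if conf_a[i] > set_value and conf_b[i] > set_value and pred_a[i] == pred_b[i]:
            labeled.append((images[i], int(pred_a[i])))   # pseudo-labeled sample
        else:
            unlabeled.append(images[i])                   # defect type still unconfirmed
    return labeled, unlabeled

# Example usage with toy linear "models" and random images.
model_a = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 8))
model_b = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 8))
labeled, unlabeled = split_noise_samples(model_a, model_b, torch.randn(16, 3, 32, 32))
print(len(labeled), len(unlabeled))
```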
15. The method of claim 13, wherein, before fusing the depth feature maps extracted from the same sample by the first classification model and the second classification model, the method further comprises:
inputting the depth feature maps into a fully connected layer to obtain one-dimensional depth feature maps.
16. The method of claim 13, wherein determining the overall loss of the wafer defect classification model based on the defect types and probabilities of the images comprises:
linearly combining the defect types and probabilities of the labeled dataset obtained by the first classification model and the second classification model to obtain a collaborative fine-tuning loss;
merging the defect types and probabilities of the unlabeled dataset estimated by the first classification model and the second classification model as a collaborative estimation loss;
regularizing the first classification model and the second classification model to obtain regularization losses of the first classification model and the second classification model; and
fusing the collaborative fine-tuning loss, the collaborative estimation loss, and the regularization losses to serve as the overall loss of the fine-grained wafer defect classification model.
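For illustration only, the overall loss of claim 16 could be assembled as in the sketch below. The cross-entropy form of the collaborative fine-tuning loss, the consistency (MSE) form of the collaborative estimation loss, the L2 regularization, and the weights are all assumptions of the sketch, not details fixed by the claim.

```python
# Hypothetical sketch of claim 16: fuse fine-tuning, estimation, and regularization losses.
import torch
import torch.nn.functional as F

def overall_loss(logits_a_lab, logits_b_lab, labels,
                 logits_a_unlab, logits_b_unlab,
                 model_a, model_b,
                 w_finetune=1.0, w_estimate=0.5, w_reg=1e-4):
    # Collaborative fine-tuning loss: linear combination of both models' losses on the labeled set.
    finetune = F.cross_entropy(logits_a_lab, labels) + F.cross_entropy(logits_b_lab, labels)
    # Collaborative estimation loss: one way of "merging" the two models' estimates on the
    # unlabeled set is to penalize disagreement between their predicted distributions.
    estimate = F.mse_loss(F.softmax(logits_a_unlab, dim=1),
                          F.softmax(logits_b_unlab, dim=1))
    # Regularization losses of both classification models (L2 on the parameters).
    reg = sum(p.pow(2).sum() for p in model_a.parameters()) \
        + sum(p.pow(2).sum() for p in model_b.parameters())
    return w_finetune * finetune + w_estimate * estimate + w_reg * reg

# Example usage with toy linear models.
model_a, model_b = torch.nn.Linear(16, 4), torch.nn.Linear(16, 4)
x_lab, x_unlab = torch.randn(8, 16), torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
loss = overall_loss(model_a(x_lab), model_b(x_lab), labels,
                    model_a(x_unlab), model_b(x_unlab), model_a, model_b)
print(float(loss))
```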
17. The method of claim 16, wherein the first classification model and the second classification model classify the defect types of the images according to the depth feature maps, and wherein fusing the classification results to obtain the detection results of the noise samples comprises:
processing the depth feature maps of the labeled dataset and the unlabeled dataset using a fully connected layer to obtain one-dimensional depth feature maps; and
fusing the features of the one-dimensional depth feature maps and inputting the fused features into a classifier to obtain the detection result of the wafer.
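A short sketch of claim 17 under assumed feature sizes: each model's depth feature map is flattened to a one-dimensional feature vector by a fully connected layer, the two vectors are fused by concatenation, and a shared classifier produces the detection logits. The dimensions and the concatenation-based fusion are assumptions.

```python
# Hypothetical sketch of claim 17: FC flattening, fusion, and shared classification.
import torch
import torch.nn as nn

fc_a = nn.Sequential(nn.Flatten(), nn.Linear(64 * 7 * 7, 256))  # head for the first model
fc_b = nn.Sequential(nn.Flatten(), nn.Linear(64 * 7 * 7, 256))  # head for the second model
classifier = nn.Linear(256 + 256, 8)                            # shared defect classifier

fmap_a = torch.randn(4, 64, 7, 7)   # depth feature map from the first classification model
fmap_b = torch.randn(4, 64, 7, 7)   # depth feature map from the second classification model
fused = torch.cat([fc_a(fmap_a), fc_b(fmap_b)], dim=1)  # fused one-dimensional features
print(classifier(fused).shape)      # torch.Size([4, 8]) detection logits
```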
18. The method of claim 13, wherein the detection result comprises a confidence corresponding to the defect type of the wafer.
19. An inspection system for a wafer, comprising:
a memory for storing program instructions; and
a processor in communication with the memory for executing the program instructions to implement the method of any of claims 1 to 18.
20. An apparatus for inspecting a wafer, comprising: the detection system of claim 19.
21. The detection apparatus of claim 20, further comprising: a probing device configured to acquire an image of the wafer.
22. The detection apparatus of claim 21, wherein the probing device is configured to capture a first image of the wafer and/or a second image of the wafer.
23. The detection apparatus of claim 20, wherein the detection apparatus comprises at least one of: computers, servers, cell phones, smart phones, wearable devices, wafer processing devices.
24. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 18.
CN202110842583.2A 2021-07-26 2021-07-26 Wafer detection method, wafer detection equipment and storage medium Active CN113538392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110842583.2A CN113538392B (en) 2021-07-26 2021-07-26 Wafer detection method, wafer detection equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110842583.2A CN113538392B (en) 2021-07-26 2021-07-26 Wafer detection method, wafer detection equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113538392A true CN113538392A (en) 2021-10-22
CN113538392B CN113538392B (en) 2022-11-11

Family

ID=78120775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110842583.2A Active CN113538392B (en) 2021-07-26 2021-07-26 Wafer detection method, wafer detection equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113538392B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019160999A (en) * 2018-03-13 2019-09-19 株式会社アイテス Defect inspection device, and defect inspection method
CN112074940A (en) * 2018-03-20 2020-12-11 东京毅力科创株式会社 Self-sensing corrective heterogeneous platform incorporating integrated semiconductor processing modules and methods of use thereof
CN112270722A (en) * 2020-10-26 2021-01-26 西安工程大学 Digital printing fabric defect detection method based on deep neural network
CN112529873A (en) * 2020-12-09 2021-03-19 深圳市芯汇群微电子技术有限公司 Wafer defect detection method based on ART neural network
CN112651961A (en) * 2021-01-06 2021-04-13 华虹半导体(无锡)有限公司 Wafer defect identification method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIAOYAN CHEN: "K-means clustering with morphological filtering for silicon wafer grain defect detection", 2020 IEEE 4th Information Technology *
SUN Shifan et al.: "Rubber plunger defect detection method based on superpixel segmentation and random forest", Computer and Modernization *
LUO Yuetong et al.: "Weak defect detection method for chip surface based on convolutional denoising autoencoder", Computer Science *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821194A (en) * 2022-05-30 2022-07-29 深圳市科荣软件股份有限公司 Equipment running state identification method and device
CN114821194B (en) * 2022-05-30 2023-07-25 深圳市科荣软件股份有限公司 Equipment running state identification method and device
CN116485795A (en) * 2023-06-19 2023-07-25 湖南隆深氢能科技有限公司 Coil coating production line flaw detection method and system
CN116485795B (en) * 2023-06-19 2023-09-01 湖南隆深氢能科技有限公司 Coil coating production line flaw detection method and system
CN117455897A (en) * 2023-11-30 2024-01-26 魅杰光电科技(上海)有限公司 Wafer scratch detection method, device, equipment and storage medium
CN117455897B (en) * 2023-11-30 2024-04-30 魅杰光电科技(上海)有限公司 Wafer scratch detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113538392B (en) 2022-11-11

Similar Documents

Publication Publication Date Title
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
US7409081B2 (en) Apparatus and computer-readable medium for assisting image classification
US7424146B2 (en) Defect inspection method
US8045789B2 (en) Method and apparatus for inspecting defect of pattern formed on semiconductor device
CN111693534B (en) Surface defect detection method, model training method, device, equipment and medium
CN110148130B (en) Method and device for detecting part defects
KR102521386B1 (en) Dimension measuring device, dimension measuring method, and semiconductor manufacturing system
CN113538392B (en) Wafer detection method, wafer detection equipment and storage medium
JP2014178229A (en) Teacher data creation method, image classification method and image classification device
WO2014156425A1 (en) Method for partitioning area, and inspection device
CN107369176B (en) System and method for detecting oxidation area of flexible IC substrate
TW201330135A (en) Method for building rule of thumb of defect classification, and methods for classifying defect and judging killer defect based on rule of thumb and critical area analysis
JP2011158373A (en) Method for creation of teacher data for use in automatic defect classification, and method and apparatus for automatic defect classification
KR20220012217A (en) Machine Learning-Based Classification of Defects in Semiconductor Specimens
JP2006098153A (en) Method and apparatus for automatically sorting defect
CN113241310B (en) Wafer defect detection method, detection device, detection equipment and readable storage medium
KR20220014805A (en) Generating training data usable for examination of a semiconductor specimen
CN113222967A (en) Wafer detection method and system
CN113920096A (en) Method for detecting metal packaging defects of integrated circuit
US20220405905A1 (en) Sample observation device and method
CN114445348B (en) New material water pump defect detection method and system based on optical means
JP4262269B2 (en) Pattern matching method and apparatus
TW201830334A (en) Diagnostic methods for the classifiers and the defects captured by optical tools
CN113538376B (en) Defect positioning method, device and equipment of storage array and readable storage medium
CN113039631B (en) Defect classification by fitting optical signals to point spread functions

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant