CN116071349A - Wafer defect detection method, storage medium and data processing device - Google Patents

Wafer defect detection method, storage medium and data processing device

Info

Publication number
CN116071349A
Authority
CN
China
Prior art keywords
wafer
data
test
clustering result
clustering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310195694.8A
Other languages
Chinese (zh)
Inventor
Wang Hengyu (王恒宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changxin Memory Technologies Inc
Original Assignee
Changxin Memory Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changxin Memory Technologies Inc filed Critical Changxin Memory Technologies Inc
Priority to CN202310195694.8A priority Critical patent/CN116071349A/en
Publication of CN116071349A publication Critical patent/CN116071349A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Testing Or Measuring Of Semiconductors Or The Like (AREA)

Abstract

The disclosure provides a wafer defect detection method, a storage medium, and a data processing device, relating to the field of semiconductor technology. The method includes: obtaining target wafer information; obtaining n wafer characteristic data according to the target wafer information, where n is a positive integer greater than 1; and processing m wafer characteristic data among the n wafer characteristic data with a target neural network model to obtain the defect class of the wafer, where m is a positive integer greater than 1 and less than or equal to n. In this way, the accuracy of detecting and classifying defective wafers can be improved, and the detection and classification efficiency increased.

Description

Wafer defect detection method, storage medium and data processing device
Technical Field
The present disclosure relates to the field of semiconductor technologies, and in particular, to a wafer defect detection method, a storage medium, and a data processing apparatus.
Background
In the field of integrated circuits and semiconductor technology, mass production yields large numbers of wafers every day, and engineers spend a great deal of time reviewing these wafers, inspecting them, classifying defective wafers, and identifying the factors that affect yield.
In the related art, methods for detecting and classifying defective wafers have limited accuracy and low efficiency.
Disclosure of Invention
The wafer defect detection method, storage medium, and data processing device provided by the disclosure can effectively improve the accuracy of detecting and classifying defective wafers and increase the efficiency of wafer defect detection and classification.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to one aspect of the present disclosure, there is provided a wafer defect detection method including: obtaining target wafer information; obtaining n wafer characteristic data according to the target wafer information, wherein n is a positive integer greater than 1; and processing m wafer characteristic data in the n wafer characteristic data by utilizing a target neural network model to obtain the defect class of the wafer, wherein m is a positive integer less than or equal to n and greater than 1.
According to still another aspect of the present disclosure, there is provided a wafer defect detection apparatus including: the acquisition unit is used for acquiring target wafer information; the acquisition unit is also used for acquiring n wafer characteristic data according to the target wafer information, wherein n is a positive integer greater than 1; and the processing unit is used for processing m wafer characteristic data in the n wafer characteristic data by utilizing the target neural network model to obtain the defect type of the wafer, wherein m is a positive integer less than or equal to n and more than 1.
According to yet another aspect of the present disclosure, there is provided a data processing apparatus including a processor and a memory; the memory has stored thereon computer instructions executable on the processor which when executed perform the steps of the method of any of the embodiments of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer instructions which, when run, perform the steps of the method of any of the embodiments of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product which, when executed by a processor, implements a data processing method in any of the embodiments of the present disclosure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
In summary, the disclosure provides a wafer defect detection method: target wafer information is obtained; n wafer characteristic data are obtained according to the target wafer information, where n is a positive integer greater than 1; and m wafer characteristic data among the n wafer characteristic data are processed with a target neural network model to obtain the defect class of the wafer, where m is a positive integer greater than 1 and less than or equal to n. In this way, the accuracy of detecting and classifying defective wafers can be improved, and the detection and classification efficiency increased.
Further, the target neural network model of the present disclosure may be an adaptive resonance theory neural network (Adaptive Resonance Theory Neural Network, ARTNN). Because the method of the present disclosure obtains n wafer characteristic data, the amount of characteristic data is larger than in the related art, where wafer information is processed directly. To ensure that the n wafer characteristic data can be processed well and an accurate defect class of the wafer obtained, the adaptive resonance neural network model is used as the target neural network model: it can handle more wafer characteristic data with higher efficiency and better accuracy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 is a flow chart of a wafer defect detection method according to an embodiment of the disclosure;
FIG. 2 illustrates a schematic diagram of first wafer data in an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of a second wafer data in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a wafer classification result in an embodiment of the present disclosure;
FIG. 5 is a flow chart illustrating another wafer defect detection method in an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a wafer processing process by a clustering method in one embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a wafer defect inspection apparatus according to an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of a data processing apparatus in an embodiment of the present disclosure;
fig. 9 shows a schematic diagram of a computer-readable storage medium in an embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the related art, the methods for detecting and classifying defective wafers are not accurate and have low efficiency.
Based on this, the disclosure provides a wafer defect detection method: target wafer information is obtained; n wafer characteristic data are obtained according to the target wafer information, where n is a positive integer greater than 1; and m wafer characteristic data among the n wafer characteristic data are processed with a target neural network model to obtain the defect class of the wafer, where m is a positive integer greater than 1 and less than or equal to n. In this way, the accuracy of detecting and classifying defective wafers can be improved, and the detection and classification efficiency increased.
To facilitate an overall understanding of the technical solution provided by the embodiments of the present disclosure, FIG. 1 shows a flow chart of a wafer defect detection method, which includes the following steps:
s102: and obtaining target wafer information.
The target wafer information may include first wafer data and second wafer data, among others.
In one possible embodiment, the target wafer information may be acquired as follows: initial wafer information is obtained, which includes the position information and the test result of each chip on the wafer; binary tag data are marked at the corresponding position information of each chip according to its position information and test result to obtain first wafer data; and the first wafer data with binary tag data are converted to obtain second wafer data with numerical tag data.
The initial wafer information may include various data information corresponding to wafers produced on a semiconductor device production line. The method can specifically comprise the position information and the test result of each chip on the wafer.
Illustratively, the test results in the initial wafer information include two types, test pass (Pass) and test fail (Fail), and binary tag data are marked at the corresponding position information of each chip on the wafer according to its position information and test result to obtain the first wafer data.
Specifically, the binary tag data may be marked at the corresponding position information of each chip as follows: if a chip passed the test, the binary tag marked at its position is a first value, which may be 0; if a chip failed the test, the binary tag marked at its position is a second value, which may be 1. FIG. 2 shows a wafer containing both passing and failing chips, where blank squares represent chips that passed the test and shaded squares represent chips that failed.
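The following is a minimal sketch of this binary labeling; the per-chip record layout of (row, column, pass/fail) and the array shape are illustrative assumptions, not part of the patent.

```python
import numpy as np

def make_first_wafer_data(chip_records, n_rows, n_cols):
    """Build the binary-tag wafer map: 0 = test passed (first value), 1 = test failed (second value)."""
    wafer = np.zeros((n_rows, n_cols), dtype=np.int8)
    for row, col, passed in chip_records:
        wafer[row, col] = 0 if passed else 1
    return wafer

# Hypothetical usage: three chips, one of which failed the test.
records = [(0, 0, True), (0, 1, False), (1, 0, True)]
first_wafer_data = make_first_wafer_data(records, n_rows=2, n_cols=2)
```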
Illustratively, after the first wafer data are determined, they are further processed to obtain the second wafer data with numerical tag data.
Specifically, according to the first wafer data with binary tag data, the chips that failed the test and their position information, and the chips that passed the test and their position information, are determined on the wafer. According to the determined failed chips and passed chips and their position information, the weighted distances from all failed chips on the wafer to each passed chip are obtained to determine the numerical tag data of the passed chips on the wafer; and according to the determined failed chips and their position information, the weighted distances from the remaining failed chips on the wafer to a target failed chip are obtained to determine the numerical tag data of that target failed chip.
As shown in FIG. 3, after the first wafer data with binary tag data are converted into the second wafer data with numerical tag data, the second wafer data can be rendered on the wafer as a gradient of values, the numerical value of each chip being represented by the color of the square that represents that chip.
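As a simple illustration of such a rendering, the sketch below plots an assumed 2-D array of numerical tags with matplotlib; the array name and the color map are arbitrary choices, not specified by the patent.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_second_wafer_data(second_wafer_data):
    """Render the numerical tag of each chip as the color of its square (cf. FIG. 3)."""
    fig, ax = plt.subplots()
    im = ax.imshow(np.asarray(second_wafer_data), cmap="viridis")
    fig.colorbar(im, ax=ax, label="numerical tag")
    ax.set_title("Second wafer data (numerical tag data)")
    plt.show()
```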
The second wafer data with numerical tag data can be determined from the first wafer data with binary tag data according to the following formulas:
[The formulas defining the numerical tag M(y_r) of a chip that passed the test appear only as images (BDA0004107855880000051 and BDA0004107855880000052) in the original publication and are not recoverable as text.]
where N_b denotes the number of chips on the wafer that failed the test; y_s denotes a chip that failed the test; y_r denotes a chip that passed the test; y_wc denotes the wafer center; d(y_r, y_s) denotes the distance between passed chip y_r and failed chip y_s; d(y_r, y_wc) denotes the distance between passed chip y_r and the wafer center; and m is a constant that determines the weight of the influence of the surrounding chips, whose specific value may be 1.
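Since the exact formulas are only available as images, the sketch below uses an assumed inverse-distance weighting that merely follows the symbol definitions above; the functional form, the normalization, and the default m = 1 are all assumptions and not the patented formula.

```python
import numpy as np

def numeric_tags_for_passed_chips(passed_xy, failed_xy, center_xy, m=1.0):
    """Assumed weighting: the tag of a passed chip y_r is built from its distances
    d(y_r, y_s) to all N_b failed chips and its distance d(y_r, y_wc) to the wafer center."""
    failed = np.asarray(failed_xy, dtype=float)
    center = np.asarray(center_xy, dtype=float)
    n_b = len(failed)
    tags = {}
    for y_r in passed_xy:
        y_r = np.asarray(y_r, dtype=float)
        d_rs = np.linalg.norm(failed - y_r, axis=1)   # d(y_r, y_s) for every failed chip
        d_rwc = np.linalg.norm(y_r - center)          # d(y_r, y_wc)
        tags[tuple(y_r)] = np.sum(1.0 / (1.0 + d_rs) ** m) / (n_b * (1.0 + d_rwc))
    return tags
```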
By the above means, after the target wafer information is effectively determined, S104 is executed.
S104: and obtaining n wafer characteristic data according to the target wafer information, wherein n is a positive integer greater than 1.
In one possible embodiment, the n wafer characteristic data may be determined from the target wafer information in several ways.
For example, statistical processing may be used to obtain wafer characteristic data. Specifically, the target wafer information may be processed by principal component analysis (Principal Component Analysis, PCA) to obtain 1 of the n wafer characteristic data.
For example, the target wafer information may be processed by singular value decomposition (Singular Value Decomposition, SVD) to obtain the wafer characteristic data.
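As a minimal sketch of this statistical feature extraction, the snippet below applies scikit-learn's PCA and truncated SVD to a placeholder matrix of flattened wafer maps; the matrix shape and the choice of one component are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD

# Placeholder input: 100 wafers, each flattened to a 64-value map (shape is an assumption).
wafer_matrix = np.random.rand(100, 64)

pca_feature = PCA(n_components=1).fit_transform(wafer_matrix)           # 1 of the n wafer characteristic data
svd_feature = TruncatedSVD(n_components=1).fit_transform(wafer_matrix)  # a further characteristic datum via SVD
```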
Further, the target wafer information can be clustered with multiple clustering algorithms to obtain the n wafer characteristic data. For example, 500 clustering methods may be used to process the target wafer information to determine 500 wafer characteristic data. When the target wafer information is processed to obtain wafer characteristic data, the more characteristic data obtained the better, as long as the amount remains within the available computing capacity: more characteristic data leads to a better final detection result and higher accuracy.
Specifically, the n wafer characteristic data may be obtained through multiple clustering algorithms as follows: the first wafer data and the second wafer data are clustered separately to obtain a first clustering result of the first wafer data and a second clustering result of the second wafer data, and the n wafer characteristic data are obtained from the first clustering result and the second clustering result.
In one possible embodiment, clustering models built with two different clustering algorithms each cluster the first wafer data and the second wafer data. Specifically, a first clustering model clusters the first wafer data and the second wafer data to obtain a first initial clustering result of the first wafer data and a second initial clustering result of the second wafer data, and a second clustering model clusters the first wafer data and the second wafer data to obtain a third initial clustering result of the first wafer data and a fourth initial clustering result of the second wafer data.
The first clustering result comprises a first initial clustering result and a third initial clustering result, and the second clustering result comprises a second initial clustering result and a fourth initial clustering result.
Illustratively, the first clustering model may be built using the K-means (K-Means) clustering method, and the second clustering model may be built using the particle swarm optimization based K-means (PSO-K-Means) method.
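The sketch below builds the two clustering models with scikit-learn. K-Means is available directly; PSO-K-Means is not, so a second plain K-Means with a different initialization stands in for it and only marks where a particle-swarm-optimized implementation would plug in. The inputs are assumed to be matrices of flattened wafer maps.

```python
from sklearn.cluster import KMeans

def cluster_wafer_data(first_wafer_data, second_wafer_data, k1, k2):
    """k1 / k2 are the cluster numbers determined for the first / second wafer data."""
    # First clustering model: K-Means on both data sets.
    partition1 = KMeans(n_clusters=k1, n_init=10, random_state=0).fit_predict(first_wafer_data)
    partition2 = KMeans(n_clusters=k2, n_init=10, random_state=0).fit_predict(second_wafer_data)
    # Second clustering model: a PSO-K-Means implementation would go here; a plain
    # K-Means with random initialisation is used only as a stand-in.
    partition3 = KMeans(n_clusters=k1, init="random", n_init=10, random_state=1).fit_predict(first_wafer_data)
    partition4 = KMeans(n_clusters=k2, init="random", n_init=10, random_state=1).fit_predict(second_wafer_data)
    return partition1, partition2, partition3, partition4
```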
In another possible embodiment, the first wafer data and the second wafer data may also be processed as follows: a first cluster number for the first wafer data and a second cluster number for the second wafer data are determined; the first clustering model uses the first cluster number and the second cluster number, respectively, to obtain the first initial clustering result and the second initial clustering result; and the second clustering model uses the first cluster number and the second cluster number as its first particle number and second particle number, respectively, to obtain the third initial clustering result and the fourth initial clustering result.
That is, the number of clusters to be produced by a clustering model can be determined in advance, before the wafer data are processed by that model, so that the subsequent clustering converges more quickly.
The first cluster number of the first wafer data and the second cluster number of the second wafer data may be determined by an elbow method.
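A minimal sketch of the elbow method follows: the within-cluster sum of squares (inertia) is computed for a range of candidate cluster numbers and the "elbow" of the curve is read off, here simply from a plot; the range of k is an assumption.

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

def elbow_curve(data, k_max=10):
    """Plot inertia vs. k; the cluster number is chosen at the 'elbow' of the curve."""
    ks = range(1, k_max + 1)
    inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(data).inertia_ for k in ks]
    plt.plot(list(ks), inertias, marker="o")
    plt.xlabel("number of clusters k")
    plt.ylabel("within-cluster sum of squares (inertia)")
    plt.show()
    return inertias
```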
It should be noted that the above is merely an example of processing the target wafer information with two clustering models; the target wafer information may also be processed with more clustering models to obtain more wafer characteristic data.
After the first clustering result and the second clustering result are determined in the above manner, the n wafer characteristic data can be determined. Since both clustering results carry numerical labels, the first, second, third, and fourth initial clustering results can be converted into binary tag data to facilitate subsequent processing. In this embodiment, the two clustering models applied to the first wafer data and the second wafer data produce four initial clustering results with numerical labels, so converting them into binary tag data yields 4 wafer characteristic data.
Specifically, the first initial clustering result, the second initial clustering result, the third initial clustering result, and the fourth initial clustering result are each converted into binary form, yielding 4 binary wafer characteristic data; the n wafer characteristic data include these 4 wafer characteristic data.
S106: and processing m wafer characteristic data in the n wafer characteristic data by utilizing the target neural network model to obtain the defect type of the wafer, wherein m is a positive integer less than or equal to n and greater than 1.
In one possible embodiment, the target neural network model may use an unsupervised adaptive resonant neural network model.
The target neural network model may specifically be the ART1 variant of the adaptive resonance neural network model. Other neural network models, such as a trained classification model, may also be selected.
The n wafer characteristic data obtained in the above steps are taken as the input of the adaptive resonance neural network model, and a final clustering is performed to obtain the final grouping result. The n wafer characteristic data need not all be input to the ART model; any combination of them may be input.
For example, the 4 wafer characteristic data obtained above may be input into the adaptive resonance neural network model to obtain the final grouping result, which includes a category of defective wafers and a category of non-defective wafers, as shown in FIG. 4.
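For illustration, the sketch below is a simplified ART1-style clusterer for binary feature vectors; it is not the patented model, and the vigilance and choice parameters are arbitrary assumptions.

```python
import numpy as np

def art1_cluster(inputs, vigilance=0.7, alpha=0.5):
    """Group binary vectors into self-created categories (simplified ART1 dynamics)."""
    prototypes = []          # one binary prototype per learned category
    assignments = []
    for x in np.asarray(inputs, dtype=np.int8):
        if not prototypes:
            prototypes.append(x.copy())
            assignments.append(0)
            continue
        # Choice function: try the best-matching categories first.
        scores = [np.sum(x & w) / (alpha + np.sum(w)) for w in prototypes]
        for j in np.argsort(scores)[::-1]:
            match = np.sum(x & prototypes[j]) / max(np.sum(x), 1)
            if match >= vigilance:                  # vigilance test passed: resonate
                prototypes[j] = x & prototypes[j]   # fast-learning prototype update
                assignments.append(int(j))
                break
        else:                                       # no category matched: create a new one
            prototypes.append(x.copy())
            assignments.append(len(prototypes) - 1)
    return assignments, prototypes
```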
In the approach of the disclosure, the first wafer data and the second wafer data in the target wafer information are clustered by multiple clustering algorithms to obtain multiple initial clustering results, and multiple wafer characteristic data are extracted from these results. In the clustering stage, as many clustering algorithms as possible are used to build clustering models and process the target wafer information so as to obtain as much wafer characteristic data as possible; the adaptive resonance neural network model then distinguishes and screens this wafer characteristic data, removes noise, and finally detects defective wafers to obtain the defect class of the wafer.
It should be noted that, to improve detection and classification efficiency, the present disclosure obtains as much wafer characteristic data as possible from a relatively small number of tests so that defective wafers can be detected quickly. The related art, however, cannot effectively process so many features or distinguish and detect defective wafers among them; for this reason, the adaptive resonance neural network model is selected as the target neural network model in the present disclosure.
Furthermore, because the target wafer information is processed by multiple clustering algorithms rather than by only one clustering algorithm as in the related art, the detection and classification of wafers is more accurate, and after the n wafer characteristic data are obtained and processed by the target neural network model, the screening result is more accurate and the efficiency is higher.
In one possible embodiment, FIG. 5 shows a flow chart of another wafer defect detection method of the present disclosure, which includes the following steps.
S502: and obtaining target wafer information, wherein the target wafer information comprises first wafer data and second wafer data.
In one possible embodiment, the target wafer information is processed as schematically shown in FIG. 6 to obtain the final defect class of the wafer.
S504: and clustering the first wafer data and the second wafer data through a K neighbor clustering algorithm and a particle swarm-based K neighbor algorithm respectively to obtain a first clustering result and a second clustering result, and determining 4 wafer characteristic data, wherein the first clustering result comprises a first initial clustering result and a third initial clustering result, and the second clustering result comprises a second initial clustering result and a fourth initial clustering result.
In one possible embodiment, Partition1 in FIG. 6 represents the first initial clustering result, Partition2 the second initial clustering result, Partition3 the third initial clustering result, and Partition4 the fourth initial clustering result.
The first, second, third, and fourth initial clustering results, which carry numerical tag data, are converted into binary tag data to obtain the wafer characteristic data.
For example, as shown in Tables 1 and 2 below, the numerical clustering results of Partition1, Partition2, Partition3, and Partition4 are converted into binary form:
TABLE 1 Numerical clustering results (cluster result)

col1  col2  col3  col4  Clustering result
0     0     1     0     group1
1     1     1     0     group2

TABLE 2 Binary wafer characteristic data

col1  col2  col3  col4  group1  group2
0     0     1     0     1       0
1     1     1     0     0       1
In Tables 1 and 2, col1 to col4 represent the numerical clustering results. When a clustering result is converted into binary wafer characteristic data, it is encoded with one bit per group of the clustering results in Table 1: with the 2 groups in the table (group1 and group2), 2 bits are used, and a clustering result belonging to group1 is encoded as "10" while a result belonging to group2 is encoded as "01".
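The sketch below reproduces this numeric-to-binary (one-hot) conversion for the two example rows; the helper name and input labels are illustrative only.

```python
import numpy as np

def to_binary_features(cluster_labels):
    """One bit per group: e.g. group1 -> [1, 0], group2 -> [0, 1]."""
    groups = sorted(set(cluster_labels))
    index = {g: i for i, g in enumerate(groups)}
    one_hot = np.zeros((len(cluster_labels), len(groups)), dtype=int)
    for row, label in enumerate(cluster_labels):
        one_hot[row, index[label]] = 1
    return groups, one_hot

groups, binary = to_binary_features(["group1", "group2"])
# binary == [[1, 0], [0, 1]], matching the group1/group2 columns of Table 2
```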
S506: and processing m wafer characteristic data in the n wafer characteristic data by utilizing a target neural network model to obtain the defect class of the wafer, wherein m is a positive integer less than or equal to n and greater than 1.
Based on the same inventive concept, the embodiments of the present disclosure also provide a wafer defect detection apparatus, as in the following embodiments. Since the principle by which the apparatus embodiment solves the problem is similar to that of the method embodiment, its implementation can refer to the implementation of the method embodiment, and repeated descriptions are omitted.
Fig. 7 shows a schematic structural diagram of a wafer defect detection device 70, which includes: an acquiring unit 701, configured to acquire target wafer information and to obtain n wafer characteristic data according to the target wafer information, where n is a positive integer greater than 1; and a processing unit 702, configured to process m wafer characteristic data among the n wafer characteristic data with the target neural network model to obtain the defect class of the wafer, where m is a positive integer greater than 1 and less than or equal to n.
Fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present disclosure. As shown in fig. 8, a data processing apparatus in an embodiment of the present disclosure may include: one or more processors 801, memory 802, and input-output interfaces 803. The processor 801, memory 802, and input-output interface 803 are connected via a bus 804. The memory 802 is used to store a computer program including program instructions, the input output interface 803 is used to receive data and output data, such as for performing data interactions between a host and a data processing device, or for performing data interactions between various virtual machines in a host; the processor 801 is configured to execute program instructions stored in the memory 802.
The processor 801 may, among other things, perform the following operations: obtaining target wafer information; obtaining n wafer characteristic data according to the target wafer information, where n is a positive integer greater than 1; and processing m wafer characteristic data among the n wafer characteristic data with a target neural network model to obtain the defect class of the wafer, where m is a positive integer greater than 1 and less than or equal to n.
In some possible implementations, the processor 801 may be a central processing unit (central processing unit, CPU), which may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), off-the-shelf programmable gate arrays (field-programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 802 may include read only memory and random access memory, and provides instructions and data to the processor 801 and the input output interface 803. A portion of memory 802 may also include non-volatile random access memory. For example, the memory 802 may also store information of device type.
In a specific implementation, the data processing device may execute, through each built-in functional module, an implementation manner provided by each step in any method embodiment described above, and specifically, the implementation manner provided by each step in the diagram shown in the method embodiment described above may be referred to, which is not described herein again.
Embodiments of the present disclosure provide a data processing apparatus, including: a processor, an input-output interface, and a memory, where the processor obtains a computer program in the memory, and performs the steps of the method shown in any of the embodiments above.
The embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program. FIG. 9 shows a schematic diagram of a computer-readable storage medium in an embodiment of the present disclosure; as shown in FIG. 9, a program product capable of implementing the method of the present disclosure is stored on the computer-readable storage medium 900. The computer program is suitable for being loaded by a processor to execute the data processing method provided by the steps of any of the foregoing embodiments; for details, refer to the implementations provided in those embodiments, which are not repeated here, nor are the descriptions of the same advantageous effects. For technical details not disclosed in the storage-medium embodiments of the present disclosure, please refer to the description of the method embodiments of the present disclosure. As an example, the computer program may be deployed to be executed on one data processing device, or on multiple data processing devices located at one site or distributed across multiple sites and interconnected by a communication network.
The computer-readable storage medium may be an internal storage unit of the data processing device provided in any of the foregoing embodiments, such as a hard disk or memory of the data processing device. It may also be an external storage device of the data processing device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the data processing device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the data processing device. The computer-readable storage medium is used to store the computer program and other programs and data required by the data processing device, and may also be used to temporarily store data that has been output or is to be output.
The disclosed embodiments also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the data processing apparatus reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the data processing apparatus to perform the methods provided in the various alternatives in any of the embodiments described above.
The terms first, second and the like in the description, claims, and drawings of the embodiments of the disclosure are used to distinguish different objects, not to describe a particular sequential order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or elements is not limited to the listed steps or modules but may, alternatively, include other steps or modules not listed or inherent to such process, method, apparatus, article, or device.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in this description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The methods and related devices provided by the embodiments of the present disclosure are described with reference to the method flowcharts and/or structure diagrams provided by the embodiments of the present disclosure, and each flowchart and/or block of the method flowcharts and/or structure diagrams may be implemented by computer program instructions, and combinations of flowcharts and/or block diagrams. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable application display device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable application display device, create means for implementing the functions specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable application display device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable application display device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer implemented process such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or structures block or blocks.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any adaptations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims.

Claims (13)

1. A method for detecting wafer defects, comprising:
obtaining target wafer information;
obtaining n wafer characteristic data according to the target wafer information, wherein n is a positive integer greater than 1;
and processing m wafer characteristic data in the n wafer characteristic data by utilizing a target neural network model to obtain the defect class of the wafer, wherein m is a positive integer less than or equal to n and greater than 1.
2. The method of claim 1, wherein obtaining target wafer information comprises:
obtaining initial wafer information, wherein the initial wafer information comprises position information and test results of each chip on the wafer;
marking binary tag data on the corresponding position information of each chip on the wafer according to the position information and the test result of each chip on the wafer to obtain first wafer data;
converting the first wafer data with the binary tag data to obtain second wafer data with the numerical tag data;
wherein the target wafer information includes the first wafer data and the second wafer data.
3. The method of claim 2, wherein marking binary label data on the corresponding position information of each chip on the wafer according to the test result of each chip on the wafer to obtain the first wafer data comprises:
if the test result of the chips on the wafer is that the test is passed, the binary label data marked on the corresponding position information is a first value;
and if the test result of the chips on the wafer is that the test fails, the binary label data marked on the corresponding position information is a second value.
4. The method of claim 2, wherein converting the first wafer data with binary tag data to obtain second wafer data with numeric tag data, comprises:
determining a failed test chip and position information thereof and a passed test chip and position information thereof on the wafer according to the first wafer data with the binary label data;
according to the determined failed test chips and the position information thereof and the determined passed test chips and the position information thereof, obtaining the weighted distances of all failed test chips on the wafer to all passed test chips so as to determine the numerical label data of the passed test chips on the wafer;
and according to the determined failed test chips on the wafer and the position information thereof, obtaining the weighted distances from the remaining failed test chips on the wafer to a target failed test chip, so as to determine the numerical label data of the target failed test chip on the wafer.
5. The method of claim 4, wherein the numerical label data M(y_r) of a test-passed chip on the wafer is determined according to the following formulas:
[The formulas appear only as images (FDA0004107855870000021 and FDA0004107855870000022) in the original publication and are not recoverable as text.]
in the above formulas, N_b denotes the number of chips on the wafer that failed the test; y_s denotes a chip that failed the test; y_r denotes a chip that passed the test; y_wc denotes the wafer center; d(y_r, y_s) denotes the distance between passed chip y_r and failed chip y_s; d(y_r, y_wc) denotes the distance between passed chip y_r and the wafer center; and m is a constant.
6. The method of claim 2, wherein obtaining n wafer characteristic data from the target wafer information further comprises:
and processing the target wafer information by adopting principal component analysis to obtain 1 wafer characteristic data in the n wafer characteristic data.
7. The method of claim 2, wherein obtaining n wafer characteristic data from the target wafer information comprises:
clustering the first wafer data and the second wafer data respectively to obtain a first clustering result of the first wafer data and a second clustering result of the second wafer data;
and obtaining n wafer characteristic data according to the first clustering result and the second clustering result.
8. The method of claim 7, wherein clustering the first wafer data and the second wafer data to obtain a first clustered result of the first wafer data and a second clustered result of the second wafer data, respectively, comprises:
clustering the first wafer data and the second wafer data by using a first clustering model to obtain a first initial clustering result of the first wafer data and a second initial clustering result of the second wafer data;
clustering the first wafer data and the second wafer data by using a second clustering model respectively to obtain a third initial clustering result of the first wafer data and a fourth initial clustering result of the second wafer data;
the first clustering result comprises the first initial clustering result and the third initial clustering result, and the second clustering result comprises the second initial clustering result and the fourth initial clustering result.
9. The method of claim 8, wherein obtaining n wafer characteristic data from the first clustering result and the second clustering result comprises:
converting the first initial clustering result, the second initial clustering result, the third initial clustering result, and the fourth initial clustering result into 4 binary wafer characteristic data, respectively;
the n wafer characteristic data includes 4 wafer characteristic data.
10. The method of claim 7, wherein clustering the first wafer data and the second wafer data to obtain a first clustered result of the first wafer data and a second clustered result of the second wafer data, respectively, further comprises:
determining a first cluster number of the first wafer data and a second cluster number of the second wafer data;
the first clustering model is respectively used for obtaining a first initial clustering result and a second initial clustering result by adopting the first clustering number and the second clustering number; and respectively adopting the first cluster number and the second cluster number as a first particle number and a second particle number for the second clustering model to obtain the third initial clustering result and the fourth initial clustering result.
11. The method of claim 1, wherein the target neural network model comprises an adaptive resonant neural network model.
12. A computer readable storage medium having stored thereon computer instructions, which when run perform the steps of the method of any of claims 1 to 11.
13. A data processing apparatus comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, wherein the processor, when executing the computer instructions, performs the steps of the method of any of claims 1 to 11.
CN202310195694.8A 2023-02-27 2023-02-27 Wafer defect detection method, storage medium and data processing device Pending CN116071349A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310195694.8A CN116071349A (en) 2023-02-27 2023-02-27 Wafer defect detection method, storage medium and data processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310195694.8A CN116071349A (en) 2023-02-27 2023-02-27 Wafer defect detection method, storage medium and data processing device

Publications (1)

Publication Number Publication Date
CN116071349A true CN116071349A (en) 2023-05-05

Family

ID=86171574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310195694.8A Pending CN116071349A (en) 2023-02-27 2023-02-27 Wafer defect detection method, storage medium and data processing device

Country Status (1)

Country Link
CN (1) CN116071349A (en)

Similar Documents

Publication Publication Date Title
US7415387B2 (en) Die and wafer failure classification system and method
TW201923922A (en) Defect inspection method
US6507800B1 (en) Method for testing semiconductor wafers
KR101195226B1 (en) Semiconductor wafer analysis system
US20180196911A1 (en) Forecasting wafer defects using frequency domain analysis
KR101331249B1 (en) Method and apparatus for manufacturing data indexing
CN111512324A (en) Method and system for deep learning-based inspection of semiconductor samples
CN113763312B (en) Detection of defects in semiconductor samples using weak labels
US8170707B2 (en) Failure detecting method, failure detecting apparatus, and semiconductor device manufacturing method
Tam et al. Systematic defect identification through layout snippet clustering
WO2020234863A1 (en) Machine learning-based classification of defects in a semiconductor specimen
US11144702B2 (en) Methods and systems for wafer image generation
Zhang et al. WDP-BNN: Efficient wafer defect pattern classification via binarized neural network
CN111582309A (en) Method for generating dead pixel detection model of design layout and dead pixel detection method
Park et al. Data mining approaches for packaging yield prediction in the post-fabrication process
KR20190081843A (en) Method and apparatus for processing wafer data
CN116071349A (en) Wafer defect detection method, storage medium and data processing device
Tsai et al. Enhancing the data analysis in IC testing by machine learning techniques
Hsu et al. Variation and failure characterization through pattern classification of test data from multiple test stages
CN108254669B (en) Integrated circuit testing method
Kan et al. Network models for monitoring high-dimensional image profiles
US11423529B2 (en) Determination of defect location for examination of a specimen
CN115329827A (en) System for predicting products manufactured by manufacturing process and training method thereof
KR20190081710A (en) Method and computer program for classifying wafer map according to defect type
CN117272122B (en) Wafer anomaly commonality analysis method and device, readable storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination