CN114648528A - Semiconductor detection method and device and computer readable storage medium - Google Patents

Semiconductor detection method and device and computer readable storage medium

Info

Publication number
CN114648528A
CN114648528A (application CN202210544301.5A; granted as CN114648528B)
Authority
CN
China
Prior art keywords
neural network
network model
training
sample
model
Prior art date
Legal status
Granted
Application number
CN202210544301.5A
Other languages
Chinese (zh)
Other versions
CN114648528B (en)
Inventor
韩娜
孙罗男
Current Assignee
Jiangsu Third Generation Semiconductor Research Institute Co Ltd
Original Assignee
Jiangsu Third Generation Semiconductor Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Third Generation Semiconductor Research Institute Co Ltd
Priority to CN202210544301.5A
Publication of CN114648528A
Application granted
Publication of CN114648528B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30148: Semiconductor; IC; Wafer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30204: Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Testing Or Measuring Of Semiconductors Or The Like (AREA)

Abstract

The invention discloses a semiconductor detection method, a semiconductor detection device, and a computer-readable storage medium. Based on various performance characterization parameters, different inspection images, and marked grades of training set samples, the multilayer neural network models corresponding to the various performance characterization modes and the convolutional neural network models corresponding to the different inspection images are first trained individually; after the error recognition rate of every neural network model is no greater than a preset value, all neural network models are trained jointly, which further reduces the error recognition rate of each model. The weight of each neural network model is then set according to the semiconductor product type, and the output results of all models are fused. With the method, the device, and the computer-readable storage medium provided by the invention, a heterogeneous neural network is established to detect the grade of a substrate or an epitaxial wafer from the different characterization parameters and inspection images produced by different inspection equipment in the semiconductor manufacturing process, which reduces the manual workload of testers and improves detection efficiency and precision.

Description

Semiconductor detection method and device and computer readable storage medium
Technical Field
The present invention relates to the field of semiconductor technologies, and in particular, to a semiconductor detection method, a semiconductor detection device, and a computer-readable storage medium.
Background
In the production of semiconductor devices, tens or even hundreds of process steps are required to go from a semiconductor single crystal wafer to a final product. To ensure that products are qualified, stable, reliable, and produced with high yield, strict requirements must be imposed on every process step according to the production conditions of the various products. Therefore, a corresponding inspection system and accurate monitoring measures must be established during the semiconductor manufacturing process.
In the prior art, defect detection in semiconductor processes is mainly performed manually, for example by purely manual inspection, combined manual and image inspection, or template matching. A tester uses different inspection equipment to measure various characterization parameters of the substrate or epitaxial wafer and then judges whether the wafer meets the requirements by comparison with standard characterization parameters. This mainly manual detection approach has the following problems: 1. the manual workload is large and the efficiency is low; 2. not all characterization parameters can be taken into account, and as semiconductor processing precision improves, the precision of the original method is no longer sufficient; 3. correlations exist among different characterization parameters, so the obtained results interfere with one another.
In addition, some characterization data (such as crystallographic characterization) are imaging results that must first be output by the inspection equipment and then processed by a professional inspection engineer to obtain the related characterization data. This manual processing is highly subjective, demands a high level of professionalism from testers, and is inefficient.
In summary, how to improve the detection precision and efficiency of semiconductors to meet the higher inspection requirements of third-generation semiconductors is a problem to be solved at present.
Disclosure of Invention
The invention aims to provide a semiconductor detection method, a semiconductor detection device, and a computer-readable storage medium, to solve the problems that prior-art methods for inspecting semiconductor substrates or epitaxial wafers are strongly influenced by human subjectivity and have low detection precision and efficiency.
In order to solve the above technical problem, the present invention provides a semiconductor inspection method, including:
marking the corresponding grades of various performance characterization parameters of the training set sample and the corresponding grades of different inspection images;
taking various performance characterization parameters of the training set sample as the input of the multilayer neural network model corresponding to various performance characterization modes, and taking the corresponding grades of the various performance characterization parameters of the training set sample as the output;
taking different inspection images of the training set sample as the input of convolutional neural network models corresponding to different image detection devices, and taking the corresponding grades of the different inspection images of the training set sample as the output;
training each neural network model based on various performance characterization parameters, different inspection images and marking grades of the training set sample until the error recognition rate of all the neural network models is not greater than the preset error recognition rate, and finishing single model training;
setting the weight of each neural network model based on the type of the semiconductor product;
and respectively detecting the multi-class performance representation and the multiple inspection images of the sample to be tested by utilizing each neural network model which completes the single model training, and fusing the outputs of all the neural network models according to the weight of each neural network model to obtain the final grade of the sample to be tested.
Preferably, before the detecting the sample to be detected by using each neural network model completing the single model training, the method includes:
classifying the training set samples according to the grades of the marks, screening the samples with the same grade corresponding to various performance characterization parameters and different test images in the training set, and generating a target training set with the same grade of each sample;
setting a combined training frequency threshold value, and performing combined training on each neural network model after single model training based on the target training set;
marking, in each round of training, any neural network model whose output result differs from the grade that accounts for the highest proportion of all the neural network models' outputs;
when the number of training times reaches the combined training frequency threshold, comparing the number of times each neural network model has been marked with a first threshold value;
if the number of times that the current neural network model is marked is smaller than the first threshold value, the internal parameters and the weight of the current neural network model are not adjusted;
and if the number of times that the current neural network model is marked is greater than or equal to the first threshold value, adjusting the internal parameters or the weight of the current neural network model.
Preferably, if the number of times that the current neural network model is marked is greater than or equal to the first threshold, adjusting the internal parameters or weights of the current neural network model includes:
judging whether the number of times that the current neural network model is marked is smaller than a second threshold value;
if the number of times that the current neural network model is marked is smaller than the second threshold value, adjusting the weight of the current neural network model;
if the number of times that the current neural network model is marked is greater than or equal to the second threshold value, adjusting internal parameters of the current neural network model according to the type of the current neural network model, and retraining the current neural network model;
after the parameter adjustment and retraining of all the neural network models with the marked times larger than or equal to the second threshold value are completed, performing recombination training on all the neural network models until the marked times of all the neural network models are smaller than the second threshold value.
Preferably, if the number of times that the current neural network model is marked is greater than or equal to the second threshold, adjusting the internal parameters of the current neural network model according to the type of the current neural network model includes:
obtaining the type of the current neural network model;
if the current neural network model is a multilayer neural network, the adjusted parameters are any one or more of the number of input layer nodes, the number of hidden layers or hidden layer nodes, and the learning rate of the multilayer neural network;
if the current neural network model is a convolutional neural network, the adjusted parameters are the size and the step length of a convolutional kernel.
Preferably, the various types of performance characterization parameters include one or more of crystallographic, electrical, optical, and thermodynamic characterization parameters of the substrate or the epitaxial wafer;
the image detection equipment comprises one or more of an optical microscope, an atomic force microscope, a transmission electron microscope, a scanning tunneling microscope, a scanning electron microscope and automatic optical detection equipment.
Preferably, the semiconductor product type at least comprises an epitaxial wafer or a substrate of a laser, a MicroLED, a deep ultraviolet LED, a power device and a radio frequency device; the weights of the neural network models corresponding to different semiconductor product types are different.
Preferably, setting the weight of each neural network model based on the semiconductor product type at least includes:
when the sample to be tested is an epitaxial wafer or a substrate of the power device, the weight of the multilayer neural network model corresponding to the electrical characterization is greater than the weight of other neural network models;
and when the sample to be detected is an epitaxial wafer or a substrate of the MicroLED, the weight of the multilayer neural network model corresponding to the optical representation is greater than the weight of other neural network models.
Preferably, the multilayer neural network model is one or more of a fully connected neural network (DNN) algorithm, a recurrent neural network (RNN) algorithm, a feedforward neural network (FNN) algorithm, and the like;
the convolutional neural network model is one or more of a LeNet-5 network, an AlexNet network, a VGG-16 network, a ResNet residual network, a DenseNet network, an SENet network, and a multi-scale multi-column convolutional neural network.
The present invention also provides a semiconductor inspection apparatus, comprising:
the grade marking module is used for marking grades corresponding to various performance characterization parameters of the training set sample and grades corresponding to different inspection images;
the multilayer neural network model building module is used for taking various performance characterization parameters of the training set sample as the input of the multilayer neural network model corresponding to various performance characterization modes and taking the corresponding grades of the various performance characterization parameters of the training set sample as the output;
the convolutional neural network model construction module is used for taking different inspection images of the training set sample as the input of convolutional neural network models corresponding to different image detection devices and taking the corresponding grades of the different inspection images of the training set sample as the output;
the single model training module is used for training each neural network model based on various performance characterization parameters, different inspection images and marking grades of the training set sample until the error recognition rate of all the neural network models is not greater than the preset error recognition rate, and completing single model training;
the weight setting module is used for setting the weight of each neural network model based on the type of the semiconductor product;
and the detection module is used for respectively detecting the multi-class performance representation and the multiple inspection images of the sample to be detected by utilizing each neural network model which completes the single model training, and fusing the outputs of all the neural network models according to the weight of each neural network model to obtain the final grade of the sample to be detected.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a semiconductor inspection method as described above.
The semiconductor detection method provided by the invention marks the corresponding grades of various performance characterization parameters of the training set sample and the corresponding grades of different inspection images. And taking each type of performance characterization parameter of the training set sample as input, taking the grade corresponding to each type of performance characterization parameter of the training set sample as output, and respectively training the multilayer neural network model corresponding to each type of performance characterization mode. And taking different inspection images of the training set sample as input, taking the corresponding grades of the different inspection images of the training set sample as output, and respectively training the convolutional neural network models corresponding to the image detection devices of the different inspection images. And when the error recognition rate of all the neural network models is not greater than the preset error recognition rate, finishing the single model training of all the neural network models. Weights for the respective neural network models are set based on the semiconductor product type. And respectively detecting the multi-class performance characterization parameters and the multiple inspection images of the sample to be tested by using each neural network model which completes the single model training, and fusing the outputs of all the neural network models according to the weight of each neural network model to obtain the final grade of the sample to be tested.
According to the method provided by the invention, all performance characterization parameters that appear during detection of the sample to be tested are used as evaluation indexes by the trained multilayer neural networks, which greatly improves semiconductor detection precision; and each class of performance characterization parameters is detected by its own neural network model, which effectively prevents correlations among different characterization parameters from interfering with the detection results of the different properties. Feature extraction and classification are performed on the inspection images acquired by the image inspection equipment using the trained convolutional neural networks to obtain the grade of the sample to be tested; this not only improves detection efficiency and precision and avoids the influence of tester subjectivity on the detection result, but also reduces the need for highly specialized personnel in the semiconductor inspection process. The invention can set a weight for each neural network model according to the semiconductor product type and fuse the output results of all neural network models according to the preset weights to obtain the final grade of the sample to be tested; this multi-model fusion further improves detection precision to meet the inspection requirements of third-generation semiconductors and is suitable for detecting various types of semiconductor products.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a flow chart of a first embodiment of a semiconductor inspection method according to the present invention;
FIG. 2 is a schematic structural diagram of a multi-layer neural network model;
FIG. 3 is a schematic diagram of the structure of a convolutional neural network model;
FIG. 4 is a flow chart of a second embodiment of a semiconductor inspection method according to the present invention;
fig. 5 is a block diagram of a semiconductor inspection apparatus according to an embodiment of the present invention.
Detailed Description
The core of the invention is to provide a semiconductor detection method, a semiconductor detection device and a computer readable storage medium, which effectively improve the precision and the efficiency of semiconductor detection.
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a semiconductor inspection method according to a first embodiment of the present invention; the specific operation steps are as follows:
step S101: marking the corresponding grades of various performance characterization parameters of the training set sample and the corresponding grades of different inspection images;
in the embodiment of the present invention, the various performance characterization manners of the training set sample or the sample to be tested may include: crystallographic, electrical, optical, thermodynamic, and other characterizations. Each type of characterization mode comprises a plurality of parameters. Crystallographic characterization includes at least one of crystallographic and tilt angles, surface roughness, surface macro-defects, perforations, pits, halos, combinations, dislocation density, orientation edges/orientation planes; the electrical characterization includes at least one of resistivity, carrier concentration, doping; the optical characterization includes: at least one of a light emission peak wavelength, a light absorption coefficient, an infrared absorption spectrum peak wavelength, and a light emission intensity; the thermodynamic characterization includes: at least one of strength, warp, total thickness variation, bow, thermal conductivity, dimensional parameter, composition/doping.
In the semiconductor manufacturing process, different inspection images are obtained by different image inspection equipment, which may include one or more of an optical microscope, an atomic force microscope, a transmission electron microscope, a scanning tunneling microscope, a scanning electron microscope, automatic optical inspection equipment, and the like.
When the distance between atoms is reduced below a certain level, the interatomic force rises rapidly, so the force acting on the microprobe of an atomic force microscope (AFM) can be converted directly into the height of the sample surface, yielding information on the surface topography of the sample. A transmission electron microscope (TEM) projects an accelerated and focused electron beam onto a very thin sample; the electrons collide with atoms in the sample and change direction, producing solid-angle scattering. The scattering angle depends on the density and thickness of the sample, so images of different brightness are formed, which can be displayed on an imaging device (such as a fluorescent screen, film, or a charge-coupled device) after magnification and focusing. A scanning electron microscope (SEM) is an observation instrument intermediate between the transmission electron microscope and the optical microscope. It scans the sample with a focused, narrow, high-energy electron beam, excites various physical signals through the interaction between the beam and the material, and collects, amplifies, and re-images these signals to characterize the microscopic morphology of the material.
Step S102: taking various performance characterization parameters of the training set sample as the input of the multilayer neural network model corresponding to various performance characterization modes, and taking the corresponding grades of the various performance characterization parameters of the training set sample as the output;
in the embodiment of the invention, one performance characterization mode at least corresponds to one multilayer neural network model. For example, taking the crystallography characterization parameters of the training set samples as input, taking the corresponding grades of the crystallography characterization parameters of the training set samples as output, and training a first multilayer neural network model; taking the electrical characterization parameters of the training set samples as input, taking the corresponding grades of the electrical characterization parameters of the training set samples as output, and training a second multilayer neural network model; and by analogy, training a plurality of multilayer neural network models.
As shown in FIG. 2, the multilayer neural network model includes an input layer, at least one hidden layer, and an output layer; the input layer contains nodes in one-to-one correspondence with the input parameters, and each input node is connected to the neurons of the hidden layer. The number of input layer nodes and the number of hidden layers of the corresponding multilayer neural network model can be determined from the input characterization type. The number of input layer nodes is set according to the number of parameters included in each performance characterization mode, for example: when the input is the crystallographic characterization, the number of input layer nodes is at least 9; when the input is the electrical characterization, the number of input layer nodes is at least 3; when the input is the optical characterization, the number of input layer nodes is at least 4; when the input is the thermodynamic characterization, the number of input layer nodes is at least 7. The number of hidden layer nodes is typically S = 2n + 1, where n is the number of input layer nodes.
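As a minimal sketch of such a model, the following Python/PyTorch snippet builds a fully connected network whose input width equals the parameter count of one characterization mode and whose hidden width follows the S = 2n + 1 rule above; the number of hidden layers, the ReLU activation, and the three output grades are illustrative assumptions rather than values fixed by the patent.

```python
import torch
import torch.nn as nn

def build_mlp(num_inputs: int, num_grades: int = 3, num_hidden_layers: int = 1) -> nn.Sequential:
    """Multilayer (fully connected) network for one characterization mode.

    num_inputs follows the parameter count of the mode (e.g. 9 for
    crystallographic, 3 for electrical); the hidden width uses S = 2n + 1.
    """
    hidden = 2 * num_inputs + 1
    layers, width = [], num_inputs
    for _ in range(num_hidden_layers):
        layers += [nn.Linear(width, hidden), nn.ReLU()]
        width = hidden
    layers.append(nn.Linear(width, num_grades))  # raw scores, one per grade
    return nn.Sequential(*layers)

# Example: one model per characterization mode, three output grades each.
electrical_model = build_mlp(num_inputs=3)
crystallographic_model = build_mlp(num_inputs=9)
```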
It should be noted that the multilayer neural network models corresponding to the various performance characterization parameters may be of the same type or of different types. The neural network model may use one or more algorithms such as a fully connected neural network (DNN), a recurrent neural network (RNN), or a feedforward neural network (FNN), for example a BP neural network, an RBF neural network, an LSTM long short-term memory recurrent neural network model, or a gated recurrent unit (GRU) neural network model, which are not limited herein.
In the embodiment of the present invention, the outputs of the multilayer neural network models and the convolutional neural network models are all semiconductor grades and include at least two different grades, such as first grade, second grade, and third grade, or good and bad, and the like.
Step S103: taking different inspection images of the training set sample as the input of convolutional neural network models corresponding to different image detection devices, and taking the corresponding grades of the different inspection images of the training set sample as the output;
different image detection devices obtain different detection images, and the detection images output by the different image detection devices cannot be trained and detected by adopting the same convolutional neural network model. But images detected by different image detection devices may have the same characterizing data. If the images are classified according to different image detection devices, after the images output by the different detection devices are used as the input of the corresponding convolutional network model for training and learning, the output results are crossed, and the detection precision is improved. For example: acquiring a test image of a training sample output by an atomic force microscope, preprocessing the test image, marking the grades (at least two grades) of the training sample to obtain a first image training sample set, and training a first convolution neural network model; acquiring a test image of a training sample output by a transmission electron microscope, preprocessing the test image, marking the grade of the training sample to obtain a second image training sample set, and training a second convolutional neural network model; acquiring a test image of a training sample output by a scanning electron microscope, preprocessing the test image, marking the grade of the training sample to obtain a third image training sample set, and training a third convolutional neural network model; and by parity of reasoning, the training of a plurality of convolutional neural network models is completed. Preprocessing of the test images of the training samples includes centering, normalization, and the like.
As shown in fig. 3, in this embodiment, the convolutional neural network includes convolutional layers, pooling layers, fully connected layers, and the like. The convolutional neural network model can be one or more of a LeNet-5 network, an AlexNet network, a VGG-16 network, a ResNet residual network, a DenseNet network, an SENet network, and a multi-scale multi-column convolutional neural network.
Specifically, a ResNet residual network model can be constructed, and a suitable optimizer, loss function, regularization method, and number of iterations are selected for feature extraction from the different inspection images acquired by the different image equipment, to obtain the corresponding convolutional neural network models. Using a ResNet residual convolutional neural network mitigates the problems of vanishing and exploding gradients during training, so a deeper network can be trained while good performance is maintained. The convolutional and pooling layers can better identify the key features of the image, and in the deep convolution process the skip connections established through residual learning greatly accelerate training; dropout regularization can also be used to avoid the overfitting caused by a data set that is too small.
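In the spirit of the ResNet model mentioned here, a compact residual block with dropout might look as follows; the channel count, kernel size, and dropout rate are illustrative assumptions rather than values specified by the patent.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """3x3 conv -> BN -> ReLU -> dropout -> 3x3 conv -> BN, with a skip connection."""
    def __init__(self, channels: int, p_drop: float = 0.2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.drop = nn.Dropout(p_drop)     # dropout regularization against overfitting
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.drop(out)
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)          # residual (skip) connection
```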
Step S104: training each neural network model based on various performance characterization parameters, different inspection images and marking grades of the training set sample until the error recognition rate of all the neural network models is not greater than the preset error recognition rate, and finishing single model training;
the method comprises the steps of obtaining various characterization parameters and different inspection images of a substrate or an epitaxial wafer which is inspected within a preset time period, constructing a sample data set, and marking the corresponding grades of various performance characterization parameters and the corresponding grades of different inspection images of each sample in the sample data set. And dividing the sample data set into a training set and a testing set according to a preset proportion, and training each neural network model by setting a false recognition rate. The division proportion and the error recognition rate of the training set and the test set can be set according to the actual detection precision requirement. For example, a sample data set containing 10000 marked sample data sets with various performance characterization parameter corresponding grades and inspection image corresponding grades is trained by 7000 sample data, 3000 sample data is used for testing, the error recognition rate is set to be 1% during testing, and each neural network is trained through the training set and the testing set, so that the error recognition rate of each neural network model after training is not more than 1%.
Step S105: setting the weight of each neural network model based on the type of the semiconductor product;
In embodiments of the present invention, the weights of the various neural network models may be set according to the semiconductor product type. The semiconductor product type at least includes an epitaxial wafer or a substrate of a laser, a MicroLED, a deep-ultraviolet LED, a power device, or a radio-frequency device; the weights of the neural network models differ for different semiconductor product types.
For example, when the sample to be tested is an epitaxial wafer or a substrate of the power device, the weight of the multilayer neural network model corresponding to the electrical characterization is greater than the weight of other neural network models; and when the sample to be detected is an epitaxial wafer or a substrate of the MicroLED, the weight of the multilayer neural network model corresponding to the optical representation is greater than the weight of other neural network models.
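One way to encode these product-type-dependent weights is a simple lookup table; the numbers below are placeholders chosen only to respect the stated ordering (the electrical model heaviest for power devices, the optical model heaviest for MicroLED) and are not values given by the patent. Weights for the per-device convolutional models would be added to the same table in practice.

```python
# Placeholder weights per semiconductor product type. Only the ordering matters
# here; the actual numbers are illustrative assumptions.
MODEL_WEIGHTS = {
    "power_device": {"electrical": 0.4, "optical": 0.2, "crystallographic": 0.2, "thermodynamic": 0.2},
    "micro_led":    {"electrical": 0.2, "optical": 0.4, "crystallographic": 0.2, "thermodynamic": 0.2},
}

def weights_for(product_type: str) -> dict:
    return MODEL_WEIGHTS[product_type]
```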
Step S106: and respectively detecting the multi-class performance representation and the multiple inspection images of the sample to be tested by utilizing each neural network model which completes the single model training, and fusing the outputs of all the neural network models according to the weight of each neural network model to obtain the final grade of the sample to be tested.
According to the embodiment of the invention, a heterogeneous neural network is established to detect the grade of the substrate or epitaxial wafer from the different characterization parameters and inspection images produced by different inspection equipment in the semiconductor manufacturing process, which reduces the manual workload of testers and improves detection efficiency and precision. The various performance characterization parameters of the substrate or epitaxial wafer are input into the multilayer neural network models corresponding to the various performance characterization modes, which output the grade of the substrate or epitaxial wafer; the inspection images of the substrate or epitaxial wafer output by the different image inspection equipment are input into the corresponding convolutional neural networks, which likewise output the grade of the substrate or epitaxial wafer. The weight of each neural network model is set according to the semiconductor product type, and the output results of all neural network models are fused according to the preset weights to obtain the final grade of the sample to be tested.
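The weighted fusion of step S106 could be sketched as follows; it assumes each model returns a score (or probability) per grade and takes a weighted average before selecting the final grade, which is one plausible fusion rule rather than the only one the patent admits.

```python
import numpy as np

def fuse_grades(model_outputs: dict, weights: dict) -> int:
    """Fuse per-model grade scores into a final grade.

    model_outputs maps a model name to a score vector over grades
    (e.g. softmax probabilities); weights maps the same names to the
    product-type-dependent weights set in step S105.
    """
    names = list(model_outputs)
    total_w = sum(weights[n] for n in names)
    fused = sum(weights[n] * np.asarray(model_outputs[n], dtype=float) for n in names) / total_w
    return int(np.argmax(fused))            # index of the final grade

# Example: three models voting over three grades.
final_grade = fuse_grades(
    {"electrical": [0.1, 0.8, 0.1], "optical": [0.2, 0.6, 0.2], "afm_cnn": [0.3, 0.5, 0.2]},
    {"electrical": 0.5, "optical": 0.3, "afm_cnn": 0.2},
)
```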
Based on the above embodiment, in order to further improve the detection precision for the sample to be tested, after single model training and weight setting are completed for each neural network model, the samples in the training set whose grades for every class of performance characterization parameters and every inspection image are identical may be screened out to generate a target training set in which each sample has a single consistent grade. Combined training is then performed a preset number of times on the neural network models that have completed single model training, based on this target training set; in each round, any model whose output is inconsistent with the output of most models is marked, and the internal parameters or weight of a model are adjusted according to the number of times it was marked over the preset number of training rounds.
Referring to fig. 4, fig. 4 is a flowchart illustrating a semiconductor inspection method according to a second embodiment of the present invention; compared with the first specific embodiment, the second specific embodiment further includes a step of performing combined training on all neural network models that complete the training of the single model before step S106, where the specific operation steps of the combined training are as follows:
step S401: classifying the training set samples according to the grades of the marks, screening the samples with the same grade corresponding to various performance characterization parameters and different test images in the training set, and generating a target training set with the same grade of each sample;
step S402: setting a combined training frequency threshold value, and performing combined training on each neural network model after single model training based on the target training set;
step S403: marking the neural network model with the highest ratio in the output result and the output of all the neural network models in each training;
step S404: when the training times reach the combined training time threshold, comparing the marked times of each neural network model with a first threshold;
step S405: if the number of times that the current neural network model is marked is smaller than the first threshold value, the internal parameters and the weight of the current neural network model are not adjusted;
step S406: if the number of times of the current neural network model is marked is larger than or equal to the first threshold, judging whether the number of times of the current neural network model is marked is smaller than a second threshold;
step S407: if the number of times that the current neural network model is marked is smaller than the second threshold value, adjusting the weight of the current neural network model;
step S408: if the number of times that the current neural network model is marked is greater than or equal to the second threshold value, adjusting internal parameters of the current neural network model according to the type of the current neural network model, and retraining the current neural network model;
The type of the current neural network model is obtained: if the current neural network model is a multilayer neural network, the adjusted parameters are any one or more of the number of input layer nodes, the number of hidden layers or hidden layer nodes, and the learning rate of the multilayer neural network; if the current neural network model is a convolutional neural network, the adjusted parameters are the size and the step length of the convolution kernel.
In particular, the learning rate of a multilayer neural network interacts with many other aspects of the optimization process, and the interaction may be nonlinear. In general, however, a smaller learning rate requires more training epochs, while a larger learning rate requires fewer. Furthermore, considering the noisy estimate of the error gradient, a smaller batch size is better matched to a smaller learning rate. The traditional default learning rate is 0.1 or 0.01, and a default of 0.01 is generally suitable for standard multilayer neural networks. Too large a learning rate makes the weight updates too large, possibly overshooting the minimum of the loss function and causing the parameter values to wander around the extremum, i.e. to diverge repeatedly on either side of the extreme point or to oscillate severely, so that the loss shows no tendency to decrease as the number of iterations increases. If the learning rate is set too small, the parameters are updated too slowly, a good descent direction cannot be found quickly, the model loss barely changes as the number of iterations increases, and more training resources are consumed to reach the optimal parameter values. For setting the learning rate, it should be as large as possible at the start of training and gradually reduced as the parameters approach their optimal values, so that the parameters can finally reach the optimum.
When adjusting the learning rate of the multilayer neural network model, the learning rate is decreased or increased while the rates of change of at least two other parameters are monitored; the learning rates at which the other parameters change abruptly give a minimum and a maximum learning rate, and this interval is taken as the range of a variable learning rate within which the learning rate is changed gradually. After the learning rate has been adjusted, training is performed again.
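The learning-rate range search sketched in this paragraph might be implemented roughly as below; the candidate grid, the definition of an abrupt change, and the monitored quantities are all assumptions, since the text does not fix them.

```python
import numpy as np

def find_learning_rate_range(evaluate, lrs=np.logspace(-5, 0, 20), jump_factor=3.0):
    """Scan candidate learning rates and keep the interval in which the
    monitored quantities (e.g. loss and weight-update magnitude) stay stable.

    `evaluate(lr)` is assumed to run a short training probe and return a tuple
    of monitored values; a value jumping by more than `jump_factor` relative to
    the previous learning rate is treated as an abrupt change.
    """
    prev, stable = None, []
    for lr in lrs:
        values = np.asarray(evaluate(lr), dtype=float)
        if prev is not None and np.any(values > jump_factor * np.abs(prev) + 1e-12):
            break                           # abrupt change: upper end of the range
        stable.append(lr)
        prev = values
    return (stable[0], stable[-1]) if stable else (lrs[0], lrs[0])
```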
Step S409: after the parameter adjustment and retraining of all the neural network models with the marked times larger than or equal to the second threshold value are completed, performing recombination training on all the neural network models until the marked times of all the neural network models are smaller than the second threshold value.
Taking 100,000 rounds of training as an example, samples whose grades for every class of performance characterization and every inspection image are all identical are used for combined training of the neural network models that have completed single model training. Because the grades of the samples in the target training set are all the same, the output results of all neural network models in each round should be identical; if a model's output is inconsistent with the outputs of most models, that model can be judged to have misrecognized the sample. In each round, the neural network model whose output grade differs from the grade accounting for the largest proportion of all the model outputs is marked. After the 100,000 rounds, the internal parameters or weight of each neural network model are adjusted according to the number of times it has been marked and its type.
In this embodiment, two thresholds are set to determine whether a marked model needs to be adjusted, and whether its internal parameters or its weight should be adjusted. When the number of times a model has been marked is smaller than the first threshold, its error is small and it need not be adjusted; when the number of times is greater than or equal to the first threshold, either the internal parameters or the weight of the model are adjusted according to how often it was inconsistent. When the number of times is greater than or equal to the first threshold but smaller than the second threshold, the model's error is acceptable and only its weight needs to be lowered. For example, for a given device, the models for the electrical, thermodynamic, and optical properties have weights T1, T2, and T3; if, over many comparisons of the model outputs, the optical-property model disagrees with the other models most often, the weight T3 of the optical-property model can be reduced.
When the number of times a model has been marked is greater than or equal to the second threshold, its error is large, and it is retrained after its internal parameters are adjusted according to its type. After all the neural network models marked at least as often as the second threshold have had their parameters adjusted and been retrained, all the neural network models undergo combined training again until every model is marked fewer times than the second threshold, after which the internal parameters of the models no longer need to be adjusted.
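The combined-training bookkeeping of the preceding paragraphs can be summarized in two small helpers: count how often each model disagrees with the majority grade, then map that count to one of the three actions (keep, lower the weight, or retune and retrain). The sketch below is illustrative; the predict function and the threshold values are assumptions.

```python
from collections import Counter

def combined_training_marks(models: dict, samples, predict) -> Counter:
    """Count, per model, how often its output differs from the majority grade.

    `predict(model, sample)` is assumed to return the grade predicted by one
    model for one target-training-set sample (all samples share one true grade).
    """
    marks = Counter()
    for sample in samples:
        outputs = {name: predict(m, sample) for name, m in models.items()}
        majority = Counter(outputs.values()).most_common(1)[0][0]
        for name, grade in outputs.items():
            if grade != majority:
                marks[name] += 1            # model disagreed with the majority
    return marks

def plan_adjustments(marks: Counter, model_names, first_threshold: int, second_threshold: int) -> dict:
    """Map each model to the action implied by the two thresholds."""
    plan = {}
    for name in model_names:
        n = marks.get(name, 0)
        if n < first_threshold:
            plan[name] = "keep"                # error acceptable, no change
        elif n < second_threshold:
            plan[name] = "lower_weight"        # acceptable error, reduce its weight
        else:
            plan[name] = "retune_and_retrain"  # adjust internal parameters, retrain
    return plan
```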
According to the semiconductor detection method provided by the embodiment of the invention, after single model training of each neural network model is completed, all the models are trained jointly, so that the error recognition rate of each neural network model can be reduced further without enlarging the training set. When all the neural network models that have completed single model training and combined training are used to detect a sample to be tested, the accuracy of the detection result is correspondingly higher, which helps testers efficiently grade samples across the various performance characterization parameters and the different inspection images.
Referring to fig. 5, fig. 5 is a block diagram of a semiconductor inspection apparatus according to an embodiment of the present invention; the specific device may include:
the grade marking module 100 is used for marking grades corresponding to various performance characterization parameters of the training set samples and grades corresponding to different inspection images;
the multilayer neural network model building module 200 is configured to use various performance characterization parameters of the training set sample as inputs of a multilayer neural network model corresponding to various performance characterization modes, and use corresponding grades of the various performance characterization parameters of the training set sample as outputs;
a convolutional neural network model building module 300, configured to use different inspection images of the training set sample as inputs of convolutional neural network models corresponding to different image detection devices, and use corresponding grades of the different inspection images of the training set sample as outputs;
the single model training module 400 is used for training each neural network model based on various performance characterization parameters, different inspection images and marking grades of the training set samples until the error recognition rate of all the neural network models is not greater than the preset error recognition rate, and completing single model training;
a weight setting module 500 for setting weights of the respective neural network models based on the semiconductor product type;
the detecting module 600 is configured to detect multiple types of performance characteristics and multiple inspection images of a sample to be detected by using each neural network model that completes single model training, and fuse outputs of all neural network models according to the weight of each neural network model to obtain a final grade of the sample to be detected.
The semiconductor detection apparatus of this embodiment is used to implement the aforementioned semiconductor detection method, and therefore specific embodiments of the semiconductor detection apparatus can be seen in the foregoing embodiment parts of the semiconductor detection method, for example, the level marking module 100, the multilayer neural network model building module 200, the convolutional neural network model building module 300, the single model training module 400, the weight setting module 500, and the detection module 600 are respectively used to implement steps S101, S102, S103, S104, S105, and S106 in the aforementioned semiconductor detection method, so that specific embodiments thereof may refer to descriptions of corresponding respective embodiment parts, and are not described herein again.
The specific embodiment of the present invention also provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the semiconductor inspection method.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed in the embodiment corresponds to the method disclosed in the embodiment, so that the description is simple, and the relevant points can be referred to the description of the method part.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The semiconductor inspection method, the semiconductor inspection device, and the computer-readable storage medium according to the present invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. A semiconductor inspection method, comprising:
marking the corresponding grades of various performance characterization parameters of the training set sample and the corresponding grades of different inspection images;
taking various performance characterization parameters of the training set sample as the input of the multilayer neural network model corresponding to various performance characterization modes, and taking the corresponding grades of the various performance characterization parameters of the training set sample as the output;
taking different inspection images of the training set sample as the input of convolutional neural network models corresponding to different image detection devices, and taking the corresponding grades of the different inspection images of the training set sample as the output;
training each neural network model based on various performance characterization parameters, different inspection images and marking grades of the training set sample until the error recognition rate of all the neural network models is not greater than the preset error recognition rate, and finishing single model training;
setting the weight of each neural network model based on the type of the semiconductor product;
and respectively detecting the multi-class performance representation and the multiple inspection images of the sample to be tested by utilizing each neural network model which completes the single model training, and fusing the outputs of all the neural network models according to the weight of each neural network model to obtain the final grade of the sample to be tested.
2. The semiconductor test method of claim 1, wherein before testing the sample to be tested using the neural network models that have been trained using the single model, the method comprises:
classifying the training set samples according to the grades of the marks, screening the samples with the same grade corresponding to various performance characterization parameters and different test images in the training set, and generating a target training set with the same grade of each sample;
setting a combined training frequency threshold value, and performing combined training on each neural network model after single model training based on the target training set;
marking, in each round of training, any neural network model whose output result differs from the grade that accounts for the highest proportion of all the neural network models' outputs;
when the number of training times reaches the combined training frequency threshold, comparing the number of times each neural network model has been marked with a first threshold value;
if the number of times that the current neural network model is marked is smaller than the first threshold value, the internal parameters and the weight of the current neural network model are not adjusted;
and if the number of times that the current neural network model is marked is greater than or equal to the first threshold value, adjusting the internal parameters or the weight of the current neural network model.
3. The semiconductor test method of claim 2, wherein the adjusting the internal parameters or weights of the current neural network model if the current neural network model is marked more than or equal to the first threshold comprises:
judging whether the number of times that the current neural network model is marked is smaller than a second threshold value;
if the number of times that the current neural network model is marked is smaller than the second threshold value, adjusting the weight of the current neural network model;
if the number of times that the current neural network model is marked is greater than or equal to the second threshold value, adjusting internal parameters of the current neural network model according to the type of the current neural network model, and retraining the current neural network model;
after the parameter adjustment and retraining of all the neural network models with the marked times larger than or equal to the second threshold value are completed, performing recombination training on all the neural network models until the marked times of all the neural network models are smaller than the second threshold value.
4. The method of claim 3, wherein if the current neural network model is marked more than or equal to the second threshold, adjusting internal parameters of the current neural network model according to the type of the current neural network model comprises:
obtaining the type of the current neural network model;
if the current neural network model is a multilayer neural network, the adjusted parameters are any one or more of the number of input layer nodes, the number of hidden layers or hidden layer nodes, and the learning rate of the multilayer neural network;
if the current neural network model is a convolutional neural network, the adjusted parameters are the size and the step length of a convolutional kernel.
5. The semiconductor inspection method of claim 1, wherein the types of performance characterization parameters include one or more of crystallographic, electrical, optical, and thermodynamic characterization parameters of the substrate or epitaxial wafer;
the image detection equipment comprises one or more of an optical microscope, an atomic force microscope, a transmission electron microscope, a scanning tunneling microscope, a scanning electron microscope and automatic optical detection equipment.
6. The semiconductor inspection method according to claim 1, wherein the semiconductor product type includes at least an epitaxial wafer or a substrate of a laser, a micro LED, a deep ultraviolet LED, a power device, a radio frequency device; the weights of the neural network models corresponding to different semiconductor product types are different.
7. The semiconductor detection method of claim 6, wherein setting the weight of each neural network model based on the semiconductor product type at least comprises:
when the sample to be tested is an epitaxial wafer or a substrate of a power device, setting the weight of the multilayer neural network model corresponding to the electrical characterization to be greater than the weights of the other neural network models; and
when the sample to be tested is an epitaxial wafer or a substrate of a micro LED, setting the weight of the multilayer neural network model corresponding to the optical characterization to be greater than the weights of the other neural network models.
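Claim 7 only fixes an ordering constraint (the electrically or optically oriented model receives the largest weight for the respective product type); the numeric values in the sketch below are illustrative assumptions.

```python
def set_weights(product_type):
    """Return fusion weights per model for a given semiconductor product type."""
    if product_type in ("power_device_epitaxial_wafer", "power_device_substrate"):
        # Power devices: electrical characterization model weighted highest.
        return {"electrical": 0.40, "optical": 0.20, "crystallographic": 0.15,
                "thermodynamic": 0.10, "image_cnn": 0.15}
    if product_type in ("micro_led_epitaxial_wafer", "micro_led_substrate"):
        # Micro LEDs: optical characterization model weighted highest.
        return {"optical": 0.40, "electrical": 0.20, "crystallographic": 0.15,
                "thermodynamic": 0.10, "image_cnn": 0.15}
    # Other product types (laser, deep-UV LED, RF device) would carry their
    # own profiles; a uniform split is used here only as a placeholder.
    return {"electrical": 0.20, "optical": 0.20, "crystallographic": 0.20,
            "thermodynamic": 0.20, "image_cnn": 0.20}
```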
8. The semiconductor detection method according to claim 1, wherein the multilayer neural network model is one or more of a fully-connected neural network (DNN) algorithm, a recurrent neural network (RNN) algorithm, a feedforward neural network (FNN) algorithm, or the like; and
the convolutional neural network model is one or more of a LeNet-5 network, an AlexNet network, a VGG-16 network, a ResNet residual network, a DenseNet network, an SENet network, and a multi-scale multi-column convolutional neural network.
9. A semiconductor detection apparatus, comprising:
a grade marking module, used for marking the grades corresponding to the various performance characterization parameters of the training set samples and the grades corresponding to the different inspection images;
a multilayer neural network model building module, used for taking the various performance characterization parameters of the training set samples as the input of the multilayer neural network models corresponding to the various performance characterization modes, and taking the grades corresponding to the various performance characterization parameters of the training set samples as the output;
a convolutional neural network model building module, used for taking the different inspection images of the training set samples as the input of the convolutional neural network models corresponding to the different image detection devices, and taking the grades corresponding to the different inspection images of the training set samples as the output;
a single-model training module, used for training each neural network model based on the various performance characterization parameters, the different inspection images, and the marked grades of the training set samples until the error recognition rate of every neural network model is not greater than a preset error recognition rate, thereby completing single-model training;
a weight setting module, used for setting the weight of each neural network model based on the semiconductor product type; and
a detection module, used for detecting the various performance characterization parameters and the multiple inspection images of the sample to be tested respectively with each neural network model that has completed single-model training, and fusing the outputs of all the neural network models according to the weight of each neural network model to obtain the final grade of the sample to be tested.
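The detection module's final fusion step (claim 9) reduces to a weighted combination of the per-model grades; the normalization and the rounding to the nearest integer grade in the sketch below are assumptions.

```python
def fuse_grades(model_grades, weights):
    """Fuse per-model grades into a final grade using per-model weights."""
    total_weight = sum(weights[name] for name in model_grades)
    fused = sum(weights[name] * grade for name, grade in model_grades.items())
    return round(fused / total_weight)

# Hypothetical usage with grades on a 1-5 scale:
# fuse_grades({"electrical": 2, "optical": 3, "image_cnn": 2},
#             {"electrical": 0.4, "optical": 0.3, "image_cnn": 0.3})
```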
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the semiconductor detection method according to any one of claims 1 to 8.
CN202210544301.5A 2022-05-19 2022-05-19 Semiconductor detection method, device and computer readable storage medium Active CN114648528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210544301.5A CN114648528B (en) 2022-05-19 2022-05-19 Semiconductor detection method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN114648528A true CN114648528A (en) 2022-06-21
CN114648528B CN114648528B (en) 2022-09-23

Family

ID=81997144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210544301.5A Active CN114648528B (en) 2022-05-19 2022-05-19 Semiconductor detection method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114648528B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115481570A (en) * 2022-09-22 2022-12-16 华南理工大学 DTCO modeling method based on residual error network
CN115932530A (en) * 2023-01-09 2023-04-07 东莞市兆恒机械有限公司 Method for calibrating semiconductor detection equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150112182A1 (en) * 2013-10-17 2015-04-23 Siemens Aktiengesellschaft Method and System for Machine Learning Based Assessment of Fractional Flow Reserve
US20190287238A1 (en) * 2018-03-14 2019-09-19 Kla-Tencor Corporation Defect detection, classification, and process window control using scanning electron microscope metrology
CN111340800A (en) * 2020-03-18 2020-06-26 联影智能医疗科技(北京)有限公司 Image detection method, computer device, and storage medium
CN111512324A (en) * 2018-02-07 2020-08-07 应用材料以色列公司 Method and system for deep learning-based inspection of semiconductor samples

Also Published As

Publication number Publication date
CN114648528B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
CN114648528B (en) Semiconductor detection method, device and computer readable storage medium
CN111052331B (en) System and method for identifying disturbances in detected defects and defects of interest
CN111837225B (en) Defect detection, classification and process window control using scanning electron microscope metrology
JP4253522B2 (en) Defect classification method and apparatus
US11686689B2 (en) Automatic optimization of an examination recipe
CN108463874A (en) Sample control based on image
US11790515B2 (en) Detecting defects in semiconductor specimens using weak labeling
CN114757297A (en) Multi-parameterization semiconductor detection method and device and readable storage medium
CN113191399B (en) Method for improving yield of semiconductor chips based on machine learning classifier
JP2022027473A (en) Generation of training data usable for inspection of semiconductor sample
JP2024500887A (en) Prediction of electrical properties of semiconductor samples
CN115482227B (en) Machine vision self-adaptive imaging environment adjusting method
López de la Rosa et al. Detection of unknown defects in semiconductor materials from a hybrid deep and machine learning approach
WO2022059135A1 (en) Error cause estimation device and estimation method
CN114088027A (en) Transformer winding deformation identification method based on LSTM neural network
Liu et al. A deep learning approach to defect detection in additive manufacturing of titanium alloys
US12007335B2 (en) Automatic optimization of an examination recipe
US20230128610A1 (en) Continuous Machine Learning Model Training for Semiconductor Manufacturing
CN118115038A (en) LED chip defect detection method, device, equipment and storage medium
CN117909886B (en) Sawtooth cotton grade classification method and system based on optimized random forest model
US20240062356A1 (en) Data-driven prediction and identification of failure modes based on wafer-level analysis and root cause analysis for semiconductor processing
TWI682294B (en) Soldering process parameters suggestion method
CN114595740A (en) Ultra-high-speed ray image identification method based on photoelectric detector
Okuda et al. High throughput CD-SEM metrology using image denoising based on deep learning
CN113807490A (en) Data linear correlation judgment method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant