CN114511516A - Micro LED defect detection method based on unsupervised learning - Google Patents

Micro LED defect detection method based on unsupervised learning

Info

Publication number
CN114511516A
Authority
CN
China
Prior art keywords
sample
image
encoder
residual
model
Prior art date
Legal status
Granted
Application number
CN202210048096.3A
Other languages
Chinese (zh)
Other versions
CN114511516B (en)
Inventor
周佳
潘彤
郭震撼
曹晖
袁廷翼
王杨杨
夏天
鲍涛
Current Assignee
Lijing Microelectronics Technology Jiangsu Co ltd
Original Assignee
Lijing Microelectronics Technology Jiangsu Co ltd
Priority date
Application filed by Lijing Microelectronics Technology Jiangsu Co ltd
Priority to CN202210048096.3A
Publication of CN114511516A
Application granted
Publication of CN114511516B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0006Industrial image inspection using a design-rule based approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a Micro LED defect detection method based on unsupervised learning, relating to the field of defect detection. A residual convolutional auto-encoder model, comprising an encoder formed from residual convolution modules and a decoder formed from residual transposed convolution modules, is pre-trained on normal sample images and abnormal sample images that have undergone image preprocessing. The output of the encoder of the pre-trained model is mapped to a latent space and fitted to a hypersphere, the latent space is optimized with an objective function, and a defect detection model is obtained by training. The method yields a defect detection model with stronger robustness, enabling automatic defect detection of Micro LED chips.

Description

Micro LED defect detection method based on unsupervised learning
Technical Field
The invention relates to the field of defect detection, in particular to a micro LED defect detection method based on unsupervised learning.
Background
Light-emitting diodes (LEDs) are widely used in fields such as displays, vehicles, and medical equipment. Demand for LEDs keeps increasing because of their high efficiency, low power consumption, long lifespan, and environmental friendliness. However, manufacturing defects undermine these advantages and cause significant losses of time and cost to the manufacturer; to compensate, more accurate and faster inspection of LEDs is required.
Common defect detection methods for current LED chips mainly include Automatic Optical Inspection (AOI), Photoluminescence (PL) inspection, and Electroluminescence (EL) inspection. Among them, AOI is a non-contact visual inspection that can detect surface defects on a wafer or chip while avoiding the damage caused by contact inspection. Since AOI is faster than PL and EL inspection, its accuracy affects the time taken by the subsequent inspections (e.g., PL and EL inspection).
At present, neural networks show high performance in computer vision, so visual inspection methods based on supervised learning with Deep Neural Networks (DNNs) have been proposed for more accurate inspection. However, approaches based on supervised learning have drawbacks. One is that they require a data set with a ground-truth label for each chip, which demands laborious work. In addition, in actual industrial processes defective chips appear rarely compared with normal chips, and this imbalance between positive and negative samples hinders effective training of a DNN model. Although the imbalance can be mitigated by deliberately creating defects or by data augmentation, it is difficult to define and create all possible defect patterns, which remains hard to solve in a practical industrial process. These disadvantages make current supervised visual inspection methods difficult to realize and insufficiently accurate.
Disclosure of Invention
In view of the above problems and technical requirements, the invention provides a Micro LED defect detection method based on unsupervised learning. The technical scheme of the invention is as follows:
a method for detecting defects of a MicroLED based on unsupervised learning comprises the following steps:
acquiring a sample data set, wherein the sample data set comprises a normal sample image of a normal MicroLED sample chip and an abnormal sample image of an abnormal MicroLED sample chip with defects;
carrying out image preprocessing on a sample image in the sample data set;
performing model pre-training on a residual convolutional auto-encoder model by using the sample images subjected to image preprocessing, wherein the residual convolutional auto-encoder model comprises an encoder formed based on residual convolution modules and a decoder formed based on residual transposed convolution modules;
mapping the output of the encoder of the pre-trained residual convolutional auto-encoder model to a latent space and fitting it to a hypersphere, wherein the hypersphere is used for classifying normal sample images and abnormal sample images; optimizing the latent space by using an objective function; and training to obtain a defect detection model;
and acquiring an image to be detected of the micro LED sample chip to be detected, inputting the image to be detected into the defect detection model, and completing the defect detection of the micro LED sample chip to be detected.
A further technical scheme is that the objective function is used to minimize the volume of the hypersphere, minimize the similarity of latent space vectors corresponding to different labels, and maximize the similarity of latent space vectors corresponding to the same label, wherein the labels comprise a normal label corresponding to normal sample images and an abnormal label corresponding to abnormal sample images.
A further technical scheme is that the objective function is

$$\min_{W}\ \frac{1}{N}\sum_{i=1}^{N}\left\|\phi(I_{i};W)-c\right\|^{2}+\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\Big(\mathbb{1}[y_{i}=y_{j}]\big(1-\mathrm{Sim}(z_{i},z_{j})\big)+\mathbb{1}[y_{i}\neq y_{j}]\max\big(0,\mathrm{Sim}(z_{i},z_{j})-\alpha\big)\Big)$$

where N denotes the number of frames output by the encoder of the pre-trained residual convolutional auto-encoder model, I_i is the i-th frame output by that encoder, φ is the encoder, W is the weight parameter of the encoder, and c is the center of the hypersphere in the latent space; z_i is the latent space vector obtained by mapping the i-th frame output to the latent space, z_j is the latent space vector obtained by mapping the j-th frame output to the latent space, Sim(z_i, z_j) is the similarity between the latent space vectors z_i and z_j, α is the margin constant, y_i denotes the label of the i-th frame output, and y_j denotes the label of the j-th frame output.
A further technical scheme is that the residual convolution module comprises a residual unit, a convolution unit, and a max-pooling unit, with the ReLU function as the activation function; the convolution unit and the residual unit each process the input image, the two results are linearly superimposed and output through the max-pooling unit, and the residual unit keeps the number and values of the channels of the input image unchanged.
A further technical scheme is that model pre-training is performed on the residual convolutional auto-encoder model using the sample images subjected to image preprocessing, with the mean square error as the loss function.
A further technical scheme is that performing image preprocessing on the sample images in the sample data set comprises the following steps:
performing morphological processing on a sample image in the sample data set;
and performing frequency domain filtering processing on the sample image after the morphological processing by using a Gaussian difference filter for realizing band-pass filtering to finish image preprocessing.
A further technical scheme is that performing morphological processing on the sample images in the sample data set comprises, for each sample image f(x, y):
performing an opening operation on the sample image f(x, y) with a structuring element b of preset size to obtain a background image h(x, y), where (x, y) denotes pixel coordinates;
determining the morphologically processed sample image g(x, y) = f(x, y) − h(x, y).
A further technical scheme is that performing the opening operation on the sample image f(x, y) with a structuring element b of preset size to obtain the background image h(x, y) is:

$$h(x,y)=(f\circ b)(x,y)=\big((f\ominus b)\oplus b\big)(x,y)$$

where f ⊖ b denotes the grayscale erosion of the sample image f(x, y) by the structuring element b:

$$(f\ominus b)(x,y)=\min_{(x',y')\in D_{b}}\big\{f(x+x',y+y')-b(x',y')\big\}$$

and ⊕ denotes the grayscale dilation by the structuring element b of the eroded result k(x, y) = (f ⊖ b)(x, y):

$$(k\oplus b)(x,y)=\max_{(x',y')\in D_{b}}\big\{k(x-x',y-y')+b(x',y')\big\}$$

where D_b is the domain of definition of the structuring element b and (x', y') are pixel coordinates.
A further technical scheme is that the Gaussian difference filter is a band-pass filter formed by the difference of two Gaussian filter functions with different widths, written as:

$$\mathrm{DOG}(x,y,\sigma_{1},\sigma_{2})=G_{\sigma_{1}}(x,y)-G_{\sigma_{2}}(x,y)$$

where (x, y) denotes pixel coordinates, G_{σ1}(x, y) is a Gaussian filter function with standard deviation σ1 as its parameter, G_{σ2}(x, y) is a Gaussian filter function with standard deviation σ2 as its parameter, σ2 = Kσ1, and K is a coefficient.
A further technical scheme is that the Gaussian difference filter is used to filter out the low-frequency components corresponding to the uneven distribution of the illumination light source in the morphologically processed sample image while retaining the high-frequency components corresponding to defect points; the defect-edge image g'(x, y, σ1, σ2) is the result of convolving the Gaussian difference filter DOG(x, y, σ1, σ2) with the morphologically processed sample image g(x, y):

$$g'(x,y,\sigma_{1},\sigma_{2})=\mathrm{DOG}(x,y,\sigma_{1},\sigma_{2})*g(x,y)$$

where (x, y) denotes pixel coordinates, and σ1 and σ2 are the two different standard deviations of the Gaussian difference filter.
The beneficial technical effects of the invention are as follows:
The invention discloses a Micro LED defect detection method based on unsupervised learning that uses a residual convolutional auto-encoder model for effective representation learning, yielding finer-grained and semantically richer information representations for latent space optimization. This suits the characteristics of Micro LED defects, namely their small size and the large imbalance between positive and negative samples, so that a more robust Micro LED defect detection model is obtained and automatic defect detection of Micro LED chips is realized. In addition, the method improves accuracy by preprocessing images with an opening operation and frequency-domain filtering.
Drawings
Fig. 1 is a flowchart of the Micro LED defect detection method based on unsupervised learning of the present application.
Fig. 2 is a model structure diagram of the residual convolutional auto-encoder model.
Fig. 3 is a block diagram of the residual convolution module in the residual convolutional auto-encoder model.
Fig. 4 is a block diagram of the residual transposed convolution module in the residual convolutional auto-encoder model.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings.
The application discloses a Micro LED defect detection method based on unsupervised learning. With reference to the flowchart shown in fig. 1, the method includes the following two parts, a model training part and a model application part, which are introduced in turn:
the model training part is used for training to obtain a defect detection model.
Includes the following steps, please refer to fig. 1:
step 102, a sample data set is obtained, wherein the sample data set comprises a normal sample image of a normal MicroLED sample chip and an abnormal sample image of an abnormal MicroLED sample chip with defects.
In practical implementation, reflecting the industrial reality that defective chips occur rarely, the number of normal sample images in the sample data set is generally much larger than the number of abnormal sample images. For example, in one example, the sample data set includes 4629 sample images, of which only 2% are abnormal sample images and the remaining 98% are normal sample images.
Step 104, image preprocessing is performed on the sample images in the sample data set.
The basic idea of image preprocessing of a sample image is to reduce or even eliminate the effect of non-uniform illumination on recognition of the sample image. In one embodiment, the image preprocessing of a sample image comprises: first performing morphological processing on the sample image, and then performing frequency-domain filtering on the morphologically processed sample image with a Gaussian difference filter that realizes band-pass filtering, which completes the image preprocessing. In practical applications, grayscale conversion and edge detection are usually also performed before the morphological processing; these are not described in detail here. Specifically, for each sample image f(x, y):
(1) Morphological processing.
Background elimination is a spatial-domain method for removing uneven illumination. The basic idea is that, in order to extract the target from the background for recognition, the top-hat transformation for background extraction combines image subtraction with the opening operation and is suitable for bright objects on a dark background.
First, an opening operation is performed on the sample image f(x, y) using a structuring element b of preset size to obtain the background image h(x, y):

$$h(x,y)=(f\circ b)(x,y)=\big((f\ominus b)\oplus b\big)(x,y)$$

where f ⊖ b denotes the grayscale erosion of the sample image f(x, y) by the structuring element b:

$$(f\ominus b)(x,y)=\min_{(x',y')\in D_{b}}\big\{f(x+x',y+y')-b(x',y')\big\}$$

where D_b is the domain of definition of the structuring element b, (x, y) denotes pixel coordinates, and (x', y') denotes pixel coordinates within the structuring element. Grayscale erosion is a local minimum operator, where the minimum is taken over a neighborhood of pixels determined by the shape of D_b.

⊕ denotes the grayscale dilation by the structuring element b of the eroded result k(x, y) = (f ⊖ b)(x, y):

$$(k\oplus b)(x,y)=\max_{(x',y')\in D_{b}}\big\{k(x-x',y-y')+b(x',y')\big\}$$

Likewise, D_b is the domain of definition of the structuring element b, (x, y) denotes pixel coordinates, and (x', y') denotes pixel coordinates within the structuring element. Grayscale dilation is a local maximum operator, where the maximum is taken over a neighborhood of pixels determined by the shape of D_b.
When the opening operation is performed on the sample image f(x, y) with the structuring element b, a grayscale erosion of the original sample image f(x, y) by b is performed first, and the eroded result k(x, y) is then dilated by b. The structuring element b here is large enough that it does not fit within any object of interest. The initial grayscale erosion removes image details while lowering the overall gray level, but the subsequent grayscale dilation restores the overall brightness of the image. The opening operation therefore removes bright details smaller than the structuring element b while keeping all gray levels and bright-region features larger than b relatively unchanged, so the overall gray level of the image remains essentially unchanged.
After obtaining the background image h (x, y), the original sample image f (x, y) and the background image h (x, y) are subjected to image subtraction to obtain a morphologically processed sample image g (x, y) ═ f (x, y) -h (x, y).
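As a concrete sketch of the opening-plus-subtraction described above, the top-hat transform can be implemented in plain NumPy with a flat (b = 0) structuring element; the square 3 × 3 element, edge padding, and the toy image in the usage note are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def grey_erode(f, size):
    """Grayscale erosion with a flat square structuring element (local minimum)."""
    pad = size // 2
    fp = np.pad(f.astype(float), pad, mode="edge")
    out = np.full(f.shape, np.inf)
    for dx in range(size):
        for dy in range(size):
            out = np.minimum(out, fp[dx:dx + f.shape[0], dy:dy + f.shape[1]])
    return out

def grey_dilate(f, size):
    """Grayscale dilation with a flat square structuring element (local maximum)."""
    pad = size // 2
    fp = np.pad(f.astype(float), pad, mode="edge")
    out = np.full(f.shape, -np.inf)
    for dx in range(size):
        for dy in range(size):
            out = np.maximum(out, fp[dx:dx + f.shape[0], dy:dy + f.shape[1]])
    return out

def top_hat(f, size=3):
    """g(x, y) = f(x, y) - h(x, y), where h is the opening (erosion then dilation)."""
    h = grey_dilate(grey_erode(f, size), size)
    return f - h
```

On a flat background with a single bright pixel, the opening removes the bright detail (it is smaller than the structuring element), so the subtraction keeps the defect and discards the background.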
(2) Frequency-domain filtering.
The Gaussian difference filter used to perform frequency-domain filtering on the morphologically processed sample image g(x, y) is a band-pass filter formed by the difference of two Gaussian filter functions with different widths, written as:

$$\mathrm{DOG}(x,y,\sigma_{1},\sigma_{2})=G_{\sigma_{1}}(x,y)-G_{\sigma_{2}}(x,y)$$

where (x, y) denotes pixel coordinates, G_{σ1}(x, y) is a Gaussian filter function with standard deviation σ1 as its parameter, and G_{σ2}(x, y) is a Gaussian filter function with standard deviation σ2 as its parameter.
The Gaussian function G_σ(x, y), with the standard deviation σ of the Gaussian as its parameter, is a circularly symmetric function whose two-dimensional expression is:

$$G_{\sigma}(x,y)=\frac{1}{2\pi\sigma^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)$$

The degree of smoothing of the Gaussian function G_σ(x, y) is determined by the value of σ: when σ is large, the smoothing effect is strong, details are heavily damaged, edge localization accuracy is low, and the low-pass filtering effect is obvious; conversely, when σ is small, edge localization accuracy is high and edge details are prominent. The Gaussian difference filter DOG(x, y, σ1, σ2) is a band-pass filter whose performance is determined by the standard deviations σ1 and σ2 of its two Gaussian functions. When σ1 < σ2, the Gaussian difference filter DOG(x, y, σ1, σ2) subtracts the filtered image with the lower passband from the filtered image with the higher passband, thereby enhancing the passband and suppressing background noise. In one embodiment, σ2 = Kσ1, where K is a coefficient determined by numerical simulation; when K takes 4 or 5, the filtering effect of the Gaussian difference filter DOG(x, y, σ1, σ2) is best.
The Gaussian difference filter DOG(x, y, σ1, σ2) is used to filter out the low-frequency components corresponding to the uneven distribution of the illumination light source in the morphologically processed sample image g(x, y) while retaining the high-frequency components corresponding to defects, thereby realizing light-field correction of the image, i.e., correction of illumination unevenness. Since this linear smoothing is mathematically a convolution, the defect-edge image g'(x, y, σ1, σ2) is the result of convolving the Gaussian difference filter DOG(x, y, σ1, σ2) with the morphologically processed sample image g(x, y):

$$g'(x,y,\sigma_{1},\sigma_{2})=\mathrm{DOG}(x,y,\sigma_{1},\sigma_{2})*g(x,y)$$

In general, when the Gaussian difference filter DOG(x, y, σ1, σ2) performs the convolution in the spatial domain, the amplitude of the DC component at low frequencies is also reduced, i.e., the overall image brightness decreases.
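The band-pass behaviour of the Gaussian difference filter can be sketched as follows; the kernel radius, edge padding, and the direct (slow) spatial convolution are illustrative assumptions, while K = 4 follows the coefficient range suggested above. Because each normalized Gaussian kernel sums to 1, the DoG kernel sums to zero, so a constant (DC) region maps to zero, which is exactly the removal of the low-frequency illumination component.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """2-D circularly symmetric Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def dog_filter(img, sigma1, K=4):
    """Convolve img with DOG(x, y, sigma1, sigma2) = G_sigma1 - G_sigma2, sigma2 = K * sigma1."""
    sigma2 = K * sigma1
    radius = int(np.ceil(3 * sigma2))
    kernel = gaussian_kernel(sigma1, radius) - gaussian_kernel(sigma2, radius)
    H, W = img.shape
    ip = np.pad(img.astype(float), radius, mode="edge")
    out = np.empty((H, W))
    for i in range(H):          # direct spatial-domain convolution; fine for small images,
        for j in range(W):      # the symmetric kernel makes correlation == convolution
            out[i, j] = np.sum(ip[i:i + 2 * radius + 1, j:j + 2 * radius + 1] * kernel)
    return out
```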
Step 106, model pre-training is performed on the residual convolutional auto-encoder model using the sample images after image preprocessing.
Referring to fig. 2, the residual convolutional auto-encoder model includes an encoder formed from residual convolution modules and a decoder formed from residual transposed convolution modules. In one embodiment, as shown in fig. 2, the encoder includes three residual convolution modules whose feature sizes successively decrease from input to output, and the decoder includes three residual transposed convolution modules whose feature sizes successively increase from input to output.
Referring to fig. 3, the residual convolution module includes a residual unit, a convolution unit, and a max-pooling unit, with the ReLU function as the activation function; the convolution unit and the residual unit each process the input image, the two results are linearly superimposed and output through the max-pooling unit, and the residual unit keeps the number and values of the channels of the input image unchanged. The residual unit in the residual convolution module applies convolution (Conv), batch normalization (BN), and the ReLU activation function to the input image in sequence. The convolution unit in the residual convolution module applies two rounds of convolution, batch normalization, and ReLU activation in sequence. The convolutions use a standard 3 × 3 kernel. An auto-encoder with lower reconstruction error has more discriminative features in the latent space. Furthermore, the residual connections in the residual convolutional auto-encoder model affect recovery quality; since the model in this application consists of an auto-encoder plus residual connections, the network has a more distinguishable representation than a CAE (convolutional auto-encoder).
Correspondingly, referring to fig. 4, the residual transposed convolution module is the transposed structure of the residual convolution module: it also includes a residual unit, a convolution unit, and a max-pooling unit, with the ReLU function as the activation function; the convolution unit and the residual unit each process the input image, the two results are linearly superimposed and output through the max-pooling unit, and the residual unit keeps the number and values of the channels of the input image unchanged. The residual unit in the residual transposed convolution module applies deconvolution, batch normalization, and the activation function to the input image in sequence. The convolution unit in the residual transposed convolution module applies two rounds of deconvolution, batch normalization, and activation in sequence.
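A minimal PyTorch sketch of one encoder-side residual convolution module as described above. The channel-preserving design (suggested by the statement that the residual unit keeps the channel count unchanged) and the 2 × 2 pooling are assumptions, since the patent does not give exact layer widths.

```python
import torch
import torch.nn as nn

class ResidualConvModule(nn.Module):
    """Encoder block: a two-round conv unit and a one-round residual unit process the
    input in parallel, their outputs are linearly superimposed, and the sum is passed
    through max pooling. Treating the whole block as channel-preserving is an
    assumption; the 3x3 kernel follows the text."""
    def __init__(self, channels):
        super().__init__()
        self.conv_unit = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
        )
        self.res_unit = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        # linear superposition of the two branches, then spatial down-sampling
        return self.pool(self.conv_unit(x) + self.res_unit(x))
```

Stacking three such modules halves the feature size three times, matching the "successively decreasing feature sizes" of the encoder; the decoder would mirror this with transposed convolutions.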
Model pre-training is performed on the residual convolutional auto-encoder model using the sample images after image preprocessing, with the mean square error (L2 loss) as the loss function:

$$L_{2}=\frac{1}{Q}\sum_{q=1}^{Q}\left(\hat{Y}_{q}-Y_{q}\right)^{2}$$

where Ŷ_q is the predicted value for sample image q, Y_q is the target value for sample image q, and Q is the total number of sample images. The residual convolutional auto-encoder model is trained for 100 epochs using an Adam optimizer with weight decay λ = 5e-4 and learning rate η = 1e-4. At epoch 50, the learning rate is decayed by a factor of 0.1.
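The pre-training schedule above (Adam, weight decay λ = 5e-4, learning rate η = 1e-4, one ×0.1 decay at epoch 50) maps directly onto a standard PyTorch optimizer and scheduler; the dummy parameter below stands in for the auto-encoder weights, and the loop body (forward pass, MSE loss, backward pass) is elided.

```python
import torch

param = torch.nn.Parameter(torch.zeros(1))          # stand-in for the model weights
optimizer = torch.optim.Adam([param], lr=1e-4, weight_decay=5e-4)
# MultiStepLR applies the 0.1 decay exactly once, at epoch 50
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50], gamma=0.1)

for epoch in range(100):
    # ... forward pass, mean-square-error loss, loss.backward(), optimizer.step() ...
    scheduler.step()
```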
Step 108, the output of the encoder of the pre-trained residual convolutional auto-encoder model is mapped to a latent space and fitted to a hypersphere, where the hypersphere is used for classifying normal sample images and abnormal sample images; the latent space is optimized with an objective function, and a defect detection model is obtained by training.
The boundary of the optimized hypersphere is shaped by the encoder of the pre-trained residual convolutional auto-encoder model, and the optimized hypersphere can classify anomalies effectively because the pre-trained model has a more distinguishable representation than a CAE (convolutional auto-encoder). For latent space optimization, because Micro LED defects are small and the difference between positive and negative samples is large, finer-grained and semantically richer information representations are needed. Thus, in one embodiment, the objective function is used to minimize the volume of the hypersphere, minimize the similarity of the latent space vectors corresponding to different labels, and maximize the similarity of the latent space vectors corresponding to the same label, the labels comprising the normal label corresponding to normal sample images and the abnormal label corresponding to abnormal sample images. Minimizing the similarity of latent space vectors with different labels while maximizing the similarity of those with the same label adds a contrastive loss, which makes the meaning of the latent space vectors more definite. In one embodiment, the objective function is:
$$\min_{W}\ \frac{1}{N}\sum_{i=1}^{N}\left\|\phi(I_{i};W)-c\right\|^{2}+\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\Big(\mathbb{1}[y_{i}=y_{j}]\big(1-\mathrm{Sim}(z_{i},z_{j})\big)+\mathbb{1}[y_{i}\neq y_{j}]\max\big(0,\mathrm{Sim}(z_{i},z_{j})-\alpha\big)\Big)$$

where N denotes the number of frames output by the encoder of the pre-trained residual convolutional auto-encoder model, I_i is the i-th frame output by that encoder, φ is the encoder, W is the weight parameter of the encoder, and c is the center of the hypersphere in the latent space; z_i is the latent space vector obtained by mapping the i-th frame output to the latent space, z_j is the latent space vector obtained by mapping the j-th frame output to the latent space, Sim(z_i, z_j) is the similarity between the latent space vectors z_i and z_j, y_i denotes the label of the i-th frame output, and y_j denotes the label of the j-th frame output. α is the margin constant: only negative pairs whose similarity exceeds α contribute contrastive loss, which ensures the stability of the optimization. The margin constant α in the contrastive loss is typically 0.4.
For the latent space optimization, the encoder is trained for 200 epochs using an Adam optimizer with weight decay λ = 5e-7 and learning rate η = 1e-4. At epoch 50, the learning rate is decayed by a factor of 0.1.
After training, the performance of the trained defect detection model can be verified with a validation data set containing a number of normal and abnormal sample images. For instance, in one example the validation data set includes 514 sample images, of which 2% are abnormal sample images and the remaining 98% are normal sample images.
The model application part uses the trained defect detection model to detect Micro LED defects. Specifically, an image to be detected of the Micro LED sample chip under test is acquired and input into the defect detection model; the image is acquired in the same way as the sample images and, in practical application, must undergo the same image preprocessing. After the image to be detected is input into the pre-trained defect detection model, the model outputs a normal or abnormal label for the image, which determines whether the Micro LED sample chip under test has a defect and thereby completes its defect detection. To illustrate the benefit of the method, the model's latent space is visualized with t-SNE to display the defect analysis results; tests show that, for models under different conditions, higher performance can be achieved with fewer blocks, realizing accurate and fast defect detection.
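At application time, the decision rule implied by the hypersphere fit, flagging a chip whose latent vector falls outside the fitted hypersphere, reduces to a simple distance threshold. The radius value here is an assumed hyperparameter, not one given in the patent.

```python
import numpy as np

def detect_defect(z, center, radius):
    """Label a chip as defective when its latent vector lies outside the hypersphere
    (Euclidean distance from the fitted centre exceeds the assumed radius)."""
    return "abnormal" if np.linalg.norm(z - center) > radius else "normal"
```

For example, with a centre at the origin and an assumed radius of 1.0, a latent vector close to the centre is classified as normal and a distant one as abnormal.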
What has been described above is only a preferred embodiment of the present application, and the present invention is not limited to the above embodiment. It is to be understood that other modifications and variations directly derivable or suggested by those skilled in the art without departing from the spirit and concept of the present invention are to be considered as included within the scope of the present invention.

Claims (10)

1. A method for detecting defects of a MicroLED based on unsupervised learning is characterized by comprising the following steps:
acquiring a sample data set, wherein the sample data set comprises a normal sample image of a normal MicroLED sample chip and an abnormal sample image of an abnormal MicroLED sample chip with a defect;
carrying out image preprocessing on a sample image in the sample data set;
performing model pre-training on a residual convolutional self-encoder model by using a sample image subjected to image pre-processing, wherein the residual convolutional self-encoder model comprises an encoder formed based on a residual convolutional module and a decoder formed based on a residual transpose convolutional module;
mapping the output of an encoder of a pre-trained residual convolution self-encoder model to a potential space and fitting the output to a hypersphere, wherein the hypersphere is used for classifying normal sample images and abnormal sample images, optimizing the potential space by using a target function, and training to obtain a defect detection model;
and acquiring an image to be detected of the MicroLED sample chip to be detected, inputting the image to be detected into the defect detection model, and completing the defect detection of the MicroLED sample chip to be detected.
2. The method of claim 1, wherein the objective function is used to minimize the volume of the hypersphere, minimize the similarity of potential space vectors corresponding to different labels, and maximize the similarity of potential space vectors corresponding to the same label, wherein the labels include normal labels corresponding to normal sample images and abnormal labels corresponding to abnormal sample images.
3. The method of claim 2, wherein the objective function is:
min_W (1/N) Σ_{i=1}^{N} ‖φ(I_i; W) − c‖² + (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} [ 1(y_i = y_j)·(1 − Sim(z_i, z_j)) + 1(y_i ≠ y_j)·max(0, Sim(z_i, z_j) − α) ]
wherein N represents the number of frames output by the encoder of the pre-trained residual convolutional self-encoder model, I_i is the i-th frame output by the encoder, φ is the encoder, W is the weight parameter of the encoder, and c is the center of the hypersphere in the latent space; z_i is the latent space vector obtained by mapping the i-th frame output into the latent space, z_j is the latent space vector obtained by mapping the j-th frame output into the latent space, Sim(z_i, z_j) is the similarity between the latent space vectors z_i and z_j, α is the margin (edge) constant, y_i represents the label of the i-th frame output, and y_j represents the label of the j-th frame output.
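The publication renders the objective function itself only as an image, so its exact form is not reproduced above; the numpy sketch below is one plausible reading of the three stated goals from claim 2 (shrink distances to the center c, raise similarity for same-label pairs, and cap different-label similarity at the margin α). The cosine similarity, pairing scheme and normalization are assumptions:

```python
import numpy as np

def cosine_sim(zi: np.ndarray, zj: np.ndarray) -> float:
    """Cosine similarity between two latent vectors."""
    return float(np.dot(zi, zj) / (np.linalg.norm(zi) * np.linalg.norm(zj)))

def hypersphere_objective(Z, y, c, alpha=0.5):
    """Pull latent vectors toward the center c, reward same-label
    similarity, and penalize different-label similarity above alpha."""
    volume_term = float(np.mean([np.sum((z - c) ** 2) for z in Z]))
    pair_term, pairs = 0.0, 0
    for i in range(len(Z)):
        for j in range(i + 1, len(Z)):
            s = cosine_sim(Z[i], Z[j])
            if y[i] == y[j]:
                pair_term += 1.0 - s              # maximize same-label similarity
            else:
                pair_term += max(0.0, s - alpha)  # cap cross-label similarity
            pairs += 1
    return volume_term + pair_term / max(pairs, 1)
```

Two well-separated clusters with consistent labels score lower under this sketch than the same vectors with mixed labels, which is the behavior claims 2 and 3 describe.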
4. The method according to claim 1, wherein the residual convolution module comprises a residual unit, a convolution unit and a maximum pooling unit, with a ReLU activation function; the convolution unit and the residual unit each process the input image, the two results are linearly superimposed and then output through the maximum pooling unit, and the residual unit keeps the number of channels and the values of the input image unchanged.
5. The method of claim 1, wherein, when model pre-training is performed on the residual convolutional self-encoder model using the sample images subjected to image preprocessing, the mean square error is used as the loss function.
6. The method according to any one of claims 1 to 5, wherein the image preprocessing of a sample image in the sample data set comprises:
performing morphological processing on the sample images in the sample data set;
and performing frequency domain filtering processing on the sample image after the morphological processing by using a Gaussian difference filter for realizing band-pass filtering to finish image preprocessing.
7. The method of claim 6, wherein said morphologically processing the sample images in the sample data set comprises, for each sample image f (x, y):
performing an opening operation on the sample image f(x, y) using a structural element b with a preset specification to obtain a background image h(x, y), wherein x and y represent pixel point coordinates;
the morphologically processed sample image g (x, y) ═ f (x, y) -h (x, y) was determined.
8. The method of claim 7, wherein performing the opening operation on the sample image f(x, y) using the structural element b with a preset specification to obtain the background image h(x, y) is:

h(x, y) = (f ∘ b)(x, y) = ((f ⊖ b) ⊕ b)(x, y)

wherein f ⊖ b represents the grayscale erosion of the sample image f(x, y) by the structural element b, with:

(f ⊖ b)(x, y) = min_{(x', y') ∈ D_b} { f(x + x', y + y') − b(x', y') }

and (f ⊖ b) ⊕ b represents the grayscale dilation, by the structural element b, of the image f ⊖ b obtained after grayscale erosion, with:

((f ⊖ b) ⊕ b)(x, y) = max_{(x', y') ∈ D_b} { (f ⊖ b)(x − x', y − y') + b(x', y') }

wherein D_b is the domain of definition of the structural element b and (x', y') are pixel point coordinates.
9. The method of claim 6, wherein the Gaussian difference filter is a band-pass filter formed as the difference of two high-pass filtered Gaussian functions with different widths, recorded as:

DOG(x, y, σ1, σ2) = G(x, y, σ1) − G(x, y, σ2)

wherein (x, y) represents the pixel point coordinates, G(x, y, σ1) is the high-pass filtered Gaussian function whose parameter is the Gaussian standard deviation σ1, G(x, y, σ2) is the high-pass filtered Gaussian function whose parameter is the Gaussian standard deviation σ2, and σ2 = Kσ1, where K is a coefficient.
10. The method of claim 6, wherein the Gaussian difference filter is used to filter out the low-frequency components corresponding to uneven illumination distribution in the morphologically processed sample image while retaining the high-frequency components corresponding to the edges of the defect, and g'(x, y, σ1, σ2) is the result of the convolution of the Gaussian difference filter DOG(x, y, σ1, σ2) with the morphologically processed sample image g(x, y):

g'(x, y, σ1, σ2) = DOG(x, y, σ1, σ2) * g(x, y)

wherein (x, y) represents the pixel point coordinates, and σ1 and σ2 are the two different standard deviations of the Gaussian difference filter.
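As a worked illustration of claims 9 and 10, a DoG kernel built as the difference of two normalized Gaussians with σ2 = Kσ1 sums to approximately zero, so it suppresses flat low-frequency background while responding to defect edges. The kernel size, σ1 and K below are illustrative choices, and the direct convolution loop favors clarity over speed:

```python
import numpy as np

def gaussian_kernel(sigma: float, size: int) -> np.ndarray:
    """Normalized 2-D Gaussian kernel (a low-pass filter)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def dog_kernel(sigma1: float, K: float = 1.6, size: int = 9) -> np.ndarray:
    """Difference-of-Gaussians kernel with sigma2 = K * sigma1."""
    return gaussian_kernel(sigma1, size) - gaussian_kernel(K * sigma1, size)

def dog_filter(g: np.ndarray, sigma1: float = 1.0, K: float = 1.6,
               size: int = 9) -> np.ndarray:
    """Convolve image g with the DoG kernel (zero padding)."""
    k = dog_kernel(sigma1, K, size)
    pad = size // 2
    gp = np.pad(g, pad)
    out = np.empty_like(g, dtype=float)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            out[i, j] = np.sum(gp[i:i + size, j:j + size] * k[::-1, ::-1])
    return out
```

Because the two Gaussians are individually normalized, the kernel's coefficients sum to roughly zero, which is exactly what removes the slowly varying illumination component while passing the mid-frequency band containing defect edges.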
CN202210048096.3A 2022-01-17 2022-01-17 Micro LED defect detection method based on unsupervised learning Active CN114511516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210048096.3A CN114511516B (en) 2022-01-17 2022-01-17 Micro LED defect detection method based on unsupervised learning


Publications (2)

Publication Number Publication Date
CN114511516A true CN114511516A (en) 2022-05-17
CN114511516B CN114511516B (en) 2023-04-07

Family

ID=81550557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210048096.3A Active CN114511516B (en) 2022-01-17 2022-01-17 Micro LED defect detection method based on unsupervised learning

Country Status (1)

Country Link
CN (1) CN114511516B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200039049A (en) * 2018-10-02 2020-04-16 (주)지엘테크 Inspection method for appearance badness and inspection system for appearance badness
CN111145165A (en) * 2019-12-30 2020-05-12 北京工业大学 Rubber seal ring surface defect detection method based on machine vision
CN111383209A (en) * 2019-12-20 2020-07-07 华南理工大学 Unsupervised flaw detection method based on full convolution self-encoder network
CN112800876A (en) * 2021-01-14 2021-05-14 北京交通大学 Method and system for embedding hypersphere features for re-identification
CN113313684A (en) * 2021-05-28 2021-08-27 北京航空航天大学 Video-based industrial defect detection system under dim light condition
CN113516650A (en) * 2021-07-30 2021-10-19 深圳康微视觉技术有限公司 Circuit board hole plugging defect detection method and device based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU Jianglin et al., "Research on Convolutional Network Methods for Face Verification in Classroom Scenes", Information Technology and Network Security *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330702A (en) * 2022-08-01 2022-11-11 无锡雪浪数制科技有限公司 Beverage bottle filling defect identification method based on deep vision
CN115494439A (en) * 2022-11-08 2022-12-20 中遥天地(北京)信息技术有限公司 Space-time coding image correction method based on deep learning
CN117372424A (en) * 2023-12-05 2024-01-09 成都数之联科技股份有限公司 Defect detection method, device, equipment and storage medium
CN117372424B (en) * 2023-12-05 2024-03-08 成都数之联科技股份有限公司 Defect detection method, device, equipment and storage medium
CN117523322A (en) * 2024-01-04 2024-02-06 成都数联云算科技有限公司 Defect detection system and method based on unsupervised learning
CN117523322B (en) * 2024-01-04 2024-03-15 成都数联云算科技有限公司 Defect detection system and method based on unsupervised learning


Similar Documents

Publication Publication Date Title
CN114511516B (en) Micro LED defect detection method based on unsupervised learning
Yuan-Fu A deep learning model for identification of defect patterns in semiconductor wafer map
Xue-Wu et al. A vision inspection system for the surface defects of strongly reflected metal based on multi-class SVM
Martins et al. Automatic detection of surface defects on rolled steel using computer vision and artificial neural networks
CN110648305B (en) Industrial image detection method, system and computer readable recording medium
CN108346141B (en) Method for extracting defects of single-side light-entering type light guide plate
CN111712769B (en) Method, apparatus, system and storage medium for setting lighting conditions
CN111179250B (en) Industrial product defect detection system based on multitask learning
CN109146847B (en) Wafer map batch analysis method based on semi-supervised learning
US10636133B2 (en) Automated optical inspection (AOI) image classification method, system and computer-readable media
CN113239930A (en) Method, system and device for identifying defects of cellophane and storage medium
CN116188475B (en) Intelligent control method, system and medium for automatic optical detection of appearance defects
CN116777907A (en) Sheet metal part quality detection method
US20190187555A1 (en) Automatic inline detection and wafer disposition system and method for automatic inline detection and wafer disposition
CN112907561A (en) Notebook appearance flaw detection method based on deep learning
TWI683262B (en) Industrial image inspection method and system and computer readable recording medium
CN114723708A (en) Handicraft appearance defect detection method based on unsupervised image segmentation
Peng et al. Automated product boundary defect detection based on image moment feature anomaly
CN114494780A (en) Semi-supervised industrial defect detection method and system based on feature comparison
Pramunendar et al. A Robust Image Enhancement Techniques for Underwater Fish Classification in Marine Environment.
US11144702B2 (en) Methods and systems for wafer image generation
Park et al. Robust inspection of micro-LED chip defects using unsupervised anomaly detection
CN116978834A (en) Intelligent monitoring and early warning system for wafer production
Lin Tiny surface defect inspection of electronic passive components using discrete cosine transform decomposition and cumulative sum techniques
Mittel et al. Vision-based crack detection using transfer learning in metal forming processes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant