WO2021237682A1 - Detection device and detection method for a display panel, electronic device, and readable medium - Google Patents
Detection device and detection method for a display panel, electronic device, and readable medium
- Publication number
- WO2021237682A1 (PCT/CN2020/093281)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- display panel
- layer
- detection
- defect
- Prior art date
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 161
- 230000007547 defect Effects 0.000 claims description 157
- 238000012549 training Methods 0.000 claims description 92
- 238000009826 distribution Methods 0.000 claims description 55
- 238000007689 inspection Methods 0.000 claims description 49
- 238000013527 convolutional neural network Methods 0.000 claims description 47
- 238000000034 method Methods 0.000 claims description 36
- 238000012545 processing Methods 0.000 claims description 30
- 238000005070 sampling Methods 0.000 claims description 30
- 238000013528 artificial neural network Methods 0.000 claims description 17
- 238000010606 normalization Methods 0.000 claims description 13
- 230000006870 function Effects 0.000 claims description 12
- 238000003860 storage Methods 0.000 claims description 8
- 230000003993 interaction Effects 0.000 claims description 4
- 238000004590 computer program Methods 0.000 claims description 3
- 238000004519 manufacturing process Methods 0.000 description 26
- 230000008569 process Effects 0.000 description 19
- 239000013598 vector Substances 0.000 description 18
- 238000011176 pooling Methods 0.000 description 11
- 238000012795 verification Methods 0.000 description 10
- 230000003044 adaptive effect Effects 0.000 description 8
- 238000010586 diagram Methods 0.000 description 8
- 238000005457 optimization Methods 0.000 description 8
- 230000005477 standard model Effects 0.000 description 8
- 238000012360 testing method Methods 0.000 description 8
- 238000010276 construction Methods 0.000 description 7
- 230000003287 optical effect Effects 0.000 description 5
- 230000002207 retinal effect Effects 0.000 description 5
- 238000007477 logistic regression Methods 0.000 description 4
- 238000013507 mapping Methods 0.000 description 4
- 238000012706 support-vector machine Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 230000001133 acceleration Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 2
- 238000004891 communication Methods 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 238000010200 validation analysis Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000002950 deficient Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000008439 repair process Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/44—Processing the detected response signal, e.g. electronic circuits specially adapted therefor
- G01N29/4445—Classification of defects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30121—CRT, LCD or plasma display
Definitions
- the present disclosure relates to the field of image recognition, and in particular to a detection device for a display panel, a detection method for a display panel, an electronic device, and a computer-readable medium.
- display panel defects are mainly detected by automated optical inspection (AOI) equipment on the production line, which photographs the positions of the display panel that may be defective; the category and location of the display panel defect are then identified based on the captured pictures.
- AOI Automatic optical inspection
- the embodiments of the present disclosure provide a detection device for a display panel, a detection method for a display panel, an electronic device, and a computer-readable medium.
- a detection device for a display panel includes:
- An image receiver for receiving the inspection image of the display panel to be inspected
- the detector is configured to input the detection image of the display panel to be detected into a pre-built detection model for detecting the display panel, and use the detection model to generate a detection result;
- the detection model includes:
- the defect category recognition sub-model is used to recognize the defect category of the display panel to be inspected
- the defect category recognition sub-model includes multiple base models and a secondary model
- the multiple base models are used to initially classify the defects of the display panel to be inspected
- the secondary model is used for final classification of the defects of the display panel to be inspected according to the input data obtained by integrating the output data of the respective base models.
- the multiple base models share the same convolutional neural network structure, and the different base models are obtained by training the convolutional neural network model with multiple first training data sets that satisfy different probability distributions.
- the multiple first training data sets include sample sets obtained by respectively sampling the original data set according to different predetermined sampling ratios; the different predetermined sampling ratios are sampling ratios of the detection images of each defect category, determined according to the different probability distributions, and the original data set includes a plurality of detection images of different display panels with known defects.
- the convolutional neural network model includes a fully connected layer, a supplementary convolution layer, a batch normalization layer, and a random discard layer;
- the supplementary convolution layer is used to convolve data to be input to the fully connected layer, so that the data convolved by the supplementary convolution layer meets the input dimensions of the fully connected layer;
- the batch standardization layer is used to perform standardization processing on the data to be input to the fully connected layer;
- the random discarding layer is used to randomly discard some neural network units in the convolutional neural network model to avoid overfitting
- when training the convolutional neural network model, a first algorithm is used to initialize the fully connected layer, a second algorithm is used to regularize the fully connected layer, and a third algorithm is used to initialize the supplementary convolutional layer.
- the secondary model is a classifier, which includes a plurality of fully connected layers and a normalized exponential function layer.
- the defect location recognition sub-model is a target detector.
- embodiments of the present disclosure provide a method for detecting a display panel, including:
- the detection model includes:
- the defect category recognition sub-model is used to recognize the defect category of the display panel to be inspected
- the defect category recognition sub-model includes multiple base models and a secondary model
- the multiple base models are used to respectively initially classify the defects of the display panel to be inspected
- the secondary model is used for final classification of the defects of the display panel to be inspected according to the input data obtained by integrating the output data of the respective base models.
- the multiple base models share the same convolutional neural network structure, and the different base models are obtained by training the convolutional neural network model with multiple first training data sets that satisfy different probability distributions.
- the step of generating a plurality of said first training data sets includes:
- the original data set including multiple inspection images of different display panels with known defects
- the original data sets are respectively sampled to obtain the multiple first training data sets.
- the convolutional neural network model includes a fully connected layer, a supplementary convolution layer, a batch normalization layer, and a random discard layer;
- the supplementary convolution layer is used to convolve data to be input to the fully connected layer, so that the data convolved by the supplementary convolution layer meets the input dimensions of the fully connected layer;
- the batch standardization layer is used to perform standardization processing on the data to be input to the fully connected layer;
- the random discarding layer is used to randomly discard some neural network units in the convolutional neural network model to avoid overfitting
- when training the convolutional neural network model, a first algorithm is used to initialize the fully connected layer, a second algorithm is used to regularize the fully connected layer, and a third algorithm is used to initialize the supplementary convolutional layer.
- the secondary model is a classifier
- the classifier includes a plurality of fully connected layers and a normalized exponential function layer.
- the defect location recognition sub-model is a target detector.
- an electronic device including:
- one or more processors;
- a storage device having one or more programs stored thereon, where, when the one or more programs are executed by the one or more processors, the one or more processors implement any one of the above-mentioned display panel detection methods;
- one or more I/O interfaces connected between the processor and the memory, configured to implement information interaction between the processor and the memory.
- the embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, and when the program is executed by a processor, any one of the foregoing display panel detection methods is implemented.
- FIG. 1 is a block diagram of the composition of a detection device for a display panel in an embodiment of the disclosure
- FIG. 2 is a structural diagram of an alternative implementation of the VGG16 model in the embodiments of the disclosure.
- FIG. 3 is a structural diagram of a RetinaNet target detection model in an embodiment of the disclosure.
- FIG. 4 is a block diagram of another display panel detection device in an embodiment of the disclosure.
- FIG. 5 is a block diagram of the composition of yet another detection device for a display panel in an embodiment of the disclosure.
- FIG. 6 is a flowchart of a method for detecting a display panel in an embodiment of the disclosure
- FIG. 7 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the present disclosure.
- FIG. 8 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the present disclosure
- FIG. 9 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the disclosure.
- FIG. 10 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the disclosure.
- FIG. 11 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the disclosure.
- FIG. 12 is a block diagram of an electronic device provided by an embodiment of the disclosure.
- FIG. 13 is a block diagram of the composition of a computer-readable medium provided by an embodiment of the disclosure.
- an embodiment of the present disclosure provides a detection device 100 for a display panel, including:
- the image receiver 110 is used to receive the inspection image of the display panel to be inspected
- the detector 120 is configured to input the detection image of the display panel to be detected into a pre-built detection model for detecting the display panel, and use the detection model to generate a detection result.
- the inspection image of the display panel to be inspected includes a picture collected by taking a picture of the display panel with the image acquisition device.
- an AOI detection image is collected by shooting the display panel with an AOI device; the AOI device scans the display panel and collects images based on optical principles in order to detect the display panel.
- the image receiver 110 receives the inspection image of the display panel to be inspected from the imaging device, for example, receives the inspection image from the AOI device.
- the inspection of the display panel to be inspected includes identifying the type of defect of the display panel to be inspected and marking the defect position of the display panel to be inspected.
- the types of defects include residue, missing material, foreign matter, color irregularity, etc., which are not particularly limited in the embodiments of the present disclosure.
- an inspection model for inspecting the display panel is constructed in advance, and the inspection image of the display panel to be inspected is input into the inspection model to determine the defect category and location of the display panel to be inspected.
- the automatic detection of the display panel to be detected is completed.
- the detection model is constructed after training based on detection images of a large number of different display panels.
- the detection images of different display panels used for training and constructing the detection model can be collected from the same production line or from different production lines.
- when the detection model is constructed based on training with detection images of different display panels collected from the same production line, the detection device 100 provided by the embodiment of the present disclosure has high detection accuracy for the display panels produced by that production line and can be used to improve the product quality of that specific production line; when the detection model is constructed based on training with detection images of different display panels collected from different production lines, the detection device 100 has high detection accuracy for the display panels produced by each of those production lines, which is conducive to mass production.
- a detection model for detecting defects of the display panel is constructed in advance.
- the detection model is constructed by training on detection images of a large number of different display panels;
- when the detection device is used to detect the display panel, it can adapt to the constantly changing data distribution on the production line and achieve high detection accuracy for different production lines and different types of display panel defects.
- the display panel inspection device provided by the embodiments of the present disclosure thus ensures the accuracy of the inspection, reduces the cost of display panel inspection, improves inspection efficiency, and is beneficial to improving the production quality and production efficiency of display panels.
- the detection of the display panel includes two parts: identifying the defect category of the display panel and identifying the defect location of the display panel.
- the detection model in the embodiments of the present disclosure includes:
- the defect category recognition sub-model is used to recognize the defect category of the display panel to be inspected
- the defect location identification sub-model is used to identify the defect location of the display panel to be inspected.
- the defect category recognition sub-model and the defect location recognition sub-model can be combined in any order.
- the defect category of the display panel can be identified first and then the defect location; the defect location can be identified first and then the defect category; or the defect category and the defect location can be identified separately and the results then integrated.
- the embodiments of the present disclosure do not specifically limit this.
- an ensemble learning algorithm is used to construct the defect category recognition sub-model.
- ensemble learning refers to using a series of learners to obtain several individual learners, and then integrating the individual learners through a combination strategy to obtain a strong learner.
- the basic idea is that when each individual learner has a preference, that is, when each individual learner performs better only in certain respects, integrating the individual learners both ensures the accuracy of the strong learner and improves its generalization performance.
- the defect category recognition sub-model is constructed according to an ensemble learning algorithm: specifically, multiple individual learners are constructed according to the various defect forms and the data distributions of the various defects, and the multiple individual learners are then integrated to obtain the defect category recognition sub-model, which can adapt to the constantly changing data distribution on the production line and improve the detection accuracy for display panel defects.
- the ensemble learning algorithm adopted in the embodiments of the present disclosure is a stacking algorithm.
- take a stacking algorithm with a two-layer learner structure as an example:
- the first layer includes multiple base models
- the second layer includes a secondary model.
- the main idea of the stacking algorithm is to train multiple base models separately, then merge the prediction results output by the base models into new data that serves as the input of the secondary model; the secondary model gives the final classification result.
- the defect category recognition sub-model includes a plurality of base models and a secondary model
- the multiple base models are used to respectively initially classify the defects of the display panel to be inspected
- the secondary model is used for final classification of the defects of the display panel to be inspected according to the input data obtained by integrating the output data of the respective base models.
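As a toy illustration (not the patent's implementation), the two-layer stacking flow described above can be sketched as follows: each base model produces an initial classification as a vector of class probabilities, the base-model outputs are merged into one vector, and the secondary model produces the final classification. The base and secondary models here are stand-in functions with made-up numbers.

```python
import numpy as np

def stack_predict(base_models, secondary_model, x):
    """Stacking inference: each base model gives an initial classification
    (a vector of class probabilities); the concatenation of those vectors
    is the new input data for the secondary model, which gives the final class."""
    base_outputs = [m(x) for m in base_models]   # each: (m,) probabilities
    merged = np.concatenate(base_outputs)        # merged input for the second layer
    return secondary_model(merged)               # final classification

# Toy setup: 4 "base models" over 3 defect categories (illustrative only).
base_models = [lambda x, i=i: np.roll(np.array([0.7, 0.2, 0.1]), i % 3)
               for i in range(4)]
# Stand-in secondary model: average the 4 probability vectors and take argmax.
secondary_model = lambda v: int(np.argmax(v.reshape(4, 3).mean(axis=0)))

print(stack_predict(base_models, secondary_model, None))
```

In the patent's scheme the secondary model is itself trained (a classifier), not a fixed average; the averaging here only keeps the sketch self-contained.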
- multiple base models may have different neural network structures, or may have the same neural network structure, which is not particularly limited in the present disclosure.
- the multiple base models are obtained by respectively training the same neural network using data with different probability distributions.
- different probability distributions correspond to the data distributions of the various forms of defects on the production line, including, for example, a uniform distribution, an exponential distribution, a bootstrap distribution, the original distribution, a binomial distribution, and a Gaussian distribution; the embodiments of the present disclosure impose no special restriction on this.
- the embodiments of the present disclosure use data with different probability distributions to train the same neural network structure respectively, so that the constructed defect category recognition sub-model can adapt to the constantly changing data distribution of display panel defects on the production line.
- giving the multiple base models the same neural network structure facilitates selecting the neural network structure with the best performance and facilitates subsequent optimization and debugging, thereby further improving the detection accuracy of the defect category recognition sub-model.
- the base model is a convolutional neural network model
- different base models are obtained by training the convolutional neural network model with multiple first training data sets that satisfy different probability distributions.
- the multiple first training data sets include sample sets obtained by respectively sampling the original data set according to different predetermined sampling ratios; the different predetermined sampling ratios are sampling ratios of the detection images of each defect category, determined according to the different probability distributions, and the original data set includes a plurality of detection images of different display panels with known defects.
- the manner of obtaining the first training data sets is further explained below.
- the same number of detection images of each defect category can be taken from the original data set as the first training data set, that is, the proportions of the detection images of the different defect categories in the first training data set are equal;
- alternatively, the proportion of the detection images of each defect category in the original data set is calculated and squared to obtain a new proportion; then, for each defect category, a corresponding number of detection images is taken from the original data set according to the new proportion, and the taken detection images of all the defect categories together form the first training data set;
- for the bootstrap distribution, detection images are sampled with replacement from the original data set to form the first training data set, and the remaining detection images are used as the verification set; for the original distribution, the original data set is divided into a training data set and a validation set at a ratio of 9:1.
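The sampling schemes above can be illustrated numerically. The defect-category names and counts below are hypothetical, and the exponent parameter makes the "squared proportion" step an explicit, adjustable assumption (the translated text is terse on the exact transform):

```python
# Hypothetical original data set: defect category -> number of inspection
# images with that known defect (names and counts are illustrative only).
orig_counts = {"residue": 800, "missing": 150, "foreign_matter": 40, "off_color": 10}
total = sum(orig_counts.values())

# Scheme 1: equal numbers per defect category (equal proportions).
n_per_class = min(orig_counts.values())
uniform_plan = {c: n_per_class for c in orig_counts}

# Scheme 2: square each category's original proportion, renormalize, and use
# the result as the new sampling proportion ("squared" follows the text; the
# exponent is kept as a parameter to make that assumption visible).
def reweighted_plan(counts, exponent=2.0, n_samples=500):
    tot = sum(counts.values())
    powered = {c: (v / tot) ** exponent for c, v in counts.items()}
    z = sum(powered.values())
    return {c: round(n_samples * p / z) for c, p in powered.items()}

# Scheme 3 (original distribution): split 9:1 into training and validation.
n_train = int(0.9 * total)

print(uniform_plan, reweighted_plan(orig_counts), n_train)
```

Note that squaring proportions concentrates sampling on majority classes; which transform the patent actually intends would follow from the original Chinese text.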
- the data processing unit 131 is further configured to divide the original data set, for example into three parts (training data, verification data, and test data) according to a predetermined ratio, where the verification data is used for the secondary model and the test data is used for evaluating the final effect.
- the ratio of training data, verification data, and test data is not particularly limited; for example, it is 8:1:1. It should also be noted that, after the original data set is divided, the data processing unit 131 generates the multiple first training data sets by sampling the divided training data.
- the convolutional neural network model is not particularly limited.
- the convolutional neural network model may be any one of a deep residual network (ResNet), a densely connected convolutional network (DenseNet), or a VGG network.
- the inventors of the present disclosure have found that the VGG model performs better than other convolutional neural network models when constructing the defect category recognition sub-model.
- the VGG model is a convolutional neural network
- the VGG16 model is a VGG model with a 16-layer network structure.
- the VGG16 standard model has 13 convolutional layers and 3 fully connected layers.
- the conventional convolutional layers described in the embodiments of the present disclosure refer to the original 13 convolutional layers of the VGG16 standard model; in the embodiments of the present disclosure, the VGG16 model is improved.
- a Batch Normalization (BN) layer is added;
- BN Batch Normalization
- in the VGG16 standard model, the input image size is 224x224; in the present disclosure, a supplementary convolutional layer is added so that the output of the supplementary convolutional layer meets the input dimension required by the fully connected layer of the VGG16 standard model;
- a random dropout layer is added.
- the random discard layer is a dropout layer: during training of the deep learning network, part of the neural network units are temporarily discarded from the network with a certain probability, thereby effectively alleviating overfitting.
- the convolutional neural network model includes a VGG16 model
- the VGG16 model includes:
- the batch standardization layer is used to standardize the data to be input to the fully connected layer of the VGG16 model
- the supplementary convolution layer is used to convolve the data to be input to the fully connected layer of the VGG16 model, so that the data convolved by the supplementary convolution layer meets the input dimensions of the fully connected layer of the VGG16 model;
- a randomly discarded layer is used to randomly discard some neural network units in the VGG16 model to avoid overfitting.
- FIG. 2 is a structural diagram of an alternative implementation of the VGG16 model in an embodiment of the present disclosure.
- the improved VGG16 model of the present disclosure includes 13 conventional convolutional layers, 1 supplementary convolutional layer, and 3 fully connected layers.
- the max pooling layer 1 is used for max-pooling, which takes the point with the largest value within the local receptive field;
- the maximum pooling layer 2 is used for maximum pooling
- the maximum pooling layer 3 is used for maximum pooling
- the maximum pooling layer 4 is used for maximum pooling
- the maximum pooling layer 5 is used for maximum pooling
- when training the VGG16 model, the Glorot algorithm is used for initialization in the fully connected layers of the VGG16 model, and the L2 regularization algorithm is used for regularization to prevent overfitting; the Glorot algorithm is also used for initialization in the supplementary convolutional layer.
- when training the convolutional neural network model, the first algorithm is used to initialize the fully connected layer, the second algorithm is used to regularize the fully connected layer, and the third algorithm is used to initialize the supplementary convolutional layer.
- the first algorithm is a Glorot algorithm
- the second algorithm is an L2 regularization algorithm
- the third algorithm is a Glorot algorithm
- regularization refers to constraining, adjusting, or shrinking the coefficient estimates towards zero to control the complexity of the model and reduce overfitting; depending on the penalty term used, regularization is divided into L1 regularization and L2 regularization.
- the structure of the improved VGG16 model after the last conventional convolutional layer is as follows:
- supplementary convolutional layer (Glorot initialization) -> BN layer -> flatten layer -> dropout layer -> fully connected layer (Glorot initialization + L2 regularization) -> BN layer -> dropout layer -> fully connected layer (Glorot initialization + L2 regularization) -> BN layer -> fully connected layer (Glorot initialization + L2 regularization, softmax).
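The job of the supplementary convolutional layer, adapting the conventional-convolution output so that its flattened size matches the fully connected layer's input dimension, can be sketched with a minimal NumPy forward pass. All shapes and the 3x3 kernel size below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_valid(x, k):
    """Minimal 'valid' 2D convolution over a single-channel feature map."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# Suppose the last conventional conv layer yields a 9x9 feature map while the
# fully connected layer expects a 49-dimensional (7x7) input. A supplementary
# 3x3 convolution shrinks 9x9 -> 7x7, so flattening matches the FC dimension.
features = rng.standard_normal((9, 9))
supp_kernel = rng.standard_normal((3, 3))   # would use Glorot init in practice
adapted = conv_valid(features, supp_kernel)  # shape (7, 7)
flat = adapted.reshape(-1)                   # 49-dim input for the FC layer

fc_weights = rng.standard_normal((3, 49))    # FC layer with 3 defect classes
logits = fc_weights @ flat
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax output

print(adapted.shape, flat.shape)
```

A real implementation would of course use a deep-learning framework's conv and dense layers; the point here is only the dimension bookkeeping.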
- the secondary model is a classifier
- the classifier may be a support vector machine (SVM) or a multi-class logistic regression classifier.
- SVM support vector machine
- multi-class logistic regression classifier multi-class logistic regression classifier
- the classifier is a neural network including a plurality of fully connected layers and a normalized exponential function layer.
- softmax is a logistic regression model that maps its input to real numbers between 0 and 1; each output value between 0 and 1 represents the probability that the corresponding category is taken.
- softmax can be used as a parameter of the fully connected layer, or can be used as a separate layer after the fully connected layer, which is not particularly limited in the embodiment of the present disclosure.
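A minimal numpy sketch of the normalized exponential function (softmax) described above; the max-subtraction for numerical stability and the example logits are implementation details, not from the patent:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; does not change the result
    e = np.exp(z - np.max(z))
    # each output lies in (0, 1) and the outputs sum to 1, so they can be read as probabilities
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
```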
- the classifier includes 2 fully connected layers and a normalized exponential function layer.
- the output result of the classifier is an n×1 dimensional vector, where n is the number of defect categories of the display panel.
- each element is a real number between 0 and 1, each element corresponds to a defect category of the display panel, and the value of each element represents the probability that the current display panel has the corresponding defect category.
- when the inspection device 100 inspects the display panel to be inspected, it determines the defect category corresponding to the element with the largest value in the n×1 dimensional vector output by the inspection model constructed in the inspection device 100 as the defect category of the display panel currently being inspected.
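This decision rule is a simple argmax over the output vector. A hypothetical illustration (the category names and probabilities below are invented for the example):

```python
import numpy as np

# example defect categories; the patent mentions residual, missing, foreign matter, heterochromatic
categories = ["residual", "missing", "foreign matter", "heterochromatic"]

# an example n x 1 probability vector as output by the classifier
output = np.array([0.05, 0.70, 0.15, 0.10])

# the defect category corresponding to the largest element is selected
predicted = categories[int(np.argmax(output))]
```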
- the secondary model performs a final classification on the defect of the display panel to be inspected based on the input data obtained by integrating the output data of the respective base models.
- the embodiment of the present disclosure does not specifically limit how to integrate the output data of the multiple base models to obtain the input data.
- the output vectors of each base model are connected to generate a new vector as the input data. For example, suppose that the defect category recognition sub-model includes 4 base models, and the 4 base models respectively correspond to 4 probability distributions.
- the penultimate output of each base model, that is, the output of the second fully connected layer, is an m ⁇ 1 dimensional vector.
- the 4 m×1 dimensional vectors are connected to obtain a 4m×1 dimensional vector
- the 4m×1 dimensional vector is used as the input data of the secondary model.
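The integration step is a plain concatenation. A sketch in numpy, where m = 8 and the random vectors stand in for the penultimate-layer outputs of the four base models:

```python
import numpy as np

m = 8
# stand-ins for the m x 1 penultimate outputs of the 4 base models
base_outputs = [np.random.rand(m) for _ in range(4)]

# connect the four m x 1 vectors into one 4m x 1 vector for the secondary model
stacked = np.concatenate(base_outputs)
```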
- the data obtained by integrating the output data of the multiple base models is stored in the hdf5 format.
- the defect location recognition sub-model is a target detector.
- the target detector includes a retinal net (RetinaNet) target detection model.
- Figure 3 shows the network structure of the RetinaNet model.
- the RetinaNet model will mark the location and type of display panel defects in the output.
- when the RetinaNet model is trained, the normal, black, and fuzzy images in the original data set are excluded from the detection images of the defect categories. In addition, all defect categories are grouped into one class, called foreground, so that the RetinaNet model only distinguishes foreground and background during training, thus focusing on the identification of defect positions without distinguishing defect categories.
- the detection device 100 further includes a model builder 130, and the model builder 130 includes:
- the data processing unit 131 is configured to obtain an original data set, the original data set including a plurality of inspection images of different display panels with known defects;
- the model construction unit 132 is configured to construct the detection model according to the original data set.
- the detection images of different display panels in the original data set can be collected from the same production line or from different production lines. It should be noted that, in order to train the detection model, in the embodiment of the present disclosure, the display panel defects in the detection images constituting the original data set are identified and marked in advance, and the marking content includes the position and the defect category of the display panel defects.
- the process of training the convolutional neural network to obtain multiple base models includes:
- the data processing unit 131 is configured to determine, for each of the multiple probability distributions, the sampling ratios of the detection images of different defect categories;
- the data processing unit 131 is further configured to sample the original data sets according to the sampling ratios of the detected images of different defect categories to obtain multiple first training data sets with different probability distributions;
- the model construction unit 132 is configured to use the multiple first training data sets to train the convolutional neural network model, and generate multiple base models respectively corresponding to different probability distributions.
- the data processing unit 131 before the model construction unit 132 trains the convolutional neural network model, the data processing unit 131 also preprocesses the detection images in the first training data set to further increase the training rate of the model.
- the above preprocessing specifically includes:
- standardization processing refers to further standardizing the size and format of the detection images in the first training data set, for example, scaling the detection image to 600x600; normalization processing refers to subjecting the detection images in the first training data set to dimensionless processing to reduce the magnitude and speed up the reading rate of the detection image. For example, the pixel average of the detection image is subtracted from each pixel in the detection image to normalize the pixel values of the detection image.
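The mean-subtraction step can be sketched in numpy (the 3x4 example image is illustrative; real detection images would be 600x600 after scaling):

```python
import numpy as np

def normalize(image):
    # dimensionless processing: subtract the image's own pixel average,
    # so the normalized image has zero mean
    return image - image.mean()

img = np.arange(12.0).reshape(3, 4)  # toy stand-in for a detection image
out = normalize(img)
```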
- the above interpolation algorithm is not particularly limited.
- the interpolation algorithm may be any one of a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, a bicubic interpolation algorithm, and a Lanczos (LANCZOS) interpolation algorithm.
- the inventors of the present disclosure found that the bicubic interpolation algorithm and the LANCZOS algorithm perform better in the image scaling field than the other interpolation algorithms, and that the LANCZOS algorithm runs faster.
- the detected image preprocessed by the data processing unit 131 is stored in the hdf5 format, thereby further improving the reading rate.
- the model construction unit 132 also uses an optimization algorithm to perform optimization, so as to accelerate the convergence rate of training the convolutional neural network model.
- the embodiment of the present disclosure does not specifically limit the above optimization algorithm.
- it may be a Stochastic Gradient Descent (SGD) algorithm or an adaptive learning rate (Adadelta) algorithm.
- the inventors of the embodiments of the present disclosure have discovered that the SGD algorithm has better performance.
- when using the SGD optimizer for optimization, the learning rate is set to 0.001, and the momentum gradient descent (momentum) algorithm and the Nesterov gradient acceleration algorithm are used to speed up the convergence rate of the model.
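A hedged sketch of SGD with momentum and Nesterov acceleration using the stated learning rate of 0.001; the quadratic objective f(w) = w^2 is a stand-in for the real training loss, and the momentum coefficient 0.9 is an assumed typical value, not specified by the text:

```python
import numpy as np

def nesterov_sgd_step(w, velocity, grad_fn, lr=0.001, momentum=0.9):
    # Nesterov acceleration: evaluate the gradient at the look-ahead point
    lookahead = w + momentum * velocity
    velocity = momentum * velocity - lr * grad_fn(lookahead)
    return w + velocity, velocity

grad = lambda w: 2 * w  # gradient of the stand-in objective f(w) = w^2
w, v = np.array([1.0]), np.array([0.0])
for _ in range(2000):
    w, v = nesterov_sgd_step(w, v, grad)
# w has converged close to the minimizer 0
```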
- the process of training the classifier to obtain the secondary model includes:
- the data processing unit 131 is configured to integrate the output data of the multiple base models obtained in the process of training the convolutional neural network model, so as to generate a second training data set;
- the model construction unit 132 is configured to train the classifier according to the second training set to obtain the secondary model.
- the process of constructing the defect location recognition sub-model includes:
- the model construction unit 132 is configured to train the target detection model according to the original data set to obtain the defect location recognition sub-model.
- the detection device 100 further includes:
- the image obtainer 140 is used to obtain a detection image of the display panel.
- the image acquirer 140 may be an image capturing device, such as an AOI device; that is, the image capturing device may be used as a part of the detection apparatus 100 provided in the embodiment of the present disclosure.
- the image receiver 110 receives the detection image acquired by the image acquirer 140.
- the inspection images of different display panels with known defects, which the model builder 130 uses to construct the inspection model, may also be acquired by the image acquirer 140.
- a method for detecting a display panel including:
- step S100 the inspection image of the display panel to be inspected is input into a pre-built inspection model for inspecting the display panel, and the display panel to be inspected is inspected;
- the detection model includes:
- the defect category recognition sub-model is used to recognize the defect category of the display panel to be inspected
- the defect category recognition sub-model includes multiple base models and sub-models
- the multiple base models are used to respectively initially classify the defects of the display panel to be inspected
- the secondary model is used for final classification of the defects of the display panel to be inspected according to the input data obtained by integrating the output data of the respective base models.
- in the embodiment of the present disclosure, a detection model for detecting the display panel is constructed in advance through training. The detection image of the display panel to be detected is input into the pre-built detection model to obtain the detection result, thereby realizing automatic detection of the display panel.
- the inspection image of the display panel to be inspected includes a picture collected by taking a picture of the display panel with the image acquisition device.
- for example, an AOI detection image collected by shooting the display panel with an AOI device; an AOI device is a device that scans a display panel and collects images based on optical principles to detect the display panel.
- the inspection of the display panel to be inspected includes identifying the type of defect of the display panel to be inspected and marking the defect position of the display panel to be inspected.
- the types of defects include residual, missing, foreign matter, heterochromatic, etc., which are not particularly limited in the embodiment of the present disclosure.
- the display panel inspection method provided by the embodiment of the present disclosure automatically inspects the display panel using a pre-built inspection model, which can adapt to the constantly changing data distribution on the production line and has high detection accuracy for different production lines and different types of display panel defects.
- the display panel inspection method provided by the embodiments of the present disclosure ensures the accuracy of the inspection while reducing the cost of display panel inspection and improving inspection efficiency, which is beneficial to improving the production quality and production efficiency of display panels.
- the detection of the display panel includes two parts: identifying the defect category of the display panel and identifying the defect location of the display panel.
- the detection model includes a defect category recognition sub-model and a defect location recognition sub-model;
- an integrated learning algorithm is used to construct a defect category recognition sub-model.
- the multiple base models are the same convolutional neural network model, and the different base models are obtained by training the convolutional neural network model with multiple first training data sets that satisfy different probability distributions.
- the step of generating a plurality of the first training data sets includes:
- step S410 an original data set is generated, the original data set including a plurality of inspection images of different display panels with known defects;
- step S420 corresponding to a variety of probability distributions, respectively determine the sampling ratios of detected images of different defect categories
- step S430 the original data sets are respectively sampled according to the sampling ratios of the detected images of different defect categories to obtain the multiple first training data sets.
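Steps S420 and S430 can be sketched as per-category subsampling; the category names, counts, and ratios below are invented for illustration, and one such ratio dictionary would exist per probability distribution:

```python
import random
random.seed(0)  # for reproducibility of the sketch

# a toy original data set: (image id, defect category) pairs with known labels
original = [("img%03d" % i, cat) for i, cat in
            enumerate(["residual"] * 50 + ["missing"] * 30 + ["foreign"] * 20)]

def sample_by_ratio(data, ratios):
    # ratios: fraction of each category's detection images to keep for this distribution
    out = []
    for cat, frac in ratios.items():
        imgs = [d for d in data if d[1] == cat]
        out += random.sample(imgs, int(len(imgs) * frac))
    return out

# one first training data set, produced by one set of per-category sampling ratios
train_set = sample_by_ratio(original, {"residual": 0.5, "missing": 1.0, "foreign": 1.0})
```

Repeating this with different ratio dictionaries yields the multiple first training data sets with different probability distributions.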
- the convolutional neural network model is not particularly limited.
- the convolutional neural network model may be any one of a deep residual network (ResNet, Deep Residual Network), a densely connected convolutional network (DenseNet, Densely Connected Convolutional Networks), and a VGG network.
- the inventors of the embodiments of the present disclosure have discovered that the VGG model has better performance than other convolutional neural network models when constructing the defect category recognition sub-model.
- the embodiment of the present disclosure uses the VGG16 model to construct the base model.
- the VGG16 standard model has 13 convolutional layers and 3 fully connected layers.
- the conventional convolutional layer described in the embodiment of the present disclosure refers to the original 13 convolutional layers in the VGG16 standard model.
- the VGG16 model is improved.
- a Batch Normalization (BN) layer is added; in the VGG16 standard model, the input image size is 224x224, whereas in the embodiment of the present disclosure the detection images are scaled to a different size, so a supplementary convolutional layer is added so that the output of the supplementary convolutional layer meets the input dimension required by the fully connected layers of the VGG16 standard model; and a random dropout layer is added.
- the random discard layer is the dropout layer, which means that during the training process of the deep learning network, part of the neural network units are temporarily discarded from the network with a certain probability, thereby effectively alleviating overfitting.
- when training the VGG16 model, the glorot algorithm is used for initialization in the fully connected layers of the VGG16 model, and the L2 regularization algorithm is used for regularization to prevent overfitting. In the supplementary convolutional layer, the glorot algorithm is also used for initialization.
- the convolutional neural network model includes a fully connected layer, a supplementary convolution layer, a batch normalization layer, and a random discard layer;
- the supplementary convolution layer is used to convolve data to be input to the fully connected layer, so that the data convolved by the supplementary convolution layer meets the input dimensions of the fully connected layer;
- the batch standardization layer is used to perform standardization processing on the data to be input to the fully connected layer;
- the random discarding layer is used to randomly discard some neural network units in the convolutional neural network model to avoid overfitting
- when training the convolutional neural network model, the first algorithm is used to initialize the fully connected layer, the second algorithm is used to regularize the fully connected layer, and the third algorithm is used to initialize the supplementary convolutional layer.
- the first algorithm is a glorot algorithm
- the second algorithm is an L2 regularization algorithm
- the third algorithm is a glorot algorithm
- the secondary model is a classifier
- the classifier may be a support vector machine (SVM) or a multi-class logistic regression classifier.
- SVM support vector machine
- the classifier is a neural network including a plurality of fully connected layers and a normalized exponential function layer.
- Softmax is a logistic regression model that maps its input to real numbers between 0 and 1, and each output value between 0 and 1 represents the probability that the corresponding category is taken.
- softmax can be used as a parameter of the fully connected layer, or can be used as a separate layer after the fully connected layer, which is not particularly limited in the embodiment of the present disclosure.
- the classifier includes 2 fully connected layers and a normalized exponential function layer.
- the output result of the classifier is an n×1 dimensional vector, where n is the number of defect categories of the display panel.
- each element is a real number between 0 and 1, each element corresponds to a defect category of the display panel, and the value of each element represents the probability that the current display panel has the corresponding defect category.
- the defect category corresponding to the element with the largest value in the output vector is determined as the defect category of the display panel currently being inspected.
- the defect location recognition sub-model is a target detector.
- the target detector includes a retinal net (RetinaNet) target detection model.
- Figure 3 shows the network structure of the RetinaNet model.
- the RetinaNet model will mark the location and type of display panel defects in the output.
- when the RetinaNet model is trained, the normal, black, and fuzzy images in the original data set are excluded from the detection images of the defect categories. In addition, all defect categories are grouped into one class, called foreground, so that the RetinaNet model only distinguishes foreground and background during training, thus focusing on the identification of defect positions without distinguishing defect categories.
- the detection method provided in an embodiment of the present disclosure further includes:
- step S200 the inspection model for inspecting the display panel is constructed.
- step S200 includes:
- step S210 an original data set is generated, the original data set including a plurality of inspection images of different display panels with known defects;
- step S220 the detection model is constructed according to the original data set.
- step S220 includes:
- step S221 the original data set is sampled according to multiple probability distributions to obtain multiple first training data sets with different probability distributions
- step S222 training a convolutional neural network model according to the first training data set to generate a plurality of the base models respectively corresponding to different probability distributions;
- step S223 train the classifier according to the second training data set obtained by integrating the output data of each of the base models to generate the secondary model;
- step S224 the target detector is trained according to the original data set to generate the defect location recognition sub-model.
- the embodiment of the present disclosure does not specifically limit how to integrate the output data of the multiple base models to obtain the second training data set.
- the output vectors of each base model are connected to generate a new vector as a sample of the second training data set.
- the defect category recognition sub-model includes 4 base models, and the 4 base models respectively correspond to 4 probability distributions.
- the penultimate output of each base model, that is, the output of the second fully connected layer, is an m×1 dimensional vector.
- the four m×1 dimensional vectors are connected to obtain a 4m×1 dimensional vector, and the 4m×1 dimensional vector is used as the input data of the secondary model.
- the process of training the convolutional neural network to obtain multiple base models includes:
- corresponding to each of the multiple probability distributions, the sampling ratios of the detection images of different defect categories are respectively determined
- the convolutional neural network model is trained by using the multiple first training data sets to generate multiple base models respectively corresponding to different probability distributions.
- the remaining detection images are used as the verification set; for the original distribution, the original data set is divided into a training data set and a validation set according to a ratio of 9:1.
- the original data sets are further divided.
- the original data set is divided into three parts, training data, verification data, and test data, according to a predetermined ratio.
- the verification data is used for the secondary model
- the test data is used for evaluating the final effect.
- the ratio of training data, verification data, and test data is not particularly limited.
- the ratio of training data, verification data, and test data is 8:1:1.
- multiple first training data sets are generated by sampling the divided training data.
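The 8:1:1 split can be sketched as follows; the shuffling, seed, and index arithmetic are illustrative choices, not the patent's exact procedure:

```python
import random

def split_8_1_1(samples, seed=0):
    # shuffle a copy so the three parts are drawn without bias
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    a, b = int(n * 0.8), int(n * 0.9)
    # 80% training data, 10% verification data, 10% test data
    return samples[:a], samples[a:b], samples[b:]

train, val, test = split_8_1_1(range(100))
```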
- the detection images in the first training data set are also preprocessed to further increase the training rate of the model.
- the above preprocessing specifically includes:
- standardization processing refers to further standardizing the size and format of the detection images in the first training data set, for example, scaling the detection image to 600x600; normalization processing refers to subjecting the detection images in the first training data set to dimensionless processing to reduce the magnitude and speed up the reading rate of the detection image. For example, the pixel average of the detection image is subtracted from each pixel in the detection image to normalize the pixel values of the detection image.
- the above interpolation algorithm is not particularly limited.
- the interpolation algorithm may be any one of a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, a bicubic interpolation algorithm, and a Lanczos (LANCZOS) interpolation algorithm.
- the inventors of the present disclosure found that the bicubic interpolation algorithm and the LANCZOS algorithm perform better in the image scaling field than the other interpolation algorithms, and that the LANCZOS algorithm runs faster.
- the preprocessed detection image is stored in hdf5 format, thereby further improving the reading rate.
- optimization algorithms are also used for optimization to accelerate the convergence rate of training the convolutional neural network model.
- the embodiment of the present disclosure does not specifically limit the above optimization algorithm.
- it may be a Stochastic Gradient Descent (SGD) algorithm or an adaptive learning rate (Adadelta) algorithm.
- the inventors of the embodiments of the present disclosure have discovered that the SGD algorithm has better performance.
- when using the SGD optimizer for optimization, the learning rate is set to 0.001, and the momentum gradient descent (momentum) algorithm and the Nesterov gradient acceleration algorithm are used to speed up the convergence rate of the model.
- the process of training the classifier to obtain the secondary model includes:
- the output data of the multiple base models are integrated to generate a second training set
- the process of constructing the defect location recognition sub-model includes:
- the detection method further includes:
- step S300 the image acquirer is controlled to acquire the inspection image of the display panel to be inspected.
- an electronic device including:
- one or more processors 201;
- the memory 202 has one or more programs stored thereon, and when the one or more programs are executed by one or more processors, the one or more processors implement any one of the foregoing display panel detection methods;
- One or more I/O interfaces 203 are connected between the processor and the memory, and are configured to implement information interaction between the processor and the memory.
- the processor 201 is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.
- the memory 202 is a device with data storage capabilities, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH). The I/O interface (read-write interface) 203 is connected between the processor 201 and the memory 202 and can realize information interaction between the processor 201 and the memory 202; it includes but is not limited to a data bus (Bus) and the like.
- the processor 201, the memory 202, and the I/O interface 203 are connected to each other through the bus 204, and further connected to other components of the computing device.
- an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processor, any one of the foregoing display panel detection methods is implemented.
- Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium).
- the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
- Computer storage media include but are not limited to RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or Any other medium used to store desired information and that can be accessed by a computer.
- a communication medium usually contains computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium.
Claims (14)
- A detection device for a display panel, comprising: an image receiver, configured to receive a detection image of a display panel to be detected; a detector, configured to input the detection image of the display panel to be detected into a pre-built detection model for detecting the display panel, and to generate a detection result using the detection model; the detection model comprises: a defect category recognition sub-model, configured to recognize the defect category of the display panel to be detected; and a defect location recognition sub-model, configured to mark the defect location of the display panel to be detected; wherein the defect category recognition sub-model comprises a plurality of base models and a secondary model; the plurality of base models are configured to perform initial classification on defects of the display panel to be detected; and the secondary model is configured to perform final classification on the defects of the display panel to be detected according to input data obtained by integrating the output data of each of the base models.
- The detection device according to claim 1, wherein the plurality of base models are the same convolutional neural network model, and different base models are obtained by training the convolutional neural network model with a plurality of first training data sets satisfying different probability distributions.
- The detection device according to claim 2, wherein the plurality of first training data sets comprise sample sets obtained by sampling an original data set according to different predetermined sampling ratios, the different predetermined sampling ratios being sampling ratios of detection images of different defect categories determined according to different probability distributions, and the original data set comprises a plurality of detection images of different display panels with known defects.
- The detection device according to claim 2 or 3, wherein the convolutional neural network model comprises a fully connected layer, a supplementary convolutional layer, a batch normalization layer, and a random discard layer; the supplementary convolutional layer is configured to convolve data to be input to the fully connected layer, so that the data convolved by the supplementary convolutional layer meets the input dimension of the fully connected layer; the batch normalization layer is configured to standardize the data to be input to the fully connected layer; and the random discard layer is configured to randomly discard some neural network units in the convolutional neural network model to avoid overfitting; wherein, when the convolutional neural network model is trained, a first algorithm is used to initialize the fully connected layer, a second algorithm is used to regularize the fully connected layer, and a third algorithm is used to initialize the supplementary convolutional layer.
- The detection device according to claim 1, wherein the secondary model is a classifier comprising a plurality of fully connected layers and a normalized exponential function layer.
- The detection device according to claim 1, wherein the defect location recognition sub-model is a target detector.
- A detection method for a display panel, comprising: inputting a detection image of a display panel to be detected into a pre-built detection model for detecting the display panel, and detecting the display panel to be detected; the detection model comprises: a defect category recognition sub-model, configured to recognize the defect category of the display panel to be detected; and a defect location recognition sub-model, configured to mark the defect location of the display panel to be detected; wherein the defect category recognition sub-model comprises a plurality of base models and a secondary model; the plurality of base models are configured to respectively perform initial classification on defects of the display panel to be detected; and the secondary model is configured to perform final classification on the defects of the display panel to be detected according to input data obtained by integrating the output data of each of the base models.
- The detection method according to claim 7, wherein the plurality of base models are the same convolutional neural network model, and different base models are obtained by training the convolutional neural network model with a plurality of first training data sets satisfying different probability distributions.
- The detection method according to claim 8, wherein the step of generating the plurality of first training data sets comprises: generating an original data set, the original data set comprising a plurality of detection images of different display panels with known defects; determining, corresponding to a plurality of probability distributions, sampling ratios of detection images of different defect categories respectively; and sampling the original data set respectively according to the sampling ratios of detection images of different defect categories to obtain the plurality of first training data sets.
- The detection method according to claim 8 or 9, wherein the convolutional neural network model comprises a fully connected layer, a supplementary convolutional layer, a batch normalization layer, and a random discard layer; the supplementary convolutional layer is configured to convolve data to be input to the fully connected layer, so that the data convolved by the supplementary convolutional layer meets the input dimension of the fully connected layer; the batch normalization layer is configured to standardize the data to be input to the fully connected layer; and the random discard layer is configured to randomly discard some neural network units in the convolutional neural network model to avoid overfitting; wherein, when the convolutional neural network model is trained, a first algorithm is used to initialize the fully connected layer, a second algorithm is used to regularize the fully connected layer, and a third algorithm is used to initialize the supplementary convolutional layer.
- The detection method according to claim 7, wherein the secondary model is a classifier, and the classifier comprises a plurality of fully connected layers and a normalized exponential function layer.
- The detection method according to claim 7, wherein the defect location recognition sub-model is a target detector.
- An electronic device, comprising: one or more processors; a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the detection method for a display panel according to any one of claims 7 to 12; and one or more I/O interfaces connected between the processors and the storage device, configured to implement information interaction between the processors and the storage device.
- A computer-readable medium storing a computer program which, when executed by a processor, implements the detection method for a display panel according to any one of claims 7 to 12.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/093281 WO2021237682A1 (zh) | 2020-05-29 | 2020-05-29 | 显示面板的检测装置、检测方法、电子装置、可读介质 |
US17/417,487 US11900589B2 (en) | 2020-05-29 | 2020-05-29 | Detection device of display panel and detection method thereof, electronic device and readable medium |
CN202080000865.1A CN114175093A (zh) | 2020-05-29 | 2020-05-29 | 显示面板的检测装置、检测方法、电子装置、可读介质 |
US18/543,121 US20240119584A1 (en) | 2020-05-29 | 2023-12-18 | Detection method, electronic device and non-transitory computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/093281 WO2021237682A1 (zh) | 2020-05-29 | 2020-05-29 | 显示面板的检测装置、检测方法、电子装置、可读介质 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/417,487 A-371-Of-International US11900589B2 (en) | 2020-05-29 | 2020-05-29 | Detection device of display panel and detection method thereof, electronic device and readable medium |
US18/543,121 Continuation US20240119584A1 (en) | 2020-05-29 | 2023-12-18 | Detection method, electronic device and non-transitory computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021237682A1 true WO2021237682A1 (zh) | 2021-12-02 |
Family
ID=78745491
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/093281 WO2021237682A1 (zh) | 2020-05-29 | 2020-05-29 | 显示面板的检测装置、检测方法、电子装置、可读介质 |
Country Status (3)
Country | Link |
---|---|
US (2) | US11900589B2 (zh) |
CN (1) | CN114175093A (zh) |
WO (1) | WO2021237682A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024103004A1 (en) * | 2022-11-10 | 2024-05-16 | Versiti Blood Research Institute Foundation, Inc. | Systems, methods, and media for automatically detecting blood abnormalities using images of individual blood cells |
CN117197054A (zh) * | 2023-08-22 | 2023-12-08 | 盐城工学院 | 一种基于人工智能的显示面板坏点检测方法及系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104749184A (zh) * | 2013-12-31 | 2015-07-01 | 研祥智能科技股份有限公司 | 自动光学检测方法和系统 |
US9092842B2 (en) * | 2011-08-04 | 2015-07-28 | Sharp Laboratories Of America, Inc. | System for defect detection and repair |
CN108846841A (zh) * | 2018-07-02 | 2018-11-20 | 北京百度网讯科技有限公司 | Display screen quality detection method, apparatus, electronic device and storage medium |
CN108961238A (zh) * | 2018-07-02 | 2018-12-07 | 北京百度网讯科技有限公司 | Display screen quality detection method, apparatus, electronic device and storage medium |
CN109064446A (zh) * | 2018-07-02 | 2018-12-21 | 北京百度网讯科技有限公司 | Display screen quality detection method, apparatus, electronic device and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6542830B1 (en) * | 1996-03-19 | 2003-04-01 | Hitachi, Ltd. | Process control system |
US8995747B2 (en) * | 2010-07-29 | 2015-03-31 | Sharp Laboratories Of America, Inc. | Methods, systems and apparatus for defect detection and classification |
US9922269B2 (en) * | 2015-06-05 | 2018-03-20 | Kla-Tencor Corporation | Method and system for iterative defect classification |
EP3336608A1 (en) * | 2016-12-16 | 2018-06-20 | ASML Netherlands B.V. | Method and apparatus for image analysis |
WO2018208791A1 (en) * | 2017-05-08 | 2018-11-15 | Aquifi, Inc. | Systems and methods for inspection and defect detection using 3-d scanning |
US20190318469A1 (en) * | 2018-04-17 | 2019-10-17 | Coherent AI LLC | Defect detection using coherent light illumination and artificial neural network analysis of speckle patterns |
2020
- 2020-05-29 CN CN202080000865.1A patent/CN114175093A/zh active Pending
- 2020-05-29 US US17/417,487 patent/US11900589B2/en active Active
- 2020-05-29 WO PCT/CN2020/093281 patent/WO2021237682A1/zh active Application Filing
2023
- 2023-12-18 US US18/543,121 patent/US20240119584A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9092842B2 (en) * | 2011-08-04 | 2015-07-28 | Sharp Laboratories Of America, Inc. | System for defect detection and repair |
CN104749184A (zh) * | 2013-12-31 | 2015-07-01 | 研祥智能科技股份有限公司 | Automatic optical inspection method and system |
CN108846841A (zh) * | 2018-07-02 | 2018-11-20 | 北京百度网讯科技有限公司 | Display screen quality detection method, apparatus, electronic device and storage medium |
CN108961238A (zh) * | 2018-07-02 | 2018-12-07 | 北京百度网讯科技有限公司 | Display screen quality detection method, apparatus, electronic device and storage medium |
CN109064446A (zh) * | 2018-07-02 | 2018-12-21 | 北京百度网讯科技有限公司 | Display screen quality detection method, apparatus, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US20220343481A1 (en) | 2022-10-27 |
CN114175093A (zh) | 2022-03-11 | Detection device of display panel, detection method, electronic device, readable medium |
US11900589B2 (en) | 2024-02-13 |
US20240119584A1 (en) | 2024-04-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109902732B (zh) | Automatic vehicle classification method and related apparatus | |
CN111310862B (zh) | Deep neural network license plate localization method based on image enhancement in complex environments | |
CN107016413B (zh) | Online tobacco leaf grading method based on a deep learning algorithm | |
CN106875373B (zh) | Mobile phone screen Mura defect detection method based on a convolutional neural network pruning algorithm | |
CN109934805B (zh) | Water pollution detection method based on low-illumination images and neural networks | |
US20240119584A1 (en) | Detection method, electronic device and non-transitory computer-readable storage medium | |
CN114663346A (zh) | Strip steel surface defect detection method based on an improved YOLOv5 network | |
CN112819821B (zh) | Cell nucleus image detection method | |
CN116012291A (zh) | Industrial part image defect detection method and system, electronic device and storage medium | |
CN116843650A (zh) | SMT soldering defect detection method and system combining AOI inspection and deep learning | |
WO2024021461A1 (zh) | Defect detection method and apparatus, device, and storage medium | |
CN114463843A (zh) | Multi-feature fusion fish abnormal behavior detection method based on deep learning | |
CN117011563A (zh) | Cross-domain road damage inspection detection method and system based on semi-supervised federated learning | |
CN116342536A (zh) | Aluminum strip surface defect detection method, system and device based on a lightweight model | |
CN115240259A (zh) | Face detection method and detection system for classroom environments based on a YOLO deep network | |
CN111160100A (zh) | Lightweight deep model aerial vehicle detection method based on sample generation | |
CN114549414A (zh) | Abnormal change detection method and system for track data | |
CN113962980A (zh) | Glass container flaw detection method and system based on improved YOLOv5x | |
CN117690128A (zh) | Embryo cell multinucleation target detection system, method, and computer-readable storage medium | |
CN113313678A (zh) | Automatic sperm morphology analysis method based on multi-scale feature fusion | |
JP7123306B2 (ja) | Image processing device and image processing method | |
Abilash et al. | Currency recognition for the visually impaired people | |
CN115641474A (zh) | Unknown-type defect detection method and apparatus based on an efficient student network | |
CN111582057B (zh) | Face verification method based on local receptive fields | |
CN112070060A (zh) | Age recognition method, and training method and apparatus for an age recognition model | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20937247 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20937247 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.06.2023) |
|