WO2021237682A1 - Detection device and detection method for display panel, electronic device, and readable medium - Google Patents

Detection device and detection method for display panel, electronic device, and readable medium

Info

Publication number
WO2021237682A1
WO2021237682A1 (PCT application PCT/CN2020/093281)
Authority
WO
WIPO (PCT)
Prior art keywords
model
display panel
layer
detection
defect
Prior art date
Application number
PCT/CN2020/093281
Other languages
English (en)
French (fr)
Inventor
刘雍璋
李昭月
柴栋
王洪
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to PCT/CN2020/093281 priority Critical patent/WO2021237682A1/zh
Priority to US17/417,487 priority patent/US11900589B2/en
Priority to CN202080000865.1A priority patent/CN114175093A/zh
Publication of WO2021237682A1 publication Critical patent/WO2021237682A1/zh
Priority to US18/543,121 priority patent/US20240119584A1/en

Classifications

    • G06T7/001: Industrial image inspection using an image reference approach
    • G06T7/0004: Industrial image inspection
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/809: Fusion of classification results, e.g. where the classifiers operate on the same input data
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G01N29/4445: Classification of defects
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30108: Industrial image inspection
    • G06T2207/30121: CRT, LCD or plasma display

Definitions

  • the present disclosure relates to the field of image recognition, and in particular to a detection device for a display panel, a detection method for a display panel, an electronic device, and a computer-readable medium.
  • The detection of display panel defects is mainly carried out by automated optical inspection (AOI) imaging equipment on the production line, which photographs positions of the display panel that may be defective; the category and location of the display panel defects are then identified based on the captured pictures.
  • the embodiments of the present disclosure provide a detection device for a display panel, a detection method for a display panel, an electronic device, and a computer-readable medium.
  • a detection device for a display panel includes:
  • An image receiver for receiving the inspection image of the display panel to be inspected
  • the detector is configured to input the detection image of the display panel to be detected into a pre-built detection model for detecting the display panel, and use the detection model to generate a detection result;
  • the detection model includes:
  • the defect category recognition sub-model is used to recognize the defect category of the display panel to be inspected
  • the defect category recognition sub-model includes multiple base models and a secondary model
  • the multiple base models are used to initially classify the defects of the display panel to be inspected
  • the secondary model is used for final classification of the defects of the display panel to be inspected according to the input data obtained by integrating the output data of the respective base models.
  • In some embodiments, the multiple base models are the same convolutional neural network model, and the different base models are obtained by training the convolutional neural network model with multiple first training data sets that satisfy different probability distributions.
  • The multiple first training data sets include sample sets obtained by sampling the original data set according to different predetermined sampling ratios; the different predetermined sampling ratios are sampling ratios of detection images of different defect categories determined according to different probability distributions, and the original data set includes detection images of multiple different display panels with known defects.
  • the convolutional neural network model includes a fully connected layer, a supplementary convolution layer, a batch normalization layer, and a random discard layer;
  • the supplementary convolution layer is used to convolve data to be input to the fully connected layer, so that the data convolved by the supplementary convolution layer meets the input dimensions of the fully connected layer;
  • the batch standardization layer is used to perform standardization processing on the data to be input to the fully connected layer;
  • the random discarding layer is used to randomly discard some neural network units in the convolutional neural network model to avoid overfitting
  • When training the convolutional neural network model, a first algorithm is used to initialize the fully connected layer, a second algorithm is used to regularize the fully connected layer, and a third algorithm is used to initialize the supplementary convolutional layer.
  • the secondary model is a classifier, which includes a plurality of fully connected layers and a normalized exponential function layer.
  • the defect location recognition sub-model is a target detector.
  • embodiments of the present disclosure provide a method for detecting a display panel, including:
  • the detection model includes:
  • the defect category recognition sub-model is used to recognize the defect category of the display panel to be inspected
  • the defect category recognition sub-model includes multiple base models and a secondary model
  • the multiple base models are used to respectively initially classify the defects of the display panel to be inspected
  • the secondary model is used for final classification of the defects of the display panel to be inspected according to the input data obtained by integrating the output data of the respective base models.
  • In some embodiments, the multiple base models are the same convolutional neural network model, and the different base models are obtained by training the convolutional neural network model with multiple first training data sets that satisfy different probability distributions.
  • the step of generating a plurality of said first training data sets includes:
  • generating an original data set, the original data set including multiple inspection images of different display panels with known defects;
  • sampling the original data set respectively to obtain the multiple first training data sets.
  • the convolutional neural network model includes a fully connected layer, a supplementary convolution layer, a batch normalization layer, and a random discard layer;
  • the supplementary convolution layer is used to convolve data to be input to the fully connected layer, so that the data convolved by the supplementary convolution layer meets the input dimensions of the fully connected layer;
  • the batch standardization layer is used to perform standardization processing on the data to be input to the fully connected layer;
  • the random discarding layer is used to randomly discard some neural network units in the convolutional neural network model to avoid overfitting
  • When training the convolutional neural network model, a first algorithm is used to initialize the fully connected layer, a second algorithm is used to regularize the fully connected layer, and a third algorithm is used to initialize the supplementary convolutional layer.
  • the secondary model is a classifier
  • the classifier includes a plurality of fully connected layers and a normalized exponential function layer.
  • the defect location recognition sub-model is a target detector.
  • an electronic device including:
  • one or more processors;
  • a storage device having one or more programs stored thereon, wherein when the one or more programs are executed by the one or more processors, the one or more processors implement any one of the above display panel detection methods;
  • One or more I/O interfaces are connected between the processor and the memory, and are configured to implement information interaction between the processor and the memory.
  • the embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, and when the program is executed by a processor, any one of the foregoing display panel detection methods is implemented.
  • FIG. 1 is a block diagram of the composition of a detection device for a display panel in an embodiment of the disclosure
  • FIG. 2 is a structural diagram of an alternative implementation of the VGG16 model in the embodiments of the disclosure.
  • FIG. 3 is a structural diagram of a RetinaNet target detection model in an embodiment of the disclosure.
  • FIG. 4 is a block diagram of another display panel detection device in an embodiment of the disclosure.
  • FIG. 5 is a block diagram of the composition of yet another detection device for a display panel in an embodiment of the disclosure.
  • FIG. 6 is a flowchart of a method for detecting a display panel in an embodiment of the disclosure
  • FIG. 7 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the present disclosure.
  • FIG. 8 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the present disclosure
  • FIG. 9 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the disclosure.
  • FIG. 10 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the disclosure.
  • FIG. 11 is a flowchart of some steps in another method for detecting a display panel in an embodiment of the disclosure.
  • FIG. 12 is a block diagram of an electronic device provided by an embodiment of the disclosure.
  • FIG. 13 is a block diagram of the composition of a computer-readable medium provided by an embodiment of the disclosure.
  • an embodiment of the present disclosure provides a detection device 100 for a display panel, including:
  • the image receiver 110 is used to receive the inspection image of the display panel to be inspected
  • the detector 120 is configured to input the detection image of the display panel to be detected into a pre-built detection model for detecting the display panel, and use the detection model to generate a detection result.
  • the inspection image of the display panel to be inspected includes a picture collected by taking a picture of the display panel with the image acquisition device.
  • for example, an AOI detection image collected by photographing the display panel with an AOI device; an AOI device is a device that scans the display panel and collects images based on optical principles in order to detect the display panel.
  • the image receiver 110 receives the inspection image of the display panel to be inspected from the imaging device, for example, receives the inspection image from the AOI device.
  • the inspection of the display panel to be inspected includes identifying the type of defect of the display panel to be inspected and marking the defect position of the display panel to be inspected.
  • the types of defects include residual, missing, foreign matter, heterochromatic, etc., which are not particularly limited in the embodiment of the present disclosure.
  • an inspection model for inspecting the display panel is constructed in advance, and the inspection image of the display panel to be inspected is input into the inspection model to determine the defect category and location of the display panel to be inspected.
  • the automatic detection of the display panel to be detected is completed.
  • the detection model is constructed after training based on detection images of a large number of different display panels.
  • the detection images of different display panels used for training and constructing the detection model can be collected from the same production line or from different production lines.
  • When the detection model is constructed by training on detection images of different display panels collected from the same production line, the detection device 100 provided by the embodiment of the present disclosure has high detection accuracy for the display panels produced by that production line and can be used to improve the product quality of a specific production line.
  • When the detection model is constructed by training on detection images of different display panels collected from different production lines, the detection device 100 provided by the embodiment of the present disclosure has high detection accuracy for the display panels produced by the different production lines, which is conducive to mass production.
  • a detection model for detecting defects of the display panel is constructed in advance.
  • the detection model is constructed by training on detection images of a large number of different display panels;
  • When the detection device is used to detect the display panel, it can adapt to the constantly changing data distribution on the production line and achieves high detection accuracy for different production lines and different types of display panel defects.
  • The display panel inspection device provided by the embodiments of the present disclosure ensures the accuracy of the inspection, reduces the cost of display panel inspection, improves the inspection efficiency, and helps improve the production quality and production efficiency of display panels.
  • the detection of the display panel includes two parts: identifying the defect category of the display panel and identifying the defect location of the display panel.
  • the detection model in the embodiments of the present disclosure includes:
  • the defect category recognition sub-model is used to recognize the defect category of the display panel to be inspected
  • the defect location identification sub-model is used to identify the defect location of the display panel to be inspected.
  • In the embodiments of the present disclosure, the defect category recognition sub-model and the defect location recognition sub-model can be combined arbitrarily.
  • For example, the defect category of the display panel may be identified first and then the defect location; or the defect location may be identified first and then the defect category; or the defect category and the defect location may be identified separately and the results then integrated.
  • the embodiments of the present disclosure do not specifically limit this.
  • In the embodiments of the present disclosure, an ensemble learning algorithm is used to construct the defect category recognition sub-model.
  • Ensemble learning refers to training a series of learners to obtain several individual learners, and then integrating the individual learners through a combination strategy to obtain a strong learner.
  • The basic idea is that when each individual learner has a preference, that is, when each individual learner performs better only in certain aspects, integrating the individual learners both ensures the accuracy of the strong learner and improves its generalization performance.
  • In the embodiments of the present disclosure, the defect category recognition sub-model is constructed according to an ensemble learning algorithm: multiple individual learners are constructed according to the various forms of defects and the data distribution of each kind of defect, and the multiple individual learners are then integrated to obtain the defect category recognition sub-model, which can adapt to the constantly changing data distribution on the production line and improves the detection accuracy of display panel defects.
  • In some embodiments, the ensemble learning algorithm adopted in the embodiments of the present disclosure is a stacking algorithm.
  • Take a stacking algorithm with a two-layer learner structure as an example:
  • the first layer includes multiple base models
  • the second layer includes a secondary model.
  • the main idea of the stacking algorithm is to train multiple base models separately, and then merge the prediction results output by each base model as new data, as the input of the secondary model, and the secondary model gives the final classification result.
  • the defect category recognition sub-model includes a plurality of base models and a secondary model
  • the multiple base models are used to respectively initially classify the defects of the display panel to be inspected
  • the secondary model is used for final classification of the defects of the display panel to be inspected according to the input data obtained by integrating the output data of the respective base models.
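  • To make the stacking idea above concrete, the following is a minimal, generic sketch in Python using scikit-learn estimators as stand-in learners; the base models in this disclosure are actually convolutional neural networks (see the VGG16 variant described later), so the specific estimators, feature shapes and hyperparameters below are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def fit_stacking(X_base, y_base, X_meta, y_meta):
    # First layer: several base models, each trained independently.
    base_models = [
        RandomForestClassifier(n_estimators=100, random_state=0),
        SVC(probability=True, random_state=0),
        LogisticRegression(max_iter=1000),
    ]
    for model in base_models:
        model.fit(X_base, y_base)

    # Second layer: merge the base models' predictions into new features
    # and train the secondary model on them.
    meta_features = np.hstack([m.predict_proba(X_meta) for m in base_models])
    secondary = LogisticRegression(max_iter=1000)
    secondary.fit(meta_features, y_meta)
    return base_models, secondary

def predict_stacking(base_models, secondary, X):
    # The secondary model gives the final classification.
    meta_features = np.hstack([m.predict_proba(X) for m in base_models])
    return secondary.predict(meta_features)
```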
  • multiple base models may have different neural network structures, or may have the same neural network structure, which is not particularly limited in the present disclosure.
  • the multiple base models are obtained by respectively training the same neural network using data with different probability distributions.
  • Different probability distributions correspond to the data distribution of various forms of defects on the production line, for example, including average distribution, exponential distribution, bootstrap distribution, original distribution, binomial distribution, Gaussian distribution, etc. No special restrictions.
  • the embodiments of the present disclosure use data with different probability distributions to train the same neural network structure respectively, so that the constructed defect category recognition sub-model can adapt to the constantly changing data distribution of display panel defects on the production line.
  • In addition, having multiple base models with the same neural network structure facilitates selecting the neural network structure with the best performance and facilitates subsequent optimization and debugging, thereby further improving the detection accuracy of the defect category recognition sub-model.
  • the base model is a convolutional neural network model
  • different base models are obtained by training the convolutional neural network model with multiple first training data sets that satisfy different probability distributions.
  • In some embodiments, the multiple first training data sets include sample sets obtained by sampling the original data set according to different predetermined sampling ratios; the different predetermined sampling ratios are sampling ratios of detection images of different defect categories determined according to different probability distributions, and the original data set includes detection images of multiple different display panels with known defects.
  • How the first training data sets are obtained is further explained below.
  • In one case, the same number of detection images of each defect category are taken from the original data set as the first training data set, that is, the proportions of detection images of different defect categories in the first training data set are consistent;
  • in another case, the proportion of the detection images of each defect category in the original data set is calculated and squared to obtain a new proportion; then, for each defect category, a corresponding number of detection images are taken from the original data set according to the new proportion, and the detection images taken for all defect categories are used as the first training data set;
  • in another case, the original data set itself is used as the first training data set.
  • After sampling, the remaining detection images are used as the verification set; for the original distribution, the original data set is divided into a training data set and a validation set at a ratio of 9:1.
  • The data processing unit 131 is further configured to divide the original data set, for example into three parts (training data, verification data, and test data) according to a predetermined ratio, where the verification data is used for the secondary model and the test data is used for evaluating the final effect.
  • The ratio of training data, verification data, and test data is not particularly limited; for example, it is 8:1:1. It should also be noted that, after the original data set is divided, the data processing unit 131 generates the multiple first training data sets by sampling the divided training data (see the sketch below).
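  • A sketch of this splitting and per-distribution sampling is given below; `samples` is assumed to be a list of (image_path, defect_category) pairs, and the "squared proportion" rule follows the wording above. The exact transformation used for each probability distribution is not specified here, so the rules in the sketch are assumptions.

```python
import random
from collections import defaultdict

def split_original(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split the original data set into training/validation/test parts (e.g. 8:1:1)."""
    rng = random.Random(seed)
    samples = samples[:]
    rng.shuffle(samples)
    n = len(samples)
    n_train, n_val = int(n * ratios[0]), int(n * ratios[1])
    return samples[:n_train], samples[n_train:n_train + n_val], samples[n_train + n_val:]

def sample_by_distribution(train_samples, mode, seed=0):
    """Draw one first training data set from the already-split training data."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for path, label in train_samples:
        by_label[label].append((path, label))

    if mode == "original":
        return list(train_samples)                   # keep the original distribution
    if mode == "average":
        k = min(len(v) for v in by_label.values())   # same count per defect category
        return [s for v in by_label.values() for s in rng.sample(v, k)]
    if mode == "squared":
        total = len(train_samples)
        weights = {c: (len(v) / total) ** 2 for c, v in by_label.items()}
        norm = sum(weights.values())
        out = []
        for c, v in by_label.items():
            k = max(1, int(total * weights[c] / norm))
            out.extend(rng.choices(v, k=k))          # sample with replacement
        return out
    raise ValueError(mode)
```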
  • the convolutional neural network model is not particularly limited.
  • For example, the convolutional neural network model may be any one of a deep residual network (ResNet), a densely connected convolutional network (DenseNet), and a VGG network.
  • the inventors of the embodiments of the present disclosure have discovered that the VGG model has better performance than other convolutional neural network models when constructing the defect category recognition sub-model.
  • the VGG model is a convolutional neural network
  • the VGG16 model is a VGG model with a 16-layer network structure.
  • the VGG16 standard model has 13 convolutional layers and 3 fully connected layers.
  • the conventional convolutional layer described in the embodiment of the present disclosure refers to the original 13 convolutional layers in the VGG16 standard model. In the embodiment of the present disclosure, the VGG16 model is improved.
  • Specifically, a batch normalization (BN) layer is added.
  • In the VGG16 standard model the input image size is 224x224, whereas in this disclosure the detection images are scaled to a larger size (600x600, see the preprocessing described below), so a supplementary convolutional layer is added so that the output of the supplementary convolutional layer meets the input dimension of the fully connected layers of the VGG16 standard model.
  • A random dropout layer is also added.
  • The random dropout layer is a dropout layer: during training of the deep learning network, part of the neural network units are temporarily discarded from the network with a certain probability, thereby effectively alleviating overfitting.
  • the convolutional neural network model includes a VGG16 model
  • the VGG16 model includes:
  • the batch standardization layer is used to standardize the data to be input to the fully connected layer of the VGG16 model
  • the supplementary convolution layer is used to convolve the data to be input to the fully connected layer of the VGG16 model, so that the data convolved by the supplementary convolution layer meets the input dimensions of the fully connected layer of the VGG16 model;
  • a randomly discarded layer is used to randomly discard some neural network units in the VGG16 model to avoid overfitting.
  • FIG. 2 is a structural diagram of an alternative implementation of the VGG16 model in an embodiment of the present disclosure.
  • the improved VGG16 model of the present disclosure includes 13 conventional convolutional layers, 1 supplementary convolutional layer, and 3 fully connected layers.
  • the max pooling layer 1 is used for max pooling, which takes the point with the largest value within the local receptive field;
  • the maximum pooling layer 2 is used for maximum pooling
  • the maximum pooling layer 3 is used for maximum pooling
  • the maximum pooling layer 4 is used for maximum pooling
  • the maximum pooling layer 5 is used for maximum pooling
  • When training the VGG16 model, the glorot algorithm is used for initialization in the fully connected layers of the VGG16 model, and the L2 regularization algorithm is used for regularization to prevent overfitting; the glorot algorithm is also used for initialization in the supplementary convolutional layer.
  • In some embodiments, when training the convolutional neural network model, the first algorithm is used to initialize the fully connected layer, the second algorithm is used to regularize the fully connected layer, and the third algorithm is used to initialize the supplementary convolutional layer.
  • the first algorithm is a glorot algorithm
  • the second algorithm is an L2 regularization algorithm
  • the third algorithm is a glorot algorithm
  • Regularization refers to constraining, adjusting or shrinking the coefficient estimates towards zero to control the complexity of the model and reduce overfitting. Depending on the penalty term used, regularization is divided into L1 regularization and L2 regularization.
  • the structure of the improved VGG16 model after the last conventional convolutional layer is as follows:
  • Supplementary convolutional layer (initialized using glorot) -> BN layer -> flattening layer -> dropout layer -> fully connected layer (initialized using glorot + L2 regularization) -> BN layer -> dropout layer -> fully connected layer (use glorot initialization + L2 regularization) -> BN layer -> fully connected layer (use glorot initialization + L2 regularization) (softmax).
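  • The following tf.keras sketch reproduces this tail structure after the last conventional convolutional layer; the filter count, kernel size, stride, fully connected widths, dropout rate and L2 strength are not stated above, so the concrete numbers below are assumptions.

```python
from tensorflow.keras import layers, regularizers

def vgg16_tail(conv_features, num_classes, l2=1e-4, drop=0.5):
    """conv_features: output tensor of the last conventional VGG16 convolutional block."""
    # Supplementary convolutional layer, glorot-initialized, shrinking the larger
    # feature map so that it matches the fully connected layers' input dimension.
    x = layers.Conv2D(512, 3, strides=2, padding="valid",
                      kernel_initializer="glorot_uniform",
                      name="supplementary_conv")(conv_features)
    x = layers.BatchNormalization()(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(drop)(x)
    x = layers.Dense(4096, activation="relu",
                     kernel_initializer="glorot_uniform",
                     kernel_regularizer=regularizers.l2(l2))(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(drop)(x)
    x = layers.Dense(4096, activation="relu",
                     kernel_initializer="glorot_uniform",
                     kernel_regularizer=regularizers.l2(l2))(x)
    x = layers.BatchNormalization()(x)
    # Final fully connected layer with softmax output.
    return layers.Dense(num_classes, activation="softmax",
                        kernel_initializer="glorot_uniform",
                        kernel_regularizer=regularizers.l2(l2))(x)
```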
  • the secondary model is a classifier
  • the classifier may be a support vector machine (SVM) or a multi-class logistic regression classifier.
  • the classifier is a neural network including a plurality of fully connected layers and a normalized exponential function layer.
  • Softmax is the function used in multinomial logistic regression: it maps its input to real numbers between 0 and 1, and the output values represent the probability of each category.
  • softmax can be used as a parameter of the fully connected layer, or can be used as a separate layer after the fully connected layer, which is not particularly limited in the embodiment of the present disclosure.
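  • As a minimal numeric illustration of the normalized exponential function (softmax) described above:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))        # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))   # -> approx. [0.659, 0.242, 0.099]
```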
  • the classifier includes 2 fully connected layers and a normalized exponential function layer.
  • the output result of the classifier is an n×1 dimensional vector, where n is the number of defect categories of the display panel.
  • Each element is a real number between 0 and 1, each element corresponds to one defect category of the display panel, and the value of each element represents the probability of the corresponding defect category.
  • When the inspection device 100 inspects the display panel to be inspected, it determines the defect category corresponding to the element with the largest value in the n×1 dimensional vector output by the detection model constructed in the inspection device 100 as the defect category of the display panel currently being inspected.
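  • A minimal tf.keras sketch of such a classifier (two fully connected layers followed by a softmax layer) and of reading the defect category from the n×1 output is shown below; the hidden width and the 4m input dimension are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

def build_secondary_model(input_dim, num_classes):
    # input_dim would be 4m when four base models each contribute an m x 1 vector.
    return models.Sequential([
        layers.Dense(128, activation="relu", input_shape=(input_dim,)),
        layers.Dense(num_classes),
        layers.Softmax(),                 # n-dimensional probability vector
    ])

# Reading the result: the defect category is the index of the largest element.
# probs = secondary_model.predict(meta_features)        # shape (batch, n)
# defect_category = np.argmax(probs, axis=1)
```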
  • the secondary model performs a final classification on the defect of the display panel to be inspected based on the input data obtained by integrating the output data of the respective base models.
  • the embodiment of the present disclosure does not specifically limit how to integrate the output data of the multiple base models to obtain the input data.
  • the output vectors of each base model are connected to generate a new vector as the input data. For example, suppose that the defect category recognition sub-model includes 4 base models, and the 4 base models respectively correspond to 4 probability distributions.
  • The penultimate output of each base model, that is, the output of the second fully connected layer, is an m×1 dimensional vector;
  • the 4 m×1 dimensional vectors are connected to obtain a 4m×1 dimensional vector;
  • the 4m×1 dimensional vector is used as the input data of the secondary model.
  • the data obtained by integrating the output data of the multiple base models is stored in the hdf5 format.
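  • The sketch below shows one way to build this integrated input and store it in hdf5 format, extracting the penultimate (second fully connected) layer output of each trained base model; the layer index, file path and dataset name are assumptions, not the disclosure's actual code.

```python
import numpy as np
import h5py
from tensorflow.keras.models import Model

def penultimate_extractor(base_model):
    # Expose the output of the second fully connected layer. Index -2 assumes the
    # final softmax layer is last; adjust to the real architecture if it differs.
    return Model(inputs=base_model.input, outputs=base_model.layers[-2].output)

def build_secondary_inputs(base_models, images, out_path="meta_features.hdf5"):
    extractors = [penultimate_extractor(m) for m in base_models]
    feats = [e.predict(images) for e in extractors]        # each of shape (N, m)
    meta = np.concatenate(feats, axis=1)                   # (N, 4m) for 4 base models
    with h5py.File(out_path, "w") as f:
        f.create_dataset("meta_features", data=meta)       # stored in hdf5 format
    return meta
```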
  • the defect location recognition sub-model is a target detector.
  • In some embodiments, the target detector includes a retinal net (RetinaNet) target detection model.
  • Figure 3 shows the network structure of the RetinaNet model.
  • the RetinaNet model will mark the location and type of display panel defects in the output.
  • It should be noted that when the RetinaNet model is trained, the normal, black, and fuzzy images in the original data set are excluded from the detection images of the defect categories; in addition, all defect categories are merged into one class called foreground, so that the RetinaNet model only distinguishes foreground from background during training, thereby focusing on identifying defect positions without distinguishing defect categories.
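  • A simple sketch of this annotation preparation is shown below: excluded image types are dropped and every remaining defect category is relabelled as a single "foreground" class before training the target detector. The annotation format (a list of dicts with an image path, category and bounding box) is an assumption.

```python
EXCLUDED = {"normal", "black", "fuzzy"}

def to_foreground_annotations(annotations):
    """annotations: [{"image": path, "category": str, "bbox": [x1, y1, x2, y2]}, ...]"""
    out = []
    for ann in annotations:
        if ann["category"] in EXCLUDED:
            continue                          # drop excluded detection images
        out.append({"image": ann["image"],
                    "category": "foreground",  # all defect categories merged into one
                    "bbox": ann["bbox"]})
    return out
```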
  • the detection device 100 further includes a model builder 130, and the model builder 130 includes:
  • the data processing unit 131 is configured to obtain an original data set, the original data set including a plurality of inspection images of different display panels with known defects;
  • the model construction unit 132 is configured to construct the detection model according to the original data set.
  • The detection images of different display panels in the original data set can be collected from the same production line or from different production lines. It should be noted that, in order to train the detection model, in the embodiment of the present disclosure the display panel defects in the detection images constituting the original data set are identified and marked in advance, and the marked content includes the position and the category of the display panel defects.
  • the process of training the convolutional neural network to obtain multiple base models includes:
  • the data processing unit 131 is configured to correspond to each of the multiple probability distributions, and respectively determine the sampling ratio of the detection images of different defect categories;
  • the data processing unit 131 is further configured to sample the original data sets according to the sampling ratios of the detected images of different defect categories to obtain multiple first training data sets with different probability distributions;
  • the model construction unit 132 is configured to use the multiple first training data sets to train the convolutional neural network model, and generate multiple base models respectively corresponding to different probability distributions.
  • In some embodiments, before the model construction unit 132 trains the convolutional neural network model, the data processing unit 131 also preprocesses the detection images in the first training data sets to further speed up the training of the model.
  • The above preprocessing specifically includes:
  • standardization, which refers to further unifying the size and format of the detection images in the first training data set, for example scaling the detection images to 600x600; and
  • normalization, which refers to performing dimensionless processing on the detection images in the first training data set to reduce their magnitude and speed up reading of the detection images, for example subtracting the pixel average of the detection image from each pixel in the detection image to normalize the pixel values of the detection image.
  • In the embodiments of the present disclosure, the above interpolation algorithm is not particularly limited.
  • For example, the interpolation algorithm may be any one of a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, a bicubic interpolation algorithm, and a Lanczos (LANCZOS) interpolation algorithm.
  • The inventors of the present disclosure found that the bicubic interpolation algorithm and the LANCZOS algorithm perform better for image scaling than the other interpolation algorithms, and the LANCZOS algorithm runs faster.
  • the detected image preprocessed by the data processing unit 131 is stored in the hdf5 format, thereby further improving the reading rate.
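  • The preprocessing above (scaling to 600x600 with LANCZOS resampling, subtracting the pixel mean, then storing in hdf5 for faster reading) could look like the following sketch; the file path and dataset name are assumptions.

```python
import numpy as np
import h5py
from PIL import Image

def preprocess(image_paths, out_path="train_images.hdf5", size=(600, 600)):
    arrays = []
    for path in image_paths:
        img = Image.open(path).convert("RGB").resize(size, Image.LANCZOS)
        arr = np.array(img, dtype=np.float32)
        arr -= arr.mean()                 # subtract the image's own pixel average
        arrays.append(arr)
    data = np.stack(arrays)               # shape (N, 600, 600, 3)
    with h5py.File(out_path, "w") as f:
        f.create_dataset("images", data=data)   # stored in hdf5 format
    return data
```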
  • the model construction unit 132 also uses an optimization algorithm to perform optimization, so as to accelerate the convergence rate of training the convolutional neural network model.
  • the embodiment of the present disclosure does not specifically limit the above optimization algorithm.
  • For example, it may be the stochastic gradient descent (SGD) algorithm or the adaptive learning rate (Adadelta) algorithm.
  • the inventors of the embodiments of the present disclosure have discovered that the SGD algorithm has better performance.
  • For example, when using the SGD optimizer for optimization, the learning rate is set to 0.001, and the momentum gradient descent (momentum) algorithm and the Nesterov gradient acceleration algorithm are used to speed up the convergence rate of the model.
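  • In tf.keras this optimizer configuration could be expressed as follows; the momentum value (0.9) and the loss function are assumptions, since they are not stated above.

```python
from tensorflow.keras.optimizers import SGD

# SGD with learning rate 0.001, momentum and Nesterov gradient acceleration.
optimizer = SGD(learning_rate=0.001, momentum=0.9, nesterov=True)

# model.compile(optimizer=optimizer,
#               loss="categorical_crossentropy",
#               metrics=["accuracy"])
```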
  • In some embodiments, the process of training the classifier to obtain the secondary model includes:
  • the data processing unit 131 is configured to integrate the output data produced by the multiple base models during the process of training the convolutional neural network model to obtain the multiple base models, so as to generate a second training set;
  • the model construction unit 132 is configured to train the classifier according to the second training set to obtain the secondary model.
  • the process of constructing the defect location recognition sub-model includes:
  • the model construction unit 132 is configured to train the target detection model according to the original data set to obtain the defect location recognition sub-model.
  • the detection device 100 further includes:
  • the image obtainer 140 is used to obtain a detection image of the display panel.
  • In some embodiments, the image obtainer 140 may be an image capture device, such as an AOI device; that is, the image capture device may be used as a part of the detection apparatus 100 provided in the embodiment of the present disclosure.
  • In this case, the image receiver 110 receives the detection image acquired by the image obtainer 140.
  • The detection images of different display panels with known defects that the model builder 130 uses to construct the detection model may also be acquired by the image obtainer 140.
  • a method for detecting a display panel including:
  • step S100 the inspection image of the display panel to be inspected is input into a pre-built inspection model for inspecting the display panel, and the display panel to be inspected is inspected;
  • the detection model includes:
  • the defect category recognition sub-model is used to recognize the defect category of the display panel to be inspected
  • the defect category recognition sub-model includes multiple base models and a secondary model
  • the multiple base models are used to respectively initially classify the defects of the display panel to be inspected
  • the secondary model is used for final classification of the defects of the display panel to be inspected according to the input data obtained by integrating the output data of the respective base models.
  • a detection model for detecting the display panel is constructed in advance after training.
  • The detection image of the display panel to be detected is input into the pre-built detection model to obtain the detection result, thereby realizing automatic detection of the display panel.
  • the inspection image of the display panel to be inspected includes a picture collected by taking a picture of the display panel with the image acquisition device.
  • for example, an AOI detection image collected by photographing the display panel with an AOI device; an AOI device is a device that scans the display panel and collects images based on optical principles in order to detect the display panel.
  • the inspection of the display panel to be inspected includes identifying the type of defect of the display panel to be inspected and marking the defect position of the display panel to be inspected.
  • the types of defects include residual, missing, foreign matter, heterochromatic, etc., which are not particularly limited in the embodiment of the present disclosure.
  • In the display panel inspection method provided by the embodiment of the present disclosure, the display panel is automatically inspected using a pre-built inspection model, which can adapt to the constantly changing data distribution on the production line and achieves high detection accuracy for different production lines and different types of display panel defects.
  • The display panel inspection method provided by the embodiments of the present disclosure ensures the accuracy of the inspection, reduces the cost of display panel inspection, improves the inspection efficiency, and helps improve the production quality and production efficiency of display panels.
  • the detection of the display panel includes two parts: identifying the defect category of the display panel and identifying the defect location of the display panel.
  • the detection model includes a defect category recognition sub-model and a defect location recognition sub-model;
  • In the embodiments of the present disclosure, an ensemble learning algorithm is used to construct the defect category recognition sub-model.
  • In some embodiments, the multiple base models are the same convolutional neural network model, and the different base models are obtained by training the convolutional neural network model with multiple first training data sets that satisfy different probability distributions.
  • the step of generating a plurality of the first training data sets includes:
  • step S410 an original data set is generated, the original data set including a plurality of inspection images of different display panels with known defects;
  • step S420 corresponding to a variety of probability distributions, respectively determine the sampling ratios of detected images of different defect categories
  • step S430 the original data sets are respectively sampled according to the sampling ratios of the detected images of different defect categories to obtain the multiple first training data sets.
  • the convolutional neural network model is not particularly limited.
  • For example, the convolutional neural network model may be any one of a deep residual network (ResNet), a densely connected convolutional network (DenseNet), and a VGG network.
  • the inventors of the embodiments of the present disclosure have discovered that the VGG model has better performance than other convolutional neural network models when constructing the defect category recognition sub-model.
  • the embodiment of the present disclosure uses the VGG16 model to construct the base model.
  • the VGG16 standard model has 13 convolutional layers and 3 fully connected layers.
  • the conventional convolutional layer described in the embodiment of the present disclosure refers to the original 13 convolutional layers in the VGG16 standard model.
  • the VGG16 model is improved.
  • Specifically, a batch normalization (BN) layer is added.
  • In the VGG16 standard model the input image size is 224x224, whereas in this disclosure the detection images are scaled to a larger size (600x600, see the preprocessing described below), so a supplementary convolutional layer is added so that the output of the supplementary convolutional layer meets the input dimension of the fully connected layers of the VGG16 standard model.
  • A random dropout layer is also added.
  • The random dropout layer is a dropout layer: during training of the deep learning network, part of the neural network units are temporarily discarded from the network with a certain probability, thereby effectively alleviating overfitting.
  • When training the VGG16 model, the glorot algorithm is used for initialization in the fully connected layers of the VGG16 model, and the L2 regularization algorithm is used for regularization to prevent overfitting; the glorot algorithm is also used for initialization in the supplementary convolutional layer.
  • the convolutional neural network model includes a fully connected layer, a supplementary convolution layer, a batch normalization layer, and a random discard layer;
  • the supplementary convolution layer is used to convolve data to be input to the fully connected layer, so that the data convolved by the supplementary convolution layer meets the input dimensions of the fully connected layer;
  • the batch standardization layer is used to perform standardization processing on the data to be input to the fully connected layer;
  • the random discarding layer is used to randomly discard some neural network units in the convolutional neural network model to avoid overfitting
  • When training the convolutional neural network model, the first algorithm is used to initialize the fully connected layer, the second algorithm is used to regularize the fully connected layer, and the third algorithm is used to initialize the supplementary convolutional layer.
  • the first algorithm is a glorot algorithm
  • the second algorithm is an L2 regularization algorithm
  • the third algorithm is a glorot algorithm
  • the secondary model is a classifier
  • the classifier may be a support vector machine (SVM) or a multi-class logistic regression classifier.
  • the classifier is a neural network including a plurality of fully connected layers and a normalized exponential function layer.
  • Softmax is a logistic regression model that can map the input to a real number between 0-1, and the output real number between 0-1 represents the probability of each category being taken.
  • softmax can be used as a parameter of the fully connected layer, or can be used as a separate layer after the fully connected layer, which is not particularly limited in the embodiment of the present disclosure.
  • the classifier includes 2 fully connected layers and a normalized exponential function layer.
  • the output result of the classifier is an n×1 dimensional vector, where n is the number of defect categories of the display panel.
  • Each element is a real number between 0 and 1, each element corresponds to one defect category of the display panel, and the value of each element represents the probability of the corresponding defect category.
  • The defect category corresponding to the element with the largest value is determined as the defect category of the display panel currently being inspected.
  • the defect location recognition sub-model is a target detector.
  • In some embodiments, the target detector includes a retinal net (RetinaNet) target detection model.
  • Figure 3 shows the network structure of the RetinaNet model.
  • the RetinaNet model will mark the location and type of display panel defects in the output.
  • It should be noted that when the RetinaNet model is trained, the normal, black, and fuzzy images in the original data set are excluded from the detection images of the defect categories; in addition, all defect categories are merged into one class called foreground, so that the RetinaNet model only distinguishes foreground from background during training, thereby focusing on identifying defect positions without distinguishing defect categories.
  • the detection method provided in an embodiment of the present disclosure further includes:
  • step S200 the inspection model for inspecting the display panel is constructed.
  • step S200 includes:
  • step S210 an original data set is generated, the original data set including a plurality of inspection images of different display panels with known defects;
  • step S220 the detection model is constructed according to the original data set.
  • step S220 includes:
  • step S221 the original data set is sampled according to multiple probability distributions to obtain multiple first training data sets with different probability distributions
  • step S222 training a convolutional neural network model according to the first training data set to generate a plurality of the base models respectively corresponding to different probability distributions;
  • step S223 train the classifier according to the second training data set obtained by integrating the output data of each of the base models to generate the secondary model;
  • step S224 the target detector is trained according to the original data set to generate the defect location recognition sub-model.
  • the embodiment of the present disclosure does not specifically limit how to integrate the output data of the multiple base models to obtain the second training data set.
  • the output vectors of each base model are connected to generate a new vector as a sample of the second training data set.
  • the defect category recognition sub-model includes 4 base models, and the 4 base models respectively correspond to 4 probability distributions.
  • The penultimate output of each base model, that is, the output of the second fully connected layer, is an m×1 dimensional vector.
  • The four m×1 dimensional vectors are connected to obtain a 4m×1 dimensional vector, and the 4m×1 dimensional vector is used as the input data of the secondary model.
  • the process of training the convolutional neural network to obtain multiple base models includes:
  • each of the multiple probability distributions respectively determine the sampling ratio of the detection images of different defect categories
  • the convolutional neural network model is trained by using the multiple first training data sets to generate multiple base models respectively corresponding to different probability distributions.
  • After sampling, the remaining detection images are used as the verification set; for the original distribution, the original data set is divided into a training data set and a validation set at a ratio of 9:1.
  • the original data sets are further divided.
  • training data, verification data, and test data are divided into three parts according to a predetermined ratio.
  • the verification data is used for the secondary model
  • the test data is used for evaluating the final effect.
  • the ratio of training data, verification data, and test data is not particularly limited.
  • the ratio of training data, verification data, and test data is 8:1:1.
  • multiple first training data sets are generated by sampling the divided training data.
  • the detection images in the first training data set are also preprocessed to further increase the training rate of the model.
  • The above preprocessing specifically includes:
  • standardization, which refers to further unifying the size and format of the detection images in the first training data set, for example scaling the detection images to 600x600; and
  • normalization, which refers to performing dimensionless processing on the detection images in the first training data set to reduce their magnitude and speed up reading of the detection images, for example subtracting the pixel average of the detection image from each pixel in the detection image to normalize the pixel values of the detection image.
  • In the embodiments of the present disclosure, the above interpolation algorithm is not particularly limited.
  • For example, the interpolation algorithm may be any one of a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, a bicubic interpolation algorithm, and a Lanczos (LANCZOS) interpolation algorithm.
  • The inventors of the present disclosure found that the bicubic interpolation algorithm and the LANCZOS algorithm perform better for image scaling than the other interpolation algorithms, and the LANCZOS algorithm runs faster.
  • the preprocessed detection image is stored in hdf5 format, thereby further improving the reading rate.
  • optimization algorithms are also used for optimization to accelerate the convergence rate of training the convolutional neural network model.
  • the embodiment of the present disclosure does not specifically limit the above optimization algorithm.
  • For example, it may be the stochastic gradient descent (SGD) algorithm or the adaptive learning rate (Adadelta) algorithm.
  • the inventors of the embodiments of the present disclosure have discovered that the SGD algorithm has better performance.
  • For example, when using the SGD optimizer for optimization, the learning rate is set to 0.001, and the momentum gradient descent (momentum) algorithm and the Nesterov gradient acceleration algorithm are used to speed up the convergence rate of the model.
  • In some embodiments, the process of training the classifier to obtain the secondary model includes:
  • the output data of the multiple base models are integrated to generate a second training set
  • the process of constructing the defect location recognition sub-model includes:
  • the detection method further includes:
  • step S300 the image acquirer is controlled to acquire the inspection image of the display panel to be inspected.
  • an electronic device including:
  • one or more processors 201;
  • the memory 202 has one or more programs stored thereon, and when the one or more programs are executed by one or more processors, the one or more processors implement any one of the foregoing display panel detection methods;
  • One or more I/O interfaces 203 are connected between the processor and the memory, and are configured to implement information interaction between the processor and the memory.
  • the processor 201 is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.
  • The memory 202 is a device with data storage capabilities, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH). The I/O interface (read/write interface) 203 is connected between the processor 201 and the memory 202 and can realize information interaction between the processor 201 and the memory 202; it includes but is not limited to a data bus (Bus) and the like.
  • the processor 201, the memory 202, and the I/O interface 203 are connected to each other through the bus 204, and further connected to other components of the computing device.
  • an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processor, any one of the foregoing display panel detection methods is implemented.
  • Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium).
  • the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • a communication medium usually contains computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

A detection device for a display panel, comprising: an image receiver (110) configured to receive a detection image of a display panel to be detected; and a detector (120) configured to input the detection image of the display panel to be detected into a pre-constructed detection model for detecting a display panel, and to generate a detection result by using the detection model. Also disclosed are a detection method for a display panel, an electronic device, and a computer-readable medium.

Description

显示面板的检测装置、检测方法、电子装置、可读介质
技术领域
本公开涉及图像识别领域,特别涉及一种显示面板的检测装置、一种显示面板的检测方法、一种电子装置、一种计算机可读介质。
背景技术
近年来,随着各种智能终端和可穿戴设备的普及,使用屏幕的场合越来越多,对于制造厂商的显示面板生产质量和效率的要求也越来越高。在显示面板的生产过程中,显示面板的缺陷检测对工艺路线上的维修、工艺改进、再生产等多个环节均会产生影响,因此,提高显示面板缺陷检测的效率和效果对产能的提升至关重要。
显示面板缺陷的检测主要是由自动光学检测(AOI,Automated Optical Inspection)采图设备在产线上对可能有缺陷的显示面板位置进行拍摄,然后基于拍摄到的图片对显示面板缺陷的类别和位置进行识别。
发明内容
本公开实施例提供一种显示面板的检测装置、一种显示面板的检测方法、一种电子装置、一种计算机可读介质。
第一方面,一种显示面板的检测装置,包括:
图像接收器,用于接收待检测显示面板的检测图像;
检测器,用于将待检测显示面板的检测图像输入预先构建的用于检测显示面板的检测模型,并利用所述检测模型生成检测结果;
所述检测模型包括:
缺陷类别识别子模型,用于识别所述待检测显示面板的缺陷类别;
缺陷位置识别子模型,用于标识所述待检测显示面板的缺陷位置;
其中,所述缺陷类别识别子模型包括多个基模型和次级模型;
多个所述基模型用于对所述待检测显示面板的缺陷进行初始分类;
所述次级模型用于根据将各个所述基模型的输出数据进行整合后得到的输入数据,对所述待检测显示面板的缺陷进行最终分类。
在一些实施例中,所述多个基模型为相同的卷积神经网络模型,不同的 所述基模型是利用满足不同概率分布的多个第一训练数据集分别对所述卷积神经网络模型进行训练得到的。
在一些实施例中,所述多个第一训练数据集包括按照不同的预定采样比例分别对原始数据集进行采样得到的样本集,所述不同的预定采样比例是根据不同的概率分布确定的不同缺陷类别的检测图像的采样比例,所述原始数据集包括多个缺陷已知的不同显示面板的检测图像。
在一些实施例中,所述卷积神经网络模型包括全连接层、补充卷积层、批量标准化层、随机丢弃层;
所述补充卷积层用于对待输入所述全连接层的数据进行卷积,以使经所述补充卷积层卷积后的数据满足所述全连接层的输入维度;
所述批量标准化层用于对待输入所述全连接层的数据进行标准化处理;
所述随机丢弃层用于随机丢弃所述卷积神经网络模型中的部分神经网络单元以避免过拟合;
其中,对所述卷积神经网络模型进行训练时,利用第一算法对所述全连接层进行初始化,利用第二算法对所述全连接层进行正则化,利用第三算法对所述补充卷积层进行初始化。
在一些实施例中,所述次级模型为分类器,包括多个全连接层和归一化指数函数层。
在一些实施例中,所述缺陷位置识别子模型为目标检测器。
第二方面,本公开实施例提供一种显示面板的检测方法,包括:
将待检测显示面板的检测图像输入预先构建的用于检测显示面板的检测模型,对所述待检测显示面板进行检测;
所述检测模型包括:
缺陷类别识别子模型,用于识别所述待检测显示面板的缺陷类别;
缺陷位置识别子模型,用于标识所述待检测显示面板的缺陷位置;
其中,所述缺陷类别识别子模型包括多个基模型和次级模型;
多个所述基模型用于分别对所述待检测显示面板的缺陷进行初始分类;
所述次级模型用于根据将各个所述基模型的输出数据进行整合后得到的输入数据,对所述待检测显示面板的缺陷进行最终分类。
在一些实施例中,所述多个基模型为相同的卷积神经网络模型,不同的 所述基模型是利用满足不同概率分布的多个第一训练数据集分别对所述卷积神经网络模型进行训练得到的。
在一些实施例中,生成多个所述第一训练数据集的步骤包括:
生成原始数据集,所述原始数据集包括多个缺陷已知的不同显示面板的检测图像;
对应于多种概率分布,分别确定不同缺陷类别的检测图像的采样比例;
按照不同缺陷类别的检测图像的采样比例,分别对所述原始数据集进行采样,得到所述多个第一训练数据集。
在一些实施例中,所述卷积神经网络模型包括全连接层、补充卷积层、批量标准化层、随机丢弃层;
所述补充卷积层用于对待输入所述全连接层的数据进行卷积,以使经所述补充卷积层卷积后的数据满足所述全连接层的输入维度;
所述批量标准化层用于对待输入所述全连接层的数据进行标准化处理;
所述随机丢弃层用于随机丢弃所述卷积神经网络模型中的部分神经网络单元以避免过拟合;
其中,对所述卷积神经网络模型进行训练时,利用第一算法对所述全连接层进行初始化,利用第二算法对所述全连接层进行正则化,利用第三算法对所述补充卷积层进行初始化。
在一些实施例中,所述次级模型为分类器,所述分类器包括多个全连接层和归一化指数函数层。
在一些实施例中,所述缺陷位置识别子模型为目标检测器。
第三方面,本公开实施例提供一种电子装置,包括:
一个或多个处理器;
存储装置,其上存储有一个或多个程序,当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现上述任意一种显示面板的检测方法;
一个或多个I/O接口,连接在所述处理器与存储器之间,配置为实现所述处理器与存储器的信息交互。
第四方面,本公开实施例提供一种计算机可读介质,其上存储有计算机程序,所述程序被处理器执行时实现上述任意一种显示面板的检测方法。
附图说明
附图用来提供对本公开实施例的进一步理解,并且构成说明书的一部分,与本公开的实施例一起用于解释本公开,并不构成对本公开的限制。通过参考附图对详细示例实施例进行描述,以上和其它特征和优点对本领域技术人员将变得更加显而易见,在附图中:
图1为本公开实施例中一种显示面板的检测装置的组成框图;
图2为本公开实施例中VGG16模型的一种可选实施方式的结构图;
图3为本公开实施例中视网膜网目标检测模型的结构图;
图4为本公开实施例中另一种显示面板的检测装置的组成框图;
图5为本公开实施例中又一种显示面板的检测装置的组成框图;
图6为本公开实施例中一种显示面板的检测方法的流程图;
图7为本公开实施例中另一种显示面板的检测方法中部分步骤的流程图;
图8为本公开实施例中又一种显示面板的检测方法中部分步骤的流程图;
图9为本公开实施例中再一种显示面板的检测方法中部分步骤的流程图;
图10为本公开实施例中再一种显示面板的检测方法中部分步骤的流程图;
图11为本公开实施例中再一种显示面板的检测方法中部分步骤的流程图;
图12为本公开实施例提供的一种电子装置的组成框图;
图13为本公开实施例提供的一种计算机可读介质的组成框图。
具体实施方式
为使本领域的技术人员更好地理解本公开的技术方案,下面结合附图对本公开提供显示面板的检测装置、显示面板的检测方法、电子装置、计算机可读介质进行详细描述。
在下文中将参考附图更充分地描述示例实施例,但是所述示例实施例可以以不同形式来体现且不应当被解释为限于本文阐述的实施例。反之,提供这些实施例的目的在于使本公开透彻和完整,并将使本领域技术人员充分理解本公开的范围。
在不冲突的情况下,本公开各实施例及实施例中的各特征可相互组合。
如本文所使用的,术语“和/或”包括一个或多个相关列举条目的任何和所 有组合。
本文所使用的术语仅用于描述特定实施例,且不意欲限制本公开。如本文所使用的,单数形式“一个”和“该”也意欲包括复数形式,除非上下文另外清楚指出。还将理解的是,当本说明书中使用术语“包括”和/或“由……制成”时,指定存在所述特征、整体、步骤、操作、元件和/或组件,但不排除存在或添加一个或多个其它特征、整体、步骤、操作、元件、组件和/或其群组。
除非另外限定,否则本文所用的所有术语(包括技术和科学术语)的含义与本领域普通技术人员通常理解的含义相同。还将理解,诸如那些在常用字典中限定的那些术语应当被解释为具有与其在相关技术以及本公开的背景下的含义一致的含义,且将不解释为具有理想化或过度形式上的含义,除非本文明确如此限定。
第一方面,参照图1,本公开实施例提供一种显示面板的检测装置100,包括:
图像接收器110,用于接收待检测显示面板的检测图像;
检测器120,用于将待检测显示面板的检测图像输入预先构建的用于检测显示面板的检测模型,并利用所述检测模型生成检测结果。
在本公开实施例中,待检测显示面板的检测图像包括用采图设备对显示面板进行拍摄所采集的图片。例如,用AOI设备对显示面板进行拍摄所采集的AOI检测图像。其中,所述AOI设备是基于光学原理扫描显示面板并采集图像,以对显示面板进行检测的设备。图像接收器110从采图设备接收待检测显示面板的检测图像,例如,从AOI设备接收检测图像。
在本公开实施例中,对所述待检测显示面板进行检测包括识别所述待检测显示面板缺陷的类别和标记所述待检测显示面板的缺陷位置。其中,缺陷类别包括残留、缺失、异物、异色等,本公开实施例对此不做特殊限定。
在本公开实施例的检测装置100中,预先构建了用于检测显示面板的检测模型,将待检测显示面板的检测图像输入所述检测模型,就可以确定待检测显示面板的缺陷类别和位置,从而完成对所述待检测显示面板的自动检测。
在本公开实施例中,基于大量不同显示面板的检测图像,经过训练构建起所述检测模型。需要说明的是,用于训练构建所述检测模型的不同显示面板的检测图像可以采自同一产线,也可以采自不同产线。当基于采自同一产 线的不同显示面板的检测图像训练构建检测模型时,本公开实施例提供的检测装置100对该产线生产的显示面板具有较高的检测准确度,所述检测装置100能够用于提高特定产线的产品质量;当基于采自不同产线的不同显示面板的检测图像训练构建所述检测模型时,本公开实施例提供的检测装置100对不同产线生产的显示面板都具有较高的检测准确度,从而有利于批量生产。
在本公开实施例提供的显示面板的检测装置中,预先构建了用于检测显示面板缺陷的检测模型,所述检测模型是基于大量不同显示面板的检测图像训练构建的,能够对显示面板自动进行检测;此外,当用所述检测装置对显示面板进行检测时,能够适应产线上不断变化的数据分布,针对不同产线、不同类别的显示面板缺陷,都能具有较高的检测准确度。相对于依赖人工对采图设备拍摄的显示面板图片进行检测,本公开实施例提供的显示面板的检测装置在确保检测准确率的同时,降低了显示面板检测的成本,提高了检测效率,有利于提升显示面板的生产质量和生产效率。
在本公开实施例中,对显示面板进行检测包括识别显示面板的缺陷类别和标识显示面板的缺陷位置两部分。
相应地,在一些实施例中,本公开实施例中的检测模型包括:
缺陷类别识别子模型,用于识别所述待检测显示面板的缺陷类别;
缺陷位置识别子模型,用于标识所述待检测显示面板的缺陷位置。
需要说明的是,在本公开实施例中,缺陷类别识别子模型和缺陷位置识别子模型可以任意组合,例如可以先识别显示面板的缺陷类别,再标识显示面板的缺陷位置;也可以先标识显示面板的缺陷位置,再识别显示面板的缺陷类别;还可以分别进行识别显示面板的缺陷类别和标识显示面板的缺陷位置,然后再进行整合。本公开实施例对此不做特殊限定。
在本公开实施例中,采用集成学习算法构建缺陷类别识别子模型。
集成学习是指,使用一系列学习器进行学习得到若干个个体学习器,然后通过结合策略将若干个个体学习器进行整合得到一个强学习器。其基本思想是,在各个个体学习器存在偏好,即每个个体学习器只在某些方面表现比较好的情况下,通过将个体学习器进行整合,以在确保强学习器准确性的同时,提高强学习器的泛化性能。
在实际生产中,显示面板的缺陷形态多样,且各种形态的缺陷的数据分 布也在不断变化。在本公开实施例中,根据集成学习算法构建所述缺陷类别识别子模型,具体来说,根据多种缺陷形态、各种缺陷的数据分布构建多个个体学习器,然后将多个个体学习器进行整合得到所述缺陷类别识别子模型,从而能够适应产线上不断变化的数据分布,提高对显示面板缺陷的检测准确度。
作为一种可选的实施方式,本公开实施例采用的集成学习算法为堆叠(stacking)算法。
以stacking算法包括两层学习器结构为例。在两层学习器结构中,第一层包括多个基模型,第二层包括一个次级模型。stacking算法的主要思想是,分别训练多个基模型,然后将每个基模型输出的预测结果合并作为新的数据,作为次级模型的输入,由次级模型给出最终的分类结果。
相应地,在一些实施例中,所述缺陷类别识别子模型包括多个基模型和次级模型;
多个所述基模型用于分别对所述待检测显示面板的缺陷进行初始分类;
所述次级模型用于根据将各个所述基模型的输出数据进行整合后得到的输入数据,对所述待检测显示面板的缺陷进行最终分类。
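The two-stage structure described above can be illustrated with a short Python sketch of the inference flow: every base model produces an initial classification, their outputs are integrated into one input vector, and the secondary model gives the final classification. The helper names and the use of Keras `predict` are assumptions for illustration only; the later sketches show how such models might be built.

```python
import numpy as np

def classify_defect(image, base_models, secondary_model):
    """Illustrative stacking-style inference over one detection image.

    `base_models` and `secondary_model` are assumed to be Keras models
    built along the lines of the later sketches."""
    batch = np.expand_dims(image, axis=0)                       # (1, H, W, C)
    per_model = [m.predict(batch, verbose=0)[0] for m in base_models]
    integrated = np.concatenate(per_model)                      # integrated input data
    final_probs = secondary_model.predict(integrated[None, :], verbose=0)[0]
    return int(np.argmax(final_probs))                          # index of the final defect category
```

For brevity this sketch concatenates whatever vectors the base models return; the embodiment described further below takes the output of each base model's second fully-connected layer instead.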
在本公开实施例中,根据stacking算法构建的所述缺陷类别识别子模型中,多个基模型可以具有不同神经网络结构,也可具有相同神经网络结构,本公开对此不做特殊限定。
需要说明的是,在本公开实施例中,当多个基模型具有相同神经网络结构时,所述多个基模型是使用具有不同概率分布的数据分别对相同的神经网络进行训练得到的。不同的概率分布与产线上各种形态的缺陷的数据分布相对应,例如,包括平均分布、指数分布、自举(bootstrap)分布、原始分布、二项分布、高斯分布等,本公开对此不做特殊限定。本公开实施例使用具有不同概率分布的数据分别对相同的神经网络结构进行训练,使得构建的缺陷类别识别子模型能够适应产线上不断变化的显示面板缺陷的数据分布,同时,多个基模型具相同的神经网络结构,有利于选择性能最优的神经网络结构,且方便后续的优化与调试,从而进一步提高缺陷类别识别子模型的检测准确度。
相应地,在一些实施例中,所述基模型为卷积神经网络模型,不同的所 述基模型是利用满足不同概率分布的多个第一训练数据集分别对所述卷积神经网络模型进行训练得到的。
在一些实施例中,所述多个第一训练数据集包括按照不同的预定采样比例分别对原始数据集进行采样得到的样本集,所述不同的预定采样比例是根据不同的概率分布确定的不同缺陷类别的检测图像的采样比例,所述原始数据集包括多个缺陷已知的不同显示面板的检测图像。
以平均分布、指数分布、bootstrap分布、原始分布为例,对得到所述第一训练数据集做进一步说明。
对于平均分布,从所述原始数据集中取同样数量的不同缺陷类别的检测图像,作为第一训练数据集,即,在所述第一训练数据集中,不同缺陷类别的检测图像的所占的比例一致;
对于指数分布,计算获得每一个缺陷类别的检测图像在所述原始数据集中所占的比例,对每一个缺陷类别的检测图像占所述原始数据集的比例进行开方运算,得到新的比例,然后对每一个缺陷类别,按照新的比例从所述原始数据集中取相应数量的检测图像,将取出的所有缺陷类别的检测图像作为所述第一训练数据集;
对于bootstrap分布,通过有放回的取样方式,从所述原始数据集中取预定数量的检测图像,作为所述第一训练数据集;
对于原始分布,将所述原始数据集作为所述第一训练数据集。
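As a minimal sketch of the four distributions just described, the following Python function selects image indices from the original data set per distribution. The exact per-class sample counts (for example, how the square-rooted ratios are renormalised) are interpretations of the text rather than values given in the publication.

```python
import numpy as np

def sample_first_training_set(labels, distribution, seed=0):
    """Return indices of one first training set sampled from the original
    data set. `labels` holds one defect-category id per detection image."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    picked = []
    if distribution == "uniform":
        n = counts.min()                     # same number of images per defect category
        for c in classes:
            picked.append(rng.choice(np.where(labels == c)[0], size=n, replace=False))
    elif distribution == "exponential":
        ratio = np.sqrt(counts / counts.sum())   # square root of each category's share
        ratio /= ratio.sum()                     # renormalisation is an assumption
        for c, r in zip(classes, ratio):
            pool = np.where(labels == c)[0]
            n = min(len(pool), int(round(r * counts.sum())))
            picked.append(rng.choice(pool, size=n, replace=False))
    elif distribution == "bootstrap":
        # sampling with replacement from the whole original data set
        picked.append(rng.choice(len(labels), size=len(labels), replace=True))
    else:                                        # "original": use the original data set as-is
        picked.append(np.arange(len(labels)))
    return np.concatenate(picked)
```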
需要说明的是,作为一种可选的实施方式,在本公开实施例中,对于平均分布、指数分布、bootstrap分布,在采样结束后,将剩余的检测图像作为验证集;对于原始分布,按照9:1的比例将所述原始数据集划分为训练数据集和验证集。
需要说明的是,作为另一种可选的实施方式,在数据处理单元131通过采样获得多个第一训练数据集之前,数据处理单元131还用于对所述原始数据集进行划分。例如,按照预定比例划分出训练数据、验证数据、测试数据三部分,其中,验证数据用于次级模型,测试数据用于评价最终效果。在本公开实施例中,对训练数据、验证数据、测试数据的比例不做特殊限定,例如,训练数据、验证数据、测试数据的比例为8:1:1。还需要说明的是,在对所述原始数据集进行划分后,数据处理单元131通过对划分得到的训练数据 进行采样,生成多个第一训练数据集。
在本公开实施例中,对所述卷积神经网络模型不做特殊限定,例如,所述卷积神经网络模型可以是深度残差网络(ResNet,Deep residual network)、密集连接卷积网络(Densenet,Densely Connected Convolutional Networks)、VGG网络中的任意一者。经本公开实施例的发明人研究发现,VGG模型在构建所述缺陷类别识别子模型时,相较于其他卷积神经网络模型有更好的性能。
VGG模型是一种卷积神经网络,VGG16模型为具有16层网络结构的VGG模型。通常,VGG16标准模型具有13个卷积层和3个全连接层。本公开实施例中所述的常规卷积层是指VGG16标准模型中原有的13个卷积层。在本公开实施例中,对VGG16模型进行了改进,在VGG16模型的全连接层之前,增加批量标准化(BN,BatchNormalization)层;在VGG16标准模型中,输入的图像大小为224x224,在本公开实施例中,为了处理大小600x600的图像,在VGG16模型的最后一层常规卷积层后,新增了一层补充卷积层,以使补充卷积层的输出满足VGG16标准模型的全连接层的输入维度;增加了随机丢弃层,所述随机丢弃层即dropout层,是指在深度学习网络的训练过程中,按照一定的概率将一部分神经网络单元暂时从网络中丢弃,从而有效缓解过拟合的发生。
相应地,在一些实施例中,所述卷积神经网络模型包括VGG16模型,所述VGG16模型包括:
批量标准化层,用于对待输入所述VGG16模型的全连接层的数据进行标准化处理;
补充卷积层,用于对待输入所述VGG16模型的全连接层的数据进行卷积,以使经所述补充卷积层卷积后的数据满足所述VGG16模型的全连接层的输入维度;
随机丢弃层,所述随机丢弃层用于随机丢弃所述VGG16模型中的部分神经网络单元以避免过拟合。
图2是本公开实施例中,VGG16模型的一种可选实施方式的结构图。如图2所示,在本公开改进后的VGG16模型中,包括13个常规卷积层、1个补充卷积层和3个全连接层。
在图2中,输入的检测图像处理流程如下:
(1)经过常规卷积层1-1和常规卷积层1-2两次卷积后,经过最大池化层1进行最大池化(max-pooling),最大池化即取局部接受域中值最大的点;
(2)经过常规卷积层2-1和常规卷积层2-2两次卷积后,经过最大池化层2进行最大池化;
(3)经过常规卷积层3-1、常规卷积层3-2和常规卷积层3-3三次卷积后,经过最大池化层3进行最大池化;
(4)经过常规卷积层4-1、常规卷积层4-2和常规卷积层4-3三次卷积后,经过最大池化层4进行最大池化;
(5)经过常规卷积层5-1、常规卷积层5-2和常规卷积层5-3三次卷积后,经过最大池化层5进行最大池化;
(6)经过补充卷积层6卷积后,经过全连接层1、全连接层2、全连接层3,最终得到输出。
此外,本公开实施例中,在对VGG16模型进行训练时,在VGG16模型的全连接层中使用格洛特算法进行初始化,并用L2正则化算法进行正则化,以防止过拟合。需要说明的是,所述格洛特算法即glorot算法。在补充卷积层中,也使用格洛特算法进行初始化。
相应地,在一些实施例中,对所述卷积神经网络模型进行训练时,用第一算法对所述全连接层进行初始化,用第二算法对所述全连接层进行正则化,用第三算法对所述补充卷积层进行初始化。
在一些实施例中,所述第一算法为glorot算法,所述第二算法为L2正则化算法,所述第三算法为glorot算法。
需要说明的是,正则化(Regularization)是指通过将系数估计(coefficient estimate)朝零的方向进行约束、调整或缩小,以控制模型复杂度,减小过拟合。根据正则化算法中惩罚项的不同,具体分为L1正则化和L2正则化。
作为一种可选的实施方式,在本公开实施例中,改进后的VGG16模型在最后一层常规卷积层后的结构如下:
补充卷积层(使用glorot初始化)—>BN层—>展平层—>dropout层—>全连接层(使用glorot初始化+L2正则化)—>BN层—>dropout层—>全连接层(使用glorot初始化+L2正则化)—>BN层—>全连接层(使用glorot初始化+L2正则化)(softmax)。
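The structure listed above maps onto the following Keras sketch of a modified VGG16 base model. The filter count, kernel size and stride of the supplemental convolution layer, the dropout rate, the L2 factor and the widths of the hidden fully-connected layers are not specified in the publication and are assumptions made for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_base_model(num_classes, input_shape=(600, 600, 3)):
    """Sketch of the modified VGG16 base model described above."""
    # 13 conventional convolution layers + 5 max-pooling layers of standard VGG16.
    backbone = tf.keras.applications.VGG16(
        weights=None, include_top=False, input_shape=input_shape)

    x = backbone.output
    # Supplemental convolution layer (glorot initialisation); hyperparameters assumed.
    x = layers.Conv2D(512, 3, strides=2, padding="same", activation="relu",
                      kernel_initializer="glorot_uniform")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(x)
    # Fully-connected layer 1 (glorot initialisation + L2 regularisation).
    x = layers.Dense(4096, activation="relu",
                     kernel_initializer="glorot_uniform",
                     kernel_regularizer=regularizers.l2(1e-4))(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.5)(x)
    # Fully-connected layer 2 (glorot initialisation + L2 regularisation).
    x = layers.Dense(4096, activation="relu",
                     kernel_initializer="glorot_uniform",
                     kernel_regularizer=regularizers.l2(1e-4))(x)
    x = layers.BatchNormalization()(x)
    # Fully-connected layer 3 with softmax over the defect categories.
    outputs = layers.Dense(num_classes, activation="softmax",
                           kernel_initializer="glorot_uniform",
                           kernel_regularizer=regularizers.l2(1e-4))(x)
    return tf.keras.Model(backbone.input, outputs)
```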
在一些实施例中,所述次级模型为分类器。
本公开实施例对所述分类器不做特殊限定。例如,所述分类器可以是支持向量机(SVM),也可以是多类别逻辑回归分类器。
在一些实施例中,所述分类器是包括多个全连接层和归一化指数函数层的神经网络。
所述归一化指数函数即softmax。softmax是一种逻辑回归模型,能够将输入映射为0-1之间的实数,并且其输出的0-1之间的实数表示每个分类被取到的概率。在本公开实施例中,softmax可以作为全连接层的参数使用,也可以作为全连接层后的单独一层,本公开实施例对此不做特殊限定。
在一些实施例中,所述分类器包括2个全连接层和归一化指数函数层。
需要说明的是,所述分类器的输出结果为一个n×1维的向量,其中,n为显示面板的缺陷类别数。在该n×1维向量中,每个元素都是一个0-1之间的实数,每个元素对应一种显示面板的缺陷类别,每个元素的值代表当前显示面板的缺陷类别为该元素对应的缺陷类别的概率。相应地,检测装置100对待检测显示面板进行检测时,将检测装置100中预先构建的检测模型输出的n×1维向量中值最大的元素对应的缺陷类别,确定为当前正在检测的显示面板的缺陷类别。
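A minimal sketch of such a secondary model follows, together with the argmax step that picks the defect category with the largest probability; the hidden width of 256 units is an assumption, since the publication does not give the layer sizes.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_secondary_model(input_dim, num_classes, hidden_units=256):
    """Classifier with two fully-connected layers followed by a softmax layer."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        layers.Dense(hidden_units, activation="relu"),
        layers.Dense(num_classes),
        layers.Softmax(),          # n x 1 vector of per-category probabilities
    ])

# The category whose probability is largest is taken as the detection result:
# probs = secondary_model.predict(integrated_features, verbose=0)[0]
# predicted_category = int(np.argmax(probs))
```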
如上文所述,在本公开实施例中,所述次级模型根据将各个所述基模型的输出数据进行整合后得到的输入数据,对所述待检测显示面板的缺陷进行最终分类。本公开实施例对于如何将所述多个基模型的输出数据进行整合得到所述输入数据不做特殊限定。作为一种可选的实施方式,将各个基模型的输出向量连接生成一个新的向量,作为所述输入数据。例如,假设缺陷类别识别子模型包括4个基模型,4个基模型分别对应4种概率分布,取每一个基模型的倒数第二层输出,即第二个全连接层的输出,通常为一个m×1维的向量,将4个m×1维向量连接,得到一个4m×1维的向量,将该4m×1维向量作为所述次级模型的输入数据。
在一些实施例中,将所述多个基模型的输出数据进行整合后的数据存储为hdf5格式。
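A sketch of this integration step, under the assumption that each base model is a Keras model whose second fully-connected layer can be located by its layer type; the hdf5 dataset name is an arbitrary choice.

```python
import h5py
import numpy as np
import tensorflow as tf

def build_second_training_set(base_models, images, out_path="second_training_set.h5"):
    """Concatenate the second fully-connected layer outputs (m x 1 each) of all
    base models into one vector per image and store the result in hdf5 format."""
    extractors = []
    for m in base_models:
        dense_layers = [l for l in m.layers if isinstance(l, tf.keras.layers.Dense)]
        # Output of the second fully-connected layer (the one before the softmax output).
        extractors.append(tf.keras.Model(m.input, dense_layers[-2].output))
    features = np.concatenate([e.predict(images, verbose=0) for e in extractors], axis=1)
    with h5py.File(out_path, "w") as f:
        f.create_dataset("features", data=features)
    return features
```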
在一些实施例中,所述缺陷位置识别子模型为目标检测器。
在一些实施例中,所述目标检测器包括视网膜网目标检测模型。
所述视网膜网目标检测模型,即RetinaNet模型。图3示出了RetinaNet模型的网络结构。RetinaNet模型会在输出中标注显示面板缺陷的位置及类别。
需要说明的是,在本公开实施例中,在对RetinaNet模型进行训练时,会剔除原始数据集中的正常图、黑图、模糊图等缺陷类别的检测图像,此外,还将所有缺陷类别归为一类,称为前景,以使RetinaNet模型在训练时只区分前景和背景,从而专注于对缺陷位置的标识,无需区分缺陷类别。
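A minimal sketch of that data preparation step; the names used for the excluded image categories and the annotation fields are placeholders, since the publication does not name them.

```python
EXCLUDED_CATEGORIES = ("normal", "black", "blurred")   # placeholder category names

def to_foreground_annotations(annotations, excluded=EXCLUDED_CATEGORIES):
    """Drop the excluded image categories and map every remaining defect
    category to a single 'foreground' class, so the detector only separates
    foreground from background and focuses on defect positions."""
    prepared = []
    for ann in annotations:   # ann: {"image_path": ..., "bbox": [...], "category": ...}
        if ann["category"] in excluded:
            continue
        prepared.append({**ann, "category": "foreground"})
    return prepared
```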
在一些实施例中,参照图4,所述检测装置100还包括模型构建器130,所述模型构建器130包括:
数据处理单元131,用于获取原始数据集,所述原始数据集包括多个缺陷已知的不同显示面板的检测图像;
模型构建单元132,用于根据所述原始数据集构建所述检测模型。
如前文所述,所述原始数据集中的不同显示面板的检测图像可以采自同一产线,也可以采自不同产线。需要说明的是,为了对检测模型进行训练,在本公开实施例中,对构成所述原始数据集的检测图像中的显示面板缺陷提前进行了识别和标注,标注内容包括显示面板缺陷的位置和类别。
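By way of illustration only, one annotated sample of the original data set could look like the following record; the field names and the bounding-box convention are hypothetical, the publication only requires that defect position and category are labelled in advance.

```python
# Hypothetical annotation record for one detection image in the original data set.
annotation = {
    "image_path": "panel_0001.png",      # detection image of one display panel
    "defects": [
        {
            "category": "foreign_matter",    # defect category label
            "bbox": [120, 86, 210, 158],     # defect position: [x_min, y_min, x_max, y_max]
        },
    ],
}
```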
可以理解的是,所述原始数据集包含的检测图像的数量越多,根据所述原始数据集构建的检测模型用于显示面板检测时,其检测准确度越高;采集构成所述原始数据集的检测图像的过程随机度越高,所述检测模型用于显示面板检测时,其检测准确度越高;构成所述原始数据集的检测图像包括的显示面板缺陷的类别越多,所述检测模型用于显示面板检测时,其检测准确度越高。
下面对所述模型构建器130构建本公开实施例中的检测模型的过程进行解释说明。
在构建所述缺陷类别识别子模型时,训练卷积神经网络得到多个基模型的过程包括:
数据处理单元131用于对应于多种概率分布中的每一种概率分布,分别确定不同缺陷类别的检测图像的采样比例;
数据处理单元131还用于按照不同缺陷类别的检测图像的采样比例,分别对所述原始数据集进行采样,得到不同概率分布的多个第一训练数据集;
模型构建单元132用于利用所述多个第一训练数据集对卷积神经网络模 型进行训练,生成分别对应于不同概率分布的多个基模型。
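Putting the previous sketches together, one base model per probability distribution could be trained roughly as follows. `sample_first_training_set` and `build_base_model` are the hypothetical helpers from the earlier sketches, the epoch count and batch size are illustrative assumptions, and the optimiser configuration described a few paragraphs below can be passed in place of the plain "sgd" string.

```python
import numpy as np

DISTRIBUTIONS = ("uniform", "exponential", "bootstrap", "original")

def train_base_models(images, labels, epochs=10, batch_size=16):
    """Train one base model per probability distribution on its own first
    training set, reusing the helpers sketched above."""
    num_classes = int(np.max(labels)) + 1
    base_models = []
    for dist in DISTRIBUTIONS:
        idx = sample_first_training_set(labels, dist)   # first training set for this distribution
        model = build_base_model(num_classes=num_classes)
        model.compile(optimizer="sgd",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(images[idx], labels[idx], epochs=epochs, batch_size=batch_size)
        base_models.append(model)
    return base_models
```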
在本公开实施例中,在模型构建单元132对卷积神经网络模型进行训练之前,数据处理单元131还对第一训练数据集中的检测图像进行预处理,以进一步提高模型的训练速率。上述预处理过程具体包括:
根据插值算法对所述多个第一训练数据集中的检测图像进行标准化处理;
对标准化处理后的检测图像进行归一化处理。
需要说明的是,标准化处理,是指将第一训练数据集中的检测图像的大小、格式等进一步标准化,例如将检测图像缩放到600x600;归一化处理,是指对第一训练数据集中的检测图像进行无量纲处理,以缩小量值,加快检测图像的读取速率,例如,用检测图像中的每个像素减去检测图像的像素均值,以对检测图像的像素值归一化。
在本公开实施例中,对上述插值算法不做特殊限定,例如,所述插值算法可以是最近邻插值算法、双线性插值算法、双三次插值算法、兰索斯(LANCZOS)插值算法中的任意一者。经本公开实施例的发明人研究发现,双三次插值算法和LANCZOS算法在图像缩放领域,相比于其他插值算法有更好的性能,而且LANCZOS算法具有更快的运行速率。
还需要说明的是,作为一种可选的实施方式,数据处理单元131预处理后的检测图像以hdf5格式存储,从而进一步提高读取速率。
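The preprocessing and storage steps above can be sketched as follows; the hdf5 dataset name and the use of Pillow for LANCZOS resampling are implementation choices, not requirements of the publication.

```python
import h5py
import numpy as np
from PIL import Image

def preprocess_images(image_paths, out_path="preprocessed.h5", size=(600, 600)):
    """Rescale each detection image to 600x600 with LANCZOS interpolation,
    subtract its pixel mean, and store the results in hdf5 format."""
    processed = []
    for path in image_paths:
        img = Image.open(path).convert("RGB").resize(size, Image.LANCZOS)
        arr = np.asarray(img, dtype=np.float32)
        arr -= arr.mean()                 # normalise by subtracting the pixel mean
        processed.append(arr)
    data = np.stack(processed)
    with h5py.File(out_path, "w") as f:
        f.create_dataset("images", data=data)
    return data
```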
在本公开实施例中,模型构建单元132还使用优化算法进行优化,以加快对所述卷积神经网络模型进行训练的收敛速率。
需要说明的是,本公开实施例对上述优化算法不做特殊限定,例如,可以是随机梯度下降算法(SGD,Stochastic Gradient Descent),也可以是自适应学习率调整(Adadelta,Adaptive Learning Rate)算法,还可以是自适应力矩估计(Adam,Adaptive moment estimation)算法。经本公开实施例的发明人研究发现,SGD算法具有更好的性能。
作为一种可选的实施方式,当使用SGD优化器进行优化时,将学习率设置为0.001,并使用动量梯度下降(momentum)算法和涅斯捷罗夫(nesterov)梯度加速算法加快模型的收敛速率。
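This optimiser configuration maps directly onto, for example, the Keras SGD optimiser; the momentum value 0.9 is an assumption, since the publication states only that momentum and Nesterov gradient acceleration are used.

```python
import tensorflow as tf

# SGD with learning rate 0.001, momentum (assumed 0.9) and Nesterov acceleration.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)

# model.compile(optimizer=optimizer,
#               loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
```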
在构建所述缺陷类别识别子模型时,训练所述分类器得到所述次级模型 的过程包括:
数据处理单元131用于将训练卷积神经网络模型得到多个基模型的过程中,多个基模型的输出数据进行整合,生成第二训练集;
模型构建单元132用于根据所述第二训练集对所述分类器进行训练,得到所述次级模型。
构建所述缺陷位置识别子模型的过程包括:
模型构建单元132用于根据所述原始数据集对所述目标检测模型进行训练,得到所述缺陷位置识别子模型。
在一些实施例中,参照图5,所述检测装置100还包括:
图像获取器140,用于获取显示面板的检测图像。
在本公开实施例中,图像获取器140可以为采图设备,例如AOI设备,也就是说,采图设备可以作为本公开实施例提供的检测装置100的一部分。相应地,图像接收器110从图像获取器140接收其获取的检测图像。
还需要说明的是,模型构建器130用于构建检测模型的多个缺陷已知的不同显示面板的检测图像,也可以是由图像获取器140获取的。
第二方面,参照图6,提供一种显示面板的检测方法,包括:
在步骤S100中,将待检测显示面板的检测图像输入预先构建的用于检测显示面板的检测模型,对所述待检测显示面板进行检测;
所述检测模型包括:
缺陷类别识别子模型,用于识别所述待检测显示面板的缺陷类别;
缺陷位置识别子模型,用于标识所述待检测显示面板的缺陷位置;
其中,所述缺陷类别识别子模型包括多个基模型和次级模型;
多个所述基模型用于分别对所述待检测显示面板的缺陷进行初始分类;
所述次级模型用于根据将各个所述基模型的输出数据进行整合后得到的输入数据,对所述待检测显示面板的缺陷进行最终分类。
在本公开实施例中,基于大量不同显示面板的检测图像,经过训练预先构建了用于检测显示面板的检测模型,当对显示面板进行检测时,将待检测显示面板的检测图像输入所述预先构建的检测模型,就可以得到检测结果,从而实现对显示面板的自动检测。
在本公开实施例中,待检测显示面板的检测图像包括用采图设备对显示 面板进行拍摄所采集的图片。例如,用AOI设备对显示面板进行拍摄所采集的AOI检测图像。其中,所述AOI设备是基于光学原理扫描显示面板并采集图像,以对显示面板进行检测的设备。
在本公开实施例中,对所述待检测显示面板进行检测包括识别所述待检测显示面板缺陷的类别和标记所述待检测显示面板的缺陷位置。其中,缺陷类别包括残留、缺失、异物、异色等,本公开实施例对此不做特殊限定。
在本公开实施例提供的显示面板的检测方法中,利用预先构建的检测模型对显示面板自动进行检测,能够适应产线上不断变化的数据分布,针对不同产线、不同类别的显示面板缺陷,都能具有较高的检测准确度。相对于依赖人工对采图设备拍摄的显示面板图片进行检测,本公开实施例提供的显示面板的检测方法在确保检测准确率的同时,降低了显示面板检测的成本,提高了检测效率,有利于提升显示面板的生产质量和生产效率。
在本公开实施例中,对显示面板进行检测包括识别显示面板的缺陷类别和标识显示面板的缺陷位置两部分。
相应地,在一些实施例中,所述检测模型包括缺陷类别识别子模型和缺陷位置识别子模型;
如上文所述,在本公开实施例中,采用集成学习算法构建缺陷类别识别子模型。
在一些实施例中,所述多个基模型为相同的卷积神经网络模型,不同的所述基模型是利用满足不同概率分布的多个第一训练数据集分别对所述卷积神经网络模型进行训练得到的。
参照图7,在一些实施例中,生成多个所述第一训练数据集的步骤包括:
在步骤S410中,生成原始数据集,所述原始数据集包括多个缺陷已知的不同显示面板的检测图像;
在步骤S420中,对应于多种概率分布,分别确定不同缺陷类别的检测图像的采样比例;
在步骤S430中,按照不同缺陷类别的检测图像的采样比例,分别对所述原始数据集进行采样,得到所述多个第一训练数据集。
在本公开实施例中,对所述卷积神经网络模型不做特殊限定,例如,所述卷积神经网络模型可以是深度残差网络(ResNet,Deep residual network)、 密集连接卷积网络(Densenet,Densely Connected Convolutional Networks)、VGG网络中的任意一者。经本公开实施例的发明人研究发现,VGG模型在构建所述缺陷类别识别子模型时,相较于其他卷积神经网络模型有更好的性能。
作为一种可选的实施方式,本公开实施例用VGG16模型构建所述基模型。
通常,VGG16标准模型具有13个卷积层和3个全连接层。本公开实施例中所述的常规卷积层是指VGG16标准模型中原有的13个卷积层。在本公开实施例中,对VGG16模型进行了改进,在VGG16模型的全连接层之前,增加批量标准化(BN,BatchNormalization)层;在VGG16标准模型中,输入的图像大小为224x224,在本公开实施例中,为了处理大小600x600的图像,在VGG16模型的最后一层常规卷积层后,新增了一层补充卷积层,以使补充卷积层的输出满足VGG16标准模型的全连接层的输入维度;增加了随机丢弃层,所述随机丢弃层即dropout层,是指在深度学习网络的训练过程中,按照一定的概率将一部分神经网络单元暂时从网络中丢弃,从而有效缓解过拟合的发生。此外,本公开实施例中,在对VGG16模型进行训练时,在VGG16模型的全连接层中使用格洛特算法进行初始化,并用L2正则化算法进行正则化,以防止过拟合。需要说明的是,所述格洛特算法即glorot算法。在补充卷积层中,也使用格洛特算法进行初始化。
相应地,在一些实施例中,所述卷积神经网络模型包括全连接层、补充卷积层、批量标准化层、随机丢弃层;
所述补充卷积层用于对待输入所述全连接层的数据进行卷积,以使经所述补充卷积层卷积后的数据满足所述全连接层的输入维度;
所述批量标准化层用于对待输入所述全连接层的数据进行标准化处理;
所述随机丢弃层用于随机丢弃所述卷积神经网络模型中的部分神经网络单元以避免过拟合;
其中,对所述卷积神经网络模型进行训练时,用第一算法对所述全连接层进行初始化,用第二算法对所述全连接层进行正则化,用第三算法对所述补充卷积层进行初始化。
在一些实施例中,所述第一算法为glorot算法,所述第二算法为L2正则化算法,所述第三算法为glorot算法。
在一些实施例中,所述次级模型为分类器。
本公开实施例对所述分类器不做特殊限定。例如,所述分类器可以是支持向量机(SVM),也可以是多类别逻辑回归分类器。
在一些实施例中,所述分类器是包括多个全连接层和归一化指数函数层的神经网络。
所述归一化指数函数即softmax。softmax是一种逻辑回归模型,能够将输入映射为0-1之间的实数,并且其输出的0-1之间的实数表示每个分类被取到的概率。在本公开实施例中,softmax可以作为全连接层的参数使用,也可以作为全连接层后的单独一层,本公开实施例对此不做特殊限定。
在一些实施例中,所述分类器包括2个全连接层和归一化指数函数层。
需要说明的是,所述分类器的输出结果为一个n×1维的向量,其中,n为显示面板的缺陷类别数。在该n×1维向量中,每个元素都是一个0-1之间的实数,每个元素对应一种显示面板的缺陷类别,每个元素的值代表当前显示面板的缺陷类别为该元素对应的缺陷类别的概率。相应地,对待检测显示面板进行检测时,将检测模型输出的n×1维向量中值最大的元素对应的缺陷类别,确定为当前正在检测的显示面板的缺陷类别。
在一些实施例中,所述缺陷位置识别子模型为目标检测器。
在一些实施例中,所述目标检测器包括视网膜网目标检测模型。
所述视网膜网目标检测模型,即RetinaNet模型。图3示出了RetinaNet模型的网络结构。RetinaNet模型会在输出中标注显示面板缺陷的位置及类别。
需要说明的是,在本公开实施例中,在对RetinaNet模型进行训练时,会剔除原始数据集中的正常图、黑图、模糊图等缺陷类别的检测图像,此外,还将所有缺陷类别归为一类,称为前景,以使RetinaNet模型在训练时只区分前景和背景,从而专注于对缺陷位置的标识,无需区分缺陷类别。
在一些实施例中,参照图8,在步骤S100之前,本公开实施例提供的检测方法还包括:
在步骤S200中,构建所述用于检测显示面板的检测模型。
在一些实施例中,参照图9,步骤S200包括:
在步骤S210中,生成原始数据集,所述原始数据集包括多个缺陷已知的不同显示面板的检测图像;
在步骤S220中,根据所述原始数据集构建所述检测模型。
在一些实施例中,参照图10,步骤S220包括:
在步骤S221中,根据多个概率分布对所述原始数据集进行采样,得到不同概率分布的多个第一训练数据集;
在步骤S222中,根据所述第一训练数据集对卷积神经网络模型进行训练,生成分别对应于不同概率分布的多个所述基模型;
在步骤S223中,根据将各个所述基模型的输出数据进行整合得到的第二训练数据集对分类器进行训练,生成所述次级模型;
在步骤S224中,根据所述原始数据集对目标检测器进行训练,生成所述缺陷位置识别子模型。
本公开实施例对于如何将所述多个基模型的输出数据进行整合得到所述第二训练数据集不做特殊限定。作为一种可选的实施方式,将各个基模型的输出向量连接生成一个新的向量,作为所述第二训练数据集的一个样本。例如,假设缺陷类别识别子模型包括4个基模型,4个基模型分别对应4种概率分布,取每一个基模型的倒数第二层输出,即第二个全连接层输出,为一个m×1维的向量,将4个m×1维向量连接,得到一个4m×1维的向量,将该4m×1维向量作为所述次级模型的输入数据。
下面对构建本公开实施例中的检测模型的过程进行解释说明。
在构建所述缺陷类别识别子模型时,训练卷积神经网络得到多个基模型的过程包括:
对应于多种概率分布中的每一种概率分布,分别确定不同缺陷类别的检测图像的采样比例;
按照不同缺陷类别的检测图像的采样比例,分别对所述原始数据集进行采样,得到不同概率分布的多个第一训练数据集;
利用所述多个第一训练数据集对卷积神经网络模型进行训练,生成分别对应于不同概率分布的多个基模型。
需要说明的是,作为一种可选的实施方式,在本公开实施例中,对于平均分布、指数分布、bootstrap分布,在采样结束后,将剩余的检测图像作为验证集;对于原始分布,按照9:1的比例将所述原始数据集划分为训练数据集和验证集。
需要说明的是,作为另一种可选的实施方式,通过采样获得多个第一训练数据集之前,还对所述原始数据集进行划分。例如,按照预定比例划分出训练数据、验证数据、测试数据三部分,其中,验证数据用于次级模型,测试数据用于评价最终效果。在本公开实施例中,对训练数据、验证数据、测试数据的比例不做特殊限定,例如,训练数据、验证数据、测试数据的比例为8:1:1。还需要说明的是,在对所述原始数据集进行划分后,通过对划分得到的训练数据进行采样,生成多个第一训练数据集。
在本公开实施例中,在对卷积神经网络模型进行训练之前,还对第一训练数据集中的检测图像进行预处理,以进一步提高模型的训练速率。上述预处理过程具体包括:
根据插值算法对所述多个第一训练数据集中的检测图像进行标准化处理;
对标准化处理后的检测图像进行归一化处理。
需要说明的是,标准化处理,是指将第一训练数据集中的检测图像的大小、格式等进一步标准化,例如将检测图像缩放到600x600;归一化处理,是指对第一训练数据集中的检测图像进行无量纲处理,以缩小量值,加快检测图像的读取速率,例如,用检测图像中的每个像素减去检测图像的像素均值,以对检测图像的像素值归一化。
在本公开实施例中,对上述插值算法不做特殊限定,例如,所述插值算法可以是最近邻插值算法、双线性插值算法、双三次插值算法、兰索斯(LANCZOS)插值算法中的任意一者。经本公开实施例的发明人研究发现,双三次插值算法和LANCZOS算法在图像缩放领域,相比于其他插值算法有更好的性能,而且LANCZOS算法具有更快的运行速率。
还需要说明的是,作为一种可选的实施方式,预处理后的检测图像以hdf5格式存储,从而进一步提高读取速率。
在本公开实施例中,还使用优化算法进行优化,以加快对所述卷积神经网络模型进行训练的收敛速率。
需要说明的是,本公开实施例对上述优化算法不做特殊限定,例如,可以是随机梯度下降算法(SGD,Stochastic Gradient Descent),也可以是自适应学习率调整(Adadelta,Adaptive Learning Rate)算法,还可以是自适应力 矩估计(Adam,Adaptive moment estimation)算法。经本公开实施例的发明人研究发现,SGD算法具有更好的性能。
作为一种可选的实施方式,当使用SGD优化器进行优化时,将学习率设置为0.001,并使用动量梯度下降(momentum)算法和涅斯捷罗夫(nesterov)梯度加速算法加快模型的收敛速率。
在构建所述缺陷类别识别子模型时,训练所述分类器得到所述次级模型的过程包括:
将训练卷积神经网络模型得到多个基模型的过程中,多个基模型的输出数据进行整合,生成第二训练集;
根据所述第二训练集对所述分类器进行训练,得到所述次级模型。
构建所述缺陷位置识别子模型的过程包括:
根据所述原始数据集对所述目标检测模型进行训练,得到所述缺陷位置识别子模型。
在一些实施例中,参照图11,所述检测方法还包括:
在步骤S300中,控制图像获取器获取所述待检测显示面板的检测图像。
第三方面,参照图12,本公开实施例提供一种电子装置,包括:
一个或多个处理器201;
存储器202,其上存储有一个或多个程序,当一个或多个程序被一个或多个处理器执行,使得一个或多个处理器实现上述任意一种显示面板的检测方法;
一个或多个I/O接口203,连接在处理器与存储器之间,配置为实现处理器与存储器的信息交互。
其中,处理器201为具有数据处理能力的器件,其包括但不限于中央处理器(CPU)等;存储器202为具有数据存储能力的器件,其包括但不限于随机存取存储器(RAM,更具体如SDRAM、DDR等)、只读存储器(ROM)、带电可擦可编程只读存储器(EEPROM)、闪存(FLASH);I/O接口(读写接口)203连接在处理器201与存储器202间,能实现处理器201与存储器202的信息交互,其包括但不限于数据总线(Bus)等。
在一些实施例中,处理器201、存储器202和I/O接口203通过总线204相互连接,进而与计算设备的其它组件连接。
上文已经对所述显示面板的检测方法进行了详细的描述,此处不再赘述。
第四方面,参照图13,本公开实施例提供一种计算机可读介质,其上存储有计算机程序,所述程序被处理器执行时实现上述任意一种显示面板的检测方法。
上文已经对所述显示面板的检测方法进行了详细的描述,此处不再赘述。
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、系统、装置中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理组件的划分;例如,一个物理组件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。某些物理组件或所有物理组件可以被实施为由处理器,如中央处理器、数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括在用于存储信息(诸如计算机可读指令、数据结构、程序模块或其它数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其它存储器技术、CD-ROM、数字多功能盘(DVD)或其它光盘存储、磁盒、磁带、磁盘存储或其它磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其它的介质。此外,本领域普通技术人员公知的是,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其它传输机制之类的调制数据信号中的其它数据,并且可包括任何信息递送介质。
本文已经公开了示例实施例,并且虽然采用了具体术语,但它们仅用于并仅应当被解释为一般说明性含义,并且不用于限制的目的。在一些实例中,对本领域技术人员显而易见的是,除非另外明确指出,否则可单独使用与特定实施例相结合描述的特征、特性和/或元素,或可与其它实施例相结合描述的特征、特性和/或元件组合使用。因此,本领域技术人员将理解,在不脱离由所附的权利要求阐明的本公开的范围的情况下,可进行各种形式和细节上 的改变。

Claims (14)

  1. 一种显示面板的检测装置,包括:
    图像接收器,用于接收待检测显示面板的检测图像;
    检测器,用于将待检测显示面板的检测图像输入预先构建的用于检测显示面板的检测模型,并利用所述检测模型生成检测结果;
    所述检测模型包括:
    缺陷类别识别子模型,用于识别所述待检测显示面板的缺陷类别;
    缺陷位置识别子模型,用于标识所述待检测显示面板的缺陷位置;
    其中,所述缺陷类别识别子模型包括多个基模型和次级模型;
    多个所述基模型用于对所述待检测显示面板的缺陷进行初始分类;
    所述次级模型用于根据将各个所述基模型的输出数据进行整合后得到的输入数据,对所述待检测显示面板的缺陷进行最终分类。
  2. 根据权利要求1所述的检测装置,其中,所述多个基模型为相同的卷积神经网络模型,不同的所述基模型是利用满足不同概率分布的多个第一训练数据集分别对所述卷积神经网络模型进行训练得到的。
  3. 根据权利要求2所述的检测装置,其中,所述多个第一训练数据集包括按照不同的预定采样比例分别对原始数据集进行采样得到的样本集,所述不同的预定采样比例是根据不同的概率分布确定的不同缺陷类别的检测图像的采样比例,所述原始数据集包括多个缺陷已知的不同显示面板的检测图像。
  4. 根据权利要求2或3所述的检测装置,其中,所述卷积神经网络模型包括全连接层、补充卷积层、批量标准化层、随机丢弃层;
    所述补充卷积层用于对待输入所述全连接层的数据进行卷积,以使经所述补充卷积层卷积后的数据满足所述全连接层的输入维度;
    所述批量标准化层用于对待输入所述全连接层的数据进行标准化处理;
    所述随机丢弃层用于随机丢弃所述卷积神经网络模型中的部分神经网络单元以避免过拟合;
    其中,对所述卷积神经网络模型进行训练时,利用第一算法对所述全连接层进行初始化,利用第二算法对所述全连接层进行正则化,利用第三算法对所述补充卷积层进行初始化。
  5. 根据权利要求1所述的检测装置,其中,所述次级模型为分类器,包括多个全连接层和归一化指数函数层。
  6. 根据权利要求1所述的检测装置,其中,所述缺陷位置识别子模型为目标检测器。
  7. 一种显示面板的检测方法,包括:
    将待检测显示面板的检测图像输入预先构建的用于检测显示面板的检测模型,对所述待检测显示面板进行检测;
    所述检测模型包括:
    缺陷类别识别子模型,用于识别所述待检测显示面板的缺陷类别;
    缺陷位置识别子模型,用于标识所述待检测显示面板的缺陷位置;
    其中,所述缺陷类别识别子模型包括多个基模型和次级模型;
    多个所述基模型用于分别对所述待检测显示面板的缺陷进行初始分类;
    所述次级模型用于根据将各个所述基模型的输出数据进行整合后得到的输入数据,对所述待检测显示面板的缺陷进行最终分类。
  8. 根据权利要求7所述的检测方法,其中,所述多个基模型为相同的卷积神经网络模型,不同的所述基模型是利用满足不同概率分布的多个第一训练数据集分别对所述卷积神经网络模型进行训练得到的。
  9. 根据权利要求8所述的检测方法,其中,生成多个所述第一训练数据集的步骤包括:
    生成原始数据集,所述原始数据集包括多个缺陷已知的不同显示面板的检测图像;
    对应于多种概率分布,分别确定不同缺陷类别的检测图像的采样比例;
    按照不同缺陷类别的检测图像的采样比例,分别对所述原始数据集进行采样,得到所述多个第一训练数据集。
  10. 根据权利要求8或9所述的检测方法,其中,所述卷积神经网络模型包括全连接层、补充卷积层、批量标准化层、随机丢弃层;
    所述补充卷积层用于对待输入所述全连接层的数据进行卷积,以使经所述补充卷积层卷积后的数据满足所述全连接层的输入维度;
    所述批量标准化层用于对待输入所述全连接层的数据进行标准化处理;
    所述随机丢弃层用于随机丢弃所述卷积神经网络模型中的部分神经网络单元以避免过拟合;
    其中,对所述卷积神经网络模型进行训练时,利用第一算法对所述全连接层进行初始化,利用第二算法对所述全连接层进行正则化,利用第三算法对所述补充卷积层进行初始化。
  11. 根据权利要求7所述的检测方法,其中,所述次级模型为分类器,所述分类器包括多个全连接层和归一化指数函数层。
  12. 根据权利要求7所述的检测方法,其中,所述缺陷位置识别子模型为目标检测器。
  13. 一种电子装置,包括:
    一个或多个处理器;
    存储装置,其上存储有一个或多个程序,当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现根据权利要求7至12中任意一项所述的显示面板的检测方法;
    一个或多个I/O接口,连接在所述处理器与存储器之间,配置为实现所述处理器与存储器的信息交互。
  14. 一种计算机可读介质,其上存储有计算机程序,所述程序被处理器执行时实现根据权利要求7至12中任意一项所述的显示面板的检测方法。
PCT/CN2020/093281 2020-05-29 2020-05-29 显示面板的检测装置、检测方法、电子装置、可读介质 WO2021237682A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2020/093281 WO2021237682A1 (zh) 2020-05-29 2020-05-29 显示面板的检测装置、检测方法、电子装置、可读介质
US17/417,487 US11900589B2 (en) 2020-05-29 2020-05-29 Detection device of display panel and detection method thereof, electronic device and readable medium
CN202080000865.1A CN114175093A (zh) 2020-05-29 2020-05-29 显示面板的检测装置、检测方法、电子装置、可读介质
US18/543,121 US20240119584A1 (en) 2020-05-29 2023-12-18 Detection method, electronic device and non-transitory computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093281 WO2021237682A1 (zh) 2020-05-29 2020-05-29 显示面板的检测装置、检测方法、电子装置、可读介质

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US17/417,487 A-371-Of-International US11900589B2 (en) 2020-05-29 2020-05-29 Detection device of display panel and detection method thereof, electronic device and readable medium
US18/543,121 Continuation US20240119584A1 (en) 2020-05-29 2023-12-18 Detection method, electronic device and non-transitory computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021237682A1 true WO2021237682A1 (zh) 2021-12-02

Family

ID=78745491

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093281 WO2021237682A1 (zh) 2020-05-29 2020-05-29 显示面板的检测装置、检测方法、电子装置、可读介质

Country Status (3)

Country Link
US (2) US11900589B2 (zh)
CN (1) CN114175093A (zh)
WO (1) WO2021237682A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024103004A1 (en) * 2022-11-10 2024-05-16 Versiti Blood Research Institute Foundation, Inc. Systems, methods, and media for automatically detecting blood abnormalities using images of individual blood cells
CN117197054A (zh) * 2023-08-22 2023-12-08 盐城工学院 一种基于人工智能的显示面板坏点检测方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104749184A (zh) * 2013-12-31 2015-07-01 研祥智能科技股份有限公司 自动光学检测方法和系统
US9092842B2 (en) * 2011-08-04 2015-07-28 Sharp Laboratories Of America, Inc. System for defect detection and repair
CN108846841A (zh) * 2018-07-02 2018-11-20 北京百度网讯科技有限公司 显示屏质量检测方法、装置、电子设备及存储介质
CN108961238A (zh) * 2018-07-02 2018-12-07 北京百度网讯科技有限公司 显示屏质量检测方法、装置、电子设备及存储介质
CN109064446A (zh) * 2018-07-02 2018-12-21 北京百度网讯科技有限公司 显示屏质量检测方法、装置、电子设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6542830B1 (en) * 1996-03-19 2003-04-01 Hitachi, Ltd. Process control system
US8995747B2 (en) * 2010-07-29 2015-03-31 Sharp Laboratories Of America, Inc. Methods, systems and apparatus for defect detection and classification
US9922269B2 (en) * 2015-06-05 2018-03-20 Kla-Tencor Corporation Method and system for iterative defect classification
EP3336608A1 (en) * 2016-12-16 2018-06-20 ASML Netherlands B.V. Method and apparatus for image analysis
WO2018208791A1 (en) * 2017-05-08 2018-11-15 Aquifi, Inc. Systems and methods for inspection and defect detection using 3-d scanning
US20190318469A1 (en) * 2018-04-17 2019-10-17 Coherent AI LLC Defect detection using coherent light illumination and artificial neural network analysis of speckle patterns

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9092842B2 (en) * 2011-08-04 2015-07-28 Sharp Laboratories Of America, Inc. System for defect detection and repair
CN104749184A (zh) * 2013-12-31 2015-07-01 研祥智能科技股份有限公司 自动光学检测方法和系统
CN108846841A (zh) * 2018-07-02 2018-11-20 北京百度网讯科技有限公司 显示屏质量检测方法、装置、电子设备及存储介质
CN108961238A (zh) * 2018-07-02 2018-12-07 北京百度网讯科技有限公司 显示屏质量检测方法、装置、电子设备及存储介质
CN109064446A (zh) * 2018-07-02 2018-12-21 北京百度网讯科技有限公司 显示屏质量检测方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
US20220343481A1 (en) 2022-10-27
CN114175093A (zh) 2022-03-11
US11900589B2 (en) 2024-02-13
US20240119584A1 (en) 2024-04-11

Similar Documents

Publication Publication Date Title
CN109902732B (zh) 车辆自动分类方法及相关装置
CN111310862B (zh) 复杂环境下基于图像增强的深度神经网络车牌定位方法
CN107016413B (zh) 一种基于深度学习算法的烟叶在线分级方法
CN106875373B (zh) 基于卷积神经网络剪枝算法的手机屏幕mura缺陷检测方法
CN109934805B (zh) 一种基于低照度图像和神经网络的水污染检测方法
US20240119584A1 (en) Detection method, electronic device and non-transitory computer-readable storage medium
CN114663346A (zh) 一种基于改进YOLOv5网络的带钢表面缺陷检测方法
CN112819821B (zh) 一种细胞核图像检测方法
CN116012291A (zh) 工业零件图像缺陷检测方法及系统、电子设备和存储介质
CN116843650A (zh) 融合aoi检测与深度学习的smt焊接缺陷检测方法及系统
WO2024021461A1 (zh) 缺陷检测方法及装置、设备、存储介质
CN114463843A (zh) 一种基于深度学习的多特征融合鱼类异常行为检测方法
CN117011563A (zh) 基于半监督联邦学习的道路损害巡检跨域检测方法及系统
CN116342536A (zh) 基于轻量化模型的铝带材表面缺陷检测方法、系统及设备
CN115240259A (zh) 一种基于yolo深度网络的课堂环境下人脸检测方法及其检测系统
CN111160100A (zh) 一种基于样本生成的轻量级深度模型航拍车辆检测方法
CN114549414A (zh) 一种针对轨道数据的异常变化检测方法及系统
CN113962980A (zh) 基于改进yolov5x的玻璃容器瑕疵检测方法及系统
CN117690128A (zh) 胚胎细胞多核目标检测系统、方法和计算机可读存储介质
CN113313678A (zh) 一种基于多尺度特征融合的精子形态学自动分析方法
JP7123306B2 (ja) 画像処理装置及び画像処理方法
Abilash et al. Currency recognition for the visually impaired people
CN115641474A (zh) 基于高效学生网络的未知类型缺陷检测方法与装置
CN111582057B (zh) 一种基于局部感受野的人脸验证方法
CN112070060A (zh) 识别年龄的方法、年龄识别模型的训练方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937247

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20937247

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.06.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20937247

Country of ref document: EP

Kind code of ref document: A1