CN113657539A - Display panel micro-defect detection method based on two-stage detection network


Info

Publication number: CN113657539A
Application number: CN202110979004.9A
Authority: CN (China)
Legal status: Pending
Prior art keywords: weighted, network, feature, result, defect
Other languages: Chinese (zh)
Inventors: 王伟波 (Wang Weibo), 叶淑娇 (Ye Shujiao), 熊鹏博 (Xiong Pengbo), 谭久彬 (Tan Jiubin)
Current and original assignee: Harbin Institute of Technology
Priority/filing date: 2021-08-25
Publication date: 2021-11-16

Classifications

    • G06F 18/214: Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/253: Pattern recognition; analysing; fusion techniques of extracted features
    • G06N 3/045: Computing arrangements based on biological models; neural networks; architecture; combinations of networks
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods


Abstract

The invention relates to the technical field of display panel defect detection and discloses a display panel micro-defect detection method based on a two-stage detection network, comprising the following steps: (1) improving the network structure with a weighted feature fusion network and the GA-RPN method; (2) expanding the data set with the Mosaic data enhancement method to improve the model's handling of few-sample and small defects, inputting the enhanced training set into the weighted-feature-fusion Faster R-CNN network for training, and obtaining model parameters for the panel defect data set; (3) inputting defective panel images into the weighted-feature-fusion Faster R-CNN model for evaluation and testing. In this method, the weighted feature fusion effectively strengthens the connection between deep and shallow features, the learnable weight parameters improve the model's adaptability to different backgrounds, and the GA-RPN method reduces the number of anchor boxes while effectively improving defect detection accuracy.

Description

Display panel micro-defect detection method based on two-stage detection network
Technical Field
The invention relates to the technical field of display panel defect detection, in particular to a display panel micro-defect detection method based on a two-stage detection network.
Background
As a fundamental industry supporting the development of the information industry, the display industry has always occupied a significant position in manufacturing. In recent years, with the rapid development of information technology, the global display panel industry has grown quickly, and improving product yield has become a focus of competition among manufacturers. Automatic Optical Inspection (AOI) uses optical imaging to acquire an image of the inspected target and detects defects through image processing and pattern recognition algorithms.
Defect detection methods in AOI technology mainly comprise the difference method, statistical methods and spectral methods. The difference method judges defect areas by threshold-segmenting the difference between a defect-free template image and the image under test. Statistical methods extract features from the image data, or reduce its dimensionality, and feed the extracted feature information into a classifier to decide whether defects exist or to identify the defect type. Spectral methods transform the image signal from the spatial domain to the frequency domain by Fourier, Gabor, wavelet or discrete cosine transforms, remove the frequency components of the repetitive textured background, reconstruct the spatial-domain image by the corresponding inverse transform to obtain an image that retains the defect information while filtering out the repetitive background texture, and finally judge defects by image segmentation. These methods increase the difference between defect and background in the spatial or frequency domain and then obtain the defect image by gray-threshold segmentation.
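As a minimal illustration of the difference method described above, the sketch below subtracts an aligned defect-free template from the image under test and gray-threshold-segments the result; the function name, threshold value and OpenCV pipeline are illustrative assumptions rather than details taken from this patent.

```python
import cv2

def detect_by_difference(template_path, test_path, thresh=30):
    # Alignment is assumed done; both images are grayscale and the same size.
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(template, test)  # pixel-wise difference image
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)  # gray-threshold segmentation
    # Connected regions of the mask are the candidate defect areas.
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i]) for i in range(1, n)]  # (x, y, w, h, area) per region
```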
However, as panel circuits grow more complex and larger, detecting micro defects against multiple complex-texture backgrounds with conventional methods is challenging, whereas the powerful feature extraction capability of convolutional networks removes the need to tune parameters and hand-craft features for each image, as conventional methods require. Defect detection algorithms based on deep learning are now widely applied to fabrics, paper, circuit boards, workpiece surfaces, concrete cracks and other fields.
To improve detection efficiency for multiple paper defects, patent CN 111192264A feeds defect images to an improved Faster R-CNN and then denoises the output with an optimized non-local means algorithm, achieving accurate localization and segmentation of the defect images. Tested on 1000 paper-defect images and compared with Faster R-CNN, the miss rates for damage, stain, fold and impurity defects were reduced by 0.2%, 2.1%, 2.9% and 0.2% respectively, without a noticeable increase in detection time.
To improve the detection accuracy of aerospace electronic solder joints, patent CN 111986187A replaces the seven convolutional layers in Tiny-YOLOv3 with the lightweight network MobileNet and trains the network with solder-joint infrared images of known defect type. Compared with the original Tiny-YOLOv3 network, the MobileNet-improved model converges faster and its mAP (mean Average Precision) improves by 21.62%.
Although deep learning methods are widely applied in the industrial inspection scenarios above, in panel defect detection the complex textured background means that detecting micro defects across multiple backgrounds still suffers from low accuracy.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a display panel micro-defect detection method based on a two-stage detection network, which achieves accurate localization and classification of micro defects on various panel backgrounds and effectively improves the efficiency and accuracy of panel defect detection.
In order to achieve this purpose, the invention is realized by the following technical scheme. A display panel micro-defect detection method based on a two-stage detection network mainly comprises the following steps:
S1: acquiring a panel defect image data set.
S2: based on the Faster R-CNN network structure, using a ResNet50 network as the backbone feature extraction network to obtain a plurality of feature layers, fusing the feature layers with a weighted feature fusion network, and completing the construction of the weighted-feature-fusion Faster R-CNN network structure in combination with the GA-RPN method.
S3: training the weighted-feature-fusion Faster R-CNN network with the data-enhanced panel defect data set to obtain trained network model parameters.
S4: inputting the panel image with defects to be detected into the weighted-feature-fusion Faster R-CNN network structure and loading the trained network model parameters to complete panel defect detection.
Further, in step S1, defective panel images are collected, annotated with labeling software, and converted into the COCO data set format.
Further, in step S2, the outputs of the last convolutional layers of conv2_x, conv3_x, conv4_x and conv5_x of ResNet50 are selected as the feature layers for weighted feature fusion. In the present invention, these last convolutional layers are denoted C2, C3, C4 and C5 respectively, where conv5_x carries the deepest features and conv2_x the shallowest.
Further, the weighted feature fusion network in step S2 is formed as follows:
First, the feature layers C2, C3, C4 and C5 are reduced in dimension by 1 × 1 convolutions, giving C2_, C3_, C4_ and C5_. C5_ is then processed by a 3 × 3 convolution, and the result is output as the C5 weighted feature fusion result P5. At the same time, P5 is 2× upsampled and added, with weights, to the 1 × 1-convolved C4 layer C4_: with weight α5 on the upsampled P5 and weight β4 on C4_, the weighted sum is α5·P5 + β4·C4_, and a 3 × 3 convolution removes the aliasing introduced by fusion, yielding the weighted feature fusion result P4. Likewise, the 2× upsampled P4 (weight α4) is combined with C3_ (weight β3) as α4·P4 + β3·C3_ and smoothed by a 3 × 3 convolution to give the fusion result P3, and the 2× upsampled P3 (weight α3) is combined with C2_ (weight β2) as α3·P3 + β2·C2_ and smoothed by a 3 × 3 convolution to give the fusion result P2. The weight parameters α3, α4, α5, β2, β3, β4 take values in [0,1]; they are learnable and are updated during back-propagation.
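A minimal PyTorch sketch of the weighted fusion scheme just described follows. The channel widths match ResNet50's C2 to C5 stages; since the patent does not state how the weights are kept inside [0,1], a sigmoid over raw learnable parameters is assumed here as one plausible choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFeatureFusion(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # 1x1 convs reduce C2..C5 to a common channel width
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, 1) for c in in_channels)
        # 3x3 convs suppress the aliasing introduced by fusion
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, 3, padding=1)
            for _ in in_channels)
        self.raw_alpha = nn.Parameter(torch.zeros(3))  # alpha5, alpha4, alpha3
        self.raw_beta = nn.Parameter(torch.zeros(3))   # beta4, beta3, beta2

    def forward(self, c2, c3, c4, c5):
        c2_, c3_, c4_, c5_ = [l(c) for l, c in zip(self.lateral, (c2, c3, c4, c5))]
        a = torch.sigmoid(self.raw_alpha)  # learnable weights constrained to [0,1]
        b = torch.sigmoid(self.raw_beta)   # (constraint choice is an assumption)
        p5 = self.smooth[3](c5_)
        p4 = self.smooth[2](a[0] * F.interpolate(p5, scale_factor=2) + b[0] * c4_)
        p3 = self.smooth[1](a[1] * F.interpolate(p4, scale_factor=2) + b[1] * c3_)
        p2 = self.smooth[0](a[2] * F.interpolate(p3, scale_factor=2) + b[2] * c2_)
        return p2, p3, p4, p5
```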
Further, the GA-RPN network in step S2 is configured as follows:
The main function of the GA-RPN network is to predict, through a location prediction branch and a shape prediction branch, the confidence that the receptive field corresponding to each pixel of the feature map contains a target, together with the corresponding width and height. A position whose confidence exceeds a set threshold is treated as containing a target. The position and shape of a target in the original image are usually approximated by a rectangular anchor box, so the role of GA-RPN is to generate anchor boxes at positions where targets are likely to appear, with shapes close to the targets' bounding rectangles. The GA-RPN network consists of a location prediction branch, a shape prediction branch and a feature adaptation module. The location branch applies a single-channel 1 × 1 convolution to the input feature map and maps the result to [0,1] with a sigmoid function, giving the probability that a target exists at each position of the feature map and thereby forming a location probability matrix. Shape prediction follows location prediction. The shape branch represents a target's shape by its width w and height h, so the task reduces to predicting w and h; however, target sizes within an image can differ greatly, and experience shows that predicting w and h directly can make the network unstable. The shape branch therefore first generates a two-channel (dw, dh) prediction map through a two-channel 1 × 1 convolution layer and then outputs the predictions as w = σ·s·e^dw and h = σ·s·e^dh, where σ is an empirical scale factor set according to the characteristics of the data set when the network is constructed, s is the stride of the sliding convolution, and dw and dh take values in [-1, 1]. However, anchor boxes predicted at different positions have different widths and heights, so the receptive fields of the resulting anchors are inconsistent. To remove this inconsistency, a feature adaptation module adjusts the generated anchor boxes by applying a deformable convolution to the original feature map to obtain a new feature map. The module is a small convolutional neural network built from a 3 × 3 deformable convolution, and the process can be written as f_i' = N_T(f_i, w_i, h_i), where N_T denotes the feature adaptation module, f_i is the feature at the i-th position, and (w_i, h_i) is the shape of the anchor box at that position. The output of the shape branch is first passed through a 1 × 1 convolution to obtain the shape offsets, and a 3 × 3 deformable convolution with these offsets is then applied to the original feature map to obtain f_i', i.e. the adjusted features; the adjusted features of the anchor-box regions can then be classified and regressed for anchor width and height.
Further, after the anchor boxes are generated, the network structure is optimized end-to-end with a multi-task loss function comprising a classification loss L_cls, a regression loss L_reg, an anchor location loss L_loc and an anchor shape prediction loss L_shape. The total loss is expressed as
L = λ1·L_loc + λ2·L_shape + L_cls + L_reg,
where λ1 and λ2 are the weights of the loss terms. To reduce the influence of the imbalance between target and background regions, the location loss L_loc is built with the Focal Loss function, and the shape prediction loss is
L_shape = L1(1 - min(w/w_g, w_g/w)) + L1(1 - min(h/h_g, h_g/h)),
where L1 is the Smooth L1 Loss, w_g and h_g are the width and height of the ground-truth bounding box matched to the anchor box, and w and h are the predicted values screened during training.
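Read literally, the loss terms above can be sketched as follows; the shape-loss formula is reconstructed from the Smooth L1 and (w_g, h_g) description (the patent's own equation figure is not reproduced in this text), and the lambda defaults follow the embodiment later in the document, so treat both as assumptions.

```python
import torch
import torch.nn.functional as F

def shape_loss(w, h, wg, hg):
    # L_shape = L1(1 - min(w/wg, wg/w)) + L1(1 - min(h/hg, hg/h)),
    # with L1 the Smooth L1 loss taken against a zero target.
    def smooth_l1(x):
        return F.smooth_l1_loss(x, torch.zeros_like(x))
    return (smooth_l1(1.0 - torch.minimum(w / wg, wg / w)) +
            smooth_l1(1.0 - torch.minimum(h / hg, hg / h)))

def total_loss(l_cls, l_reg, l_loc, l_shape, lam1=1.0, lam2=0.1):
    # L = lambda1 * L_loc + lambda2 * L_shape + L_cls + L_reg
    return lam1 * l_loc + lam2 * l_shape + l_cls + l_reg
```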
Further, in step S3, to overcome the insufficient training and poor detection of tiny defects that a small number of samples may cause, Mosaic data enhancement is performed to expand the original data set when the picture data are loaded. Mosaic enhancement randomly selects several images from the training set (4 by default) and crops and stitches them into a new image; besides enlarging the data volume, this effectively strengthens the model's ability to recognize small targets. Transfer learning is then performed with ResNet50 parameters pre-trained on the COCO data set.
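A minimal sketch of the Mosaic step, assuming four source images and a 600 × 600 canvas; the bounding-box remapping that a real pipeline needs is omitted for brevity.

```python
import random
import cv2
import numpy as np

def mosaic(images, size=600):
    # Tile four source images into one size x size canvas around a random
    # split point; label remapping is omitted here.
    assert len(images) == 4
    cx = random.randint(size // 4, 3 * size // 4)
    cy = random.randint(size // 4, 3 * size // 4)
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    regions = [(0, 0, cx, cy), (cx, 0, size, cy),
               (0, cy, cx, size), (cx, cy, size, size)]
    for img, (x1, y1, x2, y2) in zip(images, regions):
        canvas[y1:y2, x1:x2] = cv2.resize(img, (x2 - x1, y2 - y1))
    return canvas
```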
Further, in step S4, the model parameters with the best performance are saved, and the performance of the model is tested with actually captured images.
Compared with the prior art, the invention has the following beneficial effects:
1) The invention provides an improved Faster R-CNN based on a weighted feature fusion network that can identify panel defects against complex textured backgrounds, avoiding the elaborate preprocessing and feature extraction required by traditional methods and offering an approach to automatic, intelligent panel defect detection.
2) The Mosaic data enhancement method substantially increases the training data and improves the model's recognition of small targets, which benefits the detection of micro defects.
3) The proposed weighted feature fusion method effectively strengthens the connection between deep and shallow features and adjusts their fusion ratio through the weight parameters. Compared with the traditional feature pyramid fusion method, it fuses features at different resolutions more effectively, giving the model stronger adaptability to defects of different sizes on different textured backgrounds.
4) The method replaces the traditional RPN with the GA-RPN method, effectively improving the efficiency of anchor-box generation and markedly improving defect detection capability.
Drawings
FIG. 1 is a schematic flow chart of the display panel micro-defect detection method based on a two-stage detection network according to the present invention;
FIG. 2 is the network structure diagram of the weighted-feature-fusion Faster R-CNN in the present invention;
FIG. 3 shows the Mosaic data enhancement process in this embodiment;
FIG. 4 shows an example of panel defect detection results in this embodiment.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
Examples
As shown in FIG. 1, the invention discloses a display panel micro-defect detection method based on a two-stage detection network, comprising the following steps:
1) Acquiring a panel defect data set
In this embodiment, at the re-inspection stage of the panel inspection process, defect sizes range from 2 to 100 μm, so a wide-field microscope system magnifies the defects 20× and the defect images are collected. The images actually acquired in this embodiment involve two defect backgrounds, a complex texture background and a simple texture background, and contain 4 defect categories in total: film defects, corrosion, foreign matter and circuit defects. To test the model's adaptability to various backgrounds, an open-source PCB data set is added, containing 6 defect categories: missing hole, mouse bite, spur, short circuit, open circuit and spurious copper.
2) Annotating data
Defect detection can be modeled as an object detection problem in computer vision. Object detection requires determining the position and shape of targets in the image, so when the data set is made, XML annotation files are generated from the defect positions in the acquired original images and serve as ground truth during model training.
Further, in this embodiment, the LabelImg software is used to mark the defect extents manually, generating one XML file per picture.
3) Partitioning training and test sets
The data set is first divided into an 85% training set and a 15% test set; the input images are then resized uniformly to 600 × 600, and the training set is converted into the common COCO-format data set.
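A sketch of the LabelImg-XML to COCO conversion step mentioned above; the field names follow the COCO specification, while the paths and category list are assumptions.

```python
import glob
import json
import xml.etree.ElementTree as ET

def voc_to_coco(xml_dir, out_json, categories):
    # categories: list of defect class names, e.g. the 4 panel + 6 PCB classes.
    cat_ids = {name: i + 1 for i, name in enumerate(categories)}
    images, annotations, ann_id = [], [], 1
    for img_id, path in enumerate(sorted(glob.glob(f"{xml_dir}/*.xml")), start=1):
        root = ET.parse(path).getroot()
        size = root.find("size")
        images.append({"id": img_id, "file_name": root.findtext("filename"),
                       "width": int(size.findtext("width")),
                       "height": int(size.findtext("height"))})
        for obj in root.iter("object"):
            box = obj.find("bndbox")
            x1, y1 = float(box.findtext("xmin")), float(box.findtext("ymin"))
            x2, y2 = float(box.findtext("xmax")), float(box.findtext("ymax"))
            annotations.append({"id": ann_id, "image_id": img_id,
                                "category_id": cat_ids[obj.findtext("name")],
                                "bbox": [x1, y1, x2 - x1, y2 - y1],
                                "area": (x2 - x1) * (y2 - y1), "iscrowd": 0})
            ann_id += 1
    with open(out_json, "w") as f:
        json.dump({"images": images, "annotations": annotations,
                   "categories": [{"id": i, "name": n} for n, i in cat_ids.items()]}, f)
```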
4) Construction of fast R-CNN network structure based on weighted feature fusion
In this embodiment, as shown in fig. 2, the network is built by optimizing the Faster R-CNN structure with ResNet50 as the backbone feature extraction network. The outputs of the last convolutional layers of conv2_x, conv3_x, conv4_x and conv5_x of ResNet50 are used as the feature layers for weighted feature fusion. Here conv5_x carries the deepest features and conv2_x the shallowest; the last convolutional layers of conv2_x, conv3_x, conv4_x and conv5_x are denoted C2, C3, C4 and C5, with 256, 512, 1024 and 2048 channels respectively in this embodiment.
Further, the weighted feature fusion network in this embodiment is formed as follows:
First, the feature layers C2, C3, C4 and C5 are reduced in dimension by 1 × 1 convolutions, giving C2_, C3_, C4_ and C5_. C5_ is then processed by a 3 × 3 convolution, and the result is output as the C5 weighted feature fusion result P5. At the same time, P5 is 2× upsampled and added, with weights, to the 1 × 1-convolved C4 layer C4_: with weight α5 on the upsampled P5 and weight β4 on C4_, the weighted sum is α5·P5 + β4·C4_, and a 3 × 3 convolution removes the aliasing introduced by fusion, yielding the weighted feature fusion result P4. Likewise, the 2× upsampled P4 (weight α4) is combined with C3_ (weight β3) as α4·P4 + β3·C3_ and smoothed by a 3 × 3 convolution to give the fusion result P3, and the 2× upsampled P3 (weight α3) is combined with C2_ (weight β2) as α3·P3 + β2·C2_ and smoothed by a 3 × 3 convolution to give the fusion result P2. In this embodiment, P2, P3, P4 and P5 each have 256 channels; the weight parameters α3, α4, α5, β2, β3, β4 take values in [0,1], are learnable, and are updated during back-propagation.
Further, the GA-RPN network in this embodiment is configured as follows:
The GA-RPN network consists of a location prediction branch, a shape prediction branch and a feature adaptation module. The location branch applies a single-channel 1 × 1 convolution to the input feature map and maps the result to [0,1] with a sigmoid function, giving the probability that a target exists at each position of the feature map and thereby forming a location probability matrix. After location prediction, shape prediction first generates a two-channel (dw, dh) prediction map through a two-channel 1 × 1 convolution layer and then outputs the predictions as w = σ·s·e^dw and h = σ·s·e^dh, where σ is an empirical scale factor set according to the characteristics of the data set when the network is constructed, s is the stride of the sliding convolution, and dw and dh take values in [-1, 1]; in this embodiment, σ = 4 and s = 16. The feature adaptation module is a small convolutional neural network built from a 3 × 3 deformable convolution; the process can be written as f_i' = N_T(f_i, w_i, h_i), where N_T denotes the feature adaptation module, f_i is the feature at the i-th position, and (w_i, h_i) is the shape of the anchor box at that position. The shape offsets are first obtained from the shape-branch output through a 1 × 1 convolution, and a 3 × 3 deformable convolution with these offsets is then applied to the original feature map to obtain f_i', i.e. the adjusted features; the adjusted features of the anchor-box regions can then be classified and regressed for anchor width and height.
Further, after the anchor boxes are generated, the network structure is optimized end-to-end with a multi-task loss function comprising a classification loss L_cls, a regression loss L_reg, an anchor location loss L_loc and an anchor shape prediction loss L_shape. The total loss is expressed as L = λ1·L_loc + λ2·L_shape + L_cls + L_reg, with λ1 = 1 and λ2 = 0.1 in this embodiment. The location loss L_loc is built with the Focal Loss function, and the shape prediction loss is L_shape = L1(1 - min(w/w_g, w_g/w)) + L1(1 - min(h/h_g, h_g/h)), where L1 is the Smooth L1 Loss, w_g and h_g are the width and height of the ground-truth bounding box matched to the anchor box, and w and h are the predicted values screened during training.
When training the network model, Mosaic data enhancement is performed first, as shown in fig. 3: a batch of data is taken out (batch size 32 in this embodiment), 4 pictures are drawn at random from the batch and cropped and stitched at random positions into a new image, and this process is repeated batch-size times to obtain the Mosaic-enhanced batch data.
The pre-training parameters of ResNet50 on the COCO data set are then loaded, and the Mosaic-enhanced batch data are input into the weighted-feature-fusion Faster R-CNN network model for training. In this embodiment, training runs for 50 epochs over the data set with 270 iterations per epoch; stochastic gradient descent (SGD) is used as the optimizer with an initial learning rate of 0.002, the learning rate decays to 10% of its value every 500 training iterations, and the momentum is initialized to 0.9.
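These stated settings translate directly into, for example, the following PyTorch training skeleton, where `model` and `train_loader` are placeholders for the weighted-feature-fusion Faster R-CNN and the Mosaic-enhanced data loader.

```python
import torch

# Settings as stated above: SGD, lr 0.002, momentum 0.9,
# lr decayed to 10% every 500 iterations.
optimizer = torch.optim.SGD(model.parameters(), lr=0.002, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.1)

for epoch in range(50):                   # 50 epochs, 270 iterations each
    for images, targets in train_loader:  # batch size 32
        loss = model(images, targets)     # total multi-task loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()                  # per-iteration decay schedule
```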
In this example, the model performance was evaluated using the COCO dataset evaluation index.
In this embodiment, the model parameters are saved every 10 iterations, and the parameters with the best model performance are finally kept for the actual test.
5) Model testing
In this embodiment, the model is tested on the 15% test set; part of the test results are shown in fig. 4, and the model performance is evaluated with the COCO data set evaluation metrics.
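Evaluation with the COCO metrics mentioned above can be sketched with pycocotools as follows; the file names are placeholders.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/test.json")        # ground-truth annotations
coco_dt = coco_gt.loadRes("detections.json")   # model detections in COCO format
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()   # prints AP/AR, including mAP@[.5:.95]
```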
TABLE 1. Comparison of Faster R-CNN with the weighted-feature-fusion Faster R-CNN

Network model                               mAP (%)   Detection time (s/image)
Faster R-CNN                                91.8      0.052
Faster R-CNN with weighted feature fusion   94.2      0.065
TABLE 2. Ablation experiment of the weighted-feature-fusion Faster R-CNN

Method                              mAP (%)   Detection time (s/image)   Up (%)
Faster R-CNN (ResNet50 + FPN)       91.9      0.052                      -
+ GA-RPN                            93.2      0.053                      +1.3
+ weighted feature fusion network   94.2      0.065                      +1.0
As shown in Table 1, compared with the conventional Faster R-CNN structure, the mAP in this embodiment improves by 2.4 percentage points; the detection time increases slightly because of the greater network complexity, but remains within an acceptable range. The ablation results in Table 2 show that both the GA-RPN method and the weighted feature fusion network improve model performance.
The method can be packaged as a network service program and applied to locating and identifying defects in panel defect detection AOI equipment.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and those skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A display panel micro-defect detection method based on a two-stage detection network, characterized by comprising the following steps:
S1: acquiring a panel defect image data set;
S2: based on the Faster R-CNN network structure, using a ResNet50 network as the backbone feature extraction network to obtain a plurality of feature layers, fusing the feature layers with a weighted feature fusion network, and completing the construction of the weighted-feature-fusion Faster R-CNN network structure in combination with the GA-RPN method;
S3: training the weighted-feature-fusion Faster R-CNN network with the data-enhanced panel defect data set to obtain trained network model parameters;
S4: inputting the panel image with defects to be detected into the weighted-feature-fusion Faster R-CNN network structure and loading the trained network model parameters to complete panel defect detection.
2. The method for detecting micro-defects of a display panel based on a two-stage detection network according to claim 1, characterized in that in step S2, feature fusion is performed using the weighted feature fusion network and anchor boxes are generated using the GA-RPN method.
3. The method of claim 2, characterized in that adjacent feature layers are fused by weighted fusion with learnable weights.
4. The method as claimed in claim 2, wherein the weighted feature fusion network is obtained as follows:
performing dimensionality reduction on the four feature layers C2, C3, C4 and C5 output by ResNet50 with 1 × 1 convolutions; processing the reduced C5 with a 3 × 3 convolution and outputting the result as the C5 weighted feature fusion result P5; meanwhile 2× upsampling P5 and adding it, with weights, to the 1 × 1-convolved C4 layer C4_, the weight of the upsampled P5 being α5 and that of C4_ being β4, so that the weighted sum is α5·P5 + β4·C4_, which is convolved by 3 × 3 to obtain the weighted feature fusion result P4; adding the 2× upsampled P4, with weight α4, to the 1 × 1-convolved C3 layer C3_, with weight β3, so that the weighted sum is α4·P4 + β3·C3_, which is convolved by 3 × 3 to obtain the weighted feature fusion result P3; and adding the 2× upsampled P3, with weight α3, to the 1 × 1-convolved C2 layer C2_, with weight β2, so that the weighted sum is α3·P3 + β2·C2_, which is convolved by 3 × 3 to obtain the weighted feature fusion result P2.
5. The method of claim 4, characterized in that the weight parameters α3, α4, α5, β2, β3 and β4 take values in [0,1] and are updated during back-propagation.
6. The method as claimed in claim 1, wherein in step S2 the weighted feature fusion outputs four feature layers P2, P3, P4 and P5, which are respectively input to the GA-RPN network to generate anchor boxes.
7. The method for detecting micro-defects of a display panel based on a two-stage detection network of claim 1, wherein in step S2 the weight parameters of the GA-RPN network are shared between feature layers.
8. The method for detecting micro-defects of a display panel based on a two-stage inspection network as claimed in claim 1, wherein in step S3, the data enhancement is performed on the panel defect data set by using a Mosaic data enhancement method.
9. The method of claim 1, characterized in that when the weighted-feature-fusion Faster R-CNN network is trained, the training batch size is set to 32, the number of training epochs to 50, and the loss function is optimized with the stochastic gradient descent algorithm.
10. The method as claimed in any one of claims 2 to 9, wherein the data set is a mixture of images actually acquired in an industrial field and an open-source data set; during training, the defect images in the data set are randomly divided into a training set and a test set with no repeated samples between them, the training set accounting for 85% of all data and the test set for 15%.

Priority Applications (1)

Application Number: CN202110979004.9A
Priority Date: 2021-08-25
Filing Date: 2021-08-25
Title: Display panel micro-defect detection method based on two-stage detection network

Publications (1)

Publication Number: CN113657539A
Publication Date: 2021-11-16

Family

ID: 78481915
Family Applications (1): CN202110979004.9A (CN113657539A, pending, China), priority/filing date 2021-08-25.

Cited By (1)

CN114943843A * (priority date 2022-06-14, published 2022-08-26, Hebei University of Technology): Welding defect detection method based on shape perception

* Cited by examiner


Legal Events

Code   Title
PB01   Publication
SE01   Entry into force of request for substantive examination