CN111681213A - Light guide plate line scratch defect detection method based on deep learning - Google Patents
- Publication number
- CN111681213A (application CN202010445227.2A)
- Authority
- CN
- China
- Legal status: Withdrawn (the status listed by Google Patents is an assumption, not a legal conclusion)
Classifications
- G06T7/0004 — Industrial image inspection
- G01N21/8851 — Scan or image signal processing specially adapted for detecting defects
- G06N3/045 — Neural networks; combinations of networks
- G06N3/084 — Learning methods; backpropagation, e.g. using gradient descent
- G01N2021/8887 — Defect detection based on image processing techniques
- G06T2207/10004 — Still image; photographic image
- G06T2207/20024 — Filtering details
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30108 — Industrial image inspection
Abstract
The invention discloses a deep-learning-based method for detecting line scratch defects on light guide plates, comprising the following steps: acquiring a light guide plate image, gray-level transformation, edge sharpening, gray-range adjustment, defect repair, image differencing, global threshold segmentation, connected domain segmentation, feature screening, and defect display, thereby producing and displaying a defect identification image. The method can quickly and accurately distinguish scratches and other defects on the light guide plate from the acquired image.
Description
Technical Field
The invention belongs to the field of deep-learning-based image recognition, and in particular relates to a deep-learning-based method for detecting line scratch defects on light guide plates.
Background
A light guide plate (LGP) is a high-tech optical component made from optical acrylic or polycarbonate (PC) sheet with high reflectivity and negligible light absorption; light guide points are formed on the bottom surface of the acrylic sheet by V-shaped cross-grid engraving, laser engraving, or UV screen printing. During production, influenced by factors such as operator error on production equipment and the machining and installation processes, line scratches of various sizes and brightness are inevitably produced on the surface of the light guide plate by sharp external forces. Faint line scratches in regions where light guide points are sparse are particularly hard to detect: their lengths vary, the difference between their average gray value and the background gray value of the light guide plate is small, their imaging area under an industrial camera is extremely small, and their gray values are low, so they are difficult to spot with the naked eye and difficult to catch by manual inspection. Line scratch defects arise in two stages, the front process and the post-process. Scratches produced in the front process fall mainly into three cases: (1) the mold core surface is accidentally scratched during installation of the mold core; (2) the mold core surface is accidentally scratched during abnormal mold handling (such as removing a slider or maintaining the mold); (3) the mold core surface is scratched while being wiped, by unclean cotton, fingernails, and the like.
Line scratches arising in the post-process likewise fall into three cases: (1) unreasonable equipment setup: a part of the post-processing equipment scratches the product surface during some post-processing action; (2) where the light guide plate surface contacts a part of the post-processing equipment under moving friction (such as a cutting platform, polishing platform, or cleaning roller), an unclean contact surface (particles, foreign matter, and the like) increases friction against the light guide plate and scratches the product surface; (3) the product surface is scratched through an inspector's non-standard handling or carelessness (e.g., the hair-removal knife scratching the product while removing a hair). Defects in a light guide plate impair the use of the equipment it goes into: the equipment's efficiency, luminous uniformity, and lifetime all suffer. Moreover, selling defective light guide plates seriously damages an enterprise's reputation and harms its long-term development. It is therefore especially important to quality-inspect the light guide plates produced and to reject inferior products.
At present, domestic enterprises rely mainly on manual inspection to detect light guide plate defects, which has the following problems: (1) long periods in a poor working environment seriously damage workers' eyesight; (2) the inspection technique is complex and difficult for staff to master; (3) manual inspection is highly susceptible to the external environment, so detection accuracy is hard to guarantee; (4) workers judge mainly by eye, so a quantifiable quality standard is difficult to establish.
Most current deep-learning-based surface defect detection relies on supervised representation learning. Although its accuracy is high, training the detection network requires a large defect data set, and real industrial environments cannot supply large numbers of industrial defect samples.
Because the light guide plate defects described above are small, a high-precision line-scan camera is required to acquire images if detection accuracy is to be improved. A light guide plate image is typically about 500 MB, and enterprises require the detection time to stay within 16 seconds, which places very high demands on the detection algorithm. Existing algorithms struggle to meet these practical requirements, so providing a simple and efficient method for detecting line scratch defects on light guide plates is key.
Disclosure of Invention
The invention aims to provide a deep-learning-based method for detecting line scratch defects on light guide plates, so that scratches and other defects can be detected quickly and accurately from an acquired light guide plate image.
To solve the above problems, the invention provides a deep-learning-based light guide plate line scratch defect detection method comprising the following steps:
step 1, collecting light guide plate images: acquiring an image of the light guide plate by adopting a line scanning camera to obtain a high-precision image of the light guide plate;
step 2, gray-level transformation: perform a gray-level transformation on the original image input in step 1, using the following linear transformation to widen the difference between the gray values of the light guide points and line scratch defects in the light guide plate image and the background gray value:
H(x,y)=Mult×K(x,y)+Add
where K(x, y) is the gray value of the original image at position (x, y), Mult is the gray expansion factor, Add is the gray offset, and H(x, y) is the transformed gray value at position (x, y);
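As an illustrative sketch only (not the patent's implementation), the linear transform H(x, y) = Mult × K(x, y) + Add can be applied to a plain nested-list gray image in Python, with clipping to the 8-bit range:

```python
def gray_transform(img, mult, add):
    """Apply H(x,y) = Mult*K(x,y) + Add, clipped to the 8-bit range [0, 255]."""
    return [[max(0, min(255, int(mult * k + add))) for k in row] for row in img]

# Stretching contrast: dark scratch pixels and bright light guide points move apart.
stretched = gray_transform([[10, 60, 200]], 2.0, 5)
```

In practice Mult and Add would be tuned so that faint scratches separate from the background without saturating the light guide points.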
step 3, edge sharpening: sharpen the edges of the gray-transformed image from step 2 with an adaptive LoG filter, in whose expression max(K) is the global gray maximum, min(K) the global gray minimum, K(x, y) the gray value at image coordinate (x, y), K(x′, y′) the gray value at a coordinate (x′, y′) adjacent to (x, y), and σ the variance (also called the scale factor);
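The adaptive LoG expression itself is not reproduced in the source. As a simplified sketch of only the underlying sharpening operation (a plain discrete Laplacian, omitting the adaptive gray-distance weighting the patent describes):

```python
def laplacian(img):
    """Discrete 4-neighbour Laplacian; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x]
                         + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
    return out
```

A flat region gives zero response, while an isolated bright or dark pixel (such as a scratch edge) gives a strong response, which is what the sharpening step exploits.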
step 4, gray range adjustment: adjust the gray levels of the image convolved with the adaptive LoG filter in step 3 to obtain a gray-adjusted image, the adjusted gray value being:

K'(x,y)=K(x,y)+128

where K(x, y) is the gray value at image coordinate (x, y);
step 5, defect repair: feed the gray-range-adjusted image from step 4 into the trained residual convolution autoencoder to obtain a defect-repaired image;
step 6, image differencing: subtract the defect-repaired image from step 5 from the gray-adjusted image from step 4 to obtain a defect-enhanced image:

X(x,y)=X1(x,y)-X2(x,y)

where X1 is the gray-adjusted image from step 4, X2 the defect-repaired image from step 5, and (x, y) the coordinates of corresponding pixels in images X1 and X2;
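A minimal sketch of the differencing step on nested-list images (illustrative only): where the repaired image matches the input, the difference cancels to zero; where a defect was repaired away, a residual remains.

```python
def image_diff(x1, x2):
    """X(x,y) = X1(x,y) - X2(x,y): repaired background cancels, defects remain."""
    return [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(x1, x2)]
```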
step 7, global threshold segmentation: binarize the difference image from step 6 by threshold segmentation according to:

R(x, y) = 255 if th ≥ th0, otherwise 0

where R(x, y) is the threshold decision at pixel (x, y), th is the gray value of the pixel at (x, y), and th0 is the segmentation threshold;
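The binarization can be sketched as follows (an assumption consistent with the definitions above: pixels at or above the threshold th0 are marked 255, the rest 0):

```python
def global_threshold(img, th0):
    """Binarise: pixels with gray value >= th0 become 255 (candidate defect), others 0."""
    return [[255 if v >= th0 else 0 for v in row] for row in img]
```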
step 8, connected domain segmentation: split the result image from step 7 into separate small regions wherever area blocks are not connected, using the eight-connectivity standard;
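Eight-connectivity labelling can be sketched with a simple flood fill; this is an illustrative stand-in for the patent's connected domain segmentation, not its actual implementation:

```python
def label_regions(mask):
    """Split nonzero pixels into 8-connected regions (lists of (x, y) coordinates)."""
    h, w = len(mask), len(mask[0])
    seen, regions = set(), []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and (sx, sy) not in seen:
                stack, region = [(sx, sy)], []
                seen.add((sx, sy))
                while stack:
                    x, y = stack.pop()
                    region.append((x, y))
                    # Visit all 8 neighbours (diagonals included).
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            nx, ny = x + dx, y + dy
                            if (0 <= nx < w and 0 <= ny < h
                                    and mask[ny][nx] and (nx, ny) not in seen):
                                seen.add((nx, ny))
                                stack.append((nx, ny))
                regions.append(region)
    return regions
```

Under eight-connectivity, diagonally touching pixels belong to one region, which matters for thin diagonal scratches.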
step 9, feature screening: the area feature of a region is defined as the number of pixels in the region; for a region R with area A:

A = ∑(x,y)∈R 1

where (x, y) are pixel coordinates;
the region roundness feature is defined as the extent to which the target region approaches a circle, expressed as:

C = 4πS / P²

where P is the region perimeter, S the region area, and C the region roundness (1 for an ideal circle, near 0 for a thin line);
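A sketch of the two screening features; the roundness formula C = 4πS/P² is a reconstruction (the formula image is not reproduced in the source), chosen so that an ideal circle scores 1 and a thin line scores near 0, consistent with the C < 0.05 criterion for scratches:

```python
import math

def region_area(region):
    """A = number of pixels in the region."""
    return len(region)

def roundness(area, perimeter):
    """C = 4*pi*S / P**2: 1.0 for an ideal circle, near 0 for a thin line."""
    return 4 * math.pi * area / perimeter ** 2
```

For a 100 × 1 pixel line (area 100, perimeter ≈ 202), C ≈ 0.03, comfortably below the 0.05 scratch threshold.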
screen by area and roundness to obtain the line scratch defect regions: a region satisfying A > 30 and C < 0.05 is judged to be a line scratch defect, and a defect identification image is generated;
step 10, defect display: display the defect identification image generated in step 9.
As a further refinement of the deep-learning-based light guide plate line scratch defect detection method of the invention, the trained residual convolution autoencoder of step 5 is obtained by the following training steps:
step 5-1, establishing a convolution automatic encoder, wherein the structure comprises: an input layer, a 3 × 3 convolutional layer, a maximum pooling layer, nearest neighbor interpolation upsampling, a 3 × 3 convolutional layer, and an output layer;
step 5-2, improve it into a residual convolution autoencoder: borrowing the idea of residual networks, identity-map the output of each encoder convolutional layer onto the corresponding decoder convolutional layer, the identity mapping being:
H′(x)=F(H(x))+H(x)
where x is the input image data of the autoencoder, H(x) the output of the corresponding encoder layer, F(H(x)) the result computed by the decoder convolutional layer, and H′(x) the output of the decoder convolutional layer.
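The identity mapping H′(x) = F(H(x)) + H(x) reduces to an elementwise addition of the encoder feature map onto the decoder output. A minimal sketch on flat lists (the real network operates on feature-map tensors):

```python
def residual_output(h, f):
    """H'(x) = F(H(x)) + H(x): add the encoder feature map back onto the decoder result."""
    return [fv + hv for fv, hv in zip(f(h), h)]

# If the decoder branch F halves its input, the skip connection restores detail:
out = residual_output([1.0, 2.0], lambda v: [0.5 * x for x in v])
```

The skip connection lets the decoder recover fine light guide point texture that pooling would otherwise destroy, which is why the repaired image matches the background so closely.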
A convolutional layer convolves the output of the previous layer with its kernel:

x(l+2)(u, v) = fa( ∑i ∑j w(l+2)(i, j) · x(l+1)(i + u, j + v) + b(l+2) )

where l+1 denotes the previous layer, l+2 the next layer, b(l+2) the bias of the next layer, (u, v) the origin coordinate and (i + u, j + v) the coordinates of the eight nearest points around it, x(l+1)(i + u, j + v) the value of the previous layer's feature map at (i + u, j + v), w(l+2)(i, j) the weight of the next layer's convolution kernel at position (i, j), x(l+2)(u, v) the value of the next layer's feature map at (u, v), and fa() the activation function, for which the ReLU function is used:

fa(x) = max(0, x)

where x is the function's input, the accumulated result of convolving the previous layer's output with the convolution kernel;
Parameters are optimized by the back-propagation algorithm; the weight and bias update formulas are:

w(l)(jk) ← w(l)(jk) − α · ∂C(θ)/∂w(l)(jk)

where w(l)(jk) is the weight between the j-th neuron in layer l and the k-th neuron in layer l+1, C(θ) the loss function, and α the learning rate;

b(l)(j) ← b(l)(j) − α · ∂C(θ)/∂b(l)(j)

where b(l)(j) is the bias of the j-th neuron in layer l, α the learning rate, and C(θ) the loss function, for which the mean square error is used:

C(θ) = ∑θ∈I (K(θ) − K′(θ))²

where I is the set of pixels in the image, K′(θ) the gray value of a label image pixel, and K(θ) the gray value of an output image pixel;
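The loss C(θ) = ∑(K(θ) − K′(θ))² is a plain sum of squared pixel differences; a one-line sketch over flattened pixel lists:

```python
def mse_loss(output, label):
    """C(theta) = sum over pixels of (K(theta) - K'(theta))**2."""
    return sum((k - kp) ** 2 for k, kp in zip(output, label))
```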
step 5-3, training residual convolution automatic encoder
Normal light guide plate images collected on the industrial site serve as the training data set. Each collected normal image is divided into 224 × 224 patches; on each normal patch a gray line of indefinite length is randomly generated and treated as a defect. The patch containing the gray line is then used as the input of the residual convolution autoencoder and the corresponding normal patch as the target output, each such pair forming one sample. A total of 300 samples are used, and training runs for 50 passes over the data set. The trained network parameters are saved as the result and used as the network parameters of the residual convolution autoencoder for image repair in the subsequent steps. During training, the mean square error is used as the loss function for evaluating the model.
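The synthetic-defect step can be sketched as follows. The gray value 180, the horizontal orientation, and the explicit position arguments are illustrative assumptions only; the patent draws random gray lines of indefinite length:

```python
def draw_gray_line(patch, row, start, length, gray=180):
    """Draw one horizontal gray line segment onto a normal patch as a synthetic defect.

    Returns a new patch; the original is left untouched so the (defective, normal)
    pair can serve as one training sample.
    """
    out = [r[:] for r in patch]
    for x in range(start, min(start + length, len(out[row]))):
        out[row][x] = gray
    return out
```

The autoencoder is then trained to map the defective patch back to the untouched one, which is what lets it "repair" real scratches at inference time using only normal images for training.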
The technical advantages of the invention are:
1, the detection algorithm is simple in program, highly adaptable, and stable, and can effectively inspect light guide plates with light guide points of different densities;
2, the adaptive LoG filter and the residual convolution autoencoder greatly reduce the probability of false and missed detections; statistically, the overall defect detection accuracy of the algorithm reaches 98.0%;
3, the algorithm can be trained and learned on normal light guide plate images alone, which solves the problem of insufficient industrial defect samples;
4, the algorithm is stable, efficient, and convenient to maintain.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
FIG. 1 is a flow chart of an algorithm of a light guide plate line scratch defect detection method based on deep learning;
FIG. 2 is a cut-away view of a defect-containing light guide plate treated in experiment 1 of the present invention;
FIG. 3 is a gray scale transformation display of the light guide plate image in experiment 1 of the present invention;
FIG. 4 is a partial display of gray scale transformation of a light guide plate image in experiment 1 according to the present invention;
FIG. 5 is a partial display diagram of an image adaptive Gaussian filter of a light guide plate in experiment 1 according to the present invention;
fig. 6 is a diagram showing an image adaptive LoG filtering of a light guide plate in experiment 1 according to the present invention;
FIG. 7 is a structural diagram of a convolutional automatic encoder according to embodiment 1 of the present invention;
FIG. 8 is a schematic diagram illustrating a residual connection between light guide plate images according to embodiment 1 of the present invention;
FIG. 8-1 is a schematic structural diagram of an improved residual convolution automatic encoder in embodiment 1 of the present invention;
FIG. 9 is a graph showing the difference between the defect-containing image and the repaired image in experiment 1 according to the present invention;
FIG. 10 is a graph of an image after global thresholding in experiment 1 of the present invention;
fig. 11 is a diagram showing a scratch defect of the light guide plate line in experiment 1 of the present invention.
Detailed Description
The invention will be further described with reference to specific examples, but the scope of the invention is not limited thereto.
Embodiment 1, a method for detecting a scratch defect of a light guide plate line based on deep learning, as shown in fig. 1, includes the following steps:
s01, collecting the light guide plate image
Collect the light guide plate image with a line-scan camera to obtain a high-precision light guide plate image, so that fine defects on the light guide plate can be shown and distinguished; this image serves as the input of the method;
s02, gradation conversion
Perform a gray-level transformation on the original image input at S01 to better extract dark scratch defects; widen the difference between the gray values of light guide points and line scratch defects in the light guide plate image and the background gray value using the linear transformation:

H(x, y) = Mult × K(x, y) + Add (formula 1)

where K(x, y) is the gray value of the original image at position (x, y), Mult is the gray expansion factor, Add is the gray offset, and H(x, y) is the transformed gray value at position (x, y);
s03, edge sharpening
Carrying out edge sharpening on the image subjected to the S02 gray level transformation by adopting an adaptive LoG filter;
First, with max(K) the global gray maximum, min(K) the global gray minimum, K(x, y) the gray value at image coordinate (x, y), and K(x′, y′) the gray value at a coordinate (x′, y′) adjacent to (x, y), a gray distance W(x, y, x′, y′) is introduced into the Gaussian filtering, expressed as:
Based on formula 2, the expression of the improved adaptive Gaussian filter G′(x, y, σ, x′, y′) is:
where σ is the variance, also called the scale factor, whose size determines the degree of smoothing of the Gaussian filter;
Then the Laplacian is applied to the image:

∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y² (formula 4)

where f(x, y) denotes the gray value at coordinates (x, y);
The final adaptive LoG filter expression follows, where max(K) is the global gray maximum, min(K) the global gray minimum, K(x, y) the gray value at image coordinate (x, y), K(x′, y′) the gray value at an adjacent coordinate (x′, y′), and σ the variance, also called the scale factor;
S04, gray range adjustment
After the image is convolved with the adaptive LoG filter of S03, the gray values are shifted and lie overall between −128 and 127, so the convolved image must be gray-adjusted to obtain a gray-adjusted image; the adjusted gray value is:

K′(x, y) = K(x, y) + 128 (formula 6)

where K(x, y) is the gray value at image coordinate (x, y);
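The re-centering of formula 6 is a constant offset that maps the signed LoG output back into the displayable 8-bit range; a one-line sketch:

```python
def adjust_gray_range(img):
    """Shift LoG output from roughly [-128, 127] into the displayable range [0, 255]."""
    return [[v + 128 for v in row] for row in img]
```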
s05, designing and training residual convolution automatic encoder
S0501, build a convolutional autoencoder, as shown in fig. 7, whose structure comprises: an input layer, a 3 × 3 convolutional layer, a maximum pooling layer, nearest-neighbor interpolation upsampling, a 3 × 3 convolutional layer, and an output layer;
s0502, improved residual convolution automatic encoder
Borrowing the idea of residual networks, as shown in fig. 8, the output of each convolutional layer of the encoder is identity-mapped onto the corresponding decoder convolutional layer; that is, a decoder convolutional layer's output is the convolution of the previous feature map with the kernel plus the output of the corresponding encoder layer. This yields the improved residual convolution autoencoder shown in fig. 8-1, whose identity mapping is:

H′(x) = F(H(x)) + H(x) (formula 7)

where x is the input image data of the autoencoder, H(x) the output of the corresponding encoder layer, F(H(x)) the result computed by the decoder convolutional layer, and H′(x) the output of the decoder convolutional layer.
the convolution layer is a convolution operation on the convolution kernel for the output of the previous layer, and the expression is as follows:
wherein l +1 is the upper layer, l +2 is the lower layer, b(l+2)For the next layer offset, (u, v) is the origin coordinate, (i + u, j + v) is the coordinate of the eight nearest points around the origin,is the value of the (i + u, j + v) position of the feature map of the previous layer,the weight of the position of the next layer convolution kernel (i, j),for the next layer of feature map location to be the value of (u, v), the convolution kernel coordinates arefa() For the activation function, the ReLU function is used:
wherein, x is the input value of the function, and is the accumulation result of the convolution result of the previous layer output and the convolution kernel in the network;
Parameters are optimized by the back-propagation algorithm; the weight and bias update formulas are:

w(l)(jk) ← w(l)(jk) − α · ∂C(θ)/∂w(l)(jk) (formula 10)

where w(l)(jk) is the weight between the j-th neuron in layer l and the k-th neuron in layer l+1, C(θ) the loss function, and α the learning rate;

b(l)(j) ← b(l)(j) − α · ∂C(θ)/∂b(l)(j) (formula 11)

where b(l)(j) is the bias of the j-th neuron in layer l, α the learning rate, and C(θ) the loss function, for which the mean square error is used:

C(θ) = ∑θ∈I (K(θ) − K′(θ))² (formula 12)

where I is the set of pixels in the image, K′(θ) the gray value of a label image pixel, and K(θ) the gray value of an output image pixel;
s0503, training residual convolution automatic encoder
The data set consists of normal light guide plate images acquired on the industrial site. Because the images are too large, they are first divided into 224 × 224 patches; on each normal patch a gray line of indefinite length is randomly generated and treated as a defect. The patch containing the gray line serves as the input of the residual convolution autoencoder and the corresponding normal patch as the target output, each such pair forming one sample. A total of 300 samples are used, training runs for 50 passes over the data set, and the trained network parameters are saved as the result and used as the parameters of the image-repair network in the following step;
During training, the mean square error of formula 12 is used as the loss function for evaluating the model. After training, the loss falls to 0.015, and on a test set of 300 samples the network model's defect detection accuracy reaches 98%;
s06, defect repair
Feed the gray-adjusted image from S04 as input data into the residual convolution autoencoder trained at S05 to obtain a defect-repaired image;
s07 image difference
The gray-adjusted image from S04 still contains the scratches (defects), while the most obvious difference in the defect-repaired image from S06 is precisely the defect portion, so subtracting the two images yields a defect-enhanced image:

X(x, y) = X1(x, y) − X2(x, y) (formula 13)

where X1 is the gray-adjusted image from S04, X2 the defect-repaired image from S06, and (x, y) the coordinates of corresponding pixels in images X1 and X2;
s08 global threshold segmentation
The image obtained from the S07 image difference contains the line scratch defect regions but also residual light guide point edges; to avoid interference from noise and light guide point edge information, binary threshold segmentation is required:

R(x, y) = 255 if th ≥ th0, otherwise 0 (formula 14)

where R(x, y) is the threshold decision at pixel (x, y), th the gray value of the pixel at (x, y), and th0 the segmentation threshold.
S09, dividing a connected domain;
Split the result image from S08 into separate small regions wherever area blocks are not connected, using the eight-connectivity standard;
s10, feature screening;
The area feature of a region is defined as the number of pixels in the region; for a region R with area A:

A = ∑(x,y)∈R 1 (formula 15)

where (x, y) are pixel coordinates;
The region roundness feature is defined as the extent to which the target region approaches a circle, expressed as:

C = 4πS / P²

where P is the region perimeter, S the region area, and C the region roundness;
and screening according to the area and the area roundness characteristics to obtain a line scratch defect area, wherein the area meeting A >30 n and C <0.05 is judged as the line scratch defect, and a defect identification image is generated.
S11, displaying defects;
and displaying the defect identification image generated in the step S10.
Experiment 1, defect identification of a light guide plate image containing defects using the procedure in example 1:
The experimental environment is: CPU: Intel(R) Core(TM) i7-8550 @ 1.8 GHz; GPU: NVIDIA GeForce MX150, 4 GB; operating system: Windows 10, 64-bit; CUDA version 9.2; cuDNN version 7.5.0.56; HALCON version 18.11 Steady; PyTorch version 1.4.0 (Stable); development platform: Visual Studio 2015;
the experimental process comprises the following steps:
The line-scan camera acquires the light guide plate image, which is processed in the step order of embodiment 1 to finally obtain a light guide plate image with the defects clearly identified; the result image is displayed on the computer terminal used in the experiment. The specific process is as follows:
1) collect the light guide plate image with a line-scan camera; to ensure the validity of the experimental results, images different from those used for training in S0503 of embodiment 1 were used, as shown in fig. 2;
2) perform a gray-level transformation on the original image acquired in 1) according to H(x, y) = Mult × K(x, y) + Add, obtaining the gray-transformed image shown in fig. 3 and fig. 4;
3) performing edge sharpening on the gray-transformed image obtained in step 2) using the adaptive LoG filter; the result images are as shown in fig. 5 and fig. 6;
4) adjusting the gray scale range of the edge-sharpened result image from step 3) according to the following formula:
K′(x,y)=K(x,y)+128
5) utilizing a trained residual convolution automatic encoder to repair the defects of the image with the gray scale range adjusted in the step 4);
6) image difference: subtracting the defect-repaired image obtained in step 5) from the gray-range-adjusted, defect-containing image obtained in step 4) according to the formula X(x, y) = X1(x, y) − X2(x, y), obtaining the result image shown in fig. 9;
7) global threshold segmentation: performing global threshold segmentation on the difference image obtained in step 6); the segmented image is as shown in fig. 10;
8) segmenting the region in the image obtained in the step 7) according to an eight-connected standard to obtain independent connected domains;
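Eight-connected component labelling, as used in step 8), can be sketched with a simple breadth-first flood fill (production code would typically use an optimized library routine instead):

```python
import numpy as np
from collections import deque

def label_8connected(mask):
    """Split a binary image into separate regions under 8-connectivity.

    Returns (labels, n): an int label image (0 = background) and the
    number of connected regions found.
    """
    labels = np.zeros(mask.shape, dtype=np.int32)
    h, w = mask.shape
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue                      # already assigned to a region
        current += 1
        labels[sy, sx] = current
        queue = deque([(sy, sx)])
        while queue:                      # BFS over the 8-neighbourhood
            y, x = queue.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and mask[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
    return labels, current
```

Under 8-connectivity, diagonally touching pixels belong to the same region, which keeps thin diagonal scratches in one piece.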
9) feature screening and defect display: feature extraction is performed on area and region roundness together to screen out the line scratch defects; the screening result image is shown in fig. 11. The defects in the light guide plate image are clearly and unambiguously distinguished, and detecting one complete image takes 14 s in practice, demonstrating the validity and efficiency of the algorithm.
Similarly, the present invention validated batch data using the procedure of Experiment 1 above, with the same results.
Finally, it should also be noted that the above merely illustrates a few specific embodiments of the invention. Obviously, the invention is not limited to the above embodiments, and many variations are possible. All modifications that a person skilled in the art can directly derive or suggest from the disclosure of the present invention are to be considered within the scope of the invention.
Claims (2)
1. A light guide plate line scratch defect detection method based on deep learning is characterized by comprising the following steps:
step 1, collecting light guide plate images: acquiring an image of the light guide plate by adopting a line scanning camera to obtain a high-precision image of the light guide plate;
step 2, gray scale transformation: performing gray scale transformation on the image input in step 1, expanding the difference between the gray values of the light guide points and line scratch defects and the gray value of the background in the light guide plate image by using the following linear transformation formula:
H(x,y)=Mult×K(x,y)+Add
wherein, K (x, y) is the gray value of the original image position (x, y), Mult is the gray value expansion multiple, Add is the gray value increase value, H (x, y) is the gray value after the gray value conversion of the position (x, y);
step 3, edge sharpening: performing edge sharpening on the gray-transformed image from step 2 using an adaptive LoG filter, the expression of the adaptive LoG filter being as follows:
wherein max(K) is the global gray maximum, min(K) is the global gray minimum, K(x, y) is the gray value at image coordinate (x, y), K(x′, y′) is the gray value at a coordinate (x′, y′) adjacent to (x, y), and σ is the variance (also called the scale factor);
step 4, gray scale range adjustment: performing gray level adjustment on the image convolved by the adaptive LoG filter in step 3 to obtain a gray-adjusted image, the adjusted gray value being:
K'(x,y)=K(x,y)+128
wherein K (x, y) is a gray value at the image coordinates (x, y);
step 5, defect repair: inputting the image with the gray level adjusted in the step 4 into a trained residual convolution automatic encoder as input data to obtain a defect repaired image;
step 6, image difference: subtracting the defect-repaired image obtained in step 5 from the gray-adjusted image obtained in step 4 to obtain a defect-enhanced image, the expression being:
X(x,y)=X1(x,y)-X2(x,y)
wherein X1 is the gray-adjusted image obtained in step 4, X2 is the defect-repaired image obtained in step 5, and x and y are the coordinates of corresponding pixel points in images X1 and X2;
step 7, global threshold segmentation: performing binarization threshold segmentation on the image obtained by image difference in step 6 according to the following formula:
R(x, y) = 1 if th > th0, and R(x, y) = 0 otherwise
wherein R(x, y) is the threshold segmentation decision at pixel (x, y), th is the gray value of the pixel at (x, y), and th0 is the segmentation threshold;
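The binarization rule above is a one-line operation; the default th0 = 30 below is an illustrative value, not the patent's:

```python
import numpy as np

def global_threshold(img, th0=30):
    """R(x, y) = 1 where the pixel gray value exceeds th0, else 0."""
    return (img > th0).astype(np.uint8)
```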
step 8, connected domain segmentation: dividing the unconnected region blocks of the result image obtained in step 7 into separate small regions according to the eight-connectivity standard;
step 9, feature screening: the area feature of a region is defined by counting the number of pixels in the region; letting the region area corresponding to a region R be A, then:
A = ∑_{(x,y)∈R} 1
wherein x and y are coordinates of the pixel;
the region roundness feature is defined as the extent to which the target region approaches a circle, expressed as:
C = 4πS/P²
wherein P is the perimeter of the region, S is the area of the region, and C is the region roundness;
screening is performed on the area and region roundness features to obtain the line scratch defect regions: a region satisfying A > 30n and C < 0.05 is judged to be a line scratch defect, and a defect identification image is generated;
step 10, defect display: and displaying the defect identification image generated in the step 9.
2. The method for detecting light guide plate line scratch defects based on deep learning according to claim 1, wherein the trained residual convolution automatic encoder in step 5 is trained by the following specific steps:
step 5-1, establishing a convolution automatic encoder, wherein the structure comprises: an input layer, a 3 × 3 convolutional layer, a maximum pooling layer, nearest neighbor interpolation upsampling, a 3 × 3 convolutional layer, and an output layer;
step 5-2, improving to a residual convolution automatic encoder: borrowing the idea of residual networks, the output of each encoder convolution layer of the convolution automatic encoder is identity-mapped onto the corresponding decoder convolution layer, the identity mapping formula being:
H′(x)=F(H(x))+H(x)
wherein, x is the input image data of the automatic encoder, H (x) is the output of the corresponding layer of the encoder, F (H (x)) is the calculation result of the decoder convolution layer, and H' (x) is the output result of the decoder convolution layer;
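A PyTorch sketch of the architecture of steps 5-1/5-2 (3×3 conv, max pool, nearest-neighbour upsample, 3×3 conv) with the skip connection H′(x) = F(H(x)) + H(x). The channel width, the placement of the final output convolution, and the ReLU after the addition are assumptions not fixed by the claims:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualConvAutoencoder(nn.Module):
    """Residual convolutional autoencoder sketch (assumed channel widths)."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Conv2d(1, ch, 3, padding=1)   # encoder 3x3 conv
        self.dec = nn.Conv2d(ch, ch, 3, padding=1)  # decoder 3x3 conv
        self.out = nn.Conv2d(ch, 1, 3, padding=1)   # output layer (assumed)

    def forward(self, x):
        h = F.relu(self.enc(x))                     # H(x): encoder features
        z = F.max_pool2d(h, 2)                      # bottleneck
        u = F.interpolate(z, scale_factor=2, mode="nearest")
        fh = self.dec(u)                            # F(H(x))
        return self.out(F.relu(fh + h))             # H'(x) = F(H(x)) + H(x)
```

The skip connection lets the decoder reuse fine background detail from the encoder, which is why the repaired image reproduces the normal texture while smoothing out the injected defect.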
the convolution layer performs a convolution of a convolution kernel over the output of the previous layer, with the expression:
H^(l+2)(u, v) = fa( ∑_(i,j) H^(l+1)(i+u, j+v) × w^(l+2)(i, j) + b^(l+2) )
wherein l+1 denotes the previous layer, l+2 the next layer, b^(l+2) is the next-layer bias, (u, v) is the origin coordinate, (i+u, j+v) are the coordinates of the eight nearest points around the origin, H^(l+1)(i+u, j+v) is the value of the previous-layer feature map at position (i+u, j+v), w^(l+2)(i, j) is the weight of the next-layer convolution kernel at position (i, j), H^(l+2)(u, v) is the value of the next-layer feature map at position (u, v), and fa() is the activation function, for which the ReLU function is used:
fa(x) = max(0, x)
wherein x is the input value of the function, i.e. the accumulated result of convolving the previous layer's output with the convolution kernel in the network;
parameter optimization is performed through the back propagation algorithm, the update formulas of the weights and biases being:
w^(l)_jk ← w^(l)_jk − α ∂C(θ)/∂w^(l)_jk
wherein w^(l)_jk is the weight between the jth neuron in layer l and the kth neuron in layer l+1, C(θ) is the loss function, and α is the learning rate;
b^(l)_j ← b^(l)_j − α ∂C(θ)/∂b^(l)_j
wherein b^(l)_j is the bias of the jth neuron in layer l and α is the learning rate; for the loss function C(θ), the mean square error is used:
C(θ) = ∑_(θ∈I) (K(θ) − K′(θ))²
wherein I is the set of pixel points in the image, K′(θ) is the gray value of a label image pixel, and K(θ) is the gray value of an output image pixel;
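The update rule w ← w − α ∂C/∂w can be illustrated on a one-parameter mean-square-error problem; all values below are illustrative, not taken from the patent:

```python
import numpy as np

# C(w) = sum((w * k - target)**2); the true weight is 2, so gradient
# descent on C should drive w from 0 toward 2.
k = np.array([1.0, 2.0, 3.0])
target = 2.0 * k                                 # the "label"
w, alpha = 0.0, 0.05
for _ in range(100):
    grad = np.sum(2.0 * (w * k - target) * k)    # dC/dw
    w = w - alpha * grad                         # gradient-descent update
```

After the loop, w has converged to the true value 2 to high precision, showing the same mechanism that tunes every convolution weight and bias in the network.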
step 5-3, training residual convolution automatic encoder:
the method comprises the steps that a normal light guide plate image collected on an industrial site is input as a training data set, the collected normal light guide plate image is divided into 224-224 small-size images, gray lines with indefinite length are randomly generated for each divided normal light guide plate image, the gray lines are regarded as defects, then the light guide plate image containing the gray lines is regarded as input of a residual convolution automatic encoder, the corresponding normal light guide plate image is output as a target, the pair of images are used as a sample and are trained by 300 samples in total, the training is carried out on the data set for 50 times, and trained network parameters are stored as results and are used as parameters of an image repairing network in the following step; during training, a mean square error function is adopted as a loss function for evaluating the model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010445227.2A CN111681213A (en) | 2020-05-24 | 2020-05-24 | Light guide plate line scratch defect detection method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111681213A true CN111681213A (en) | 2020-09-18 |
Family
ID=72453551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010445227.2A Withdrawn CN111681213A (en) | 2020-05-24 | 2020-05-24 | Light guide plate line scratch defect detection method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111681213A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112419228A (en) * | 2020-10-14 | 2021-02-26 | 惠州高视科技有限公司 | Method and device for detecting three-dimensional edge defect of cover plate |
CN112419228B (en) * | 2020-10-14 | 2022-04-05 | 高视科技(苏州)有限公司 | Method and device for detecting three-dimensional edge defect of cover plate |
CN112222757A (en) * | 2020-12-10 | 2021-01-15 | 扬昕科技(苏州)有限公司 | Lenti mould benevolence patching device based on Laser is smooth |
CN112222757B (en) * | 2020-12-10 | 2021-03-02 | 扬昕科技(苏州)有限公司 | Lenti mould benevolence patching device based on laser is smooth |
CN113554631A (en) * | 2021-07-30 | 2021-10-26 | 西安电子科技大学 | Chip surface defect detection method based on improved network |
CN113554631B (en) * | 2021-07-30 | 2024-02-20 | 西安电子科技大学 | Chip surface defect detection method based on improved network |
CN116705642A (en) * | 2023-08-02 | 2023-09-05 | 西安邮电大学 | Method and system for detecting silver plating defect of semiconductor lead frame and electronic equipment |
CN116705642B (en) * | 2023-08-02 | 2024-01-19 | 西安邮电大学 | Method and system for detecting silver plating defect of semiconductor lead frame and electronic equipment |
CN117893540A (en) * | 2024-03-18 | 2024-04-16 | 乳山市创新新能源科技有限公司 | Roundness intelligent detection method and system for pressure container |
CN117893540B (en) * | 2024-03-18 | 2024-05-31 | 乳山市创新新能源科技有限公司 | Roundness intelligent detection method and system for pressure container |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111681213A (en) | Light guide plate line scratch defect detection method based on deep learning | |
CN108460757B (en) | Mobile phone TFT-LCD screen Mura defect online automatic detection method | |
CN105894036B (en) | A kind of characteristics of image template matching method applied to mobile phone screen defects detection | |
Choi et al. | Detection of pinholes in steel slabs using Gabor filter combination and morphological features | |
CN112837290B (en) | Crack image automatic identification method based on seed filling algorithm | |
KR101477665B1 (en) | Defect detection method in heterogeneously textured surface | |
CN108765402B (en) | Non-woven fabric defect detection and classification method | |
CN105447851A (en) | Glass panel sound hole defect detection method and system | |
CN109801286B (en) | Surface defect detection method for LCD light guide plate | |
CN110473201A (en) | A kind of automatic testing method and device of disc surface defect | |
CN103175844A (en) | Detection method for scratches and defects on surfaces of metal components | |
CN108665458A (en) | Transparent body surface defect is extracted and recognition methods | |
CN111080636A (en) | CNN semantic segmentation self-learning detection method for surface defects of color steel tiles | |
CN110349125A (en) | A kind of LED chip open defect detection method and system based on machine vision | |
CN113221881B (en) | Multi-level smart phone screen defect detection method | |
CN108830851B (en) | LCD rough spot defect detection method | |
CN113240623A (en) | Pavement disease detection method and device | |
CN110648330A (en) | Defect detection method for camera glass | |
CN117392042A (en) | Defect detection method, defect detection apparatus, and storage medium | |
Li et al. | A method of surface defect detection of irregular industrial products based on machine vision | |
CN114565607A (en) | Fabric defect image segmentation method based on neural network | |
CN117191792A (en) | Visual detection method and system for defect of microstrip circulator | |
CN115496984A (en) | Ceramic tile finished product defect automatic identification method and device, intelligent terminal and storage medium | |
CN113870299A (en) | 3D printing fault detection method based on edge detection and morphological image processing | |
Kim et al. | Automatic defect detection from SEM images of wafers using component tree |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20200918 |