CN114881987A - Improved YOLOv5-based hot-pressing light guide plate defect visual detection method

Info

Publication number: CN114881987A
Application number: CN202210559232.5A
Authority: CN (China)
Legal status: Pending
Prior art keywords: light guide plate, pictures, defect, module
Other languages: Chinese (zh)
Inventors: 李俊峰, 杨元勋, 李镇宇
Current Assignee: Zhejiang Sci Tech University (ZSTU)
Original Assignee: Zhejiang Sci Tech University (ZSTU)
Application filed by Zhejiang Sci Tech University (ZSTU)
Classifications

    • G06T7/0004 Industrial image inspection
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T7/13 Edge detection
    • G06V10/25 Determination of region of interest [ROI]
    • G06V10/26 Segmentation of patterns in the image field
    • G06V10/764 Classification using machine learning
    • G06V10/774 Generating sets of training patterns
    • G06V10/806 Fusion of extracted features
    • G06V10/82 Recognition using neural networks
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G06T2207/30108 Industrial image inspection
    • G06V2201/07 Target detection


Abstract

The invention discloses a hot-pressing light guide plate defect visual detection method based on improved YOLOv5. Pictures of a hot-pressing light guide plate are collected and sent to an upper computer for preprocessing: a region of interest is obtained with an edge detection algorithm and divided into a group of 416 × 416 pictures by a sliding-window segmentation method. All of the resulting 416 × 416 pictures are then input in turn into a light guide plate defect detection model for target detection and classification, and pictures annotated with defect type, confidence, and defect position are output. The method not only detects defect types accurately but also locates defect regions precisely, and it improves the detection of small white-point targets and dark-line defects on the light guide plate.

Description

Improved YOLOv5-based hot-pressing light guide plate defect visual detection method
Technical Field
The invention relates to the fields of light guide plate production and image recognition, in particular to a hot-pressing light guide plate defect visual detection method based on improved YOLOv5.
Background
A Light Guide Plate (LGP) is a main component of the backlight module. Its function is to guide light through light-guide dots of varying density and size engraved on its surface, converting the point and line light sources emitted by LEDs into a uniform surface light source so that the plate emits light evenly. During production of a hot-pressed light guide plate, factors such as raw material composition, equipment condition, processing technology and manual handling inevitably cause surface defects such as bright spots, missing dots, line scratches and shadows, and a defective plate directly degrades the display quality of the liquid crystal screen. The defects fall into four categories by type: white point defects, bright line defects, dark line defects and surface defects. White point defects appear as points on the plate surface, mainly bright spots and press damage. They can form for several reasons: for example, during plasticizing of the hot-pressed light guide plate, incomplete melting of the plastic caused by too low a temperature, impurities in the raw material, or heavy dust around the molding machine. Bright line, dark line and surface defects appear as lines and areas on the plate surface, mainly scratches, dirt and damage.
During production of the hot-pressed light guide plate, if the contact surfaces between the polishing machine or roller cleaning machine and the plate are not clean, or if there is relative displacement between the plate and the transport mechanism, the resulting friction during movement can produce scratch, dirt and damage defects.
At present, most domestic manufacturers rely mainly on manual inspection for light guide plate defect detection, which has obvious shortcomings: (1) inspection of the light guide plate is a specialized task with high demands on inspectors; (2) manual inspection is subjective and lacks a uniform detection standard, so results are inconsistent and cannot meet customer requirements; (3) long exposure to dazzling strong light seriously damages inspectors' eyesight; (4) manual work is easily disturbed by the external environment, eye fatigue and other factors, which affects actual detection efficiency and accuracy.
At present, defect detection of the hot-pressed light guide plate is mainly performed by manual visual inspection. On a dedicated inspection platform the light guide plate is lit, and the inspector checks from multiple angles whether bright spots, scratches, dirt or damage appear anywhere on the plate; if any such flaw exists, the plate is judged defective. Because of the limitations of manual inspection, its accuracy, efficiency and stability can hardly meet the requirements of enterprises and customers. Meanwhile, on the industrial site, enterprises require the inspection time for a single hot-pressed light guide plate to be within 6 seconds, placing high demands on detection efficiency. Under these circumstances, a new quality inspection method for the hot-pressed light guide plate is urgently needed to assist manual work and improve both the efficiency and the accuracy of quality inspection.
Disclosure of Invention
The invention aims to provide a hot-pressing light guide plate defect visual detection method based on improved YOLOv5, which is used for accurately and quickly visually detecting defects of a hot-pressing light guide plate and simultaneously completing the positioning and classification of the defects.
In order to solve the above technical problem, the present invention provides a method for visually detecting defects of a hot-pressed light guide plate based on improved YOLOv5, comprising: collecting pictures of the hot-pressed light guide plate (after production, pictures are collected by a 16K line-scan camera at the end of the light guide plate production line on the industrial site) and sending them to an upper computer for preprocessing. The preprocessing obtains a region of interest with an edge detection algorithm and divides it into a group of 416 × 416 pictures using a sliding-window segmentation method. All resulting 416 × 416 pictures are input in turn into a light guide plate defect detection model for target detection and classification, and pictures annotated with defect type, confidence and defect position are output;
the light guide plate defect detection model takes YOLOv5 as its baseline network and comprises a backbone based on the CSPDarknet-53 network, a neck adopting an FPN + PAN pyramid structure, and an output part. An HAM module is inserted between each C3 module and the following convolution module of the backbone. After the 416 × 416 picture is downsampled 2, 4, 8, 16 and 32 times in turn by the backbone, five feature maps of 208 × 208, 104 × 104, 52 × 52, 26 × 26 and 13 × 13 pixels are generated and fed into the neck. The neck fuses the 52 × 52, 26 × 26 and 13 × 13 feature maps while also taking the 13 × 13 feature map as input to the MCM module, whose output is connected to the first Concat of the FPN. The neck generates three new feature maps of sizes 52 × 52 × 27, 26 × 26 × 27 and 13 × 13 × 27, which are input to the output part for object detection and classification.
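The downsampling chain above can be checked with a short sketch (the helper name is an assumption; the halving per stage is as stated in the text):

```python
def backbone_feature_sizes(input_size=416, num_stages=5):
    """Spatial sizes after each 2x downsampling stage of the backbone."""
    sizes = []
    size = input_size
    for _ in range(num_stages):
        size //= 2  # each downsampling stage halves height and width
        sizes.append(size)
    return sizes

print(backbone_feature_sizes(416))  # [208, 104, 52, 26, 13]
```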
As an improvement of the improved-YOLOv5-based hot-pressing light guide plate defect visual detection method of the invention:
the HAM module adopts a residual structure. The input feature map F first passes through the efficient channel attention module of the deep convolutional neural network (ECA-Net) to output channel information, then through the convolutional block attention module (CBAM) to output spatial information; the channel and spatial information are then multiplied to obtain a feature map fusing channel and spatial information. The input feature map F, after passing through a Conv module, is added to this fused feature map to form the output of the HAM module;
the Conv module consists of an ordinary convolution with kernel size 3 × 3 and stride 1, batch normalization, and a SiLU activation function.
As a further improvement of the improved-YOLOv5-based hot-pressing light guide plate defect visual detection method of the invention:
in the MCM module, the input feature map is convolved by dilated convolutions with dilation rates 1, 3 and 5 to obtain three feature maps; the three maps are fused by concatenation, doubling the number of channels, which is then reduced by a 1 × 1 convolution; finally, after a sigmoid activation function, the result is multiplied with the MCM module's input feature map to obtain the module's output feature map.
As a further improvement of the improved-YOLOv5-based hot-pressing light guide plate defect visual detection method of the invention:
the sliding-window segmentation method is specifically: a window of size 416 × 416 is selected, the sliding start point is the upper-left boundary of the region of interest, and the sliding step is 0.8 times the window side length; the window is then slid across the region of interest from left to right and from top to bottom until the lower-right boundary is reached, yielding the group of pictures of size 416 × 416;
the edge detection algorithm uses the Canny operator from the OpenCV computer vision library; the Canny operator computes the gradient of the whole hot-pressed light guide plate picture, the positions of maximum gradient change are the edges of the region of interest, and the region of interest is obtained from these edges.
As a further improvement of the improved-YOLOv5-based hot-pressing light guide plate defect visual detection method of the invention:
the training and testing process of the light guide plate defect detection model comprises the following steps:
1) constructing a test set and a training set;
hot-pressed light guide plate pictures are acquired on the industrial site, a region of interest is obtained with the edge detection algorithm, and the region is divided into a group of 416 × 416 pictures by the sliding-window segmentation method. Pictures containing the four defect types are screened out manually, the pictures of each defect type are augmented, and the pictures of each defect type are then divided into a training set, a validation set and a test set at a ratio of 6 : 2 : 2. The defect type and position in each picture are annotated, and corresponding label files are generated with the LabelImg software;
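As an illustration only (the function name and fixed seeding are assumptions, not from the patent), the 6 : 2 : 2 split could be sketched as:

```python
import random

def split_dataset(items, ratios=(6, 2, 2), seed=0):
    """Shuffle and split items into train/val/test at the given ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 60 20 20
```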
2) constructing the loss function of the light guide plate defect detection model:

Loss = ω_box·L_box + ω_obj·L_obj + ω_cls·L_cls    (5)

where ω_box = 0.05, ω_obj = 0.5 and ω_cls = 1;

L_box denotes the position (CIoU) loss:

L_box = 1 − IOU + ρ²(A, B)/c² + α·v

L_obj and L_cls are binary cross-entropy losses of the form:

L = −(1/n)·Σ_n [ y_n·log(x_n) + (1 − y_n)·log(1 − x_n) ]

where n represents the number of input samples, y_n the target value and x_n the network's prediction; IOU is the intersection-over-union of the prediction box and the ground-truth box, ρ denotes the Euclidean distance between the centre points of the ground-truth box A and the prediction box B, c denotes the diagonal length of the smallest box enclosing A and B, α is a weight function, and v is a parameter measuring the consistency of the aspect ratios of A and B:

α = v / ((1 − IOU) + v)

v = (4/π²)·( arctan(w_A/h_A) − arctan(w_B/h_B) )²
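A minimal sketch of the position-loss terms, assuming the standard CIoU definitions and boxes given as (x1, y1, x2, y2) corner coordinates (the helper name is an assumption):

```python
import math

def ciou_terms(box_a, box_b):
    """Sketch of the CIoU position-loss terms; boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection over union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # squared centre distance rho^2 and enclosing-box diagonal c^2
    rho2 = ((ax1 + ax2 - bx1 - bx2) / 2) ** 2 + ((ay1 + ay2 - by1 - by2) / 2) ** 2
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + (max(ay2, by2) - min(ay1, by1)) ** 2
    # aspect-ratio consistency v and its weight alpha
    v = (4 / math.pi ** 2) * (
        math.atan((ax2 - ax1) / (ay2 - ay1)) - math.atan((bx2 - bx1) / (by2 - by1))
    ) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    loss = 1 - iou + rho2 / c2 + alpha * v
    return iou, loss

iou, loss = ciou_terms((0, 0, 10, 10), (0, 0, 10, 10))
print(iou, loss)  # identical boxes: IoU 1.0, loss 0.0
```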
3) the total number of training epochs is 100, the batch size 16 and the learning rate 0.01, with the SGD optimizer. Mosaic data enhancement is applied to the training-set pictures as the input of model training; during training the model is fitted on the training set, the training error is reduced by gradient descent, and the weights are updated. The validation set is used during training to verify the model's generalization ability and to tune its hyperparameters, and the trained model is evaluated with the test set, yielding a light guide plate defect detection model that can be used online.
As a further improvement of the improved-YOLOv5-based hot-pressing light guide plate defect visual detection method of the invention:
the augmentation is specifically: for each defect type, 50% of the pictures are randomly selected and subjected to brightness enhancement of 120-150%, translation, and horizontal or vertical flipping;
the mosaic data enhancement is: four pictures are randomly selected from the training set, each is subjected to random flipping, scaling and brightness transformation, and the four transformed pictures are spliced into one image at randomly selected splice points;
the defect types are white point defects, bright line defects, dark line defects and surface defects.
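A toy sketch of the mosaic splice described above, in pure Python on a small canvas (the function name, fixed seeding and border margin for the splice point are assumptions):

```python
import random

def mosaic(imgs, out=416, seed=0):
    """Splice four equally sized images around a random splice point.
    imgs: four images as nested lists [row][col]; the random flip/scale/
    brightness transforms are assumed to have been applied beforehand."""
    rng = random.Random(seed)
    cx = rng.randint(out // 4, 3 * out // 4)  # splice point, kept away
    cy = rng.randint(out // 4, 3 * out // 4)  # from the canvas borders
    canvas = [[0] * out for _ in range(out)]
    for y in range(out):
        for x in range(out):
            if y < cy:
                canvas[y][x] = imgs[0][y][x] if x < cx else imgs[1][y][x]
            else:
                canvas[y][x] = imgs[2][y][x] if x < cx else imgs[3][y][x]
    return canvas

imgs = [[[v] * 8 for _ in range(8)] for v in (1, 2, 3, 4)]
m = mosaic(imgs, out=8)
print(m[0][0], m[7][7])  # corners come from images 1 and 4
```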
The invention has the following beneficial effects:
1. the improved-YOLOv5-based hot-pressing light guide plate defect visual detection method can not only accurately detect the defect type but also accurately locate the defect region;
2. the invention improves the detection of small white-point targets and dark-line defects on the light guide plate;
3. the detection method of the invention is both fast and highly accurate.
Drawings
Fig. 1 is a schematic structural diagram of the HAM module proposed in the improved-YOLOv5-based hot-pressed light guide plate defect visual detection method of the present invention;
FIG. 2 is a schematic diagram of the structure of the convolution block attention module in FIG. 1;
FIG. 3 is a schematic diagram of the structure of the high efficiency channel attention module of the deep convolutional neural network of FIG. 1;
FIG. 4 is a schematic structural diagram of the MCM module proposed in the improved-YOLOv5-based hot-pressed light guide plate defect visual detection method of the present invention;
fig. 5 is a schematic structural diagram of an improved YOLOv5 hot-pressing light guide plate defect detection model according to the present invention;
FIG. 6 is a schematic diagram of a single acquired hot-pressed light guide plate picture, an area of interest, and a sliding window;
FIG. 7 is a schematic diagram showing examples of four types of defect samples ((a) is a white dot defect, (b) is a bright line defect, (c) is a dark line defect, and (d) is a surface defect);
FIG. 8 is a schematic diagram of a process of mosaic data enhancement;
FIG. 9 is a graph of the loss function of the training set and the loss function of the validation set during training;
FIG. 10 is a diagram illustrating test results of a portion of a test sample;
fig. 11 is a diagram showing the detection effect of each comparative model in experiment 1.
Detailed Description
The invention will be further described with reference to specific examples, but the scope of the invention is not limited thereto:
Embodiment 1: a hot-pressed light guide plate defect visual detection method based on improved YOLOv5. First, an improved YOLOv5 hot-pressed light guide plate defect detection model (light guide plate defect detection model for short) is constructed; then hot-pressed light guide plate pictures are collected on an industrial site to produce the model's training and test sets, and the model is trained and tested to obtain a light guide plate defect detection model that can be used online. The specific process is as follows:
1. constructing a defect detection model of a light guide plate
1.1 construction of HAM Module
The HAM module is a hybrid attention module newly proposed by the invention. By adjusting the weights of different image regions, it makes the network attend more to important region information and ignore irrelevant information, that is, it makes the network focus on the defect region and improves the backbone's feature extraction capability. The structure of the HAM module is shown in Fig. 1; it comprises a Convolutional Block Attention Module (CBAM for short), an Efficient Channel Attention module for deep convolutional neural networks (ECA-Net for short), and a Conv module.
The structure of the ECA-Net module is shown in Fig. 3; the output of the input feature map after the ECA-Net module serves as the input of the CBAM module. First, the input feature map F ∈ R^(C×H×W) is reduced to (C × 1 × 1) by Global Average Pooling (GAP). Second, the weight of each channel is obtained by a fast one-dimensional convolution with kernel size k, and a feature map f′ ∈ R^(C×1×1) is generated through an activation function. Finally, f′ is multiplied with the input feature map to obtain the feature map f (the channel information output by the ECA-Net module). The kernel size k determines how many neighbouring channels participate in the attention calculation; it is related to the channel dimension C by

C = φ(k) = 2^(r×k−b)    (1)

so that

k = ψ(C) = | log₂(C)/r + b/r |_odd    (2)

where C is the channel dimension, |t|_odd denotes the odd number nearest to t, r is set to 2 and b is set to 1.
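Under the stated r = 2 and b = 1, the adaptive kernel size k = |log₂(C)/r + b/r|_odd can be evaluated directly (the helper name is an assumption):

```python
import math

def eca_kernel_size(channels, r=2, b=1):
    """Nearest odd kernel size for ECA's 1-D convolution: |log2(C)/r + b/r|_odd."""
    t = abs(math.log2(channels) / r + b / r)
    lo = int(t) if int(t) % 2 == 1 else int(t) - 1  # nearest odd below t
    hi = lo + 2                                     # nearest odd above t
    return lo if t - lo <= hi - t else hi

print(eca_kernel_size(256))  # 5
```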
The CBAM module is shown in Fig. 2. First, the spatial attention module applies max pooling MaxPool(·) and average pooling AvgPool(·) to the feature map f output by the ECA-Net module, obtaining two 1 × H × W feature maps. Second, the two maps are concatenated, expanding the channel number, and fed into a convolutional layer with 2 input channels, 1 output channel, kernel size 7 and stride 1. Finally, a sigmoid activation is applied to the convolutional layer's output to obtain M(f) (the spatial information output by the CBAM module).
The spatial attention M(f) of CBAM is calculated as:

M(f) = σ( f^(7×7)( [ AvgPool(f); MaxPool(f) ] ) ) = σ( f^(7×7)( [ f_avg; f_max ] ) )    (3)

where f ∈ R^(C×H×W) is the feature map output by the ECA-Net module, C is the number of channels, and H and W are the height and width of the feature map; f_avg ∈ R^(1×H×W) and f_max ∈ R^(1×H×W) are the average-pooled and max-pooled feature maps, f^(7×7) denotes a 7 × 7 convolution, and σ denotes the sigmoid activation function.
The HAM module adopts a residual structure: the channel information f that the ECA-Net module outputs for the input feature map F is passed through the CBAM module to produce the spatial information M(f); the channel and spatial information are multiplied to obtain a feature map fusing both; and the input feature map F, after the Conv module, is added to this fused feature map to give the output of the HAM module:

HAM(F) = Conv(F) ⊕ ( f ⊗ M(f) )    (4)

where the Conv module consists of an ordinary convolution with kernel size 3 × 3 and stride 1, batch normalization and a SiLU activation function, ⊕ denotes element-wise addition and ⊗ element-wise multiplication.
1.2 construction of MCM Module
The MCM module is a multi-dilation convolution module newly proposed by the invention. It enlarges the receptive field over the defect region through dilated convolutions with different dilation rates and reduces the information loss of the defect region during downsampling; the different receptive fields help improve multi-scale representation, enrich context information and enhance feature information. Its structure is shown in Fig. 4: the input feature map F4 is first convolved by dilated convolutions with dilation rates 1, 3 and 5 to obtain three feature maps; the three maps are fused by concatenation, doubling the number of channels, which is then reduced by a 1 × 1 convolution; finally, after a sigmoid activation function, the result is multiplied with the input feature map F4 to give the feature map output by the MCM module. That is, drawing on the residual idea, a shortcut is added to the MCM module, directly connecting its input and output.
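The growth in receptive field from the three dilation rates can be quantified with the standard effective-kernel formula (stated here as background, not from the patent):

```python
def effective_kernel(k=3, rate=1):
    """Effective extent of a k x k convolution with the given dilation rate."""
    return k + (k - 1) * (rate - 1)

# dilation rates 1, 3 and 5 turn a 3x3 kernel into 3, 7 and 11 pixel spans
print([effective_kernel(3, r) for r in (1, 3, 5)])  # [3, 7, 11]
```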
1.3 constructing light guide plate defect detection model
The light guide plate defect detection model uses YOLOv5 as its base network, as shown in Fig. 5, and consists of a backbone network (Backbone), a neck portion (Neck) and an output portion (Head). The backbone performs feature extraction; the neck performs multi-scale fusion, both top-down and bottom-up, of the features of different levels extracted by the backbone; the head performs object detection and classification.
The backbone is based on the CSPDarknet-53 convolutional neural network and comprises 5 downsampling modules, 4 C3 modules, 1 SPPF module and 4 of the newly proposed HAM modules; an HAM module is inserted between each C3 module and the following convolution module of the backbone. The backbone extracts feature maps of different sizes from the input image through repeated convolution and downsampling. With an input image of 416 × 416 pixels, the backbone generates five feature maps after 2, 4, 8, 16 and 32 times downsampling, of sizes 208 × 208, 104 × 104, 52 × 52, 26 × 26 and 13 × 13 pixels respectively.
The neck adopts an FPN + PAN pyramid structure. It fuses the feature maps of layers 3, 4 and 5 of the backbone to obtain more context information, reduces information loss during transmission and improves network performance; at the same time, the layer-5 feature map of the backbone is led out as the input of the MCM module, whose output is connected to the first Concat of the FPN. During fusion, the FPN structure passes semantic information from top to bottom, and the PAN structure passes localization information from bottom to top. Together the two structures strengthen the neck's feature fusion capability. After fusion, three new feature maps are generated, of sizes 52 × 52 × 27, 26 × 26 × 27 and 13 × 13 × 27, where 27 is the number of channels. The smaller the feature map, the larger the image area corresponding to each grid cell: the 13 × 13 × 27 map suits large targets, the 26 × 26 × 27 map medium targets, and the 52 × 52 × 27 map small targets. Based on these new feature maps, the head performs object detection and classification.
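The 27 output channels are consistent with three anchors per scale, each predicting 4 box coordinates, 1 objectness score and the 4 defect classes; this decomposition follows the standard YOLOv5 convention and is stated here as an assumption:

```python
def head_channels(num_classes=4, anchors_per_scale=3):
    """Output channels per scale: anchors x (4 box + 1 objectness + classes)."""
    return anchors_per_scale * (4 + 1 + num_classes)

print(head_channels())  # 27
```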
2. Building a data set
2.1 sources of training data
At the tail end of a light guide plate production line in an industrial field, a 16K line-scan camera collects pictures of the produced hot-pressed light guide plates (LGPs) and sends them to an upper computer for further preprocessing.
2.2 preprocessing
An example of a collected single hot-pressed light guide plate picture is shown in fig. 6. Its resolution is 30000 × 16384, and the region of interest, i.e. the effective area of the hot-pressed light guide plate, is 23732 × 13117. Since the input size of the network constructed in step 1.3 is set to 416 × 416, the picture must be preprocessed.
First, the region of interest is extracted from the whole collected hot-pressed light guide plate picture with an edge detection algorithm. The algorithm uses the Canny operator from the OpenCV computer vision library: the Canny operator computes the gradient over the whole picture, the positions of maximum gradient change are the edges of the region of interest, and the region is located from these edges. The region of interest is then divided into a group of 416 × 416 windows: the window size is set to 416 × 416, the sliding start point is the upper-left boundary of the region of interest, and the sliding step is 0.8 times the window side length; the window slides left to right and top to bottom until the lower-right boundary of the region of interest is reached, yielding 2880 windows of size 416 × 416. A step of 0.8 times the side length makes adjacent windows overlap, which prevents defects on window boundaries from being missed. The light guide plate defect detection model then inspects each window.
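A minimal sketch of this tiling scheme (illustrative; clamping the last row and column to the ROI boundary is an assumption, but it reproduces the 2880-window count stated above for a 23732 × 13117 region of interest):

```python
def window_origins(roi_w, roi_h, win=416, overlap=0.8):
    """Top-left corners of overlapping sliding windows covering the ROI."""
    step = int(win * overlap)  # 332 px for a 416-px window
    xs = list(range(0, roi_w - win + 1, step))
    ys = list(range(0, roi_h - win + 1, step))
    # Clamp a final column/row so the ROI edge is always covered.
    if xs[-1] + win < roi_w:
        xs.append(roi_w - win)
    if ys[-1] + win < roi_h:
        ys.append(roi_h - win)
    return [(x, y) for y in ys for x in xs]

print(len(window_origins(23732, 13117)))  # 2880
```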
2.3 dataset partitioning
The 2880 windows obtained by preprocessing do not necessarily all contain defects, so all windows are screened manually to select those that do. The method preprocesses 1800 pictures and builds a data set from all screened defective windows, classified by defect type into white point defects, line defects and surface defects, with line defects further subdivided into bright line and dark line defects. Because the number of samples per defect type differs in actual production, 50% of the pictures of each defect type are randomly selected for expansion, to keep training reasonable and the classes balanced. The expansion applies, to each selected defect image, a brightness enhancement of 120-150%, translation, and horizontal or vertical flipping. After expansion the data set contains 4112 images; the per-class statistics are shown in table 1.
TABLE 1 extended Defect dataset
Defect type:  white point  bright line  dark line  surface  total
Samples:      1042         1014         1181       875      4112
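A hedged sketch of the expansion transforms (one plausible NumPy implementation; the exact translation offsets and transform sampling are not specified in the text, so translation is omitted here):

```python
import random
import numpy as np

def expand(img, rng=random):
    """Apply one expansion transform from the text: brightness gain in
    [1.2, 1.5] (i.e. 120-150%), horizontal flip, or vertical flip."""
    op = rng.choice(["brightness", "hflip", "vflip"])
    if op == "brightness":
        gain = rng.uniform(1.2, 1.5)
        return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    if op == "hflip":
        return img[:, ::-1].copy()
    return img[::-1, :].copy()
```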
Then, for reasonable training, each defect class in table 1 is divided into a training set, a verification set and a test set at a ratio of 6:2:2; the specific composition is shown in table 2 below.
Table 2 composition of defect data set of hot-pressed light guide plate
(Table 2, rendered as an image in the original publication, lists the 6:2:2 train/verification/test counts for each defect type.)
The data set comprises 4112 defect images divided into four defect types: white point, bright line, dark line and surface, with 1042 white point, 1014 bright line, 1181 dark line and 875 surface defect images. Typical defect samples are shown in fig. 7: fig. 7(a) shows a white point defect, fig. 7(b) a bright line defect, fig. 7(c) a dark line defect and fig. 7(d) a surface defect.
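A simple sketch of the 6:2:2 per-class split (the shuffling and rounding policy is an assumption; the text does not specify them):

```python
import random

def split_622(samples, seed=0):
    """Shuffle one defect class and split it 6:2:2 into train/val/test."""
    rng = random.Random(seed)
    s = list(samples)
    rng.shuffle(s)
    n_train = int(len(s) * 0.6)
    n_val = int(len(s) * 0.2)
    return s[:n_train], s[n_train:n_train + n_val], s[n_train + n_val:]

train, val, test = split_622(range(1042))  # e.g. the white point class
print(len(train), len(val), len(test))  # 625 208 209
```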
2.4 dataset tagging
The improved YOLOv5 network used by the invention is trained in a supervised manner, so the constructed data set must be labeled. The labeling tool is LabelImg: each image in the training, verification and test sets is annotated with its defect type and defect position, after which LabelImg generates the corresponding label file.
3. Training and testing light guide plate defect detection model
3.1 mosaic data enhancement
To enrich the background information of detected targets and improve network robustness, mosaic data enhancement is used during training. The process, shown in fig. 8, randomly selects four pictures from the training set, applies random flipping, scaling and brightness changes to each, and stitches the four transformed pictures into one picture at randomly selected splice points. This feeds four images into the network at once, enriching background information and improving robustness; the random scaling also adds many small targets, enriching the small-target data.
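A minimal sketch of the stitching step (layout conventions such as the splice-point range are assumptions; the per-image random transforms would be applied before this step):

```python
import random
import numpy as np

def mosaic(imgs, size=416, seed=None):
    """Stitch four equally sized images into one canvas around a random
    splice point (cx, cy), one image per quadrant."""
    rng = random.Random(seed)
    cx = rng.randint(size // 4, 3 * size // 4)
    cy = rng.randint(size // 4, 3 * size // 4)
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    quadrants = [(0, cy, 0, cx), (0, cy, cx, size),
                 (cy, size, 0, cx), (cy, size, cx, size)]
    for img, (y0, y1, x0, x1) in zip(imgs, quadrants):
        canvas[y0:y1, x0:x1] = img[:y1 - y0, :x1 - x0]
    return canvas
```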
3.2 loss function
The loss function of the light guide plate defect detection model consists of three parts: position loss, confidence loss and classification loss, as follows:
Loss = ω_box·L_box + ω_obj·L_obj + ω_cls·L_cls    (5)
wherein, according to a study of the hyper-parameters, ω_box = 0.05, ω_obj = 0.5 and ω_cls = 1;
L_box denotes the position loss, defined as:
L_box = 1 − IOU + ρ²(A, B)/c² + α·v
IOU is the intersection-over-union of the prediction box and the ground-truth box; the larger the IOU, the closer the two boxes are. ρ denotes the Euclidean distance between the center points of the ground-truth box A and the prediction box B, and c denotes the diagonal length of the smallest box enclosing them. α is a weighting function and v measures the consistency of the aspect ratios of A and B.
The definition of IOU is:
IOU = |A ∩ B| / |A ∪ B|
where A is the ground-truth box, B is the prediction box, A ∩ B denotes the intersection of A and B, and A ∪ B denotes their union.
α and v are defined as follows:
α = v / ((1 − IOU) + v)

v = (4/π²)·(arctan(w_A/h_A) − arctan(w_B/h_B))²
Both the classification loss and the confidence loss use the binary cross-entropy loss function BCEWithLogitsLoss, defined as follows:
L = −(1/n)·Σ_n [y_n·log σ(x_n) + (1 − y_n)·log(1 − σ(x_n))]
where n is the number of input samples, y_n the target value, x_n the prediction of the network, and σ the sigmoid applied internally by BCEWithLogitsLoss.
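As a plain-Python illustration (not the patent's implementation), the position-loss term can be computed directly from the definitions above for two axis-aligned boxes given as (x1, y1, x2, y2):

```python
import math

def ciou_loss(box_a, box_b):
    """Position loss L_box = 1 - IOU + rho^2/c^2 + alpha*v
    for ground-truth box A and prediction box B."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection-over-union.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Squared centre distance rho^2 and enclosing-box diagonal c^2.
    rho2 = (((ax1 + ax2) - (bx1 + bx2)) / 2) ** 2 + (((ay1 + ay2) - (by1 + by2)) / 2) ** 2
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term v and its weight alpha.
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v
```

For identical boxes the loss is 0, and it grows as the boxes drift apart or change shape.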
3.3 training Process and output, test Process and output
The hardware environment and software configuration during training are shown in table 3.
TABLE 3 hardware Environment and software version
(Table 3, rendered as an image in the original publication, lists the hardware environment and software versions used for training.)
The total number of training rounds was 100, the batch size 16 and the learning rate 0.01, with the SGD optimizer. To verify that these parameters are optimal, experiments adjusted them and observed the resulting performance of the improved YOLOv5 (i.e. the light guide plate defect detection model of the invention) on the test set of the self-built hot-pressed LGP defect data set. The results are shown in table 4.
TABLE 4 parameter adjustment and results
(Table 4, rendered as an image in the original publication, lists the parameter settings of each experiment and the resulting accuracy.)
Experiments showed that the loss function stabilizes as the number of training rounds approaches 100, so the number of rounds is set to 100 herein. Meanwhile, as table 4 shows, the parameter setting of exp1 gives the highest accuracy.
During training, the training set is used to fit the model, with gradient descent on the training error and weight updates; the verification set is used to check the generalization ability of the model during training and to adjust its hyper-parameters; and the test set is mainly used to evaluate the trained model, yielding the light guide plate defect detection model usable online.
Fig. 9 shows the loss function curves for the training and verification sets. The upper row gives the position, confidence and classification loss curves during training; the lower row gives the corresponding curves during verification. The curves converge rapidly within the first 30 rounds, and training completes at 100 rounds.
To verify the usability of the resulting online-capable defect detection model, 4 pictures were randomly selected for testing; the results are shown in fig. 10. From left to right, fig. 10 shows a white point, a bright line, a dark line and a surface defect, with confidences of 0.83, 0.89 and 0.88. The trained model of the invention not only identifies the defect type but also localizes it accurately, solving the low localization accuracy of traditional algorithms.
4. On-line use
In actual online use, the light guide plate defect detection model trained in step 3 runs on a Windows 10 system. During online operation, at the tail end of the light guide plate production line, a 16K line-scan camera collects pictures of the produced hot-pressed light guide plates and sends them to the upper computer for preprocessing, which extracts the region of interest and divides the whole picture into a set of 416 × 416 pictures. The defect detection model then predicts on all 416 × 416 pictures and outputs pictures annotated with defect type, confidence and defect position. If an output picture contains one or more of the white point, bright line, dark line or surface defect types, the hot-pressed light guide plate is manually judged an unqualified product; otherwise it is judged qualified.
Experiment:
To verify the effectiveness of the improved YOLOv5 defect detection model, mean average precision (mAP), average precision (AP) and frames per second (FPS) were used as metrics.
AP and mAP are defined as follows:
AP = ∫₀¹ P(R) dR

mAP = (1/N)·Σ_i AP_i
The AP is the area under the P-R curve traced by precision and recall. The mAP is the mean of the AP values over all categories and measures the detection performance of the network model across categories. The FPS is the number of pictures the target detection network can process per second; the larger the FPS, the more pictures processed per second and the higher the processing speed.
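A small sketch of step-wise P-R integration (one common way to compute the area; the patent does not specify the interpolation scheme):

```python
def average_precision(recalls, precisions):
    """Step integration of the P-R curve; points sorted by increasing recall."""
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

def mean_average_precision(aps):
    """mAP: mean of the per-class AP values."""
    return sum(aps) / len(aps)

print(average_precision([0.5, 1.0], [1.0, 0.5]))  # 0.75
```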
The experiment builds a deep learning environment based on PyTorch, runs on the GPU, and acquires experimental data with the trained model. To further verify the effectiveness of the improved YOLOv5 defect detection model, the invention was compared with the single-stage target detection methods SSD [1], YOLOv3 [2], YOLOv4 [3], YOLOv5 [4] and YOLOX [5]; the backbone network structure of each detection method is shown in table 5.
TABLE 5 network architecture for each model
(Table 5, rendered as images in the original publication, lists the backbone network of each compared model.)
The results of the comparative experiments between the invention and the SSD, YOLOv3, YOLOv4, YOLOv5 and YOLOX target detection networks are shown in table 6; the last row of table 6 gives the results of the invention.
TABLE 6 correlation method comparison
(Table 6, rendered as an image in the original publication, lists the per-class AP, mAP and FPS of each compared method.)
As table 6 shows, the improved YOLOv5 defect detection method of the invention is significantly superior to the other target detection methods. The YOLOv3 model has the lowest mAP, 66.6%, which does not meet the detection requirement; the YOLOv5 model reaches 97.7% mAP, while the improved YOLOv5 model of the invention reaches 98.9%, an overall improvement of 1.2 percentage points. The white point precision improves by 2.7% and the dark line precision by 2.0%, greatly improving detection of the small white point targets and the dark line defects, at the cost of a slight drop in FPS.
To further verify the advancement of the invention, 5 random pictures were selected from the experiment and tested on each model; the results are shown in fig. 11. In fig. 11, the first row is the 5 randomly selected sample pictures, the second row the ground-truth box positions of their defective regions, and rows 3-8 the prediction results of each model; the number at the top right of each box is the prediction confidence, with higher confidence representing a higher likelihood of being a target. Fig. 11 makes clear that the models differ in their detection of the hot-pressed LGP defect data set: YOLOv3 misses the surface and white point defects, and our method is overall significantly better than the other models, which further verifies the advancement of the invention.
The references referred to above are as follows:
[1] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector[C]. European Conference on Computer Vision, 2016: 21-37.
[2] Redmon J, Farhadi A. YOLOv3: An incremental improvement[J]. arXiv preprint arXiv:1804.02767, 2018.
[3] Bochkovskiy A, Wang C-Y, Liao H-Y M. YOLOv4: Optimal speed and accuracy of object detection[J]. arXiv preprint arXiv:2004.10934, 2020.
[4] YOLOv5 [EB/OL]. https://github.com/ultralytics/yolov5.git/
[5] YOLOX [EB/OL]. https://github.com/Megvii-BaseDetection/yoloX.git/
Finally, it is noted that the above merely illustrates some specific embodiments of the invention. The invention is obviously not limited to these embodiments; many variations are possible. All modifications that a person skilled in the art can derive or infer from the disclosure of the invention are to be considered within the scope of the invention.

Claims (6)

1. A hot-pressed light guide plate defect visual detection method based on improved YOLOv5, characterized in that pictures of a hot-pressed light guide plate are collected and sent to an upper computer for preprocessing, the preprocessing comprising obtaining a region of interest with an edge detection algorithm and dividing the region of interest into a group of 416 × 416 pictures by a sliding-window division method; all the obtained 416 × 416 pictures are input in turn into a light guide plate defect detection model for target detection and classification, and pictures marked with defect type, confidence and defect position are output;
the light guide plate defect detection model takes YOLOv5 as the baseline network and comprises a backbone network based on the CSPDarknet-53 network, a neck adopting the FPN + PAN pyramid structure, and an output section, wherein an HAM module is inserted between each C3 module and the adjacent convolution module of the backbone network; after 2×, 4×, 8×, 16× and 32× downsampling of the 416 × 416 picture by the backbone network, five layers of feature maps of 208 × 208, 104 × 104, 52 × 52, 26 × 26 and 13 × 13 pixels are generated and input into the neck; the neck fuses the 52 × 52, 26 × 26 and 13 × 13 pixel feature maps while taking the 13 × 13 pixel feature map as the input of the MCM module and connecting the output of the MCM to the first Concat of the FPN; the neck generates three new feature maps of sizes 52 × 52 × 27, 26 × 26 × 27 and 13 × 13 × 27, which are input to the output section for target detection and classification.
2. The method for visually detecting the defects of the hot-pressed light guide plate based on the improved YOLOv5 as claimed in claim 1, wherein:
the HAM module adopts a residual structure: the input feature map F outputs channel information through an efficient channel attention module for deep convolutional neural networks, then outputs spatial information through a convolutional block attention module, and the channel and spatial information are multiplied to obtain a feature map fusing channel and spatial information; the input feature map F, after passing through a Conv module, is then added to the feature map fusing channel and spatial information to form the output of the HAM module;
the Conv module consists of an ordinary convolution with kernel size 3 × 3 and stride 1, batch normalization, and a SiLU activation function.
3. The method for visually detecting the defects of the hot-pressed light guide plate based on the improved YOLOv5 as claimed in claim 2, wherein:
the MCM module performs convolution on the input feature map with dilated convolutions of dilation rates 1, 3 and 5 respectively to obtain three feature maps, concatenates the three feature maps (doubling the number of channels), reduces the number of channels with a 1 × 1 convolution, and finally, after a sigmoid activation function, multiplies the result with the feature map input to the MCM module to obtain the feature map output by the MCM module.
4. The method for visually detecting the defects of the hot-pressed light guide plate based on the improved YOLOv5 as claimed in claim 3, wherein:
the sliding-window division method specifically comprises: selecting a 416 × 416 window whose sliding start point is the upper-left boundary of the region of interest, with a sliding step of 0.8 times the window side length, then sliding the window through the region of interest from left to right and top to bottom until the lower-right boundary of the region of interest is reached, obtaining the group of 416 × 416 pictures;
the edge detection algorithm adopts the Canny operator from the OpenCV computer vision library: the Canny operator obtains the gradient of the whole hot-pressed light guide plate picture, the positions of maximum gradient change are the edges of the region of interest, and the region of interest is obtained from these edges.
5. The method for visually detecting the defects of the hot-pressed light guide plate based on the improved YOLOv5 as claimed in claim 4, wherein:
the training and testing process of the light guide plate defect detection model comprises the following steps:
1) constructing a training set, a verification set and a test set;
collecting hot-pressed light guide plate pictures on an industrial site, obtaining a region of interest with an edge detection algorithm, dividing the region of interest into a group of 416 × 416 pictures by the sliding-window division method, manually screening out the pictures containing the four defect types, expanding the pictures of each defect type, then dividing the pictures of each defect type into a training set, a verification set and a test set at a ratio of 6:2:2, marking the defect type and defect position of each picture, and generating the corresponding label files with the LabelImg software;
2) constructing a loss function of the light guide plate defect detection model:
Loss = ω_box·L_box + ω_obj·L_obj + ω_cls·L_cls    (5)
wherein ω_box = 0.05, ω_obj = 0.5 and ω_cls = 1;
L_box is the position loss:
L_box = 1 − IOU + ρ²(A, B)/c² + α·v

L = −(1/n)·Σ_n [y_n·log σ(x_n) + (1 − y_n)·log(1 − σ(x_n))]
where n is the number of input samples, y_n the target value and x_n the prediction of the network; IOU is the intersection-over-union of the prediction box and the ground-truth box, ρ the Euclidean distance between the center points of the ground-truth box A and the prediction box B, c the diagonal length of the smallest box enclosing A and B, α a weighting function, and v a parameter measuring the consistency of the aspect ratios of A and B;
IOU = |A ∩ B| / |A ∪ B|

α = v / ((1 − IOU) + v)

v = (4/π²)·(arctan(w_A/h_A) − arctan(w_B/h_B))²
3) the total number of training rounds is 100, the batch size 16 and the learning rate 0.01, with the SGD optimizer; the pictures in the training set undergo mosaic data enhancement as the input of model training; during training the training set is used to fit the model, with gradient descent on the training error and weight updates; the verification set is used to verify the generalization ability of the model during training and to adjust its hyper-parameters, and the test set is used to evaluate the trained model, yielding the light guide plate defect detection model usable online.
6. The method for visually detecting the defects of the hot-pressed light guide plate based on the improved YOLOv5 as claimed in claim 5, wherein:
the expansion processing specifically comprises: randomly selecting 50% of the pictures of each defect type and applying 120-150% brightness enhancement, translation, and horizontal or vertical flipping;
the mosaic data enhancement processing comprises: randomly selecting four pictures from the training set, applying random flipping, scaling and brightness-change transforms to each, and stitching the four transformed pictures into one image at randomly selected splice points;
the defect types are white point defects, bright line defects, dark line defects and surface defects.


Publications (1)

Publication Number: CN114881987A (en), Publication Date: 2022-08-09




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination