CN112200762A - Diode glass bulb defect detection method - Google Patents


Info

Publication number
CN112200762A
CN112200762A (application CN202010633906.2A)
Authority
CN
China
Prior art keywords
image
diode glass
glass bulb
diode
industrial camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010633906.2A
Other languages
Chinese (zh)
Inventor
刘桂华
向伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mianyang Keruite Robot Co ltd
Southwest University of Science and Technology
Original Assignee
Mianyang Keruite Robot Co ltd
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mianyang Keruite Robot Co ltd, Southwest University of Science and Technology filed Critical Mianyang Keruite Robot Co ltd
Priority to CN202010633906.2A
Publication of CN112200762A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8883 Scan or image signal processing specially adapted therefor, involving the calculation of gauges, generating models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention relates to the field of computer vision and provides a method for detecting defects of a diode glass bulb, comprising the following steps. Step 1: place the diode glass bulb to be inspected on an industrial camera platform and switch on the backlight source at the bottom of the platform. Step 2: an industrial camera mounted on the platform captures an image of the diode glass bulb through a telecentric lens. Step 3: the captured diode glass-bulb image is fed as input to a trained defect recognition model in a computer; the model outputs the localised diode glass-bulb image, on which the defects are marked and displayed.

Description

Diode glass bulb defect detection method
Technical Field
The invention relates to the field of computer vision, in particular to a method for detecting defects of a diode glass shell.
Background
Because the diode glass bulb is transparent and has internal structural and contour features, these features easily interfere with defect features during imaging; together with interference from the external environment, this makes detection very difficult and classification accuracy hard to improve, while industrial application demands very high performance indexes as a guarantee.
Chinese patent application CN201510053437.6, a cylindrical diode surface defect detection device based on machine vision, discloses the hardware and software algorithm design of such a device. The hardware design comprises: selection of the industrial camera, selection of the lens, and construction of the optical platform. The software defect-detection algorithm comprises: tube-body segmentation, tube-body preprocessing, defect ROI segmentation, feature extraction and decision-tree classifier design. For the optical platform, a reasonable lighting mode and light-source placement are determined from optical principles and the structural characteristics of the object. For the defect-detection operator, the difficulty lies in segmentation of the defect ROI and extraction of texture features, for which an improved stroke-width transform and a method of extracting oriented-gradient-histogram features are proposed respectively. Finally, the defects are classified by a decision-tree classifier, with a defect recognition rate close to 100% and a classification rate of 96.2%, giving a good recognition and classification effect.
Therefore, a method is needed that can identify defects on the surface of the diode glass bulb, detect them rapidly, and locate them accurately.
Disclosure of Invention
The invention aims to provide a method for detecting defects of the diode glass bulb that can accurately locate the detected defects; the method is reasonably structured, ingeniously designed and suitable for popularization.
the technical scheme adopted by the invention is as follows: the method for detecting the defects of the diode glass bulb comprises the following steps:
step 1: placing a diode glass bulb to be detected on an industrial camera platform, and starting a backlight source at the bottom of the industrial camera platform; step 2: an industrial camera is arranged on the industrial camera platform, and a diode glass bulb image is captured through a telecentric lens on the industrial camera;
and step 3: and sending the collected diode glass shell image as input to a trained defect identification model in a computer, wherein the output of the defect identification model is the positioned diode glass shell image, and the defect on the positioned diode glass shell image is marked and displayed.
Preferably, in step 2, the defect identification model is a YOLOv3 model.
Preferably, in step 2, the industrial camera platform further includes a light source controller, the light source controller is respectively connected to the industrial camera and the backlight source, and the industrial camera is connected to the computer through the light source controller.
Preferably, in the step 3, the training process of the defect recognition model comprises the following steps:
step 11: acquiring 4000 diode glass-bulb images with defects of different types, then proceeding to step 22;
step 22: expanding the acquired diode glass-bulb images into a sample set of 50000 images, and annotating the images of the processed sample set;
step 33: performing YOLOv3 model training on the annotated diode glass-bulb crack images, with 42000 images as the training set and 8000 images as the validation set, to obtain the trained defect recognition model.
Preferably, in the step 11, the acquired images are flipped left and right to obtain flipped images; cropped to different sizes to obtain images of various sizes; and scaled at multiple scales to obtain multi-size scaled images. The flipped images, the cropped images of various sizes and the multi-size scaled images form the processed sample set.
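The augmentation steps above (left-right flipping, multi-size cropping and multi-scale resizing) can be sketched as follows. This is an illustrative NumPy implementation, not code from the patent; the crop fractions and scale factors are assumed values.

```python
import numpy as np

def augment(image):
    """Expand one sample into flipped, cropped and rescaled variants,
    mirroring the three augmentation steps described in step 11."""
    samples = []
    # 1) left-right flip
    samples.append(np.fliplr(image))
    h, w = image.shape[:2]
    # 2) centre crops of different sizes (fractions are illustrative)
    for frac in (0.75, 0.5):
        ch, cw = int(h * frac), int(w * frac)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        samples.append(image[y0:y0 + ch, x0:x0 + cw])
    # 3) multi-scale resize via nearest-neighbour index mapping
    for scale in (0.5, 2.0):
        nh, nw = int(h * scale), int(w * scale)
        ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
        xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
        samples.append(image[np.ix_(ys, xs)])
    return samples
```

In a real pipeline each variant would also carry transformed defect annotations; only the image side is shown here.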
Preferably, the number of images in the processed sample set is a multiple of the number of glass-bulb samples.
Preferably, the anchor box sizes are obtained by running several iterations of the K-means algorithm on the VOC data set; when the input image size is 416 × 416, the YOLOv3 anchor box sizes are { [10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326] }.
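Anchor sizes of this kind are typically obtained by clustering ground-truth box sizes with K-means under a 1 − IoU distance, as popularised by YOLOv2/v3. A minimal sketch follows; the deterministic initialisation and iteration count are illustrative choices, not from the patent.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (N, 2) box sizes and (K, 2) anchor sizes,
    treating all boxes as sharing the same top-left corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50):
    # deterministic init: k boxes evenly spaced by area (illustrative)
    order = np.argsort(boxes[:, 0] * boxes[:, 1])
    anchors = boxes[order][np.linspace(0, len(boxes) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # assign each box to the anchor with max IoU (distance = 1 - IoU)
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        for j in range(k):
            members = boxes[assign == j]
            if len(members):
                anchors[j] = members.mean(axis=0)
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]
```

Running this over all annotated defect boxes with k = 9 would yield a size-sorted anchor list analogous to the one quoted above.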
Preferably, the loss function integrates the anchor-box centre-coordinate loss, the width-and-height loss, the confidence loss and the classification loss; the anchor-box losses are calculated as a sum of squares, while the classification error and the confidence error are calculated with binary cross-entropy. The specific formula is:

$$
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\big] \\
{}+{}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[(\sqrt{w_i}-\sqrt{\hat{w}_i})^2+(\sqrt{h_i}-\sqrt{\hat{h}_i})^2\big] \\
{}-{}& \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big] \\
{}-{}& \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big] \\
{}-{}& \sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in classes}\big[\hat{p}_i(c)\log p_i(c)+(1-\hat{p}_i(c))\log(1-p_i(c))\big]
\end{aligned}
$$

where $\mathbb{1}_{ij}^{obj}$ indicates that the jth anchor box of the ith grid cell contains a real target. Parts 1 and 2 are the anchor-box loss and parts 3 and 4 are the confidence loss; the confidence error comprises a target part and a non-target part, and because anchor boxes containing no target far outnumber those containing a target, the non-target term carries the coefficient λ_noobj = 0.5 to reduce its contribution weight. Part 5 is the classification error.
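A NumPy sketch of a loss with this structure, sum of squares for the box terms plus binary cross-entropy for confidence (with λ_noobj = 0.5) and classification, is given below. The tensor layout and function names are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Elementwise binary cross-entropy between predictions p and targets y."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def yolo_loss(pred, target, obj_mask, lambda_noobj=0.5):
    """pred/target: (..., 5 + C) = [x, y, w, h, conf, class scores...];
    obj_mask: 1.0 where an anchor is responsible for a real target, else 0.0."""
    noobj_mask = 1.0 - obj_mask
    # parts 1-2: sum-of-squares box regression on responsible anchors
    coord = np.sum(obj_mask[..., None] * (pred[..., :4] - target[..., :4]) ** 2)
    # parts 3-4: confidence BCE, non-target part down-weighted by lambda_noobj
    conf_err = bce(pred[..., 4], target[..., 4])
    conf = np.sum(obj_mask * conf_err) + lambda_noobj * np.sum(noobj_mask * conf_err)
    # part 5: per-class BCE on responsible anchors
    cls = np.sum(obj_mask[..., None] * bce(pred[..., 5:], target[..., 5:]))
    return coord + conf + cls
```

A perfect prediction drives all five parts towards zero, while a wrong confidence on a target anchor dominates the total.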
Preferably, the training process of the training image set on the YOLOv3 model is as follows:
the images of the input training set are divided into S × S grid cells;
each cell of the S × S grid generates 3 bounding boxes, whose attributes comprise the centre coordinates, width, height, confidence and the probability of belonging to a workpiece-crack target; candidate boxes that do not contain a target are eliminated when the object confidence is below the threshold th1, and non-maximum suppression then selects the candidate box with the largest intersection over union (IoU) with the real box for target prediction, as follows:
b_x = σ(t_x) + c_x (1)
b_y = σ(t_y) + c_y (2)
b_w = p_w e^{t_w} (3)
b_h = p_h e^{t_h} (4)
b_x, b_y, b_w, b_h are the centre coordinates, width and height of the bounding box finally predicted by the network, where c_x, c_y are the coordinate offsets of the grid cell; p_w, p_h are the width and height of the anchor box mapped into the feature map; and t_x, t_y, t_w, t_h are parameters to be learned during network training: t_w, t_h represent the scaling of the prediction box, t_x, t_y represent the shift of its centre coordinates, and σ denotes the sigmoid function. The parameters t_x, t_y, t_w, t_h are updated by continuous learning so that the prediction box approaches the real box, and training stops when the network loss falls below the set threshold th2 or the number of training steps reaches the maximum iteration count N.
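Equations (1) to (4) can be checked with a small decoding function. Values here are in feature-map units, matching the formulas as written; mapping back to input-image pixels would additionally multiply by the stride, a step the text does not spell out.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_box(t, cell, anchor):
    """Decode one prediction per equations (1)-(4).
    t = (tx, ty, tw, th): learned offsets; cell = (cx, cy): grid offsets;
    anchor = (pw, ph): anchor size mapped into the feature map."""
    tx, ty, tw, th = t
    cx, cy = cell
    pw, ph = anchor
    bx = sigmoid(tx) + cx        # eq. (1): centre x within the feature map
    by = sigmoid(ty) + cy        # eq. (2): centre y
    bw = pw * np.exp(tw)         # eq. (3): width scales the anchor
    bh = ph * np.exp(th)         # eq. (4): height scales the anchor
    return bx, by, bw, bh
```

With zero offsets the box sits at the cell centre with exactly the anchor's size, which is the intended neutral starting point of the parameterisation.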
It is worth noting that the training of the data set on the YOLOv3 model performs bounding-box prediction at 3 scales:
scale 1: convolutional layers are added after the feature-extraction network; with a down-sampling ratio of 32, the output feature map has scale 13 × 13 and is suitable for detecting large targets;
scale 2: the penultimate convolutional layer in scale 1 is up-sampled by a factor of 2 and concatenated with a feature map whose down-sampling ratio is 16, giving a 26 × 26 feature map, twice the size of scale 1 and suitable for detecting medium-scale targets;
scale 3: by analogy with scale 2, a 52 × 52 feature map is obtained, suitable for detecting smaller targets.
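The candidate-box filtering described in the training procedure above, discarding boxes whose object confidence falls below th1 and then applying non-maximum suppression, can be sketched as follows; the threshold values are illustrative, not taken from the patent.

```python
def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_th=0.5, iou_th=0.45):
    """Drop boxes below the confidence threshold (th1 in the text),
    then greedily keep the best-scoring box and suppress overlaps."""
    candidates = [i for i in range(len(boxes)) if scores[i] >= conf_th]
    candidates.sort(key=lambda i: scores[i], reverse=True)
    kept = []
    while candidates:
        best = candidates.pop(0)
        kept.append(best)
        candidates = [i for i in candidates
                      if iou(boxes[best], boxes[i]) < iou_th]
    return kept
```

Applied per defect class, this reduces the 3 boxes per grid cell at each of the 3 scales to a small set of final detections.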
Compared with the prior art, the invention has the following beneficial effects:
1. machine recognition with high accuracy and a high recall rate;
2. detection with the YOLOv3 model, which trains quickly and is convenient to deploy.
Drawings
FIG. 1 is a schematic view of the apparatus of the present invention in use;
FIG. 2 is a schematic diagram of a method for detecting defects of a diode glass bulb according to an embodiment of the present invention;
FIG. 3 is a diagram of the YOLOv3 model in an embodiment of the invention;
fig. 4 is a diagram illustrating the effect of diode glass envelope defects in an embodiment of the present invention.
Description of reference numerals: 1. an industrial camera stand; 2. an industrial camera; 3. a telecentric lens; 4. a glass envelope; 5. a backlight light source; 6. A light source controller; 7. a computer.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to figs. 1 to 4; obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive effort shall fall within the protection scope of the present invention.
Referring to fig. 1, the method for detecting defects of a diode glass bulb includes the following steps:
step 1: placing a diode glass bulb to be detected on an industrial camera platform, and starting a backlight source at the bottom of the industrial camera platform;
step 2: an industrial camera is arranged on the industrial camera platform, and a diode glass bulb image is captured through a telecentric lens on the industrial camera;
and step 3: and sending the collected diode glass shell image as input to a trained defect identification model in a computer, wherein the output of the defect identification model is the positioned diode glass shell image, and the defect on the positioned diode glass shell image is marked and displayed.
It should be noted that, referring to fig. 3, in step 2 the defect recognition model is a YOLOv3 model; in step 2, the industrial camera platform further comprises a light source controller connected respectively to the industrial camera and the backlight source, and the industrial camera is connected to the computer through the light source controller; in step 3, the training process of the defect recognition model comprises the following steps:
step 11: acquiring 4000 diode glass-bulb images with defects of different types, then proceeding to step 22;
step 22: expanding the diode glass-bulb sample images into a sample set of 50000 images, and annotating the images of the processed sample set;
step 33: performing YOLOv3 model training on the annotated diode glass-bulb crack images to obtain the trained defect recognition model.
It should be noted that, in the step 11, the acquired images are flipped left and right to obtain flipped images; cropped to different sizes to obtain images of various sizes; and scaled at multiple scales to obtain multi-size scaled images. The flipped images, the cropped images of various sizes and the multi-size scaled images form the processed sample set.
It is worth mentioning that the number of images in the processed sample set is a multiple of the number of diode glass-bulb samples.
it is worth mentioning that the anchor frame size is obtained by performing a plurality of iterations of the K-means algorithm on the VOC data set, and when the input image size is 416 x 416, the YOLOv3 anchor frame size is { [10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326 }.
It is worth noting that the loss function integrates the anchor-box centre-coordinate loss, the width-and-height loss, the confidence loss and the classification loss; the anchor-box losses are calculated as a sum of squares, while the classification error and the confidence error are calculated with binary cross-entropy. The specific formula is:

$$
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\big] \\
{}+{}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[(\sqrt{w_i}-\sqrt{\hat{w}_i})^2+(\sqrt{h_i}-\sqrt{\hat{h}_i})^2\big] \\
{}-{}& \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big] \\
{}-{}& \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big] \\
{}-{}& \sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in classes}\big[\hat{p}_i(c)\log p_i(c)+(1-\hat{p}_i(c))\log(1-p_i(c))\big]
\end{aligned}
$$

where $\mathbb{1}_{ij}^{obj}$ indicates that the jth anchor box of the ith grid cell contains a real target. Parts 1 and 2 are the anchor-box loss and parts 3 and 4 are the confidence loss; the confidence error comprises a target part and a non-target part, and because anchor boxes containing no target far outnumber those containing a target, the non-target term carries the coefficient λ_noobj = 0.5 to reduce its contribution weight. Part 5 is the classification error.
It should be noted that the training process of the training image set on the YOLOv3 model is as follows:
the images of the input training set are divided into S × S grid cells;
each cell of the S × S grid generates 3 bounding boxes, whose attributes comprise the centre coordinates, width, height, confidence and the probability of belonging to a workpiece-crack target; candidate boxes that do not contain a target are eliminated when the object confidence is below the threshold th1, and non-maximum suppression then selects the candidate box with the largest intersection over union (IoU) with the real box for target prediction, as follows:
b_x = σ(t_x) + c_x (1)
b_y = σ(t_y) + c_y (2)
b_w = p_w e^{t_w} (3)
b_h = p_h e^{t_h} (4)
b_x, b_y, b_w, b_h are the centre coordinates, width and height of the bounding box finally predicted by the network, where c_x, c_y are the coordinate offsets of the grid cell; p_w, p_h are the width and height of the anchor box mapped into the feature map; and t_x, t_y, t_w, t_h are parameters to be learned during network training: t_w, t_h represent the scaling of the prediction box, t_x, t_y represent the shift of its centre coordinates, and σ denotes the sigmoid function. The parameters t_x, t_y, t_w, t_h are updated by continuous learning so that the prediction box approaches the real box, and training stops when the network loss falls below the set threshold th2 or the number of training steps reaches the maximum iteration count N.
It is worth noting that the training of the data set on the YOLOv3 model performs bounding-box prediction at 3 scales:
scale 1: convolutional layers are added after the feature-extraction network; with a down-sampling ratio of 32, the output feature map has scale 13 × 13 and is suitable for detecting large targets;
scale 2: the penultimate convolutional layer in scale 1 is up-sampled by a factor of 2 and concatenated with a feature map whose down-sampling ratio is 16, giving a 26 × 26 feature map, twice the size of scale 1 and suitable for detecting medium-scale targets;
scale 3: by analogy with scale 2, a 52 × 52 feature map is obtained, suitable for detecting smaller targets.
In the specific embodiment, after sample expansion, 50000 defective diode glass-bulb samples are randomly selected as the training set and 5000 samples as the test set. Training runs for 80000 iterations in total, with the weights saved automatically every 5000 iterations; the base learning rate is 0.001, the batch size 32, the momentum 0.9 and the weight-decay coefficient 0.0005, and L2 regularization is adopted to reduce overfitting.
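The quoted hyperparameters (base learning rate 0.001, momentum 0.9, weight-decay coefficient 0.0005) correspond to a standard SGD update with momentum and an L2 penalty. A minimal sketch of one such parameter update, for illustration only and not taken from the patent:

```python
import numpy as np

# hyperparameters quoted in the text
LR, MOMENTUM, WEIGHT_DECAY = 0.001, 0.9, 0.0005

def sgd_step(w, grad, velocity):
    """One SGD update with momentum and L2 weight decay
    (the regularisation the text says is used to reduce overfitting)."""
    grad = grad + WEIGHT_DECAY * w           # gradient of the L2 penalty term
    velocity = MOMENTUM * velocity - LR * grad
    return w + velocity, velocity
```

Even with a zero loss gradient, the weight-decay term shrinks the weights slightly at every step, which is exactly the overfitting-reduction effect the text refers to.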
The detection performance of the YOLOv3 target detection model on the tubular workpiece is measured by accuracy, recall and the video frame rate (FPS). Higher accuracy and recall represent a better detection effect and better suit practical application, while a larger FPS value indicates better real-time performance of the YOLOv3 target detection model.
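These figures of merit can be computed from the counts of true positives, false positives and false negatives together with timing; a small helper for illustration (the function and variable names are assumptions, and the text's "accuracy" is treated as precision over detections):

```python
def detection_metrics(tp, fp, fn, frames, seconds):
    """Precision (the text's accuracy), recall, and frames per second."""
    precision = tp / (tp + fp)   # correct detections / all detections
    recall = tp / (tp + fn)      # correct detections / all real defects
    fps = frames / seconds       # throughput of the detector
    return precision, recall, fps
```

For example, 98 correct detections with 2 false alarms and 4 missed defects give 98% precision and about 96% recall, the same order as the results reported below.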
It is worth noting that one type of defect on the diode envelope is shown in fig. 4.
The FPS of the YOLOv3 target detection model running on a computer with a 1080 Ti graphics card is 18.3 f/s. The accuracy and recall of defect-sample detection are shown in Table 1, which gives the detection results of the YOLOv3 target detection model on the diode glass bulb:
TABLE 1
Type                Accuracy (%)   Recall (%)
Diode glass bulb    98.21          96.38
As can be seen from Table 1, the accuracy of the YOLOv3 target detection model on diode glass-bulb detection is 98.21% and the recall is 96.38%. The method achieves high accuracy and a high recall rate on the tubular workpiece; defects are detected with the YOLOv3 target detection model, and the tested diode glass-bulb image is shown in fig. 2. Fig. 2 is a schematic diagram of diode glass-bulb detection by the YOLOv3 target detection model, which can meet the actual requirements of diode glass-bulb detection in industrial production and has good application prospects.
In summary, the implementation principle of the invention is as follows. Step 1: place the diode glass bulb to be inspected on an industrial camera platform and switch on the backlight source at the bottom of the platform. Step 2: an industrial camera mounted on the platform captures an image of the diode glass bulb through a telecentric lens. Step 3: the captured diode glass-bulb image is fed as input to a trained defect recognition model in a computer; the model outputs the localised diode glass-bulb image, on which the defects are marked and displayed.

Claims (10)

1. The method for detecting the defects of the diode glass bulb is characterized by comprising the following steps of:
step 1: placing a diode glass bulb to be detected on an industrial camera platform, and starting a backlight source at the bottom of the industrial camera platform;
step 2: an industrial camera is arranged on the industrial camera platform, and a diode glass bulb image is captured through a telecentric lens on the industrial camera;
and step 3: and sending the collected diode glass shell image as input to a trained defect identification model in a computer, wherein the output of the defect identification model is the positioned diode glass shell image, and the defect on the positioned diode glass shell image is marked and displayed.
2. The diode glass bulb defect detection method of claim 1, wherein in the step 2, the defect identification model is a YOLOv3 model.
3. The diode glass bulb defect detecting method of claim 2, wherein in the step 2, the industrial camera platform further comprises a light source controller, the light source controller is respectively connected with the industrial camera and the backlight light source, and the industrial camera is connected with the computer through the light source controller.
4. The diode glass bulb defect detection method of claim 3, wherein in the step 3, the training process of the defect identification model comprises the following steps,
step 11: 4000 diode glass shell images with different types of defects are obtained, and the step 2 is carried out;
step 22: expanding the defective diode glass bulb image to obtain 50000 sample sets, and marking the image of the processed sample set;
step 33: and carrying out YOLOv3 model training on the marked diode glass bulb crack image to obtain a trained defect identification model.
5. The diode glass bulb defect detection method according to claim 4, wherein in the step 11, the acquired images are flipped left and right to obtain flipped images; cropped to different sizes to obtain images of various sizes; and scaled at multiple scales to obtain multi-size scaled images; and the flipped images, the cropped images of various sizes and the multi-size scaled images form the processed sample set.
6. The diode bulb defect detection method of claim 5, wherein the number of images of the processed sample set is a multiple of the number of bulb samples.
7. The diode envelope defect detection method of claim 6, characterized in that the anchor frame size is obtained by a number of iterations of the K-means algorithm on the VOC data set, and when the input image size is 416 x 416, the YOLOv3 anchor frame size is { [10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326] }.
8. The diode glass bulb defect detection method of claim 7, characterized in that the loss function integrates anchor frame center coordinate loss, width and height loss, confidence error and classification error, the anchor frame center coordinate loss is calculated by a sum of squares, and the classification error and confidence error are calculated by a binary cross loss entropy.
9. The diode glass bulb defect detection method of claim 1, wherein the training process of the training image set on the YOLOv3 model is as follows: the images of the input training set are divided into S × S grid cells, each cell of the S × S grid generating 3 bounding boxes whose attributes comprise the centre coordinates, width, height, confidence and the probability of belonging to the workpiece-crack target; candidate boxes that do not contain a target are eliminated when the object confidence is below the threshold th1, and non-maximum suppression then selects the candidate box with the largest intersection over union (IoU) with the real box for target prediction, the target prediction formulas being:
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w e^{t_w}
b_h = p_h e^{t_h}
b_x, b_y are the centre coordinates of the bounding box finally predicted on the S × S grid, and b_w, b_h are its width and height, where c_x, c_y are the coordinate offsets of the grid cell; p_w, p_h are the width and height of the anchor box mapped into the feature map; t_x, t_y, t_w, t_h are parameters to be learned during network training, t_w, t_h representing the scaling of the prediction box and t_x, t_y the shift of its centre coordinates; σ denotes the sigmoid function. The parameters t_x, t_y, t_w, t_h are updated so that the prediction box approaches the real box, and training stops when the network loss function value is below the set threshold th2 or the number of training iterations reaches the maximum N.
10. The diode glass bulb defect detection method of claim 1, wherein the training of the data set on the YOLOv3 model performs bounding box prediction at 3 scales:
scale 1 is set by adding several convolutional layers after the feature extraction network; with a downsampling ratio of 32 the output feature map has a scale of 13 × 13, used to detect small-scale defect targets;
scale 2 is set by changing the downsampling ratio of the penultimate convolutional layer of scale 1 to 16 and concatenating the result with a feature map of scale 26 × 26, doubling the resolution relative to scale 1, in order to detect medium-scale defect targets;
scale 3 is set by analogy with scale 2, obtaining a 52 × 52 feature map for detecting larger defect targets.
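The three claim-10 feature-map scales follow directly from the downsampling ratios. A small sketch, assuming the common YOLOv3 input size of 416 × 416 (the claim fixes only the ratios and the output scales 13, 26 and 52):

```python
def grid_sizes(input_size=416):
    """Feature-map resolutions for the three YOLOv3 detection scales of
    claim 10. The input size of 416 is an assumption; the downsampling
    ratios 32, 16 and 8 yield the 13x13, 26x26 and 52x52 grids."""
    strides = (32, 16, 8)  # downsampling ratio of scales 1, 2 and 3
    return [input_size // s for s in strides]
```

Halving the stride at each successive scale doubles the grid resolution, which is what lets each scale specialize in a different defect size.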
CN202010633906.2A 2020-07-02 2020-07-02 Diode glass bulb defect detection method Pending CN112200762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010633906.2A CN112200762A (en) 2020-07-02 2020-07-02 Diode glass bulb defect detection method

Publications (1)

Publication Number Publication Date
CN112200762A true CN112200762A (en) 2021-01-08

Family

ID=74006519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010633906.2A Pending CN112200762A (en) 2020-07-02 2020-07-02 Diode glass bulb defect detection method

Country Status (1)

Country Link
CN (1) CN112200762A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961235A (en) * 2018-06-29 2018-12-07 山东大学 A kind of disordered insulator recognition methods based on YOLOv3 network and particle filter algorithm
CN109636772A (en) * 2018-10-25 2019-04-16 同济大学 The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning
CN110310261A (en) * 2019-06-19 2019-10-08 河南辉煌科技股份有限公司 A kind of Contact Net's Suspension Chord defects detection model training method and defect inspection method
CN110838112A (en) * 2019-11-08 2020-02-25 上海电机学院 Insulator defect detection method based on Hough transform and YOLOv3 network
WO2020068784A1 (en) * 2018-09-24 2020-04-02 Schlumberger Technology Corporation Active learning framework for machine-assisted tasks
US20200134810A1 (en) * 2018-10-26 2020-04-30 Taiwan Semiconductor Manufacturing Company Ltd. Method and system for scanning wafer
CN111292305A (en) * 2020-01-22 2020-06-16 重庆大学 Improved YOLO-V3 metal processing surface defect detection method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FEI ZHOU et al.: "A Generic Automated Surface Defect Detection Based on a Bilinear Model", Applied Sciences *
ZHANG Guangshi et al.: "Gear Defect Detection Based on Improved YOLOv3 Network", Laser & Optoelectronics Progress *
NIU Qian: "Research on Surface Defect Detection Technology for Diode Glass Bulbs", China Masters' Theses Full-text Database, Information Science and Technology Series *
GUO Yiqiang: "Research on Visual Inspection of Wafer Surface Defects", China Masters' Theses Full-text Database, Information Science and Technology Series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113504238A (en) * 2021-06-04 2021-10-15 广东华中科技大学工业技术研究院 Glass surface defect collecting device and detection method
CN113504238B (en) * 2021-06-04 2023-12-22 广东华中科技大学工业技术研究院 Glass surface defect acquisition device and detection method

Similar Documents

Publication Publication Date Title
CN109977808B (en) Wafer surface defect mode detection and analysis method
CN110310259B (en) Improved YOLOv3 algorithm-based knot defect detection method
WO2020177432A1 (en) Multi-tag object detection method and system based on target detection network, and apparatuses
CN109509187B (en) Efficient inspection algorithm for small defects in large-resolution cloth images
WO2022236876A1 (en) Cellophane defect recognition method, system and apparatus, and storage medium
CN110555842A (en) Silicon wafer image defect detection method based on anchor point set optimization
CN108038846A (en) Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks
CN111429418A (en) Industrial part detection method based on YOLOv3 neural network
CN111754498A (en) Conveyor belt carrier roller detection method based on YOLOv3
CN109544522A (en) A kind of Surface Defects in Steel Plate detection method and system
CN113643228B (en) Nuclear power station equipment surface defect detection method based on improved CenterNet network
CN110929795B (en) Method for quickly identifying and positioning welding spot of high-speed wire welding machine
CN111652853A (en) Magnetic powder flaw detection method based on deep convolutional neural network
CN113222982A (en) Wafer surface defect detection method and system based on improved YOLO network
CN110610210B (en) Multi-target detection method
CN114581782B (en) Fine defect detection method based on coarse-to-fine detection strategy
CN111860106B (en) Unsupervised bridge crack identification method
CN111931800B (en) Tunnel surface defect classification method based on deep convolutional neural network
CN111127417B (en) Printing defect detection method based on SIFT feature matching and SSD algorithm improvement
CN112233090A (en) Film flaw detection method based on improved attention mechanism
CN112508857B (en) Aluminum product surface defect detection method based on improved Cascade R-CNN
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN113221956A (en) Target identification method and device based on improved multi-scale depth model
CN112200762A (en) Diode glass bulb defect detection method
CN111597939A (en) High-speed rail line nest defect detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210108)