CN111291830A - Method for improving glass surface defect detection efficiency and accuracy - Google Patents
- Publication number
- CN111291830A CN111291830A CN202010144610.4A CN202010144610A CN111291830A CN 111291830 A CN111291830 A CN 111291830A CN 202010144610 A CN202010144610 A CN 202010144610A CN 111291830 A CN111291830 A CN 111291830A
- Authority
- CN
- China
- Prior art keywords
- accuracy
- network
- faster
- image
- rcnn
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of target detection and identification, and discloses a method for improving the detection efficiency and accuracy of glass surface defects, comprising the following steps: extracting defect samples, inputting them into the faster-rcnn, ssd and Yolov3 target recognition networks for training and learning, and saving the learned models; performing image detection and recognition with the trained faster-rcnn, ssd and Yolov3 models to obtain each network's image detection accuracy; comparing the three accuracies, assigning weights in descending order of accuracy, and combining the faster-rcnn, ssd and Yolov3 networks into a combined classifier whose training yields a comprehensive accuracy, recorded as accuracy1; combining the trained faster-rcnn, ssd and Yolov3 models with dynamic weights; and collecting a sample image, inputting it into the dynamically weighted combined model, and outputting the positions and categories of defects on the sample image. The method helps improve detection efficiency and accuracy.
Description
Technical Field
The invention belongs to the field of target detection and identification, and particularly relates to a method for improving the detection efficiency and accuracy of glass surface defects.
Background
For example, Chinese patent publication No. CN107123111A discloses a method for constructing a deep residual network for mobile phone screen defect detection. Images containing defects and normal images are collected and labeled, and a custom deep residual network is trained on this data until it converges with high accuracy. A shallower network model is then generated by randomly removing residual modules of the deep residual network with a certain probability, and this operation is repeated to produce several network models of different depths. Mobile phone screen pictures taken by a high-resolution camera are scaled at different ratios to form an image pyramid; at each scale the picture is divided into small blocks with a certain overlap, and all blocks are fed as a group into the network models of different depths. The feature map output by each network model is taken as a defect response map, the defective regions of the screen are located by threshold segmentation, and finally the detection results of the models at different depths are superimposed to obtain the final result. However, this method does little to improve detection efficiency.
Disclosure of Invention
The invention aims to provide a detection method capable of improving efficiency and accuracy.
In order to solve the problems, the method for improving the efficiency and the accuracy of detecting the defects on the surface of the glass comprises the following steps:
step one: extracting a defect sample, inputting it into the faster-rcnn target recognition network for training and learning, and storing the learned model;
step two: extracting a defect sample, inputting the defect sample into the ssd target detection network for training and learning, and storing a learned model;
step three: extracting a defect sample, inputting it into the Yolov3 target detection network for training and learning, and storing the learned model;
step four: performing image detection and identification with the trained faster-rcnn model to obtain the image detection accuracy of the faster-rcnn target identification network;
step five: performing image detection and identification with the trained ssd model to obtain the image detection accuracy of the ssd target detection network;
step six: performing image detection and identification with the trained Yolov3 model to obtain the image detection accuracy of the Yolov3 target detection network;
step seven: comparing the image detection accuracies of the faster-rcnn target identification network, the ssd target detection network and the Yolov3 target detection network, and assigning weights in descending order of accuracy, marked in turn as w1, w2 and w3;
step eight: the 1st time, combining the faster-rcnn target identification network, the ssd target detection network and the Yolov3 target detection network with weights w1:w2:w3 = 1:1:1 to obtain a combined classifier, and training it to obtain a comprehensive accuracy, recorded as accuracy1;
step nine: the nth time (n ≥ 2), combining the faster-rcnn target identification network, the ssd target detection network and the Yolov3 target detection network with weights w1 = n/(n+1), w2 = (2/3)(1 − n/(n+1)) and w3 = (1/3)(1 − n/(n+1)) to obtain a combined classifier, and training it to obtain the comprehensive accuracy, recorded as accuracy(n);
Step ten: recording ideal accuracy as p, wherein p is one of accuracy (n), when | accuracy (n) -p | < epsilon, accuracy (n) converges on p, recording the weight ratio w1, w2 and w3 of the time as an optimal weight ratio, and combining the fster-rcnn target identification network, the ssd target detection network and the Yolov3 target detection network according to the optimal weight ratio to obtain an optimal network;
step eleven: combining the trained faster-rcnn model, the trained ssd model and the trained Yolov3 model with the dynamic weights;
step twelve: collecting a sample image, inputting it into the dynamically weighted combined model, and outputting the positions and categories of defects on the sample image.
Further, the surface defect of the defect sample is a scratch, a chipped edge, an air bubble or a stain.
Further, in step twelve, the sample image is subjected to denoising processing by a residual error method.
Further, in step twelve, the sample image is subjected to median filtering denoising processing.
The recognition and classification algorithm combines the three most commonly used deep learning target recognition algorithms with dynamic weights, exploiting the advantages of all three to improve detection precision and speed.
Drawings
Fig. 1 shows the background image src1 used in the residual method.
Fig. 2 shows the image under test src2 used in the residual method.
Fig. 3 shows the image dst denoised by the residual method.
Fig. 4 is the convergence plot of the comprehensive accuracy.
Detailed Description
The method and the algorithm are as follows:
as shown in fig. 1-3, the surface defect detection process:
First, images are acquired using a camera, video camera or similar device.
Second, image denoising.
(1) Reducing external dust interference with the residual method. This is realized by first collecting a background image under a given environment, then collecting a sample image under the same environment, registering the two images, and computing a pixel-by-pixel difference to weaken the interference of external dust.
Image segmentation by the residual method, for reference:
Residual-method denoising principle: under the same environment, a background image src1 and an image under test src2 are acquired one after the other. Because src1 and src2 are captured under identical conditions, the background of src1 and the background regions of src2 carry the same noise and the same dust speckles. Taking the pixel-by-pixel difference of src2 and src1 (in OpenCV, subtract(src2, src1, dst, Mat(), -1)) cancels the noise that src2 shares with src1; the resulting residual image dst is the image with dust noise removed, providing a higher-contrast source image for later target recognition.
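The pixel-by-pixel residual described above can be sketched in Python. This is a simplified stand-in for the OpenCV subtract() call, operating on 2-D lists of 8-bit intensities and clamping negative differences to 0, as OpenCV does for unsigned images:

```python
def residual_denoise(src2, src1):
    """Saturated pixel-wise difference dst = src2 - src1.

    src1: background image, src2: image under test, both 2-D lists
    of ints in 0..255. Noise that appears identically in both images
    cancels out; negative results are clamped to 0 (saturation, as
    with OpenCV's subtract() on 8-bit images).
    """
    return [[max(p2 - p1, 0) for p2, p1 in zip(row2, row1)]
            for row2, row1 in zip(src2, src1)]
```

A real pipeline would call cv2.subtract on numpy arrays; the clamp-at-zero behaviour is the same.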
(2) Because the external light source may illuminate the scene unevenly, the camera may introduce impulse noise, salt-and-pepper noise and the like when acquiring images. Median filtering can effectively remove such image noise while preserving edge details. (Several denoising algorithms commonly used in image-processing references (OpenCV 3) were compared experimentally, and the median filtering algorithm was finally selected.)
Median filtering: median filtering is a typical nonlinear filtering technique. Its basic principle is to replace the value of a point in a digital image or digital sequence with the median of the values in that point's neighborhood, which pulls the surrounding pixel values toward the true values and eliminates isolated noise points. Because it does not depend on neighborhood values that differ greatly from typical values, it is particularly effective at removing impulse noise and salt-and-pepper noise. A simple example: for the one-dimensional sequence {0, 3, 4, 0, 7}, sorting gives {0, 0, 3, 4, 7}, so the median is 3.
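The 1-D version of this filter can be sketched as follows; this is a minimal illustration (border windows are simply clipped to the available samples rather than padded):

```python
from statistics import median


def median_filter_1d(seq, k=5):
    """Replace each value by the median of its size-k neighborhood.

    k should be odd; at the borders the window is clipped, so border
    medians may come from smaller (possibly even-sized) windows.
    """
    half = k // 2
    return [median(seq[max(0, i - half):i + half + 1])
            for i in range(len(seq))]
```

For the sequence {0, 3, 4, 0, 7} above, the full window at the center position is {0, 3, 4, 0, 7}, whose median is 3, matching the example.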
Target identification and classification:
Referring to the three deep learning papers on faster-rcnn, ssd and yolov3, combining the three networks with dynamic weights exploits the advantages of all three to improve detection accuracy and speed.
faster-rcnn target identification network: defect samples (scratches, chipped edges, bubbles, stains and the like) are extracted, 2000 samples of each defect type are input into the network for training and learning, and the learned model is saved.
ssd target detection network: the ssd network is trained using the same samples as above, and the learned model is saved.
Yolov3 target detection network: the yolov3 network was trained using the same samples and the learned model was saved.
The three trained target recognition models are combined with dynamic weights; an acquired image is input for testing, and the positions and categories of the defects on the sample under test are output accurately, which completes the identification and classification for glass surface defect detection.
Dynamic weight combination: the recognition and classification algorithm combines the three most commonly used deep learning target recognition algorithms with dynamic weights, exploiting the advantages of all three to improve detection precision and speed.
The basic idea is as follows. The three target recognition algorithms are trained separately, the recognition accuracy of each is tested, and the three accuracies are ranked (suppose faster-rcnn > ssd > yolov3). Initially, equal weights w1 = w2 = w3 = 1/3 are assigned to faster-rcnn, ssd and yolov3, and the combined classifier is trained to obtain a comprehensive accuracy, accuracy1. The weight of the most accurate algorithm is then raised to w1 = 2/3, with the lowest-accuracy weight w3 (yolov3) and the middle weight w2 (ssd) in the ratio w3:w2 = 1:2 (i.e. w2 = 2/9, w3 = 1/9); the three algorithms are combined with the new weights and trained to obtain a new comprehensive accuracy, accuracy2, where accuracy2 > accuracy1. Next, w1 = 3/4 with w2:w3 = 2:1 gives accuracy3. By analogy, w1 is raised successively to 4/5, 5/6, 6/7, 7/8, 8/9, ..., the remaining weight is always split between w2 and w3 in the ratio 2:1, and new accuracies accuracy4, accuracy5, accuracy6, accuracy7, ... are obtained. Finally, with w1 on the x axis and accuracy on the y axis, the accuracies form a convergence curve, as shown in Fig. 4; the weight assignment at the convergence point yields the highest accuracy, and combining the three target recognition networks with these weights gives the optimal network.
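Once the weights are fixed, the per-image fusion of the three detectors' class decisions could look like the following schematic weighted vote. The score dictionaries and labels are hypothetical stand-ins for faster-rcnn, ssd and yolov3 outputs, not the patent's actual interface:

```python
def weighted_vote(predictions, weights):
    """Fuse class scores from several detectors by weighted voting.

    predictions: one dict per network mapping class label -> confidence.
    weights: matching per-network weights (e.g. the converged w1, w2, w3).
    Returns the label with the highest total weighted score.
    """
    totals = {}
    for pred, w in zip(predictions, weights):
        for label, score in pred.items():
            totals[label] = totals.get(label, 0.0) + w * score
    return max(totals, key=totals.get)
```

A fuller implementation would also fuse the bounding boxes of the three detectors (e.g. by weighted averaging of overlapping boxes), since the method outputs defect positions as well as categories.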
The method overcomes the problems of traditional surface defect detection and, by combining the advantages of the three algorithms, helps comprehensively improve the efficiency and accuracy of glass surface defect detection.
The foregoing is a detailed description of the invention in connection with specific preferred embodiments, and the invention is not limited to these specific details. Those skilled in the art may make equivalent substitutions or obvious modifications without departing from the spirit of the invention, all of which are considered to fall within the scope of the invention.
Claims (4)
1. A method for improving the detection efficiency and accuracy of glass surface defects is characterized by comprising the following steps:
step one: extracting a defect sample, inputting it into the faster-rcnn target recognition network for training and learning, and storing the learned model;
step two: extracting a defect sample, inputting the defect sample into the ssd target detection network for training and learning, and storing a learned model;
step three: extracting a defect sample, inputting it into the Yolov3 target detection network for training and learning, and storing the learned model;
step four: performing image detection and identification with the trained faster-rcnn model to obtain the image detection accuracy of the faster-rcnn target identification network;
step five: performing image detection and identification with the trained ssd model to obtain the image detection accuracy of the ssd target detection network;
step six: performing image detection and identification with the trained Yolov3 model to obtain the image detection accuracy of the Yolov3 target detection network;
step seven: comparing the image detection accuracies of the faster-rcnn target identification network, the ssd target detection network and the Yolov3 target detection network, and assigning weights in descending order of accuracy, marked in turn as w1, w2 and w3;
step eight: the 1st time, combining the faster-rcnn target identification network, the ssd target detection network and the Yolov3 target detection network with weights w1:w2:w3 = 1:1:1 to obtain a combined classifier, and training it to obtain a comprehensive accuracy, recorded as accuracy1;
step nine: the nth time (n ≥ 2), combining the faster-rcnn target identification network, the ssd target detection network and the Yolov3 target detection network with weights w1 = n/(n+1), w2 = (2/3)(1 − n/(n+1)) and w3 = (1/3)(1 − n/(n+1)) to obtain a combined classifier, and training it to obtain the comprehensive accuracy, recorded as accuracy(n);
step ten: recording the ideal accuracy as p, where p is one of the accuracy(n) values; when |accuracy(n) − p| < ε, accuracy(n) has converged to p; the weight ratio w1:w2:w3 of that round is recorded as the optimal weight ratio, and the faster-rcnn target identification network, the ssd target detection network and the Yolov3 target detection network are combined according to the optimal weight ratio to obtain the optimal network;
step eleven: combining the trained faster-rcnn model, the trained ssd model and the trained Yolov3 model with the dynamic weights;
step twelve: collecting a sample image, inputting it into the dynamically weighted combined model, and outputting the positions and categories of defects on the sample image.
2. The method according to claim 1, wherein the surface defect of the defect sample is a scratch, a chipped edge, an air bubble or a stain.
3. The method of claim 2, wherein in step twelve, the sample image is denoised by a residual method.
4. The method for improving the efficiency and accuracy of detecting defects on a glass surface as claimed in claim 2 or 3, wherein in step twelve, the sample image is subjected to median filtering denoising.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010144610.4A CN111291830B (en) | 2020-03-04 | 2020-03-04 | Method for improving glass surface defect detection efficiency and accuracy |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111291830A true CN111291830A (en) | 2020-06-16 |
CN111291830B CN111291830B (en) | 2023-03-03 |
Family
ID=71022529
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010144610.4A Active CN111291830B (en) | 2020-03-04 | 2020-03-04 | Method for improving glass surface defect detection efficiency and accuracy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111291830B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107123111A (en) * | 2017-04-14 | 2017-09-01 | 浙江大学 | A kind of depth residual error net structure method for mobile phone screen defects detection |
CN108765391A (en) * | 2018-05-19 | 2018-11-06 | 科立视材料科技有限公司 | A kind of plate glass foreign matter image analysis methods based on deep learning |
US20180342050A1 (en) * | 2016-04-28 | 2018-11-29 | Yougetitback Limited | System and method for detection of mobile device fault conditions |
CN108918527A (en) * | 2018-05-15 | 2018-11-30 | 佛山市南海区广工大数控装备协同创新研究院 | A kind of printed matter defect inspection method based on deep learning |
CN110728657A (en) * | 2019-09-10 | 2020-01-24 | 江苏理工学院 | Annular bearing outer surface defect detection method based on deep learning |
Non-Patent Citations (1)
Title |
---|
张丹丹 (ZHANG Dandan): "Research on glass defect recognition method based on improved convolutional neural network", China Master's Theses Full-text Database, Engineering Science and Technology I * |
Also Published As
Publication number | Publication date |
---|---|
CN111291830B (en) | 2023-03-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113450307B (en) | Product edge defect detection method | |
CN109509187B (en) | Efficient inspection algorithm for small defects in large-resolution cloth images | |
CN109598287B (en) | Appearance flaw detection method for resisting network sample generation based on deep convolution generation | |
CN112819772B (en) | High-precision rapid pattern detection and recognition method | |
CN109767422A (en) | Pipe detection recognition methods, storage medium and robot based on deep learning | |
CN115063409B (en) | Method and system for detecting surface material of mechanical cutter | |
CN104881662A (en) | Single-image pedestrian detection method | |
CN110245697B (en) | Surface contamination detection method, terminal device and storage medium | |
CN114627383B (en) | Small sample defect detection method based on metric learning | |
CN112614062A (en) | Bacterial colony counting method and device and computer storage medium | |
CN114120317B (en) | Optical element surface damage identification method based on deep learning and image processing | |
CN101908205B (en) | Magic square coding-based median filter method | |
CN115063620B (en) | Bit layering based Roots blower bearing wear detection method | |
CN111612759B (en) | Printed matter defect identification method based on deep convolution generation type countermeasure network | |
CN109118434A (en) | A kind of image pre-processing method | |
CN113870202A (en) | Far-end chip defect detection system based on deep learning technology | |
CN114495098A (en) | Diaxing algae cell statistical method and system based on microscope image | |
CN115731198A (en) | Intelligent detection system for leather surface defects | |
CN113076860B (en) | Bird detection system under field scene | |
CN113673396A (en) | Spore germination rate calculation method and device and storage medium | |
CN111291830B (en) | Method for improving glass surface defect detection efficiency and accuracy | |
CN112308087A (en) | Integrated imaging identification system and method based on dynamic vision sensor | |
CN116958073A (en) | Small sample steel defect detection method based on attention feature pyramid mechanism | |
CN108960285B (en) | Classification model generation method, tongue image classification method and tongue image classification device | |
CN109934817A (en) | The external contouring deformity detection method of one seed pod |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||