CN109685030A - Method for detecting and classifying mug rim defects based on a convolutional neural network - Google Patents
Method for detecting and classifying mug rim defects based on a convolutional neural network Download PDF Info
- Publication number
- CN109685030A (application number CN201811631744.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- neural networks
- convolutional neural
- cup
- mug
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/32—Normalisation of the pattern dimensions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
Abstract
The invention discloses a method for detecting and classifying mug rim defects based on a convolutional neural network, comprising the following steps: A, acquiring mug rim image information with an image acquisition system; B, removing the noise of the acquired images with OpenCV and expanding the data set; C, annotating defects in the mug rim images with LabelImg and uniformly formatting the annotated training-set images to a fixed size: 2M*2M; D, training the convolutional neural network to be trained on the formatted training set; E, extracting image features from the annotated images with the trained convolutional neural network model; F, generating equal numbers of positive-sample and negative-sample candidate boxes from the image features with a region proposal network; G, classifying the detection targets in the proposed regions. The system thereby completes the detection and classification of mug rim defects. The invention can be applied effectively to mug rim defect detection and classification; it raises the degree of automation and the efficiency of inspection, reduces the influence of human factors on the inspection process, and lowers the labor intensity of workers.
Description
Technical field
The present invention relates to the field of rim defect detection and classification, and in particular to a method for detecting and classifying mug rim defects based on a convolutional neural network.
Background art
With the continuous development of Chinese manufacturing and the continuous improvement of people's living standards, the mug has become a part of daily life. Mug rim defects mainly refer to spots, notches, and scratches on the rim; such defects directly affect product sales and corporate image, so adopting a suitable defect detection and classification method is particularly important. Traditional manual inspection suffers from low detection efficiency, high labor intensity, and low precision. Other researchers have combined computer vision with image processing and detected mug defects by defect comparison, but such defect-image recognition algorithms must be constructed manually: the main features of the target are selected by hand and a suitable classifier is chosen for recognition, which is quite limiting. For example, in candidate-region discrimination the segmented candidate regions are screened according to shape features, gray-level features, and Hu invariant-moment features; a person must design the main features of the defects. The problem is that hand-designed features are not robust to the diversity of defects: they apply only to specific defect types and can hardly adapt to the automatic recognition and localization of images in which the defect regions vary in size, vary widely in shape, and have complex backgrounds.
In recent years, with the appearance of labeled training sets containing millions of images and of GPU-based training algorithms, training complex convolutional network models is no longer out of reach. Compared with traditional hand-crafted feature extraction, a convolutional neural network not only learns the features of the target automatically and suits the processing of large data sets, but can also learn end to end; the vast majority of the prediction is carried out on the GPU, which greatly improves the speed and accuracy of target detection.
Based on the above, introducing convolutional neural networks into mug rim defect detection and classification is therefore a feasible scheme: compared with defect comparison it improves the speed and accuracy of target detection, and compared with traditional manual inspection it raises the degree of automation and the efficiency of inspection, reduces the influence of human factors on the inspection process, and lowers the labor intensity of workers.
Summary of the invention
The object of the present invention is to design a method for detecting and classifying mug rim defects based on a convolutional neural network, so as to solve the problems raised in the background art above.
To achieve the above object, the invention provides the following technical scheme: a method for detecting and classifying mug rim defects based on a convolutional neural network, comprising the following steps:
A. acquiring mug rim image information with an image acquisition system;
B. removing the noise of the acquired images with OpenCV and expanding the data set;
C. annotating defects in the mug rim images with labelImg and uniformly formatting the annotated training-set images to a fixed size: 2M*2M;
D. training the convolutional neural network to be trained on the formatted training set;
E. extracting image features from the annotated images with the trained convolutional neural network model;
F. generating equal numbers of positive-sample and negative-sample candidate boxes from the image features with a region proposal network;
G. classifying the detection targets in the proposed regions.
Preferably, the weight update while training the network in step D comprises the following parts:
a. weight decay is added to the conventional Adam method;
b. the weight decay is not added to the loss function and does not take part in the gradient computation; instead, an additional weight-decay step is performed at each parameter update;
c. the parameter update step with weight decay is:
θ_t = θ_{t−1} − α · ( m̂_t / (√v̂_t + ε) + λ · θ_{t−1} )
where α is the learning rate, λ is the weight-decay coefficient, λ·θ_{t−1} is the additional weight-decay term, ε is a small constant for numerical stability (default 10⁻⁸), m̂_t is the bias-corrected first-moment estimate, and v̂_t is the bias-corrected second-moment estimate; β₁ and β₂ lie in the interval [0, 1), with suggested defaults of 0.9 and 0.999 respectively.
Preferably, generating the positive- and negative-sample boxes in step F comprises the following parts:
a. an n×n sliding window (default 3×3) is generated on the feature map;
b. while each sliding window is generated, k region-proposal candidate boxes (default k = 9) are predicted at the same time;
c. under the ZF network, each sliding window is mapped to a 256-dimensional feature vector, which is fed into two parallel fully connected layers: a box-classification layer and a box-regression layer. The classification layer outputs two scores for each candidate box, namely the probabilities that the box corresponds to a target and to a non-target; the regression layer outputs four regression parameters, representing the center coordinates, width, and height of the box, which serve as a correction to the sliding window and yield an initially corrected target box;
d. candidate boxes whose intersection over union (IoU) with a ground-truth box is greater than 0.7 are taken as positive samples, candidate boxes whose IoU with a ground-truth box is less than 0.3 are taken as negative samples, and the remaining candidate boxes are discarded.
Preferably, classifying the detection targets in the proposed regions in step G comprises the following parts:
a. the positive and negative candidate boxes generated in step F are mapped onto the final feature map extracted by the convolutional neural network;
b. each mapped region is uniformly formatted to n×n by the ROI pooling layer (default n = 7);
c. the network judges the target category with a softmax classifier applied to the obtained proposal-region features, and the region candidate boxes belonging to a category are adjusted a second time with the regression layer of the network, further correcting their positions;
d. the final classification results are obtained.
Compared with the prior art, the beneficial effects of the present invention are:
(1) the mug defect detection and classification method provided by the invention effectively improves the degree of automation, the accuracy, and the efficiency of detection and classification;
(2) the detection method accurately reflects the type of mug defect, which permits a relatively accurate assessment of the rim defects of a mug and helps the manufacturer improve its process according to the defect type, raising production efficiency.
Detailed description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 is the workflow diagram of the ROI pooling layer.
Fig. 3 is the overall network structure diagram of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by a person of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Figs. 1, 2, and 3, the present invention provides a system scheme: a method for detecting and classifying mug rim defects based on a convolutional neural network. The invention is described in further detail below with reference to the drawings and a specific embodiment, where Fig. 1 is the overall flow chart of the system, Fig. 2 is the workflow diagram of the ROI pooling layer, and Fig. 3 is the overall network structure chart of the invention:
The method comprises the following steps:
A. acquiring mug rim image information with an image acquisition system;
B. removing the noise of the acquired images with OpenCV and expanding the data set;
C. annotating defects in the mug rim images with labelImg and uniformly formatting the annotated training-set images to a fixed size: 2M*2M;
D. training the convolutional neural network to be trained on the formatted training set;
E. extracting image features from the annotated images with the trained convolutional neural network model;
F. generating equal numbers of positive-sample and negative-sample candidate boxes from the image features with a region proposal network;
G. classifying the detection targets in the proposed regions.
In the present invention, the image acquisition system in step A operates as follows:
a. a CCD camera is fixed on the operating platform;
b. after its position has been suitably adjusted, the CCD camera captures the rim image of the mug;
c. the image is acquired and saved by the host computer for subsequent operations.
The invention uses a CCD camera to replace the human eye of traditional manual inspection. Fixing the camera on the operating platform guarantees the stability of detection and improves the degree of automation of the detection process, while reducing errors introduced by human factors.
In the present invention, image denoising and data-set expansion in step B comprise the following steps:
a. install OpenCV 3.3.0 and Visual Studio 2013 and configure the environment;
b. filter the acquired images with the medianBlur function of OpenCV;
c. expand the data set with OpenCV functions such as resize and flip.
The invention denoises with the medianBlur function of OpenCV, whose denoising effect suits this method better than other traditional denoising approaches; expanding the data set by geometric image transformations makes the trained convolutional neural network perform better and increases the accuracy of the results.
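For illustration, the effect of step b (a 3×3 median blur, as cv2.medianBlur(img, 3) computes) and step c (geometric expansion) can be reproduced in plain NumPy; the function names here are ours, not the patent's:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: isolated salt-and-pepper pixels are replaced
    by the median of their neighborhood (edge rows/cols are padded)."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

def expand_dataset(img):
    """Simple geometric augmentations analogous to cv2.flip / cv2.rotate."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]
```

A single bright noise pixel on a dark background is removed by the filter, which is exactly the kind of sensor noise the acquisition step produces.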
In the present invention, defect annotation in step C comprises the following steps:
a. install Python 2.7.17 and labelImg and configure the environment;
b. open predefined_classes.txt in the source-code folder with Notepad++ and modify the default categories;
c. open the photo folder with "Open Dir", select the pictures one by one, draw bounding boxes with "Create RectBox", and mark the corresponding defect category;
d. uniformly format the annotated training-set images to a fixed size: 2M*2M.
The labelImg tool used by the invention is a convenient picture-annotation tool with which a specified region can be marked with a specified defect category, which facilitates the training of the convolutional neural network.
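labelImg saves each annotation as a Pascal VOC XML file. A minimal reader is sketched below; it is illustrative, and the class name "scratch" in the sample is a hypothetical example, not a category defined by the patent:

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    """Return (class name, (xmin, ymin, xmax, ymax)) for every labeled
    bounding box in a labelImg / Pascal VOC annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter('object'):
        bb = obj.find('bndbox')
        boxes.append((obj.findtext('name'),
                      tuple(int(bb.findtext(k))
                            for k in ('xmin', 'ymin', 'xmax', 'ymax'))))
    return boxes

# Hypothetical annotation as labelImg would write it
sample = ("<annotation><object><name>scratch</name>"
          "<bndbox><xmin>10</xmin><ymin>20</ymin>"
          "<xmax>60</xmax><ymax>90</ymax></bndbox>"
          "</object></annotation>")
```

These parsed boxes are what the training step consumes as ground truth.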
In the present invention, training the convolutional neural network in step D comprises the following steps:
a. select the convolutional neural network structure and determine the initial parameters;
b. each training iteration consists of a forward pass and an error back-propagation pass over each mini-batch of data in turn; after all the data have been back-propagated, the weight parameters are updated in a unified step and used for the next iteration; when the expected number of iterations is reached, iteration stops and the final accuracy of the network is computed;
c. the network structure and the relevant parameters are adjusted continually according to the training, until the training accuracy meets the requirement;
d. the trained convolutional neural network model, the weight parameters, and related data are used for the subsequent operations.
The convolutional neural network used by the invention must first be trained on the data set annotated in step C; the trained network model can then be used to detect and classify the acquired images. This scheme improves the degree of intelligence of the detection while reducing the labor intensity of workers.
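The update scheme in step b — back-propagate each mini-batch in turn, then apply one unified weight update per pass over the data — can be illustrated with a toy logistic-regression model. This is our own construction for demonstration, not the patent's network:

```python
import numpy as np

def train(X, y, epochs=200, lr=0.5, batch=4):
    """Gradients are accumulated mini-batch by mini-batch (forward +
    backward), and the weights receive one unified update per full pass,
    mirroring the procedure in step b."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for s in range(0, len(X), batch):
            xb, yb = X[s:s + batch], y[s:s + batch]
            p = 1.0 / (1.0 + np.exp(-(xb @ w)))   # forward pass
            grad += xb.T @ (p - yb) / len(X)      # accumulated backward pass
        w -= lr * grad                            # unified weight update
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == (y > 0.5)))
```

On separable data this loop reaches high training accuracy, after which (per step c) the structure and parameters would be tuned until the required accuracy is met.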
In the present invention, image feature extraction in step E comprises the following steps:
a. the convolutional layer perceives the image on the principle of local receptive fields and obtains the image features as a feature map for subsequent operations;
b. the pooling layer performs a second extraction on the feature map on the principle of local correlation in images, further reducing the number of parameters needed for training and the risk of overfitting while retaining the image features as far as possible.
In the image-feature-extraction network used by the invention, the convolutional layer obtains features that agree better with the real image and reduces the complexity of the network, preventing overfitting; the pooling layer performs a second extraction on the feature map produced by the convolutional layer, reducing the risk of overfitting.
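Steps a and b above can be sketched directly: a "valid" convolution over local receptive fields, followed by max pooling over local blocks. An illustrative single-channel NumPy version (not the patent's actual layers):

```python
import numpy as np

def conv2d(img, kernel):
    """Convolutional layer: each output value sees only a local
    receptive field of the input (valid cross-correlation)."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Pooling layer: keep the strongest local response, shrinking the
    feature map and hence the number of downstream parameters."""
    h, w = fmap.shape
    trimmed = fmap[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

The pooling step halves each spatial dimension, which is the parameter reduction step b relies on.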
In the present invention, generating the candidate boxes in step F comprises the following steps:
a. an n×n sliding window (default 3×3) is generated on the feature map;
b. while each sliding window is generated, k region-proposal candidate boxes (default k = 9) are predicted at the same time;
c. under the ZF network, each sliding window is mapped to a 256-dimensional feature vector, which is fed into two parallel fully connected layers: a box-classification layer and a box-regression layer. The classification layer outputs two scores for each candidate box, namely the probabilities that the box corresponds to a target and to a non-target; the regression layer outputs four regression parameters, representing the center coordinates, width, and height of the box, which serve as a correction to the sliding window and yield an initially corrected target box;
d. candidate boxes whose intersection over union (IoU) with a ground-truth box is greater than 0.7 are taken as positive samples, candidate boxes whose IoU with a ground-truth box is less than 0.3 are taken as negative samples, and the remaining candidate boxes are discarded.
The region proposal network used by the invention generates the positive- and negative-sample boxes faster than previous candidate-box methods and is easy to combine with the subsequent target classification.
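The IoU thresholding in step d can be sketched as follows; the helper names are ours, while the 0.7 and 0.3 thresholds come from the text:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def label_candidates(candidates, gt_boxes, hi=0.7, lo=0.3):
    """1 = positive sample (IoU > 0.7), 0 = negative sample (IoU < 0.3),
    -1 = discarded, matching step d above."""
    labels = []
    for c in candidates:
        best = max(iou(c, g) for g in gt_boxes)
        labels.append(1 if best > hi else (0 if best < lo else -1))
    return labels
```

A box overlapping a ground-truth defect completely is positive, a disjoint box is negative, and a half-overlapping box is discarded.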
In the present invention, target classification in step G comprises the following steps:
a. the positive and negative candidate boxes generated in step F are mapped onto the final feature map extracted by the convolutional neural network;
b. each mapped region is uniformly formatted to n×n by the ROI pooling layer (default n = 7);
c. the network judges the target category with a softmax classifier applied to the obtained proposal-region features, and the region candidate boxes belonging to a category are adjusted a second time with the regression layer of the network, further correcting their positions;
d. the final classification results are obtained.
The invention classifies the detection targets in the proposed regions and obtains the regression parameters while classifying, using them to correct the classification results and make them more accurate.
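Step b, the ROI pooling layer, divides each proposed region into an n×n grid (default n = 7 in the text) and max-pools each grid cell, so that regions of arbitrary size produce a fixed-size feature. A sketch under our own naming:

```python
import numpy as np

def roi_pool(fmap, roi, n=7):
    """Max-pool the region roi = (x1, y1, x2, y2) of the feature map into
    a fixed n x n output, whatever the region's original size (the region
    must be at least n x n so every grid cell is non-empty)."""
    x1, y1, x2, y2 = roi
    region = fmap[y1:y2, x1:x2]
    ys = np.linspace(0, region.shape[0], n + 1).astype(int)
    xs = np.linspace(0, region.shape[1], n + 1).astype(int)
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out
```

The fixed 7×7 output is what the fully connected classification and regression layers of step c can then consume.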
Claims (3)
1. A method for detecting and classifying mug rim defects based on a convolutional neural network, characterized in that: the network weights of the convolutional neural network are updated with an improved Adam method, and the learning rate is modified appropriately according to the gradient-descent behavior during the update process, so as to adapt to different gradients and improve training efficiency.
2. A method for detecting and classifying mug rim defects based on a convolutional neural network, characterized in that: small sliding windows are generated on the feature map while region-proposal candidate boxes are predicted at the same time; candidate boxes whose intersection over union (IoU) with a ground-truth box is greater than 0.7 are taken as positive samples, candidate boxes whose IoU with a ground-truth box is less than 0.3 are taken as negative samples, and the remaining candidate boxes are discarded.
3. A method for detecting and classifying mug rim defects based on a convolutional neural network, characterized in that: the target category of the proposed region features is judged with a softmax classifier, and the region candidate boxes belonging to a category are adjusted a second time with the regression layer of the network, further correcting their positions until the effect is best and the optimal classification is finally reached.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811631744.8A CN109685030A (en) | 2018-12-29 | 2018-12-29 | A kind of mug rim of a cup defects detection classification method based on convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109685030A true CN109685030A (en) | 2019-04-26 |
Family
ID=66190164
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811631744.8A Pending CN109685030A (en) | 2018-12-29 | 2018-12-29 | A kind of mug rim of a cup defects detection classification method based on convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109685030A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110764790A (en) * | 2019-10-18 | 2020-02-07 | 东北农业大学 | Data set marking method for deep learning |
CN111105411A (en) * | 2019-12-30 | 2020-05-05 | 创新奇智(青岛)科技有限公司 | Magnetic shoe surface defect detection method |
CN111768388A (en) * | 2020-07-01 | 2020-10-13 | 哈尔滨工业大学(深圳) | Product surface defect detection method and system based on positive sample reference |
CN112113978A (en) * | 2020-09-22 | 2020-12-22 | 成都国铁电气设备有限公司 | Vehicle-mounted tunnel defect online detection system and method based on deep learning |
CN112633327A (en) * | 2020-12-02 | 2021-04-09 | 西安电子科技大学 | Staged metal surface defect detection method, system, medium, equipment and application |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106886755A (en) * | 2017-01-19 | 2017-06-23 | 北京航空航天大学 | A kind of intersection vehicles system for detecting regulation violation based on Traffic Sign Recognition |
US20180322623A1 (en) * | 2017-05-08 | 2018-11-08 | Aquifi, Inc. | Systems and methods for inspection and defect detection using 3-d scanning |
CN108985337A (en) * | 2018-06-20 | 2018-12-11 | 中科院广州电子技术有限公司 | A kind of product surface scratch detection method based on picture depth study |
CN109035233A (en) * | 2018-07-24 | 2018-12-18 | 西安邮电大学 | Visual attention network and Surface Flaw Detection method |
CN109064454A (en) * | 2018-07-12 | 2018-12-21 | 上海蝶鱼智能科技有限公司 | Product defects detection method and system |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110764790A (en) * | 2019-10-18 | 2020-02-07 | 东北农业大学 | Data set marking method for deep learning |
CN111105411A (en) * | 2019-12-30 | 2020-05-05 | 创新奇智(青岛)科技有限公司 | Magnetic shoe surface defect detection method |
CN111105411B (en) * | 2019-12-30 | 2023-06-23 | 创新奇智(青岛)科技有限公司 | Magnetic shoe surface defect detection method |
CN111768388A (en) * | 2020-07-01 | 2020-10-13 | 哈尔滨工业大学(深圳) | Product surface defect detection method and system based on positive sample reference |
CN111768388B (en) * | 2020-07-01 | 2023-08-11 | 哈尔滨工业大学(深圳) | Product surface defect detection method and system based on positive sample reference |
CN112113978A (en) * | 2020-09-22 | 2020-12-22 | 成都国铁电气设备有限公司 | Vehicle-mounted tunnel defect online detection system and method based on deep learning |
CN112633327A (en) * | 2020-12-02 | 2021-04-09 | 西安电子科技大学 | Staged metal surface defect detection method, system, medium, equipment and application |
CN112633327B (en) * | 2020-12-02 | 2023-06-30 | 西安电子科技大学 | Staged metal surface defect detection method, system, medium, equipment and application |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190426 |