CN109559298A - Emulsion pump defect detection method based on deep learning - Google Patents
- Publication number
- CN109559298A (Application CN201811357765.5A)
- Authority
- CN
- China
- Prior art keywords
- training
- network
- layer
- sample
- pump
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
Abstract
The invention discloses an emulsion pump defect detection method based on deep learning, which builds a classification model for each viewing angle on the principles of transfer learning and convolutional neural networks in deep learning, in order to detect defect samples. First, a network model is pre-trained on the Mini-ImageNet dataset. The model structure is then adjusted and the pre-trained network parameters are loaded; the training and validation sets for each angle of the emulsion pump are passed through an image preprocessing algorithm and fed into the convolutional neural network for training, during which feature extraction and classification are performed automatically, and the network hyperparameters are tuned according to changes in validation-set accuracy during training to obtain the final network model. Finally, the preprocessed emulsion pump test samples are input into the trained model to evaluate its defect recognition performance. The method overcomes interference from rotation of the nozzle at the pump top and from randomly placed injection-molding points on the pump body, and accurately detects defective emulsion pump samples.
Description
Technical field
The present invention relates to a deep learning based method for detecting defects in emulsion pumps.
Background technique
Emulsion pumps are an important component of liquid-soap containers; demand is huge and the market extensive, so they must be produced in large volumes. However, problems such as surface contamination spots and inverted tail-pipe insertion arise during production, so strict surface inspection is required to guarantee quality and preserve commercial and use value. Traditional emulsion pump defect inspection is mostly manual, and manual inspection is affected by the inspector's concentration, mood, eye fatigue, differences in experience, working intensity, and other factors, which reduces the reliability of the results and ultimately lowers inspection efficiency, to the detriment of production. The surface defect detection algorithm designed here is applied in an emulsion pump defect detection system so that machine inspection replaces manual inspection, realizing automated inspection; it can effectively solve the above problems, improve product quality, and drive the technological transformation and upgrading of the enterprise.

At present, most industrial surface defect detection algorithms are based on image processing or feature extraction combined with machine-learning classification: representative image features are extracted by hand and fed into a machine-learning classifier to train a workpiece defect classification model. Emulsion pump defect detection, however, is a three-dimensional task: several cameras are needed to photograph each sample, each camera capturing a different angle, and whether a sample is defective is judged from the images collected by all cameras together. Because the shape and defect characteristics captured differ between cameras, a dedicated feature extraction and classification algorithm would have to be designed for the specific appearance at each shooting angle, which increases the complexity of the detection algorithm and degrades its detection rate. Moreover, the pump top exhibits rotational disturbance and the upper pump body carries randomly placed injection-molding points, both of which severely degrade the accuracy of such algorithms, so they are ill suited to detecting emulsion pump defect samples. Applying deep learning to industrial surface defect detection lets a deep network extract sample features automatically, capture high-level characteristics of the target, and feed them to upper network layers for classification, improving the comprehensiveness and robustness of feature extraction and hence the detection accuracy. Although most deep networks dispense with the laborious process of hand-crafted feature extraction, they require a large number of samples for training, whereas emulsion pumps have few defective samples and a highly imbalanced positive/negative sample distribution, which leads to parameter over-fitting during training and degrades defect detection performance.
Summary of the invention
To overcome the defects existing in the prior art, the present invention proposes a deep learning based emulsion pump defect detection method, the specific technical content of which is as follows:

A deep learning based emulsion pump defect detection method uses Digits as the algorithm development and detection platform and divides a single pump body into sample images of four angles: pump top, pump upper end, pump lower end, and tail pipe. Based on the principles of transfer learning and convolutional neural networks in deep learning, a classification model is built for each angle to detect defect samples. The method mainly comprises: pre-training a network model on the Mini-ImageNet dataset; adjusting the model structure and loading the parameters of the pre-trained network, passing the training and validation sets of each angle of the emulsion pump through an image preprocessing algorithm and feeding them into the convolutional neural network for training, during which feature extraction and classification are performed automatically, and tuning the network hyperparameters according to changes in validation-set accuracy during training to obtain the final network model; and inputting the preprocessed emulsion pump test samples into the trained model to evaluate the defect recognition performance of the final model.
The method specifically includes the following steps:

Step 1: construct the pre-training dataset.

Obtain the Mini-ImageNet data from the ImageNet dataset, selecting image data of 10 classes with 6000 samples per class: the training set contains 50000 images and the test set 10000 images.
Step 2: build the pre-training network structure with Caffe on the Digits platform and initialize the network weight and bias parameters. The pre-training network comprises, in order:

(1) Data, sample input layer: input pre-training sample size 227 × 227, 3 channels;

(2) Conv-11, first convolutional layer: kernel size 11 × 11, stride Stride 4, zero padding Pad 0; feature map size 54 × 54, feature map depth 96; ReLU activation;

(3) Pool-3, first pooling layer: kernel size 3 × 3, stride Stride 2, zero padding Pad 0; feature map size 26 × 26; pooling does not change the feature map depth;

(4) LRN, first lateral inhibition (local response normalization) layer: normalizes the feature maps of the previous layer;

(5) Conv-5, second convolutional layer: kernel size 5 × 5, stride Stride 1, zero padding Pad 2; feature map size 25 × 25, feature map depth 256; ReLU activation;

(6) Pool-3, second pooling layer: kernel size 3 × 3, stride Stride 2, zero padding Pad 0; feature map size 11 × 11;

(7) LRN, second lateral inhibition layer: normalizes the feature maps of the previous layer;

(8) Conv-3, third convolutional layer: kernel size 3 × 3, stride Stride 1, zero padding Pad 1; feature map size 10 × 10, feature map depth 384; ReLU activation;

(9) Conv-3, fourth convolutional layer: kernel size 3 × 3, stride Stride 1, zero padding Pad 1; feature map size 9 × 9, feature map depth 384; ReLU activation;

(10) Conv-3, fifth convolutional layer: kernel size 3 × 3, stride Stride 1, zero padding Pad 1; feature map size 8 × 8, feature map depth 256; ReLU activation;

(11) Pool-3, third pooling layer: kernel size 3 × 3, stride Stride 2, zero padding Pad 0; feature map size 3 × 3;

(12) Flatten layer: stretches the feature maps of the previous layer into a single column vector in preparation for classification;

(13) Fc, first fully connected layer: 4096 neurons; the Dropout parameter is set to 0.5, so each neuron in this layer is deactivated with probability 0.5;

(14) Fc, second fully connected layer: 4096 neurons; the Dropout parameter is set to 0.5, so each neuron in this layer is deactivated with probability 0.5;

(15) Softmax layer: the number of targets is set to the class count, 10.
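As a cross-check on the layer list above, the spatial sizes can be traced with the standard output-size formula. This is a minimal sketch in plain Python (not the Caffe/Digits toolchain used here); note that the floor-convention formula yields 55 for the first convolutional layer, and slightly different sizes thereafter, than some of the values listed in the text:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution or pooling layer
    (floor convention): (size + 2*pad - kernel) // stride + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Trace the pre-training network of step 2, starting from a 227 x 227 input.
size = 227
size = conv_out(size, 11, 4, 0)  # Conv-11, stride 4 -> 55
size = conv_out(size, 3, 2, 0)   # Pool-3, stride 2  -> 27
size = conv_out(size, 5, 1, 2)   # Conv-5, pad 2     -> 27
size = conv_out(size, 3, 2, 0)   # Pool-3            -> 13
size = conv_out(size, 3, 1, 1)   # Conv-3            -> 13
size = conv_out(size, 3, 1, 1)   # Conv-3            -> 13
size = conv_out(size, 3, 1, 1)   # Conv-3            -> 13
size = conv_out(size, 3, 2, 0)   # Pool-3            -> 6
```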
Step 3: set the hyperparameters of the pre-training network.

The training batch size Batchsize is the number of input samples used in a single network pass; this parameter reduces the computational and parameter-storage burden of training and is set to 128. Epoch controls the number of training cycles; one epoch is one full pass over the training set, and it is set to 100. To approach the minimum of the loss function, the experiment uses a staged learning rate: 0.001 for epochs 1-30, 0.0001 for epochs 30-60, and 0.00001 for epochs 60-100. Stochastic gradient descent (SGD) is chosen as the optimization algorithm.
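The staged schedule can be expressed as a simple epoch-to-rate function; the sketch below is illustrative (the function name is not from the patent):

```python
def staged_lr(epoch):
    """Staged learning rate for the pre-training run (step 3):
    0.001 for epochs 1-30, 0.0001 for 30-60, 0.00001 for 60-100."""
    if epoch <= 30:
        return 1e-3
    if epoch <= 60:
        return 1e-4
    return 1e-5
```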
Step 4: pre-train the network and store the updated weight and bias parameters.

Step 5: emulsion pump sample collection.

Image samples are taken of four angles: pump top, pump upper end, pump lower end, and tail pipe, with 1500 images acquired per angle: 750 for the training set, 250 for the validation set, and 500 for the test set; in each sample set the ratio of defect samples to normal samples is 1:1. The defects comprise dotted, linear, and blocky oil-stain defects on the pump top, pump upper end, and pump lower end, and the inverted-insertion defect of the tail pipe.

Step 6: manually label the pump top, pump upper end, pump lower end, and tail pipe samples, with 1 denoting a defect sample and 2 a normal sample.
Step 7: batch-preprocess the training and validation sets of the pump top with the image preprocessing algorithm.

The image preprocessing method is as follows:

First, manually crop the approximate image region, apply a grayscale transformation, and obtain the rough contour of the image with a low-pass filter.

Then, perform threshold segmentation on the image with the bimodal histogram method, setting the threshold to 15 for the pump top, 25 for the pump upper end, and 30 for the pump lower end; the tail pipe's inverted-insertion defect is obvious enough that no threshold segmentation is needed. Select the region of largest area and generate its bounding rectangle.

Finally, cut the bounding rectangle out of the original image to obtain the target region image, shrink the image by equal-interval sampling, and zero-pad it to 227 × 227.
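A minimal NumPy sketch of the threshold-and-crop portion of this preprocessing; the low-pass filtering, largest-region selection, and equal-interval downsampling are simplified away, and the function and parameter names are illustrative:

```python
import numpy as np

def crop_and_pad(gray, threshold, out_size=227):
    """Threshold-segment a grayscale image, crop to the bounding
    rectangle of the foreground, and zero-pad to out_size x out_size.
    Simplified: uses the bounding box of all above-threshold pixels
    rather than the largest connected region."""
    ys, xs = np.nonzero(gray > threshold)
    crop = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    out = np.zeros((out_size, out_size), dtype=gray.dtype)
    out[:min(h, out_size), :min(w, out_size)] = crop[:out_size, :out_size]
    return out
```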
Step 8: modify the network structure by setting the softmax layer's output count to 2; the rest of the structure is unchanged.
Step 9: based on the principle of transfer learning, load the weight and bias parameters of the pre-trained network: the weights and biases of layers (2)-(11) of the training-stage network are initialized to the values obtained in the pre-training stage.
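This parameter transfer can be sketched as a name-and-shape match over two parameter dictionaries (an illustration under assumed names, not the actual Caffe blob names): parameters that exist with identical shapes in the pre-trained model are copied, while the reshaped 2-way softmax layer keeps its fresh initialization.

```python
import numpy as np

def load_pretrained(fresh_params, pretrained_params):
    """Copy pre-trained parameters wherever names and shapes match
    (the layer (2)-(11) feature extractor); leave non-matching layers
    (e.g. the new 2-way softmax) at their fresh initialization."""
    merged = {}
    for name, fresh in fresh_params.items():
        src = pretrained_params.get(name)
        if src is not None and src.shape == fresh.shape:
            merged[name] = src.copy()   # transferred from pre-training
        else:
            merged[name] = fresh        # kept at fresh initialization
    return merged
```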
Step 10: set the training-stage hyperparameters. Since the training stage has fewer samples than the pre-training stage, the batch size Batchsize is set to 32 and the number of epochs to 60. The learning rate is again staged: 0.001 for epochs 1-20, 0.0001 for epochs 20-40, and 0.00001 for epochs 40-60. The optimization algorithm is Nesterov-momentum stochastic gradient descent (Nesterov-SGD).
Step 11: train a network model for each of the four angles (pump top, pump upper end, pump lower end, tail pipe) separately, using the Nesterov-SGD method to optimize and update the weights and bias parameters of each network layer, and select for each angle the defect detection model with the highest validation-set accuracy.

The network loss function is the logarithmic loss

$$L(\theta) = -\frac{1}{m}\sum_{i=1}^{m} y_i \log f(x_i;\theta)$$

where m equals the parameter Batchsize, y_i is the label value of the i-th sample, and f(x_i; θ) is the prediction of the network's forward pass; for conciseness, the weight and bias parameters of the network layers are denoted collectively by θ. The optimization algorithm minimizes the loss function by continually updating θ to obtain the optimal model; here Nesterov-SGD is used. The algorithm introduces Nesterov momentum, which corrects the current gradient direction during stochastic gradient descent with a look-ahead gradient. The specific algorithm is as follows: first, initialize the momentum v = 0 and set the momentum parameter α = 0.9; the tentative parameter update is $\tilde{\theta} = \theta + \alpha v$. Then obtain the look-ahead gradient $g = \nabla_{\tilde{\theta}} \frac{1}{m}\sum_{i=1}^{m} L(f(x_i;\tilde{\theta}), y_i)$, and update the momentum using the momentum parameter α, the learning rate ε, and the look-ahead gradient g: $v \leftarrow \alpha v - \epsilon g$. Finally, update the parameters with the momentum v: $\theta \leftarrow \theta + v$.
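The update rule above can be sketched in NumPy (a minimal illustration, not the patent's Caffe implementation; `grad_fn` stands in for the mini-batch gradient of the loss):

```python
import numpy as np

def nesterov_step(theta, v, grad_fn, lr, alpha=0.9):
    """One Nesterov-momentum SGD step: evaluate the gradient at the
    look-ahead point theta + alpha*v, then update the velocity v and
    the parameters theta."""
    g = grad_fn(theta + alpha * v)   # look-ahead ("advanced") gradient
    v = alpha * v - lr * g           # velocity update
    return theta + v, v              # parameter update
```

On a toy quadratic loss with gradient 2θ, a single step from θ = 1, v = 0 with ε = 0.1 gives θ = 0.8, and repeated steps drive θ toward 0.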
Step 12: batch-preprocess the test sets of the pump top, pump upper end, pump lower end, and tail pipe with the image preprocessing algorithm, then input them into the corresponding defect detection models to obtain predictions, and compare the predictions with the labels to obtain the defect detection accuracy of each angle's model.
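The final comparison in step 12 amounts to a simple accuracy computation over each angle's test set; a minimal sketch, using the 1 = defect / 2 = normal labeling convention of step 6:

```python
def detection_accuracy(predictions, labels):
    """Per-angle accuracy of a defect detection model: the fraction of
    test samples whose predicted class (1 = defect, 2 = normal) matches
    the manual label."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)
```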
Compared with the prior art, the superiority of the invention lies in: using Digits as the algorithm development and detection platform, dividing a single pump body into sample images of four angles (pump top, pump upper end, pump lower end, tail pipe), and building a classification model for each angle on the principles of transfer learning and convolutional neural networks in deep learning to detect defect samples. The method is suited to training with small sample sizes, effectively avoids interference from pump-top rotation and injection-molding points on the pump upper end, successfully identifies defective samples, and features high detection accuracy and fast detection speed.
Description of the drawings

Fig. 1 is a schematic diagram of the algorithm framework of the invention.

Fig. 2 is a schematic diagram of the pre-training network structure of step 2.

Fig. 3 is a pump upper end image generated by image cropping.

Fig. 4 is a pump lower end image generated by image cropping.

Fig. 5 is a pump top image generated by image cropping.

Fig. 6 is a schematic diagram of the modified network structure of step 8.

Fig. 7 is a screenshot of the emulsion pump defect detection results.

Fig. 8 plots validation-set accuracy against training epoch for models trained with and without transfer learning.
Specific embodiment
The scheme of the application is further described below with reference to Figs. 1 to 8.

A deep learning based emulsion pump defect detection method divides a single pump body into sample images of four angles (pump top, pump upper end, pump lower end, tail pipe) and, based on transfer learning and convolutional neural networks in deep learning, builds a classification model for each angle to detect defect samples.

Referring to Fig. 1, Digits serves as the algorithm development and detection platform. A single pump body is divided into sample images of the four angles, and a classification model is built for each angle on the principles of transfer learning and convolutional neural networks in deep learning to detect defect samples. The method mainly comprises: pre-training a network model on the Mini-ImageNet dataset; adjusting the model structure and loading the parameters of the pre-trained network, passing the training and validation sets of each angle of the emulsion pump through the image preprocessing algorithm and feeding them into the convolutional neural network for training, during which feature extraction and classification are performed automatically, and tuning the network hyperparameters according to changes in validation-set accuracy during training to obtain the final network model; and inputting the preprocessed emulsion pump test samples into the trained model to evaluate the defect recognition performance of the final model.
The method specifically includes the following steps:

Step 1: construct the pre-training dataset.

Obtain the Mini-ImageNet data from the ImageNet dataset, selecting image data of 10 classes with 6000 samples per class: the training set contains 50000 images and the test set 10000 images.
Step 2: build the pre-training network structure with Caffe on the Digits platform and initialize the network weight and bias parameters. Referring to Fig. 2, the pre-training network comprises, in order:

(1) Data, sample input layer: input pre-training sample size 227 × 227, 3 channels;

(2) Conv-11, first convolutional layer: kernel size 11 × 11, stride Stride 4, zero padding Pad 0; feature map size 54 × 54, feature map depth 96; ReLU activation;

(3) Pool-3, first pooling layer: kernel size 3 × 3, stride Stride 2, zero padding Pad 0; feature map size 26 × 26; pooling does not change the feature map depth;

(4) LRN, first lateral inhibition (local response normalization) layer: normalizes the feature maps of the previous layer;

(5) Conv-5, second convolutional layer: kernel size 5 × 5, stride Stride 1, zero padding Pad 2; feature map size 25 × 25, feature map depth 256; ReLU activation;

(6) Pool-3, second pooling layer: kernel size 3 × 3, stride Stride 2, zero padding Pad 0; feature map size 11 × 11;

(7) LRN, second lateral inhibition layer: normalizes the feature maps of the previous layer;

(8) Conv-3, third convolutional layer: kernel size 3 × 3, stride Stride 1, zero padding Pad 1; feature map size 10 × 10, feature map depth 384; ReLU activation;

(9) Conv-3, fourth convolutional layer: kernel size 3 × 3, stride Stride 1, zero padding Pad 1; feature map size 9 × 9, feature map depth 384; ReLU activation;

(10) Conv-3, fifth convolutional layer: kernel size 3 × 3, stride Stride 1, zero padding Pad 1; feature map size 8 × 8, feature map depth 256; ReLU activation;

(11) Pool-3, third pooling layer: kernel size 3 × 3, stride Stride 2, zero padding Pad 0; feature map size 3 × 3;

(12) Flatten layer: stretches the feature maps of the previous layer into a single column vector in preparation for classification;

(13) Fc, first fully connected layer: 4096 neurons; the Dropout parameter is set to 0.5, so each neuron in this layer is deactivated with probability 0.5;

(14) Fc, second fully connected layer: 4096 neurons; the Dropout parameter is set to 0.5, so each neuron in this layer is deactivated with probability 0.5;

(15) Softmax layer: the number of targets is set to the class count, 10.
Step 3: set the hyperparameters of the pre-training network.

The training batch size Batchsize is the number of input samples used in a single network pass; this parameter reduces the computational and parameter-storage burden of training and is set to 128. Epoch controls the number of training cycles; one epoch is one full pass over the training set, and it is set to 100. To approach the minimum of the loss function, the experiment uses a staged learning rate: 0.001 for epochs 1-30, 0.0001 for epochs 30-60, and 0.00001 for epochs 60-100. Stochastic gradient descent (SGD) is chosen as the optimization algorithm.

Step 4: pre-train the network and store the updated weight and bias parameters.
Step 5: emulsion pump sample collection.

Image samples are taken of four angles: pump top, pump upper end, pump lower end, and tail pipe, with 1500 images acquired per angle: 750 for the training set, 250 for the validation set, and 500 for the test set; in each sample set the ratio of defect samples to normal samples is 1:1. The defects comprise dotted, linear, and blocky oil-stain defects on the pump top, pump upper end, and pump lower end, and the inverted-insertion defect of the tail pipe.

Step 6: manually label the pump top, pump upper end, pump lower end, and tail pipe samples, with 1 denoting a defect sample and 2 a normal sample.

Step 7: batch-preprocess the training and validation sets of the pump top with the image preprocessing algorithm.
Referring to Figs. 3 to 5, the image preprocessing method is as follows:

First, manually crop the approximate image region, apply a grayscale transformation, and obtain the rough contour of the image with a low-pass filter.

Then, perform threshold segmentation on the image with the bimodal histogram method, setting the threshold to 15 for the pump top, 25 for the pump upper end, and 30 for the pump lower end; the tail pipe's inverted-insertion defect is obvious enough that no threshold segmentation is needed. Select the region of largest area and generate its bounding rectangle.

Finally, cut the bounding rectangle out of the original image to obtain the target region image, shrink the image by equal-interval sampling, and zero-pad it to 227 × 227.
Step 8: referring to Fig. 6, modify the network structure by setting the softmax layer's output count to 2; the rest of the structure is unchanged.
Step 9, the principle based on transfer learning load the weight offset parameter of pre-training network, by model training stage
The parameter value of network weight biasing of (2)-(11) layer be initialized as the value that the pre-training stage obtains:
Step 10: set the training-stage hyperparameters. Because the model-training stage has fewer samples than the pre-training stage, the batch size Batchsize is set to 32 and the number of training epochs to 60. The learning rate again follows a stepped schedule: 0.001 for epochs 1-20, 0.0001 for epochs 20-40, and 0.00001 for epochs 40-60; the optimization algorithm is Nesterov stochastic gradient descent (Nesterov SGD);
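The stepped learning-rate schedules used in both training stages can be expressed as a small lookup (a sketch; the text leaves the behaviour at the epoch boundaries ambiguous, so the lower bound is taken as inclusive here):

```python
def stepped_lr(epoch, schedule):
    """Return the learning rate for a given epoch under a stepped schedule,
    supplied as (first_epoch, rate) pairs in ascending order."""
    rate = schedule[0][1]
    for start, r in schedule:
        if epoch >= start:   # each step takes effect from its first epoch
            rate = r
    return rate

# Training-stage schedule from the text: 60 epochs in three steps.
TRAIN_SCHEDULE = [(1, 1e-3), (20, 1e-4), (40, 1e-5)]
```

The pre-training-stage schedule would be expressed the same way with steps at epochs 1, 30, and 60.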
Step 11: train a network model for the image samples of each of the four views (pump top, pump upper body, pump lower body, tail pipe), using the Nesterov SGD method to update the weight and bias parameters of each network layer, and select for each view the defect-detection model with the highest validation accuracy;
The network loss function is the logarithmic (cross-entropy) loss J(θ) = −(1/m) Σ_{i=1}^{m} log f_{y_i}(x_i; θ), where m equals the batch size Batchsize, y_i denotes the label value of the i-th sample, f(x_i; θ) denotes the network's forward-propagation prediction (a probability vector over the classes), and f_{y_i}(x_i; θ) its component for class y_i; for convenience of exposition, θ denotes the weight and bias parameters of the network layers. The loss function is minimized by the optimization algorithm, which repeatedly updates θ to obtain the optimal model; the optimization algorithm used here is Nesterov stochastic gradient descent (Nesterov SGD). This algorithm introduces Nesterov momentum, which corrects the current gradient direction with a look-ahead gradient during stochastic gradient descent. The specific algorithm is as follows: first, initialize the momentum v = 0 and set the momentum parameter α = 0.9; form the interim parameter update θ̃ ← θ + α·v; then obtain the look-ahead gradient g ← ∇_θ̃ J(θ̃); update the momentum using the momentum parameter α, the learning rate ε, and the look-ahead gradient g: v ← α·v − ε·g; finally, update the parameters with the momentum v: θ ← θ + v;
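The Nesterov update can be written out directly. The following minimal sketch applies it to a one-dimensional quadratic rather than the network loss, purely to show the interim update, look-ahead gradient, and momentum step:

```python
def nesterov_sgd(grad, theta, epochs=100, lr=0.1, alpha=0.9):
    """Nesterov-momentum SGD: evaluate the gradient at the look-ahead point
    theta + alpha*v, fold it into the momentum, then step the parameters."""
    v = 0.0
    for _ in range(epochs):
        theta_tilde = theta + alpha * v   # interim (look-ahead) update
        g = grad(theta_tilde)             # look-ahead gradient
        v = alpha * v - lr * g            # momentum update
        theta = theta + v                 # parameter update
    return theta

# Minimize J(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3);
# the optimum is theta = 3.
theta_star = nesterov_sgd(lambda t: 2.0 * (t - 3.0), theta=0.0)
```

The look-ahead evaluation is the only difference from ordinary momentum SGD, but it damps the overshoot of the momentum term.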
Step 12: batch-preprocess the test sets of the pump top, pump upper body, pump lower body, and tail pipe using the image preprocessing algorithm, then input them into the corresponding defect-detection models to obtain predictions; comparing the predictions with the labels yields each view's defect-detection accuracy.
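The accuracy comparison in this step reduces to counting label matches; a trivial sketch (the 1/2 label convention follows the text):

```python
def detection_accuracy(predictions, labels):
    """Accuracy of a view's defect-detection model: fraction of test samples
    whose predicted class (1 = defect, 2 = normal) matches the label."""
    assert len(predictions) == len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Example: three of four predictions match their labels.
acc = detection_accuracy([1, 1, 2, 2], [1, 2, 2, 2])
```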
Following the above steps, detection models for the four views are trained using the training and validation sets, with the test set serving as the demonstration samples for the algorithm; in the experiments, label 1 denotes a defect sample and 2 a normal sample. The detection results are shown in Figure 7. The first four images are defect samples of the pump upper body, pump top, pump lower body, and tail pipe, for which the models' predicted probability of class 1 is much greater than that of class 2, so defect samples are detected accurately. The last two images are normal samples containing injection-point interference and pump-mouth rotation interference, respectively; the models' predicted probability of class 2 is much greater than that of class 1, showing that the algorithm effectively tolerates injection-point and pump-mouth interference.
During training, the algorithm's validation-accuracy-versus-epoch curves with and without transfer learning are compared for the pump top, pump upper body, pump lower body, and tail pipe, as shown in Figure 8. Panels (a) and (b) are the curves for the pump upper body and pump top; because their imaging surfaces are more complex and the defect features less obvious, the curves without transfer learning converge only after about 40 epochs, whereas the transfer-learning curves converge within 5 epochs and also reach higher accuracy. Panels (c) and (d) are the curves for the pump lower body and tail pipe; since their defect features are more obvious, the benefit of transfer learning is smaller, but it still accelerates network convergence and improves the detection accuracy of the network model.
Table 1. Detection accuracy of the algorithm
Table 2. Detection time of the algorithm
The defect-recognition performance of this algorithm was also compared with three traditional algorithms. Because industrial production demands both accuracy and speed in emulsion-pump defect detection, the comparison metrics are test-set accuracy and per-sample detection time, shown in Table 1 and Table 2 respectively. The DBN (Deep Belief Nets) reconstruction method trains a DBN on the sample set to construct a template, then judges whether a test sample is defective by comparing the degree of difference between the sample and the template. The LBP (Local Binary Pattern) + SVM (Support Vector Machine) method first extracts LBP features from the training samples, then trains a defect-detection model with an SVM classifier to perform defect detection.
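For reference, the LBP feature used by this baseline can be illustrated with the basic 3 × 3 operator (a sketch; the exact LBP variant, radius, and neighbour ordering used in the comparison are not specified in the text):

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours against the
    centre pixel and read them off as an 8-bit code (clockwise from top-left)."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:          # neighbour at least as bright as the centre -> 1
            code |= 1 << bit
    return code
```

A histogram of these codes over the image (or over cells of it) forms the feature vector fed to the SVM.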
The Gabor + KLPP (Kernel Locality Preserving Projections) + MLP (Multi-Layer Perceptron) method is similar to the above: the samples undergo a Gabor transform to extract mean and variance features, the features are reduced in dimension with KLPP, and the result is input to a multi-layer perceptron (MLP) to train a classifier for emulsion-pump defect detection.
As Tables 1 and 2 show, because of interference from pump-mouth rotation and pump-body injection points, the detection accuracy for the pump upper body and pump top is lower than that for the pump lower body and tail pipe. Since the four views of an emulsion pump form a whole, the lowest accuracy and the longest time among the four views are reported here. The LBP+SVM method achieves 70.6% accuracy with a per-sample detection time of 5.76 s; the Gabor+KLPP+MLP method achieves 83.8% and 7.68 s; the DBN reconstruction method achieves 67.8% and 30.62 s; and the proposed algorithm achieves 93.4% and 2.52 s. These figures show that the proposed algorithm is substantially better than the other algorithms in both detection accuracy and detection time.
The above preferred embodiment should be regarded as an example of the implementation of the present application; all technical deductions, replacements, improvements, and the like that are identical or similar to the present application, or made on its basis, shall be regarded as falling within the protection scope of this patent.
Claims (2)
1. An emulsion-pump defect-detection method based on deep learning, characterized in that, with Digits as the algorithm development and detection platform, a single pump body is divided into sample images of four views: pump top, pump upper body, pump lower body, and tail pipe; based on the principles of deep transfer learning and convolutional neural networks, a classification model is constructed for each view to detect defect samples; the method mainly comprises: pre-training a network model with the Mini-ImageNet dataset; adjusting the model structure and loading the parameters of the pre-trained network; inputting the training and validation sets of each emulsion-pump view, after the image preprocessing algorithm, into the convolutional neural network for training, in which feature extraction and classification are carried out automatically; adjusting the network hyperparameters during training according to changes in validation accuracy to obtain the final network model; and inputting preprocessed emulsion-pump test samples into the trained model to examine the defect-recognition performance of the final model.
2. The deep-learning-based emulsion-pump defect-detection method according to claim 1, characterized in that it comprises the following steps:
Step 1: construct the dataset for the pre-training stage;
Obtain the Mini-ImageNet data from the ImageNet dataset, selecting 10 categories with 6000 sample images per category; the training set contains 50000 images and the test set 10000 images;
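The split above (10 classes × 6000 images into 50000 training and 10000 test images) can be sketched as a per-class index partition (illustrative; the actual class selection from ImageNet is not specified beyond the counts):

```python
def split_dataset(samples_per_class=6000, classes=10, test_per_class=1000):
    """Per-class split matching the pre-training set: 10 classes x 6000 images
    -> 50000 training and 10000 test images (5000 / 1000 per class)."""
    train_idx, test_idx = [], []
    for c in range(classes):
        base = c * samples_per_class
        # First test_per_class indices of each class go to the test set,
        # the remainder to the training set.
        test_idx.extend(range(base, base + test_per_class))
        train_idx.extend(range(base + test_per_class, base + samples_per_class))
    return train_idx, test_idx

train_idx, test_idx = split_dataset()
```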
Step 2: build the pre-training network structure based on Caffe on the Digits platform and initialize the network weight and bias parameters; the pre-training network comprises, in order:
(1) Data, the sample input layer: the input pre-training samples are 227 × 227 with 3 channels;
(2) Conv-11, the first convolutional layer: kernel size 11 × 11, stride (Stride) 4, zero padding (Pad) 0; feature maps 54 × 54 with depth 96; ReLU activation;
(3) Pool-3, the first pooling layer: kernel size 3 × 3, stride 2, zero padding 0; feature maps 26 × 26; pooling does not change the feature-map depth;
(4) LRN, the first lateral-inhibition (local response normalization) layer: normalizes the previous layer's feature maps;
(5) Conv-5, the second convolutional layer: kernel size 5 × 5, stride 1, zero padding 2; feature maps 25 × 25 with depth 256; ReLU activation;
(6) Pool-3, the second pooling layer: kernel size 3 × 3, stride 2, zero padding 0; feature maps 11 × 11;
(7) LRN, the second lateral-inhibition layer: normalizes the previous layer's feature maps;
(8) Conv-3, the third convolutional layer: kernel size 3 × 3, stride 1, zero padding 1; feature maps 10 × 10 with depth 384; ReLU activation;
(9) Conv-3, the fourth convolutional layer: kernel size 3 × 3, stride 1, zero padding 1; feature maps 9 × 9 with depth 384; ReLU activation;
(10) Conv-3, the fifth convolutional layer: kernel size 3 × 3, stride 1, zero padding 1; feature maps 8 × 8 with depth 256; ReLU activation;
(11) Pool-3, the third pooling layer: kernel size 3 × 3, stride 2, zero padding 0; feature maps 3 × 3;
(12) Flatten, the flattening layer: stretches the previous layer's feature maps into a column vector in preparation for the subsequent classification;
(13) Fc, the first fully connected layer: 4096 neurons, with the Dropout parameter set to 0.5 so that each neuron in this layer is deactivated with probability 0.5;
(14) Fc, the second fully connected layer: 4096 neurons, with the Dropout parameter set to 0.5 so that each neuron in this layer is deactivated with probability 0.5;
(15) Softmax layer: the number of targets is set to the number of classes, 10;
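The spatial sizes in the layer list can be checked against the usual convolution output formula, floor((W − K + 2P)/S) + 1. With that formula the conv/pool stack traces 55 → 27 → 27 → 13 → 13 → 13 → 13 → 6 (the classical AlexNet sizes); the figures quoted above appear to follow (W − K + 2P)/S without the "+1" term. The sketch below uses the standard formula:

```python
def out_size(w, k, s, p):
    """Spatial output size of a convolution or pooling layer:
    floor((w - k + 2p) / s) + 1."""
    return (w - k + 2 * p) // s + 1

# Trace the spatial size through the pre-training network's conv/pool stack
# (layer name, kernel, stride, padding), starting from the 227x227 input.
size = 227
for name, k, s, p in [("conv1", 11, 4, 0), ("pool1", 3, 2, 0),
                      ("conv2", 5, 1, 2),  ("pool2", 3, 2, 0),
                      ("conv3", 3, 1, 1),  ("conv4", 3, 1, 1),
                      ("conv5", 3, 1, 1),  ("pool3", 3, 2, 0)]:
    size = out_size(size, k, s, p)
```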
Step 3: set the hyperparameters of the pre-training network;
The batch size Batchsize denotes the number of input samples used in a single network pass; this parameter reduces the computational and parameter-storage burden of training and is set to 128. Epoch controls the number of training cycles, one epoch being one traversal of the training set; it is set to 100. To approach the minimum of the loss function, the experiments use a stepped learning rate: 0.001 for epochs 1-30, 0.0001 for epochs 30-60, and 0.00001 for epochs 60-100; the optimization algorithm is stochastic gradient descent (SGD);
Step 4: pre-train the network and store the updated weight and bias parameters;
Step 5: collect emulsion pump samples;
Image samples are acquired from four views: pump top, pump upper body, pump lower body, and tail pipe; 1500 images are acquired per view: 750 for the training set, 250 for the validation set, and 500 for the test set, with a 1:1 ratio of defect samples to normal samples in each set; the defects comprise dotted, linear, and blocky oil-stain defects of the pump top, pump upper body, and pump lower body, and inverted-insertion defects of the tail pipe;
Step 6: manually label the pump-top, pump upper-body, pump lower-body, and tail-pipe samples of the emulsion pump, setting 1 for defect samples and 2 for normal samples;
Step 7: batch-preprocess the training and validation sets of the pump top using the image preprocessing algorithm;
The image preprocessing method is as follows:
First, manually crop the approximate image region and apply a grayscale transformation; obtain the rough outline of the image through a low-pass filter;
Then, perform threshold segmentation on the image using the bimodal histogram method; the pump-top threshold is set to 15, the pump upper body to 25, and the pump lower body to 30; because the tail pipe's inverted-insertion defect is obvious, it requires no threshold segmentation; select the region of largest area and generate its bounding rectangle;
Finally, crop the bounding rectangle from the original image to obtain the target-region image; shrink the image by the equal-interval-sampling principle and pad it to 227 × 227 with zeros;
Step 8: modify the network structure, setting the softmax layer parameter to 2 and leaving the remaining structure unchanged;
Step 9: based on the transfer-learning principle, load the weight and bias parameters of the pre-trained network, initializing the weight and bias parameters of layers (2)-(11) in the model-training stage to the values obtained in the pre-training stage;
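The weight-loading step can be sketched as copying per-layer parameter dicts (hypothetical layer names and parameter format, not the actual Caffe/Digits API; layers outside the transfer list, such as the re-sized softmax layer, keep their fresh initialization):

```python
def load_pretrained(model, pretrained, transfer_layers):
    """Transfer-learning initialization sketch: copy weights and biases for
    the named layers from the pre-trained parameter dict; any layer not
    listed keeps its fresh random initialization."""
    for layer in transfer_layers:
        model[layer] = dict(pretrained[layer])  # copy weight + bias together
    return model

# Hypothetical parameter dicts keyed by layer name (values are placeholders).
pretrained = {"conv1": {"w": [0.1], "b": [0.0]},
              "fc1":   {"w": [0.5], "b": [0.1]}}
model = {"conv1":   {"w": [9.9], "b": [9.9]},
         "fc1":     {"w": [9.9], "b": [9.9]},
         "softmax": {"w": [0.0], "b": [0.0]}}

# Transfer only the convolutional stack, mirroring layers (2)-(11).
model = load_pretrained(model, pretrained, transfer_layers=["conv1"])
```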
Step 10: set the training-stage hyperparameters; because the model-training stage has fewer samples than the pre-training stage, the batch size Batchsize is set to 32 and the number of training epochs to 60; the learning rate again follows a stepped schedule: 0.001 for epochs 1-20, 0.0001 for epochs 20-40, and 0.00001 for epochs 40-60; the optimization algorithm is Nesterov stochastic gradient descent (Nesterov SGD);
Step 11: train a network model for the image samples of each of the four views (pump top, pump upper body, pump lower body, tail pipe), using the Nesterov SGD method to update the weight and bias parameters of each network layer, and select for each view the defect-detection model with the highest validation accuracy;
The network loss function is the logarithmic (cross-entropy) loss J(θ) = −(1/m) Σ_{i=1}^{m} log f_{y_i}(x_i; θ), where m equals the batch size Batchsize, y_i denotes the label value of the i-th sample, f(x_i; θ) denotes the network's forward-propagation prediction (a probability vector over the classes), and f_{y_i}(x_i; θ) its component for class y_i; for convenience of exposition, θ denotes the weight and bias parameters of the network layers. The loss function is minimized by the optimization algorithm, which repeatedly updates θ to obtain the optimal model; the optimization algorithm used here is Nesterov stochastic gradient descent (Nesterov SGD). This algorithm introduces Nesterov momentum, which corrects the current gradient direction with a look-ahead gradient during stochastic gradient descent. The specific algorithm is as follows: first, initialize the momentum v = 0 and set the momentum parameter α = 0.9; form the interim parameter update θ̃ ← θ + α·v; then obtain the look-ahead gradient g ← ∇_θ̃ J(θ̃); update the momentum using the momentum parameter α, the learning rate ε, and the look-ahead gradient g: v ← α·v − ε·g; finally, update the parameters with the momentum v: θ ← θ + v;
Step 12: batch-preprocess the test sets of the pump top, pump upper body, pump lower body, and tail pipe using the image preprocessing algorithm, then input them into the corresponding defect-detection models to obtain predictions; comparing the predictions with the labels yields each view's defect-detection accuracy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811357765.5A CN109559298A (en) | 2018-11-14 | 2018-11-14 | Emulsion pump defect detection method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109559298A true CN109559298A (en) | 2019-04-02 |
Family
ID=65866527
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811357765.5A Pending CN109559298A (en) | 2018-11-14 | 2018-11-14 | Emulsion pump defect detection method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109559298A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160163035A1 (en) * | 2014-12-03 | 2016-06-09 | Kla-Tencor Corporation | Automatic Defect Classification Without Sampling and Feature Selection |
CN205538740U (en) * | 2016-01-29 | 2016-08-31 | 广州番禺职业技术学院 | Intelligence timber surface defect detection system |
CN106769048A (en) * | 2017-01-17 | 2017-05-31 | 苏州大学 | Adaptive deep confidence network bearing fault diagnosis method based on Nesterov momentum method |
CN107862692A (en) * | 2017-11-30 | 2018-03-30 | 中山大学 | A kind of ribbon mark of break defect inspection method based on convolutional neural networks |
CN107868979A (en) * | 2017-08-31 | 2018-04-03 | 西安理工大学 | A kind of silicon single crystal diameter control method based on permanent casting speed control structure |
CN108345911A (en) * | 2018-04-16 | 2018-07-31 | 东北大学 | Surface Defects in Steel Plate detection method based on convolutional neural networks multi-stage characteristics |
Non-Patent Citations (13)
Title |
---|
ERZHU LI ET AL.: "Integrating Multilayer Features of Convolutional Neural Networks for Remote Sensing Scene Classification", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》 * |
MAX FERGUSON ET AL.: "Automatic localization of casting defects with convolutional neural networks", 《2017 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA)》 * |
YANGQING JIA ET AL.: "Caffe: Convolutional Architecture for Fast Feature Embedding", 《PROCEEDINGS OF THE 2014 ACM CONFERENCE ON MULTIMEDIA (MM"14)》 * |
Ian Goodfellow et al.: "Deep Learning" (Chinese edition), 31 August 2017 * |
LIU Ziqiang: "Research on a machine-vision-based high-speed online detection system for pump-head defects of daily-chemical products", China Master's Theses Full-text Database, Information Science and Technology * |
LIU Ning: "Research on a machine-vision-based surface defect detection system for a product logo", China Master's Theses Full-text Database, Information Science and Technology * |
ZHOU Zhubo et al.: "Object detection in visible-light images of transmission lines based on deep convolutional neural networks", Chinese Journal of Liquid Crystals and Displays * |
SONG Guanghui: "Research on image annotation methods based on transfer learning and deep convolutional features", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
YIN Ye et al.: "A transfer-learning-based method for recognizing beet cercospora leaf spot", Computer Engineering and Design * |
CUI Xuehong: "Research on non-destructive detection and classification of tire defects based on deep learning", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II * |
JING Junfeng et al.: "Fabric surface defect classification method based on convolutional neural networks", Measurement & Control Technology * |
SHEN Huihui et al.: "An improved algorithm for products-of-experts systems based on restricted Boltzmann machines", Journal of Electronics & Information Technology * |
XIAO Lin et al.: "Research on pavement crack recognition based on Caffe", Engineering Technology * |
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110084150A (en) * | 2019-04-09 | 2019-08-02 | 山东师范大学 | A kind of Automated Classification of White Blood Cells method and system based on deep learning |
CN110084150B (en) * | 2019-04-09 | 2021-05-11 | 山东师范大学 | Automatic white blood cell classification method and system based on deep learning |
CN110020691A (en) * | 2019-04-11 | 2019-07-16 | 重庆信息通信研究院 | LCD screen defect inspection method based on the training of convolutional neural networks confrontation type |
CN110020691B (en) * | 2019-04-11 | 2022-10-11 | 重庆信息通信研究院 | Liquid crystal screen defect detection method based on convolutional neural network impedance type training |
CN110136116A (en) * | 2019-05-15 | 2019-08-16 | 广东工业大学 | A kind of injection molding pump defect inspection method, device, equipment and storage medium |
CN110136116B (en) * | 2019-05-15 | 2024-01-23 | 广东工业大学 | Injection pump defect detection method, device, equipment and storage medium |
CN110245587A (en) * | 2019-05-29 | 2019-09-17 | 西安交通大学 | A kind of remote sensing image object detection method based on Bayes's transfer learning |
CN110222681A (en) * | 2019-05-31 | 2019-09-10 | 华中科技大学 | A kind of casting defect recognition methods based on convolutional neural networks |
CN110186375A (en) * | 2019-06-06 | 2019-08-30 | 西南交通大学 | Intelligent high-speed rail white body assemble welding feature detection device and detection method |
CN112097673A (en) * | 2019-06-18 | 2020-12-18 | 上汽通用汽车有限公司 | Virtual matching method and system for vehicle body parts |
CN110333076A (en) * | 2019-06-19 | 2019-10-15 | 电子科技大学 | Method for Bearing Fault Diagnosis based on CNN-Stacking |
CN110333076B (en) * | 2019-06-19 | 2021-01-26 | 电子科技大学 | Bearing fault diagnosis method based on CNN-Stacking |
CN110309914A (en) * | 2019-07-03 | 2019-10-08 | 中山大学 | Deep learning model reasoning accelerated method based on Edge Server Yu mobile terminal equipment collaboration |
CN110348417A (en) * | 2019-07-17 | 2019-10-18 | 济南大学 | A kind of optimization method of depth Gesture Recognition Algorithm |
CN110348417B (en) * | 2019-07-17 | 2022-09-30 | 济南大学 | Optimization method of depth gesture recognition algorithm |
CN110660040A (en) * | 2019-07-24 | 2020-01-07 | 浙江工业大学 | Industrial product irregular defect detection method based on deep learning |
CN110598691B (en) * | 2019-08-01 | 2023-05-02 | 广东工业大学 | Drug character label identification method based on improved multilayer perceptron |
CN110598691A (en) * | 2019-08-01 | 2019-12-20 | 广东工业大学 | Medicine character label identification method based on improved multilayer perceptron |
CN110555467B (en) * | 2019-08-13 | 2020-10-23 | 深圳创新奇智科技有限公司 | Industrial data classification method based on model migration |
CN110555467A (en) * | 2019-08-13 | 2019-12-10 | 深圳创新奇智科技有限公司 | industrial data classification method based on model migration |
CN110490858B (en) * | 2019-08-21 | 2022-12-13 | 西安工程大学 | Fabric defective pixel level classification method based on deep learning |
CN110490858A (en) * | 2019-08-21 | 2019-11-22 | 西安工程大学 | A kind of fabric defect Pixel-level classification method based on deep learning |
CN110633739B (en) * | 2019-08-30 | 2023-04-07 | 太原科技大学 | Polarizer defect image real-time classification method based on parallel module deep learning |
CN110633739A (en) * | 2019-08-30 | 2019-12-31 | 太原科技大学 | Polarizer defect image real-time classification method based on parallel module deep learning |
CN110689051A (en) * | 2019-09-06 | 2020-01-14 | 北京市安全生产科学技术研究院 | Intelligent identification method for corrosion mode in gas pipeline based on transfer learning |
CN110827260A (en) * | 2019-11-04 | 2020-02-21 | 燕山大学 | Cloth defect classification method based on LBP (local binary pattern) features and convolutional neural network |
CN110827260B (en) * | 2019-11-04 | 2023-04-21 | 燕山大学 | Cloth defect classification method based on LBP characteristics and convolutional neural network |
CN111027631A (en) * | 2019-12-13 | 2020-04-17 | 四川赛康智能科技股份有限公司 | X-ray image classification and identification method for judging crimping defects of high-voltage strain clamp |
CN111027631B (en) * | 2019-12-13 | 2023-09-01 | 四川赛康智能科技股份有限公司 | X-ray image classification and identification method for judging crimping defects of high-voltage strain clamp |
CN111161228A (en) * | 2019-12-20 | 2020-05-15 | 东南大学 | Button surface defect detection method based on transfer learning |
CN111161228B (en) * | 2019-12-20 | 2023-08-25 | 东南大学 | Button surface defect detection method based on transfer learning |
US11922613B2 (en) | 2019-12-30 | 2024-03-05 | Micron Technology, Inc. | Apparatuses and methods for determining wafer defects |
CN113129257A (en) * | 2019-12-30 | 2021-07-16 | 美光科技公司 | Apparatus and method for determining wafer defects |
CN111199543A (en) * | 2020-01-07 | 2020-05-26 | 南京航空航天大学 | Refrigerator-freezer surface defect detects based on convolutional neural network |
CN111488912A (en) * | 2020-03-16 | 2020-08-04 | 哈尔滨工业大学 | Laryngeal disease diagnosis system based on deep learning neural network |
CN111507990A (en) * | 2020-04-20 | 2020-08-07 | 南京航空航天大学 | Tunnel surface defect segmentation method based on deep learning |
CN111612747A (en) * | 2020-04-30 | 2020-09-01 | 重庆见芒信息技术咨询服务有限公司 | Method and system for rapidly detecting surface cracks of product |
CN111612747B (en) * | 2020-04-30 | 2023-10-20 | 湖北煌朝智能自动化装备有限公司 | Rapid detection method and detection system for product surface cracks |
CN111626994A (en) * | 2020-05-18 | 2020-09-04 | 江苏远望仪器集团有限公司 | Equipment fault defect diagnosis method based on improved U-Net neural network |
CN111696109A (en) * | 2020-05-25 | 2020-09-22 | 深圳大学 | High-precision layer segmentation method for retina OCT three-dimensional image |
CN111709918A (en) * | 2020-06-01 | 2020-09-25 | 深圳市深视创新科技有限公司 | Product defect classification method combining multiple channels based on deep learning |
CN111709918B (en) * | 2020-06-01 | 2023-04-18 | 深圳市深视创新科技有限公司 | Product defect classification method combining multiple channels based on deep learning |
CN112132196B (en) * | 2020-09-14 | 2023-10-20 | 中山大学 | Cigarette case defect identification method combining deep learning and image processing |
CN112132196A (en) * | 2020-09-14 | 2020-12-25 | 中山大学 | Cigarette case defect identification method combining deep learning and image processing |
WO2022134304A1 (en) * | 2020-12-22 | 2022-06-30 | 东方晶源微电子科技(北京)有限公司 | Defect detection model training method and defect detection method and apparatus and device |
EP4141786A4 (en) * | 2021-01-28 | 2023-08-09 | BOE Technology Group Co., Ltd. | Defect detection method and apparatus, model training method and apparatus, and electronic device |
CN113034483A (en) * | 2021-04-07 | 2021-06-25 | 昆明理工大学 | Cigarette defect detection method based on deep migration learning |
CN113034483B (en) * | 2021-04-07 | 2022-06-10 | 昆明理工大学 | Cigarette defect detection method based on deep migration learning |
CN113344847A (en) * | 2021-04-21 | 2021-09-03 | 安徽工业大学 | Long tail clamp defect detection method and system based on deep learning |
CN113344847B (en) * | 2021-04-21 | 2023-10-31 | 安徽工业大学 | Deep learning-based long tail clamp defect detection method and system |
CN114066849A (en) * | 2021-11-16 | 2022-02-18 | 东北大学秦皇岛分校 | Deep learning-based electrical interface defect detection method |
CN114066849B (en) * | 2021-11-16 | 2024-08-20 | 东北大学秦皇岛分校 | Electrical interface defect detection method based on deep learning |
CN114120317A (en) * | 2021-11-29 | 2022-03-01 | 哈尔滨工业大学 | Optical element surface damage identification method based on deep learning and image processing |
CN114120317B (en) * | 2021-11-29 | 2024-04-16 | 哈尔滨工业大学 | Optical element surface damage identification method based on deep learning and image processing |
CN115063609A (en) * | 2022-06-28 | 2022-09-16 | 华南理工大学 | Heat pipe liquid absorption core oxidation grading method based on deep learning |
CN115063609B (en) * | 2022-06-28 | 2024-03-26 | 华南理工大学 | Deep learning-based heat pipe liquid absorption core oxidation grading method |
CN117274822A (en) * | 2023-11-21 | 2023-12-22 | 中国电建集团华东勘测设计研究院有限公司 | Processing method and device of water and soil loss monitoring model and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109559298A (en) | Emulsion pump defect detection method based on deep learning | |
CN109299716A (en) | Training method, image partition method, device, equipment and the medium of neural network | |
CN109902678A (en) | Model training method, character recognition method, device, electronic equipment and computer-readable medium | |
CN106407958B (en) | Face feature detection method based on double-layer cascade | |
CN106651830A (en) | Image quality test method based on parallel convolutional neural network | |
CN107529650A (en) | Network model construction and closed loop detection method, corresponding device and computer equipment | |
CN109613006A (en) | A kind of fabric defect detection method based on end-to-end neural network | |
CN110992351B (en) | sMRI image classification method and device based on multi-input convolution neural network | |
CN108021947A (en) | A kind of layering extreme learning machine target identification method of view-based access control model | |
CN109376663A (en) | A kind of human posture recognition method and relevant apparatus | |
CN110321785A (en) | A method of introducing ResNet deep learning network struction dermatoglyph classification prediction model | |
Liu et al. | Coastline extraction method based on convolutional neural networks—A case study of Jiaozhou Bay in Qingdao, China | |
CN107818080A (en) | Term recognition methods and device | |
CN108932712A (en) | A kind of rotor windings quality detecting system and method | |
CN110334594A (en) | A kind of object detection method based on batch again YOLO algorithm of standardization processing | |
CN114359199A (en) | Fish counting method, device, equipment and medium based on deep learning | |
CN110008853A (en) | Pedestrian detection network and model training method, detection method, medium, equipment | |
CN110046570A (en) | A kind of silo grain inventory dynamic supervision method and apparatus | |
CN109829414A (en) | A kind of recognition methods again of the pedestrian based on label uncertainty and human body component model | |
CN110287985A (en) | A kind of deep neural network image-recognizing method based on the primary topology with Mutation Particle Swarm Optimizer | |
CN116258990A (en) | Cross-modal affinity-based small sample reference video target segmentation method | |
CN109583289A (en) | The gender identification method and device of crab | |
An et al. | Fabric defect detection using deep learning: An Improved Faster R-CNN approach | |
Zhang et al. | Target detection of banana string and fruit stalk based on YOLOv3 deep learning network | |
CN110175985A (en) | Carbon fiber composite core wire damage detecting method, device and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20190402 |
|
RJ01 | Rejection of invention patent application after publication |