CN110288013A - A Defect Label Recognition Method Based on Block Segmentation and Multiple Input Siamese Convolutional Neural Networks - Google Patents
- Publication number
- CN110288013A (application number CN201910537875.8A)
- Authority
- CN
- China
- Prior art keywords
- block
- weight
- iteration
- label
- label picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/211—Selection of the most significant subset of features
- G—PHYSICS; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/24—Classification techniques
- G—PHYSICS; G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/045—Combinations of networks
- G—PHYSICS; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/0004—Industrial image inspection
Abstract
Description
Technical Field
The present invention relates to the field of defect label detection and recognition in industrial production, and in particular to a defect label recognition method based on block segmentation and a multiple-input Siamese convolutional neural network.
Background
Nowadays many commodities on the market carry labels. Labels mark the key information of a product and play an ever larger role in work and daily life, so label quality attracts increasing attention. During production, however, factors such as the printing process and mechanical precision cause frequent quality problems: damaged labels and printing defects including extra characters, missing characters, missing strokes, and scratches. Label defect detection is therefore a critical step, and because defective labels come in many varieties, their detection and classification are difficult. Three methods are currently used for defect label detection and recognition:
1. Manual inspection. On industrial production lines, workers compare labels by eye, keeping qualified labels and discarding unqualified ones.
Problems: manual inspection is slow, imprecise, and costly, and long inspection shifts easily cause operator fatigue.
2. Difference-based detection. A reference label is prepared and the label under test is differenced against it; defects appear in the difference image.
Problems: an unsuitable reference label, differing label contents, or uneven illumination all cause false detections.
3. Frequency-domain detection. Defect signals tend to contain high-frequency components, which can be exploited to detect defective labels.
Problems: label regions whose frequency content resembles that of defects cause false detections, and the method places constraints on label content.
There is therefore an urgent need for a defect label detection method for industrial production lines that automatically inspects the printed content, accurately identifies the position and category of defective labels, and allows the set of defect categories to be extended by the user, giving the system high flexibility.
Summary of the Invention
In view of the above, it is necessary to provide a defect label recognition method based on block segmentation and a multiple-input Siamese convolutional neural network. A block-level label data set is trained to determine the category of a defect and the position where it occurs, and the AdaBoost algorithm is then applied to obtain a correct classification. This greatly reduces the computational cost and complexity of defect label recognition while effectively improving classification accuracy. On this basis, user-defined defect category label images can be added to the training set, so the system is flexible and extensible.
To overcome the defects of the prior art, the technical solution of the present invention is as follows:
A defect label recognition method based on block segmentation and a multiple-input Siamese convolutional neural network, comprising the following steps:
Step S1: segment the label image into blocks;
Step S2: train the multiple-input Siamese convolutional neural network on the block-level label image data set to obtain a trained multiple-input Siamese residual network model;
Step S3: use the trained network model to recognize and classify defective labels.
Step S1 further comprises:
Step S11: acquire the label image data set and cut it into blocks;
Step S12: store the resulting label image blocks in the block label image database.
Step S11 further comprises:
Step S111: each label in the data set has width w and height h;
Step S112: cut each label image into square blocks of side n;
Step S113: each label image is thus divided into (w/n)*(h/n) blocks.
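Steps S111 to S113 amount to a straightforward tiling of the image. A minimal NumPy sketch, in which the function name and array layout are illustrative assumptions and n is assumed to divide both dimensions exactly (as the (w/n)*(h/n) block count implies):

```python
import numpy as np

def split_into_blocks(label_img: np.ndarray, n: int) -> list:
    """Cut a label image of height h and width w into (w/n)*(h/n)
    non-overlapping n-by-n blocks (steps S111-S113)."""
    h, w = label_img.shape[:2]
    assert h % n == 0 and w % n == 0, "block size must divide image size"
    blocks = []
    for top in range(0, h, n):            # row of blocks
        for left in range(0, w, n):       # column of blocks
            blocks.append(label_img[top:top + n, left:left + n])
    return blocks

# A 4x6 toy "label image" cut into 2x2 blocks -> (6/2)*(4/2) = 6 blocks.
img = np.arange(24).reshape(4, 6)
print(len(split_into_blocks(img, 2)))  # -> 6
```

Each block is a view into the original array, so the tiling itself costs no pixel copies; blocks would only be copied when written to the block label image database.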
Step S2 further comprises:
Step S21: retrieve the block label image data set from the block label image database;
Step S22: train the multiple-input Siamese residual network model on the block label image data set.
Step S22 further comprises:
Step S221: initialize the weight distribution of the training set; every training sample is initially given the same weight 1/N:
D_1 = (w_11, w_12, ..., w_1i, ..., w_1N), with w_1i = 1/N,
where D_1 is the data set with its assigned weights in the first iteration; in the subscripts of w, the first index is the iteration number and the second the sample index, and N is the number of samples.
Step S222: perform M iterations, i.e. select M optimal weak classifiers, where m denotes the iteration index:
for m = 1, 2, ..., M:
Step S223: learn from the training data set with weight distribution D_m to obtain an optimal weak classifier
G_m(x): X -> {-1, +1},
where G_m(x) is the classifier learned in round m from the training set with weight distribution D_m, and its output is a class in {-1, +1}.
Step S224: select the weak classifier G with the lowest current error rate as the m-th basic classifier G_m, and compute its classification error rate on the training data set:
e_m = P(G_m(x_i) != y_i) = sum_{i=1..N} w_mi * I(G_m(x_i) != y_i),
where e_m is the error rate, x_i an input sample, y_i its true class, w_mi the weight of the i-th sample in round m, and I(.) the indicator function.
Step S225: compute the weight a_m of G_m(x) in the final classifier:
a_m = (1/2) * ln((1 - e_m) / e_m).
Step S226: update the weight distribution of the training data set for iteration m+1:
D_{m+1} = (w_{m+1,1}, w_{m+1,2}, ..., w_{m+1,i}, ..., w_{m+1,N}),
w_{m+1,i} = (w_mi / Z_m) * exp(-a_m * y_i * G_m(x_i)),
where D_{m+1} is the weighted data set for iteration m+1, Z_m = sum_{i=1..N} w_mi * exp(-a_m * y_i * G_m(x_i)) is the normalization constant, a_m is the weight of G_m(x) in the final classifier, and G_m is the weak classifier with the lowest error rate in round m.
After this update, the weights of samples misclassified by the basic classifier G_m(x) increase, while the weights of correctly classified samples decrease.
Step S227: after the M iterations above, combine the weak classifiers into the strong classifier that serves as the final trained model:
G(x) = sign(sum_{m=1..M} a_m * G_m(x)),
where sign is the sign function, returning 1 for a positive argument, -1 for a negative argument, and 0 for zero; G_m(x) is the classifier obtained in round m and a_m its weight in the final classifier.
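The procedure of steps S221 to S227 can be sketched in plain NumPy. The weak learners below are one-dimensional threshold stumps, a stand-in assumption for illustration only; in the patent the weak classifiers come from the trained Siamese residual network:

```python
import numpy as np

def train_adaboost(X, y, M=10):
    """Steps S221-S227 with threshold stumps on a 1-D feature as the
    weak learners G_m (illustrative stand-in for the network outputs)."""
    N = len(y)
    w = np.full(N, 1.0 / N)               # S221: D_1, every weight = 1/N
    ensemble = []                          # list of (a_m, threshold, sign)
    for m in range(M):                     # S222: M rounds
        best = None
        for thr in np.unique(X):           # S223/S224: lowest-error stump
            for s in (1, -1):
                pred = np.where(X > thr, s, -s)
                err = np.sum(w[pred != y])     # weighted error e_m
                if best is None or err < best[0]:
                    best = (err, thr, s, pred)
        e_m, thr, s, pred = best
        e_m = min(max(e_m, 1e-10), 1 - 1e-10)     # guard the logarithm
        alpha = 0.5 * np.log((1 - e_m) / e_m)     # S225: a_m
        w = w * np.exp(-alpha * y * pred)          # S226: re-weight samples
        w = w / w.sum()                            # divide by Z_m
        ensemble.append((alpha, thr, s))
    return ensemble

def predict(ensemble, X):
    """S227: strong classifier G(x) = sign(sum_m a_m * G_m(x))."""
    f = sum(a * np.where(X > thr, s, -s) for a, thr, s in ensemble)
    return np.sign(f).astype(int)

X = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1])
model = train_adaboost(X, y, M=5)
print(predict(model, X))  # separable toy data: labels recovered exactly
```

Note how the S226 update concentrates weight on misclassified samples, which is exactly the mechanism the patent uses to retrain on wrongly classified label images.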
Compared with the prior art, the present invention has the following beneficial effects:
Efficiency: 1. Block segmentation of the label image allows defect features to be captured efficiently even when the defect information is weak, and allows the defect position to be located quickly. 2. Label image information is stored in a block label image database, and a deep multiple-input Siamese convolutional neural network built from residual networks is trained on the label image data set, yielding an efficient multiple-input Siamese residual network model. This improves classification performance on defect label images, mitigates the false detections caused by the high complexity of existing defect label images, and raises recognition efficiency.
Accuracy: 1. Training on the block label data set yields an accurate multiple-input Siamese residual network model. Because the input branches of a Siamese network share parameters, they extract features of the same type, and the feature vectors of similar inputs are highly similar; each defect label image is therefore assigned to the most similar reference label category. This counters the tendency of existing methods to extract useless features from label images and misclassify them, and improves classification accuracy. 2. The AdaBoost algorithm, adapted to label images, retrains on misclassified label images, further improving category prediction accuracy and remedying the poor classification accuracy of existing defect label detection and recognition techniques.
Extensibility: The multiple-input Siamese convolutional neural network takes the label under test on one input branch and reference labels on the remaining branches, classifying by similarity. This not only improves classification accuracy but also lets users define their own defect label categories, accommodating the wide variety of defect labels that are otherwise difficult to recognize.
Brief Description of the Drawings
Fig. 1 is a flowchart of the defect label recognition method based on block segmentation and a multiple-input Siamese convolutional neural network provided by the present invention;
Fig. 2 is a detailed flowchart of the method;
Fig. 3 illustrates step S1 of the method;
Fig. 4 shows the structure of the multiple-input Siamese network model used by the method;
Fig. 5 shows the structure of the residual network used by the method;
Fig. 6 is a detailed flowchart of step S22 of the method.
The following specific embodiments further illustrate the present invention with reference to the above drawings.
Detailed Description of the Embodiments
The technical solution provided by the present invention is further described below with reference to the accompanying drawings.
When the label image is large relative to the defect information, useful information is difficult to capture during feature extraction; the label image is therefore segmented into blocks. To capture deeper defect features while preserving their classification and detection performance, a residual network is used for feature extraction. To ensure that features of the same type, and the most similar ones, are extracted, thereby improving classification accuracy, a Siamese residual network is adopted; a multiple-input Siamese network performs the similarity comparison, and the AdaBoost algorithm repeatedly retrains on misclassified label images. On this basis, the present invention provides a defect label recognition method based on block segmentation and a multiple-input Siamese convolutional neural network.
Figs. 1 and 2 show the defect label recognition system based on block segmentation and a multiple-input Siamese convolutional neural network. Overall, the invention comprises three major steps. Step S1: segment the label image into blocks. Step S2: train the multiple-input Siamese residual network model on the block label image data set. Step S3: use the trained multiple-input Siamese residual network to classify and recognize defect label images.
Referring to Fig. 3, step S1 divides a label image of width w and height h into (w/n)*(h/n) small label blocks, which are kept in the block label image database.
Referring to Fig. 4, the Siamese convolutional neural model used in the invention consists of multiple residual networks, one per input; one input branch receives the label information under test, and the remaining branches receive reference label information.
Referring to Fig. 5, a residual module in the ResNet-34 applied here consists of two convolutional layers plus an identity mapping, which effectively alleviates vanishing gradients and allows the network depth to be increased substantially.
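The two ideas in Figs. 4 and 5, an identity-shortcut residual block and Siamese branches that share one set of weights so the label under test and the reference labels are embedded identically, can be sketched with NumPy. The layer sizes, random weights, and Euclidean distance below are illustrative assumptions, not the patent's ResNet-34:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 8)), np.zeros(8)   # shared weights: every
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # branch uses these

def residual_block(x):
    """Fig. 5 idea: two weight layers plus an identity shortcut,
    so the block learns a residual F(x) and outputs F(x) + x."""
    h = np.maximum(0, x @ W1 + b1)              # layer 1 + ReLU
    return np.maximum(0, h @ W2 + b2 + x)       # layer 2 + shortcut + ReLU

def embed(x):
    """One Siamese branch: the shared residual block as feature extractor."""
    return residual_block(x)

def most_similar(test_vec, reference_vecs):
    """Fig. 4 idea: embed the label under test and every reference label
    with the SAME weights, then pick the nearest reference class."""
    t = embed(test_vec)
    dists = [np.linalg.norm(t - embed(r)) for r in reference_vecs]
    return int(np.argmin(dists))

refs = [rng.normal(size=8) for _ in range(3)]    # 3 reference "labels"
test = refs[1] + 0.01 * rng.normal(size=8)       # nearly reference 1
print(most_similar(test, refs))  # -> 1
```

Because every branch applies the identical mapping, nearby inputs land on nearby embeddings, which is what makes nearest-reference classification meaningful here.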
Referring to Fig. 6, step S22 trains the multiple Siamese residual network model with the AdaBoost algorithm, specifically comprising the following steps:
Steps S221 to S227 proceed exactly as set out in the Summary of the Invention above.
The description of the above embodiments is intended only to help understand the method of the present invention and its core idea. It should be noted that those skilled in the art can make several improvements and modifications to the invention without departing from its principle, and such improvements and modifications also fall within the protection scope of the claims.
The above description of the disclosed embodiments enables any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910537875.8A CN110288013A (en) | 2019-06-20 | 2019-06-20 | A Defect Label Recognition Method Based on Block Segmentation and Multiple Input Siamese Convolutional Neural Networks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910537875.8A CN110288013A (en) | 2019-06-20 | 2019-06-20 | A Defect Label Recognition Method Based on Block Segmentation and Multiple Input Siamese Convolutional Neural Networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110288013A true CN110288013A (en) | 2019-09-27 |
Family
ID=68004851
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910537875.8A Pending CN110288013A (en) | 2019-06-20 | 2019-06-20 | A Defect Label Recognition Method Based on Block Segmentation and Multiple Input Siamese Convolutional Neural Networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110288013A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110992334A (en) * | 2019-11-29 | 2020-04-10 | 深圳易嘉恩科技有限公司 | Quality evaluation method for DCGAN network generated image |
CN111291657A (en) * | 2020-01-21 | 2020-06-16 | 同济大学 | Crowd counting model training method based on difficult case mining and application |
CN111325708A (en) * | 2019-11-22 | 2020-06-23 | 济南信通达电气科技有限公司 | Power transmission line detection method and server |
CN111709920A (en) * | 2020-06-01 | 2020-09-25 | 深圳市深视创新科技有限公司 | Template defect detection method |
CN112907510A (en) * | 2021-01-15 | 2021-06-04 | 中国人民解放军国防科技大学 | Surface defect detection method |
CN116128798A (en) * | 2022-11-17 | 2023-05-16 | 台州金泰精锻科技股份有限公司 | Finish forging process for bell-shaped shell forged surface teeth |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104866865A (en) * | 2015-05-11 | 2015-08-26 | 西南交通大学 | DHOG and discrete cosine transform-based overhead line system equilibrium line fault detection method |
CN105653450A (en) * | 2015-12-28 | 2016-06-08 | 中国石油大学(华东) | Software defect data feature selection method based on combination of modified genetic algorithm and Adaboost |
CN108074231A (en) * | 2017-12-18 | 2018-05-25 | 浙江工业大学 | Magnetic sheet surface defect detection method based on convolutional neural network |
CN108389180A (en) * | 2018-01-19 | 2018-08-10 | 浙江工业大学 | A kind of fabric defect detection method based on deep learning |
- 2019-06-20: application CN201910537875.8A filed in China; published as CN110288013A (status: pending)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104866865A (en) * | 2015-05-11 | 2015-08-26 | 西南交通大学 | DHOG and discrete cosine transform-based overhead line system equilibrium line fault detection method |
CN105653450A (en) * | 2015-12-28 | 2016-06-08 | 中国石油大学(华东) | Software defect data feature selection method based on combination of modified genetic algorithm and Adaboost |
CN108074231A (en) * | 2017-12-18 | 2018-05-25 | 浙江工业大学 | Magnetic sheet surface defect detection method based on convolutional neural network |
CN108389180A (en) * | 2018-01-19 | 2018-08-10 | 浙江工业大学 | A kind of fabric defect detection method based on deep learning |
Non-Patent Citations (2)
Title |
---|
FIGHTING41LOVE: "Siamese network 孪生神经网络: a simple and remarkable structure", Jianshu *
PAN_JINQUAN: "Analysis of the AdaBoost algorithm with examples and code", CSDN *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111325708A (en) * | 2019-11-22 | 2020-06-23 | 济南信通达电气科技有限公司 | Power transmission line detection method and server |
CN111325708B (en) * | 2019-11-22 | 2023-06-30 | 济南信通达电气科技有限公司 | Transmission line detection method and server |
CN110992334A (en) * | 2019-11-29 | 2020-04-10 | 深圳易嘉恩科技有限公司 | Quality evaluation method for DCGAN network generated image |
CN110992334B (en) * | 2019-11-29 | 2023-04-07 | 四川虹微技术有限公司 | Quality evaluation method for DCGAN network generated image |
CN111291657A (en) * | 2020-01-21 | 2020-06-16 | 同济大学 | Crowd counting model training method based on difficult case mining and application |
CN111709920A (en) * | 2020-06-01 | 2020-09-25 | 深圳市深视创新科技有限公司 | Template defect detection method |
CN112907510A (en) * | 2021-01-15 | 2021-06-04 | 中国人民解放军国防科技大学 | Surface defect detection method |
CN112907510B (en) * | 2021-01-15 | 2023-07-07 | 中国人民解放军国防科技大学 | Surface defect detection method |
CN116128798A (en) * | 2022-11-17 | 2023-05-16 | 台州金泰精锻科技股份有限公司 | Finish forging process for bell-shaped shell forged surface teeth |
CN116128798B (en) * | 2022-11-17 | 2024-02-27 | 台州金泰精锻科技股份有限公司 | Finish forging method for bell-shaped shell forging face teeth |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110288013A (en) | A Defect Label Recognition Method Based on Block Segmentation and Multiple Input Siamese Convolutional Neural Networks | |
CN110414368B (en) | Unsupervised pedestrian re-identification method based on knowledge distillation | |
CN109101938B (en) | Multi-label age estimation method based on convolutional neural network | |
CN112507901B (en) | Unsupervised pedestrian re-identification method based on pseudo tag self-correction | |
CN102609681B (en) | Face recognition method based on dictionary learning models | |
CN105389593B (en) | Image object recognition methods based on SURF feature | |
CN106650731B (en) | A Robust License Plate and Vehicle Logo Recognition Method | |
CN111476307B (en) | Lithium battery surface defect detection method based on depth field adaptation | |
CN104008395B (en) | An Intelligent Detection Method of Bad Video Based on Face Retrieval | |
CN110598693A (en) | Ship plate identification method based on fast-RCNN | |
CN108647595B (en) | A vehicle re-identification method based on multi-attribute deep features | |
CN110188654B (en) | Video behavior identification method based on mobile uncut network | |
CN104298992B (en) | A kind of adaptive scale pedestrian recognition methods again based on data-driven | |
CN112861626B (en) | Fine granularity expression classification method based on small sample learning | |
CN111382690B (en) | Vehicle re-identification method based on multi-loss fusion model | |
CN109583375B (en) | A multi-feature fusion method and system for face image illumination recognition | |
CN109902202A (en) | A video classification method and device | |
CN113407644A (en) | Enterprise industry secondary industry multi-label classifier based on deep learning algorithm | |
CN105574489A (en) | Layered stack based violent group behavior detection method | |
CN104978569B (en) | A kind of increment face identification method based on rarefaction representation | |
CN111914902A (en) | A method for Chinese medicine identification and surface defect detection based on deep neural network | |
CN105095475A (en) | Incomplete attribute tagged pedestrian re-identification method and system based on two-level fusion | |
CN114092742A (en) | A multi-angle-based small sample image classification device and method | |
CN107247954A (en) | A kind of image outlier detection method based on deep neural network | |
CN107657276B (en) | A Weakly Supervised Semantic Segmentation Method Based on Finding Semantic Clusters |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190927 |