CN113066075B - A denim defect detection method and device based on multi-image fusion - Google Patents

A denim defect detection method and device based on multi-image fusion

Info

Publication number
CN113066075B
CN113066075B
Authority
CN
China
Prior art keywords
denim
image
light source
neural network
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110385654.0A
Other languages
Chinese (zh)
Other versions
CN113066075A (en)
Inventor
庞子龙
于豫访
张晨龙
武戈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Juchao Technology Co ltd
Original Assignee
Henan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University
Priority to CN202110385654.0A
Publication of CN113066075A
Application granted
Publication of CN113066075B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Processing (AREA)

Abstract


The invention relates to a multi-image fusion denim defect detection method and device. The method includes collecting denim image data through a cloth inspection mechanism, constructing a data set of the denim image data, and manually annotating the defective images in the data set through a marking mechanism. The data set includes front light source images and back light source images. Multiple groups of the denim images are then preprocessed and subjected to MinPooling enhancement and difference-combination three-channel processing, and the output data are used to train a neural network; the ROIs selected during network training are processed with the OHEM algorithm. Finally, the trained neural network is used to detect denim and automatically mark any denim in which defects are found. The invention fuses the template image, the front light source image and the back light source image, making defect features richer and more prominent, and improves recognition accuracy on the basis of a deep learning network.


Description

A multi-image fusion denim defect detection method and device

Technical Field

The invention relates to the field of intelligent detection, and in particular to a multi-image fusion denim defect detection method and device.

Background Art

Denim defect detection is generally performed manually. Manual inspection suffers from slow detection speed, high rates of missed and false detections, and inconsistent inspection standards, and cannot meet the requirements of large-scale industrial production.

Among existing deep-learning-based denim defect detection algorithms, publication CN111062925A, "An intelligent cloth defect recognition method based on deep learning", builds on an improved SSD model, which reduces the computational cost of the model and increases detection speed. However, denim images collected in practice often have indistinct features, and defects are difficult to identify accurately from a single defect image.

As another example, publication CN111398292A, "A cloth defect detection method, system and device based on Gabor filtering and CNN", preprocesses the collected cloth images, extracts texture features with a Gabor filter, and feeds the texture features into a CNN for classification, quickly distinguishing normal cloth texture from defects. This method is still insufficient at extracting denim features, resulting in imprecise screening.

Therefore, a fast and accurate automated denim defect detection method suitable for inspecting denim in mass production is urgently needed.

Summary of the Invention

To effectively solve the problems of low accuracy and slow speed in existing denim defect detection, the present invention provides a multi-image fusion denim defect detection method and device. The method fuses a template image, a front light source image and a back light source image so that defect features become richer and more prominent, and improves recognition accuracy with a deep learning network, thereby achieving fast and accurate detection of denim defects.

To achieve the above object, a first aspect of the present invention proposes a multi-image fusion denim defect detection method, which includes the following steps:

Step 1: collect denim image data through a cloth inspection mechanism, construct a data set of the denim image data, and manually annotate the defective images in the denim image data through a marking mechanism;

Step 2: define the image obtained by the shooting mechanism under the light source mechanism on the front of the machine body as the front light source image;

the image obtained by the shooting mechanism under the light source mechanism on the back of the machine body is the back light source image;

pair the front light source images and back light source images one-to-one to form groups of denim images;

Step 3: preprocess multiple groups of the denim images to expand the data set, obtaining a balanced data set;

Step 4: apply MinPooling enhancement and difference-combination three-channel processing to the balanced data set, and train a neural network with the balanced data set thus processed;

Step 5: process the ROIs selected during neural network training with the OHEM algorithm;

Step 6: use the trained neural network to detect denim, and automatically mark any denim in which defects are detected.

Further, the preprocessing in step 3 includes:

Step 3.1: apply a weighted average to the denim images through Gaussian filtering to obtain a data set with Gaussian noise removed;

Step 3.2: equalize the denim images to enhance image contrast;

Step 3.3: expand the highlighted or white regions of the denim images through dilation;

Step 3.4: apply edge padding to the denim images, setting the pixel values at the image edges to 0 to fill in a black background;

randomly crop the denim images, and remove the image borders from the randomly cropped images.

Further, the MinPooling enhancement in step 4 includes:

negating the pixel values of the denim image, applying max pooling, and then negating the pixel values again, thereby enhancing the fine texture of the denim in the image.

Further, the difference-combination three-channel processing in step 4 includes:

Step 4.2.1: set a denim template image;

Step 4.2.2: construct a fused image having three channels;

the first channel is the denim image under test;

the second channel is the denim template image;

the third channel is a difference map obtained by a weighted difference operation on the pixel matrices of the first and second channels;

Step 4.2.3: input the fused image into the neural network.

Further, the neural network in step 4 includes two parallel Faster R-CNN based feature extraction network models;

the collected front light source image and back light source image serve as their respective inputs, with ResNeXt50 and ResNet50 respectively as backbones; deformable convolution is added, and PositionEncoding is then used to encode positional information into the features;

two groups of features are extracted and concatenated correspondingly to form the output feature vector of the feature extraction network.

A second aspect of the present invention proposes a multi-image fusion denim defect detection device comprising a machine body, a support mechanism, a rotation mechanism, a motor and a transmission, wherein the support mechanism is arranged on the machine body and includes a support plate for holding the denim;

the motor is arranged inside the machine body, is connected to the rotation mechanism through the transmission, and drives the rotation mechanism to rotate, the rotation mechanism being fixed to the upper part of the support mechanism;

the upper part of the machine body is further provided with a cloth inspection mechanism, which includes a shooting mechanism, a light source mechanism and a marking mechanism; the shooting mechanism is used to acquire image data, the light source mechanisms are arranged on the front and back sides of the machine body to provide sufficient illumination for the shooting mechanism, and the marking mechanism is used to annotate image data containing defects.

Through the above technical solutions, the beneficial effects of the present invention are as follows:

The present invention first collects denim image data through a cloth inspection mechanism, constructs a data set of the denim image data, and manually annotates the defective images in the denim image data through a marking mechanism. The data set includes front light source images and back light source images, which are paired one-to-one to form groups of denim images; multiple groups of denim images are then preprocessed to expand the data set and obtain a balanced data set;

to enhance the fine texture of the denim, the balanced data set undergoes MinPooling enhancement and difference-combination three-channel processing, and the neural network is trained with the balanced data set thus processed; the ROIs selected during neural network training are processed with the OHEM algorithm; finally, the trained neural network is used to detect denim, and denim in which defects are detected is automatically marked.

Based on a deep learning method, the present invention fuses the front light source image and the back light source image so that the detection model receives higher-dimensional information input, which is significant for multi-dimensional recognition of defect features; fusing template information with defect information is significant for improving defect detection accuracy; and the optimized loss function substantially improves the recognition of small defects.

Brief Description of the Drawings

Fig. 1 is a flowchart of the multi-image fusion denim defect detection method of the present invention;

Fig. 2 is a network structure diagram of the multi-image fusion denim defect detection method of the present invention;

Fig. 3 is a schematic structural diagram of the multi-image fusion denim defect detection device of the present invention.

Detailed Description of the Embodiments

To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Embodiment 1

As shown in Fig. 1, a multi-image fusion denim defect detection method includes:

Step 1: collect denim image data through a cloth inspection mechanism, construct a data set of the denim image data, and manually annotate the defective images in the denim image data through a marking mechanism;

Step 2: define the image obtained by the shooting mechanism under the light source mechanism on the front of the machine body as the front light source image;

the image obtained by the shooting mechanism under the light source mechanism on the back of the machine body is the back light source image;

pair the front light source images and back light source images one-to-one to form groups of denim images;

Step 3: preprocess multiple groups of the denim images to expand the data set, obtaining a balanced data set;

Step 4: apply MinPooling enhancement and difference-combination three-channel processing to the balanced data set, and train a neural network with the balanced data set thus processed;

Step 5: process the ROIs selected during neural network training with the OHEM algorithm;

Step 6: use the trained neural network to detect denim, and automatically mark any denim in which defects are detected.

In the present invention, introducing front and back light source images adds multi-dimensional information to image recognition, while training the neural network with a balanced data set processed by MinPooling enhancement and difference-combination three-channel processing improves the model's feature extraction ability. Denim defect detection results become more accurate, with especially strong performance in revealing the features of small defects.

Embodiment 2

Because images of denim defects are costly and difficult to obtain, in this embodiment, building on Embodiment 1, the data set is expanded through preprocessing. Specifically:

Step 3.1: apply a weighted average to the denim images through Gaussian filtering to obtain a data set with Gaussian noise removed. Gaussian filtering applies a weighted average over the entire image, that is, the value of each pixel is obtained as a weighted average of itself and the other pixels in its neighborhood;

Step 3.2: equalize the denim images to enhance image contrast, making them clearer than the original images and increasing the size of the data set used for training;

Step 3.3: expand the highlighted or white regions of the denim images through dilation, so that the highlighted regions in the result are larger than in the original image;

Step 3.4: apply edge padding to the denim images, setting the pixel values at the image edges to 0 to fill in a black background;

randomly crop the denim images, and remove the image borders from the randomly cropped images.

Through the above steps, without affecting the intrinsic features of the images, both the accuracy and the robustness of the model are improved; flipping the images at random angles further strengthens the model's scale and orientation invariance, enhancing its image recognition ability.
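The padding and random-cropping augmentations of step 3.4 can be sketched as follows (a minimal pure-Python illustration on grayscale images stored as nested lists; the pad width and crop size used here are illustrative assumptions, not values fixed by the patent):

```python
import random

def zero_pad(img, pad):
    """Surround the image with a black (value 0) border of width `pad` (step 3.4)."""
    h, w = len(img), len(img[0])
    padded = [[0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for i in range(h):
        for j in range(w):
            padded[i + pad][j + pad] = img[i][j]
    return padded

def random_crop(img, ch, cw, rng=random):
    """Cut a random ch x cw window out of the image (used to expand the data set)."""
    h, w = len(img), len(img[0])
    top = rng.randrange(h - ch + 1)
    left = rng.randrange(w - cw + 1)
    return [row[left:left + cw] for row in img[top:top + ch]]
```

A 2×2 image padded with `pad=1` becomes a 4×4 image whose border pixels are all 0, matching the black-background fill described above.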

Embodiment 3

Building on Embodiment 1, in order to strengthen the representation of small denim defect features and thereby enable the detection of small defects, step 4 is refined in this embodiment. Specifically:

the MinPooling enhancement in step 4 includes:

negating the pixel values of the denim image, applying max pooling, and then negating the pixel values again, thereby enhancing the fine texture of the denim in the image.
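The MinPooling enhancement can be sketched as follows (a minimal pure-Python illustration on a grayscale image stored as nested lists; the pooling window size is an assumption, as the patent does not fix it). Negating, max pooling and negating again is equivalent to min pooling: each window keeps its darkest value, which preserves the dark yarn texture of denim:

```python
def max_pool2d(img, k):
    """Non-overlapping k x k max pooling over a 2-D list of pixel values."""
    h, w = len(img), len(img[0])
    return [[max(img[i + di][j + dj] for di in range(k) for dj in range(k))
             for j in range(0, w - k + 1, k)]
            for i in range(0, h - k + 1, k)]

def min_pool2d(img, k):
    """MinPooling as described: negate, max pool, then negate again."""
    neg = [[-v for v in row] for row in img]
    return [[-v for v in row] for row in max_pool2d(neg, k)]
```

For example, `min_pool2d([[9, 2], [7, 4]], 2)` keeps 2, the darkest pixel of the window, whereas plain max pooling would keep 9.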

As one possible implementation, the difference-combination three-channel processing includes:

Step 4.2.1: set a denim template image;

Step 4.2.2: construct a fused image having three channels;

the first channel is the denim image under test;

the second channel is the denim template image;

the third channel is a difference map obtained by a weighted difference operation on the pixel matrices of the first and second channels;

Step 4.2.3: input the fused image into the neural network.
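The three-channel fusion of steps 4.2.1 to 4.2.3 can be sketched as follows (a minimal pure-Python illustration on single-channel images stored as nested lists; the weights `w_test` and `w_tmpl` are assumptions, since the patent only states that the difference is weighted):

```python
def fuse_three_channel(test_img, template_img, w_test=1.0, w_tmpl=1.0):
    """Build the fused 3-channel input: (test image, template image,
    weighted difference map). The weights are illustrative only."""
    diff = [[abs(w_test * t - w_tmpl * m) for t, m in zip(rt, rm)]
            for rt, rm in zip(test_img, template_img)]
    return [test_img, template_img, diff]  # channels 1, 2, 3
```

Pixels where the cloth under test matches the template produce a near-zero third channel, so defects stand out as bright regions in the difference map.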

Embodiment 4

Building on the above embodiments, step 5 is refined in this embodiment to construct the neural network. Specifically:

the neural network in step 4 includes two parallel Faster R-CNN based feature extraction network models;

the collected front light source image and back light source image serve as their respective inputs, with ResNeXt50 and ResNet50 respectively as backbones; deformable convolution is added, and PositionEncoding is then used to encode positional information into the features;

two groups of features are extracted and concatenated correspondingly to form the output feature vector of the feature extraction network.

In this embodiment, as shown in Fig. 2, the candidate boxes and images used in training are taken as input, and two identical RoI networks are built for the RoI network layer: one is read-only, the other readable and writable. The read-only network performs forward computation for all RoIs, while the readable-and-writable network performs both forward computation and backpropagation, but only for the selected hard RoIs;

the network parameters are then updated using stochastic gradient descent.
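The core of the OHEM step — ranking all RoIs by their current loss after the read-only forward pass and backpropagating only through the hardest ones — can be sketched as follows (a minimal illustration; the selection count `k` is an assumption, as the patent does not fix it):

```python
def select_hard_rois(roi_losses, k):
    """OHEM core: after the read-only network computes a loss per RoI,
    keep only the k RoIs with the highest loss for the backward pass."""
    ranked = sorted(range(len(roi_losses)),
                    key=lambda i: roi_losses[i], reverse=True)
    return sorted(ranked[:k])  # indices of the hard RoIs

losses = [0.05, 0.90, 0.10, 0.75, 0.02]
hard = select_hard_rois(losses, k=2)  # → [1, 3]
```

Only the RoIs at the returned indices are fed to the readable-and-writable network for backpropagation; the rest contribute nothing to the gradient.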

The Faster R-CNN mainly comprises the following steps:

Step S01: input the image into a convolutional neural network (CNN) to obtain a feature map;

Step S02: input the extracted convolutional features into a region proposal network (RPN) to obtain the feature information of the candidate boxes;

Step S03: perform RoI pooling on the feature information extracted from the candidate boxes to obtain fixed-size outputs;

Step S04: for candidate boxes of a given class, use a classifier to determine whether each belongs to that class, and use a regressor to further refine the position information.

The step S01 specifically comprises:

Step S01.1: scale the image to a fixed size M×N, and then feed the M×N image into the convolutional layers (Conv Layers);

Step S01.2: the details of obtaining the feature map are as follows:

features are extracted with ResNet50 and ResNeXt50 respectively.

ResNet50 comprises 5 stages, Conv1 to Conv5: the input first passes through the Conv1 layer (kernel_size = 7, pad = 3, stride = 2), then through a pooling layer, and finally through the Bottleneck blocks of Conv2 to Conv5, which output the feature map.

The Bottleneck introduced above is a network structure contained in the 4 stages (Conv2 to Conv5) of ResNet50. Each Bottleneck consists of 1×1, 3×3 and 1×1 convolutions in sequence; the stages contain different numbers of Bottlenecks, Conv2 to Conv5 containing 3, 4, 6 and 3 Bottlenecks respectively.

The network structure of ResNeXt50 is similar to that of ResNet50, the difference being that the second convolution of each Bottleneck is replaced by a grouped convolution;
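The stage layout described above can be written down as a small configuration table (a sketch for illustration; the layer counts follow the standard ResNet50 design referenced in the text):

```python
# Bottlenecks per stage of ResNet50 (Conv2..Conv5), as described above.
RESNET50_BOTTLENECKS = {"Conv2": 3, "Conv3": 4, "Conv4": 6, "Conv5": 3}

# Each Bottleneck is a 1x1 -> 3x3 -> 1x1 convolution sequence, so the
# Bottlenecks contribute this many convolution layers in total:
conv_layers_in_bottlenecks = 3 * sum(RESNET50_BOTTLENECKS.values())  # 48
# Adding the Conv1 layer and the final fully connected layer gives the
# "50" in the name ResNet50.
```

ResNeXt50 uses the same table; only the middle 3×3 convolution of each Bottleneck becomes a grouped convolution.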

The step S02 specifically comprises:

Step S02.1: first pass the feature map through a 3×3 convolution to obtain a (256×M×M) feature vector;

Step S02.2: pass the resulting (256×M×M) feature vector through two separate 1×1 convolutions to obtain an (18×M×M) feature map and a (36×M×M) feature map, corresponding to (M×M×9) results, where each result contains 2 scores (foreground and background) and 4 coordinates (offsets relative to the original image), corresponding to the 9 anchor boxes (anchors) at each point of the input feature map;

Step S02.2 introduces anchor boxes (anchors), a set of generated rectangles. Each anchor has 4 values (x1, y1, x2, y2) representing the coordinates of its top-left and bottom-right corners, and the rectangles come in three aspect ratios, 1:1, 1:2 and 2:1. By setting the anchors, it is determined which anchors are positive anchors and which are negative anchors.
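Anchor generation at one feature-map position can be sketched as follows (a minimal illustration; the base size and scales are assumptions — the patent only fixes the three aspect ratios and the count of 9 anchors per point):

```python
def anchors_at(cx, cy, base=16, ratios=(1.0, 0.5, 2.0), scales=(8, 16, 32)):
    """Generate 9 anchors (3 ratios x 3 scales) centred at (cx, cy),
    each as (x1, y1, x2, y2) top-left / bottom-right corners."""
    boxes = []
    for s in scales:
        area = (base * s) ** 2
        for r in ratios:                 # r = width / height
            w = (area * r) ** 0.5
            h = (area / r) ** 0.5        # so that w / h == r and w * h == area
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes
```

Sliding this generator over every point of the feature map yields the M×M×9 candidate rectangles that the two score and coordinate branches of S02.2 refer to.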

Step S02.3: input the resulting (18×M×M) feature map into a reshape layer, then into a softmax layer to obtain the classified result, and finally into another reshape layer to obtain the candidate regions (proposals).

Given an anchor S = (Sx, Sy, Sw, Sh), translation and scaling operations (dx(S), dy(S), dw(S), dh(S)) bring the regression window G′ closer to the ground-truth window G.

Translation:

G′x = Sw·dx(S) + Sx

G′y = Sh·dy(S) + Sy

Scaling:

G′w = Sw·exp(dw(S))

G′h = Sh·exp(dh(S))

dx(S), dy(S), dw(S) and dh(S) are obtained by linear regression, with the objective function shown in formula (1):

d*(S) = W*^T·φ(S)    (1)

where d*(S) is the predicted value, W* is the learned parameter, and φ(S) is the feature vector composed of the anchor's feature map;

the L1 loss function is shown in formula (2):

Loss = Σi |t*^i − W*^T·φ(S^i)|    (2)
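The translation and scaling transform above, together with its inverse (computing the regression targets that map an anchor onto a ground-truth box), can be sketched as follows (a minimal plain-Python illustration; boxes follow the centre/size convention (Sx, Sy, Sw, Sh) used above):

```python
import math

def apply_deltas(S, d):
    """Map anchor S = (Sx, Sy, Sw, Sh) to the regression window G' using
    the translation (dx, dy) and scaling (dw, dh) defined above."""
    Sx, Sy, Sw, Sh = S
    dx, dy, dw, dh = d
    return (Sw * dx + Sx,          # G'x
            Sh * dy + Sy,          # G'y
            Sw * math.exp(dw),     # G'w
            Sh * math.exp(dh))     # G'h

def regression_targets(S, G):
    """Inverse transform: the targets t* that would map anchor S exactly
    onto the ground-truth window G = (Gx, Gy, Gw, Gh)."""
    Sx, Sy, Sw, Sh = S
    Gx, Gy, Gw, Gh = G
    return ((Gx - Sx) / Sw, (Gy - Sy) / Sh,
            math.log(Gw / Sw), math.log(Gh / Sh))
```

By construction the two functions round-trip: applying the targets computed from (S, G) back onto S recovers G, which is exactly what the regressor is trained to predict.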

Step S02.6: generate the anchors, and perform bounding box regression on all anchors;

Step S02.7: sort the anchors by their positive softmax scores from large to small, and extract the position-corrected positive anchors;

Step S02.8: clip positive anchors that exceed the image boundary to the boundary, and discard positive anchors of very small size;

Step S02.9: apply non-maximum suppression (NMS) to the remaining positive anchors;

Step S02.10: output the results of the corresponding classifier as the proposals.
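The non-maximum suppression of step S02.9 can be sketched as follows (a minimal pure-Python illustration; the IoU threshold is an assumption, as the patent does not specify it):

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.7):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop all
    remaining boxes that overlap it by more than `thresh` IoU."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep
```

Anchors covering the same defect therefore collapse to the single highest-scoring proposal, while well-separated anchors survive.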

Step S03 is specifically:

Step S03.1: map the M×N candidate regions (proposals) back to (M/16)×(N/16) size;

Step S03.2: divide the feature-map region corresponding to each proposal into a pooled_w × pooled_h grid;

Step S03.3: apply max pooling to each cell of the grid and output a result of size pooled_w × pooled_h.
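Steps S03.1–S03.3 (ROI pooling) can be sketched as follows, assuming a single-channel feature map and an ROI already mapped to feature-map coordinates; the function name and default grid size are illustrative assumptions.

```python
import numpy as np

def roi_max_pool(feature_map, roi, pooled_w=7, pooled_h=7):
    """Max-pool the feature-map region of one proposal down to a fixed
    pooled_h x pooled_w grid (steps S03.2-S03.3).
    feature_map: (H, W) array; roi: (x1, y1, x2, y2) in feature-map coords."""
    x1, y1, x2, y2 = roi
    region = feature_map[y1:y2, x1:x2]
    h, w = region.shape
    out = np.full((pooled_h, pooled_w), -np.inf)
    # split the region into a pooled_h x pooled_w grid of cells
    ys = np.linspace(0, h, pooled_h + 1).astype(int)
    xs = np.linspace(0, w, pooled_w + 1).astype(int)
    for i in range(pooled_h):
        for j in range(pooled_w):
            cell = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            if cell.size:
                out[i, j] = cell.max()   # max pooling within each grid cell
    return out
```

Whatever the proposal's size, the output is always pooled_h × pooled_w, which lets proposals of different sizes feed the same fully connected layers.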

Step S04 is specifically embodied as:

Step S04.1: feed the proposal feature maps into the network and classify the proposals through fully connected layers and softmax;

Step S04.2: perform bounding box regression on the proposals to obtain high-precision detection boxes.
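The bounding-box refinement of step S04.2 inverts the target encoding of formula (1): predicted deltas shift a proposal's centre and rescale its size. A minimal sketch (illustrative, not the patent's code):

```python
import numpy as np

def apply_deltas(proposal, deltas):
    """Refine a proposal [cx, cy, w, h] with predicted regression
    deltas [dx, dy, dw, dh] - the inverse of the target encoding."""
    px, py, pw, ph = proposal
    dx, dy, dw, dh = deltas
    cx = px + pw * dx            # shift the centre by a fraction of the size
    cy = py + ph * dy
    w = pw * np.exp(dw)          # rescale width/height on a log scale
    h = ph * np.exp(dh)
    return np.array([cx, cy, w, h])
```

Applying the targets computed for an anchor recovers the ground-truth box exactly, which is the consistency check this encode/decode pair must satisfy.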

Example 5

The embodiment of the present invention provides a multi-image-fusion denim defect detection device, as shown in Figure 3, comprising a machine body, a supporting mechanism, a rotating mechanism, a motor and a transmission device, wherein the supporting mechanism is arranged on the machine body and includes a support plate for placing the denim.

The motor is arranged inside the machine body, is connected to the rotating mechanism through the transmission device and drives it to rotate; the rotating mechanism is fixed to the upper part of the supporting mechanism.

The upper part of the machine body is further provided with a cloth inspection mechanism, which includes a shooting mechanism, a light source mechanism and a marking mechanism. The shooting mechanism is used to capture image data, the light source mechanisms are arranged on the front and back sides of the machine body to provide sufficient illumination for the shooting mechanism, and the marking mechanism is used to mark the defective image data.

The above-described embodiments are only preferred embodiments of the present invention and do not limit its scope; all equivalent changes or modifications made according to the structures, features and principles described in the patent claims shall fall within the scope of the present application.

Claims (3)

1. A method for detecting defects of multi-image fused denim is characterized by comprising the following steps:
step 1: collecting denim layout image data through a cloth inspecting mechanism, constructing a data set of the denim layout image data, and manually marking defective image data in the denim layout image data through a marking mechanism;
step 2: defining the image obtained by the shooting mechanism under the light source mechanism on the front side of the machine body as a front light source image,
and the image obtained by the shooting mechanism under the light source mechanism on the back side of the machine body as a back light source image;
the front light source images and back light source images correspond one-to-one, each pair forming a group of denim images;
step 3: preprocessing multiple groups of the denim images to expand the data set and obtain an equalized data set;
step 4: subjecting the equalized data set to MinPooling enhancement and difference-combined three-channel processing, and training a neural network with the equalized data set so processed;
step 5: processing the ROIs selected for training the neural network with the OHEM algorithm;
step 6: detecting the denim by using the trained neural network, and automatically marking the denim with the detected flaws;
the preprocessing in step 3 comprises the following steps:
step 3.1: applying a weighted average to the denim images through Gaussian filtering to obtain a data set with Gaussian noise removed;
step 3.2: equalizing the denim images to enhance image contrast;
step 3.3: enlarging the highlight or white areas of the denim images through morphological dilation;
step 3.4: performing padding on the edges of the denim pictures, setting the edge pixel values to 0 so the border is filled as a black background;
randomly cropping the denim pictures and removing the image borders after random cropping;
the MinPooling enhancement in step 4 comprises the following steps:
negating the pixel values of the denim picture, performing max pooling on it, and then negating the result again, thereby strengthening the detail texture of the denim;
the difference-combined three-channel processing in step 4 comprises the following steps:
step 4.2.1: setting a denim template image;
step 4.2.2: setting a fused image, wherein the fused image has three channels;
the first channel is a denim picture to be detected;
the second channel is a denim template image;
the third channel is a difference image obtained by performing weighted difference operation on the pixel matrix of the first channel and the pixel matrix of the second channel;
step 4.2.3: and inputting the fused image into a neural network.
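The MinPooling enhancement (max pooling on negated pixels) and the difference-combined three-channel fusion of claim 1 can be sketched as follows. The kernel size and difference weight alpha are illustrative assumptions not fixed by the claim.

```python
import numpy as np

def min_pool(img, k=2):
    """MinPooling via negation: min-pool(x) == -max-pool(-x).
    Emphasises the dark texture detail of denim.
    img: (H, W) array with H and W divisible by k."""
    neg = -img
    h, w = neg.shape
    pooled = neg.reshape(h // k, k, w // k, k).max(axis=(1, 3))  # max pooling
    return -pooled                       # negate back -> minimum of each k x k cell

def fuse_three_channels(test_img, template_img, alpha=0.5):
    """Stack [image under test, template image, weighted difference]
    as the three input channels of the fused image (steps 4.2.1-4.2.3)."""
    diff = alpha * (test_img.astype(float) - template_img.astype(float))
    return np.stack([test_img, template_img, diff], axis=-1)
```

On a defect-free region the third channel is near zero, so any strong response in it points at a deviation from the template.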
2. The multi-image-fusion denim defect detection method according to claim 1, wherein the neural network of step 4 comprises two parallel Faster-RCNN feature extraction network models;
the collected front light source image and back light source image are taken respectively as inputs, with ResNeXt50 and ResNet50 correspondingly used as the backbone, deformable convolution added, and Position Encoding then applied to encode positional information into the features;
two groups of features are extracted and correspondingly concatenated to form the output feature vector of the feature extraction network.
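Claim 2 states only that Position Encoding is applied to the features; the claim does not specify the scheme. The standard sinusoidal encoding, shown here purely as an assumed example, is one common choice:

```python
import numpy as np

def sinusoidal_position_encoding(length, dim):
    """Standard sinusoidal position encoding (an assumed scheme - the
    patent only says positional information is encoded into the features).
    Returns a (length, dim) matrix to be added to the feature vectors."""
    pos = np.arange(length)[:, None]                  # positions 0..length-1
    i = np.arange(dim // 2)[None, :]                  # frequency index
    angles = pos / np.power(10000.0, 2 * i / dim)
    pe = np.zeros((length, dim))
    pe[:, 0::2] = np.sin(angles)                      # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                      # odd dimensions: cosine
    return pe
```

Each position receives a unique, smoothly varying code, so the downstream layers can distinguish where on the fabric a feature came from.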
3. The denim flaw detection device based on the multi-image fusion denim flaw detection method of any one of claims 1 to 2, characterized by comprising a machine body, a supporting mechanism, a rotating mechanism, a motor and a transmission device, wherein the supporting mechanism is arranged on the machine body and comprises a supporting plate for placing denim;
the motor is arranged in the machine body, is connected with the rotating mechanism through a transmission device and drives the rotating mechanism to rotate, and the rotating mechanism is fixed at the upper part of the supporting mechanism;
the cloth inspecting mechanism is further arranged on the upper portion of the machine body and comprises a shooting mechanism, a light source mechanism and a marking mechanism, the shooting mechanism is used for establishing image data, the light source mechanism is respectively arranged on the front side and the back side of the machine body, the light source mechanism is used for providing sufficient illumination for the shooting mechanism, and the marking mechanism is used for marking the defective image data.
CN202110385654.0A 2021-04-10 2021-04-10 A denim defect detection method and device based on multi-image fusion Active CN113066075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110385654.0A CN113066075B (en) 2021-04-10 2021-04-10 A denim defect detection method and device based on multi-image fusion


Publications (2)

Publication Number Publication Date
CN113066075A CN113066075A (en) 2021-07-02
CN113066075B (en) 2022-11-01

Family

ID=76566597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110385654.0A Active CN113066075B (en) 2021-04-10 2021-04-10 A denim defect detection method and device based on multi-image fusion

Country Status (1)

Country Link
CN (1) CN113066075B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101895873B1 (en) * 2017-06-15 2018-09-06 주식회사 에스엠비나 Method and apparatus for fabric inspection
CN109934802A (en) * 2019-02-02 2019-06-25 浙江工业大学 A Cloth Defect Detection Method Based on Fourier Transform and Image Morphology
CN111260614A (en) * 2020-01-13 2020-06-09 华南理工大学 A Convolutional Neural Network Fabric Defect Detection Method Based on Extreme Learning Machine
CN111553898A (en) * 2020-04-27 2020-08-18 东华大学 A fabric defect detection method based on convolutional neural network
CN111709915A (en) * 2020-05-28 2020-09-25 拉萨经济技术开发区美第意户外用品有限公司 Automatic detection method and system for quick-drying fabric defects


Non-Patent Citations (3)

Title
Fabric defect detection using local homogeneity and morphological image processing;A. Rebhi et al;《 2016 International Image Processing, Applications and Systems (IPAS)》;20170320;1-5 *
Research on fabric defect detection methods based on intelligent learning algorithms; Guan Miao; China Doctoral Dissertations Full-text Database; 20200415; vol. 2020, no. 04 *
Research on machine-vision-based defect detection technology for glass fiber cloth; Li Jifeng; China Master's Theses Full-text Database; 20180215; vol. 2018, no. 02 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20241013

Address after: Room 1805-2, Jianye Smart Port, No. 210 Kaiyuan Avenue, Luolong District, Luoyang City, Henan Province, China 471000

Patentee after: Henan Juchao Technology Co.,Ltd.

Country or region after: China

Address before: 475004 Henan University (Jinming campus), the intersection of Dongjing Avenue and Jinming Avenue, Jinming District, Kaifeng City, Henan Province

Patentee before: Henan University

Country or region before: China
