WO2021098323A1 - Fabric defect detection method based on multi-modal fusion deep learning - Google Patents

Fabric defect detection method based on multi-modal fusion deep learning

Info

Publication number
WO2021098323A1
Authority
WO
WIPO (PCT)
Prior art keywords
cloth
defect
image
detection data
defects
Prior art date
Application number
PCT/CN2020/111380
Other languages
English (en)
French (fr)
Inventor
孙富春 (Sun Fuchun)
方斌 (Fang Bin)
刘华平 (Liu Huaping)
Original Assignee
清华大学 (Tsinghua University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学 (Tsinghua University)
Priority to US17/281,923 (published as US20220414856A1)
Publication of WO2021098323A1

Classifications

    • G06T7/0004 Industrial image inspection
    • G06T7/0008 Industrial image inspection checking presence/absence
    • G06T7/40 Analysis of texture
    • G06T7/90 Determination of colour characteristics
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/25 Fusion techniques
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T2207/10024 Color image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30124 Fabrics; Textile; Paper
    • G06V10/764 Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition or understanding using neural networks

Definitions

  • The invention relates to a fabric defect detection method based on multi-modal fusion deep learning, and belongs to the technical field of fabric defect detection.
  • Fabric defect detection is an indispensable step in the fabric production process and directly determines the value of the cloth produced.
  • Fabric defects arise when, owing to various factors, the weaving machine makes errors during weaving, leaving the cloth with local structural defects such as missing or broken threads, or when defects such as uneven dyeing occur during the dyeing process. Such defects reduce the appearance and comfort of the garments eventually made from the cloth.
  • To overcome the shortcomings of manual inspection, deep-learning methods already exist, for example the deep-learning-based fabric defect detection method disclosed by Nanjing University of Posts and Telecommunications (application No. 201910339022.3).
  • That method trains a ResNet network and a Fast R-CNN network on a library of high-definition cloth inspection images.
  • However, it offers no way to recognize fabric images with dyeing defects.
  • Uneven color changes also strongly affect the neural network, causing it to extract many wrong features and misjudge structural defects.
  • In addition, the method demands very high image clarity, and such accurate, high-quality images are difficult to provide in a complex industrial production environment.
  • The purpose of the present invention is to solve the problems of existing cloth defect detection by proposing a cloth defect detection method based on multi-modal fusion deep learning.
  • A tactile sensor is used to recognize structural defects on the surface of the cloth and is not affected by the color of the cloth or the external lighting environment; combined with the cloth image collected by an external camera, the color defects of the cloth can also be recognized.
  • Multi-modal fusion deep learning combines the information of the two; using this complementary information can greatly improve the accuracy of detection and gives strong robustness.
  • The present invention proposes a cloth defect detection method based on multi-modal fusion deep learning, which is characterized in that it comprises the following steps:
  • Step 1: Build a cloth detection data set covering different defect types.
  • The tactile sensor is brought into contact with cloth surfaces exhibiting different defects, and cloth texture images of the various defect conditions are collected.
  • The defect conditions of the cloth are divided into normal, structural defects and color defects.
  • The structural defects include scratched threads, thin threads, neps, holes, roving, creases and skipped stitches; the color defects include stains, mottled color, colored yarn, uneven dyeing, black dots, missed printing and dark lines. The camera collects an external image of the cloth at the same position and angle at which the tactile sensor collects the cloth texture.
  • The external image of the cloth and the corresponding cloth texture image form one group of cloth detection data, with the defect condition of the cloth as the label of each image; that is, each group of defective-cloth detection data contains one cloth texture image, one external image of the cloth and one defect label. All collected cloth detection data constitute the cloth detection data set (a hypothetical sketch of this data layout follows);
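  • To make the grouping concrete, here is a minimal, hypothetical Python sketch of one group of cloth detection data; the patent specifies no code, so the class name, field names and the English defect-label strings are illustrative assumptions (the patent only fixes the three-part structure and the 15 label types: normal plus 7 structural and 7 color defects).

    # Hypothetical layout of one group of cloth detection data (step 1).
    from dataclasses import dataclass
    import numpy as np

    DEFECT_LABELS = [
        "normal",
        # structural defects
        "scratched_thread", "thin_thread", "neps", "hole",
        "roving", "crease", "skipped_stitch",
        # color defects
        "stain", "mottled_color", "colored_yarn", "uneven_dyeing",
        "black_dot", "missed_print", "dark_line",
    ]

    @dataclass
    class ClothSample:
        """One group of cloth detection data: two aligned images, one label."""
        texture_image: np.ndarray   # tactile texture image from the sensor
        external_image: np.ndarray  # camera image of the same spot and angle
        label: int                  # index into DEFECT_LABELS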
  • Step 2: Build a classification model based on multi-modal fusion deep learning.
  • The feature extraction network and the multi-modal fusion network are connected to construct a classification model based on multi-modal fusion deep learning. The feature extraction network uses two parallel ResNet-50 networks, which take the cloth texture image and the cloth external image of each collected group of cloth detection data as input and extract their features, yielding two vectors of length N; these are concatenated into a vector of length 2N that is output as the extracted feature vector.
  • The multi-modal fusion network uses 2 to 4 fully connected layers to detect fabric defects: the output of each fully connected layer is the input of the next, the input of the first layer is the length-2N feature vector obtained by the feature extraction network, and the output of the last layer is a feature vector characterizing the defect condition of the cloth.
  • The length of this feature vector equals the number of defect label types contained in the input cloth detection data, and each of its elements represents the probability that the cloth has the corresponding defect (a model sketch follows);
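  • The sketch below illustrates this architecture, assuming PyTorch and torchvision: the two parallel ResNet-50 branches, the length-2N concatenation and the fully connected fusion head follow the text (with the embodiment's sizes N=1000 and 1024-unit hidden layers), while the ReLU activations and all identifier names are assumptions not stated in the patent.

    # A minimal sketch of the described classifier (not the authors' code).
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class MultiModalFusionNet(nn.Module):
        def __init__(self, num_classes: int = 15, feat_len: int = 1000):
            super().__init__()
            # Two parallel ResNet-50 feature extractors, one per modality.
            self.texture_branch = resnet50(num_classes=feat_len)
            self.external_branch = resnet50(num_classes=feat_len)
            # 2-4 fully connected layers; three here, as in the embodiment.
            self.fusion = nn.Sequential(
                nn.Linear(2 * feat_len, 1024), nn.ReLU(),  # 2N = 2000 -> 1024
                nn.Linear(1024, 1024), nn.ReLU(),          # 1024 -> 1024
                nn.Linear(1024, num_classes),              # one score per label
            )

        def forward(self, texture_img, external_img):
            f1 = self.texture_branch(texture_img)    # length-N feature vector
            f2 = self.external_branch(external_img)  # length-N feature vector
            fused = torch.cat([f1, f2], dim=1)       # length-2N fused vector
            return self.fusion(fused)                # per-defect scores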
  • Step 3: Train the cloth defect detection model.
  • The cloth detection data set obtained in step 1 is divided into a training set and a test set. The cloth texture image and cloth external image belonging to the same group of cloth detection data in the training set are input into the two parallel ResNet-50 networks of the classification model built in step 2, and the cloth detection model is then trained with the back-propagation algorithm, the loss function Softmax Loss constraining the training process; the test set is used to judge the training effect of the cloth defect detection model, yielding the trained cloth detection model.
  • Step 4: Collect the texture image of the cloth to be inspected and the corresponding external image, and input them into the trained cloth defect detection model to detect the defect condition of the cloth.
  • The detected defect condition is the defect label with the highest confidence in the feature vector output by the cloth defect detection model to characterize the defect condition of the cloth.
  • Optionally, the cloth detection data set that is divided into training and test sets is replaced by an expanded cloth detection data set.
  • The expanded cloth detection data set is obtained by the following steps: take any group of data in the cloth detection data set obtained in step 1;
  • apply the same random rotation and translation data-enhancement operations to its cloth texture image and cloth external image to generate a new group of data;
  • apply the same data-enhancement operation separately to each remaining group of data in the cloth detection data set, and add all generated new groups to the cloth detection data set to obtain the expanded cloth detection data set (a sketch of this paired augmentation follows).
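  • A minimal sketch of this paired augmentation, assuming PyTorch tensors (or PIL images) and torchvision; the rotation and translation ranges are illustrative assumptions. The essential point from the text is that both images of a group receive identical random parameters.

    # Apply the SAME random rotation and translation to both modalities.
    import random
    import torchvision.transforms.functional as TF

    def augment_pair(texture_img, external_img, max_deg=15.0, max_shift=10):
        angle = random.uniform(-max_deg, max_deg)   # shared rotation angle
        dx = random.randint(-max_shift, max_shift)  # shared x translation
        dy = random.randint(-max_shift, max_shift)  # shared y translation
        t_aug = TF.affine(texture_img, angle=angle, translate=[dx, dy],
                          scale=1.0, shear=0.0)
        e_aug = TF.affine(external_img, angle=angle, translate=[dx, dy],
                          scale=1.0, shear=0.0)
        return t_aug, e_aug                         # one new group of data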
  • The present invention provides a fabric defect detection method based on multi-modal fusion deep learning.
  • The tactile sensor acquires the surface texture of the cloth and can detect structural defects without being affected by the cloth color or the external lighting environment; multi-modal fusion deep learning combines this with the image information captured by the external camera to recognize color defects of the cloth.
  • Since the external image also yields some structural-defect features, the use of complementary information can greatly improve the accuracy of structural defect detection.
  • Fig. 1 is a flowchart of the cloth defect detection method according to an embodiment of the present invention.
  • Fig. 2 is a schematic structural diagram of the tactile sensor used in the method of an embodiment of the present invention.
  • Fig. 1 is an overall flowchart of an embodiment of the present invention, including the following steps:
  • Step 1: Build a cloth detection data set covering different defect types.
  • The tactile sensor is brought into contact with cloth surfaces exhibiting different defects, and cloth texture images of the various defect conditions are collected.
  • The defect conditions of the cloth are divided into normal, structural defects and color defects.
  • The structural defects include scratched threads, thin threads, neps, holes, roving, creases and skipped stitches; the color defects include stains, mottled color, colored yarn, uneven dyeing, black dots, missed printing and dark lines. The camera collects an external image of the cloth at the same position and angle at which the tactile sensor collects the cloth texture.
  • Each group of defective-cloth detection data includes one cloth texture image, one external image of the cloth and one defect label; after the edges of all collected cloth detection data are trimmed off, they form the cloth detection data set.
  • The structure of the tactile sensor used in this step is shown in Fig. 2. It includes a housing 10 composed of a U-shaped base 9 and a top cover 2, and a transparent elastomer 1, light emitters 3 and an image-capture camera 8 housed inside the housing 10.
  • The image-capture camera 8 is mounted on the base 9 of the housing 10 via a first support plate 7.
  • A transparent support block 5 is arranged above the image-capture camera 8.
  • The bottom of the transparent support block 5 is fixed by a second support plate 6 connected to the first support plate 7, and the top of the transparent support block 5 is restrained by a third support plate 4 fixed to the side wall of the middle part of the housing 10.
  • The top of the transparent elastomer 1 protrudes from the top cover 2 of the housing 10, and the bottom of the transparent elastomer 1 contacts the top of the transparent support block 5.
  • The transparent elastomer 1 is a rectangular block made of the transparent material polydimethylsiloxane (PDMS).
  • Elastomers of different softness are obtained by mixing different proportions of the PDMS base component and curing agent (the specific fabrication process is well known in the art); the elastomer has a low elastic modulus, high repeatability and good metal adhesion.
  • The top surface of the transparent elastomer 1 is sputtered with a metal aluminum film (2000 Å thick in this embodiment), which maps the surface texture of the cloth when in contact with it; this surface texture is captured by the image-capture camera 8.
  • The light emitters 3 are evenly distributed around the bottom of the transparent elastomer 1 and are supported by the third support plate 4.
  • The light emitters 3 provide the transparent elastomer 1 with stable, uniform illumination to avoid detection deviations caused by changes in natural-light brightness; in this embodiment the light emitters 3 are LED lamps with a matching circuit board, specifically eight surface-mount white LEDs evenly arranged around the transparent elastomer 1 with a total voltage not exceeding 4 V.
  • The camera used in this step is a commercial camera.
  • In this embodiment an area-scan camera of model MV-CA050-11UC is used, and the collected external images reflect the defect condition of the cloth as a whole.
  • The tactile sensor and the camera are both controlled by a robotic arm to perform their respective image acquisition.
  • Step 2: Build a classification model based on multi-modal fusion deep learning.
  • The feature extraction network and the multi-modal fusion network are connected to construct a classification model based on multi-modal fusion deep learning.
  • The feature extraction network uses two parallel ResNet-50 networks, which respectively take the cloth texture image and the cloth external image of the collected cloth detection data as input and extract their features, yielding two vectors of length N (chosen as 1000 in this embodiment); these are concatenated into a vector of length 2N that is output as the extracted feature.
  • This vector contains the feature information of both the texture image and the external image, and is passed as input into the multi-modal fusion network for cloth defect detection.
  • The multi-modal fusion network adopts 2 to 4 fully connected layers (three in this embodiment); the output of each fully connected layer is the input of the next, the input of the first layer is the length-2N feature vector obtained by the feature extraction network, and the output of the last layer is a feature vector characterizing the defect condition of the cloth.
  • The length of this feature vector equals the number of defect label types contained in the input group of cloth detection data.
  • Each element of the feature vector represents the probability that the cloth has the corresponding defect.
  • In this embodiment, the input of the first fully connected layer is the length-2000 feature vector obtained by the feature extraction network, and its output is a feature vector of length 1024; the output of the first layer serves as the input of the second layer, which outputs a feature vector of length 1024.
  • Step 3: Expand the cloth detection data set obtained in step 1 by data enhancement.
  • If the cloth detection data set obtained in step 1 already contains sufficient data, step 3 may be omitted.
  • Step 4: Train the cloth defect detection model.
  • The expanded cloth detection data set obtained in step 3 (or, if step 3 is omitted, the cloth detection data set obtained in step 1) is divided into a training set and a test set.
  • The cloth texture image and cloth external image belonging to the same group of cloth detection data in the training set are input into the two feature extraction networks of the classification model built in step 2, and the cloth detection model is then trained by the back-propagation algorithm, with the loss function Softmax Loss constraining the training process.
  • The test set is used to judge the training effect of the cloth defect detection model and to assist the tuning of the network parameters, yielding the trained cloth detection model.
  • Step 5: Collect the texture image of the cloth to be inspected and the corresponding external image, and input them into the trained cloth defect detection model to detect the defect condition of the cloth.
  • The detected defect condition is the defect label with the highest confidence in the feature vector output by the cloth defect detection model to characterize the defect condition of the cloth.
  • On the cloth to be inspected, the method described in step 1 is used to collect a texture image and an external image; this group of data is then input into the cloth detection model trained in step 4, and the defect condition of the cloth is judged from the output results.
  • In the fabric defect detection method proposed in the present invention, the tactile sensor and the camera detect simultaneously, and a deep learning algorithm fusing the visual and tactile modalities realizes fabric defect detection.
  • The tactile sensor has high accuracy and can detect and recognize fabric defects that are difficult to identify with the naked eye, such as those on fine or dyed fabrics, so it is applicable to defect detection on many kinds of fabric; the external image provides color-defect information and can also assist the tactile sensor by providing some structural-defect information.
  • Using the two complementary kinds of information, visual and tactile, can greatly improve detection accuracy with high robustness.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Processing (AREA)

Abstract

The fabric defect detection method based on multi-modal fusion deep learning proposed by the present invention first brings a tactile sensor into contact with cloth surfaces exhibiting different defects to collect cloth texture images of the various defect conditions, while a camera collects the corresponding external images of the cloth; one external image and the corresponding texture image form one group of cloth detection data. A feature extraction network and a multi-modal fusion network are then connected to build a classification model based on multi-modal fusion deep learning, which takes the cloth texture image and the cloth external image of each collected group of cloth detection data as input and outputs the defect condition of the cloth. The classification model is trained with the collected cloth detection data, and the trained model is finally used to detect the defect condition of the cloth. By using the two complementary kinds of information, visual and tactile, the present invention can greatly improve the accuracy and robustness of detection.

Description

A fabric defect detection method based on multi-modal fusion deep learning — Technical Field
The present invention relates to a fabric defect detection method based on multi-modal fusion deep learning, and belongs to the technical field of fabric defect detection.
Background Art
Defect detection is an indispensable step in the fabric production process and directly determines the value of the cloth produced. Fabric defects arise when, owing to various factors, the weaving machine makes errors during weaving, leaving the cloth with local structural defects such as missing or protruding threads, or when defects such as uneven dyeing occur during the dyeing process. Such defects reduce the appearance and comfort of the garments eventually made from the cloth.
At present most domestic enterprises still detect fabric defects by human visual inspection. This requires inspectors with extensive training and practical experience, and the inspection process suffers from low efficiency and inconsistent standards, which lowers production efficiency and makes the quality of the cloth produced uneven.
To overcome the shortcomings of manual inspection, deep-learning methods for recognizing fabric defects already exist, for example the deep-learning-based fabric defect detection method disclosed by Nanjing University of Posts and Telecommunications (application No. 201910339022.3). That method trains a ResNet network and a Fast R-CNN network on a library of high-definition cloth inspection images. However, it has no way to recognize cloth images with dyeing defects, and uneven color changes strongly affect the neural network, causing it to extract many wrong features and misjudge structural defects. Moreover, because fine structural defects must be recognized, the method demands very high image clarity, which is hard to provide in a complex industrial production environment.
Summary of the Invention
The purpose of the present invention is to solve the problems of existing fabric defect detection by proposing a fabric defect detection method based on multi-modal fusion deep learning. A tactile sensor is used to recognize structural defects on the cloth surface, unaffected by cloth color or the external lighting environment; combined with the cloth images collected by an external camera, the color defects of the cloth can also be recognized. Multi-modal fusion deep learning combines the two sources of information, and using this complementary information can greatly improve detection accuracy with strong robustness.
To achieve the above purpose, the present invention adopts the following technical solution:
The fabric defect detection method based on multi-modal fusion deep learning proposed by the present invention is characterized by comprising the following steps:
Step 1: Build a cloth detection data set covering different defect types
Bring the tactile sensor into contact with cloth surfaces exhibiting different defects and collect cloth texture images of the various defect conditions. The defect conditions of the cloth are divided into normal, structural defects and color defects; the structural defects include scratched threads, thin threads, neps, holes, roving, creases and skipped stitches, and the color defects include stains, mottled color, colored yarn, uneven dyeing, black dots, missed printing and dark lines. Use a camera to collect an external image of the cloth at the same position and angle at which the tactile sensor collects the cloth texture; this external image and the corresponding cloth texture image form one group of cloth detection data, with the defect condition of the cloth as the label of each image. That is, each group of defective-cloth detection data contains one cloth texture image, one external image of the cloth and one defect label. All collected groups of cloth detection data constitute the cloth detection data set;
Step 2: Build a classification model based on multi-modal fusion deep learning
Connect a feature extraction network and a multi-modal fusion network to construct a classification model based on multi-modal fusion deep learning. The feature extraction network uses two parallel ResNet-50 networks, which respectively take the cloth texture image and the cloth external image of each collected group of cloth detection data as input and extract the features of the two images, yielding two vectors of length N; these are concatenated into a vector of length 2N that is output as the extracted feature vector. The multi-modal fusion network uses 2 to 4 fully connected layers for cloth defect detection; the output of each fully connected layer is the input of the next, the input of the first layer is the length-2N feature vector obtained by the feature extraction network, and the output of the last layer is a feature vector characterizing the defect condition of the cloth. The length of this feature vector equals the number of defect label types contained in the input cloth detection data, and each of its elements represents the probability that the cloth has the corresponding defect;
Step 3: Train the cloth defect detection model
Divide the cloth detection data set obtained in step 1 into a training set and a test set. Input the cloth texture image and the cloth external image belonging to the same group of cloth detection data in the training set into the two parallel ResNet-50 networks of the classification model built in step 2, then train the cloth detection model with the back-propagation algorithm, using the loss function Softmax Loss to constrain the training process; use the test set to judge the training effect of the cloth defect detection model, obtaining the trained cloth detection model;
Step 4: Collect the texture image of the cloth to be inspected and the corresponding external image and input them into the trained cloth defect detection model to detect the defect condition of the cloth; the defect condition is the defect label with the highest confidence in the feature vector output by the model to characterize the defect condition of the cloth.
Further, in step 3, the cloth detection data set divided into training and test sets may be replaced by an expanded cloth detection data set, obtained as follows: for any group of data in the cloth detection data set obtained in step 1, randomly apply the same rotation and translation data-enhancement operations to its cloth texture image and cloth external image to generate a new group of data; apply the same data-enhancement operation to each remaining group in the data set; add all newly generated groups to the cloth detection data set to obtain the expanded cloth detection data set.
Features and beneficial effects of the present invention:
The present invention proposes a fabric defect detection method based on multi-modal fusion deep learning. A tactile sensor acquires the surface texture of the cloth and can detect structural defects without being affected by the cloth color or the external lighting environment. Multi-modal fusion deep learning combines this with the image information captured by an external camera: while color defects are being recognized, the external image also yields some structural-defect features, so the complementary information can greatly improve the accuracy of structural defect detection.
Brief Description of the Drawings
Fig. 1 is a flowchart of the cloth defect detection method according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of the tactile sensor used in the method of an embodiment of the present invention.
Detailed Description of the Embodiments
The fabric defect detection method based on multi-modal fusion deep learning proposed by the present invention is described in detail below with reference to the drawings and an embodiment:
Referring to Fig. 1, the overall flow of an embodiment of the present invention comprises the following steps:
Step 1: Build a cloth detection data set covering different defect types
Bring the tactile sensor into contact with cloth surfaces exhibiting different defects and collect cloth texture images of the various defect conditions. The defect conditions of the cloth are divided into normal, structural defects and color defects; the structural defects include scratched threads, thin threads, neps, holes, roving, creases and skipped stitches, and the color defects include stains, mottled color, colored yarn, uneven dyeing, black dots, missed printing and dark lines. Use a camera to collect an external image of the cloth at the same position and angle at which the tactile sensor collects the cloth texture; this external image and the corresponding cloth texture image form one group of cloth detection data, with the defect condition of the cloth as the label of each image. That is, each group of defective-cloth detection data contains one cloth texture image, one external image of the cloth and one defect label. After the edges of all collected groups of cloth detection data are trimmed off, they constitute the cloth detection data set;
The structure of the tactile sensor used in this step is shown in Fig. 2. It comprises a housing 10 composed of a U-shaped base 9 and a top cover 2, and a transparent elastomer 1, light emitters 3 and an image-capture camera 8 housed inside the housing 10. The image-capture camera 8 is mounted on the base 9 of the housing 10 via a first support plate 7; a transparent support block 5 is arranged above the camera, its bottom fixed by a second support plate 6 connected to the first support plate 7 and its top restrained by a third support plate 4 fixed to the side wall of the middle part of the housing 10. The top of the transparent elastomer 1 protrudes from the top cover 2 of the housing 10, and its bottom contacts the top of the transparent support block 5. The transparent elastomer 1 is a rectangular block made of the transparent material polydimethylsiloxane (PDMS); elastomers of different softness are obtained by mixing different proportions of the PDMS base component and curing agent (the specific fabrication process is well known in the art), and the elastomer has a low elastic modulus, high repeatability and good metal adhesion. A metal aluminum film (2000 Å thick in this embodiment) is sputtered onto the top surface of the transparent elastomer 1; when in contact with the cloth it maps the surface texture of the cloth, which is captured by the image-capture camera 8. The light emitters 3 are evenly distributed around the bottom of the transparent elastomer 1 and are supported by the third support plate 4; they provide the transparent elastomer 1 with stable, uniform illumination and avoid detection deviations caused by changes in natural-light brightness. In this embodiment the light emitters 3 are LED lamps with a matching circuit board, specifically eight surface-mount white LEDs evenly arranged around the transparent elastomer 1 with a total voltage not exceeding 4 V.
The camera used in this step is a commercial camera; this embodiment uses an area-scan camera of model MV-CA050-11UC, and the collected external images of the cloth reflect the defect condition of the cloth as a whole.
Both the tactile sensor and the camera are controlled by a robotic arm to perform their respective image acquisition.
Step 2: Build a classification model based on multi-modal fusion deep learning
Connect a feature extraction network and a multi-modal fusion network to construct a classification model based on multi-modal fusion deep learning. The feature extraction network uses two parallel ResNet-50 networks, which respectively take the cloth texture image and the cloth external image of each collected group of cloth detection data as input and extract the features of the two images, yielding two vectors of length N (chosen as 1000 in this embodiment); these are concatenated into a vector of length 2N that is output as the extracted feature, containing the feature information of both the texture image and the external image. This vector is passed as input into the multi-modal fusion network for cloth defect detection. The multi-modal fusion network adopts 2 to 4 fully connected layers (three in this embodiment); the output of each fully connected layer is the input of the next, the input of the first layer is the length-2N feature vector obtained by the feature extraction network, and the output of the last layer is a feature vector characterizing the defect condition of the cloth, whose length equals the number of defect label types contained in the input group of cloth detection data and whose elements represent the probabilities that the cloth has the respective defects. In this embodiment, the input of the first fully connected layer is the length-2000 feature vector obtained by the feature extraction network and its output is a feature vector of length 1024; the output of the first layer serves as the input of the second layer, which outputs a feature vector of length 1024.
Step 3: Expand the cloth detection data set obtained in step 1 by data enhancement
Perform data enhancement on all collected data: for any group of data in the cloth detection data set obtained in step 1, randomly apply the same rotation and translation data-enhancement operations to its cloth texture image and cloth external image to generate a new group of data; apply the same data-enhancement operation to each remaining group in the data set; add all newly generated groups to the cloth detection data set, obtaining an expanded cloth detection data set for subsequent training of the network model.
If the cloth detection data set obtained in step 1 already contains sufficient data, step 3 may be omitted.
Step 4: Train the cloth defect detection model
First divide the expanded cloth detection data set obtained in step 3 (or, if step 3 is omitted, the cloth detection data set obtained in step 1) into a training set and a test set; in this embodiment the split ratio is training set : test set = 9 : 1. Input the cloth texture image and the cloth external image belonging to the same group of cloth detection data in the training set into the two feature extraction networks of the classification model built in step 2, then train the cloth detection model with the back-propagation algorithm, using the loss function Softmax Loss to constrain the training process. Use the test set to judge the training effect of the cloth defect detection model and to assist the tuning of the network parameters, so as to obtain the trained cloth detection model; a minimal training-loop sketch follows.
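The sketch below assumes PyTorch and a model like the one sketched earlier: the 9:1 split and back-propagation training come from the text, Softmax Loss is realized here with CrossEntropyLoss (softmax followed by a negative log-likelihood loss), while the optimizer, batch size and epoch count are illustrative assumptions.

    # Minimal training sketch for step 4 (hyperparameters are assumptions).
    import torch
    from torch.utils.data import DataLoader, random_split

    def train_model(model, dataset, epochs=20, device="cuda"):
        n_train = int(0.9 * len(dataset))        # training : test = 9 : 1
        train_set, test_set = random_split(
            dataset, [n_train, len(dataset) - n_train])
        loader = DataLoader(train_set, batch_size=16, shuffle=True)
        criterion = torch.nn.CrossEntropyLoss()  # realizes "Softmax Loss"
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
        model.to(device).train()
        for _ in range(epochs):
            # each batch: paired texture/external images plus a defect label
            for texture, external, label in loader:
                texture, external = texture.to(device), external.to(device)
                loss = criterion(model(texture, external), label.to(device))
                optimizer.zero_grad()
                loss.backward()                  # back-propagation step
                optimizer.step()
        return model, test_set                   # test set judges training effect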
Step 5: Collect the texture image of the cloth to be inspected and the corresponding external image and input them into the trained cloth defect detection model to detect the defect condition of the cloth; the defect condition is the defect label with the highest confidence in the feature vector output by the model to characterize the defect condition of the cloth.
On the cloth to be inspected, collect a texture image and an external image by the method described in step 1, then input this group of data into the cloth detection model trained in step 4 and judge the defect condition of the cloth from the output results, as in the sketch below.
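A corresponding inference sketch under the same assumptions: softmax turns the output feature vector into per-class confidences, and the predicted defect condition is the label with the highest confidence.

    # Inference sketch for step 5 (identifiers are assumptions).
    import torch

    @torch.no_grad()
    def detect_defect(model, texture_img, external_img, labels):
        model.eval()
        logits = model(texture_img.unsqueeze(0), external_img.unsqueeze(0))
        probs = torch.softmax(logits, dim=1)   # per-defect confidences
        idx = int(probs.argmax(dim=1))         # highest-confidence label index
        return labels[idx], float(probs[0, idx])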
In summary, in the fabric defect detection method proposed by the present invention the tactile sensor and the camera detect simultaneously, and a deep learning algorithm fusing the visual and tactile modalities realizes fabric defect detection. The tactile sensor has high accuracy and can detect and recognize fabric defects that are difficult to identify with the naked eye, such as those on fine or dyed fabrics, and is applicable to defect detection on many kinds of fabric; the external image provides color-defect information and also assists the tactile sensor by providing some structural-defect information. Using the two complementary kinds of information, visual and tactile, can greatly improve detection accuracy with high robustness.
The above is merely an embodiment of the present invention and does not thereby limit its scope of protection; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.

Claims (3)

  1. A fabric defect detection method based on multi-modal fusion deep learning, characterized by comprising the following steps:
    Step 1: Build a cloth detection data set covering different defect types
    Bring a tactile sensor into contact with cloth surfaces exhibiting different defects and collect cloth texture images of the various defect conditions; the defect conditions of the cloth are divided into normal, structural defects and color defects, the structural defects including scratched threads, thin threads, neps, holes, roving, creases and skipped stitches, and the color defects including stains, mottled color, colored yarn, uneven dyeing, black dots, missed printing and dark lines; use a camera to collect an external image of the cloth at the same position and angle at which the tactile sensor collects the cloth texture, this external image and the corresponding cloth texture image forming one group of cloth detection data, with the defect condition of the cloth as the label of each image, that is, each group of defective-cloth detection data containing one cloth texture image, one external image of the cloth and one defect label; all collected groups of cloth detection data constitute the cloth detection data set;
    Step 2: Build a classification model based on multi-modal fusion deep learning
    Connect a feature extraction network and a multi-modal fusion network to construct a classification model based on multi-modal fusion deep learning; the feature extraction network uses two parallel ResNet-50 networks, which respectively take the cloth texture image and the cloth external image of each collected group of cloth detection data as input and extract the features of the two images, yielding two vectors of length N that are concatenated into a vector of length 2N output as the extracted feature vector; the multi-modal fusion network uses 2 to 4 fully connected layers for cloth defect detection, the output of each fully connected layer being the input of the next, the input of the first layer being the length-2N feature vector obtained by the feature extraction network, and the output of the last layer being a feature vector characterizing the defect condition of the cloth, whose length equals the number of defect label types contained in the input cloth detection data and whose elements represent the probabilities that the cloth has the respective defects;
    Step 3: Train the cloth defect detection model
    Divide the cloth detection data set obtained in step 1 into a training set and a test set; input the cloth texture image and the cloth external image belonging to the same group of cloth detection data in the training set into the two parallel ResNet-50 networks of the classification model built in step 2, then train the cloth detection model with the back-propagation algorithm, using the loss function Softmax Loss to constrain the training process; use the test set to judge the training effect of the cloth defect detection model, obtaining the trained cloth detection model;
    Step 4: Collect the texture image of the cloth to be inspected and the corresponding external image and input them into the trained cloth defect detection model to detect the defect condition of the cloth, the defect condition being the defect label with the highest confidence in the feature vector output by the model to characterize the defect condition of the cloth.
  2. The fabric defect detection method according to claim 1, characterized in that the tactile sensor comprises a housing composed of a base and a top cover, and a transparent elastomer, light emitters and an image-capture camera housed inside the housing; the image-capture camera is mounted on the base via a first support plate; a transparent support block is arranged above the image-capture camera, the bottom of the block being fixed by a second support plate connected to the first support plate and the top of the block being restrained by a third support plate fixed to the side wall of the middle part of the housing; the top of the transparent elastomer protrudes from the top cover and its bottom contacts the top of the transparent support block; a metal aluminum film is sputtered onto the top surface of the transparent elastomer to map the surface texture of the cloth when in contact with it, the surface texture being captured by the image-capture camera; the light emitters are evenly distributed around the bottom of the transparent elastomer and are supported by the third support plate.
  3. The fabric defect detection method according to claim 1 or 2, characterized in that, in step 3, the cloth detection data set divided into training and test sets is replaced by an expanded cloth detection data set obtained by the following steps: for any group of data in the cloth detection data set obtained in step 1, randomly apply the same rotation and translation data-enhancement operations to its cloth texture image and cloth external image to generate a new group of data; apply the same data-enhancement operation to each remaining group of data in the cloth detection data set; add all newly generated groups of data to the cloth detection data set to obtain the expanded cloth detection data set.
PCT/CN2020/111380 2019-11-19 2020-08-26 Fabric defect detection method based on multi-modal fusion deep learning WO2021098323A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/281,923 US20220414856A1 (en) 2019-11-19 2020-08-26 A fabric defect detection method based on multi-modal deep learning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911130805.7 2019-11-19
CN201911130805.7A CN111028204B (zh) 2019-11-19 2021-10-08 Fabric defect detection method based on multi-modal fusion deep learning

Publications (1)

Publication Number Publication Date
WO2021098323A1 (zh)

Family

ID=70200531

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111380 WO2021098323A1 (zh) 2019-11-19 2020-08-26 Fabric defect detection method based on multi-modal fusion deep learning

Country Status (3)

Country Link
US (1) US20220414856A1 (zh)
CN (1) CN111028204B (zh)
WO (1) WO2021098323A1 (zh)


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028204B (zh) * 2019-11-19 2021-10-08 清华大学 Fabric defect detection method based on multi-modal fusion deep learning
CN111751294A (zh) * 2020-06-30 2020-10-09 深兰科技(达州)有限公司 Cloth defect detection method based on learning and memory
CN112270659A (zh) * 2020-08-31 2021-01-26 中国科学院合肥物质科学研究院 Rapid detection method and system for surface defects of power battery electrode plates
CN112464762A (zh) * 2020-11-16 2021-03-09 中国科学院合肥物质科学研究院 Agricultural product screening system and method based on image processing
CN112867022B (zh) * 2020-12-25 2022-04-15 北京理工大学 Cloud-edge cooperative environment sensing method and system based on fused wireless networks
CN113138589B (zh) * 2021-03-12 2022-06-07 深圳智造谷工业互联网创新中心有限公司 Industrial equipment control method, electronic device and storage medium
CN112802016B (zh) * 2021-03-29 2023-08-08 深圳大学 Real-time cloth defect detection method and system based on deep learning
WO2024071670A1 (ko) * 2022-09-27 2024-04-04 주식회사 엠파파 Artificial-intelligence-based sewing defect detection and classification method and system
CN115760805B (zh) * 2022-11-24 2024-02-09 中山大学 Method for locating surface depressions of machined components based on visuo-tactile sensing
CN115861951B (zh) * 2022-11-27 2023-06-09 石家庄铁道大学 Accurate lane line detection method in complex environments based on a dual feature extraction network
CN116703923A (zh) * 2023-08-08 2023-09-05 曲阜师范大学 Fabric defect detection model based on a parallel attention mechanism


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101382502B (zh) * 2007-09-07 2011-07-27 鸿富锦精密工业(深圳)有限公司 Surface stain detection system and detection method thereof
CN107123107A (zh) * 2017-03-24 2017-09-01 广东工业大学 Cloth defect detection method based on neural network deep learning
CN107463952B (zh) * 2017-07-21 2020-04-03 清华大学 Object material classification method based on multi-modal fusion deep learning
WO2022060472A2 (en) * 2020-08-12 2022-03-24 The Penn State Research Foundaton In-situ process monitoring for powder bed fusion additive manufacturing (pbf am) processes using multi-modal sensor fusion machine learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018096689A (ja) * 2016-12-07 2018-06-21 花王株式会社 Inspection device for nonwoven fabric
CN108052911A (zh) * 2017-12-20 2018-05-18 上海海洋大学 Deep-learning-based high-level feature fusion classification method for multi-modal remote sensing images
CN108614548A (zh) * 2018-04-03 2018-10-02 北京理工大学 Intelligent fault diagnosis method based on multi-modal fusion deep learning
CN110175988A (zh) * 2019-04-25 2019-08-27 南京邮电大学 Cloth defect detection method based on deep learning
CN111028204A (zh) * 2019-11-19 2020-04-17 清华大学 Fabric defect detection method based on multi-modal fusion deep learning

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022412A * 2021-10-12 2022-02-08 上海伯耶信息科技有限公司 Cigarette auxiliary material paper defect detection method based on deep-learning visual inspection
CN114066825A * 2021-10-29 2022-02-18 浙江工商大学 Improved deep-learning-based defect detection method for complex-texture images
CN114066825B * 2021-10-29 2024-05-28 浙江工商大学 Improved deep-learning-based defect detection method for complex-texture images
CN114972952B * 2022-05-29 2024-03-22 重庆科技学院 Industrial part defect recognition method based on model lightweighting
CN114972952A * 2022-05-29 2022-08-30 重庆科技学院 Industrial part defect recognition method based on model lightweighting
CN114723750A * 2022-06-07 2022-07-08 南昌大学 Defect detection method for transmission-line strain clamps based on an improved YOLOX algorithm
CN114859022B * 2022-07-05 2022-09-02 泉州市颖秀科技发展有限公司 Fabric quality evaluation method, system, electronic device and storage medium
CN114859022A * 2022-07-05 2022-08-05 泉州市颖秀科技发展有限公司 Fabric quality evaluation method, system, electronic device and storage medium
CN114858802A * 2022-07-05 2022-08-05 天津大学 Fabric multi-scale image acquisition method and device
CN116664586A * 2023-08-02 2023-08-29 长沙韶光芯材科技有限公司 Glass defect detection method and system based on multi-modal feature fusion
CN116664586B * 2023-08-02 2023-10-03 长沙韶光芯材科技有限公司 Glass defect detection method and system based on multi-modal feature fusion
CN117726628A * 2024-02-18 2024-03-19 青岛理工大学 Steel surface defect detection method based on a semi-supervised object detection algorithm
CN117726628B * 2024-02-18 2024-04-19 青岛理工大学 Steel surface defect detection method based on a semi-supervised object detection algorithm

Also Published As

Publication number Publication date
US20220414856A1 (en) 2022-12-29
CN111028204A (zh) 2020-04-17
CN111028204B (zh) 2021-10-08

Similar Documents

Publication Publication Date Title
WO2021098323A1 (zh) Fabric defect detection method based on multi-modal fusion deep learning
CN111047655A High-definition camera cloth defect detection method based on a convolutional neural network
CN107679106B Rapid-response design and production method for yarn-dyed fabrics
CN101957326A Method and device for multispectral detection of textile surface quality
CN109307675A Product appearance detection method and system
CN105223208A Circuit board inspection template, manufacturing method thereof, and circuit board inspection method
CN106650614A Dynamic calibration method and device
CN109035248A Defect detection method, apparatus, terminal device, server and storage medium
CN109145985A Cloth defect detection and classification method
CN106886989A Automatic optical inspection method for keyboard keys
CN110084246A Automatic defect recognition method for yarn-dyed fabrics
CN110097538A Online cloth inspection device for looms and defect recognition method
CN109325940A Fabric detection method and apparatus, computer device and computer-readable medium
Huang et al. Research on surface defect intelligent detection technology of non-woven fabric based on support vector machine
CN113916899B Detection method, system and device for soft-bag infusion products based on visual recognition
CN114858802B Fabric multi-scale image acquisition method and device
CN108827974B Ceramic tile defect detection method and system
CN111028250A Real-time intelligent cloth inspection method and system
CN115239615A Cloth defect detection method based on CTPN
CN212846839U Fabric information matching system
CN206441190U Intelligent recognition and inspection system for trademark labels (woven and printed labels)
Guang et al. Fabric Defect Detection Method Based on Image Distance Difference
Li et al. Fabric Linear Defect Detection Based on Mask RCNN
Çelik Development of an intelligent fabric defect inspection system
Qian et al. Design and Implementation of Human Computer Interactive Gesture Recognition System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20889295

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20889295

Country of ref document: EP

Kind code of ref document: A1