CN116894836A - Yarn defect detection method and device based on machine vision - Google Patents

Info

Publication number: CN116894836A
Application number: CN202310950857.9A
Authority: CN (China)
Legal status: Pending
Prior art keywords: yarn, sequence, image, pooling, defects
Other languages: Chinese (zh)
Inventors: 徐云, 杨承翰, 张建鹏, 张建新, 陈宥融
Assignees: Zhejiang Taitan Co., Ltd.; Zhejiang Sci-Tech University (ZSTU)
Application filed by Zhejiang Taitan Co., Ltd. and Zhejiang Sci-Tech University (ZSTU)
Priority to CN202310950857.9A
Publication of CN116894836A

Classifications

    • G06T7/0004 Industrial image inspection
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/764 Recognition or understanding using classification, e.g. of video objects
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G06V10/82 Recognition or understanding using neural networks
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30124 Fabrics; Textile; Paper


Abstract

This application discloses a yarn defect detection method and device based on machine vision. The method includes: acquiring an image of the yarn; binarizing the image to obtain a binarized image of the yarn; reducing the dimensionality of the binarized image to obtain a one-dimensional sequence of yarn width values; performing average pooling on the one-dimensional sequence to obtain a feature sequence; constructing an optimal feature vector based on the feature sequence; and inputting the optimal feature vector into a deep learning neural network, which outputs the characteristic attribute of the yarn. The application enables automated defect detection in the yarn production process, solves the problems of traditional yarn defect detection methods, namely easily affected detection accuracy, a low level of automation and proneness to false detection, and offers low computing resource usage, fast detection speed and high recognition accuracy.

Description

Yarn defect detection method and device based on machine vision

Technical field

This application belongs to the technical field of computer-based textile inspection, and specifically relates to a yarn defect detection method and device based on machine vision.

Background

During yarn production, yarn defects such as neps and thin places inevitably appear under the influence of mechanical transmission equipment, spinning raw materials and other factors. Detecting yarn defects during production is therefore of great significance for controlling yarn quality and improving production efficiency.

At present, the commonly used yarn defect detection methods are the photoelectric method, the capacitive method and manual visual inspection.

A photoelectric detector generally consists of three parts: a light emitter, an optical system, and a light receiver. Infrared light produced by the emitter passes through the optical system to form a detection zone with a uniform light field, and the receiver converts the light energy into an analog output. When the yarn runs through the detection zone it blocks part of the light, reducing the light energy received by the receiver; the change in light energy reflects the yarn diameter within the zone. The receiver converts the light energy into an electrical signal whose amplitude corresponds to the yarn diameter. The accuracy of the photoelectric method is easily affected by aging of the photoelectric devices and by light passing through yarn hairiness.

The capacitive method uses two vertical capacitor plates. When the yarn passes through the detection zone between the plates in a non-contact manner, the added fiber medium changes the dielectric constant between the plates, which in turn changes the capacitance and indirectly reflects variations in yarn mass. This method can detect ordinary yarns, but its accuracy is easily affected by ambient air humidity, yarn moisture content, unevenness of the electric field between the plates, and similar factors.

In manual visual inspection, inspectors sample yarns from the same batch and determine the type and number of yarn defects by experience and naked-eye observation. This method has a low level of automation, its results depend on the subjectivity of the operator, and it is prone to false detections.

Summary of the invention

The purpose of this application is to provide a yarn defect detection method and device based on machine vision, so as to solve the technical problems of existing yarn defect detection methods: easily affected detection accuracy, a low level of automation, and proneness to false detection.

To achieve the above purpose, one technical solution adopted by this application is:

A yarn defect detection method based on machine vision is provided, including:

acquiring an image of the yarn;

performing binarization on the image of the yarn to obtain a binarized image of the yarn;

performing dimensionality reduction on the binarized image to obtain a one-dimensional sequence of yarn width values;

performing average pooling on the one-dimensional sequence to obtain a feature sequence, where the pooling size of the average pooling is P_i = f(a_i), a_i = 1, 2, 3, ..., N;

constructing an optimal feature vector based on the feature sequence;

inputting the optimal feature vector into a deep learning neural network and outputting the characteristic attribute of the yarn, where the characteristic attribute is either normal or a type of yarn defect.

In one or more embodiments, the step of binarizing the image of the yarn to obtain the binarized image of the yarn includes:

calculating the Otsu threshold T_OTSU of the yarn image based on the Otsu method;

correcting the Otsu threshold based on the following formula to obtain the corrected threshold T_g-global-OTSU:

where V_q is the gray value that occurs most frequently among all pixels of the yarn image, and Z_x is the minimum gray value among all pixels of the yarn image;

binarizing the image based on the corrected threshold to obtain the binarized image of the yarn.

In one or more embodiments, the step of performing dimensionality reduction on the binarized image to obtain the one-dimensional sequence of yarn width values includes:

setting background pixels of the binarized image to the value 1 and foreground pixels to the value 0 to obtain the matrix X_{n×m} of the binarized image, where n is the number of pixels of the binarized image in the yarn width direction and m is the number of pixels in the yarn length direction;

calculating the one-dimensional sequence D of yarn width values based on the following formula:

D = [n n ... n]_{1×m} − [1 1 ... 1]_{1×n} · X_{n×m}.

In one or more embodiments, the yarn defects include thin-place defects and long missing-section defects, and the step of performing average pooling on the one-dimensional sequence to obtain the feature sequence includes:

performing average pooling on the one-dimensional sequence with pooling size P_1 to obtain the feature sequence S_1(a_1), where a_1 = 1, 2, 3, ..., N_1 and m is the length of the one-dimensional sequence.

In one or more embodiments, the yarn defects include thick-place (slub) defects, and the step of performing average pooling on the one-dimensional sequence to obtain the feature sequence includes:

performing average pooling on the one-dimensional sequence with pooling size P_2 to obtain the feature sequence S_2(a_2), where a_2 = 1, 2, 3, ..., N_2 and m is the length of the one-dimensional sequence.

In one or more embodiments, the yarn defects include short missing-section defects and decorative-yarn interlacing defects, and the step of performing average pooling on the one-dimensional sequence to obtain the feature sequence includes:

performing average pooling on the one-dimensional sequence twice, with pooling sizes P_3-1 = 4a_3 + 2 and P_3-2 = 2a_3 + 1 respectively, to obtain the sequences M_short and M_long, where a_3 = 1, 2, 3, ..., N_3 and m is the length of the one-dimensional sequence;

performing a backward search-and-difference and a forward search-and-difference on the sequences M_short and M_long to obtain a backward difference sequence and a forward difference sequence;

comparing the backward difference sequence with the forward difference sequence, and taking the larger value at each corresponding position of the two sequences to construct the feature sequence S_3(a_3).

In one or more embodiments, the step of performing the backward and forward search-and-difference on the sequences M_short and M_long to obtain the backward and forward difference sequences includes:

removing the first a_3 values at the head of the sequence M_short and comparing the sequences M_short and M_long position by position from head to tail, obtaining the backward difference sequence S_3-1 = |M_short[a_3+1 : m+1−4a_3] − M_long[1 : m−1−5a_3]|, where m is the length of the one-dimensional sequence;

removing the last a_3 values at the tail of the sequence M_short and comparing the sequences M_short and M_long position by position from tail to head, obtaining the forward difference sequence S_3-2 = |M_short[1 : m−1−5a_3] − M_long[3a_3+2 : m−2a_3]|, where m is the length of the one-dimensional sequence.

In one or more embodiments, the step of constructing the optimal feature vector based on the feature sequence includes:

calculating the optimal solution of a_i with the random frog algorithm based on partial least squares (PLS);

based on the optimal solution, calculating the maximum, minimum and mean of the feature sequence, and collecting these values to form the optimal feature vector.

In one or more embodiments, in the step of inputting the optimal feature vector into the deep learning neural network and outputting the characteristic attribute of the yarn, the neural network is a two-layer ANN classifier, and the training method of the deep learning neural network includes:

obtaining a sample training set, the sample training set including several groups of the optimal feature vectors of yarn images annotated with label values;

inputting the sample training set and, based on a cross-entropy loss function and the label values, updating the weights of the deep learning neural network in the direction of gradient descent until the cross-entropy loss function converges, thereby obtaining the trained deep learning neural network.
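As a rough illustration of this training procedure only, the sketch below trains a two-layer ANN classifier with softmax cross-entropy and plain gradient descent. The feature dimension, hidden width, class count, learning rate and synthetic data are all hypothetical stand-ins, not the configuration disclosed in this application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 6-dim "optimal feature vectors", 3 classes
# (e.g. normal plus two defect types); shifts make the classes separable.
X = rng.normal(size=(90, 6))
y = rng.integers(0, 3, size=90)
X[y == 1] += 2.0
X[y == 2] -= 2.0
onehot = np.eye(3)[y]

# Two-layer ANN: 6 -> 16 (ReLU) -> 3 (softmax).
W1 = rng.normal(scale=0.1, size=(6, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 3)); b2 = np.zeros(3)

lr = 0.5
for _ in range(300):                          # full-batch gradient descent
    h = np.maximum(0, X @ W1 + b1)            # hidden layer, ReLU
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)      # softmax probabilities
    g = (p - onehot) / len(X)                 # d(cross-entropy)/d(logits)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = g @ W2.T * (h > 0)                   # backprop through ReLU
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

acc = (p.argmax(1) == y).mean()               # training accuracy after convergence
print(round(acc, 2))
```

The loop updates the weights in the direction of gradient descent on the cross-entropy loss, as the claim describes; in practice training would stop when the loss converges rather than after a fixed iteration count.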

To achieve the above purpose, another technical solution adopted by this application is:

A yarn defect detection device based on machine vision is provided, including:

an acquisition module for acquiring an image of the yarn;

a binarization module for binarizing the image of the yarn to obtain a binarized image of the yarn;

a dimensionality reduction module for reducing the dimensionality of the binarized image to obtain a one-dimensional sequence of yarn width values;

a mean pooling module for performing average pooling on the one-dimensional sequence to obtain a feature sequence, where the pooling size of the average pooling is P_i = f(a_i), a_i = 1, 2, 3, ..., N;

a construction module for constructing an optimal feature vector based on the feature sequence;

a classification output module for inputting the optimal feature vector into a deep learning neural network and outputting the characteristic attribute of the yarn, where the characteristic attribute is either normal or a type of yarn defect.

Different from the prior art, the beneficial effects of this application are:

The yarn defect detection method of this application is based on machine vision. A yarn image is collected, binarized and reduced in dimensionality to obtain a one-dimensional sequence of yarn width values; average pooling with different pooling sizes is applied to this sequence to obtain feature sequences that highlight different defects; an optimal feature vector is then generated from the feature sequences and fed into a deep learning neural network, which automatically outputs the yarn defect type. This enables automated defect detection in the yarn production process, solves the problems of traditional yarn defect detection methods, namely easily affected detection accuracy, a low level of automation and proneness to false detection, and offers low computing resource usage, fast detection speed and high recognition accuracy.

Brief description of the drawings

Figure 1 is a schematic flow chart of an embodiment of the machine vision-based yarn defect detection method of this application;

Figure 2 is a diagram of the defect types of chenille yarn;

Figure 3 is a schematic diagram of an embodiment of the backward search of this application;

Figure 4 is a schematic diagram of an embodiment of the forward search of this application;

Figure 5 is a diagram of the feature sequences of yarns with different defect types in this application;

Figure 6 is a schematic structural diagram of an embodiment of the deep learning neural network of this application;

Figure 7 is a schematic structural diagram of an embodiment of the machine vision-based yarn defect detection device of this application;

Figure 8 is a hardware structure diagram of an embodiment of the electronic device of this application.

Detailed description of embodiments

This application is described in detail below with reference to the embodiments shown in the accompanying drawings. These embodiments do not limit this application, and structural, methodological or functional changes made by those of ordinary skill in the art based on these embodiments all fall within the protection scope of this application.

As noted in the background section, current yarn defect detection methods have major limitations and make it difficult to effectively control yarn production quality and efficiency.

To this end, the applicant has developed a machine-vision-based yarn defect detection method that captures images of the yarn, processes them, and automatically identifies multiple kinds of yarn defects. It offers fast detection, high recognition accuracy and low computing resource usage, and can be widely applied to defect detection of various yarns.

Specifically, please refer to Figure 1, which is a schematic flow chart of an embodiment of the machine vision-based yarn defect detection method of this application.

The detection method includes:

S100: acquiring an image of the yarn.

First, an image of the yarn can be captured during production. The yarn may be either moving or stationary.

In one embodiment, to ensure good image quality, the image can be captured under backlight illumination.

S200: binarizing the image of the yarn to obtain a binarized image of the yarn.

To accurately segment the yarn region in the image, the image can be binarized. In one embodiment, a threshold method can be used to segment the yarn region; in other embodiments, other methods can also be used, for example segmenting the yarn region with a deep learning neural network.

Specifically, in one embodiment, the Otsu threshold T_OTSU of the yarn image can first be calculated with the Otsu method, and the threshold is then corrected to improve segmentation accuracy.

The Otsu method is a conventional threshold segmentation method in this field, and the calculation of the Otsu threshold is not repeated here.

The formula for correcting the Otsu threshold can be as follows:

where V_q is the gray value that occurs most frequently among all pixels of the yarn image, and Z_x is the minimum gray value among all pixels of the yarn image.

The captured yarn image can then be binarized with the corrected threshold T_g-global-OTSU: pixels whose gray value is below the corrected threshold are set to 0, and pixels whose gray value is above it are set to 255, yielding the binarized image.
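A minimal sketch of this step, assuming a plain NumPy implementation of Otsu's method: the toy 3×3 image is hypothetical, and since the correction formula itself is not reproduced above, the code thresholds at the uncorrected T_OTSU and only computes the ingredients V_q and Z_x that the correction would use.

```python
import numpy as np

def otsu_threshold(img):
    """Standard Otsu threshold on an 8-bit grayscale image
    (picks the level maximizing between-class variance)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w = np.cumsum(p)                      # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_t = mu[-1]                         # global mean gray value
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
    sigma_b = np.nan_to_num(sigma_b)      # undefined where w is 0 or 1
    return int(np.argmax(sigma_b))

# Hypothetical tiny yarn image: dark yarn pixels (20) on a bright background (200).
img = np.array([[20, 20, 200],
                [20, 200, 200],
                [20, 20, 200]], dtype=np.uint8)

t_ostu = otsu_threshold(img)
v_q = int(np.argmax(np.bincount(img.ravel())))  # most frequent gray value (V_q)
z_x = int(img.min())                            # minimum gray value (Z_x)
binary = np.where(img > t_ostu, 255, 0)         # below threshold -> 0, above -> 255
print(t_ostu, v_q, z_x)
```

In a full implementation, T_OTSU, V_q and Z_x would be combined according to the correction formula above before the final thresholding.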

S300: reducing the dimensionality of the binarized image to obtain a one-dimensional sequence of yarn width values.

It can be understood that the binarized image contains both the yarn and the background; based on it, the yarn width at any position along the yarn length can be calculated.

A specific calculation sets the background pixels of the binarized image to the value 1 and the foreground pixels to the value 0, yielding the matrix X_{n×m} of the binarized image, where n is the number of pixels of the binarized image in the yarn width direction and m is the number of pixels in the yarn length direction.

Based on the formula D1 = [1 1 ... 1]_{1×n} · X_{n×m}, the number of background pixels in each column (the yarn width direction) can be calculated. Subtracting these counts from the total number of pixels n in the width direction gives the yarn width values; specifically, the one-dimensional sequence D of yarn width values can be calculated as:

D = [n n ... n]_{1×m} − [1 1 ... 1]_{1×n} · X_{n×m}.
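The matrix expression above is a column-sum subtraction, sketched here on a hypothetical 5×8 binarized image (background = 1, yarn foreground = 0), where one column is deliberately one pixel thinner:

```python
import numpy as np

# Hypothetical binarized yarn image: background pixels = 1, foreground (yarn) = 0.
# Column index 3 has one extra background pixel, i.e. the yarn is thinner there.
X = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
], dtype=int)
n, m = X.shape  # n: pixels in width direction, m: pixels in length direction

# D = [n ... n]_{1xm} - [1 ... 1]_{1xn} . X : yarn width (foreground count) per column.
D = n * np.ones(m, dtype=int) - np.ones(n, dtype=int) @ X
print(D.tolist())  # [3, 3, 3, 2, 3, 3, 3, 3]
```

The dip to 2 at column 3 is exactly the local narrowing that the later pooling stages are designed to detect.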

S400: performing average pooling on the one-dimensional sequence to obtain a feature sequence.

Here the pooling size of the average pooling is P_i = f(a_i), a_i = 1, 2, 3, ..., N.

After the one-dimensional sequence D of yarn width values is obtained, average pooling can be applied to it to reveal the trend of the yarn width along the length direction; this trend reflects possible defects in the yarn.

In one application scenario, taking chenille yarn as an example, five major types of defects may occur during production: thin-place defects, long missing-section defects, thick-place (slub) defects, short missing-section defects and decorative-yarn interlacing defects. Refer to Figure 2, which shows the defect types of chenille yarn.

As shown in Figure 2, the width values of chenille yarns with different defect types vary differently along the length direction: thin-place defects and long missing-section defects are long-segment defects, thick-place defects are short-segment defects, and short missing-section defects and decorative-yarn interlacing defects are abrupt defects. Therefore, the pooling size of the average pooling should differ for different defect types. For long-segment defects, for example, each average should be taken over a larger amount of data, so that width changes over long segments are highlighted; for short-segment defects, each average should be taken over a smaller amount of data, so that the defect features stand out.

In one embodiment, when the yarn defects include thin-place defects and long missing-section defects, average pooling with pooling size P_1 is used, where a_1 = 1, 2, 3, ..., N_1 and m is the length of the one-dimensional sequence.

Based on this pooling size the feature sequence S_1(a_1) is obtained. As an example, when m = 100 and a_1 = 1, the pooling size is 50, i.e. 50 values are averaged at a time and the window is shifted one position at a time, finally yielding a feature sequence S_1(a_1) of 51 values.
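On the stride-1 reading implied by the m = 100 example (a window of 50 yields 51 values), the average pooling can be sketched as follows; `avg_pool` is an illustrative helper, not code from this application:

```python
import numpy as np

def avg_pool(seq, p):
    """Stride-1 average pooling: mean over a sliding window of size p.
    Output length is len(seq) - p + 1."""
    return np.convolve(seq, np.ones(p) / p, mode="valid")

# Hypothetical width sequence of length m = 100, constant width 3.
D = np.full(100, 3.0)
S1 = avg_pool(D, 50)   # pooling size 50, as in the m = 100, a1 = 1 example
print(len(S1))         # 100 - 50 + 1 = 51 values
```

The same helper with a window of 2 reproduces the 99-value sequence of the slub-defect example that follows.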

In one embodiment, the yarn defects include thick-place (slub) defects, and average pooling with pooling size P_2 is used, where a_2 = 1, 2, 3, ..., N_2 and m is the length of the one-dimensional sequence.

Based on this pooling size the feature sequence S_2(a_2) is obtained. As an example, when m = 100 and a_2 = 1, the pooling size is 2, i.e. 2 values are averaged at a time and the window is shifted one position at a time, finally yielding a feature sequence S_2(a_2) of 99 values.

一实施方式中,纱线瑕疵包括短缺节瑕疵和饰纱交错瑕疵,由于这两种瑕疵的突变特点,因此在进行平均值池化时,可以采用不同池化尺寸均值池化两次,之后取两者的差异值来构建特征序列。In one embodiment, yarn defects include short knot defects and decorative yarn staggered defects. Due to the mutation characteristics of these two defects, when performing average pooling, average pooling with different pooling sizes can be used twice, and then the The difference value between the two is used to construct the feature sequence.

具体地,可以对一维序列分别进行两次平均值池化,两次平均值池化的池化尺寸分别为P3-1=4a3+2、P3-2=2a3+1,得到序列Mshort和Mlong,其中,a3=1,2,3...N3m为一维序列的长度。Specifically, two average pooling can be performed on the one-dimensional sequence respectively. The pooling sizes of the two average poolings are P 3-1 =4a 3 +2 and P 3-2 =2a 3 +1 respectively, and we get Sequences M short and M long , where a 3 =1, 2, 3...N 3 , m is the length of the one-dimensional sequence.

其中,序列Mshort的长度为m-1-a3×4,序列Mlong的长度为m-a3×2。之后可以分别对序列Mshort和Mlong进行后向搜索求差和前向搜索求差,得到后向求差序列和前向求差序列。Among them, the length of the sequence M short is m-1-a 3 × 4, and the length of the sequence M long is ma 3 × 2. Afterwards, backward search and difference and forward search and difference can be performed on the sequences M short and M long respectively to obtain the backward difference sequence and the forward difference sequence.

具体地,由于两个序列的长度不同,因此后向搜索求差的方法为:去除序列Mshort的头端的前a3个数值,由头端依序比对序列Mshort和Mlong直至尾端,得到后向求差序列S3-1=|Mshort[a3+1:m+1]|。请参阅图3,图3是本申请后向搜索一实施方式的示意图。Specifically, since the lengths of the two sequences are different, the method for backward search to find the difference is: remove the first 3 values from the head end of the sequence M short , and compare the sequences M short and M long sequentially from the head end to the tail end. The backward difference sequence S 3-1 =|M short [a 3 +1:m+1]| is obtained. Please refer to Figure 3, which is a schematic diagram of an implementation of backward search in this application.

The forward search-and-difference proceeds as follows: remove the last a3 values from the tail of the sequence Mshort, then compare Mshort and Mlong element by element from the tail to the head, obtaining the forward difference sequence S3-2 = |Mshort[1 : m−1−5a3] − Mlong[3a3+2 : m−2a3]|. Refer to Figure 4, which is a schematic diagram of one embodiment of the forward search in this application.

After the backward difference sequence and the forward difference sequence are obtained, the larger value at each corresponding position of the two sequences is taken to construct the feature sequence S3(a3).
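The two-pool difference feature S3(a3) described above can be sketched as follows. The alignment of the trimmed sequences is our interpretation of the prose (align heads or tails, then truncate to the common length); the patent's exact index bounds may differ slightly.

```python
import numpy as np

def mean_pool(seq, p):
    # Stride-1 average pooling via a length-p box filter; output length m - p + 1.
    return np.convolve(seq, np.ones(p) / p, mode="valid")

def s3_feature(seq, a3):
    m_short = mean_pool(seq, 4 * a3 + 2)
    m_long = mean_pool(seq, 2 * a3 + 1)
    # Backward search: drop the first a3 values of M_short, align head-to-head.
    back_s = m_short[a3:]
    n_b = min(len(back_s), len(m_long))
    backward = np.abs(back_s[:n_b] - m_long[:n_b])
    # Forward search: drop the last a3 values of M_short, align tail-to-tail.
    fwd_s = m_short[:len(m_short) - a3]
    n_f = min(len(fwd_s), len(m_long))
    forward = np.abs(fwd_s[-n_f:] - m_long[-n_f:])
    # Combine: element-wise maximum over the common length.
    n = min(len(backward), len(forward))
    return np.maximum(backward[:n], forward[:n])

widths = np.ones(100)
widths[40:45] = 0.0          # simulate an abrupt short missing-section defect
feat = s3_feature(widths, a3=1)
print(feat.max() > 0)        # the abrupt change produces a large difference
```

Because the two pool sizes smooth the abrupt change to different degrees, their difference is largest around the defect, which is exactly what the feature is designed to highlight.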

It will be appreciated that the average pooling method described above effectively highlights the characteristics of abrupt-change defects, which facilitates the subsequent classification.

Refer to Figure 5, which shows the feature sequences of yarns with different defect types in this application. As shown in Figure 5, for each defect type the corresponding feature sequence characterizes the yarn well, laying a reliable foundation for the subsequent defect detection.

S500: Construct the optimal feature vector based on the feature sequences.

Specifically, each feature sequence depends on the variable ai: different values of ai yield different feature sequences. The optimal value of ai therefore needs to be computed.

In one embodiment, the optimal value of ai can be computed with a random frog-leaping algorithm based on partial least squares (PLS). The specific method is described in detail below.

First, the maximum, minimum and mean statistics of each feature sequence can be computed for every candidate value of ai.

For example, the maximum, minimum and mean of the three feature sequences S1(a1), S2(a2) and S3(a3) obtained in step S400 can be computed for variable values from 1 to N and combined into the feature sequence combination V(L). Taking N = 5 as an example, see the following formula.

V(L) = [L1(S1) L2(S2) L3(S3)]

From the formula above, the size of the sequence V(L) is 1×45. A PLS-based random frog-leaping algorithm can then be used to find the optimal values of a1, a2 and a3 within V(L), from which the feature vector L = {l[S1(a1)] l[S2(a2)] l[S3(a3)]} is rebuilt.
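Assembling the 1×45 candidate combination V(L) can be sketched as follows. The function `make_sequence` is a hypothetical placeholder for the real S1/S2/S3 sequences from step S400; the point is only the bookkeeping: 3 sequences × 5 values of a × 3 statistics = 45 entries.

```python
import numpy as np

def make_sequence(i, a, rng):
    # Hypothetical stand-in for the feature sequence S_i(a); real sequences
    # would come from the pooling steps of S400.
    return rng.random(50 - a)

rng = np.random.default_rng(0)
N = 5
stats = []
for i in (1, 2, 3):                  # the three feature sequences
    for a in range(1, N + 1):        # candidate values a = 1..N
        s = make_sequence(i, a, rng)
        stats.extend([s.max(), s.min(), s.mean()])

v = np.array(stats)
print(v.shape)                       # (45,) -> the 1x45 combination V(L)
```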

The PLS-based random frog-leaping algorithm is implemented as follows:

Step 1: Initialize, including a subset V0 containing Q variables. The number of iterations is 1000, the number of variables Q = 5, η = 0.1 and ω = 3.

Step 2: Randomly generate Q* from the normal distribution Norm(Q, θQ), giving a candidate subset V* containing Q* variables.

Step 3: If Q* = Q, then V* = V0. If Q* < Q, build a PLS model with V0, compute the regression coefficient of each variable, delete the Q − Q* variables with the smallest absolute regression coefficients, and let the remaining Q* variables form V*. If Q* > Q, randomly select ω(Q* − Q) variables from V − V0 to form a variable subset S, build a PLS model with V0 and S, compute the regression coefficient of each variable, and let the Q* variables with the largest absolute regression coefficients in that model form V*.

Step 4: Compute the cross-validation root-mean-square errors RMSECV and RMSECV* of V0 and V*. If RMSECV* ≤ RMSECV, take V* as V1; otherwise accept V* as V1 with probability η·RMSECV/RMSECV*. Update V0 with V1 and return to Step 2 until the iterations are exhausted.
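The acceptance rule of Step 4 can be sketched as a small helper: a candidate subset that performs at least as well is always accepted, while a worse one is accepted only with probability η·RMSECV/RMSECV*.

```python
import random

def accept_candidate(rmsecv_current, rmsecv_candidate, eta=0.1, rng=random):
    # Better (or equal) candidate: always accept.
    if rmsecv_candidate <= rmsecv_current:
        return True
    # Worse candidate: accept with probability eta * RMSECV / RMSECV*.
    return rng.random() < eta * rmsecv_current / rmsecv_candidate

print(accept_candidate(0.5, 0.4))            # better candidate -> True
print(accept_candidate(0.4, 0.5, eta=0.0))   # worse candidate, eta=0 -> False
```

Allowing occasional acceptance of worse subsets keeps the search from getting stuck in a local optimum, in the spirit of simulated annealing.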

Step 5: After the iterations are complete, compute the selection probability of each variable. Let Nj denote the number of times the j-th variable was selected into the variable subsets; the selection probability of each variable is Nj divided by the number of iterations.

Based on the selection probability of each variable, the probability sums of [S1(a1)] (a1 = 1, 2, 3, 4, 5), [S2(a2)] (a2 = 1, 2, 3, 4, 5) and [S3(a3)] (a3 = 1, 2, 3, 4, 5) are computed separately; the values of a1, a2 and a3 with the highest probabilities among [S1(ai)], [S2(ai)] and [S3(ai)] are selected, and the feature vector L, i.e. the optimal feature vector, is finally rebuilt.
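Picking the best a for each sequence can be sketched as follows. The selection counts here are random placeholders, and the grouping assumes each (sequence, a) pair owns the 3 statistics (max/min/mean) of the 45 variables, as laid out in V(L).

```python
import numpy as np

iterations = 1000
rng = np.random.default_rng(1)
# Hypothetical selection counts N_j, arranged as (sequence, a value, statistic).
counts = rng.integers(0, iterations, size=(3, 5, 3))
probs = counts / iterations                 # selection probability N_j / iterations
# Sum the probabilities of the 3 statistics per (sequence, a), then pick the
# value of a with the highest total for each of S1, S2, S3.
best_a = probs.sum(axis=2).argmax(axis=1) + 1
print(best_a)                               # best a1, a2, a3, each in 1..5
```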

It should be noted that the embodiment above merely illustrates computing the optimal solution for all three feature sequences simultaneously, in which case the resulting optimal feature vector is a 1×9 vector. In other embodiments, the PLS-based random frog-leaping algorithm may equally be applied to a single feature sequence, or to two feature sequences, and the effects of this embodiment can still be achieved.

S600: Input the optimal feature vector into a deep-learning neural network and output the characteristic attributes of the yarn.

The characteristic attributes indicate either a normal yarn or the type of yarn defect.

In one embodiment, the deep-learning neural network can be a two-hidden-layer ANN classifier, comprising an input layer and an output layer at the two ends and two hidden layers in between.

Specifically, refer to Figure 6, which is a structural diagram of one embodiment of the deep-learning neural network of this application. As shown in Figure 6, for the 1×9 optimal feature vector obtained in step S500, the input layer can include 9 neurons and the output layer 6 neurons, corresponding respectively to the normal state and the five defect types. The hidden layers can be fully connected layers of 10 neurons each. The optimal feature vector is fed into the input layer, passes through the two fully connected layers, and the output layer produces the classification, giving the defect type of the yarn.

Specifically, the training method of the deep-learning neural network described above includes:

obtaining a sample training set comprising several groups of optimal feature vectors of yarn images annotated with label values;

feeding in the sample training set and, based on the cross-entropy loss function and the label values, updating the weights of the network along the direction of gradient descent until the cross-entropy loss function converges, thereby obtaining the trained deep-learning neural network.

Specifically, to perform classification, the network carries out forward-propagation and back-propagation computations during training.

Forward propagation in the two-hidden-layer network refers to the process of passing data from the input layer through each hidden layer to the output layer. The input x enters the first hidden layer and is computed by the following formula,

h1 = σ(w1x + b1)

where w1 and b1 are the weights and bias of the first hidden layer, and σ is the ReLU (Rectified Linear Unit) activation function, σ = max(0, x).

The result h1 is passed to the second hidden layer and computed by the following formula,

h2 = σ(w2h1 + b2)

where w2 and b2 are the weights and bias of the second hidden layer.

The result h2 is passed to the output layer and computed as

y = softmax(w3h2 + b3)

where w3 and b3 are the weights and bias of the output layer, and softmax is the activation function, softmax(z)j = e^(zj) / Σk e^(zk), which

converts the network output into a probability distribution between 0 and 1. The output y is a six-dimensional vector whose components correspond to normal yarn, thick sections, thin sections, long missing sections, short missing sections and decorative-yarn interleaving, finally achieving yarn classification based on the feature vector L.
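The forward pass described above (9 → 10 → 10 → 6, ReLU hidden layers, softmax output) can be sketched in a few lines. The weights here are random placeholders, not trained values.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative random parameters for the 9 -> 10 -> 10 -> 6 architecture.
w1, b1 = rng.standard_normal((10, 9)), np.zeros(10)
w2, b2 = rng.standard_normal((10, 10)), np.zeros(10)
w3, b3 = rng.standard_normal((6, 10)), np.zeros(6)

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())      # shift by the max for numerical stability
    return e / e.sum()

x = rng.standard_normal(9)       # stand-in for the 1x9 optimal feature vector
h1 = relu(w1 @ x + b1)           # first hidden layer
h2 = relu(w2 @ h1 + b2)          # second hidden layer
y = softmax(w3 @ h2 + b3)        # probability over the 6 classes
print(y.shape, round(y.sum(), 6))
```

The output is a length-6 vector of non-negative values summing to 1, i.e. a probability distribution over the normal state and the five defect types.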

Back propagation in the two-hidden-layer network computes the error between the label values and the output, passes the error backwards through the network, and updates the weights w and biases b of each layer.

① Compute the error of the output layer

The deviation E between the classification result and the expectation is computed from the cross-entropy loss function of the classification result and the true labels:

E = −(1/m) Σi Σj tij·log(yij)

where m is the number of training samples, C is the number of classes, and tij and yij are respectively the true label of the j-th class of the i-th training sample and the probability the model predicts for the j-th class. The goal of the cross-entropy loss is to minimize the gap between the predicted and true values: the closer the predicted probabilities are to the true labels, the closer the loss is to 0, and vice versa. The cross-entropy loss therefore helps the model fit the training data and improves the accuracy of the classification task.
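The averaged cross-entropy loss above can be written directly; the small `eps` guards against log(0) and is our addition, not part of the source.

```python
import numpy as np

def cross_entropy(t, y, eps=1e-12):
    # E = -(1/m) * sum_i sum_j t_ij * log(y_ij)
    return -np.mean(np.sum(t * np.log(y + eps), axis=1))

t = np.array([[0, 1, 0], [1, 0, 0]], dtype=float)          # one-hot labels, m=2, C=3
y_good = np.array([[0.1, 0.8, 0.1], [0.9, 0.05, 0.05]])    # mostly correct
y_bad = np.array([[0.8, 0.1, 0.1], [0.05, 0.9, 0.05]])     # mostly wrong
print(cross_entropy(t, y_good) < cross_entropy(t, y_bad))  # True
```

As the text states, predictions closer to the true labels drive the loss toward 0, while confident wrong predictions inflate it.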

② Compute the error of the second hidden layer

The error can be passed back along the network to the second hidden layer; through the chain rule, the second-hidden-layer error δ2 is obtained from the output-layer error and the output-layer weights w3.

Here, δ2 denotes the error of the second hidden layer.

③ Compute the error of the first hidden layer

The second-hidden-layer error δ2 can likewise be passed back along the network to the first hidden layer to compute the error of the first hidden layer.

Here, δ1 denotes the error of the first hidden layer, and (δ2 > 0) is an indicator function whose value is 1 when δ2 > 0 and 0 when δ2 < 0.

④ Update the weights and biases

The errors are used to update the weights and biases. The updates can be realized with the gradient descent algorithm, i.e. the weights and biases are adjusted in the direction that reduces the error: w ← w − lr·(∂E/∂w), b ← b − lr·(∂E/∂b),

where lr is the learning rate; the step size can be adjusted according to the error at each update.

After the weights and biases are updated by the back-propagation algorithm, forward propagation is computed again, and the process iterates until the deviation converges in each round. Recording all the weights and biases for forward propagation establishes the chenille-yarn defect classification model, which is then used to inspect chenille yarn.
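One full back-propagation step for this network can be sketched as follows. This is our reading of steps ①–④, not the patent's exact computation: with a softmax output and cross-entropy loss, the output-layer error simplifies to y − t, and the ReLU indicator (h > 0) plays the role of the indicator terms in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
# Small random parameters for the 9 -> 10 -> 10 -> 6 network (illustrative).
w1, b1 = rng.standard_normal((10, 9)) * 0.1, np.zeros(10)
w2, b2 = rng.standard_normal((10, 10)) * 0.1, np.zeros(10)
w3, b3 = rng.standard_normal((6, 10)) * 0.1, np.zeros(6)
lr = 0.1

def step(x, t):
    """One forward pass, back-propagation and gradient-descent update."""
    global w1, b1, w2, b2, w3, b3
    h1 = np.maximum(0.0, w1 @ x + b1)
    h2 = np.maximum(0.0, w2 @ h1 + b2)
    z = w3 @ h2 + b3
    y = np.exp(z - z.max()); y /= y.sum()     # softmax
    d3 = y - t                                # (1) output-layer error
    d2 = (w3.T @ d3) * (h2 > 0)               # (2) second-hidden-layer error
    d1 = (w2.T @ d2) * (h1 > 0)               # (3) first-hidden-layer error
    w3 -= lr * np.outer(d3, h2); b3 -= lr * d3  # (4) gradient-descent updates
    w2 -= lr * np.outer(d2, h1); b2 -= lr * d2
    w1 -= lr * np.outer(d1, x);  b1 -= lr * d1
    return -np.log(y[t.argmax()] + 1e-12)     # cross-entropy for this sample

x = rng.standard_normal(9)       # stand-in feature vector
t = np.eye(6)[2]                 # one-hot label: class index 2
losses = [step(x, t) for _ in range(50)]
print(losses[-1] < losses[0])    # loss decreases on this sample
```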

This application further provides a machine-vision-based yarn defect detection device. Refer to Figure 7, which is a structural diagram of one embodiment of the machine-vision-based yarn defect detection device of this application.

The device includes an acquisition module 21, a binarization module 22, a dimensionality-reduction module 23, a mean-pooling module 24, a construction module 25 and a classification output module 26.

The acquisition module 21 is configured to acquire an image of the yarn;

the binarization module 22 is configured to binarize the image of the yarn to obtain a binarized image of the yarn;

the dimensionality-reduction module 23 is configured to reduce the dimensionality of the binarized image to obtain a one-dimensional sequence of yarn width values;

the mean-pooling module 24 is configured to perform average pooling on the one-dimensional sequence to obtain feature sequences, where the pooling size of the average pooling is Pi = f(ai), ai = 1, 2, 3...N;

the construction module 25 is configured to construct the optimal feature vector based on the feature sequences;

the classification output module 26 is configured to input the optimal feature vector into the deep-learning neural network and output the characteristic attributes of the yarn, the characteristic attributes including normal or the type of yarn defect.

The machine-vision-based yarn defect detection method according to the embodiments of this specification has been described above with reference to Figures 1 to 6. The details mentioned in the description of the method embodiments apply equally to the machine-vision-based yarn defect detection device of the embodiments of this specification. The device above may be implemented in hardware, in software, or in a combination of hardware and software.

Figure 8 is a hardware structure diagram of one embodiment of the electronic device of this application. As shown in Figure 8, the electronic device 30 may include at least one processor 31, a storage 32 (e.g. a non-volatile memory), a memory 33 and a communication interface 34, connected together via a bus 35. The at least one processor 31 executes at least one computer-readable instruction stored or encoded in the storage 32.

It should be understood that the computer-executable instructions stored in the storage 32, when executed, cause the at least one processor 31 to perform the operations and functions described above in connection with Figures 1 to 4 in the embodiments of this specification.

In the embodiments of this specification, the electronic device 30 may include, but is not limited to: a personal computer, a server computer, a workstation, a desktop computer, a laptop computer, a notebook computer, a mobile electronic device, a smartphone, a tablet computer, a cellular phone, a personal digital assistant (PDA), a handheld device, a messaging device, a wearable electronic device, a consumer electronic device, and so on.

According to one embodiment, a program product such as a machine-readable medium is provided. The machine-readable medium may carry instructions (i.e. the elements implemented in software above) which, when executed by a machine, cause the machine to perform the operations and functions described above in connection with Figures 1 to 4 in the embodiments of this specification. Specifically, a system or device equipped with a readable storage medium may be provided, on which software program code implementing the functions of any of the embodiments above is stored, and the computer or processor of the system or device reads and executes the instructions stored in the readable storage medium.

In this case, the program code read from the readable medium can itself implement the functions of any of the embodiments above, so the machine-readable code and the readable storage medium storing it form part of this specification.

Examples of readable storage media include floppy disks, hard disks, magneto-optical disks, optical disks (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW), magnetic tapes, non-volatile memory cards and ROM. Alternatively, the program code may be downloaded from a server computer or from the cloud over a communication network.

Those skilled in the art will appreciate that various variations and modifications may be made to the embodiments disclosed above without departing from the essence of the invention. The scope of protection of this specification should therefore be defined by the appended claims.

It should be noted that not all steps and units in the flows and system structure diagrams above are required; some steps or units may be omitted according to actual needs. The execution order of the steps is not fixed and may be determined as needed. The device structures described in the embodiments above may be physical or logical structures: some units may be implemented by the same physical entity, some units may be implemented by several physical entities, or units may be implemented jointly by certain components of several independent devices.

In the embodiments above, hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module or processor may include permanently dedicated circuits or logic (such as a dedicated processor, an FPGA or an ASIC) to complete the corresponding operations. A hardware unit or processor may also include programmable logic or circuits (such as a general-purpose processor or another programmable processor) that can be temporarily configured by software to complete the corresponding operations. The specific implementation (mechanical, dedicated permanent circuit, or temporarily configured circuit) may be chosen based on cost and time considerations.

The detailed description set forth above in conjunction with the drawings describes exemplary embodiments but does not represent all embodiments that can be implemented or that fall within the scope of the claims. The term "exemplary" used throughout this specification means "serving as an example, instance or illustration" and does not mean "preferred" or "advantageous" over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology; however, the technology may be practiced without these specific details. In some instances, well-known structures and devices are shown in block-diagram form to avoid obscuring the concepts of the described embodiments.

The above description of this disclosure is provided to enable any person of ordinary skill in the art to implement or use it. Various modifications to this disclosure will be apparent to those of ordinary skill in the art, and the general principles herein may be applied to other variations without departing from the scope of protection of this disclosure. This disclosure is therefore not limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1.一种基于机器视觉的纱线瑕疵检测方法,其特征在于,包括:1. A yarn defect detection method based on machine vision, which is characterized by including: 获取纱线的图像;Get an image of yarn; 对所述纱线的图像进行二值化处理,得到所述纱线的二值化图像;Perform binarization processing on the image of the yarn to obtain a binary image of the yarn; 对所述二值化图像进行降维处理,得到所述纱线宽度值的一维序列;Perform dimensionality reduction processing on the binarized image to obtain a one-dimensional sequence of yarn width values; 对所述一维序列进行平均值池化,得到特征序列,其中,所述平均值池化的池化尺寸Pi=f(ai),ai=1,2,3...N;Perform average pooling on the one-dimensional sequence to obtain a feature sequence, wherein the pooling size of the average pooling is P i =f(a i ), a i =1, 2, 3...N; 基于所述特征序列,构建最优特征向量;Based on the feature sequence, construct an optimal feature vector; 将所述最优特征向量输入深度学习神经网络,输出所述纱线的特征属性,其中,所述特征属性包括正常或所述纱线瑕疵的类型。The optimal feature vector is input into a deep learning neural network and the characteristic attributes of the yarn are output, where the characteristic attributes include normal or the type of yarn defect. 2.根据权利要求1所述的纱线瑕疵检测方法,其特征在于,所述对所述纱线的图像进行二值化处理,得到所述纱线的二值化图像的步骤包括:2. The yarn defect detection method according to claim 1, wherein the step of performing binarization processing on the image of the yarn to obtain the binary image of the yarn includes: 基于OSTU阈值法计算所述纱线的图像的OSTU阈值TOSTUCalculate the OSTU threshold T OSTU of the image of the yarn based on the OSTU threshold method; 基于以下公式对所述OSTU阈值进行修正,得到修正阈值Tg-global-OSTUThe OSTU threshold is corrected based on the following formula to obtain the corrected threshold T g-global-OSTU : 式中,Vq是所述纱线的图像的所有像素点中灰度值相同数量最多的像素点的灰度值,Zx是所述纱线的图像的所有像素点中灰度值最小的像素点的灰度值; In the formula, V q is the gray value of the pixel with the largest number of the same gray value among all the pixels in the image of the yarn, and Z x is the smallest gray value among all the pixels in the image of the yarn. 
The gray value of the pixel; 基于所述修正阈值对所述图像进行二值化处理,得到所述纱线的二值化图像。The image is binarized based on the correction threshold to obtain a binarized image of the yarn. 3.根据权利要求1所述的纱线瑕疵检测方法,其特征在于,所述对所述二值化图像进行降维处理,得到所述纱线宽度值的一维序列的步骤包括:3. The yarn defect detection method according to claim 1, wherein the step of performing dimensionality reduction processing on the binary image to obtain a one-dimensional sequence of yarn width values includes: 以所述二值化图像的背景像素为值1,前景像素为值0,得到所述二值化图像的矩阵Xn*m,其中n为所述二值化图像在所述纱线宽度方向上的像素数量,m为所述二值化图像在所述纱线长度方向上的像素数量;Taking the background pixel of the binary image as value 1 and the foreground pixel as value 0, the matrix X n*m of the binary image is obtained, where n is the width direction of the yarn in the binary image The number of pixels on, m is the number of pixels of the binary image in the length direction of the yarn; 基于以下公式,计算得到所述纱线宽度值的一维序列D:Based on the following formula, the one-dimensional sequence D of the yarn width value is calculated: D=[n n...n]1*m-[1 1...1]1*n*Xn*mD=[n n...n] 1*m -[1 1...1] 1*n *X n*m . 4.根据权利要求1所述的纱线瑕疵检测方法,其特征在于,所述纱线瑕疵包括细节瑕疵和长缺节瑕疵,所述对所述一维序列进行平均值池化,得到特征序列的步骤包括:4. The yarn defect detection method according to claim 1, wherein the yarn defects include detail defects and long missing joint defects, and the one-dimensional sequence is averaged and pooled to obtain a feature sequence. The steps include: 对所述一维序列进行平均值池化,池化尺寸得到获得特征序列S1(a1),其中,a1=1,2,3...N1,m为所述一维序列的长度,/> Perform average pooling on the one-dimensional sequence, and the pooling size Obtain the characteristic sequence S 1 (a 1 ), where a 1 =1, 2, 3...N 1 , m is the length of the one-dimensional sequence, /> 5.根据权利要求1所述的纱线瑕疵检测方法,其特征在于,所述纱线瑕疵包括粗节瑕疵,所述对所述一维序列进行平均值池化,得到特征序列的步骤包括:5. 
The yarn defect detection method according to claim 1, characterized in that the yarn defects include slub defects, and the step of performing average pooling on the one-dimensional sequence to obtain the feature sequence includes: 对所述一维序列进行平均值池化,池化尺寸得到特征序列S2(a2),其中,a2=1,2,3...N2,m为所述一维序列的长度,/> Perform average pooling on the one-dimensional sequence, and the pooling size Obtain the characteristic sequence S 2 (a 2 ), where a 2 =1, 2, 3...N 2 , m is the length of the one-dimensional sequence, /> 6.根据权利要求1所述的纱线瑕疵检测方法,其特征在于,所述纱线瑕疵包括短缺节瑕疵和饰纱交错瑕疵,所述对所述一维序列进行平均值池化,得到特征序列的步骤包括:6. The yarn defect detection method according to claim 1, characterized in that the yarn defects include missing knot defects and decorative yarn staggered defects, and the one-dimensional sequence is averaged and pooled to obtain features. The steps of the sequence include: 对所述一维序列分别进行两次平均值池化,两次平均值池化的池化尺寸分别为P3-1=4a3+2、P3-2=2a3+1,得到序列Mshort和Mlong,其中,a3=1,2,3...N3m为所述一维序列的长度;Perform average pooling twice on the one-dimensional sequence. The pooling sizes of the two average poolings are P 3-1 =4a 3 +2 and P 3-2 =2a 3 +1 respectively, and the sequence M is obtained. short and M long , where a 3 =1, 2, 3...N 3 , m is the length of the one-dimensional sequence; 分别对所述序列Mshort和Mlong进行后向搜索求差和前向搜索求差,得到后向求差序列和前向求差序列;Perform backward search and difference and forward search and difference on the sequences M short and M long respectively to obtain a backward difference sequence and a forward difference sequence; 比对所述后向求差序列和前向求差序列,取所述后向求差序列和所述前向求差序列每一对应位置的较大值构建特征序列S3(a3)。Compare the backward difference sequence and the forward difference sequence, and take the larger value of each corresponding position of the backward difference sequence and the forward difference sequence to construct the feature sequence S 3 (a 3 ). 7.根据权利要求6所述的纱线瑕疵检测方法,其特征在于,所述分别对所述序列Mshort和Mlong进行后向搜索求差和前向搜索求差,得到后向求差序列和前向求差序列的步骤包括:7. 
The yarn defect detection method according to claim 6, wherein the backward search difference and the forward search difference are performed on the sequences M short and M long respectively to obtain a backward difference sequence. The steps for summing up the forward difference sequence include: 去除所述序列Mshort的头端的前a3个数值,由头端依序比对所述序列Mshort和Mlong直至尾端,得到后向求差序列S3-1=|Mshort[a3+1:m+1-4a3]-Mlong[1:m-1-5a3]|,其中,m为所述一维序列的长度;The first a 3 values at the head end of the sequence M short are removed, and the sequences M short and M long are sequentially compared from the head end to the tail end to obtain the backward difference sequence S 3-1 =|M short [a 3 +1:m+1-4a 3 ]-M long [1:m-1-5a 3 ]|, where m is the length of the one-dimensional sequence; 去除所述序列Mshort的尾端的后a3个数值,由尾端依序比对所述序列Mshort和Mlong直至头端,得到前向求差序列S3-2=|Mshort[1:m-1-5a3]-Mlong[3a3+2:m-2a3]|,其中,m为所述一维序列的长度。Remove the last 3 values of the tail end of the sequence M short , and compare the sequences M short and M long in sequence from the tail end to the head end, and obtain the forward difference sequence S 3-2 =|M short [1 :m-1-5a 3 ]-M long [3a 3 +2: m-2a 3 ]|, where m is the length of the one-dimensional sequence. 8.根据权利要求1所述的纱线瑕疵检测方法,其特征在于,所述基于所述特征序列,构建最优特征向量的步骤包括:8. The yarn defect detection method according to claim 1, wherein the step of constructing an optimal feature vector based on the feature sequence includes: 基于偏最小二乘的随机蛙跳算法计算ai的最优解;The stochastic leapfrog algorithm based on partial least squares calculates the optimal solution of a i ; 基于所述最优解,计算所述特征序列的最大值、最小值和平均值,收集得到所述最优特征向量。Based on the optimal solution, the maximum value, minimum value and average value of the feature sequence are calculated, and the optimal feature vector is collected. 9.根据权利要求1所述的纱线瑕疵检测方法,其特征在于,所述将所述最优特征向量输入深度学习神经网络,输出所述纱线的特征属性的步骤中,所述神经网络为双层ANN分类器,且所述深度学习神经网络的训练方法包括:9. 
The yarn defect detection method according to claim 1, characterized in that, in the step of inputting the optimal feature vector into a deep learning neural network and outputting the characteristic attributes of the yarn, the neural network is a two-layer ANN classifier, and the training method of the deep learning neural network comprises:
obtaining a sample training set, the sample training set comprising the optimal feature vectors of several groups of yarn images annotated with label values;
inputting the sample training set and, based on a cross-entropy loss function and the label values, updating the weights of the deep learning neural network in the direction of gradient descent until the cross-entropy loss function converges, to obtain the trained deep learning neural network.
10. A yarn defect detection device based on machine vision, characterized by comprising:
an acquisition module, configured to acquire an image of the yarn;
a binarization processing module, configured to binarize the image of the yarn to obtain a binarized image of the yarn;
a dimensionality reduction processing module, configured to reduce the dimensionality of the binarized image to obtain a one-dimensional sequence of yarn width values;
a mean pooling module, configured to apply average pooling to the one-dimensional sequence to obtain a feature sequence, wherein the pooling size of the average pooling is Pi = f(ai), ai = 1, 2, 3 ... N;
a construction module, configured to construct an optimal feature vector based on the feature sequence;
a classification output module, configured to input the optimal feature vector into a deep learning neural network and output the characteristic attributes of the yarn, wherein the characteristic attributes include normal or the type of yarn defect.
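The pipeline claimed above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patented implementation: the claims leave the pooling-size function f(ai), the set of pooling scales, and the two-layer ANN dimensions unspecified, so the pool sizes, ReLU hidden layer, and network widths below are assumptions.

```python
import numpy as np

def width_sequence(binary_img):
    """Reduce a binarized yarn image (H x W, yarn pixels = 1) to a
    one-dimensional sequence of per-column yarn widths."""
    return binary_img.sum(axis=0).astype(float)

def mean_pool(seq, pool_size):
    """Average-pool a 1-D sequence with the given pooling window,
    discarding any trailing remainder."""
    n = len(seq) // pool_size
    return seq[: n * pool_size].reshape(n, pool_size).mean(axis=1)

def feature_vector(seq, pool_sizes):
    """Pool the width sequence at several scales Pi = f(ai) and
    concatenate the pooled sequences into one feature vector.
    The particular scales are an assumption; the claim fixes no f."""
    return np.concatenate([mean_pool(seq, p) for p in pool_sizes])

class TwoLayerANN:
    """Two-layer classifier trained by gradient descent on the
    cross-entropy loss, as in the claimed training method."""

    def __init__(self, d_in, d_hidden, n_classes, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (d_in, d_hidden))
        self.b1 = np.zeros(d_hidden)
        self.W2 = rng.normal(0.0, 0.1, (d_hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    def forward(self, X):
        self.h = np.maximum(0.0, X @ self.W1 + self.b1)  # ReLU hidden layer
        z = self.h @ self.W2 + self.b2
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)          # softmax probabilities

    def step(self, X, y):
        """One gradient-descent update; returns the cross-entropy loss."""
        p = self.forward(X)
        n = len(X)
        loss = -np.log(p[np.arange(n), y] + 1e-12).mean()
        dz = p.copy()
        dz[np.arange(n), y] -= 1.0
        dz /= n
        dW2, db2 = self.h.T @ dz, dz.sum(axis=0)
        dh = dz @ self.W2.T
        dh[self.h <= 0] = 0.0                            # ReLU gradient
        dW1, db1 = X.T @ dh, dh.sum(axis=0)
        self.W1 -= self.lr * dW1; self.b1 -= self.lr * db1
        self.W2 -= self.lr * dW2; self.b2 -= self.lr * db2
        return loss
```

In use, each yarn image would be binarized (e.g. by thresholding), reduced to a width sequence, pooled at several scales into a feature vector, and classified; training repeats `step` until the loss stops decreasing, matching the convergence criterion in the claim.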
CN202310950857.9A 2023-07-31 2023-07-31 Yarn defect detection method and device based on machine vision Pending CN116894836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310950857.9A CN116894836A (en) 2023-07-31 2023-07-31 Yarn defect detection method and device based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310950857.9A CN116894836A (en) 2023-07-31 2023-07-31 Yarn defect detection method and device based on machine vision

Publications (1)

Publication Number Publication Date
CN116894836A true CN116894836A (en) 2023-10-17

Family

ID=88313460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310950857.9A Pending CN116894836A (en) 2023-07-31 2023-07-31 Yarn defect detection method and device based on machine vision

Country Status (1)

Country Link
CN (1) CN116894836A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11993868B1 * 2023-09-15 2024-05-28 Zhejiang Hengyi Petrochemical Co., Ltd. Control method for yarn route inspection equipment, electronic device and storage medium
US12110614B1 2023-09-15 2024-10-08 Zhejiang Hengyi Petrochemical Co., Ltd. Control method for yarn route inspection equipment, electronic device and storage medium
CN118470029A * 2024-07-15 2024-08-09 吴江市兰天织造有限公司 A method for detecting environmentally friendly yarn defects
CN118470029B * 2024-07-15 2025-01-03 吴江市兰天织造有限公司 Environment-friendly yarn defect detection method

Similar Documents

Publication Publication Date Title
CN116894836A (en) Yarn defect detection method and device based on machine vision
CN113670610B (en) Fault detection method, system and medium based on wavelet transform and neural network
Fan et al. Automatic pavement crack detection based on structured prediction with the convolutional neural network
CN111192237B (en) A glue detection system and method based on deep learning
CN113711234B (en) Yarn quality control
CN111429415B (en) A method for constructing an efficient detection model for product surface defects based on network collaborative pruning
CN110287975B (en) Flotation dosing abnormity detection method based on NSST morphological characteristics and depth KELM
CN113269647B (en) Graph-based transaction abnormity associated user detection method
CN108596203B (en) An optimization method for the surface wear detection model of the pantograph carbon sliding plate by parallel pooling layers
CN110245745A (en) Equipment remaining life prediction technique based on integrated bi-directional Recognition with Recurrent Neural Network
CN114282443B (en) Remaining service life prediction method based on MLP-LSTM supervised joint model
CN113077450B (en) Cherry grading detection method and system based on deep convolutional neural network
CN115049627B (en) Steel Surface Defect Detection Method and System Based on Domain Adaptive Deep Migration Network
Ghazvini et al. Defect detection of tiles using 2D-wavelet transform and statistical features
CN113096088A (en) Concrete structure detection method based on deep learning
CN119130269B (en) A surface quality management method for stator core mold
CN118587510B (en) Roller conveying deviation detection method and system
CN118981684B (en) A method and system for out-of-distribution fault detection based on energy propagation and graph learning
CN118505601B (en) Semiconductor equipment wafer defect identification method based on hybrid quantum Al algorithm
CN113177578A (en) Agricultural product quality classification method based on LSTM
CN117058115A (en) Cable stranded wire quality detection method and system based on image
CN116152194A (en) Object defect detection method, system, equipment and medium
Shih et al. Integrated Image Sensor and Deep Learning Network for Fabric Pilling Classification.
CN115082726A (en) Ceramic biscuit product classification method for toilet based on PointNet optimization
Zhao et al. Research on glass relics based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination