CN116894836A - Yarn flaw detection method and device based on machine vision - Google Patents

Yarn flaw detection method and device based on machine vision

Info

Publication number
CN116894836A
Authority
CN
China
Prior art keywords
yarn
sequence
image
value
pooling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310950857.9A
Other languages
Chinese (zh)
Inventor
徐云
杨承翰
张建鹏
张建新
陈宥融
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Taitan Co ltd
Zhejiang Sci Tech University ZSTU
Original Assignee
Zhejiang Taitan Co ltd
Zhejiang Sci Tech University ZSTU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Taitan Co ltd, Zhejiang Sci Tech University ZSTU filed Critical Zhejiang Taitan Co ltd
Priority to CN202310950857.9A priority Critical patent/CN116894836A/en
Publication of CN116894836A publication Critical patent/CN116894836A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30124 Fabrics; Textile; Paper

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Treatment Of Fiber Materials (AREA)

Abstract

The application discloses a yarn flaw detection method and device based on machine vision. The method comprises: acquiring an image of the yarn; performing binarization processing on the image to obtain a binarized image of the yarn; performing dimension reduction on the binarized image to obtain a one-dimensional sequence of yarn width values; carrying out mean pooling on the one-dimensional sequence to obtain a feature sequence; constructing an optimal feature vector based on the feature sequence; and inputting the optimal feature vector into a deep learning neural network, which outputs the feature attribute of the yarn. The application realizes automatic flaw detection during yarn production, solving the problems that traditional yarn flaw detection methods have easily affected detection accuracy, a low automation level, and frequent false detection, and it offers low computing-resource occupancy, high detection speed, and high recognition accuracy.

Description

Yarn flaw detection method and device based on machine vision
Technical Field
The application belongs to the technical field of textile computer detection, and particularly relates to a yarn flaw detection method and device based on machine vision.
Background
During yarn production, the yarn is affected by the mechanical transmission equipment, the spinning raw material, and other factors, so yarn flaws such as neps and thin places (details) inevitably occur. Detecting flaws in the yarn during production therefore has important research significance for controlling yarn production quality and improving yarn production efficiency.
At present, the common yarn flaw detection methods mainly include photoelectric detection, capacitive detection, and manual visual inspection.
A photoelectric detection system generally comprises a light emitter, an optical system, and a light receiver. After passing through the optical system, the infrared light generated by the emitter forms a detection area with a uniform light field, and the light receiver converts the light energy into an analog output. When the yarn runs through the detection area, part of the light is blocked, so the light energy received by the light receiver decreases, and the change in light energy reflects the diameter of the yarn in the detection area. The light receiver converts the light energy into an electrical signal whose magnitude corresponds to the yarn diameter. The detection accuracy of the photoelectric method is easily affected by aging of the photoelectric devices, light transmission through yarn hairiness, and the like.
The capacitive detection method consists of two vertical capacitor plates. When the yarn passes, without contact, through the detection area formed by the two plates, the added fiber medium changes the dielectric constant between the plates, so the capacitance of the plates changes, indirectly reflecting variations in the yarn. The method can detect common yarns, but its accuracy is easily affected by ambient air humidity, the moisture content of the yarn, non-uniformity of the electric field in the gap, and the like.
In manual visual inspection, an inspector samples yarns from the same batch and determines the types and numbers of yarn flaws by experience and visual examination. This method has a low automation level, its results depend on the subjectivity of the operator, and false detection occurs easily.
Disclosure of Invention
The application aims to provide a yarn flaw detection method and device based on machine vision, to solve the technical problems of prior-art yarn flaw detection methods: detection accuracy that is easily affected, a low automation level, and susceptibility to false detection.
In order to achieve the above purpose, a technical scheme adopted by the application is as follows:
the yarn flaw detection method based on machine vision comprises the following steps:
acquiring an image of the yarn;
performing binarization processing on the image of the yarn to obtain a binarized image of the yarn;
performing dimension reduction treatment on the binarized image to obtain a one-dimensional sequence of the yarn width value;
carrying out mean pooling on the one-dimensional sequence to obtain a feature sequence, wherein the pooling size of the mean pooling is P_i = f(a_i), a_i = 1, 2, 3, ..., N;
Constructing an optimal feature vector based on the feature sequence;
inputting the optimal feature vector into a deep learning neural network, and outputting the feature attribute of the yarn, wherein the feature attribute comprises normal yarn or a yarn flaw type.
In one or more embodiments, the step of binarizing the image of the yarn to obtain a binarized image of the yarn includes:
calculating the Otsu threshold T_OSTU of the image of the yarn based on the Otsu thresholding method;
correcting the Otsu threshold to obtain the corrected threshold T_g-global-OSTU, which is a function of T_OSTU, V_q, and Z_x, wherein V_q is the gray value that occurs most frequently among all pixels of the image of the yarn, and Z_x is the minimum gray value among all pixels of the image of the yarn;
and carrying out binarization processing on the image based on the corrected threshold to obtain a binarized image of the yarn.
In one or more embodiments, the step of performing a dimension reduction process on the binarized image to obtain the one-dimensional sequence of yarn width values includes:
taking the background pixels of the binarized image as the value 1 and the foreground pixels as the value 0 to obtain a matrix X_{n×m} of the binarized image, wherein n is the number of pixels of the binarized image in the yarn width direction, and m is the number of pixels of the binarized image in the yarn length direction;
calculating the one-dimensional sequence D of yarn width values based on the following formula:
D = [n n ... n]_{1×m} - [1 1 ... 1]_{1×n} · X_{n×m}
In one or more embodiments, the yarn flaws include detail defects and long slub defects, and the step of mean-pooling the one-dimensional sequence to obtain a feature sequence includes:
performing mean pooling on the one-dimensional sequence with pooling size P_1 = ⌊m/(2a_1)⌋ to obtain the feature sequence S_1(a_1), wherein a_1 = 1, 2, 3, ..., N_1 and m is the length of the one-dimensional sequence.
In one or more embodiments, the yarn flaws include slub defects, and the step of mean-pooling the one-dimensional sequence to obtain a feature sequence includes:
performing mean pooling on the one-dimensional sequence with pooling size P_2 = 2a_2 to obtain the feature sequence S_2(a_2), wherein a_2 = 1, 2, 3, ..., N_2 and m is the length of the one-dimensional sequence.
In one or more embodiments, the yarn flaws include short slub defects and decorative-yarn interlacing defects, and the step of mean-pooling the one-dimensional sequence to obtain a feature sequence includes:
performing mean pooling on the one-dimensional sequence twice, with pooling sizes P_{3-1} = 4a_3 + 2 and P_{3-2} = 2a_3 + 1, to obtain sequences M_short and M_long, wherein a_3 = 1, 2, 3, ..., N_3 and m is the length of the one-dimensional sequence;
performing backward-search differencing and forward-search differencing on the sequences M_short and M_long to obtain a backward difference sequence and a forward difference sequence;
comparing the backward difference sequence with the forward difference sequence and taking the larger value at each corresponding position of the two to construct the feature sequence S_3(a_3).
In one or more embodiments, the step of performing backward-search differencing and forward-search differencing on the sequences M_short and M_long to obtain a backward difference sequence and a forward difference sequence includes:
removing the first a_3 values from the head of the sequence M_short, then differencing the sequences M_short and M_long element by element from the head to the tail to obtain the backward difference sequence S_{3-1} = |M_short[a_3+1 : m+1-4a_3] - M_long[1 : m-1-5a_3]|, wherein m is the length of the one-dimensional sequence;
removing the last a_3 values from the tail of the sequence M_short, then differencing the sequences M_short and M_long element by element from the tail to the head to obtain the forward difference sequence S_{3-2} = |M_short[1 : m-1-5a_3] - M_long[3a_3+2 : m-2a_3]|, wherein m is the length of the one-dimensional sequence.
In one or more embodiments, the step of constructing an optimal feature vector based on the feature sequence includes:
calculating the optimal solution of a_i using a random frog-leaping algorithm based on partial least squares;
and calculating the maximum value, the minimum value, and the average value of the feature sequence based on the optimal solution, and assembling them to obtain the optimal feature vector.
In one or more embodiments, in the step of inputting the optimal feature vector into a deep learning neural network and outputting the feature attribute of the yarn, the neural network is a dual-layer ANN classifier, and the training method of the deep learning neural network includes:
acquiring a sample training set, wherein the sample training set comprises a plurality of groups of optimal feature vectors of yarn images annotated with label values;
and inputting the sample training set, and updating the weights of the deep learning neural network along the gradient-descent direction based on the cross-entropy loss function and the label values until the cross-entropy loss function converges, to obtain the trained deep learning neural network.
In order to achieve the above purpose, another technical scheme adopted by the application is as follows:
provided is a yarn flaw detection device based on machine vision, comprising:
the acquisition module is used for acquiring an image of the yarn;
the binarization processing module is used for performing binarization processing on the image of the yarn to obtain a binarized image of the yarn;
the dimension reduction processing module is used for carrying out dimension reduction processing on the binarized image to obtain a one-dimensional sequence of the yarn width value;
the mean pooling module is used for carrying out mean pooling on the one-dimensional sequence to obtain a feature sequence, wherein the pooling size of the mean pooling is P_i = f(a_i), a_i = 1, 2, 3, ..., N;
The construction module is used for constructing an optimal feature vector based on the feature sequence;
and the classification output module is used for inputting the optimal feature vector into a deep learning neural network and outputting the feature attribute of the yarn, wherein the feature attribute comprises normal yarn or a yarn flaw type.
Compared with the prior art, the application has the beneficial effects that:
According to the yarn flaw detection method of the application, based on machine vision, the acquired yarn image is binarized and then reduced in dimension to obtain a one-dimensional sequence of yarn width values. Mean pooling with different pooling sizes is applied to the one-dimensional sequence to obtain feature sequences that highlight different flaws; an optimal feature vector is generated from the feature sequences and fed into a deep learning neural network, which automatically outputs the flaw type of the yarn. Automatic flaw detection during yarn production is thereby realized, solving the problems that traditional yarn flaw detection methods have easily affected detection accuracy, a low automation level, and frequent false detection, with the advantages of low computing-resource occupancy, high detection speed, and high recognition accuracy.
Drawings
FIG. 1 is a flow chart of an embodiment of a machine vision-based yarn flaw detection method according to the present application;
FIG. 2 is a graph of defect types for chenille yarns;
FIG. 3 is a schematic diagram of one embodiment of a backward search of the present application;
FIG. 4 is a schematic diagram of one embodiment of a forward search of the present application;
FIG. 5 is a characteristic sequence of yarns of different flaw types in the present application;
FIG. 6 is a schematic diagram of a deep learning neural network according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a machine vision-based yarn flaw detection device according to an embodiment of the present application;
fig. 8 is a hardware configuration diagram of an embodiment of the electronic device of the present application.
Detailed Description
The present application will be described in detail below with reference to the embodiments shown in the drawings. These embodiments do not limit the application; structural, methodological, or functional transformations made by those skilled in the art according to these embodiments all fall within the scope of the application.
As described in the background art, existing yarn flaw detection methods have significant limitations, making it difficult to effectively control yarn production quality and efficiency.
Therefore, the applicant has developed a yarn flaw detection method based on machine vision, which acquires images of the yarn, processes the images, and automatically identifies various yarn flaws. The method offers high detection speed, high recognition accuracy, and low computing-resource occupancy, and can be widely applied to flaw detection of various yarns.
Specifically, referring to fig. 1, fig. 1 is a flow chart illustrating an embodiment of a machine vision-based yarn defect detecting method according to the present application.
The detection method comprises the following steps:
s100, acquiring an image of the yarn.
First, an image of the yarn may be acquired during the production process. The yarn can be in a motion state or a static state.
In one embodiment, to ensure the quality of the acquired yarn image, acquisition may be performed under backlight illumination.
S200, performing binarization processing on the yarn image to obtain a yarn binarization image.
To accurately segment the yarn region in the image, the image may be binarized. In one embodiment, the yarn region may be segmented by a thresholding method; in other embodiments, it may be segmented by other means, for example a deep learning neural network.
Specifically, in one embodiment, the Otsu threshold T_OSTU of the yarn image may first be calculated based on the Otsu thresholding method, and the threshold may then be corrected to improve segmentation accuracy.
The Otsu thresholding method is a conventional threshold segmentation method in the art, and the calculation of the Otsu threshold is not repeated here.
The Otsu threshold is corrected to obtain the corrected threshold T_g-global-OSTU, which is a function of T_OSTU, V_q, and Z_x,
where V_q is the gray value that occurs most frequently among all pixels of the yarn image, and Z_x is the minimum gray value among all pixels of the yarn image.
The acquired yarn image can then be binarized based on the corrected threshold T_g-global-OSTU: the gray value of pixels below the corrected threshold is set to 0, and the gray value of pixels above it is set to 255, yielding the binarized image.
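For illustration, this binarization step can be sketched with OpenCV as follows. This is a minimal sketch, not the patent's exact implementation: the file name yarn.png is hypothetical, and correct() is a placeholder for the patent's correction formula, which appears only in the original drawings and is therefore not reproduced here.

    import cv2
    import numpy as np

    gray = cv2.imread("yarn.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

    # plain Otsu threshold of the gray-level image
    t_otsu, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # V_q: most frequent gray value; Z_x: minimum gray value (as defined above)
    hist = np.bincount(gray.ravel(), minlength=256)
    V_q = int(hist.argmax())
    Z_x = int(gray.min())

    # correct() is a stand-in for the patent's correction formula (not reproduced)
    t_corr = correct(t_otsu, V_q, Z_x)

    # pixels below the corrected threshold become 0, the others 255
    _, binary = cv2.threshold(gray, t_corr, 255, cv2.THRESH_BINARY)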
S300, performing dimension reduction treatment on the binarized image to obtain a one-dimensional sequence of yarn width values.
It will be appreciated that the binarized image comprises the yarn and the background, from which the width of the yarn at any position along its length can be calculated.
Specifically, the background pixels of the binarized image may be taken as the value 1 and the foreground pixels as the value 0, giving the matrix X_{n×m} of the binarized image, where n is the number of pixels of the binarized image in the yarn width direction and m is the number of pixels in the yarn length direction.
Based on the formula D1 = [1 1 ... 1]_{1×n} · X_{n×m}, the number of background pixels along the yarn width direction at each position of the yarn image can be calculated. Subtracting this from the total number of pixels n in the yarn width direction gives the yarn width value; specifically, the one-dimensional sequence D of yarn width values can be calculated by the following formula:
D = [n n ... n]_{1×m} - [1 1 ... 1]_{1×n} · X_{n×m}
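For illustration, this dimension reduction is a one-liner in NumPy. The sketch below assumes binary holds the 0/255 binarized image from the previous sketch, with the yarn dark (0) against a light background (255).

    import numpy as np

    X = (binary == 255).astype(int)  # background pixels take value 1, yarn pixels 0
    n, m = X.shape                   # n: pixels across the width, m: along the length

    # [1 1 ... 1]_{1 x n} * X sums each column, i.e. counts the background pixels;
    # subtracting from n gives the yarn width at each position along the length
    D = n - X.sum(axis=0)            # one-dimensional sequence of yarn width values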
s400, carrying out average value pooling on the one-dimensional sequence to obtain a characteristic sequence.
Here, the pooling size of the mean pooling is P_i = f(a_i), a_i = 1, 2, 3, ..., N.
After the one-dimensional sequence D of yarn width values is obtained, mean pooling can be applied to it to reflect the trend of the yarn width along the length direction, and possible yarn defects can be inferred from this trend.
In one application scenario, taking chenille yarn as an example, five defect types may occur during chenille yarn production: detail (thin-place) defects, long slub defects, slub defects, short slub defects, and decorative-yarn interlacing defects. Specifically, referring to fig. 2, fig. 2 shows the defect types of chenille yarn.
As shown in fig. 2, chenille yarns with different defect types show different trends of the width value along the length direction: detail defects and long slub defects are long-segment defects, slub defects are short-segment defects, and short slub defects and decorative-yarn interlacing defects are abrupt defects. Accordingly, the pooling size of the mean pooling should differ for different defect types: for long-segment defects, each average should cover a large amount of data, so that long-range variations of the yarn width values are highlighted; for short-segment defects, each average should cover less data, so that the defect features are highlighted.
In one embodiment, where the yarn defects include detail defects and long slub defects, the pooling size of the mean pooling may be P_1 = ⌊m/(2a_1)⌋, where a_1 = 1, 2, 3, ..., N_1 and m is the length of the one-dimensional sequence.
Based on this pooling size, a feature sequence S_1(a_1) of length m - P_1 + 1 can be obtained. Illustratively, when m = 100 and a_1 = 1, the pooling size is 50; that is, 50 values are averaged at a time and the window slides one position at a time, yielding a feature sequence S_1(a_1) of 51 values.
In one embodiment, the yarn defects include slub defects, and the pooling size of the mean pooling may be P_2 = 2a_2, where a_2 = 1, 2, 3, ..., N_2 and m is the length of the one-dimensional sequence.
Based on this pooling size, a feature sequence S_2(a_2) of length m - P_2 + 1 can be obtained. Illustratively, when m = 100 and a_2 = 1, the pooling size is 2; that is, 2 values are averaged at a time and the window slides one position at a time, yielding a feature sequence S_2(a_2) of 99 values.
In one embodiment, the yarn defects include short slub defects and decorative-yarn interlacing defects. Owing to the abrupt nature of these two defects, mean pooling can be applied twice with different pooling sizes, and the difference of the two results is then taken to construct the feature sequence.
Specifically, mean pooling can be performed twice on the one-dimensional sequence, with pooling sizes P_{3-1} = 4a_3 + 2 and P_{3-2} = 2a_3 + 1, to obtain the sequences M_short and M_long, where a_3 = 1, 2, 3, ..., N_3 and m is the length of the one-dimensional sequence.
The sequence M_short has length m - 4a_3 - 1 and the sequence M_long has length m - 2a_3. Backward-search differencing and forward-search differencing can then be performed on M_short and M_long to obtain a backward difference sequence and a forward difference sequence.
Specifically, since the two sequences differ in length, the backward-search differencing proceeds as follows: remove the first a_3 values from the head of M_short, then difference M_short and M_long element by element from the head toward the tail, giving the backward difference sequence S_{3-1} = |M_short[a_3+1 : m+1-4a_3] - M_long[1 : m-1-5a_3]|. Referring to fig. 3, fig. 3 is a schematic diagram of a backward search according to an embodiment of the application.
The forward-search differencing proceeds as follows: remove the last a_3 values from the tail of M_short, then difference M_short and M_long element by element from the tail toward the head, giving the forward difference sequence S_{3-2} = |M_short[1 : m-1-5a_3] - M_long[3a_3+2 : m-2a_3]|. Referring to fig. 4, fig. 4 is a schematic diagram of a forward search according to an embodiment of the application.
After the backward difference sequence and the forward difference sequence are obtained, the larger value at each corresponding position of the two sequences is taken to construct the feature sequence S_3(a_3).
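A sketch of the S_3(a_3) construction follows, reusing the mean_pool helper from the previous sketch. The 1-based slice bounds above are hard to recover exactly from the extracted text, so the alignment below (trim a_3 values from the head or tail of M_short, then align the two sequences to a common length) should be read as an assumption rather than the patent's exact indexing.

    def s3_features(D, a3):
        # two mean poolings with different pooling sizes
        M_short = mean_pool(D, 4 * a3 + 2)  # length m - 4*a3 - 1
        M_long = mean_pool(D, 2 * a3 + 1)   # length m - 2*a3

        # backward search: drop the first a3 values of M_short, align at the head
        back = np.abs(M_short[a3:] - M_long[:len(M_short) - a3])

        # forward search: drop the last a3 values of M_short, align at the tail
        fwd = np.abs(M_short[:-a3] - M_long[len(M_long) - (len(M_short) - a3):])

        # S3: the larger value at each corresponding position of the two sequences
        k = min(len(back), len(fwd))
        return np.maximum(back[:k], fwd[:k])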
It will be appreciated that the above mean-pooling method effectively highlights the features of abrupt defects, thereby facilitating the subsequent classification.
Referring to fig. 5, fig. 5 shows the feature sequences of yarns with different defect types in the present application. As shown in fig. 5, for yarns of different defect types, the different feature sequences characterize the yarns well, laying a reliable foundation for the subsequent flaw detection.
S500, constructing an optimal feature vector based on the feature sequence.
Specifically, since each feature sequence depends on the variable a_i, different values of a_i yield different feature sequences. The optimal solution of the variable a_i therefore needs to be calculated.
In one embodiment, the optimal solution of a_i can be calculated based on a random frog-leaping algorithm with partial least squares; the specific method is detailed below.
First, for each possible value of a_i, the maximum value, the minimum value, and the average value of the corresponding feature sequence are calculated.
For example, the three feature sequences S_1(a_1), S_2(a_2), and S_3(a_3) calculated in step S400 above may be used: their maximum, minimum, and average values are calculated for variable values from 1 to N, and a feature sequence combination V(L) is constructed. Taking N = 5 as an example:
V(L) = [L_1(S_1) L_2(S_2) L_3(S_3)]
From the above formula, the dimension of the sequence V(L) is 1 × 45 (3 feature sequences × 5 variable values × 3 statistics). The optimal a_1, a_2, a_3 are then searched within the sequence V(L) based on a random frog-leaping algorithm with partial least squares (PLS), and the feature vector L = {l[S_1(a_1)] l[S_2(a_2)] l[S_3(a_3)]} is reconstructed.
The random frog-leaping algorithm with partial least squares comprises the following implementation steps:
Step 1: Initialization: a subset V0 containing Q variables. The number of iterations is 1000, the number of variables Q = 5, η = 0.1, ω = 3.
Step 2: Randomly draw q from the normal distribution Norm(Q, θQ) and form a candidate subset V containing q variables.
Step 3: If q = Q, then V = V0. If q < Q, build a PLS model using V0, calculate the regression coefficient of each variable, delete the Q - q variables with the smallest absolute regression coefficients, and let the remaining q variables constitute V. If q > Q, randomly select ω(q - Q) variables outside V0 to form a variable subset S, build a PLS model using V0 and S, calculate the regression coefficient of each variable, and retain the q variables with the largest absolute regression coefficients in the PLS model to form V.
Step 4: Calculate the cross-validation root-mean-square errors RMSECV_V0 and RMSECV_V of V0 and V. If RMSECV_V ≤ RMSECV_V0, take V as V1; otherwise accept V as V1 with probability η·RMSECV_V0/RMSECV_V. Update V0 with V1 and return to Step 2 until the iterations terminate.
Step 5: After the iterations are complete, calculate the selection probability of each variable: with N_j denoting the number of times the j-th variable is selected into the variable subset, the selection probability of each variable is N_j divided by the number of iterations.
According to the selection probabilities of the variables, the probability sums of [S_1(a_1)] (a_1 = 1, 2, 3, 4, 5), [S_2(a_2)] (a_2 = 1, 2, 3, 4, 5), and [S_3(a_3)] (a_3 = 1, 2, 3, 4, 5) are calculated respectively; the a_1, a_2, a_3 of the highest-probability [S_1(a_1)], [S_2(a_2)], [S_3(a_3)] are selected, and the feature vector L, i.e., the optimal feature vector, is finally reconstructed.
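For illustration, the selection loop can be sketched with scikit-learn's PLSRegression as below. This is a generic random-frog sketch under the parameters listed above (Q = 5, 1000 iterations, η = 0.1, ω = 3); θ = 0.3 is an assumed value, since θ is not stated in the text, and the acceptance rule follows the usual random-frog formulation.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def rmsecv(X, y, subset, cv=5):
        # cross-validated root-mean-square error of a PLS model on `subset`
        pls = PLSRegression(n_components=min(2, len(subset)))
        pred = cross_val_predict(pls, X[:, subset], y, cv=cv)
        return float(np.sqrt(np.mean((y - pred.ravel()) ** 2)))

    def coef_ranking(X, y, subset):
        # variables of `subset` ordered by decreasing |PLS regression coefficient|
        pls = PLSRegression(n_components=min(2, len(subset))).fit(X[:, subset], y)
        order = np.argsort(-np.abs(pls.coef_.ravel()))
        return [subset[i] for i in order]

    def random_frog(X, y, Q=5, n_iter=1000, eta=0.1, omega=3, theta=0.3, seed=0):
        rng = np.random.default_rng(seed)
        p = X.shape[1]
        V0 = list(rng.choice(p, size=Q, replace=False))              # Step 1
        counts = np.zeros(p)
        for _ in range(n_iter):
            q = int(np.clip(round(rng.normal(Q, theta * Q)), 1, p))  # Step 2
            if q == Q:                                               # Step 3
                V = list(V0)
            elif q < Q:
                V = coef_ranking(X, y, V0)[:q]        # keep largest |coefficients|
            else:
                pool = [j for j in range(p) if j not in V0]
                extra = rng.choice(pool, size=min(omega * (q - Q), len(pool)),
                                   replace=False)
                V = coef_ranking(X, y, V0 + list(extra))[:q]
            e0, e1 = rmsecv(X, y, V0), rmsecv(X, y, V)               # Step 4
            if e1 <= e0 or rng.random() < eta * e0 / e1:
                V0 = V
            counts[np.array(V0)] += 1
        return counts / n_iter                                       # Step 5

Applied to the 1 × 45 combination V(L), the returned selection probabilities can then be summed over each group of five a_i values, and the a_1, a_2, a_3 with the highest probability sums are picked as described above.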
It should be noted that the above embodiment only exemplarily illustrates calculating the optimal solution with three feature sequences simultaneously, in which case the optimal feature vector is a 1 × 9 vector. In other embodiments, the optimal feature vector of a single feature sequence, or of two feature sequences, may be calculated based on the partial-least-squares random frog-leaping algorithm alone, achieving the same effect as this embodiment.
S600, inputting the optimal feature vector into the deep learning neural network, and outputting the feature attribute of the yarn.
Here, the feature attribute comprises normal yarn or a yarn flaw type.
In one embodiment, the deep learning neural network may be a dual-layer ANN classifier, which may include an input layer and an output layer at the two ends and two hidden layers in between.
Specifically, referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of the deep learning neural network of the present application. As shown in fig. 6, in one embodiment, for the 1 × 9 optimal feature vector obtained in step S500, the input layer may include 9 neurons and the output layer may include 6 neurons, corresponding to the normal state and the five defect types. The hidden layers may be fully connected layers, each containing 10 hidden neurons. The optimal feature vector is fed into the input layer, passes through the two fully connected layers, and is then classified by the output layer to yield the defect type of the yarn.
Specifically, the training method of the deep learning neural network includes:
acquiring a sample training set, wherein the sample training set comprises a plurality of groups of optimal feature vectors of yarn images marked with label values;
and inputting the sample training set, and updating the weight of the deep learning neural network along the gradient descending direction based on the cross entropy loss function and the label value until the cross entropy loss function converges to obtain the deep learning neural network.
Specifically, in order to implement classification, the deep learning neural network needs to perform forward propagation and backward propagation calculation in the training process.
Forward propagation of the dual-hidden-layer neural network refers to the process by which data passes from the input layer through the hidden layers to the output layer. The input x is fed as neurons to the first hidden layer and calculated as follows:
h_1 = σ(w_1 x + b_1)
where w_1 and b_1 are the weight and bias of the first hidden layer, respectively, and σ is the ReLU (Rectified Linear Unit) activation function, σ(x) = max(0, x).
The result h_1 is passed to the second hidden layer and calculated with the following formula:
h_2 = σ(w_2 h_1 + b_2)
where w_2 and b_2 are the weight and bias of the second hidden layer, respectively.
The result h_2 is then passed to the output layer for calculation:
y = softmax(w_3 h_2 + b_3)
where w_3 and b_3 are the weight and bias of the output layer, respectively, and softmax is the activation function softmax(z)_j = e^{z_j} / Σ_k e^{z_k},
which converts the network output into a probability distribution between 0 and 1. The output y is a 6-dimensional vector covering normal yarn and the five defect types (details, long slubs, slubs, short slubs, and decorative-yarn interlacing), so that classification of the yarn according to the feature vector L is finally realized.
Back propagation of the dual-hidden-layer neural network updates the weights w and biases b in each layer by calculating the error between the label values and the output values and passing that error backward along the network.
(1) Calculating the error of the output layer
The deviation E between the classification result and the expected value is calculated from the classification result and the cross-entropy loss function of the actual labels:
E = -(1/m) Σ_{i=1..m} Σ_{j=1..C} t_ij log(y_ij)
where m is the number of training samples, C is the number of classes, and t_ij and y_ij are, respectively, the true label and the model's predicted probability of the j-th class for the i-th training sample. The goal of the cross-entropy loss function is to minimize the gap between the predicted values and the true values: the closer the predicted probabilities are to the true labels, the closer the value of the loss function is to 0, and vice versa. The cross-entropy loss function thus helps the model fit the training data better and improves the accuracy of the classification task.
(2) Calculating the error of the second hidden layer
The error can be propagated along the network to the second hidden layer; by the chain rule it can be expressed as
δ_2 = (δ_3 · w_3ᵀ) ⊙ 1(h_2 > 0), with δ_3 = y - t,
where δ_2 is the error of the second hidden layer.
(3) Calculating the error of the first hidden layer
The second-hidden-layer error δ_2 can likewise be propagated along the network to the first hidden layer to calculate its error:
δ_1 = (δ_2 · w_2ᵀ) ⊙ 1(h_1 > 0)
where δ_1 is the error of the first hidden layer, and 1(·) is an indicator function that takes the value 1 when its argument is greater than 0 and the value 0 otherwise.
(4) Updating the weights and biases
The error is used to update the weights and biases. The update can be implemented with a gradient descent algorithm, i.e., the weights and biases are adjusted in the direction that reduces the error:
w ← w - lr · ∂E/∂w,  b ← b - lr · ∂E/∂b
where lr is the learning rate, whose magnitude can be adjusted according to the error at each update.
After the weights and biases are updated with the back-propagation algorithm, forward propagation is calculated again, and this process iterates until the deviation of each round converges. All weights and biases are then recorded for forward propagation; that is, the chenille yarn defect classification model is established, and chenille yarns are detected with this model.
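For illustration, the forward and backward passes above can be collected into a compact NumPy sketch of the 9-10-10-6 classifier. The initialization scale and learning rate are assumptions made for the sketch; x is a batch of optimal feature vectors (shape n × 9) and t the corresponding one-hot labels (shape n × 6).

    import numpy as np

    rng = np.random.default_rng(0)
    sizes = [9, 10, 10, 6]  # input layer, two hidden layers, output layer
    W = [rng.normal(0.0, 0.1, (i, o)) for i, o in zip(sizes[:-1], sizes[1:])]
    b = [np.zeros(o) for o in sizes[1:]]

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def train_step(x, t, lr=0.01):
        # forward propagation: h1 = relu(x W1 + b1), h2 = relu(h1 W2 + b2)
        h1 = np.maximum(0.0, x @ W[0] + b[0])
        h2 = np.maximum(0.0, h1 @ W[1] + b[1])
        y = softmax(h2 @ W[2] + b[2])
        loss = -np.mean(np.sum(t * np.log(y + 1e-12), axis=1))  # cross-entropy E

        # backward propagation: softmax with cross-entropy gives delta3 = y - t;
        # the ReLU derivative enters as the indicator (h > 0)
        n = x.shape[0]
        d3 = (y - t) / n
        d2 = (d3 @ W[2].T) * (h2 > 0)
        d1 = (d2 @ W[1].T) * (h1 > 0)

        # gradient descent update of every weight and bias
        for Wi, bi, a_in, d in zip(W, b, (x, h1, h2), (d1, d2, d3)):
            Wi -= lr * (a_in.T @ d)
            bi -= lr * d.sum(axis=0)
        return loss

Calling train_step repeatedly on the sample training set until the loss stops decreasing corresponds to the iterative training described above.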
The application also provides a yarn defect detection device based on machine vision, referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of the yarn defect detection device based on machine vision.
The device comprises an acquisition module 21, a binarization processing module 22, a dimension reduction processing module 23, a mean value pooling module 24, a construction module 25 and a classification output module 26.
Wherein the acquisition module 21 is used for acquiring an image of the yarn;
the binarization processing module 22 is used for performing binarization processing on the yarn image to obtain a yarn binarization image;
the dimension reduction processing module 23 is used for performing dimension reduction processing on the binarized image to obtain a one-dimensional sequence of yarn width values;
the mean pooling module 24 is configured to perform mean pooling on the one-dimensional sequence to obtain a feature sequence, where the pooling size of the mean pooling is P_i = f(a_i), a_i = 1, 2, 3, ..., N;
The construction module 25 is configured to construct an optimal feature vector based on the feature sequence;
the classification output module 26 is configured to input the optimal feature vector into the deep learning neural network and output the feature attribute of the yarn, where the feature attribute comprises normal yarn or a yarn flaw type.
As described above with reference to fig. 1 to 6, the yarn flaw detection method based on machine vision according to the embodiment of the present specification is described. The details mentioned in the description of the method embodiments above apply equally to the machine vision based yarn flaw detection device of the embodiments of the present description. The yarn flaw detection device based on machine vision can be realized by adopting hardware, or can be realized by adopting software or a combination of hardware and software.
Fig. 8 is a hardware configuration diagram of an embodiment of the electronic device of the present application. As shown in fig. 8, the electronic device 30 may include at least one processor 31, a storage 32 (e.g., a non-volatile memory), a memory 33, and a communication interface 34, which are connected together via a bus 35. The at least one processor 31 executes at least one computer-readable instruction stored or encoded in the storage 32.
It should be appreciated that the computer-executable instructions stored in the storage 32, when executed, cause the at least one processor 31 to perform the various operations and functions described above in connection with fig. 1-4 in the various embodiments of the present description.
In embodiments of the present description, electronic device 30 may include, but is not limited to: personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile electronic devices, smart phones, tablet computers, cellular phones, personal Digital Assistants (PDAs), handsets, messaging devices, wearable electronic devices, consumer electronic devices, and the like.
According to one embodiment, a program product, such as a machine-readable medium, is provided. The machine-readable medium may have instructions (i.e., elements described above implemented in software) that, when executed by a machine, cause the machine to perform the various operations and functions described above in connection with fig. 1-4 in various embodiments of the specification. In particular, a system or apparatus provided with a readable storage medium having stored thereon software program code implementing the functions of any of the above embodiments may be provided, and a computer or processor of the system or apparatus may be caused to read out and execute instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium may implement the functions of any of the above embodiments, and thus the machine-readable code and the readable storage medium storing the machine-readable code form part of the present specification.
Examples of readable storage media include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-R, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer or cloud by a communications network.
It will be appreciated by those skilled in the art that various changes and modifications can be made to the embodiments disclosed above without departing from the spirit of the application. Accordingly, the scope of protection of this specification should be limited by the attached claims.
It should be noted that not all of the steps and units in the above flowcharts and system configuration diagrams are necessary; some steps or units may be omitted according to actual needs. The order of execution of the steps is not fixed and may be determined as required. The device structures described in the above embodiments may be physical structures or logical structures; that is, some units may be implemented by the same physical entity, some units may be implemented by several physical entities, or some units may be implemented jointly by components in multiple independent devices.
In the above embodiments, the hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module or processor may include permanently dedicated circuitry or logic (e.g., a dedicated processor, FPGA or ASIC) to perform the corresponding operations. The hardware unit or processor may also include programmable logic or circuitry (e.g., a general purpose processor or other programmable processor) that may be temporarily configured by software to perform the corresponding operations. The particular implementation (mechanical, or dedicated permanent, or temporarily set) may be determined based on cost and time considerations.
The detailed description set forth above in connection with the appended drawings describes exemplary embodiments, but does not represent all embodiments that may be implemented or fall within the scope of the claims. The term "exemplary" used throughout this specification means "serving as an example, instance, or illustration," and does not mean "preferred" or "advantageous over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A machine vision-based yarn flaw detection method, comprising:
acquiring an image of the yarn;
performing binarization processing on the image of the yarn to obtain a binarized image of the yarn;
performing dimension reduction treatment on the binarized image to obtain a one-dimensional sequence of the yarn width value;
performing mean pooling on the one-dimensional sequence to obtain a feature sequence, wherein the pooling size of the mean pooling is P_i = f(a_i), a_i = 1, 2, 3, ..., N;
Constructing an optimal feature vector based on the feature sequence;
inputting the optimal feature vector into a deep learning neural network, and outputting the feature attribute of the yarn, wherein the feature attribute comprises normal yarn or a yarn flaw type.
2. The yarn flaw detection method according to claim 1, wherein the step of binarizing the image of the yarn to obtain a binarized image of the yarn comprises:
calculating the Otsu threshold T_OSTU of the image of the yarn based on the Otsu thresholding method;
correcting the Otsu threshold to obtain the corrected threshold T_g-global-OSTU, which is a function of T_OSTU, V_q, and Z_x,
wherein V_q is the gray value that occurs most frequently among all pixels of the image of the yarn, and Z_x is the minimum gray value among all pixels of the image of the yarn;
and carrying out binarization processing on the image based on the corrected threshold to obtain a binarized image of the yarn.
3. The method of claim 1, wherein the step of performing a dimension reduction process on the binarized image to obtain the one-dimensional sequence of yarn width values comprises:
taking the background pixels of the binarized image as the value 1 and the foreground pixels as the value 0 to obtain a matrix X_{n×m} of the binarized image, wherein n is the number of pixels of the binarized image in the yarn width direction, and m is the number of pixels of the binarized image in the yarn length direction;
calculating the one-dimensional sequence D of yarn width values based on the following formula:
D = [n n ... n]_{1×m} - [1 1 ... 1]_{1×n} · X_{n×m}
4. The method of claim 1, wherein the yarn flaws include detail defects and long slub defects, and the step of mean-pooling the one-dimensional sequence to obtain a feature sequence comprises:
performing mean pooling on the one-dimensional sequence with pooling size P_1 = ⌊m/(2a_1)⌋ to obtain the feature sequence S_1(a_1), wherein a_1 = 1, 2, 3, ..., N_1 and m is the length of the one-dimensional sequence.
5. The method of claim 1, wherein the yarn flaws include slub defects, and the step of mean-pooling the one-dimensional sequence to obtain a feature sequence comprises:
performing mean pooling on the one-dimensional sequence with pooling size P_2 = 2a_2 to obtain the feature sequence S_2(a_2), wherein a_2 = 1, 2, 3, ..., N_2 and m is the length of the one-dimensional sequence.
6. The method of claim 1, wherein the yarn flaws include short slub defects and decorative-yarn interlacing defects, and the step of mean-pooling the one-dimensional sequence to obtain a feature sequence comprises:
performing mean pooling on the one-dimensional sequence twice, with pooling sizes P_{3-1} = 4a_3 + 2 and P_{3-2} = 2a_3 + 1, to obtain sequences M_short and M_long, wherein a_3 = 1, 2, 3, ..., N_3 and m is the length of the one-dimensional sequence;
performing backward-search differencing and forward-search differencing on the sequences M_short and M_long to obtain a backward difference sequence and a forward difference sequence;
comparing the backward difference sequence with the forward difference sequence and taking the larger value at each corresponding position of the two to construct the feature sequence S_3(a_3).
7. The yarn flaw detection method according to claim 6, wherein the step of performing backward-search differencing and forward-search differencing on the sequences M_short and M_long to obtain a backward difference sequence and a forward difference sequence comprises:
removing the first a_3 values from the head of the sequence M_short, then differencing the sequences M_short and M_long element by element from the head to the tail to obtain the backward difference sequence S_{3-1} = |M_short[a_3+1 : m+1-4a_3] - M_long[1 : m-1-5a_3]|, wherein m is the length of the one-dimensional sequence;
removing the last a_3 values from the tail of the sequence M_short, then differencing the sequences M_short and M_long element by element from the tail to the head to obtain the forward difference sequence S_{3-2} = |M_short[1 : m-1-5a_3] - M_long[3a_3+2 : m-2a_3]|, wherein m is the length of the one-dimensional sequence.
8. The yarn flaw detection method according to claim 1, wherein the step of constructing an optimal feature vector based on the feature sequence includes:
calculating the optimal solution of a_i using a random frog-leaping algorithm based on partial least squares;
and calculating the maximum value, the minimum value, and the average value of the feature sequence based on the optimal solution, and assembling them to obtain the optimal feature vector.
9. The yarn flaw detection method according to claim 1, wherein in the step of inputting the optimal feature vector into a deep learning neural network and outputting the feature attribute of the yarn, the neural network is a double-layer ANN classifier, and the training method of the deep learning neural network includes:
acquiring a sample training set, wherein the sample training set comprises a plurality of groups of optimal feature vectors of yarn images annotated with label values;
and inputting the sample training set, and updating the weight of the deep learning neural network along the gradient descending direction based on the cross entropy loss function and the label value until the cross entropy loss function converges to obtain the deep learning neural network.
10. Yarn flaw detection device based on machine vision, characterized by comprising:
the acquisition module is used for acquiring an image of the yarn;
the binarization processing module is used for performing binarization processing on the image of the yarn to obtain a binarized image of the yarn;
the dimension reduction processing module is used for carrying out dimension reduction processing on the binarized image to obtain a one-dimensional sequence of the yarn width value;
the mean pooling module is used for carrying out mean pooling on the one-dimensional sequence to obtain a feature sequence, wherein the pooling size of the mean pooling is P_i = f(a_i), a_i = 1, 2, 3, ..., N;
The construction module is used for constructing an optimal feature vector based on the feature sequence;
and the classification output module is used for inputting the optimal feature vector into a deep learning neural network and outputting the feature attribute of the yarn, wherein the feature attribute comprises normal yarn or a yarn flaw type.
CN202310950857.9A 2023-07-31 2023-07-31 Yarn flaw detection method and device based on machine vision Pending CN116894836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310950857.9A CN116894836A (en) 2023-07-31 2023-07-31 Yarn flaw detection method and device based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310950857.9A CN116894836A (en) 2023-07-31 2023-07-31 Yarn flaw detection method and device based on machine vision

Publications (1)

Publication Number Publication Date
CN116894836A true CN116894836A (en) 2023-10-17

Family

ID=88313460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310950857.9A Pending CN116894836A (en) 2023-07-31 2023-07-31 Yarn flaw detection method and device based on machine vision

Country Status (1)

Country Link
CN (1) CN116894836A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11993868B1 (en) * 2023-09-15 2024-05-28 Zhejiang Hengyi Petrochemical Co., Ltd. Control method for yarn route inspection equipment, electronic device and storage medium
US12110614B1 (en) 2023-09-15 2024-10-08 Zhejiang Hengyi Petrochemical Co., Ltd. Control method for yarn route inspection equipment, electronic device and storage medium
CN118470029A (en) * 2024-07-15 2024-08-09 吴江市兰天织造有限公司 Environment-friendly yarn defect detection method

Similar Documents

Publication Publication Date Title
CN116894836A (en) Yarn flaw detection method and device based on machine vision
CN110111297B (en) Injection molding product surface image defect identification method based on transfer learning
Su et al. Concrete cracks detection using convolutional neuralnetwork based on transfer learning
CN108171209B (en) Face age estimation method for metric learning based on convolutional neural network
CN106462746B (en) Analyzing digital holographic microscopy data for hematology applications
CN111815564B (en) Method and device for detecting silk ingots and silk ingot sorting system
Mathavan et al. Use of a self-organizing map for crack detection in highly textured pavement images
JP2021515885A (en) Methods, devices, systems and programs for setting lighting conditions and storage media
Ghazvini et al. Defect detection of tiles using 2D-wavelet transform and statistical features
KR102402194B1 (en) Deep learning based end-to-end o-ring defect inspection method
Gao et al. A novel VBM framework of fiber recognition based on image segmentation and DCNN
Wang et al. Automatic rebar counting using image processing and machine learning
CN113269647A (en) Graph-based transaction abnormity associated user detection method
CN117152119A (en) Profile flaw visual detection method based on image processing
Maestro-Watson et al. Deep learning for deflectometric inspection of specular surfaces
CN110458809B (en) Yarn evenness detection method based on sub-pixel edge detection
CN115937143A (en) Fabric defect detection method
Mi et al. Research on steel rail surface defects detection based on improved YOLOv4 network
Yang et al. Automated defect detection and classification for fiber-optic coil based on wavelet transform and self-adaptive GA-SVM
CN117636045A (en) Wood defect detection system based on image processing
Lu et al. Bearing defect classification algorithm based on autoencoder neural network
CN113177578A (en) Agricultural product quality classification method based on LSTM
Zou et al. Improved ResNet-50 model for identifying defects on wood surfaces
Shih et al. Integrated Image Sensor and Deep Learning Network for Fabric Pilling Classification.
CN113077461A (en) Steel surface quality detection method based on semi-supervised deep clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination