CN113658131A - Tour type ring spinning broken yarn detection method based on machine vision - Google Patents
- Publication number
- CN113658131A (application CN202110936624.4A)
- Authority
- CN
- China
- Prior art keywords
- yarn
- image
- target
- class
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/89—Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles
- G01N21/892—Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles characterised by the flaw, defect or object feature examined
- G01N21/898—Irregularities in textured or patterned surfaces, e.g. textiles, wood
- G01N21/8983—Irregularities in textured or patterned surfaces, e.g. textiles, wood for testing textile webs, i.e. woven material
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention relates to a tour type ring spinning broken yarn detection method based on machine vision, comprising the following steps: an inspection unit carrying an industrial camera patrols the yarns, and a processor acquires the pictures taken during the patrol; a target area identification model is established and trained, each real-time image is input into the trained model to judge whether it contains a yarn target, and the region of interest is extracted from images that do; the extracted picture is restored to a normal rectangular image and smoothed; the length of each yarn is then calculated to judge whether yarn breakage has occurred. The invention accounts for dynamic changes in the number of yarns during detection; it handles ring spinning frames in different settings and image distortion under motion, with good robustness; it adapts to changes in yarn spacing within the image; and it applies noise reduction repeatedly, effectively improving detection accuracy.
Description
Technical Field
The invention relates to a machine vision-based detection method for ring-spun yarns and belongs to the technical fields of image processing and deep learning.
Background
Spun yarn production is an important link in the spinning process, and yarn breakage (the phenomenon of the continuous spun yarn strand breaking between the front roller and the bobbin) during production directly interrupts spinning. The end-breakage rate of spun yarn is one of the main technical indexes of spun yarn production: a high breakage rate is a principal factor limiting the productivity of ring spinning and an important influence on labor productivity, and spun yarn breakage not only increases workers' labor intensity but also wastes energy, degrades yarn quality, and wastes raw material.
The prior art generally detects spun yarn breakage with mechanical, electric-contact, pneumatic non-contact, photoelectric non-contact, temperature-sensing, and similar methods. These are single-spindle detection schemes and offer high accuracy, strong real-time performance, long service life, and stable performance, but retrofitting an existing spinning frame spindle by spindle is expensive, and some of the methods also interfere with the spinning process. Single-spindle detection therefore has the defect that modification is costly or even infeasible.
At present, as the price of vision sensors gradually falls and digital image processing technology continues to develop, machine vision is widely adopted across industrial production for surface defect detection, workpiece positioning, target identification, and so on. Using machine vision for yarn breakage detection can suit most spinning machines on the market: it is inexpensive, highly adaptable, and does not affect the spinning process.
Despite these advantages, the market still lacks a relatively complete set of machine vision yarn breakage detection methods; most remain at a theoretical stage, exhibit many defects in practical application, and do not consider the problems that arise in detection under motion.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the accuracy of tour type ring spinning broken yarn detection is low.
In order to solve this technical problem, the invention provides a tour type ring spinning broken yarn detection method based on machine vision, characterized by comprising the following steps:
step 1, an inspection unit carrying an industrial camera performs tour inspection of the yarns, and a processor acquires the pictures taken during the inspection;
step 2, establish a target area identification model and train it; input the real-time image obtained in step 1 into the trained model, judge whether the image contains a yarn target, and extract the region of interest from images that do, specifically comprising the following steps:
step 201, establish a deep neural network structure based on target detection as the target area identification model. The model consists of an input layer, convolutional layer COV.1, 5 Inverted Residual Block layers, convolutional layer COV.2, an average pooling layer, and an output layer, wherein:
each Inverted Residual Block layer uses a depth separable convolution and a residual structure;
the output layer outputs 3 coordinate points and one CLASS through the fully connected layer; CLASS is divided into two classes, one indicating that the input image contains a yarn target and the other that it does not. A rectangular frame is determined from the 3 coordinate points, and when CLASS indicates the input image contains a yarn target, the rectangular frame is used to frame-select the target area with yarns in the image;
step 202, training the target area identification model constructed in step 201 by using a training data set, including:
step 2021, acquiring a training data set:
collecting related data pictures of spinning in a factory by using an industrial camera, and shooting a plurality of images with yarn targets and a plurality of black and white images without the yarn targets;
using a rectangular frame to frame-select the yarn target in each image containing one, thereby obtaining the three corner coordinates of the rectangular frame, (X1, Y1), (X2, Y2), (X3, Y3); the rectangular frame is approximately a tilted trapezoid;
the three corner coordinates corresponding to an image without a yarn target all have the value 0;
setting label data for each image, the label data consisting of 4 parameters: the three corner coordinates and the CLASS;
step 2022, train the target area identification model with the training data set obtained in the previous step. During training, compute the distance between the output of the fully connected layer, (X1', Y1'), (X2', Y2'), (X3', Y3') and CLASS', and the label data in the training data set, (X1, Y1), (X2, Y2), (X3, Y3) and CLASS, where the primed values are the three corner coordinates and the CLASS output by the fully connected layer and the unprimed values are those of the label data. Minimizing this distance reduces the prediction error of the model. The Loss function Loss of the target area identification model is defined as the following formula:

Loss = Ls + Lc
in the above formula, Ls is the cross-entropy function for judging whether the image contains a yarn target, where p(x) is the expected output and q(x) is the actual output; Lc represents the covariance between the output coordinate points and the true coordinate points;
step 203, input a real-time image into the trained target area identification model, which outputs the image's CLASS; if CLASS indicates the current image contains a yarn target, the model outputs the 3 coordinate points determining the rectangular frame, which is used to frame-select the target area with yarns in the image and extract it into a separate picture;
step 3, restore the picture obtained in step 2 to a normal rectangular image through perspective processing, and smooth the rectangular image so as to highlight the yarn features and smooth the image background;
step 5, remove the background of the image obtained in the previous step using an improved sobel operator while retaining the yarn-related features in the image;
step 7, calculate the length of each yarn from the yarn features extracted in the previous step so as to judge whether yarn breakage has occurred, and mark any broken yarn in the output image for visualization.
Preferably, in step 1, the picture is a black-and-white image.
Preferably, in step 201, convolutional layer COV.1 and convolutional layer COV.2 implement the following formula:

Zij = ρ( Σr Σc wrc · x(r+i×t)(c+j×t) + b )

In the above formula: Zij is the feature map obtained by the convolution operation, where (i, j) indexes the height and width of the feature map; H and W denote the height and width of the image data input to convolutional layer COV.1 or COV.2, and F denotes the height and width of the convolution kernel; ρ represents the nonlinear activation function; wrc represents the weight of the convolution kernel at position (r, c); x(r+i×t)(c+j×t) represents the image data input to convolutional layer COV.1 or COV.2; t represents the stride of the convolution kernel; b represents the bias.
Preferably, in step 201, the Inverted Residual Block layer performs a dimension-raising operation with a 1 × 1 ordinary convolution, then feature extraction with a 3 × 3 depth separable convolution, then a dimension-reducing operation with a 1 × 1 ordinary convolution, and finally connects the input directly to the output;
the 3 × 3 depth separable convolution uses the following formulas:

Mijn = Σr Σc wrcn · x(r+i×t)(c+j×t)n
Mij' = Σn wn' · Mijn
Mout = xrcn + Mij'

In the above formulas: Mijn is the feature map of the nth channel; wrcn is the weight at position (r, c) of the nth channel of the convolution kernel; x(r+i×t)(c+j×t)n represents the image data of the nth channel input to the depth separable convolutional layer; Mij' represents the feature map combining the N output channels; wn' represents the weight of the nth channel's feature map; Mout represents the sum of the input data and the convolved data; xrcn represents the image data of the nth channel input this time.
Preferably, in step 201, the fully connected layer performs the following formula:

yk = Σn wkn · hn + bk

In the formula, yk represents the kth output of the fully connected layer; wkn represents the weight of the kth output of the fully connected layer for the nth channel; bk represents the bias of the kth output; hn represents the nth input parameter from the adaptive average pooling layer.
Preferably, in step 2021, POUT(X) represents the confidence that the current image contains a yarn target, as shown below:

POUT(X) = P(X) when P(X) > ux, and 0 otherwise

where P(X) is the probability, a value between [0, 1] obtained by activating the CLASS output with the softmax activation function, and ux denotes a preset threshold; when P(X) exceeds ux the current image is considered reliable and the rectangular frame is used to frame-select the yarn target, otherwise no frame selection is performed.
Preferably, in step 5, the sobel operator for enhancing the yarn features is shown as follows:
Gij=sobel×Hij
in the above formula:
Gij is the pixel at position (i, j) in the image after processing by the yarn-feature-strengthening sobel operator;

Hij is the pixel at position (i, j) in the image before processing by the yarn-feature-strengthening sobel operator;
sobel is a matrix operator; it adopts the improved sobel operator matrix oriented to yarn feature strengthening shown in the following formula:
sobel=2f(x,y+i)+f(x-1,y+i)+f(x+1,y+i)-2f(x+2,y+i)-2f(x+2,y-i)
where f (x, y) represents input image data, i ∈ [ -1,0,1 ].
Preferably, in step 6, the image is thresholded using the following formula:

Pij = Pij' when Pij' > u, and 0 otherwise

In the above formula, Pij is the image pixel at position (i, j) after thresholding, Pij' is the value of the image pixel at position (i, j) before thresholding, and u is the threshold parameter.
Preferably, in step 6, the edge contour search is performed based on the following formula:
in the above formula, I(x) is the image content of each part after threshold segmentation, the frame-selection length of which is compared against α, the rated threshold, to screen out noise.
Preferably, the step 7 of determining whether the yarn breakage phenomenon occurs includes the following steps:
step 701, calculate the Grubbs test statistic of each yarn as shown in the following formula:
Gi=(di-u)/s
in the formula: giThe Grabbs test statistic for the ith yarn; diThe real-time yarn length of the ith yarn; u is the average value of the samples,s is the standard deviation of the samples and,
step 702, determine the detection level α according to the real-time yarn lengths, and look up the table for the corresponding Grubbs critical value α(n); when Gi > α(n), di is judged to be an abnormal value and the ith yarn is broken; otherwise no abnormal value has occurred and the yarn is normal.
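Steps 701 and 702 can be sketched in Python as follows. The table lookup for the critical value is replaced here by a caller-supplied constant, and the absolute value of Gi is compared, since a broken yarn shortens di below the mean; both choices are assumptions, not the patent's exact procedure.

```python
import math

def grubbs_statistics(lengths):
    """Compute the Grubbs test statistic G_i = (d_i - u) / s for each yarn length."""
    n = len(lengths)
    u = sum(lengths) / n                                          # sample mean
    s = math.sqrt(sum((d - u) ** 2 for d in lengths) / (n - 1))   # sample std dev
    return [(d - u) / s for d in lengths]

def broken_yarns(lengths, critical):
    """Flag yarns whose |G_i| exceeds the Grubbs critical value (the table
    lookup is replaced by a caller-supplied constant -- an assumption)."""
    return [i for i, g in enumerate(grubbs_statistics(lengths)) if abs(g) > critical]
```

For example, one clearly short yarn among otherwise uniform lengths produces a large negative statistic and is flagged as broken.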
Compared with the prior art, the invention has the following beneficial effects: (1) it accounts for dynamic changes in the number of yarns during detection; (2) it handles ring spinning frames in different settings and image distortion under motion, with good robustness; (3) it adapts to changes in yarn spacing within the image; (4) it applies noise reduction repeatedly, effectively improving detection accuracy.
Drawings
FIG. 1 is a flow chart of a tour type broken yarn detection based on machine vision according to an embodiment of the invention;
FIG. 2 is an image data acquisition of an embodiment of the present invention;
FIG. 3 is a selected deep neural network structure of a target area according to an embodiment of the present invention;
FIG. 4 is a target selection result image of an embodiment of the present invention;
fig. 5 to 9 are target image processing flows of the embodiment of the present invention.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
As shown in fig. 1, the tour type ring spinning broken yarn detection method based on machine vision disclosed in this embodiment is implemented in Python and comprises the following steps:
Step 1, an inspection unit carrying an industrial camera performs tour inspection of the yarns, and a processor acquires the pictures taken during the inspection at a rate of five frames per second.
In this embodiment, the industrial camera can use a Hikrobot (Hikvision machine vision) MVL-KF3528M-12MP FA lens with an MV-CA050-20UM camera, ensuring the clarity of the acquired pictures to the greatest extent. The processor can be a Jetson Nano.
The pictures obtained by the industrial camera are black-and-white images, which reduces the quantity of data to be transmitted. A black-and-white image acquired by the industrial camera is shown in fig. 2.
Step 2, select the region of interest from the obtained black-and-white image containing the yarn target, mainly comprising the following steps:
Step 201, establish a deep neural network structure based on target detection as the target area identification model. The model consists of an input layer, convolutional layer COV.1, 5 Inverted Residual Block layers, convolutional layer COV.2, an average pooling layer, and an output layer.
The convolutional layer COV.1 and convolutional layer COV.2 adopt the following formula (1):
Zij = ρ( Σr Σc wrc · x(r+i×t)(c+j×t) + b )    (1)

In formula (1): Zij is the feature map obtained by the convolution operation, where (i, j) indexes the height and width of the feature map; H and W denote the height and width of the image data input to convolutional layer COV.1 or COV.2, and F denotes the height and width of the convolution kernel; ρ represents the nonlinear activation function; wrc represents the weight of the convolution kernel at position (r, c); x(r+i×t)(c+j×t) represents the image data input to convolutional layer COV.1 or COV.2; t represents the stride of the convolution kernel; b represents the bias.
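Formula (1) can be illustrated with a minimal single-channel numpy sketch; using ReLU for the activation ρ and a zero bias are assumptions made here, since the patent does not name a specific activation function.

```python
import numpy as np

def conv2d(x, w, b=0.0, t=1, rho=lambda z: np.maximum(z, 0.0)):
    """Plain 2-D convolution per formula (1):
    Z_ij = rho( sum_{r,c} w_rc * x_{(r+i*t)(c+j*t)} + b ).
    Single channel, no padding; rho defaults to ReLU (an assumption)."""
    H, W = x.shape
    F = w.shape[0]                       # square kernel of height/width F
    out_h = (H - F) // t + 1
    out_w = (W - F) // t + 1
    z = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # window anchored at (i*t, j*t), matching the x_{(r+i*t)(c+j*t)} index
            z[i, j] = np.sum(w * x[i*t:i*t+F, j*t:j*t+F]) + b
    return rho(z)
```

A stride t > 1 shrinks the output map exactly as the index arithmetic in the formula dictates.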
As shown in fig. 3, each Inverted Residual Block layer uses a depth separable convolution and a residual structure to reduce the parameter count, improve operational efficiency, and suppress over-fitting. The depth separable convolution reduces the computation and parameter counts so that the method can run on a Raspberry Pi, and the residual structure prevents loss of image information.
Specifically, the Inverted Residual Block layer performs a dimension-raising operation with a 1 × 1 ordinary convolution, then feature extraction with a 3 × 3 depth separable convolution, then dimension reduction with a 1 × 1 ordinary convolution, and finally connects the input directly to the output.
The 3 × 3 depth separable convolution uses the following formula (2):

Mijn = Σr Σc wrcn · x(r+i×t)(c+j×t)n
Mij' = Σn wn' · Mijn
Mout = xrcn + Mij'    (2)

In formula (2): Mijn is the feature map of the nth channel; wrcn is the weight at position (r, c) of the nth channel of the convolution kernel; x(r+i×t)(c+j×t)n represents the image data of the nth channel input to the depth separable convolutional layer, and t represents the stride of the convolution kernel; Mij' represents the feature map combining the N output channels; wn' represents the weight of the nth channel's feature map; Mout represents the sum of the input data and the convolved data; xrcn represents the image data of the nth channel input this time.
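The depthwise-then-pointwise structure of formula (2) can be illustrated with a small numpy sketch. Stride 1 and zero padding are assumptions made here so the per-channel maps keep the input size; the residual add Mout = x + M' is noted in a comment but not applied, because the pointwise step below collapses the channels to a single map.

```python
import numpy as np

def depthwise_separable(x, depth_kernels, point_weights):
    """Depthwise separable convolution in the spirit of formula (2).
    x: (N, H, W) input with N channels
    depth_kernels: (N, 3, 3) one 3x3 kernel per channel (depthwise step)
    point_weights: (N,) 1x1 weights w'_n combining the N channel maps
    """
    n_ch, h, w = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))     # zero padding keeps H, W
    maps = np.empty_like(x)
    for n in range(n_ch):                        # depthwise: channel-by-channel
        for i in range(h):
            for j in range(w):
                maps[n, i, j] = np.sum(depth_kernels[n] * xp[n, i:i+3, j:j+3])
    # pointwise 1x1 step: M' = sum_n w'_n * M_n
    pointwise = np.tensordot(point_weights, maps, axes=1)
    return pointwise  # the residual add M_out = x + M' would follow per the text
```

The depthwise step touches each channel independently, which is where the parameter savings over an ordinary convolution come from.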
The output layer outputs 4 parameters through the fully connected layer, representing 3 coordinate points and one CLASS. CLASS is divided into two classes: one indicates that the input black-and-white image contains a yarn target, the other that it does not. A rectangular frame is determined by the 3 coordinate points and is approximately a tilted trapezoid. Under normal conditions the target area is a regular rectangle, so its deformation is consistent (for example, tilt jitter changes the whole image alike); the resulting frame is therefore approximately a tilted trapezoid, the target area can be determined from three coordinates, and these three coordinates alone suffice for the subsequent affine transformation. When CLASS indicates the input black-and-white image contains a yarn target, the rectangular frame is used to frame-select the target area with yarns in the black-and-white image.
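The claim that three corner coordinates suffice for the subsequent affine transformation can be illustrated directly: three non-collinear point pairs determine a 2 × 3 affine matrix exactly. A small numpy sketch follows; the function names are illustrative, not from the patent.

```python
import numpy as np

def affine_from_corners(src_pts, dst_pts):
    """Solve the 2x3 affine matrix A mapping three source corners to three
    destination corners (A @ [x, y, 1]^T = [x', y']^T). Three point pairs
    determine the affine map exactly, which is why the model only needs to
    regress three corners of the tilted-trapezoid frame."""
    src = np.hstack([np.asarray(src_pts, float), np.ones((3, 1))])  # (3, 3)
    dst = np.asarray(dst_pts, float)                                # (3, 2)
    # Solve src @ A^T = dst for A^T (exact when the 3 points are not collinear)
    a_t = np.linalg.solve(src, dst)
    return a_t.T                                                    # (2, 3)

def warp_point(a, pt):
    """Apply the affine matrix to a single (x, y) point."""
    x, y = pt
    return tuple(a @ np.array([x, y, 1.0]))
```

With the matrix in hand, every pixel of the framed trapezoid can be mapped back to a regular rectangle.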
The fully connected layer performs the following formula (3):

yk = Σn wkn · hn + bk    (3)

In formula (3): yk represents the kth output of the fully connected layer, k = 1, 2, 3, 4; in this embodiment, the 1st to 3rd outputs of the fully connected layer are the 3 coordinate points and the 4th output is the CLASS; wkn represents the weight of the kth output of the fully connected layer for the nth channel; bk represents the bias of the kth output; hn represents the nth input parameter from the adaptive average pooling layer.
Step 202, training the target area identification model constructed in step 201 by using a training data set, including:
step 2021, acquiring a training data set:
An industrial camera is used to collect spinning-related data pictures in a factory; 2000 black-and-white images with yarn targets and 1000 black-and-white images without yarn targets are shot.
A rectangular frame is used to frame-select the yarn target in each black-and-white image containing one, thereby obtaining the three corner coordinates of the rectangular frame, (X1, Y1), (X2, Y2), (X3, Y3).
The three corner coordinates corresponding to the black and white image without the yarn object have a value of 0.
Label data consisting of 4 parameters, the three corner coordinates and the CLASS, is set for each black-and-white image.
POUT(X) represents the confidence that the current black-and-white image contains a yarn target, as shown in the following formula (4):

POUT(X) = P(X) when P(X) > ux, and 0 otherwise    (4)

In formula (4), ux denotes a preset threshold; when P(X) exceeds ux, the current black-and-white image is considered reliable and the rectangular frame is used to frame-select the yarn target, otherwise no frame selection is performed. P(X) is the probability, a value between [0, 1] obtained by activating the CLASS output with the softmax activation function.
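The softmax activation and threshold comparison can be sketched as follows; the two-logit CLASS layout, the index of the "contains yarn" class, and the default threshold of 0.5 are all assumptions for illustration.

```python
import numpy as np

def yarn_confidence(class_logits, threshold=0.5):
    """Softmax-activate the raw CLASS outputs to a [0, 1] probability and
    compare against the preset threshold u_x; frame selection happens only
    when the 'contains yarn' probability exceeds u_x. The two-logit layout
    and the 0.5 default threshold are assumptions."""
    z = np.asarray(class_logits, float)
    p = np.exp(z - z.max()) / np.exp(z - z.max()).sum()   # numerically stable softmax
    has_yarn = p[1]                    # index 1 = 'contains yarn' class (assumed)
    return has_yarn, bool(has_yarn > threshold)
```

Subtracting the maximum logit before exponentiating keeps the softmax numerically stable without changing its value.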
Step 2022, train the target area identification model with the training data set obtained in the previous step. During training, compute the distance between the output of the fully connected layer and the label data in the training data set, and minimize this distance to reduce the prediction error of the model. The first three output data locate the target while the fourth is the CLASS data, so different loss functions are needed for the two parts.
In the present embodiment, the Loss function Loss of the target region identification model is defined as the following formula (5):
Loss = Ls + Lc    (5)

In formula (5), Ls is the cross-entropy function for judging whether the image contains a yarn target, where p(x) is the expected output and q(x) is the actual output; Lc represents the covariance between the output coordinate points and the true coordinate points.
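The two loss terms can be illustrated term by term. How the patent weights or signs them in the combined Loss is not shown in detail, so this sketch only computes Ls as a cross-entropy and Lc as a literal sample covariance between predicted and true coordinates; it is an illustration, not the training loss.

```python
import numpy as np

def loss_terms(pred_coords, true_coords, p_expected, q_actual):
    """Literal sketch of the two terms in formula (5): L_s is the cross-entropy
    over the CLASS output, and L_c is (as the text puts it) the covariance
    between predicted and true coordinate points. The combination of the two
    terms into a single Loss is not reproduced here."""
    eps = 1e-12                                         # avoid log(0)
    p = np.asarray(p_expected, float)
    q = np.asarray(q_actual, float)
    l_s = -np.sum(p * np.log(q + eps))                  # cross-entropy
    a = np.asarray(pred_coords, float).ravel()
    b = np.asarray(true_coords, float).ravel()
    l_c = np.mean((a - a.mean()) * (b - b.mean()))      # sample covariance
    return l_s, l_c
```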
Step 203, input the black-and-white image obtained in real time into the trained target area identification model, which outputs the image's CLASS. If CLASS indicates the current black-and-white image contains a yarn target, the model outputs the 3 coordinate points determining the rectangular frame; the frame is used to select the target area with yarns in the black-and-white image and extract it into a separate picture, with an effect roughly as shown in fig. 4.
Because the camera moves at an irregular speed, stops suddenly, or its mounted parts shake, some shots exhibit distortion (for example, a wide-angle camera produces severe distortion and deformation at the edges). Frame selection with a fixed rectangular matrix would include redundant image content, and the subsequent yarn-area selection would then impair the accuracy of the broken-yarn judgment. For these characteristics, the method therefore selects the target area with a rectangular frame and restores the target-area image through adaptive affine transformation.
Step 3, the picture obtained in step 2 is restored to a normal rectangular image through perspective processing, and the image is then smoothed by L0 regularization while preserving features. As shown in figs. 5-6, L0 regularization performs a preliminary smoothing: it highlights the yarn features while smoothing the image background. Other smoothing methods, such as mean filtering or Gaussian filtering, may also be used; they are faster but less effective.
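A minimal numpy sketch of this restoration step, assuming the three detected corners are the top-left, top-right and bottom-left of a parallelogram (the function names and nearest-neighbour sampling are illustrative, not the patent's implementation; a mean filter stands in for the slower L0 smoothing):

```python
import numpy as np

def restore_parallelogram(img, c1, c2, c3, out_w, out_h):
    """Map a parallelogram target region back to an upright out_h x out_w
    rectangle by inverse mapping with nearest-neighbour sampling.
    c1, c2, c3 are assumed to be the top-left, top-right and bottom-left
    corners (x, y) returned by the recognition model."""
    c1, c2, c3 = (np.asarray(c, dtype=float) for c in (c1, c2, c3))
    out = np.zeros((out_h, out_w), dtype=img.dtype)
    for y in range(out_h):
        for x in range(out_w):
            u = x / max(out_w - 1, 1)
            v = y / max(out_h - 1, 1)
            # source position inside the parallelogram
            sx, sy = c1 + u * (c2 - c1) + v * (c3 - c1)
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sy < img.shape[0] and 0 <= sx < img.shape[1]:
                out[y, x] = img[sy, sx]
    return out

def mean_filter(img, k=3):
    """Box (mean) filter, the faster alternative the text mentions in
    place of L0-regularized smoothing."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```

With the three corners placed at the crop's own corners the mapping is the identity, which makes the behaviour easy to check.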
Step 4, the background of the image obtained in the previous step is removed with a sobel operator oriented to yarn feature strengthening, while the yarn-related features in the image are retained. The image is then thresholded to extract the yarn target. Noise is further screened out through edge contour searching, and the yarn features are extracted.
The sobel operator oriented to yarn feature strengthening is used as shown in the following formula (6):
Gij=sobel×Hij (6)
In formula (6), Gij is the pixel at position (i, j) in the image after processing by the yarn-feature-strengthening sobel operator, Hij is the pixel at position (i, j) in the image before that processing, and sobel is a matrix operator. The yarn-feature-strengthening sobel operator convolves each pixel with its surrounding pixels to obtain the output pixel.
In this embodiment, the matrix operator sobel adopts an improved sobel operator matrix as shown in the following formula (7):
sobel=2f(x,y+i)+f(x-1,y+i)+f(x+1,y+i)-2f(x+2,y+i)-2f(x+2,y-i) (7)
in equation (7), f (x, y) represents input image data, and i ∈ [ -1,0,1 ].
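To illustrate how such an operator is applied per formula (6), the sketch below slides a kernel over the image with a naive "valid" cross-correlation; the classic vertical Sobel kernel is shown only as a placeholder for the patent's improved yarn-oriented coefficients of formula (7):

```python
import numpy as np

def apply_operator(img, kernel):
    """Slide the operator over the image ('valid' cross-correlation),
    producing G = sobel x H as in formula (6)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A classic vertical-edge Sobel kernel, a placeholder for the patent's
# improved coefficients; vertical edges (yarns) respond strongly.
SOBEL_V = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
```

On an image with a vertical step edge, every window straddling the edge produces the same strong response, which is what suppresses flat background while keeping yarn-like structures.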
Thresholding the image using the following equation (8):
In formula (8), Pij is the image pixel at position (i, j) after thresholding, Pij' is the value of the image pixel at position (i, j) before thresholding, and u is the threshold parameter.
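One common reading of this thresholding step (the exact form of formula (8) is not reproduced in the text) can be sketched as:

```python
import numpy as np

def threshold(img, u):
    """Assumed reading of formula (8): pixels at or below the threshold
    parameter u are set to 0, brighter (yarn) pixels are kept."""
    out = img.copy()
    out[out <= u] = 0
    return out
```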
The aforementioned edge contour search is performed based on the following equation (9):
In formula (9), I(x) is the image content of each part after threshold segmentation, LI(x) is the length of the image frame selection, and α is the rated threshold value.
As shown in fig. 8, edge contour searching filters out the small noise and strengthens the yarn features, but it also strengthens some of the large noise. Therefore, a Hough transform is added to the image processing to extract the straight lines that reach the required length in the vertical direction; the final result is shown in fig. 9.
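The line-selection step after the Hough transform can be sketched as a filter over candidate segments; the parameter names and the 10-degree tilt tolerance are assumptions:

```python
import numpy as np

def keep_vertical_lines(segments, min_len, max_tilt_deg=10.0):
    """Keep only the line segments (x1, y1, x2, y2) that are close to
    vertical and reach the required length; min_len and the 10-degree
    tolerance are assumed parameters, not values from the patent."""
    kept = []
    for x1, y1, x2, y2 in segments:
        length = np.hypot(x2 - x1, y2 - y1)
        # angle measured from the vertical axis
        tilt = np.degrees(np.arctan2(abs(x2 - x1), abs(y2 - y1)))
        if length >= min_len and tilt <= max_tilt_deg:
            kept.append((x1, y1, x2, y2))
    return kept
```

Short segments and near-horizontal segments (large noise) are discarded, leaving the near-vertical yarn lines.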
Step 5, the length of each yarn is calculated from the yarn features extracted in the previous step to judge whether yarn breakage has occurred, and any broken yarn is marked with a rectangle in the output image for visualization.
If n yarn characteristics are obtained through extraction in the step 4, judging whether the yarn breakage phenomenon occurs comprises the following steps:
Step 501, calculating the Grubbs test statistic of each yarn, as shown in the following formula (10):
Gi=(di-u)/s (10)
in formula (10): giThe Grabbs test statistic for the ith yarn; diThe real-time yarn length of the ith yarn; u is the average value of the samples,s is the standard deviation of the samples and,
Step 502, a detection level α is determined according to the length of each real-time yarn, and the corresponding Grubbs test critical value α(n) is obtained by table lookup (GB 4883). When Gi > α(n), di is judged to be abnormal and the ith yarn is broken; otherwise there is no abnormal value and the yarn is normal.
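Steps 501-502 can be sketched as follows; the absolute deviation is used so that abnormally short (broken) yarns are the ones flagged, and the critical value α(n) is assumed to be supplied by the caller from the GB 4883 table:

```python
import numpy as np

def broken_yarns(lengths, g_crit):
    """Flag yarns whose Grubbs statistic exceeds the critical value g_crit
    (looked up in the GB 4883 table for the chosen detection level).
    The patent writes Gi = (di - u) / s; the absolute deviation is used
    here so that abnormally short (broken) yarns are flagged."""
    d = np.asarray(lengths, dtype=float)
    u = d.mean()           # sample mean
    s = d.std(ddof=1)      # sample standard deviation (n - 1 denominator)
    g = np.abs(d - u) / s  # Grubbs statistic per yarn
    return [i for i, gi in enumerate(g) if gi > g_crit]
```

For example, among five yarns of lengths 10, 10, 10, 10 and 2, only the last exceeds a critical value of 1.7 and is reported as broken.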
Finally, it should be noted that although the present invention has been described in detail with reference to the above embodiments, those skilled in the art should understand that modifications and equivalents may be made thereto without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A tour type ring spinning broken yarn detection method based on machine vision is characterized by comprising the following steps:
step 1, an industrial camera performs tour inspection on the yarns, and a processor acquires the pictures taken by the industrial camera during the tour inspection;
step 2, establishing a target area identification model, training the target area identification model, inputting the real-time image obtained in the step 1 into the trained target area identification model, judging whether the image contains a yarn target, and extracting an interested area of the image containing the yarn target, wherein the method specifically comprises the following steps:
step 201, establishing a deep neural network structure based on target detection as the target area identification model. The target region identification model consists of an input layer, a convolutional layer COV.1, 5 Inverted Residual Block layers, a convolutional layer COV.2, an average pooling layer and an output layer, wherein:
each Inverted Residual Block layer uses a depth separable convolution and a residual structure;
the output layer outputs 3 coordinate points and a CLASS through the full connection layer, wherein the CLASS is divided into two classes, one representing that the input image contains a yarn target and the other representing that it does not; a rectangular frame is determined through the 3 coordinate points, and when the CLASS indicates that the input image contains a yarn target, the rectangular frame is used to select the target area containing the yarn in the image;
step 202, training the target area identification model constructed in step 201 by using a training data set, including:
step 2021, acquiring a training data set:
collecting related data pictures of spinning in a factory by using an industrial camera, and shooting a plurality of images with yarn targets and a plurality of black and white images without the yarn targets;
using a rectangular frame to select the yarn target in each image containing a yarn target, thereby obtaining the three corner coordinates (X1, Y1), (X2, Y2), (X3, Y3) of the rectangular frame, the rectangular frame being in the shape of an inclined trapezoid;
the values of the three corner coordinates corresponding to the image without the yarn target are 0;
setting label data for each image, the label data consisting of 4 parameters: the three corner coordinates and the CLASS;
step 2022, training the target area recognition model with the training data set obtained in the previous step: during training, the distance between the output of the full connection layer and the label data in the training data set is calculated, where the first three outputs of the full connection layer are the three corner coordinates and the fourth is the CLASS, corresponding to the three corner coordinates and the CLASS of the label data; the prediction error of the target region identification model is reduced by minimizing this distance, and the Loss function Loss of the target region identification model is defined as the following formula:
in the above formula, Ls is a cross entropy function for judging whether the image contains a yarn target, in which p(x) is the desired output and q(x) is the actual output; Lc represents the covariance between the output coordinate points and the true coordinate points;
step 203, a real-time image is input into the trained target area identification model, which outputs the CLASS of the image; if the CLASS indicates that the current image contains a yarn target, the model outputs the 3 coordinate points that determine a rectangular frame, the rectangular frame is used to select the target area containing the yarn in the image, and the target area is extracted as a separate picture;
step 3, restoring the picture obtained in the step 2 into a normal rectangular image through perspective processing, and smoothing the normal rectangular image to achieve the purposes of highlighting yarn characteristics and smoothing the image background;
step 5, removing the background of the image obtained in the previous step by using an improved sobel operator, and simultaneously keeping the relevant characteristics of the yarns in the image;
step 6, processing the image by thresholding to extract the yarn target, further screening out noise by edge contour searching to extract the yarn features, and adding a Hough transform to extract the straight lines reaching a required length in the vertical direction, finally obtaining n yarns;
and 7, calculating the length of each yarn based on the yarn characteristics extracted in the previous step so as to judge whether the yarn breakage phenomenon occurs, and marking the yarn with the yarn breakage phenomenon in an output image so as to achieve the visualization purpose.
2. The machine-vision-based tour type ring spinning broken yarn detection method according to claim 1, wherein in step 1, the pictures are black-and-white images.
3. The machine-vision-based tour type ring spinning broken yarn detection method according to claim 1, wherein in step 201, the convolutional layer COV.1 and the convolutional layer COV.2 follow the formula below:
in the above formula: zijIs a feature map obtained by convolution operation, wherein (i, j) represents the height and width of the feature map,h and W denote the height and width of the image data input to convolutional layer cov.1 or convolutional layer cov.2, and F denotes the height and width of the convolutional kernel; ρ represents a nonlinear activation function; w is arcRepresents the weight of the convolution kernel at the (r, c) position; x is the number of(r+i×t)(c+j×t)Image data representing the input convolutional layer cov.1 or convolutional layer cov.2, t represents the step size of the convolutional kernel motion; b represents a deviation.
4. The machine-vision-based tour type ring spinning broken yarn detection method according to claim 3, wherein in step 201, the Inverted Residual Block layer first performs a dimension-raising operation with a 1×1 ordinary convolution, then performs feature extraction with a 3×3 depth separable convolution, then reduces the dimension with a 1×1 ordinary convolution, and finally connects the input and the output directly;
the 3 x 3 depth separable convolution uses the following equation:
in the above formula: mijnIs a characteristic diagram of the nth channel; w is arcnIs the weight at the (r, c) position of the nth channel in the convolution kernel; x is the number of(r+i×t)(c+j×t)nImage data representing an nth channel of the input depth-separable convolutional layer; mij' feature images representing the output N channels; w is an' weight of the feature map representing the nth channel; moutRepresenting the sum of the input data and the convolved data; x is the number ofrcnThe image data of the nth channel is input on behalf of this time.
5. The machine-vision-based tour type ring spinning broken yarn detection method according to claim 4, wherein in step 201, the full connection layer performs the following formula:
in the above formula, wkn represents the weight of the kth output of the full connection layer for the nth channel; bk represents the bias of the kth output of the full connection layer; hn represents the parameter of the nth channel input from the adaptive average pooling layer.
6. The machine-vision-based tour type ring spinning broken yarn detection method according to claim 1, wherein in step 2021, the confidence that the current image contains a yarn target is represented by P(X), given by the following formula:
wherein P(X) is a probability, obtained by activating the CLASS output with the softmax activation function to yield a value in [0, 1], and ux denotes a predetermined threshold; when the threshold ux is exceeded, the current image is considered reliable and the yarn target is frame-selected with the rectangular frame, otherwise it is not.
7. The machine-vision-based tour type ring spinning broken yarn detection method according to claim 1, wherein in step 5, the sobel operator oriented to yarn feature strengthening is used as shown in the following formula:
Gij=sobel×Hij
in the above formula:
Gij is the pixel at position (i, j) in the image after processing by the yarn-feature-strengthening sobel operator;
Hij is the pixel at position (i, j) in the image before processing by the yarn-feature-strengthening sobel operator;
the sobel is a matrix operator, and the matrix operator sobel adopts an improved sobel operator matrix facing the yarn characteristic enhancement as shown in the following formula:
sobel=2f(x,y+i)+f(x-1,y+i)+f(x+1,y+i)-2f(x+2,y+i)-2f(x+2,y-i)
where f (x, y) represents input image data, i ∈ [ -1,0,1 ].
8. The machine-vision-based tour type ring spinning broken yarn detection method according to claim 1, wherein in step 6, the image is thresholded using the following formula:
in the above formula, Pij is the image pixel at position (i, j) after thresholding, Pij' is the value of the image pixel at position (i, j) before thresholding, and u is the threshold parameter.
9. The machine-vision-based tour type ring spinning broken yarn detection method according to claim 8, wherein in step 6, the edge contour search is performed based on the following formula:
in the above formula, I(x) is the image content of each part after threshold segmentation, LI(x) is the length of the image frame selection, and α is the rated threshold value.
10. The machine-vision-based tour type ring spinning broken yarn detection method according to claim 1, wherein judging in step 7 whether yarn breakage has occurred comprises the following steps:
step 701, calculating the Grubbs test statistic of each yarn as shown in the following formula:
Gi=(di-u)/s
in the formula: giThe Grabbs test statistic for the ith yarn; diThe real-time yarn length of the ith yarn; u is the average value of the samples,s is the standard deviation of the samples and,
step 702, determining a detection level α according to the length of each real-time yarn, and obtaining the corresponding Grubbs test critical value α(n) by table lookup; when Gi > α(n), di is judged to be abnormal and the ith yarn is broken; otherwise there is no abnormal value and the yarn is normal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110936624.4A CN113658131B (en) | 2021-08-16 | 2021-08-16 | Machine vision-based tour ring spinning broken yarn detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113658131A true CN113658131A (en) | 2021-11-16 |
CN113658131B CN113658131B (en) | 2024-06-18 |
Family
ID=78479239
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110936624.4A Active CN113658131B (en) | 2021-08-16 | 2021-08-16 | Machine vision-based tour ring spinning broken yarn detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113658131B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114581404A (en) * | 2022-03-03 | 2022-06-03 | 常州市宏发纵横新材料科技股份有限公司 | Broken yarn detection method for interweaving binding yarns |
CN114596269A (en) * | 2022-03-01 | 2022-06-07 | 常州市新创智能科技有限公司 | Method and device for detecting few-yarn winding of glass fiber cloth cover warp yarns |
CN114998268A (en) * | 2022-06-07 | 2022-09-02 | 常州市新创智能科技有限公司 | Detection method and device for doubling and breaking of lace binding yarns |
CN115690037A (en) * | 2022-10-28 | 2023-02-03 | 富尔美技术纺织(苏州)有限公司 | Method, device and system for detecting yarn broken ends of ring spinning frame and storage medium |
CN116815365A (en) * | 2023-08-28 | 2023-09-29 | 江苏恒力化纤股份有限公司 | Automatic detection method for broken yarn of ring spinning frame |
CN117670843A (en) * | 2023-12-07 | 2024-03-08 | 常州市宏发纵横新材料科技股份有限公司 | Method, device, equipment and storage medium for detecting broken yarn of color yarn |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107247930A (en) * | 2017-05-26 | 2017-10-13 | 西安电子科技大学 | SAR image object detection method based on CNN and Selective Attention Mechanism |
CN108315852A (en) * | 2018-02-12 | 2018-07-24 | 首都师范大学 | Spinning machine threading method and device |
CN110717903A (en) * | 2019-09-30 | 2020-01-21 | 天津大学 | Method for detecting crop diseases by using computer vision technology |
CN111235709A (en) * | 2020-03-18 | 2020-06-05 | 东华大学 | Online detection system for spun yarn evenness of ring spinning based on machine vision |
WO2020181685A1 (en) * | 2019-03-12 | 2020-09-17 | 南京邮电大学 | Vehicle-mounted video target detection method based on deep learning |
CN111738413A (en) * | 2020-06-04 | 2020-10-02 | 东华大学 | Spinning full-process energy consumption monitoring method based on feature self-matching transfer learning |
CN112150423A (en) * | 2020-09-16 | 2020-12-29 | 江南大学 | Longitude and latitude sparse mesh defect identification method |
WO2021139069A1 (en) * | 2020-01-09 | 2021-07-15 | 南京信息工程大学 | General target detection method for adaptive attention guidance mechanism |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107247930A (en) * | 2017-05-26 | 2017-10-13 | 西安电子科技大学 | SAR image object detection method based on CNN and Selective Attention Mechanism |
CN108315852A (en) * | 2018-02-12 | 2018-07-24 | 首都师范大学 | Spinning machine threading method and device |
WO2020181685A1 (en) * | 2019-03-12 | 2020-09-17 | 南京邮电大学 | Vehicle-mounted video target detection method based on deep learning |
CN110717903A (en) * | 2019-09-30 | 2020-01-21 | 天津大学 | Method for detecting crop diseases by using computer vision technology |
WO2021139069A1 (en) * | 2020-01-09 | 2021-07-15 | 南京信息工程大学 | General target detection method for adaptive attention guidance mechanism |
CN111235709A (en) * | 2020-03-18 | 2020-06-05 | 东华大学 | Online detection system for spun yarn evenness of ring spinning based on machine vision |
CN111738413A (en) * | 2020-06-04 | 2020-10-02 | 东华大学 | Spinning full-process energy consumption monitoring method based on feature self-matching transfer learning |
CN112150423A (en) * | 2020-09-16 | 2020-12-29 | 江南大学 | Longitude and latitude sparse mesh defect identification method |
Non-Patent Citations (4)
Title |
---|
Wu Xudong; Lyu Hanming: "A deep learning based detection model for spun yarn breakage", Tianjin Textile Science & Technology, no. 02 *
Wu Yizhou; Miao Rui; Zhu Jianhua; Zhang Jie: "Feature extraction and defect recognition of narrow lap welds based on 3D laser scanning", Applied Laser, no. 05 *
Chang Haitao; Gou Junnian; Li Xiaomei: "Application of Faster R-CNN in defect detection of industrial CT images", Journal of Image and Graphics, no. 007 *
Mou Xingang; Cai Yichao; Zhou Xiao; Chen Guoliang: "On-line detection system for yarn package defects based on machine vision", Journal of Textile Research, no. 01 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114596269A (en) * | 2022-03-01 | 2022-06-07 | 常州市新创智能科技有限公司 | Method and device for detecting few-yarn winding of glass fiber cloth cover warp yarns |
CN114596269B (en) * | 2022-03-01 | 2022-07-29 | 常州市新创智能科技有限公司 | Method and device for detecting few-yarn winding of glass fiber cloth cover warp yarns |
CN114581404A (en) * | 2022-03-03 | 2022-06-03 | 常州市宏发纵横新材料科技股份有限公司 | Broken yarn detection method for interweaving binding yarns |
CN114998268A (en) * | 2022-06-07 | 2022-09-02 | 常州市新创智能科技有限公司 | Detection method and device for doubling and breaking of lace binding yarns |
CN114998268B (en) * | 2022-06-07 | 2022-11-25 | 常州市新创智能科技有限公司 | Method and device for detecting doubling and yarn breaking of lace binding yarns |
CN115690037A (en) * | 2022-10-28 | 2023-02-03 | 富尔美技术纺织(苏州)有限公司 | Method, device and system for detecting yarn broken ends of ring spinning frame and storage medium |
CN116815365A (en) * | 2023-08-28 | 2023-09-29 | 江苏恒力化纤股份有限公司 | Automatic detection method for broken yarn of ring spinning frame |
CN116815365B (en) * | 2023-08-28 | 2023-11-24 | 江苏恒力化纤股份有限公司 | Automatic detection method for broken yarn of ring spinning frame |
CN117670843A (en) * | 2023-12-07 | 2024-03-08 | 常州市宏发纵横新材料科技股份有限公司 | Method, device, equipment and storage medium for detecting broken yarn of color yarn |
CN117670843B (en) * | 2023-12-07 | 2024-05-24 | 常州市宏发纵横新材料科技股份有限公司 | Method, device, equipment and storage medium for detecting broken yarn of color yarn |
Also Published As
Publication number | Publication date |
---|---|
CN113658131B (en) | 2024-06-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant |