CN113610852A - Yarn drafting quality monitoring method based on image processing - Google Patents


Info

Publication number
CN113610852A
Authority
CN
China
Prior art keywords
image
value
gray
pixel
fiber
Prior art date
Legal status
Granted
Application number
CN202111177959.9A
Other languages
Chinese (zh)
Other versions
CN113610852B (en)
Inventor
沈拥军 (Shen Yongjun)
Current Assignee
Jiangsu Xiangshun Fabric Co., Ltd.
Original Assignee
Jiangsu Xiangshun Fabric Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Jiangsu Xiangshun Fabric Co., Ltd.
Priority to CN202111177959.9A
Publication of CN113610852A
Application granted
Publication of CN113610852B
Legal status: Active

Classifications

    All under G (PHYSICS); G06 (COMPUTING; CALCULATING OR COUNTING); G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL):
    • G06T 7/00 Image analysis → G06T 7/0002 Inspection of images, e.g. flaw detection → G06T 7/0004 Industrial image inspection
    • G06T 5/20 Image enhancement or restoration using local operators → G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 5/00 Image enhancement or restoration → G06T 5/70 Denoising; Smoothing
    • G06T 7/10 Segmentation; Edge detection → G06T 7/13 Edge detection
    • G06T 2207/20 Special algorithmic details → G06T 2207/20004 Adaptive image processing
    • G06T 2207/20172 Image enhancement details → G06T 2207/20192 Edge enhancement; Edge preservation
    • G06T 2207/30108 Industrial image inspection → G06T 2207/30124 Fabrics; Textile; Paper
    • G06T 2207/30 Subject of image; Context of image processing → G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Computer Vision & Pattern Recognition
  • Quality & Reliability
  • Image Processing

Abstract

The invention discloses a yarn drafting quality monitoring method based on image processing. The method comprises the following steps: acquiring a gray image of the drafted yarn as the original image and performing fiber analysis with the original image as the image to be processed, during which an adaptive erosion image, a dilated image, a confidence binary image and the original image are fused into a new gray-scale map; taking each new gray-scale map obtained by the fiber analysis as the next image to be processed and iterating until the new gray-scale map is stable, then taking the stable map as the fiber distribution map; and obtaining a texture map from the fiber distribution map and judging the yarn quality from it. Compared with the prior art, the method iteratively optimizes the fiber edge features according to the edge distribution field using a mixture-model-based edge-adaptive erosion method, so that interwoven and folded fiber edges can be clearly distinguished and the yarn drafting quality monitoring result is more accurate.

Description

Yarn drafting quality monitoring method based on image processing
Technical Field
The invention relates to the fields of digital image processing and recognition and of textile spinning, and in particular to a yarn drafting quality monitoring method based on image processing.
Background
In the spinning process, yarns are prepared by subjecting cotton slivers to blending, drafting, twisting and other processes, so that the yarn acquires a certain twist and strength and the fibers on it are straightened as far as possible and kept parallel; the quality of the final yarn depends on these processes. Equipment defects and damage, and unreasonable equipment control parameters, degrade yarn quality, so in spinning and weaving the yarn quality must be strictly controlled at every process link, and problems in production, such as equipment faults or control-parameter faults, must be found in time. In yarn production, the degree to which the fibers are straightened after drafting is an important basis for evaluating and controlling yarn quality.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a yarn drafting quality monitoring method based on image processing, which obtains the degree of straightening of the fibers on the yarn, detects yarns whose low degree of stretching indicates a quality problem, and allows production parameters to be adjusted or equipment to be overhauled in time. The adopted technical scheme is as follows:
the invention provides a yarn drafting quality monitoring method based on image processing.
Acquiring a gray image of the drafted yarn as an original image, and performing fiber analysis with the original image as the image to be processed, the fiber analysis comprising: obtaining a denoised image of the image to be processed and a distribution vector for each pixel; sliding a window over the denoised image, centering the coordinates of all pixels in the window, and projecting the centered pixels onto the normal vector of the distribution vector to obtain each pixel's position along the normal direction; obtaining the maximum value points of the mapping between the gray values of the pixels in the window and their positions along the normal direction; constructing a Gaussian mixture model from the gray values of the pixels in the window, the number of single Gaussian models in the mixture being equal to the number of maximum value points; determining a new pixel value for the window center from the output, at argument zero, of the single Gaussian model whose mean is closest to zero; traversing the denoised image with the sliding window to obtain new values for all pixels, which form the adaptive erosion image; obtaining a dilated image of the original image, and obtaining a confidence binary image from the adaptive erosion image and the dilated image; and fusing the original image, the adaptive erosion image and the dilated image by means of the confidence binary image to obtain a new gray-scale map;
taking the new gray-scale map obtained by the fiber analysis as the image to be processed and performing the fiber analysis again, iterating until the new gray-scale map no longer changes, and taking the stable new gray-scale map as the fiber distribution map; and obtaining a texture map from the fiber distribution map and judging the yarn quality from it.
Preferably, the coordinate centering specifically comprises: acquiring the pixel coordinates of the center of the sliding window, and subtracting the center coordinates from the coordinates of every pixel in the window.
Preferably, obtaining the texture map from the fiber distribution map and judging the yarn quality specifically comprises: obtaining the texture map from the fiber distribution map and calculating the fiber straightening degree at every pixel position of the texture map; setting a degree threshold and binarizing the texture map according to the relation between the fiber straightening degree and the degree threshold to obtain a binary image; calculating the areas of the connected domains of the binary image; and setting an area threshold: if a connected domain's area is larger than the area threshold, the yarn is judged to have a quality problem; otherwise the yarn quality is qualified.
Preferably, the new pixel value at the window center is acquired as follows: acquire the single Gaussian model whose mean is positive and closest to zero and the single Gaussian model whose mean is negative and closest to zero; if only one such model exists, determine the gray value of the window-center pixel from that model's output at argument zero; if two such models exist, correct the output at argument zero of the model with the larger variance by a first fusion weight to obtain a first correction value, correct the output at argument zero of the model with the smaller variance by a second fusion weight to obtain a second correction value, and obtain the new pixel value of the window center from the first and second correction values.
Preferably, the first fusion weight is
ω₁ = f(μ₁, σ₁², μ₂, σ₂²),
a function of the parameters of the two single Gaussian models, where ω₁ is the first fusion weight, σ₁² is the smaller variance, μ₁ is the mean of the single Gaussian model with the smaller variance, σ₂² is the larger variance, and μ₂ is the mean of the single Gaussian model with the larger variance.
Preferably, the second fusion weight is
ω₂ = 1 − ω₁,
where ω₂ is the second fusion weight and ω₁ is the first fusion weight.
Preferably, the confidence binary image is obtained from the adaptive erosion image and the dilated image as follows: a confidence distribution map is obtained from the gray-value difference at each pixel position between the adaptive erosion image and the dilated image; a confidence threshold is set; the gray value of pixels in the confidence distribution map that exceed the confidence threshold is set to a first set value, and the gray values of the remaining pixels to a second set value, giving the confidence binary image.
Preferably, the original image, the adaptive erosion image and the dilated image are fused by means of the confidence binary image into the new gray-scale map as
I_new = I1 ∘ (I2 + I3)/2 + (J − I1) ∘ I0,
where I_new is the new gray-scale map, I0 is the pixel gray matrix of the original image, I1 is the pixel gray matrix of the confidence binary image, I2 is the pixel gray matrix of the dilated image, I3 is the pixel gray matrix of the adaptive erosion image, J is the all-ones matrix, I1 ∘ (I2 + I3)/2 denotes the Hadamard product of I1 with the mean (I2 + I3)/2 of the dilated and adaptive erosion images, and (J − I1) ∘ I0 denotes the Hadamard product of (J − I1) with I0.
Preferably, the fiber straightening degree is determined by the gray values of the pixels in the texture map.
Preferably, the texture map is binarized according to the fiber straightening degree as follows: a degree threshold is set, the gray values of pixels whose fiber straightening degree in the texture map exceeds the degree threshold are set to a first set value, and the gray values of the remaining pixels are set to a second set value, giving the binary image.
The embodiment of the invention at least has the following beneficial effects:
according to the invention, the distribution information of the fibers on the yarns is extracted by collecting the image data of the yarns, the fiber straightening distribution map of the yarns is obtained, and the quality of the yarns is accurately evaluated. In the process, the fiber edge characteristics are iteratively optimized according to the edge distribution field by using the edge adaptive corrosion method and the expansion algorithm based on the mixed model, so that the interwoven and folded fiber edges can be clearly distinguished, and the subsequent fiber straightening distribution map is more accurate.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for monitoring yarn drafting quality based on image processing according to an embodiment of the present invention.
Detailed Description
To further explain the technical means the invention adopts to achieve its intended purpose and their effects, the following describes in detail, with reference to the accompanying drawings and preferred embodiments, the specific implementation, structure, features and effects of the yarn drafting quality monitoring method based on image processing proposed by the invention. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, particular features, structures or characteristics in one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention aims to provide a yarn drafting quality monitoring method based on image processing, which obtains the degree of straightening of the fibers on the yarn, detects yarns whose low degree of stretching indicates a quality problem, and allows production parameters to be adjusted or equipment to be overhauled in time.
The specific scenario addressed by the invention is as follows: a high-definition, high-frame-rate camera is installed above the drafted yarn, looking down on it, and the yarn is illuminated vertically from above with parallel white light; the camera acquires images at random (random sampling inspection), and each acquired image is a gray image. Each image contains a yarn carrying many fibers. Ideally every fiber would be parallel and completely straightened, but this cannot be fully achieved in actual production; the fibers can only be straightened and made parallel as far as possible. Owing to equipment or control problems, some fibers on the yarn bunch together and are not straightened, which affects the quality of the yarn.
The following describes a specific scheme of the yarn drafting quality monitoring method based on image processing in detail with reference to the accompanying drawings.
The specific embodiment is as follows:
referring to fig. 1, a flow chart of steps of a method for monitoring yarn draft quality based on image processing according to an embodiment of the present invention is shown, the method includes the following steps:
acquiring a gray image of the drafted yarn as an original image, and performing fiber analysis with the original image as the image to be processed, the fiber analysis comprising: obtaining a denoised image of the image to be processed and a distribution vector for each pixel; sliding a window over the denoised image, centering the coordinates of all pixels in the window, and projecting the centered pixels onto the normal vector of the distribution vector to obtain each pixel's position along the normal direction; obtaining the maximum value points of the mapping between the gray values of the pixels in the window and their positions along the normal direction; constructing a Gaussian mixture model from the gray values of the pixels in the window, the number of single Gaussian models in the mixture being equal to the number of maximum value points; determining a new pixel value for the window center from the output, at argument zero, of the single Gaussian model whose mean is closest to zero; traversing the denoised image with the sliding window to obtain new values for all pixels, which form the adaptive erosion image; obtaining a dilated image of the original image, and obtaining a confidence binary image from the adaptive erosion image and the dilated image; and fusing the original image, the adaptive erosion image and the dilated image by means of the confidence binary image to obtain a new gray-scale map;
taking the new gray-scale map obtained by the fiber analysis as the image to be processed and performing the fiber analysis again, iterating until the new gray-scale map no longer changes, and taking the stable new gray-scale map as the fiber distribution map; and obtaining a texture map from the fiber distribution map and judging the yarn quality from it.
The specific implementation steps are as follows:
firstly, a gray image of the drafted yarn is obtained as an original image, and the original image is used as an image to be processed.
Specifically, a high-definition, high-frame-rate camera is installed above the drafted yarn, looking down on it, and the yarn is illuminated vertically from above with parallel white light. The camera acquires an image at random (random sampling inspection); each acquired image is a gray image, called the original image, which serves as the image to be processed.
At this point, an image to be processed is obtained.
And secondly, performing fiber analysis on the image to be processed to obtain a new gray-scale image.
A denoised image of the image to be processed and a distribution vector for each of its pixels are obtained. A window is slid over the denoised image; the coordinates of all pixels in the window are centered and projected onto the normal vector of the distribution vector, giving each pixel's position along the normal direction. The maximum value points of the mapping between the gray values of the pixels in the window and their positions along the normal direction are obtained. A Gaussian mixture model is constructed from the gray values of the pixels in the window, with as many single Gaussian models as there are maximum value points. A new pixel value for the window center is determined from the output, at argument zero, of the single Gaussian model whose mean is closest to zero, and the denoised image is traversed with the sliding window to obtain new values for all pixels, which form the adaptive erosion image. A dilated image of the original image is obtained, and a confidence binary image is derived from the adaptive erosion image and the dilated image. Finally the original image, the adaptive erosion image and the dilated image are fused by means of the confidence binary image to obtain the new gray-scale map.
Specifically, the fiber analysis produces the new gray-scale map from the image to be processed as follows:
(1) Acquire the distribution vector of each pixel of the image to be processed. First, the high-frequency features of the image to be processed are enhanced with an unsharp-mask algorithm; the high-frequency features include the edge features of the image but also its noise. The enhancement is needed because the fibers on the yarn are interlaced and cover one another, the fiber structure is fine, and the yarn moves during production, so the fiber structure may be blurred and its details unclear, and accurate fiber information cannot be read from the yarn directly; unsharp masking enhances the details and high-frequency features of the image to be processed and highlights the edge information of the fibers. Second, the edges of the feature-enhanced image are extracted with a Canny operator to obtain an edge binary image in which the edges represent the distribution of the fibers. However, because the noise of the original image has been enhanced as well, and the interlaced, mutually covering fibers were noisy to begin with, the edges of the edge binary image may be broken, or stuck together and covering one another; the fibers in the image are then hard to distinguish, the stretching of a whole fiber on the yarn cannot be traced, and the straightening degree cannot be obtained, so further processing is required. For any pixel with gray value 1 on the edge binary image, the Hessian matrix at that pixel position is acquired, and its two eigenvalues and two corresponding eigenvectors are computed. The relevant eigenvector is a two-dimensional unit vector that represents the direction in which the edge runs at that position, i.e., the pixel coordinates of the edge there are distributed approximately along the direction of the eigenvector; this eigenvector is called the distribution vector of the position. Finally, the distribution vectors of all pixels with gray value 1 on the edge binary image are acquired, and the distribution vectors of the unknown positions (gray value 0) are inferred from those of the known positions (gray value 1) by a two-dimensional linear interpolation algorithm. Distribution vectors are thus obtained for all positions, each representing the local trend of the edge; the distribution vectors of all positions are regarded together as an edge distribution field.
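To make step (1) concrete, the following minimal Python sketch builds such an edge distribution field with OpenCV and SciPy. The unsharp-mask weights, the Canny thresholds and the choice of linear interpolation are illustrative assumptions rather than values fixed by the patent:

```python
import cv2
import numpy as np
from scipy.interpolate import griddata

def edge_distribution_field(img_gray):
    # Unsharp masking: enhance high-frequency fiber edge detail.
    blur = cv2.GaussianBlur(img_gray, (5, 5), 0)
    sharp = cv2.addWeighted(img_gray, 1.5, blur, -0.5, 0)

    # Canny edge binary image: value 1 marks (possibly broken) fiber edges.
    edges = (cv2.Canny(sharp, 50, 150) > 0).astype(np.uint8)

    # Hessian entries from second-order Sobel derivatives.
    f = sharp.astype(np.float64)
    dxx = cv2.Sobel(f, cv2.CV_64F, 2, 0)
    dyy = cv2.Sobel(f, cv2.CV_64F, 0, 2)
    dxy = cv2.Sobel(f, cv2.CV_64F, 1, 1)

    ys, xs = np.nonzero(edges)
    vecs = np.zeros((len(ys), 2))
    for i, (y, x) in enumerate(zip(ys, xs)):
        hess = np.array([[dxx[y, x], dxy[y, x]],
                         [dxy[y, x], dyy[y, x]]])
        w, v = np.linalg.eigh(hess)
        vec = v[:, int(np.argmin(np.abs(w)))]  # small-|eigenvalue| axis runs along the edge
        if vec[0] < 0:
            vec = -vec                          # fix sign so interpolation cannot cancel
        vecs[i] = vec                           # stored as (vx, vy)

    # Linear interpolation of the known vectors to every pixel position.
    h, wd = img_gray.shape
    gy, gx = np.mgrid[0:h, 0:wd]
    field = griddata((ys, xs), vecs, (gy, gx), method="linear", fill_value=0.0)
    return field                                # shape (h, w, 2): the edge distribution field
```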
(2) Acquire the denoised image of the image to be processed, slide a window over it, center the coordinates of all pixels in the window, and project the centered pixels onto the normal vector of the distribution vector of the window-center pixel, obtaining each pixel's position along the normal direction. Specifically, first the image to be processed (i.e., the original image) is blurred with a 3 × 3 Gaussian kernel in order to remove a large amount of its noise. Second, a window is slid over the blurred original image, every position corresponding to the center of one rectangular window; preferably the window is 5 wide and 11 high, with its width axis parallel to the distribution vector at each position. Third, all pixels in the window are taken and their coordinates centered: the pixel coordinates of the window center are acquired, and the center coordinates are subtracted from the coordinates of every pixel in the window. After centering, the coordinates of every pixel in the window are in turn used to take the inner product with the normal vector of the distribution vector of the window-center pixel. The aim of this step is to reduce the pixel coordinates in the window to one dimension, i.e., to project every pixel onto the normal vector of the distribution vector. Since the distribution vector represents the edge trend at the window center, its normal vector is perpendicular to that trend; every projected pixel is regarded as a position x on the axis of the normal vector, paired with a gray value y, and all the pairs (x, y) together reflect the gray-value distribution perpendicular to the edge.
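A minimal sketch of the centering and projection for a single window follows. For brevity it samples an axis-aligned neighborhood rather than a window rotated to the local distribution vector, which is an assumption; the centering and the inner product with the normal vector follow the text:

```python
import numpy as np

def project_window(denoised, field, cy, cx, half_w=2, half_h=5):
    v = field[cy, cx]                       # (vx, vy): local edge direction
    n = np.array([-v[1], v[0]])             # normal vector, perpendicular to the edge
    positions, grays = [], []
    for dy in range(-half_h, half_h + 1):
        for dx in range(-half_w, half_w + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y < denoised.shape[0] and 0 <= x < denoised.shape[1]:
                # Coordinate centering (offset from the window center), then the
                # inner product with the normal vector gives a 1-D position.
                positions.append(dx * n[0] + dy * n[1])
                grays.append(float(denoised[y, x]))
    return np.array(positions), np.array(grays)   # the pairs (x_i, y_i) of the text
```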
(3) Obtain the maximum value points of the mapping between the gray values of the pixels in the window and their positions along the normal direction. Specifically, taking the gray values at all positions as the data set, a 5th-order polynomial curve
ŷ = a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴ + a₅x⁵
is fitted by least squares to roughly describe the mapping between the position x and the gray value y, and the number of maximum value points of the curve is acquired; call this number K.
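The fit and the count of maximum value points can be sketched directly with NumPy's polynomial tools:

```python
import numpy as np

def count_maxima(positions, grays, degree=5):
    poly = np.poly1d(np.polyfit(positions, grays, degree))  # least-squares 5th-order fit
    crit = poly.deriv().roots                                # stationary points
    crit = crit[np.isreal(crit)].real
    crit = crit[(crit > positions.min()) & (crit < positions.max())]
    curv = poly.deriv(2)
    return int(np.sum(curv(crit) < 0))                       # K = number of maximum value points
```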
(4) Construct a Gaussian mixture model from the gray values of the pixels in the window, with as many single Gaussian models in the mixture as there are maximum value points; determine the new pixel value of the window center from the output, at argument zero, of the single Gaussian model whose mean is closest to zero; and traverse the denoised image with the sliding window to obtain new values for all pixels, which form the adaptive erosion image.
Specifically, the adaptive erosion image is obtained as follows:
first, the invention establishes a mixed Gaussian model
Figure 377478DEST_PATH_IMAGE021
The gaussian mixture model G is formed by superimposing K single gaussian models, and in particular, the K value is the maximum number of curves. The invention roughly evaluates how many single Gaussian models are superposed to form the mixed Gaussian model by utilizing a polynomial curve. Wherein,
Figure 796959DEST_PATH_IMAGE022
mean, variance and pixel point projection of kth single Gaussian model in Gaussian mixture model on window center pixelThe point normal vector is located at position x on the numerical axis. Solving a Gaussian mixture model by using gray values of all positions as a data set, pixel positions x in a window as independent variables and pixel gray values as dependent variables through an EM (effective electromagnetic radiation) algorithm
Figure 7360DEST_PATH_IMAGE023
All of the parameters in
Figure 953319DEST_PATH_IMAGE024
. In the Gaussian mixture model, each single Gaussian model represents a possible edge, the mean value represents the position of the edge, and the variance represents the thickness of the edge.
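scikit-learn's GaussianMixture does not accept per-sample weights, so a faithful sketch needs a small hand-written EM loop. Treating the gray values as sample weights is our reading of "gray values as the data set"; the patent does not spell this detail out:

```python
import numpy as np

def fit_gmm_1d(xs, grays, k, iters=50):
    w = grays / (grays.sum() + 1e-12)              # gray values as sample weights
    mu = np.linspace(xs.min(), xs.max(), k)        # spread the initial means
    var = np.full(k, xs.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each projected position.
        d = xs[:, None] - mu[None, :]
        pdf = np.exp(-0.5 * d ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True) + 1e-12
        # M-step, weighted by the gray values.
        wr = w[:, None] * r
        nk = wr.sum(axis=0) + 1e-12
        mu = (wr * xs[:, None]).sum(axis=0) / nk
        d = xs[:, None] - mu[None, :]              # recompute with the updated means
        var = (wr * d ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / nk.sum()
    return pi, mu, var                             # mixing weights, means, variances
```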
Second, determine the new pixel value of the window center from the output, at argument zero, of the single Gaussian model whose mean is closest to zero.
The new pixel value of the window center is acquired as follows: acquire the single Gaussian model whose mean is positive and closest to zero and the single Gaussian model whose mean is negative and closest to zero; if only one such model is acquired, determine the gray value of the window-center pixel from that model's output at argument zero; if two are acquired, correct the output at argument zero of the model with the larger variance by a first fusion weight to obtain a first correction value, correct the output at argument zero of the model with the smaller variance by a second fusion weight to obtain a second correction value, and obtain the new pixel value of the window center from the first and second correction values.
Specifically, suppose μ₁ and μ₂ are the two means closest to 0, one with positive sign and the other with negative sign. If only one of μ₁ and μ₂ exists, say the parameter μ₁, construct the corresponding single Gaussian model G₁; the invention then considers the new window-center pixel to lie on the edge corresponding to G₁, and the gray value of the new pixel at the window center is set to G₁(0), the output value of G₁ when the argument x is 0.
If both μ₁ and μ₂ exist, the window-center pixel lies on the edge corresponding to G₁ and equally on the edge corresponding to G₂; the invention nevertheless expects the pixel to lie on one definite edge. The simple rule is to take, of μ₁ and μ₂, the parameter with the smaller absolute value, assumed to be μ₁, and to set the new pixel value of the window center to G₁(0), the output value of G₁ at argument 0. This simple rule is usable when the absolute values of μ₁ and μ₂ are both large. But when the absolute values of μ₁ and μ₂ are both small, the edges corresponding to the two single Gaussian models almost coincide, and the rule above cannot determine to which edge the window-center pixel belongs, i.e., cannot determine the new pixel value of the window center. The invention therefore proposes a new rule for determining the new pixel value of the window center: index the two models so that σ₁² ≤ σ₂², and set the new pixel value of the window-center pixel to
v_new = ω₁·G₂(0) + ω₂·G₁(0),
where G₂(0) is the output value of the single Gaussian model G₂ when its argument is 0 and G₁(0) is the output value of G₁ when its argument is 0; the new pixel value of the window-center pixel is thus a weighted fusion of G₂(0) and G₁(0). Here ω₁ is the first fusion weight, ω₂ is the second fusion weight, ω₁G₂(0) is the first correction value and ω₂G₁(0) is the second correction value; ω₁ is a function of μ₁, σ₁², μ₂ and σ₂², and ω₂ = 1 − ω₁. The weights are constructed so that the following hold. Writing Δμ = |μ₁| − |μ₂| for the difference of the absolute values of the means of the two Gaussian models: the larger Δμ, the closer the window-center pixel is to the mean of G₂ and the more the new pixel value should attend to G₂(0); the smaller Δμ, the closer the window-center pixel is to the mean of G₁ and the more the new pixel value should attend to G₁(0). The larger the separation |μ₁| + |μ₂| between the edges represented by the two single Gaussian models, the less attention needs to be paid to the size of Δμ; the smaller that separation, the more the weights must rely on Δμ to decide toward which model's edge the new pixel value should lean. Finally, given the means, the smaller the ratio σ₁²/σ₂², i.e., the more the larger variance σ₂² exceeds the smaller variance σ₁² and hence the more the thicknesses of the two corresponding edges differ, the smaller ω₁, so that the new pixel value attends the more to G₁(0); in other words the thin edge receives greater attention, which is advantageous for enhancing thin-edge structure. It is noted that if μ₂ does not exist, μ₂ and σ₂² are regarded as parameters equal to negative and positive infinity respectively, i.e., G₂ is viewed as a single Gaussian model of infinite variance located at infinity; in this limit ω₁ = 0, the rule reduces to the single-model case, and the general applicability of the method is demonstrated.
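The selection rule can be sketched as below. The patent's exact weight formula was published only as an image, so omega1 here is an illustrative stand-in chosen to satisfy the stated monotonicity (a small variance ratio σ₁²/σ₂², or a window center near the thin edge's mean, pushes the weight toward G₁(0)); it is not the patent's own expression:

```python
import numpy as np

def gauss_at_zero(mu, var):
    # Output of a single Gaussian model at argument x = 0.
    return np.exp(-0.5 * mu ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def new_center_value(mu, var):
    """mu, var: component means and variances from the fitted mixture."""
    pos = [k for k in range(len(mu)) if mu[k] >= 0]
    neg = [k for k in range(len(mu)) if mu[k] < 0]
    cand = []
    if pos:
        cand.append(min(pos, key=lambda k: mu[k]))    # positive mean closest to 0
    if neg:
        cand.append(max(neg, key=lambda k: mu[k]))    # negative mean closest to 0
    if len(cand) == 1:
        k = cand[0]
        return gauss_at_zero(mu[k], var[k])
    a, b = cand
    if var[a] > var[b]:
        a, b = b, a                                    # index a = smaller variance (G1)
    m1, v1, m2, v2 = mu[a], var[a], mu[b], var[b]
    # Illustrative weight (assumption, not the patent's formula): the variance
    # ratio favours the thin edge G1, the mean term favours the nearer edge.
    omega1 = (v1 / (v1 + v2)) * (2.0 * abs(m1) / (abs(m1) + abs(m2) + 1e-12))
    omega2 = 1.0 - omega1
    return omega1 * gauss_at_zero(m2, v2) + omega2 * gauss_at_zero(m1, v1)
```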
Thus, a new pixel value of the center pixel of the sliding window is obtained.
Finally, the denoised image is traversed with the sliding window to obtain new values for all pixels, which form the adaptive erosion image.
The adaptive erosion image is thus obtained.
In summary, the new window-center pixel value obtained by this method satisfies two conditions: first, the single Gaussian model close to the window center determines the pixel's gray value; second, if the variances of the two single Gaussian models differ, i.e., the two edges have different thicknesses, the gray value of the window-center pixel preferentially follows the value of the single Gaussian model corresponding to the thin edge. The benefit is that the edge to which the window-center pixel belongs can be estimated, overlapping edges are separated once the center pixel's gray value is reset, and thin edge structures are kept from being swamped. In this method every pixel of the original image corresponds to one window, and the distribution of gray values inside the window is used to assign a gray value to the window-center pixel; the reassigned gray value is smaller than the initial one, which amounts to applying an erosion filter to the window-center pixel. The process of reassigning the gray value of the window-center pixel is therefore called mixture-model-based edge-adaptive erosion, or edge-adaptive erosion for short.
It should be added that, in reality, the gray-value distribution perpendicular to the edge trend is not Gaussian; the invention nonetheless fits it with Gaussian distributions, which still represent the edge information, and since Gaussian filtering is used for denoising, a large part of the image detail can be restored accurately by the Gaussian mixture model. Describing the gray distribution perpendicular to the edge trend with Gaussian models is therefore reasonable and accurate.
(5) Obtain the dilated image of the original image, and obtain the confidence binary image from the adaptive erosion image and the dilated image: a confidence distribution map is computed from the gray-value difference at each pixel position between the adaptive erosion image and the dilated image; a confidence threshold is set; the gray value of pixels in the confidence distribution map that exceed the threshold is set to a first set value, and the gray values of the remaining pixels to a second set value, giving the confidence binary image.
Specifically, first the dilated image of the image to be processed (i.e., the original image) is acquired. A window is slid over the blurred image to be processed so that every position corresponds to the center of one rectangular window; preferably the window is 11 wide and 3 high, with its width axis parallel to the distribution vector at that position. A dilation operation is applied to the window-center pixel: the gray values of all pixels in the window are acquired, the maximum gray value is selected, and it is taken as the new gray value of the window-center pixel. The dilation operation can reconnect broken detail structures and restore part of the image's detail. Applying the dilation to every pixel of the blurred image to be processed yields the dilated image, so that every pixel of the original image now corresponds to one erosion result and one dilation result. Second, the confidence distribution map of the image to be processed is obtained. If the dilation and erosion results at a position differ little, the results there are accurate; if they differ greatly, the erosion and dilation results are inconsistent. The dilation result and the erosion result at the same position are subtracted and the difference recorded as a; exp(−3a) is used as the confidence of that position, representing how accurate the erosion and dilation results are after the original image has been processed. The confidences of all positions form the confidence distribution map. Finally, the confidence binary image is obtained: the confidence distribution map is mean-filtered with a 3 × 3 window to give a filtered map, which is then thresholded: the gray value of pixels greater than a first threshold (0.3) is set to a first set value, preferably 1, and the gray values of the remaining pixels to a second set value, preferably 0, giving the confidence binary image. At pixel positions with gray value 1 the erosion and dilation results are consistent and the confidence is high; at positions with gray value 0 they are inconsistent and the confidence is low.
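A sketch of step (5) follows. The axis-aligned 11 × 3 max filter stands in for dilation along the local distribution vector, and the division by 255 before exp(−3a) assumes gray values normalized to [0, 1]; the 3 × 3 mean filter and the 0.3 threshold follow the text:

```python
import cv2
import numpy as np

def confidence_binary(eroded, blurred):
    # Dilation: window maximum as the new center value (11 wide x 3 high).
    dilated = cv2.dilate(blurred, np.ones((3, 11), np.uint8))
    a = np.abs(dilated.astype(np.float64) - eroded.astype(np.float64)) / 255.0
    conf = np.exp(-3.0 * a)               # confidence exp(-3a) per position
    conf = cv2.blur(conf, (3, 3))         # 3x3 mean filtering
    return (conf > 0.3).astype(np.uint8)  # 1 = erosion and dilation agree
```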
(6) The original image, the adaptive erosion image and the dilated image are fused by means of the confidence binary image to obtain the new gray-scale map. Let the confidence binary image be I1, the dilated image I2, the adaptive erosion image I3 and the original image I0, all regarded as two-dimensional matrices of equal size. The new gray-scale map is then
I_new = I1 ∘ (I2 + I3)/2 + (J − I1) ∘ I0,
where I_new is the new gray-scale map; I0 is the pixel gray matrix of the original image; I1 is the pixel gray matrix of the confidence binary image; I2 is the pixel gray matrix of the dilated image; I3 is the pixel gray matrix of the adaptive erosion image; (I2 + I3)/2 is the mean of the dilated image and the adaptive erosion image; J is the all-ones matrix; and ∘ denotes the Hadamard product. The Hadamard product of any two matrices is the matrix formed by the products of their corresponding elements, and is itself again a matrix. The term I1 ∘ (I2 + I3)/2 keeps the mean of the erosion and dilation results only at positions where the two results are consistent, and is 0 at positions where they are inconsistent; the term (J − I1) ∘ I0 equals I0 at positions where the erosion and dilation results are inconsistent, and is 0 where they are consistent. For I_new, therefore, a position whose erosion and dilation results are consistent takes the mean of those two results as its gray value, and otherwise its gray value is determined by the original image I0.
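The fusion is a direct transcription of the formula:

```python
import numpy as np

def fuse(i0, i1, i2, i3):
    # i0 original, i1 confidence binary, i2 dilated, i3 adaptive erosion image.
    i0, i2, i3 = (m.astype(np.float64) for m in (i0, i2, i3))
    i1 = i1.astype(np.float64)
    return i1 * (i2 + i3) / 2.0 + (1.0 - i1) * i0   # element-wise (Hadamard) products
```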
The fiber-analysis pass over the image to be processed is now complete and yields the new gray-scale map.
Third, the new gray-scale map obtained by the fiber analysis is taken as the image to be processed and the fiber analysis is performed again, iterating continually until the new gray-scale map is stable; the stable new gray-scale map is taken as the fiber distribution map. Specifically, I_new is regarded as the image to be processed and the whole processing applied to the original image is executed again. Preferably this part of the invention is cycled four or more times, until the resulting I_new no longer changes; the last I_new, i.e., the stable new gray-scale map, is the fiber distribution map. In this repeated iteration I_new is constantly updated, the edge information is re-extracted and the edge distribution field is continually refreshed, so that the erosion result map and the dilation result map come to agree over ever larger regions; the final stable gray-scale map represents the fiber distribution on the yarn, eliminating the indistinguishability and breakage of fibers caused by their interweaving and mutual coverage and by noise and blur, so that the fiber distribution is clear, complete and easy to distinguish.
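The outer loop can be sketched as follows, with fiber_analysis standing for one hypothetical pass through steps (1) to (6) above, and a mean-absolute-difference tolerance standing in for "no change"; both are assumptions:

```python
import numpy as np

def fiber_distribution_map(original, max_iter=10, tol=1.0):
    img = original.copy()
    for _ in range(max_iter):
        new = fiber_analysis(img)   # hypothetical: one pass of steps (1)-(6)
        if np.mean(np.abs(new.astype(float) - img.astype(float))) < tol:
            return new              # stable new gray-scale map = fiber distribution map
        img = new
    return img
```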
The fiber distribution map is thus obtained from the new gray-scale map.
Finally, the texture map is obtained from the fiber distribution map and the yarn quality is judged.
Preferably, obtaining the texture map from the fiber distribution map and judging the yarn quality specifically comprises: obtaining the texture map from the fiber distribution map and calculating the fiber straightening degree at every pixel position of the texture map; setting a degree threshold and binarizing the texture map according to the relation between the fiber straightening degree and the degree threshold to obtain a binary image; calculating the areas of the connected domains of the binary image; and setting an area threshold: if a connected domain's area is larger than the area threshold, the yarn is judged to have a quality problem; otherwise the yarn quality is qualified.
Preferably, the texture map is binarized according to the fiber straightening degree as follows: a degree threshold is set, the gray values of pixels whose fiber straightening degree in the texture map exceeds the degree threshold are set to a first set value, and the gray values of the remaining pixels are set to a second set value, giving the binary image. The fiber straightening degree is determined by the gray values of the pixels in the texture map.
Specifically, first the texture map is calculated from the fiber distribution map I_new. A 17 × 17 window is set; all pixels of I_new inside each small window form a sub-image, the gray-level co-occurrence matrix of the sub-image and the entropy value of that co-occurrence matrix are calculated, and the entropy value is assigned to the center point of the window, completing the texture-feature calculation for the first small window. The window is then moved by one pixel to form another small-window image, for which a new co-occurrence matrix and entropy value are calculated, and so on, until a texture feature image has been generated step by step from the fiber distribution map; its size is the same as that of the original image. Each pixel of the texture image represents the complexity, or degree of disorder, of the local texture: the larger the value, the more complex the texture distribution, the more disordered the fiber distribution at that position, and the lower the straightening degree. Next, the fiber straightening profile is obtained: let p be the pixel value at any position of the texture image; the straightening degree of the fiber at that position is computed from p as a decreasing function, so that a more disordered texture yields a lower degree. The straightening degree of every position is obtained in this way, and the degrees of all positions constitute the fiber straightening profile. The profile is then thresholded: the gray value of pixels whose value is smaller than a second threshold is set to a first set value, and the gray values of the remaining pixels are set to a second set value, giving a binary image; preferably the second threshold is 0.2, the first set value is 1 and the second set value is 0. Finally, the yarn drafting quality is monitored and the yarn quality judged: the areas of the connected domains of the binary image are calculated, and if an area is larger than an area threshold relative to the yarn ROI (the ROI is considered defined in advance), the yarn quality is problematic; preferably the area threshold is one tenth of the area of the yarn ROI. If the yarns in three images acquired by the invention all show quality problems, the production equipment or production parameters are at fault and must be adjusted in time.
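The final stage can be sketched as below. The 17 × 17 window, the 0.2 degree threshold and the ROI/10 area threshold follow the text; the 8-level gray quantization and the use of exp(−p) as the straightening degree are assumptions, the latter standing in for the patent's unpublished formula:

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix

def glcm_entropy(patch, levels=8):
    q = (patch.astype(np.float64) * levels / 256.0).astype(np.uint8)  # quantize grays
    glcm = graycomatrix(q, [1], [0], levels=levels, normed=True)[:, :, 0, 0]
    nz = glcm[glcm > 0]
    return float(-np.sum(nz * np.log(nz)))          # entropy of the co-occurrence matrix

def judge_yarn(fiber_map, roi_area, win=17, deg_thresh=0.2):
    h, w = fiber_map.shape
    r = win // 2
    straight = np.ones((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            p = glcm_entropy(fiber_map[y - r:y + r + 1, x - r:x + r + 1])
            straight[y, x] = np.exp(-p)             # assumed straightening degree
    binary = (straight < deg_thresh).astype(np.uint8)   # 1 = poorly straightened
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    # Component 0 is the background; a defect larger than ROI/10 fails the yarn.
    return any(stats[i, cv2.CC_STAT_AREA] > roi_area / 10.0 for i in range(1, n))
```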
And finishing monitoring the yarn drafting quality.
It should be noted that the order of the above embodiments is for description only and does not represent their relative merits. Specific embodiments have been described above; other embodiments fall within the scope of the appended claims. In some cases the actions or steps recited in the claims may be performed in a different order from that in the embodiments and still achieve desirable results. Moreover, the processes depicted in the accompanying figures do not necessarily require the particular, sequential order shown to achieve desirable results; in some embodiments multitasking and parallel processing are also possible and may be advantageous.
The embodiments in this specification are described progressively; identical or similar parts may be found by cross-reference between them, and each embodiment focuses on its differences from the others.
The above description covers only preferred embodiments of the invention and does not limit it; any modification, equivalent replacement or improvement made within the spirit and principle of the invention is intended to be included within its scope of protection.

Claims (10)

1. A yarn drafting quality monitoring method based on image processing comprises the following steps:
acquiring a gray image of the drafted yarn as an original image, and performing fiber analysis with the original image as the image to be processed, the fiber analysis comprising: obtaining a denoised image of the image to be processed and a distribution vector for each pixel; sliding a window over the denoised image, centering the coordinates of all pixels in the window, and projecting the centered pixels onto the normal vector of the distribution vector to obtain each pixel's position along the normal direction; obtaining the maximum value points of the mapping between the gray values of the pixels in the window and their positions along the normal direction; constructing a Gaussian mixture model from the gray values of the pixels in the window, the number of single Gaussian models in the mixture being equal to the number of maximum value points; determining a new pixel value for the window center from the output, at argument zero, of the single Gaussian model whose mean is closest to zero; traversing the denoised image with the sliding window to obtain new values for all pixels, which form the adaptive erosion image; obtaining a dilated image of the original image, and obtaining a confidence binary image from the adaptive erosion image and the dilated image; and fusing the original image, the adaptive erosion image and the dilated image by means of the confidence binary image to obtain a new gray-scale map;
taking the new gray-scale map obtained by the fiber analysis as the image to be processed and performing the fiber analysis again, iterating until the new gray-scale map no longer changes, and taking the stable new gray-scale map as the fiber distribution map; and obtaining a texture map from the fiber distribution map and judging the yarn quality from it.
2. The yarn drafting quality monitoring method based on image processing according to claim 1, wherein the coordinate centering specifically comprises: acquiring the pixel coordinates of the center of the sliding window, and subtracting the center coordinates from the coordinates of every pixel in the window.
3. The yarn drafting quality monitoring method based on image processing according to claim 1, wherein obtaining the texture map from the fiber distribution map and judging the yarn quality specifically comprises:
obtaining the texture map from the fiber distribution map and calculating the fiber straightening degree at every pixel position of the texture map; setting a degree threshold and binarizing the texture map according to the relation between the fiber straightening degree and the degree threshold to obtain a binary image; calculating the areas of the connected domains of the binary image; and setting an area threshold: if a connected domain's area is larger than the area threshold, judging that the yarn has a quality problem; otherwise the yarn quality is qualified.
4. The yarn drafting quality monitoring method based on image processing according to claim 1, wherein the new pixel value of the window center is obtained by:
acquiring the single Gaussian model whose mean is positive and closest to zero and the single Gaussian model whose mean is negative and closest to zero, and, if only one such model is acquired, determining the gray value of the window-center pixel from that model's output at argument zero;
if two single Gaussian models are acquired, correcting the output at argument zero of the model with the larger variance by a first fusion weight to obtain a first correction value, correcting the output at argument zero of the model with the smaller variance by a second fusion weight to obtain a second correction value, and obtaining the new pixel value of the window center from the first correction value and the second correction value.
5. The method for monitoring yarn drafting quality based on image processing according to claim 4, wherein the first fusion weight is
ω₁ = f(μ₁, σ₁², μ₂, σ₂²),
a function of the parameters of the two single Gaussian models, where ω₁ is the first fusion weight, σ₁² is the smaller variance, μ₁ is the mean of the single Gaussian model with the smaller variance, σ₂² is the larger variance, and μ₂ is the mean of the single Gaussian model with the larger variance.
6. The yarn drafting quality monitoring method based on image processing as claimed in claim 4, wherein the second fusion weight is given by a formula that is rendered only as an image in the source text; the quantities entering the formula are the second fusion weight and the first fusion weight.
7. The method for monitoring the yarn drafting quality based on the image processing as claimed in claim 1, wherein the method for obtaining the confidence binary image according to the adaptive erosion image and the expansion image is as follows:
obtaining a confidence distribution map according to the gray value difference between the adaptive erosion image and the expansion image at each pixel position; setting a confidence threshold; setting the gray value of pixel points in the confidence distribution map that exceed the confidence threshold to a first set value and the gray value of the remaining pixel points to a second set value, thereby obtaining the confidence binary image.
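A minimal sketch of claim 7, taking the absolute value of the per-pixel difference as the gray value difference (an assumption, since the claim does not say whether the difference is signed):

```python
import numpy as np

def confidence_binary_image(eroded, dilated, conf_thresh, first_val=255, second_val=0):
    """Confidence distribution map = per-pixel gray-value difference between the
    adaptive erosion image and the expansion image; pixels above the confidence
    threshold receive the first set value, the rest the second set value."""
    conf_map = np.abs(dilated.astype(float) - eroded.astype(float))
    return np.where(conf_map > conf_thresh, first_val, second_val).astype(np.uint8)
```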
8. The yarn drafting quality monitoring method based on image processing as claimed in claim 1, wherein the original image, the adaptive erosion image and the expansion image are fused by means of the confidence binary image according to a formula that is rendered only as an image in the source text; the quantities entering the formula are: the new gray-scale image; the pixel gray matrix of the original image; the pixel gray matrix of the confidence binary image; the pixel gray matrix of the expansion image; and the pixel gray matrix of the adaptive erosion image; one term of the formula is the Hadamard product of the confidence binary image matrix with an image-rendered matrix, and another term is the Hadamard product of an image-rendered matrix with the original image matrix.
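Because the fusion formula itself is image-rendered in the source, the following sketch rests on a guessed reading of the two Hadamard-product terms: the confidence binary image times the adaptive erosion image, plus its complement times the original image, with the expansion image contributing only through the confidence binary image of claim 7. This is an assumption, not the patent's confirmed formula.

```python
import numpy as np

def fuse_images(original, eroded, conf_binary):
    """Assumed fusion: keep the adaptive-erosion gray value where the confidence
    binary image is set, and the original gray value elsewhere (Hadamard, i.e.
    element-wise, products)."""
    B = (conf_binary > 0).astype(float)  # confidence binary image as a 0/1 mask
    return B * eroded.astype(float) + (1.0 - B) * original.astype(float)
```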
9. The method of claim 3, wherein the fiber straightening degree is determined by gray-scale values of pixels in the texture image.
10. The yarn drafting quality monitoring method based on image processing as claimed in claim 3, characterized in that the method for binarizing the texture image map according to the fiber straightening degree to obtain a binarized image comprises:
setting a degree threshold, setting the gray value of pixel points whose fiber straightening degree in the texture image map is greater than the degree threshold to a first set value, and setting the gray value of the remaining pixel points to a second set value, thereby obtaining the binarized image.
CN202111177959.9A 2021-10-10 2021-10-10 Yarn drafting quality monitoring method based on image processing Active CN113610852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111177959.9A CN113610852B (en) 2021-10-10 2021-10-10 Yarn drafting quality monitoring method based on image processing

Publications (2)

Publication Number Publication Date
CN113610852A true CN113610852A (en) 2021-11-05
CN113610852B CN113610852B (en) 2021-12-10

Family

ID=78343412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111177959.9A Active CN113610852B (en) 2021-10-10 2021-10-10 Yarn drafting quality monitoring method based on image processing

Country Status (1)

Country Link
CN (1) CN113610852B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107870172A (en) * 2017-07-06 2018-04-03 黎明职业大学 A kind of Fabric Defects Inspection detection method based on image procossing
EP3732470A2 (en) * 2017-12-26 2020-11-04 Petr Perner Devices and methods for yarn quality monitoring
CN108346141A (en) * 2018-01-11 2018-07-31 浙江理工大学 Unilateral side incidence type light guide plate defect extracting method
CN109461136A (en) * 2018-09-20 2019-03-12 天津工业大学 The detection method of fiber distribution situation in a kind of blended fibre products

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SUBHASISH ROY et al.: "Yarn Hairiness Evaluation Using Image Processing", 2014 International Conference on Control, Instrumentation, Energy & Communication (CIEC) *
ZHANG GUOHONG et al.: "Application of image processing technology in yarn hairiness detection", Journal of Hebei University of Science and Technology *
XIE DEYI et al.: "Separation method of crossed fibers based on digital image processing", Journal of Shandong Institute of Light Industry *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067227A (en) * 2021-11-29 2022-02-18 黑龙江农垦建工路桥有限公司 Side slope potential safety hazard monitoring method based on unmanned aerial vehicle aerial photography
CN115272303A (en) * 2022-09-26 2022-11-01 睿贸恒诚(山东)科技发展有限责任公司 Textile fabric defect degree evaluation method, device and system based on Gaussian blur
CN115272303B (en) * 2022-09-26 2023-03-10 睿贸恒诚(山东)科技发展有限责任公司 Textile fabric defect degree evaluation method, device and system based on Gaussian blur
CN115345882A (en) * 2022-10-18 2022-11-15 南通莱晋纺织科技有限公司 Textile compactness evaluation method based on image enhancement
CN116934749A (en) * 2023-09-15 2023-10-24 山东虹纬纺织有限公司 Textile flaw rapid detection method based on image characteristics
CN116934749B (en) * 2023-09-15 2023-12-19 山东虹纬纺织有限公司 Textile flaw rapid detection method based on image characteristics
CN117994236A (en) * 2024-02-21 2024-05-07 徐州普路通纺织科技有限公司 Vortex spun broken yarn intelligent detection method and system based on image analysis

Also Published As

Publication number Publication date
CN113610852B (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN113610852B (en) Yarn drafting quality monitoring method based on image processing
CN113538433B (en) Mechanical casting defect detection method and system based on artificial intelligence
CN110619618B (en) Surface defect detection method and device and electronic equipment
CN114723681B (en) Concrete crack defect detection method based on machine vision
CN110378313B (en) Cell cluster identification method and device and electronic equipment
CN116934749B (en) Textile flaw rapid detection method based on image characteristics
CN117764989B (en) Visual-aided display screen defect detection method
CN110889837A (en) Cloth flaw detection method with flaw classification function
CN117422712B (en) Plastic master batch visual detection method and system based on image filtering processing
CN114565607B (en) Fabric defect image segmentation method based on neural network
CN116152261B (en) Visual inspection system for quality of printed product
CN113592782A (en) Method and system for extracting X-ray image defects of composite carbon fiber core rod
CN115330758A (en) Welding quality detection method based on denoising processing
CN107895371B (en) Textile flaw detection method based on peak coverage value and Gabor characteristics
CN115937186A (en) Textile defect identification method and system
CN114905712A (en) Injection molding machine control method based on computer vision
CN117067112B (en) Water cutting machine and control method thereof
Tao Enhanced Canny Algorithm for Image Edge Detection in Print Quality Assessment
CN115008255B (en) Tool wear identification method and device for machine tool
CN116363064A (en) Defect identification method and device integrating target detection model and image segmentation model
CN114612384B (en) Method and system for detecting defects of appearance material of sport protector
CN109064425A (en) A kind of image de-noising method of adaptive non local total variation
WO2021260765A1 (en) Dimension measuring device, semiconductor manufacturing device, and semiconductor device manufacturing system
CN110647843B (en) Face image processing method
CN113870342A (en) Appearance defect detection method, intelligent terminal and storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant