CN115294097A - Textile surface defect detection method based on machine vision - Google Patents

Textile surface defect detection method based on machine vision Download PDF

Info

Publication number
CN115294097A
CN115294097A CN202211169528.2A CN202211169528A CN115294097A CN 115294097 A CN115294097 A CN 115294097A CN 202211169528 A CN202211169528 A CN 202211169528A CN 115294097 A CN115294097 A CN 115294097A
Authority
CN
China
Prior art keywords
window
pixel point
point
value
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211169528.2A
Other languages
Chinese (zh)
Inventor
赵双龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Junheqing Textile Co ltd
Original Assignee
Nantong Junheqing Textile Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong Junheqing Textile Co ltd filed Critical Nantong Junheqing Textile Co ltd
Priority to CN202211169528.2A priority Critical patent/CN115294097A/en
Publication of CN115294097A publication Critical patent/CN115294097A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume

Abstract

The invention relates to the field of data processing, in particular to a textile surface defect detection method based on machine vision, which comprises the steps of carrying out noise processing on gray information by acquiring the gray information in a textile image and determining the shape and the size of a window according to a template image, namely determining a self-adaptive weighted denoising mean value according to the gray information and the distance information of each pixel point in the gray information, carrying out denoising processing on the gray information, and finally calculating the denoised data to obtain the corresponding qualification rate; that is, the scheme of the invention can accurately detect the defects on the surface of the textile by processing the data contained in the image.

Description

Textile surface defect detection method based on machine vision
Technical Field
The application relates to the field of data processing, in particular to a textile surface defect detection method based on machine vision.
Background
The appearance quality of textiles is one of the most important indexes for measuring the product quality, and many defects not only affect the aesthetic appearance of the fabrics, but also can cause difficulty in subsequent processing and leave quality hidden troubles which are difficult to eliminate.
At present, the automatic detection technology for textiles by using machine vision is widely used. The technology replaces manual visual detection with a high-speed camera and a computer, so that the detection standard is unified, and the detection efficiency is greatly improved. However, due to the production environment of the textile and the influence of the shooting equipment, the collected image can contain a large amount of noise, and due to the unevenness of the surface of the textile, the texture features in the image are numerous. Therefore, in the process of machine vision detection, the accuracy of texture feature extraction has an important influence on the precision of defect detection.
In the traditional denoising method, the edges and texture features in the image become fuzzy, the image quality is reduced, the accuracy of texture feature extraction is influenced, and the precision of defect detection is further influenced.
Disclosure of Invention
In order to solve the above technical problems, the present invention aims to provide a method for detecting defects on a textile surface based on machine vision, which adopts the following technical scheme:
the invention discloses a textile surface defect detection method based on machine vision, which comprises the following steps:
the method comprises the following steps: acquiring a textile image on a production line, performing semantic segmentation to identify the textile surface image, and performing graying processing to obtain a grayscale image;
setting the shape and size of a sliding window, and carrying out self-adaptive weighted mean denoising on the gray level image based on the shape and size of the sliding window to obtain a denoised gray level image;
step three: segmenting the defect region of the denoised gray level image, calculating the defect area of the defect region, and obtaining the qualification rate of the surface quality of the textile based on the defect area;
wherein, the process of setting the size of the sliding window is as follows:
firstly, taking a flawless textile surface image as a template image, randomly selecting a plurality of pixel points on the upper edge of the template image, traversing the pixel points vertically downwards by taking each pixel point as a starting point, and obtaining line segments of the corresponding pixel points;
establishing a plane coordinate system, wherein the ordinate represents the gray value of the pixel points, the abscissa represents the number of the traversed pixel points, and the step length is a single pixel point; taking a line segment as an example, for a plane coordinate systemPerforming smooth curve fitting on the points, representing the period of the curve by the distance between two adjacent wave troughs to obtain a period set
Figure 416606DEST_PATH_IMAGE001
Wherein n is the number of wave troughs on the curve; computing a set of cycles
Figure 836086DEST_PATH_IMAGE002
Has a mean value of
Figure 921854DEST_PATH_IMAGE003
The standard length of the meridian single bulge on the line segment in the image is represented; obtaining the standard lengths of all the line segments, and further obtaining a standard length mean value; then randomly selecting a plurality of pixel points on the left edge of the template image so as to obtain a standard diameter length mean value;
and (4) rounding the standard length average value and the standard diameter length average value downwards, if the rounded numerical value is an even number, subtracting 1 from the even number to obtain the transverse length and the longitudinal length of the window which are respectively B and A, wherein the shape of the window is a cross.
Further, the adaptive weighted mean denoising process is as follows:
traversing the gray level image line by line with a window, and calculating the gray level difference value of two adjacent pixel points from left to right to obtain a difference value sequence; acquiring turning points in the difference sequence;
if the number of the turning points is less than or equal to the set value, the pixel points corresponding to the turning points are not noise points, the distance between two adjacent pixel points is made to be 1, the pixel points are traversed leftwards and rightwards by taking the central pixel point of the window as a starting point, the distance between each pixel point and the central pixel point is counted, and a first distance set of each non-central non-salt-pepper noise pixel point from the central pixel point to the central pixel point from left to right in the transverse direction in the window is obtained
Figure 946441DEST_PATH_IMAGE004
(ii) a And counting a first gray value set corresponding to the elements in the first distance set
Figure 325208DEST_PATH_IMAGE005
(ii) a Obtaining a first self-adaptive weight according to the first distance set and the first gray value set; obtaining a first self-adaptive weighted denoising mean value based on the first self-adaptive weight;
if the turning point is larger than the set value, the turning point is a noise point or a defect point, and a pixel point corresponding to the turning point is removed; and counting a second gray value set of each non-central non-break-point non-salt-and-pepper noise pixel point from left to right in the transverse direction in the window, so as to obtain a second self-adaptive weight of the gray value of the window central pixel point in the transverse direction, and obtaining a second self-adaptive weighted denoising mean value based on the second self-adaptive weight.
Further, the first adaptive weight is:
Figure 966405DEST_PATH_IMAGE006
wherein the content of the first and second substances,
Figure 324705DEST_PATH_IMAGE007
Figure 734958DEST_PATH_IMAGE008
Figure 19046DEST_PATH_IMAGE009
representing the number of non-center non-salt-and-pepper noise pixel points in the horizontal direction in the window,
Figure 147539DEST_PATH_IMAGE010
the distance weight is represented by a distance weight,
Figure 309530DEST_PATH_IMAGE004
the distance between the g-th noncentral non-salt-pepper noise pixel point from left to right and the center pixel point in the window is represented, and when the distance is shorter, the weight is closer
Figure 231348DEST_PATH_IMAGE010
The larger the size of the tube is,
Figure 453382DEST_PATH_IMAGE011
representing the pixel gray level difference weight, C representing the gray level of the center pixel,
Figure 69171DEST_PATH_IMAGE005
the gray value of the g-th non-center non-salt-and-pepper noise pixel point from left to right in the transverse direction of the window is represented, so that when the distance between the non-center non-salt-and-pepper noise pixel point and the center pixel point in the transverse direction of the window is shorter and the gray value difference is smaller,
Figure 769274DEST_PATH_IMAGE010
and
Figure 387075DEST_PATH_IMAGE011
the larger the value, the adaptive weight value
Figure 780010DEST_PATH_IMAGE012
The larger.
Further, the first adaptive mean value is:
Figure 883095DEST_PATH_IMAGE013
wherein
Figure 386889DEST_PATH_IMAGE009
Representing the number of non-center non-salt-and-pepper noise pixel points in the horizontal direction in the window,
Figure 124776DEST_PATH_IMAGE005
representing the gray value of the non-center non-salt-pepper noise pixel point from left to right in the transverse direction of the window,
Figure 688612DEST_PATH_IMAGE012
and representing the weight of the g-th non-center non-salt-and-pepper noise pixel point in the window from left to right in the transverse direction.
Further, the first adaptive mean value is:
Figure 154360DEST_PATH_IMAGE014
wherein C is the gray value of the central pixel point of the window,
Figure 960379DEST_PATH_IMAGE015
indicating that the window is laterally second from left to right
Figure 319817DEST_PATH_IMAGE016
The gray value of each non-central non-break-point non-salt-and-pepper-noise pixel point is smaller when the difference between the gray value of each non-central non-break-point non-salt-and-pepper-noise pixel point and the gray value of the central pixel point is smaller, and the weight is smaller
Figure 54554DEST_PATH_IMAGE017
The larger.
Further, the defect area of the denoised gray level image is segmented by adopting a maximum entropy method.
Further, the qualification rate is
Figure 511993DEST_PATH_IMAGE018
Wherein the content of the first and second substances,
Figure 888747DEST_PATH_IMAGE019
and
Figure 837112DEST_PATH_IMAGE020
the area of the gray image and the defect area of the defect area are respectively.
The invention has the beneficial effects that:
according to the scheme, the textile image is collected through a camera above a production line, the textile surface image is obtained through semantic segmentation, and then angle correction is carried out. And then setting the shape and size of a window according to the spinning mode and the spinning line characteristics in the image, and carrying out self-adaptive weighted mean denoising. And finally, identifying the surface defects of the textile, and judging whether the textile is qualified, wherein the method is easy to realize and can accurately identify the surface defects of the textile.
Meanwhile, according to the periodic texture change in the textile surface image, a cross-shaped window and a determined window size are designed, the gray level change of pixels in each window is a periodic linear change similar to a sine curve, and then the image is subjected to self-adaptive weighted mean denoising according to the linear change, so that the edge and texture characteristics in the image are protected.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a machine vision based method of detecting defects on a textile surface in accordance with the present invention;
fig. 2 is a surface grayscale image of a textile.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the embodiments, structures, features and effects thereof according to the present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" refers to not necessarily the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The present invention is directed to the following scenarios: when textile defects are identified, due to the fact that noise exists in a surface image, the edge and texture features in the image can be blurred by an existing denoising mode, image quality is reduced, and the precision of defect detection is affected. The method comprises the steps of processing collected textile surface images by using a computer vision technology, designing a cross-shaped window and determining the size of the window according to periodic texture changes in the textile surface images, enabling the gray level of pixels in each window to change into periodic linear changes in a similar sine curve, denoising the images according to the linear changes in a self-adaptive weighted mean value, protecting the edge and texture characteristics in the images, further accurately identifying defect areas, and judging whether the textile surface quality is qualified.
Specifically, referring to FIG. 1, a flowchart illustrating steps of a method embodiment of a machine vision based textile surface defect detection method of the present invention includes the steps of:
the method comprises the following steps: and acquiring a textile image on a production line, performing semantic segmentation to identify the textile surface image, and performing graying processing to obtain a grayscale image.
In this embodiment, through set up image acquisition device on the production line, carry out image acquisition to the fabrics, obtain the surface image of fabrics. Wherein the image acquisition device is a camera. Meanwhile, the LED lamp annular light source is adopted for illumination, so that the illumination of the collected image is uniform.
In the embodiment, semantic segmentation is performed on the acquired image of the solar panel, and specifically, a DNN semantic segmentation mode is adopted to identify a target in the segmented image.
The relevant content of the DNN network is as follows:
the data set used is a production line textile image data set acquired from an overhead view.
The pixels to be segmented are divided into 2 types, namely the labeling process of the training set corresponding to the labels is as follows: the semantic label of the single channel, the pixel of the corresponding position belongs to the background class and is marked as 0, and the pixel of the corresponding position belongs to the fabric surface and is marked as 1.
The task of the network is classification, so the loss function used is a cross entropy loss function.
So far, the DNN is used for realizing the processing of the textile image on the production line and obtaining the connected domain information of the textile surface in the image.
In the embodiment, the angle of the spinning line is obtained by utilizing Hough line detection, so that the image is rotated, and the warps are longitudinally distributed and the wefts are transversely distributed in the image; and finally, carrying out gray processing on the surface image of the textile to obtain a final gray image.
Step two: and setting the shape and the size of the sliding window, and carrying out self-adaptive weighted mean denoising on the gray level image based on the shape and the size of the sliding window to obtain a denoised gray level image.
Taking a textile with warp and weft characteristics as an example, as shown in fig. 2, the specific process of performing adaptive weighted mean denoising in this embodiment is:
(1) And setting the shape and size of the window according to the warp and weft characteristics in the surface image of the textile.
Firstly, taking a flawless textile surface image as a template, randomly selecting 10 pixel points on the upper side edge of the textile surface template image, and traversing pixel points vertically downwards by taking the 10 pixel points as starting points to obtain 10 line segments;
establishing a plane coordinate system, wherein the ordinate represents the gray value of the pixel points, the abscissa represents the number of the traversed pixel points, and the step length is a single pixel point; taking a line segment as an example, a smooth curve fitting is performed to the points on the plane coordinate system, and the curve is similar to a sine curve. Therefore, the period of the curve is represented by the distance between two adjacent wave troughs to obtain a period set
Figure 8330DEST_PATH_IMAGE001
Wherein n is the number of wave troughs on the curve; computing a set of cycles
Figure 540680DEST_PATH_IMAGE002
Has a mean value of
Figure 924388DEST_PATH_IMAGE003
And the standard length of the meridian single bulge on the line segment in the image is shown.
Similarly, the curve fitted on the other 9 line segments is similar to a sine curve and the period is similar, so that the standard length of the meridian single bulge on the other 9 line segments in the image is obtained
Figure 992838DEST_PATH_IMAGE003
Taking the average of the 10 data as
Figure 69379DEST_PATH_IMAGE021
And represents the standard length of a single protrusion on the meridian in the image.
Then randomly selecting 10 pixel points on the left side edge of the textile surface template image, traversing pixel points vertically and rightwards by taking the 10 pixel points as starting points to obtain 10 line segments; obtaining the standard diameter length of the meridian cylinder in the image
Figure 354604DEST_PATH_IMAGE022
The shape of the window is cross-shaped according to the gray scale change of pixels in rows and columns in the surface image of the textile, the transverse length and the longitudinal length of the cross-shaped window are respectively standard periods of fitted curves on the rows and the columns of the image, and the transverse length and the longitudinal length of the cross-shaped window are odd numbers, so that the cross-shaped window is aligned to the standard period of the fitted curves on the rows and the columns of the image
Figure 542003DEST_PATH_IMAGE022
And
Figure 464960DEST_PATH_IMAGE021
and rounding downwards, if the rounded value is an even number, subtracting 1 from the even number to obtain the transverse length and the longitudinal length of the window which are respectively B and A.
It should be noted that the textile in the scheme uses a weaving mode of interweaving two warps and one weft, every two warps are vertically interwoven with one weft in a group, so that the gray value of the pixel points in each row in the image changes according to the cylindrical structure of the warps, the horizontal change of the gray value on a single warp is that the gray value is gradually increased to the middle of the warp, and then the gray value is gradually reduced. And the gray value of each row of pixel points in the image is jacked up by the weft according to the longitude, so that the raised change is also that the gray value is gradually increased to the highest position of the raised part and then gradually reduced.
(2) And carrying out self-adaptive weighted mean denoising on each pixel point in the textile surface image.
Firstly, traversing pixel points in the textile surface image by using a designed window, and only analyzing the pixel points of the textile surface image in the window if the center point of the window is at the edge of the image.
For the pixel points in the window, salt and pepper noise points with pixel gray values of 0 and 255 are removed, the pixel points on the leftmost side of the window are taken as starting points, traversal is performed right by pixel points, the gray difference value of two adjacent pixel points is calculated, if the difference value is positive, the difference value is marked as 1, if the difference value is negative, the difference value is marked as-1, and if the difference value is 0, the difference value is not marked. One sequence was obtained as follows: [1,1,1, -1, -1, -1];
and traversing from left to right in the sequence, and if the value of the next point is different from that of the previous point, marking the pixel point between the two difference values as a break point.
If the number of the marked folding points is not more than 2, and when the number of the folding points is 2, and the two folding points are respectively arranged at the left side and the right side of the central pixel point of the window, the fact that the pixel gray scale change in the transverse direction of the window is less influenced by noise is shown.
It should be noted that: the gray value change of the pixels in the transverse direction and the longitudinal direction in the window is in a curve fluctuation period, if the positions of the pixels are changed to a certain extent, the curve shape in the period can be a parabola with a downward opening, and therefore the distance change from the central pixel to the rest of the pixels can be set according to the positions of the central pixel of the window on the parabola.
Making the distance between two adjacent pixels be 1, traversing pixel point by pixel point leftwards by taking the central pixel point of the window as a starting point, counting the distance between each pixel point and the central pixel point, recording the distances to 1,2,3 and … in sequence, making the distance to the marked break point be n, making the distances of the pixel points behind the break point be n-1,n-2,n-3 and … in sequence, when reaching n-n, making the distance be 1, and making the distance to be 2,3 and … in sequence. Thereby obtaining a distance set of the center pixel point of the window and each non-salt-and-pepper noise pixel point at the left side
Figure 712402DEST_PATH_IMAGE023
Wherein e is a windowThe number of non-salt-and-pepper noise pixel points on the left side of the pixel point at the center of the mouth. Similarly, a distance set of the center pixel point of the window and each non-salt-and-pepper noise pixel point on the right side of the distance is obtained
Figure 360289DEST_PATH_IMAGE024
Wherein q is the number of non-salt-and-pepper noise pixel points on the right side of the center pixel point of the window; thereby obtaining a distance set of each non-central non-salt-and-pepper noise pixel point from left to right in the window to the central pixel point
Figure 616958DEST_PATH_IMAGE025
The farther its distance, the smaller the weight.
Then, the gray value set of each non-central non-salt-and-pepper noise pixel point from left to right in the transverse direction in the statistical window is counted
Figure 394422DEST_PATH_IMAGE026
The larger the gray difference is, the smaller the weight is. If the center pixel point of the window is not the salt and pepper noise point, the gray value C is unchanged. If the central pixel point of the window is a salt and pepper noise point, because the gray value of the pixel in the window should be in a curve fluctuation period in the horizontal direction and the longitudinal direction, the gray value of one pixel point should be similar to the gray values of two adjacent points, so the gray value C of the central pixel point of the window is:
Figure 571020DEST_PATH_IMAGE027
wherein
Figure 66723DEST_PATH_IMAGE028
And
Figure 2449DEST_PATH_IMAGE029
representing the pixel gray values of two non-salt-and-pepper noise points which are adjacent to each other in the horizontal direction of the central pixel point in the window,
Figure 132954DEST_PATH_IMAGE030
and
Figure 987777DEST_PATH_IMAGE031
and expressing the pixel gray values of two non-salt-pepper noise points of the central pixel point in the window, which are vertically adjacent.
At the moment, the self-adaptive weight of the gray value of the central pixel point of the window in the transverse direction
Figure 970777DEST_PATH_IMAGE012
Comprises the following steps:
Figure 303669DEST_PATH_IMAGE032
Figure 554260DEST_PATH_IMAGE033
Figure 579985DEST_PATH_IMAGE034
Figure 784701DEST_PATH_IMAGE035
wherein
Figure 921284DEST_PATH_IMAGE009
Representing the number of horizontal non-center non-salt-and-pepper noise pixel points in the window,
Figure 167327DEST_PATH_IMAGE010
the distance weight is represented by a distance weight,
Figure 223007DEST_PATH_IMAGE004
representing the distance from the g-th noncentral non-salt-pepper noise pixel point from left to right to the center pixel point in the window, when the distance is closer, the weight is closer
Figure 790386DEST_PATH_IMAGE010
The larger the size of the tube is,
Figure 31792DEST_PATH_IMAGE011
representing the pixel gray level difference weight, C representing the gray level of the center pixel,
Figure 492861DEST_PATH_IMAGE005
the gray value of the g-th non-center non-salt-and-pepper noise pixel point from left to right in the transverse direction of the window is represented, so when the distance between the non-center non-salt-and-pepper noise pixel point and the center pixel point in the transverse direction of the window is shorter and the gray value difference is smaller,
Figure 594809DEST_PATH_IMAGE010
and
Figure 774118DEST_PATH_IMAGE011
the larger the value, the adaptive weight value
Figure 751039DEST_PATH_IMAGE012
The larger.
Therefore, the self-adaptive weighted denoising mean value of the gray value of the central pixel point of the window in the transverse direction
Figure 66613DEST_PATH_IMAGE036
Comprises the following steps:
Figure 745988DEST_PATH_IMAGE037
wherein
Figure 911127DEST_PATH_IMAGE009
Representing the number of non-center non-salt-and-pepper noise pixel points in the horizontal direction in the window,
Figure 193204DEST_PATH_IMAGE005
representing the gray value of the non-center non-salt-pepper noise pixel point from left to right in the transverse direction of the window,
Figure 628865DEST_PATH_IMAGE012
representing the weight of the non-center non-salt-pepper noise pixel point from the left to the g-th in the transverse direction in the windowThe value is obtained. Wherein
Figure 72616DEST_PATH_IMAGE009
Individual weight value
Figure 725051DEST_PATH_IMAGE012
The sum of (1). According to the weight value of the image texture period characteristic and the gray value change, the image edge and the texture period characteristic can be protected after denoising.
If the number of the marked folding points is more than 2, and the number of the folding points on one side of the left side and the right side of the central pixel point of the window is more than 1, the influence of noise or defects on the pixel gray scale change in the transverse direction of the window is large, and part of the folding points are noise points or defect points, so that the pixel points corresponding to the marked folding points are removed.
Then, the gray value set of non-center non-break-point non-salt-and-pepper noise pixel points from left to right in the transverse direction in the statistical window is counted
Figure 545240DEST_PATH_IMAGE038
H is the number of non-center non-break-point non-salt-and-pepper noise pixel points in the horizontal direction in the window, and at the moment, the self-adaptive weight of the gray value of the center pixel point of the window in the horizontal direction
Figure 835407DEST_PATH_IMAGE017
Comprises the following steps:
Figure 184480DEST_PATH_IMAGE014
wherein C is the gray value of the central pixel point of the window,
Figure 76211DEST_PATH_IMAGE015
indicating that the window is laterally second from left to right
Figure 700090DEST_PATH_IMAGE016
The gray value of each non-central non-break-point non-salt-and-pepper-noise pixel point is smaller when the difference between the gray value of each non-central non-break-point non-salt-and-pepper-noise pixel point and the gray value of the central pixel point is smaller, and the weight is smaller
Figure 844764DEST_PATH_IMAGE017
The larger, then the weights for h calculated
Figure 895896DEST_PATH_IMAGE017
Performing normalization processing to obtain a set
Figure 257345DEST_PATH_IMAGE039
Therefore, the horizontal self-adaptive weighted denoising mean value of the gray value of the central pixel point of the window
Figure 419336DEST_PATH_IMAGE036
Comprises the following steps:
Figure 684096DEST_PATH_IMAGE040
wherein h is the number of horizontal non-center non-break-point non-salt-and-pepper noise pixel points in the window,
Figure 906129DEST_PATH_IMAGE015
indicating that the window is horizontally from left to right
Figure 489295DEST_PATH_IMAGE016
The gray values of the non-center non-break-point non-salt-and-pepper noise pixel points,
Figure 454977DEST_PATH_IMAGE039
the window is shown to be horizontally from left to right
Figure 574243DEST_PATH_IMAGE016
And (4) normalizing the non-center non-break-point non-salt-and-pepper noise pixel points to obtain the weight. The periodic texture is damaged due to the influence of the noise points and the defect points in the window, so that the noise points in the window are removed, the weighting is carried out according to the gray level difference of the pixels, and the gray level difference between the defect pixel points and the normal pixel points is increased while the edge of the image is protected.
Self-adaptive weighted denoising method for obtaining gray value of window center pixel point in longitudinal direction in same mannerMean value
Figure 967178DEST_PATH_IMAGE041
. Self-adaptive weighted denoising mean value for gray value of central pixel point in event window
Figure 303219DEST_PATH_IMAGE042
And (6) replacing.
And in the same way, replacing the gray values of all the pixel points on the surface image of the textile to finish the denoising treatment of the surface image of the textile.
Step three: and segmenting the defect region of the denoised gray image, calculating the area of the defect region, and obtaining the qualification rate of the surface quality of the textile based on the defect area.
Acquiring a textile surface image which is denoised and protects the image edge and texture characteristics according to the second step, then segmenting defect regions in the image by using a maximum entropy method, and counting the areas of the textile surface and the defect regions according to the number of pixel points, wherein the area is respectively
Figure 807013DEST_PATH_IMAGE019
And
Figure 780785DEST_PATH_IMAGE020
therefore, the pass probability P is:
Figure 344622DEST_PATH_IMAGE043
and when the P is more than 99%, judging that the surface quality of the textile is qualified.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (7)

1. A textile surface defect detection method based on machine vision is characterized by comprising the following steps:
the method comprises the following steps: acquiring a textile image on a production line, performing semantic segmentation to identify the textile surface image, and performing graying processing to obtain a grayscale image;
step two: setting the shape and size of a sliding window, and carrying out adaptive weighted mean denoising on the gray level image based on the shape and size of the sliding window to obtain a denoised gray level image;
step three: segmenting the defect region of the denoised gray level image, calculating the defect area of the defect region, and obtaining the qualification rate of the surface quality of the textile based on the defect area;
wherein, the process of setting the size of the sliding window is as follows:
firstly, taking a flawless textile surface image as a template image, randomly selecting a plurality of pixel points on the upper edge of the template image, traversing the pixel points vertically downwards by taking each pixel point as a starting point, and obtaining line segments of the corresponding pixel points;
establishing a plane coordinate system, wherein the ordinate represents the gray value of the pixel points, the abscissa represents the number of the traversed pixel points, and the step length is a single pixel point; taking a line segment as an example, performing smooth curve fitting on points on a plane coordinate system, representing the period of the curve by the distance between two adjacent wave troughs, and obtaining a period set
Figure 712659DEST_PATH_IMAGE001
Wherein n is the number of troughs on the curve; computing a set of cycles
Figure 990056DEST_PATH_IMAGE002
Has a mean value of
Figure 843350DEST_PATH_IMAGE003
The standard length of the meridian single bulge on the line segment in the image is represented; to this endObtaining the standard lengths of all the line segments, and further obtaining a standard length mean value; then randomly selecting a plurality of pixel points on the left edge of the template image so as to obtain a standard diameter length mean value;
and (4) rounding the standard length average value and the standard diameter length average value downwards, if the rounded numerical value is an even number, subtracting 1 from the even number to obtain the transverse length and the longitudinal length of the window which are respectively B and A, wherein the shape of the window is a cross.
2. A machine-vision based textile surface defect detection method as in claim 1, wherein adaptive weighted mean denoising is performed by:
traversing the gray level image line by line with a window, and calculating the gray level difference value of two adjacent pixel points from left to right to obtain a difference value sequence; acquiring a break point in the difference sequence;
if the number of the break points is less than or equal to a set value, pixel points corresponding to the break points are not noise points, the distance between two adjacent pixel points is made to be 1, the pixel points are traversed leftwards and rightwards by taking a central pixel point of a window as a starting point, the distance between each pixel point and the central pixel point is counted, and a first distance set of each non-central non-salt-and-pepper noise pixel point from the central pixel point to the central pixel point from the left to the right in the window in the transverse direction is obtained
Figure 762764DEST_PATH_IMAGE004
(ii) a And counting a first gray value set corresponding to the elements in the first distance set
Figure 519367DEST_PATH_IMAGE005
(ii) a Obtaining a first self-adaptive weight according to the first distance set and the first gray value set; obtaining a first self-adaptive weighted denoising mean value based on the first self-adaptive weight;
if the break point is larger than the set value, the break point is a noise point or a defect point, and a pixel point corresponding to the break point is removed; and counting a second gray value set of each non-central non-break-point non-salt-and-pepper-noise pixel point from left to right in the transverse direction in the window, so as to obtain a second self-adaptive weight of the gray value of the central pixel point in the window in the transverse direction, and obtaining a second self-adaptive weighted denoising mean value based on the second self-adaptive weight.
3. A machine-vision based textile surface defect detection method as claimed in claim 2, wherein said first adaptive weight is:
Figure 29983DEST_PATH_IMAGE006
wherein the content of the first and second substances,
Figure 639081DEST_PATH_IMAGE007
Figure 96607DEST_PATH_IMAGE008
Figure 442138DEST_PATH_IMAGE009
representing the number of non-center non-salt-and-pepper noise pixel points in the horizontal direction in the window,
Figure 592497DEST_PATH_IMAGE010
the distance weight is represented by a distance weight,
Figure 593951DEST_PATH_IMAGE004
representing the distance from the g-th noncentral non-salt-pepper noise pixel point from left to right to the center pixel point in the window, when the distance is closer, the weight is closer
Figure 88124DEST_PATH_IMAGE010
The larger the size of the tube is,
Figure 553740DEST_PATH_IMAGE011
representing the pixel gray level difference weight, C representing the gray level of the center pixel,
Figure 671738DEST_PATH_IMAGE005
the gray value of the g-th non-center non-salt-and-pepper noise pixel point from left to right in the transverse direction of the window is represented, so that when the distance between the non-center non-salt-and-pepper noise pixel point and the center pixel point in the transverse direction of the window is shorter and the gray value difference is smaller,
Figure 222805DEST_PATH_IMAGE010
and
Figure 523598DEST_PATH_IMAGE011
the larger the value, the adaptive weight value
Figure 781404DEST_PATH_IMAGE012
The larger.
4. A machine-vision based textile surface defect detection method according to claim 3, characterized in that said first adaptive mean value is:
Figure 742407DEST_PATH_IMAGE013
wherein
Figure 46350DEST_PATH_IMAGE009
Representing the number of non-center non-salt-and-pepper noise pixel points in the horizontal direction in the window,
Figure 446107DEST_PATH_IMAGE005
representing the gray value of the non-center non-salt-pepper noise pixel point from left to right in the transverse direction of the window,
Figure 384851DEST_PATH_IMAGE012
and representing the weight of the non-center non-salt-and-pepper noise pixel point from left to right in the transverse direction of the window.
5. A machine-vision based textile surface defect detection method according to claim 2, characterized in that said first adaptive mean value is:
Figure 782334DEST_PATH_IMAGE014
wherein C is the gray value of the central pixel point of the window,
Figure 980097DEST_PATH_IMAGE015
indicating that the window is horizontally from left to right
Figure 386808DEST_PATH_IMAGE016
The gray value of each non-central non-break-point non-salt-and-pepper-noise pixel point is smaller when the difference between the gray value of each non-central non-break-point non-salt-and-pepper-noise pixel point and the gray value of the central pixel point is smaller, and the weight is smaller
Figure 681523DEST_PATH_IMAGE017
The larger.
6. A machine-vision based textile surface defect detection method as claimed in claim 1 wherein the segmentation of defect regions from de-noised gray scale images employs a maximum entropy method to segment defect regions in gray scale images.
7. A machine-vision based textile surface defect detection method as claimed in claim 1 wherein said yield is
Figure 548110DEST_PATH_IMAGE018
Wherein the content of the first and second substances,
Figure 826644DEST_PATH_IMAGE019
and
Figure 771467DEST_PATH_IMAGE020
the area of the gray image and the defect area of the defect area are respectively.
CN202211169528.2A 2022-09-26 2022-09-26 Textile surface defect detection method based on machine vision Pending CN115294097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211169528.2A CN115294097A (en) 2022-09-26 2022-09-26 Textile surface defect detection method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211169528.2A CN115294097A (en) 2022-09-26 2022-09-26 Textile surface defect detection method based on machine vision

Publications (1)

Publication Number Publication Date
CN115294097A true CN115294097A (en) 2022-11-04

Family

ID=83834827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211169528.2A Pending CN115294097A (en) 2022-09-26 2022-09-26 Textile surface defect detection method based on machine vision

Country Status (1)

Country Link
CN (1) CN115294097A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439481A (en) * 2022-11-09 2022-12-06 青岛平电锅炉辅机有限公司 Deaerator welding quality detection method based on image processing
CN116740065B (en) * 2023-08-14 2023-11-21 山东伟国板业科技有限公司 Quick tracing method and system for defective products of artificial board based on big data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663706A (en) * 2012-04-23 2012-09-12 河北师范大学 Adaptive weighted mean value filtering method based on diamond template
CN107123114A (en) * 2017-04-21 2017-09-01 佛山市南海区广工大数控装备协同创新研究院 A kind of cloth defect inspection method and device based on machine learning
CN108414525A (en) * 2018-01-30 2018-08-17 广东溢达纺织有限公司 Fabric defect detection method, device, computer equipment and storage medium
US20210343002A1 (en) * 2020-07-28 2021-11-04 Jiangnan University Online Detection Method of Circular Weft Knitting Stripe Defects Based on Gray Gradient Method
CN113643201A (en) * 2021-07-27 2021-11-12 西安理工大学 Image denoising method of self-adaptive non-local mean value
CN115082460A (en) * 2022-08-18 2022-09-20 聊城市恒丰电子有限公司 Weaving production line quality monitoring method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663706A (en) * 2012-04-23 2012-09-12 河北师范大学 Adaptive weighted mean value filtering method based on diamond template
CN107123114A (en) * 2017-04-21 2017-09-01 佛山市南海区广工大数控装备协同创新研究院 A kind of cloth defect inspection method and device based on machine learning
CN108414525A (en) * 2018-01-30 2018-08-17 广东溢达纺织有限公司 Fabric defect detection method, device, computer equipment and storage medium
US20210343002A1 (en) * 2020-07-28 2021-11-04 Jiangnan University Online Detection Method of Circular Weft Knitting Stripe Defects Based on Gray Gradient Method
CN113643201A (en) * 2021-07-27 2021-11-12 西安理工大学 Image denoising method of self-adaptive non-local mean value
CN115082460A (en) * 2022-08-18 2022-09-20 聊城市恒丰电子有限公司 Weaving production line quality monitoring method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杨曼等: "基于改进迭代匹配滤波的织物疵点检测", 《西安工程大学学报》 *
杨曼等: "基于改进迭代匹配滤波的织物疵点检测", 《西安工程大学学报》, vol. 31, no. 03, 29 June 2017 (2017-06-29), pages 383 - 389 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439481A (en) * 2022-11-09 2022-12-06 青岛平电锅炉辅机有限公司 Deaerator welding quality detection method based on image processing
CN115439481B (en) * 2022-11-09 2023-02-21 青岛平电锅炉辅机有限公司 Deaerator welding quality detection method based on image processing
CN116740065B (en) * 2023-08-14 2023-11-21 山东伟国板业科技有限公司 Quick tracing method and system for defective products of artificial board based on big data

Similar Documents

Publication Publication Date Title
CN110349126B (en) Convolutional neural network-based marked steel plate surface defect detection method
CN114219805B (en) Intelligent detection method for glass defects
CN115351598A (en) Numerical control machine tool bearing detection method
CN115131348B (en) Method and system for detecting textile surface defects
CN115100221B (en) Glass defect segmentation method
CN109816644A (en) A kind of bearing defect automatic checkout system based on multi-angle light source image
CN115311303B (en) Textile warp and weft defect detection method
CN115311265B (en) Loom intelligence control system based on weaving quality
CN115359237B (en) Gear broken tooth identification method based on pattern identification
CN116977358A (en) Visual auxiliary detection method for corrugated paper production quality
CN115100206B (en) Printing defect identification method for textile with periodic pattern
CN113888536B (en) Printed matter double image detection method and system based on computer vision
CN115272347A (en) Bearing defect identification method
CN116523899A (en) Textile flaw detection method and system based on machine vision
CN109540917A (en) A kind of multi-angle mode yarn under working external appearance characteristic parameter extraction and analysis method
CN115294097A (en) Textile surface defect detection method based on machine vision
CN110458809B (en) Yarn evenness detection method based on sub-pixel edge detection
CN115266732A (en) Carbon fiber tow defect detection method based on machine vision
CN113936001B (en) Textile surface flaw detection method based on image processing technology
Jing et al. Automatic recognition of weave pattern and repeat for yarn-dyed fabric based on KFCM and IDMF
CN114565607A (en) Fabric defect image segmentation method based on neural network
CN111402225B (en) Cloth folding false-detection defect discriminating method
CN117011291A (en) Watch shell quality visual detection method
CN114913180A (en) Intelligent detection method for defect of cotton cloth reed mark
CN109211937B (en) Detection system and detection method for bending defect of elastic braid of underwear

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination