CN110473190B - Adaptive fabric defect detection method based on scale - Google Patents

Adaptive fabric defect detection method based on scale

Info

Publication number
CN110473190B
CN110473190B CN201910733837.XA CN201910733837A
Authority
CN
China
Prior art keywords
pixel
area
similar
image
neighborhood
Prior art date
Legal status
Active
Application number
CN201910733837.XA
Other languages
Chinese (zh)
Other versions
CN110473190A (en)
Inventor
李岳阳
杜帅
罗海驰
樊启高
朱一昕
Current Assignee
Jiangnan University
Original Assignee
Jiangnan University
Priority date
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201910733837.XA priority Critical patent/CN110473190B/en
Publication of CN110473190A publication Critical patent/CN110473190A/en
Application granted granted Critical
Publication of CN110473190B publication Critical patent/CN110473190B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 21/89 Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles
    • G01N 21/8914 Investigating the presence of flaws or contamination in moving material, characterised by the material examined
    • G01N 2021/8854 Grading and classifying of flaws
    • G01N 2021/8877 Proximity analysis, local statistics
    • G01N 2021/8887 Scan or image signal processing based on image processing techniques
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30124 Fabrics; Textile; Paper

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biochemistry (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Textile Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a scale-based adaptive fabric defect detection method belonging to the technical field of textile product detection. Existing defect detection methods for small target defects preset similar regions of a fixed, manually chosen size; assigning same-sized similar regions to pixels in different areas of the image (such as the background area, the defect area, and the transition edge between defect and background) makes defect detection inaccurate. The disclosed method instead uses the idea of scale to adaptively determine the size of the similar region for each pixel: the size is determined by jointly considering the region in which the pixel lies and its degree of similarity with the surrounding pixels, combined with a measure of region similarity. As a result, small fabric defects are detected more accurately.

Description

Adaptive fabric defect detection method based on scale
Technical Field
The invention relates to a scale-based adaptive fabric defect detection method, and belongs to the technical field of textile product detection.
Background
During fabric production, the appearance of fabric defects is inevitable. Traditional manual inspection suffers from subjective results, a high miss rate and high labor cost, and has increasingly become a bottleneck for enterprise development. With the development of computer technology, automatic fabric defect detection based on image processing algorithms offers high stability, saves labor cost and improves production efficiency, and is gradually being developed and applied.
In general, minor defects have little influence on the quality of the finished fabric. In the actual production process, however, once a minor defect appears, if it is not discovered and handled in time it will keep enlarging and degrade the fabric quality. Smaller defects therefore need to be discovered and dealt with as early as possible.
Conventional fabric defect detection methods are effective for common defects but are not stable enough for smaller defects, for which the detection effect is poor. Effectively improving the detection accuracy for tiny targets while increasing the stability and robustness of the algorithm has therefore become an urgent problem.
Disclosure of Invention
The invention provides a scale-based adaptive fabric defect detection method, aiming at solving the problems of low detection accuracy, unstable detection and poor detection effect of the existing defect detection method for smaller defects.
A fabric defect detection method, the method comprising:
s1, preprocessing an image to be detected;
s2, for each pixel in the preprocessed image, determining a similar area according to the area of the pixel in the image and the similarity degree between the pixel and the surrounding pixels;
s3, taking the similar region obtained in S2 as a central region, finding a neighborhood which is the same as the central region in size and has the highest similarity with the central region around the central region, and calling the neighborhood as the most similar neighborhood;
and S4, obtaining the estimated background image of the image to be detected according to the most similar neighborhood, and further obtaining a defect detection result image.
Optionally, the S2 includes:
and, for each pixel in the preprocessed image, respectively obtaining the corresponding sphere scale according to the region where the pixel is located and its degree of similarity with the surrounding pixels, and taking the region covered by the sphere scale as the similar region of that pixel.
Optionally, the S2 includes:
the degree of similarity FO between each pixel in the image and its surrounding pixels is calculatedr(c):
FO_r(c) = (1 / |D_r(c) - D_{r-1}(c)|) · Σ_{d ∈ D_r(c) - D_{r-1}(c)} W_h(|f(c) - f(d)|)
wherein D_r(c) denotes the set of pixels within the circular region with pixel c as the center and r as the radius; |D_r(c) - D_{r-1}(c)| denotes the number of all pixels in the region D_r(c) - D_{r-1}(c); f(c) and f(d) denote the gray values of pixels c and d, respectively; W_h is a monotonically non-increasing function with value range [0,1] and W_h(0) = 1;
when the value of FO_r(c) is less than the threshold τ, the pixels in the region D_r(c) - D_{r-1}(c) are not in the similar region of the central pixel c; the sphere scale of the central pixel c is then r_s(c) = r - 1, and the corresponding similar region is the region with pixel c as the center and r - 1 as the radius.
Optionally, the S3 includes:
taking the similar region obtained in S2 as the central region, and determining the range of its protection area according to the size of the central region;
finding the neighborhood most similar to the central region around the protection area.
Optionally, the finding of the most similar neighborhood to the central region around the protection area includes:
assuming that n neighborhoods exist around the protection area;
let Drf_i(x_0, y_0) (i = 1, …, m) denote the gray value of each pixel in the central region, where m is the number of pixels in the central region; let Drf_i(x_k, y_k) denote the gray value of the pixel in the k-th surrounding neighborhood V_k corresponding to each pixel in the central region; the similarity between the central region and the k-th neighborhood V_k is denoted S_k, and S_k is calculated by the following formula:
[Equation image: formula for the similarity S_k]
S_k has a value range of [0,1]; the neighborhood whose S_k value is closest to 1 is determined as the most similar neighborhood V_s.
Optionally, the determining the range of the protection area according to the size of the central area includes:
assuming the central region V_0 has a radius of r_0(c), the protection area E_0 has a radius of r_p(c) = r_0(c) + P;
wherein P is the size of the protection area and P ≤ r_0(c).
Optionally, in S1, a homomorphic filtering algorithm is used to preprocess the image to be detected.
Optionally, W_h(x) has the following expression:
[Equation image: expression for W_h(x)]
wherein h is a constant coefficient and e is the base of the natural logarithm.
Optionally, the value range of the threshold τ is [0.8, 0.9].
The invention also provides application of the fabric defect detection method in the technical field of textile product detection.
The invention has the beneficial effects that:
the method solves the defects and defects of the existing defect detection method for detecting the small target defects by utilizing the scale thought to adaptively determine the size of the similar area corresponding to each pixel and combining the measurement method of the area similarity, namely, the similar area is adaptively determined according to the area of the pixel in the image and the similarity degree between the pixel and the surrounding pixels, and compared with the method for artificially determining a rectangular area with fixed size adaptive to the small targets in advance according to the specific conditions of the small defects with different sizes in the prior art, the method provided by the application comprehensively considers the area where the pixel is located and the similarity degree between the pixel and the surrounding pixels to adaptively determine the similar area, so that the method is more accurate in detecting the small defects compared with the prior art, the quality of the fabric is improved, and the secondary yield is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Figure 1 is a flow chart of scale-based adaptive fabric defect detection.
Fig. 2 is a diagram of pixels within a similar region at a scale radius of 1.
Fig. 3 is a diagram of pixels within a similar region at a scale radius of 2.
Fig. 4 is a functional diagram of a characteristic of a gray scale difference between pixels.
FIG. 5 is a similar area diagram centered on a diamond center point pixel.
Fig. 6 is a schematic diagram of finding the most similar neighborhood around the central region.
Fig. 7 is an image to be detected.
Fig. 8 is an image of the detection result of the detection of the minute defect by manually determining the size of the central area in advance.
Figure 9 is a defect detection result image of a minor defect detection performed using the method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The first embodiment is as follows:
the present embodiment provides a scale-based adaptive fabric defect detection method, referring to fig. 1, the method includes:
(1) the image is preprocessed by using a homomorphic filtering algorithm, and the homomorphic filtering algorithm achieves the purposes of reducing the influence of uneven illumination and enhancing the contrast and detail of the image by attenuating the illumination component and enhancing the reflection component.
The method specifically comprises the following steps:
considering the original image f (x, y) as an illumination function, expressed as the product of the illumination component i (x, y) and the reflection component r (x, y), the function of the original image f (x, y) is expressed as:
f(x,y)=i(x,y)r(x,y) (1)
the multiplication operation of the original image function is simplified into addition operation, namely, the logarithm operation is carried out on the original image function so as to carry out Fourier transformation in the following step:
ln f(x,y) = ln i(x,y) + ln r(x,y) (2)
in order to convert the image to the frequency domain, the function after the above logarithm operation needs to be Fourier transformed, where F(u,v), I(u,v) and R(u,v) are the Fourier transforms of ln f(x,y), ln i(x,y) and ln r(x,y), respectively:
F(u,v)=I(u,v)+R(u,v) (3)
selecting a proper filter function H(u,v) to process F(u,v), the influence of uneven illumination is reduced and the contrast and detail are enhanced by attenuating the low-frequency component, i.e. the illumination component i(x,y), and enhancing the high-frequency component, i.e. the reflection component r(x,y); the processing result is as follows:
S(u,v)=H(u,v)F(u,v)=H(u,v)I(u,v)+H(u,v)R(u,v) (4)
inverse Fourier transformation back to the spatial domain yields:
s(x,y) = F^(-1)(S(u,v)) (5)
and then taking the exponential gives the homomorphic filtering result:
f′(x,y)=exp(s(x,y)) (6)
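For reference, a minimal Python sketch of this preprocessing step is given below. The patent does not fix a particular filter H(u, v); the Gaussian high-frequency-emphasis filter and the parameters gamma_l, gamma_h and sigma used here are illustrative assumptions only.

import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=1.5, sigma=30.0):
    # Attenuate the illumination (low-frequency) component and boost the
    # reflection (high-frequency) component of a gray-scale image.
    f = img.astype(np.float64) + 1.0                 # avoid log(0)
    log_f = np.log(f)                                # ln f = ln i + ln r        (2)
    F = np.fft.fftshift(np.fft.fft2(log_f))          # to the frequency domain   (3)
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    U, V = np.meshgrid(u, v, indexing="ij")
    D2 = U ** 2 + V ** 2
    # High-frequency emphasis: about gamma_l at low frequencies, gamma_h at high ones.
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-D2 / (2.0 * sigma ** 2))) + gamma_l
    S = H * F                                        # S(u,v) = H(u,v) F(u,v)    (4)
    s = np.real(np.fft.ifft2(np.fft.ifftshift(S)))   # back to the spatial domain (5)
    out = np.exp(s) - 1.0                            # f'(x,y) = exp(s(x,y))     (6)
    out = (out - out.min()) / (out.max() - out.min() + 1e-12) * 255.0
    return out.astype(np.uint8)                      # rescaled to [0, 255]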
(2) the method for adaptively determining the similar area corresponding to each pixel in the preprocessed image based on the scale thought specifically comprises the following steps:
in a gray scale image containing defects, the method can be divided into the following steps: background area, defect area, transition edge area of defect and background. To determine whether an image pixel is a normal pixel (background region pixel) or a defective pixel, a region needs to be determined with the pixel as a center, and all pixels in the region are locally similar to the center pixel.
In existing small-target detection methods, the similar region of each pixel in the image is a rectangular area of fixed size, such as 3x3, 5x5 or 5x7; before detection, a fixed-size rectangle matching the small targets (defects) is determined manually according to their specific sizes. However, an image contains relatively flat areas, such as the background, as well as edge-detail areas, such as patterns or defects. That is, for pixels in different regions of the image, the sizes of the similar regions centered on them may not be the same. For a background pixel, more of the surrounding pixels are similar to it, so its similar region is larger; for an edge-detail or defect pixel, fewer surrounding pixels are similar to it, so its similar region should be smaller.
It is therefore not reasonable to specify a similar region of the same size for all pixels in an image. The present invention instead proposes, based on the idea of scale, a method for adaptively determining the size of the similar region: for each pixel in the image, the corresponding sphere scale is obtained from the region where the pixel is located and its degree of similarity with the surrounding pixels, and the region covered by that sphere scale is taken as the similar region of the pixel. Specifically:
assuming that the processed image can be defined as a pair (C, f), where C is a rectangular arrangement of pixels and f, called the field on C, is the image gray-scale intensity function with range [L, H], where L and H are the minimum and maximum pixel gray values; when the gray values are represented in 8-bit binary, L = 0 and H = 255.
For any pixel c ∈ C in the image, the set of pixels D_r(c) within the circle with c as the center and r as the radius can be expressed as:
Dr(c)={d∈C|‖c-d‖≤r} (7)
Figs. 2 and 3 show the pixels D_r(c) in the similar region when the radius is r = 1 and r = 2, respectively, with c as the center.
To calculate the sphere scale corresponding to each pixel in the image, let FO_r(c) be:
FO_r(c) = (1 / |D_r(c) - D_{r-1}(c)|) · Σ_{d ∈ D_r(c) - D_{r-1}(c)} W_h(|f(c) - f(d)|) (8)
wherein |D_r(c) - D_{r-1}(c)| denotes the number of all pixels in the region D_r(c) - D_{r-1}(c), and f(c) and f(d) denote the gray values of pixels c and d, respectively;
The function W_h should satisfy the following requirements:
(i) its value range is [0,1];
(ii) W_h(0) = 1;
(iii) it is monotonically non-increasing.
In fuzzy space, many functions meet these requirements; the invention selects a typical function
[Equation image: the chosen W_h function]
shown in Fig. 4. When pixel d and pixel c have similar gray values, i.e. the gray difference tends to 0, the function value tends to 1; when the gray values of d and c differ greatly, i.e. the gray difference tends to 255, the function value tends to 0.
The sphere scale corresponding to each pixel in the image can then be obtained from equation (8). For any pixel c, with c as the center of the circle, the radius is gradually increased starting from r = 1, and for each value of r the corresponding FO_r(c) value is obtained. When FO_r(c) falls below the threshold τ, the pixels in the region D_r(c) - D_{r-1}(c) are considered not to belong to the similar region of the central pixel c, and the sphere scale of the central pixel c is r_s(c) = r - 1.
The threshold τ is set to 0.85. Consider the 3×3 region defined with c as the central pixel, which has 8 neighboring pixels: if only one of them differs greatly from the central pixel, the neighbors in the region are still considered to be in the same region as c; if two or more differ greatly from c, the neighbors in the region are considered not to be in the same region as c. This gives τ = 7/8; since values of τ within [0.8, 0.9] have little influence on the experimental results, τ is taken as 0.85.
For each pixel c in the image, the calculation of its sphere scale r_s(c) can be described as follows:
[Algorithm image: procedure for computing r_s(c)]
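A Python sketch of this scale computation for a single pixel is given below. It assumes that W_h is a Gaussian of the gray-level difference and that FO_r(c) is the mean of W_h over the ring D_r(c) - D_{r-1}(c), consistent with the description above but not copied from the patent's own pseudocode; h, tau and r_max are illustrative parameters.

import numpy as np

def w_h(diff, h=50.0):
    # Weight of a gray-level difference: W_h(0) = 1, monotonically non-increasing,
    # range (0, 1]. The Gaussian form and h = 50 are assumptions.
    return np.exp(-(diff ** 2) / (2.0 * h ** 2))

def annulus_offsets(r):
    # Pixel offsets d with r-1 < ||d|| <= r, i.e. the ring D_r(c) - D_{r-1}(c).
    return [(dy, dx)
            for dy in range(-r, r + 1)
            for dx in range(-r, r + 1)
            if r - 1 < np.hypot(dy, dx) <= r]

def sphere_scale(img, y, x, tau=0.85, r_max=10):
    # Grow the radius from r = 1 and stop when FO_r(c) < tau; return r_s(c) = r - 1
    # (a value of 0 degenerates to the single pixel c itself).
    img = img.astype(np.float64)
    rows, cols = img.shape
    fc = img[y, x]
    for r in range(1, r_max + 1):
        vals = [w_h(abs(fc - img[y + dy, x + dx]))
                for dy, dx in annulus_offsets(r)
                if 0 <= y + dy < rows and 0 <= x + dx < cols]
        if not vals or np.mean(vals) < tau:    # FO_r(c): mean weight on the ring
            return r - 1
    return r_max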
(3) taking the similar area obtained in the step (2) as a central area, and finding a neighborhood which is the same as the central area in size and is most similar to the central area around the central area, wherein the neighborhood specifically comprises the following steps:
in order to accurately judge whether each pixel in the image is a defect pixel, a most similar neighborhood around each pixel is found, namely, a neighborhood which has the same size as and is most similar to the central region is found around the central region by taking the current pixel as the center and taking a region in a certain range around the pixel as the central region.
Meanwhile, in order to avoid interference from noise, a protection area is arranged around the central region. Once the most similar neighborhood is found, the gray value of the current pixel can be estimated from the pixels in that neighborhood.
According to step (2), the corresponding sphere scale r_s(c) is obtained for each pixel in the image, and the scale region of each pixel is taken as its central region. As shown in Fig. 5, if a pixel (x_0, y_0) has scale r_0(c) = 2, the region centered on this pixel consists of all the gray and black pixels in the figure.
In Fig. 6, pixel (x_0, y_0) has a scale of 2 and its central region is V_0, which contains 13 pixels in total. With the size of the protection area set to 1, the protection area E_0 has a radius of r_p(c) = r_0(c) + 1 = 3. Around the protection area E_0, n neighborhoods V_k (k = 1, …, n) of the same size as V_0 can be found, each centered at a pixel (x_k, y_k) and shown as a diamond of black pixels.
It should be noted that the size of the protection area normally does not exceed the size of the corresponding central region; thus, for the central region V_0 with scale 2 mentioned above, the protection area may be set to 1 or 2.
After the range of the protection area has been determined, the most similar neighborhood is searched for around it. Let Drf_i(x_0, y_0) (i = 1, …, m) denote the gray value of each pixel in the central region, where m is the number of pixels in the central region (m = 13 when the scale is 2), and let Drf_i(x_k, y_k) denote the gray value of the pixel in the k-th surrounding neighborhood V_k corresponding to each pixel of the central region. The similarity between the central region and the k-th neighborhood V_k is denoted S_k and is calculated by the following formula:
[Equation image: formula for the similarity S_k]
S_k has a value range of [0,1]; the neighborhood whose S_k value is closest to 1 is determined as the most similar neighborhood V_s.
Specifically, referring to Fig. 6, let Drf_i(x_0, y_0) (i = 1, …, 13) denote the gray value of each pixel in the central region and Drf_i(x_k, y_k) (i = 1, …, 13) the gray value of the corresponding pixel in the k-th surrounding neighborhood, and let the similarity between the central region and that neighborhood be S_k; then S_k can be calculated from the following formula:
[Equation (9) image: formula for the similarity S_k]
S_k has a value range of [0,1]. When S_k tends to 0, the gray values of the pixels in the central region differ greatly from those of the corresponding pixels in the k-th surrounding neighborhood, i.e. the two regions are only weakly similar; when S_k tends to 1, the gray value of each pixel in the central region differs very little from that of the corresponding pixel in the k-th neighborhood, i.e. the two regions are highly similar.
The similarity S_k is calculated according to formula (9) for each surrounding neighborhood, and the neighborhood with the maximum similarity to the central region is taken as the most similar neighborhood V_s:
V_s = argmax_{V_k} S_k, k = 1, …, n
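The following Python sketch illustrates this search. The exact formula for S_k appears only as an equation image in the original, so the mean of W_h over corresponding pixel differences is used here as a stand-in similarity with the same [0,1] range and behaviour; the way candidate neighbourhood centres are enumerated around the protection area is likewise an assumption.

import numpy as np

def w_h(diff, h=50.0):
    return np.exp(-(diff ** 2) / (2.0 * h ** 2))

def disk_offsets(r):
    # Offsets of the disk D_r: ||offset|| <= r (13 pixels for r = 2, as in Fig. 5).
    return [(dy, dx)
            for dy in range(-r, r + 1)
            for dx in range(-r, r + 1)
            if np.hypot(dy, dx) <= r]

def most_similar_neighbourhood(img, y0, x0, r0, p=1):
    # Find the neighbourhood V_s of the same size as the central region V_0
    # (radius r0, centred at (y0, x0)) outside the protection area of radius
    # r_p = r0 + p. Assumes (y0, x0) lies at least r0 pixels from the border.
    img = img.astype(np.float64)
    rows, cols = img.shape
    offs = disk_offsets(r0)
    centre = np.array([img[y0 + dy, x0 + dx] for dy, dx in offs])
    rp = r0 + p
    best_centre, best_s = None, -1.0
    for dy in range(-(rp + r0), rp + r0 + 1):
        for dx in range(-(rp + r0), rp + r0 + 1):
            if np.hypot(dy, dx) <= rp:
                continue                              # still inside E_0: skip
            yk, xk = y0 + dy, x0 + dx
            if not (r0 <= yk < rows - r0 and r0 <= xk < cols - r0):
                continue                              # V_k would leave the image
            neigh = np.array([img[yk + oy, xk + ox] for oy, ox in offs])
            s_k = float(np.mean(w_h(np.abs(centre - neigh))))   # stand-in for S_k
            if s_k > best_s:
                best_centre, best_s = (yk, xk), s_k
    return best_centre, best_s                        # centre of V_s and its score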
If the current pixel is a defect pixel, the central region determined from it occupies only a small area of the image, because a defect is a small target; even the most similar neighborhood found among the surrounding neighborhoods is then not very similar to the central region. The gray value of the current pixel estimated from all pixels of the most similar neighborhood therefore differs greatly from its gray value in the original image, and the pixel is judged to be a defect pixel by the method in the subsequent steps.
If the current pixel is a non-defect pixel (a background pixel), a neighborhood highly similar to its central region can be found among the surrounding neighborhoods; the gray value estimated from all pixels of the most similar neighborhood then differs very little from the pixel's gray value in the original image, and the pixel is judged to be non-defective by the method in the subsequent steps.
(4) Obtaining an initial defect distinguishing image according to a background difference principle, which specifically comprises the following steps:
From step (3), the neighborhood V_s most similar to the central region can be found. The gray-scale-gradient-weighted values of all pixels in the most similar neighborhood can then be used to estimate the gray value of the central pixel of the central region, calculated as follows:
[Equation (11) image: estimated gray value of the central pixel]
wherein f(x_0, y_0) and f_B(x_0, y_0) denote the original and the estimated gray value of the central pixel of the central region, respectively; Drf_i(x_s, y_s) (i = 1, …, 13) denotes the gray value of each pixel in the most similar neighborhood V_s; and W denotes a weight, calculated by the following formula:
[Equation image: the weight W]
For each pixel in the original image, the estimated gray value can be obtained by equation (11), which yields the estimated background image.
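As a rough illustration of this estimation step, the sketch below simply averages the pixels of the most similar neighbourhood V_s; the patent's gradient-based weights W (given only as equation images above) are replaced by uniform placeholder weights, so this is an assumption rather than the patented formula.

import numpy as np

def estimate_background_value(img, ys, xs, offs):
    # Estimate the gray value of the current pixel from the pixels of its most
    # similar neighbourhood V_s, located by its centre (ys, xs) and the offsets
    # 'offs' of the central region (see disk_offsets above).
    img = img.astype(np.float64)
    vals = np.array([img[ys + dy, xs + dx] for dy, dx in offs])
    weights = np.full(vals.shape, 1.0 / vals.size)    # placeholder for the weight W
    return float(np.sum(weights * vals))              # estimated background gray value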
As the construction of the most similar neighborhood in step (3) shows, if the current pixel is a background or edge pixel, a neighborhood highly similar to the similar region centered on it can be found nearby, so the estimated gray value of the current pixel differs very little from its original gray value. If the current pixel is a defect pixel, even the most similar neighborhood found around the similar region centered on it has a relatively low degree of similarity to the central region, so the estimated gray value of the pixel may differ greatly from its original gray value. According to this difference principle, an initial defect discrimination image can be obtained from the difference between the original image and the estimated background image.
The background difference principle is to subtract the estimated background image from the original image, and is given by the following formula:
f_T(x,y) = f(x,y) - f_B(x,y)
where f(x, y) represents the original image, f_B(x, y) the estimated background image, and f_T(x, y) the difference image, i.e. the initial defect discrimination image.
(5) The method comprises the following steps of performing binarization processing on an initial defect judgment image to obtain a final defect detection result image, wherein the binarization processing specifically comprises the following steps:
the initial defect discrimination image is binarized: each pixel of the image is classified by threshold segmentation, separating residual noise points from target points. The threshold segmentation can be expressed as:
B(x,y) = 1, if f_T(x,y) > T;  B(x,y) = 0, otherwise
where B(x, y) is the image after binarization, i.e. the final defect detection result image, in which white pixels (with the value 1) represent defects; T is a threshold whose value can be adjusted according to the threshold segmentation effect.
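A short sketch of the background difference and thresholding steps is given below, assuming the estimated background image f_B has already been computed pixel by pixel; taking the absolute difference and the default value of T are illustrative choices.

import numpy as np

def defect_map(original, background, T=30.0):
    # f_T(x,y) = f(x,y) - f_B(x,y); pixels whose absolute difference exceeds the
    # threshold T are marked 1 (defect), all others 0.
    f_t = original.astype(np.float64) - background.astype(np.float64)
    return (np.abs(f_t) > T).astype(np.uint8)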
The simulation experiment results are shown in fig. 7, 8 and 9.
Fig. 7 is an image of an original fabric with defects to be detected, fig. 8 is a detection result obtained by a method for manually determining the size of a central area in advance according to the prior art, and fig. 9 is a final defect detection result image according to the present invention. Some steps in the embodiments of the present invention may be implemented by software, and the corresponding software program may be stored in a readable storage medium, such as an optical disc or a hard disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A method of fabric defect detection, said method comprising:
s1, preprocessing an image to be detected;
s2, for each pixel in the preprocessed image, determining a similar area according to the area of the pixel in the image and the similarity degree between the pixel and the surrounding pixels;
s3, taking the similar region obtained in S2 as a central region, finding a neighborhood which is the same as the central region in size and has the highest similarity with the central region around the central region, and calling the neighborhood as the most similar neighborhood;
s4, obtaining an evaluated background image of the image to be detected according to the most similar neighborhood, and further obtaining a defect detection result image; the S2 includes:
respectively obtaining, for each pixel in the preprocessed image, the corresponding sphere scale according to the region where the pixel is located and its degree of similarity with the surrounding pixels, and taking the region covered by the sphere scale as the similar region of that pixel; the degree of similarity FO_r(c) between each pixel in the image and its surrounding pixels is calculated:
FO_r(c) = (1 / |D_r(c) - D_{r-1}(c)|) · Σ_{d ∈ D_r(c) - D_{r-1}(c)} W_h(|f(c) - f(d)|)
wherein D_r(c) denotes the set of pixels within the circular region with pixel c as the center and r as the radius; |D_r(c) - D_{r-1}(c)| denotes the number of all pixels in the region D_r(c) - D_{r-1}(c); f(c) and f(d) denote the gray values of pixels c and d, respectively; W_h is a monotonically non-increasing function with value range [0,1] and W_h(0) = 1;
when the value of FO_r(c) is less than the threshold τ, the pixels in the region D_r(c) - D_{r-1}(c) are not in the similar region of the central pixel c; the sphere scale of the central pixel c is then r_s(c) = r - 1, and the corresponding similar region is the region with pixel c as the center and r - 1 as the radius.
2. The method according to claim 1, wherein the S3 includes:
taking the similar region obtained in step S2 as the central region, and determining the range of its protection area according to the size of the central region;
finding the neighborhood most similar to the central region around the protection area.
3. The method of claim 2, wherein finding the most similar neighborhood around the protection region comprises:
assuming that n neighborhoods exist around the protection area;
let Drf_i(x_0, y_0) (i = 1, …, m) denote the gray value of each pixel in the central region, where m is the number of pixels in the central region; let Drf_i(x_k, y_k) denote the gray value of the pixel in the k-th surrounding neighborhood V_k corresponding to each pixel in the central region; the similarity between the central region and the k-th neighborhood V_k is denoted S_k, and S_k is calculated by the following formula:
[Equation image: formula for the similarity S_k]
S_k has a value range of [0,1]; the neighborhood whose S_k value is closest to 1 is determined as the most similar neighborhood V_s.
4. The method of claim 2, wherein determining the extent of the protection area of the central area according to the size of the central area comprises:
assuming the central region V_0 has a radius of r_0(c), the protection area E_0 has a radius of r_p(c) = r_0(c) + P;
wherein P is the size of the protection area and P ≤ r_0(c).
5. The method according to any one of claims 1 to 4, wherein the step S1 is implemented by using a homomorphic filtering algorithm to preprocess the image to be detected.
6. The method of claim 1, wherein W_h(x) has the following expression:
[Equation image: expression for W_h(x)]
wherein h is a constant coefficient and e is the base of the natural logarithm.
7. The method of claim 1, wherein the threshold τ is in the range [0.8, 0.9].
CN201910733837.XA 2019-08-09 2019-08-09 Adaptive fabric defect detection method based on scale Active CN110473190B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910733837.XA CN110473190B (en) 2019-08-09 2019-08-09 Adaptive fabric defect detection method based on scale

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910733837.XA CN110473190B (en) 2019-08-09 2019-08-09 Adaptive fabric defect detection method based on scale

Publications (2)

Publication Number Publication Date
CN110473190A CN110473190A (en) 2019-11-19
CN110473190B (en) 2022-03-04

Family

ID=68511339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910733837.XA Active CN110473190B (en) 2019-08-09 2019-08-09 Adaptive fabric defect detection method based on scale

Country Status (1)

Country Link
CN (1) CN110473190B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082508B (en) * 2022-08-18 2022-11-22 山东省蓝睿科技开发有限公司 Ocean buoy production quality detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105261003A (en) * 2015-09-10 2016-01-20 西安工程大学 Defect point detection method on basis of self structure of fabric
CN109035195A (en) * 2018-05-08 2018-12-18 武汉纺织大学 A kind of fabric defect detection method
CN109961437A (en) * 2019-04-04 2019-07-02 江南大学 A kind of conspicuousness fabric defect detection method under the mode based on machine teaching

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105261003A (en) * 2015-09-10 2016-01-20 西安工程大学 Defect point detection method on basis of self structure of fabric
CN109035195A (en) * 2018-05-08 2018-12-18 武汉纺织大学 A kind of fabric defect detection method
CN109961437A (en) * 2019-04-04 2019-07-02 江南大学 A kind of conspicuousness fabric defect detection method under the mode based on machine teaching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于改进局部自适应对比法的织物疵点检测";杜帅等;《纺织学报》;20190228;第40卷(第2期);论文第2-3节 *
杜帅等."基于改进局部自适应对比法的织物疵点检测".《纺织学报》.2019,第40卷(第2期), *

Also Published As

Publication number Publication date
CN110473190A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN115375676A (en) Stainless steel product quality detection method based on image recognition
CN113963042B (en) Metal part defect degree evaluation method based on image processing
CN110443807A (en) A kind of even carrying out image threshold segmentation method of uneven illumination based on luminance proportion
CN111161222B (en) Printing roller defect detection method based on visual saliency
CN115249246B (en) Optical glass surface defect detection method
CN114219805B (en) Intelligent detection method for glass defects
TWI393073B (en) Image denoising method
CN102156996A (en) Image edge detection method
CN110648330B (en) Defect detection method for camera glass
CN116758083A (en) Quick detection method for metal wash basin defects based on computer vision
CN111598801B (en) Identification method for weak Mura defect
CN116883408B (en) Integrating instrument shell defect detection method based on artificial intelligence
WO2021118463A1 (en) Defect detection in image space
CN109377450A (en) A kind of edge-protected denoising method
CN116777916B (en) Defect detection method based on metal shell of pump machine
CN116486091B (en) Fan blade defect area rapid segmentation method and system based on artificial intelligence
CN115131356B (en) Steel plate defect classification method based on richness
CN115619775B (en) Material counting method and device based on image recognition
CN115359044A (en) Metal part surface scratch detection method based on image enhancement
CN116245880A (en) Electric vehicle charging pile fire risk detection method based on infrared identification
CN110473190B (en) Adaptive fabric defect detection method based on scale
CN116152242A (en) Visual detection system of natural leather defect for basketball
CN114359083B (en) High-dynamic thermal infrared image self-adaptive preprocessing method for interference environment
CN116363097A (en) Defect detection method and system for photovoltaic panel
CN114998186B (en) Method and system for detecting surface scab defect of copper starting sheet based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant