CN113838040A - Detection method for defect area of color texture fabric - Google Patents
- Publication number
- CN113838040A (application number CN202111153679.4A)
- Authority
- CN
- China
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Classifications
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T7/0004 — Industrial image inspection
- G06N3/045 — Combinations of networks
- G06N3/088 — Non-supervised learning, e.g. competitive learning
- G06T5/70 — Denoising; Smoothing
- G06T7/41 — Analysis of texture based on statistical description of texture
- G06T2207/10024 — Color image
- G06T2207/20081 — Training; Learning
- G06T2207/30108 — Industrial image inspection
- G06T2207/30124 — Fabrics; Textile; Paper
Abstract
The invention discloses a method for detecting defect areas of color texture fabric, comprising the following steps: constructing a DenoisingGAN model consisting of a generator G and a discriminator D; superimposing Gaussian noise on defect-free color texture fabric images and feeding them into the constructed DenoisingGAN model, where the generator G performs feature extraction and recovery on the input image while the discriminator D continuously adjusts the gradient and feeds it back to guide the generator's training, the trained DenoisingGAN model being obtained once the set number of iterations is reached; and inputting a color texture fabric image to be detected into the trained DenoisingGAN model to output a corresponding reconstructed image, from which the defect region is determined by detection. The method can effectively reconstruct and repair color texture fabric images, and can therefore detect defects of color texture fabric quickly and accurately.
Description
Technical Field
The invention belongs to the technical field of textile appearance inspection, and relates to a method for detecting defect areas of color texture fabric.
Background
As a category of textile, color texture fabric is increasingly popular with customers because of its attractive patterns. During fabric production, however, various defects may appear on the fabric surface owing to mechanical problems, yarn problems, processing problems, and the like. Defects greatly reduce the price of the fabric, so defect detection on the fabric surface is an important link in textile quality control. At present, most textile enterprises rely on visual inspection by experienced workers, but such inspection is limited by subjective factors and visual fatigue, and manual detection is slow, inefficient, and unstable in accuracy. An automatic detection method is therefore urgently needed to replace low-precision, low-efficiency manual visual inspection.
In recent years, automatic fabric defect detection based on machine vision has become a popular research area. Traditional detection algorithms rely on hand-crafted feature engineering to extract defect features, and can be divided into methods based on statistics, spectral features, structure, models, dictionary learning, and hybrids of these. For plain fabrics with simple background texture, traditional methods can achieve fairly good results through structural feature engineering; for color-patterned texture fabrics with complex patterns, however, manually designing defect features becomes increasingly difficult. Deep learning methods can automatically learn deep image features, and supervised deep learning has strong image recognition capability, which can overcome the difficulty of hand-designing features for color texture fabric. But defect samples of color texture fabric are difficult to obtain in large quantities in actual production, and annotating large-scale defect samples is very costly. Supervised deep learning is therefore difficult to apply to color texture fabrics produced in small batches with rapidly changing patterns. Unsupervised deep learning, which requires no labeled sample information, has gradually become a focus of researchers' attention. Some researchers have explored fabric defect detection algorithms based on unsupervised learning; such methods need no large set of labeled defect samples in the training phase, requiring only easily obtained defect-free samples as input.
Zhang et al. proposed a color texture fabric defect detection algorithm based on an unsupervised denoising convolutional autoencoder (DCAE), which detects and locates defects by processing the residual between the image under test and its reconstruction, but the method is only suitable for fabrics with relatively simple background texture. Mei et al. proposed a multi-scale convolutional denoising autoencoder model (MSDCAE) that combines the image-pyramid idea with a convolutional denoising autoencoder to detect fabric image defects, but for color texture fabrics with complex textures it is prone to over-detection. Zhang et al. combined the classical U-Net with a traditional autoencoder to propose a U-shaped convolutional denoising autoencoder model (UDCAE), but the model compresses, encodes, and decodes the fabric image at only a single scale and thus inevitably loses information. Hu et al. proposed an unsupervised fabric defect detection model based on a deep convolutional generative adversarial network (DCGAN), which detects defect regions by constructing a reconstruction of the image under test and then performing residual analysis on the reconstruction together with a likelihood map of the original image.
Disclosure of Invention
The invention aims to provide a method for detecting defect areas of color texture fabric that can effectively reconstruct and repair color texture fabric images, and thus detect defects of color texture fabric quickly and accurately.
The technical scheme adopted by the invention is a method for detecting defect areas of color texture fabric, implemented according to the following steps:
Step 1, constructing an unsupervised-learning-based DenoisingGAN model consisting of a generator G and a discriminator D;
Step 2, DenoisingGAN model training
Superimpose Gaussian noise on defect-free color texture fabric images and feed the noise-superimposed images into the DenoisingGAN model constructed in step 1. The generator G performs feature extraction and recovery on the input image through encoding and decoding operations, while the discriminator D continuously adjusts the gradient and feeds it back to guide the training of G. When the number of training rounds reaches the set number of iterations, the trained DenoisingGAN model is obtained;
Step 3, input the color texture fabric image to be detected into the trained DenoisingGAN model to output a corresponding reconstructed image, and detect it to determine the defect region.
The present invention is also characterized in that,
In step 1, the input and output layers of the generator G both have a three-channel image structure. The encoder of G comprises, connected in sequence, a convolutional layer with kernel size 7×7 and stride 1 followed by four convolutional layers with kernel size 3×3 and stride 2. Nine deep residual network (ResNet) blocks are inserted between the encoder and decoder of G. The decoder of G comprises, connected in sequence, four deconvolution layers with kernel size 3×3 followed by a convolutional layer with kernel size 7×7 and stride 1.
The discriminator D comprises, connected in sequence, five convolutional layers with kernel size 4×4 and stride 2 that compress the input image to 16×16 while the number of channels grows gradually from 3 to 512; one convolutional layer with kernel size 4×4 and stride 1 that changes the number of feature channels from 512 to 1; a Flatten layer that flattens the 16×16 feature map into a 512-dimensional one-dimensional vector; and two fully connected layers that output the final discrimination result.
In step 2, the Gaussian noise is superimposed on the defect-free color texture fabric image according to formula (1):

X̃ = X + c·N(0,1) (1)

where X is a defect-free image, N(0,1) is Gaussian noise following a standard normal distribution with mean 0 and standard deviation 1, c is the ratio of superimposed noise, and X̃ is the noise-superimposed image.
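The noise superposition of formula (1) can be sketched in a few lines of Python; the noise ratio c = 0.1 and the toy pixel list are illustrative choices for this sketch, not values fixed by the patent:

```python
import random

def superimpose_gaussian_noise(image, c=0.1, rng=None):
    """Formula (1): X_tilde = X + c * N(0, 1), applied pixel-wise.
    `image` is a flat list of pixel values; c is the noise ratio."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    return [x + c * rng.gauss(0.0, 1.0) for x in image]

clean = [0.5] * 8  # toy defect-free "image", values in [0, 1]
noisy = superimpose_gaussian_noise(clean, c=0.1)
```

In a real training loop the same operation would be applied to each image tensor per batch; only the noise ratio c is a tunable hyperparameter.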
The loss function of the model training process in step 2 consists of two parts, an adversarial loss L_adv-loss and a content loss L_content-loss. The adversarial loss adopts the WGAN-GP loss and the content loss adopts the L1 loss function, defined in formulas (2) and (3) respectively:

L_adv-loss = (1/n)·Σ_{i=1}^{n} [D(X(i)) − D(G(X̃(i)))] (2)

L_content-loss = (1/n)·Σ_{i=1}^{n} |X(i) − G(X̃(i))| (3)

where X(i) is an original color texture fabric image, X̃(i) its noise-superimposed version, G(·) and D(·) denote the results produced by the generator and the discriminator respectively, X̂(i) = G(X̃(i)) is the reconstructed color texture fabric image output by the model, n is the number of training samples, and W and b are the weights and biases of the training process;
the total loss function is as in equation (4):
L_total = L_adv-loss + λ·L_content-loss (4)
where λ is the coefficient that adjusts the weight ratio.
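A minimal sketch of the content term of formula (3) and the weighted combination of formula (4); λ = 10 is an illustrative weight, and the adversarial term is passed in as a precomputed scalar rather than produced by a real discriminator:

```python
def l1_content_loss(originals, reconstructions):
    """Formula (3): mean absolute error between original and reconstructed pixels."""
    n = sum(len(img) for img in originals)
    total = sum(abs(a - b)
                for img, rec in zip(originals, reconstructions)
                for a, b in zip(img, rec))
    return total / n

def total_loss(adv_loss, content_loss, lam=10.0):
    """Formula (4): L_total = L_adv-loss + lambda * L_content-loss."""
    return adv_loss + lam * content_loss

content = l1_content_loss([[0.0, 0.5, 1.0]], [[0.1, 0.5, 0.8]])  # mean |diff| = 0.1
combined = total_loss(0.25, content, lam=10.0)
```

The L1 term penalizes pixel-wise reconstruction error, while the adversarial term (computed by the critic during WGAN-GP training) pushes reconstructions toward the defect-free image distribution.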
Step 3 is implemented according to the following steps:
Step 3.1, perform graying processing on the image to be detected and on the corresponding reconstructed image;
Step 3.2, using a Gaussian kernel of size 3×3, perform a Gaussian filtering operation on the grayed image to be detected and on the grayed reconstructed image respectively;
Step 3.3, compute the difference between the Gaussian-filtered image to be detected and the Gaussian-filtered reconstructed image from step 3.2 to obtain a residual image;
Step 3.4, binarize the residual image obtained in step 3.3 using an adaptive threshold method;
Step 3.5, apply an opening operation to the binarized residual image to obtain the detection result image, then analyze the value of each pixel in the detection result image to determine whether a defect area exists: if the detection result image shows no difference, i.e. all pixel values are 0, the input color texture fabric has no defect; if both pixel values 0 and 1 appear in the detection result image, the input color texture fabric image contains a defect, and the defect area is the region with pixel value 1.
The graying processing in step 3.1 is carried out as in formula (5):

X_gray = 0.2125·X_r + 0.7154·X_g + 0.0721·X_b (5)

where X_gray is the grayed image, and X_r, X_g, X_b are the pixel values of the image to be detected or of the corresponding reconstructed image in the three RGB color channels; the pixel values of the grayed image range from 0 to 255.
The Gaussian kernel function in the Gaussian filtering operation of step 3.2 is given by formula (6):

G(x, y) = 1/(2π·σ_x·σ_y) · exp(−(x²/(2σ_x²) + y²/(2σ_y²))) (6)

where (x, y) are the pixel coordinates of the grayed image to be detected or of the grayed reconstructed image, σ_x is the pixel standard deviation in the x-axis direction of that image, and σ_y is the pixel standard deviation in the y-axis direction.
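Formula (6) can be evaluated on a small grid to build the 3×3 filter kernel of step 3.2; normalizing the kernel to sum to 1 is an assumption borrowed from conventional filtering practice, not stated in the patent:

```python
import math

def gaussian_kernel(size=3, sigma_x=1.0, sigma_y=1.0):
    """Evaluate formula (6) on a size x size grid centred at the origin,
    then normalize so the kernel weights sum to 1."""
    half = size // 2
    kernel = [[math.exp(-(x * x / (2 * sigma_x ** 2) + y * y / (2 * sigma_y ** 2)))
               / (2 * math.pi * sigma_x * sigma_y)
               for x in range(-half, half + 1)]
              for y in range(-half, half + 1)]
    total = sum(v for row in kernel for v in row)
    return [[v / total for v in row] for row in kernel]

kernel = gaussian_kernel(3, 1.0, 1.0)  # symmetric, peaked at the centre
```

Convolving both the grayed test image and the grayed reconstruction with this kernel suppresses pixel-level noise before the residual is computed.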
In step 3.3, the residual image is obtained according to formula (7):

X_res = |X_gray&Gaussian − X̂_gray&Gaussian| (7)

where X_res is the residual image, and X_gray&Gaussian and X̂_gray&Gaussian are the Gaussian-filtered grayed image to be detected and the Gaussian-filtered grayed reconstructed image respectively.
The binarization operation in step 3.4 is defined by formula (8):

f(p) = 1 if p > T, and f(p) = 0 otherwise, with T = μ + k·σ (8)

where f(p) is the binarized value, p is a pixel value of the residual image, T is the binarization threshold, μ is the mean of the residual image, σ is the standard deviation of the residual image, and k is the coefficient of the standard deviation.
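The adaptive thresholding of formula (8) can be sketched as follows; the toy residual array is illustrative, and k = 2 matches the value the experiments report:

```python
import statistics

def binarize_residual(residual, k=2.0):
    """Formula (8): T = mu + k*sigma over the whole residual image; pixels whose
    residual exceeds T are marked 1 (defect candidates), the rest 0."""
    flat = [p for row in residual for p in row]
    t = statistics.fmean(flat) + k * statistics.pstdev(flat)
    return [[1 if p > t else 0 for p in row] for row in residual]

residual = [[0.02, 0.01, 0.03],
            [0.02, 0.90, 0.01],  # one strong residual -> defect pixel
            [0.03, 0.02, 0.02]]
mask = binarize_residual(residual, k=2.0)
```

Because T is derived from the residual image's own mean and standard deviation, the threshold adapts to each image rather than being fixed globally.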
The invention has the advantages that
The invention constructs an unsupervised color texture fabric image reconstruction and repair model, the DenoisingGAN model, and trains it on the constructed database. The trained model acquires the ability to reconstruct and repair color texture fabric images, so that when a color texture fabric image is examined, its defects can be detected quickly and accurately by analyzing the residual image between the original image and its reconstruction.
Drawings
FIG. 1 is a schematic flow chart of a training phase in a method for detecting a defective area of a color texture fabric according to the present invention;
FIG. 2 is a schematic flow chart of the detection stage of the detection method for the defective area of the color texture fabric according to the invention;
fig. 3 is a structural diagram of a generator G of a DenoisingGAN model in a method for detecting a defective area of a color texture fabric according to the present invention;
fig. 4 is a structural diagram of a discriminator D of a DenoisingGAN model in a method for detecting a defective area of a color texture fabric according to the present invention;
FIG. 5 shows some of the defect-free experimental samples used in the method for detecting defect areas of color texture fabric according to the invention;
FIG. 6 shows some of the defective experimental samples used in the method for detecting defect areas of color texture fabric according to the invention;
fig. 7 is a diagram illustrating the detection result of the DenoisingGAN model used in the experiment in the method for detecting the defective area of the color texture fabric according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a method for detecting a defective area of a color texture fabric, which has a flow shown in figure 1 and is implemented according to the following steps:
Step 1, construct an unsupervised-learning-based image reconstruction and repair model, the DenoisingGAN model, consisting of a generator G and a discriminator D. As shown in fig. 3, the input and output layers of the generator G both have a three-channel image structure. The encoder of G starts with a convolutional layer with kernel size 7×7 and stride 1, which convolves the input image to enlarge the receptive field; then, to obtain deeper and more detailed features, four convolutional layers with kernel size 3×3 and stride 2 follow in sequence, so that the spatial dimension shrinks while the number of feature channels correspondingly increases, making full use of the feature extraction capability of the neural network to extract features of the color texture fabric image. Nine deep residual network (ResNet) blocks are inserted between the encoder and decoder of G to improve the capacity of the model. The decoder of G comprises four sequentially connected deconvolution layers with kernel size 3×3, which decode and recover the feature information extracted by the encoder, followed by a convolutional layer with kernel size 7×7 and stride 1 that converts the number of feature channels back to 3;
As shown in fig. 4, the discriminator D comprises, connected in sequence, five convolutional layers with kernel size 4×4 and stride 2 that compress the input image to 16×16 while the number of channels grows gradually from 3 to 512; one convolutional layer with kernel size 4×4 and stride 1 that changes the number of feature channels from 512 to 1; a Flatten layer that flattens the 16×16 feature map into a 512-dimensional one-dimensional vector; and two fully connected layers that output the final discrimination result;
Step 2, DenoisingGAN model training
As shown in fig. 1, superimpose Gaussian noise on defect-free color texture fabric images and feed the noise-superimposed images into the DenoisingGAN model constructed in step 1. The generator G performs feature extraction and restoration on the input image through encoding and decoding operations, while the discriminator D continuously adjusts the gradient and feeds it back to guide the training of G. When the number of training rounds reaches the set number of iterations, the trained DenoisingGAN model is obtained;
The Gaussian noise is superimposed on the defect-free color texture fabric image according to formula (1):

X̃ = X + c·N(0,1) (1)

where X is a defect-free image, N(0,1) is Gaussian noise following a standard normal distribution with mean 0 and standard deviation 1, c is the ratio of superimposed noise, and X̃ is the noise-superimposed image.
The loss function of the model training process consists of two parts, an adversarial loss L_adv-loss and a content loss L_content-loss; the adversarial loss adopts the WGAN-GP loss and the content loss adopts the L1 loss function, defined in formulas (2) and (3) respectively:

L_adv-loss = (1/n)·Σ_{i=1}^{n} [D(X(i)) − D(G(X̃(i)))] (2)

L_content-loss = (1/n)·Σ_{i=1}^{n} |X(i) − G(X̃(i))| (3)

where X(i) is an original color texture fabric image, X̃(i) its noise-superimposed version, G(·) and D(·) denote the results produced by the generator and the discriminator respectively, X̂(i) = G(X̃(i)) is the reconstructed color texture fabric image output by the model, n is the number of training samples, and W and b are the weights and biases of the training process;
the total loss function is as in equation (4):
L_total = L_adv-loss + λ·L_content-loss (4)
where λ is the coefficient that adjusts the weight ratio.
Step 3, input the color texture fabric image to be detected into the trained DenoisingGAN model to output a corresponding reconstructed image, and detect the defect area as follows:
Step 3.1, perform graying processing on the image to be detected and on the corresponding reconstructed image, as in formula (5):

X_gray = 0.2125·X_r + 0.7154·X_g + 0.0721·X_b (5)

where X_gray is the grayed image, and X_r, X_g, X_b are the pixel values of the image to be detected or of the corresponding reconstructed image in the three RGB color channels; the pixel values of the grayed image range from 0 to 255;
Step 3.2, using a Gaussian kernel of size 3×3, perform a Gaussian filtering operation on the grayed image to be detected and on the grayed reconstructed image respectively; the Gaussian kernel function in the Gaussian filtering operation is given by formula (6):

G(x, y) = 1/(2π·σ_x·σ_y) · exp(−(x²/(2σ_x²) + y²/(2σ_y²))) (6)

where (x, y) are the pixel coordinates of the grayed image to be detected or of the grayed reconstructed image, and σ_x and σ_y are the pixel standard deviations in the x-axis and y-axis directions of that image;
Step 3.3, compute the difference between the Gaussian-filtered image to be detected and the Gaussian-filtered reconstructed image from step 3.2 to obtain a residual image, according to formula (7):

X_res = |X_gray&Gaussian − X̂_gray&Gaussian| (7)

where X_res is the residual image, and X_gray&Gaussian and X̂_gray&Gaussian are the Gaussian-filtered grayed image to be detected and the Gaussian-filtered grayed reconstructed image respectively;
Step 3.4, binarize the residual image obtained in step 3.3 using an adaptive threshold method; the binarization operation is defined by formula (8):

f(p) = 1 if p > T, and f(p) = 0 otherwise, with T = μ + k·σ (8)

where f(p) is the binarized value, p is a pixel value of the residual image, T is the binarization threshold, μ is the mean of the residual image, σ is the standard deviation of the residual image, and k is the coefficient of the standard deviation (k = 2 in the experiments);
Step 3.5, apply an opening operation to the binarized residual image to obtain the detection result image, then analyze the value of each pixel in the detection result image to determine whether a defect area exists: if the detection result image shows no difference, i.e. all pixel values are 0, the input color texture fabric has no defect; if both pixel values 0 and 1 appear in the detection result image, the input color texture fabric image contains a defect, and the defect area is the region with pixel value 1.
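The detection stage of steps 3.3 to 3.5 can be chained end to end on toy grayscale arrays; this is a simplified sketch in which Gaussian filtering and the morphological opening are omitted, and all names are illustrative:

```python
import statistics

def detect_defects(test_gray, recon_gray, k=2.0):
    """Steps 3.3-3.5 in miniature: residual image (formula (7)), adaptive
    binarization (formula (8)), then the defect decision from the mask values.
    Gaussian filtering and the opening operation are omitted for brevity."""
    residual = [[abs(a - b) for a, b in zip(r1, r2)]
                for r1, r2 in zip(test_gray, recon_gray)]
    flat = [p for row in residual for p in row]
    t = statistics.fmean(flat) + k * statistics.pstdev(flat)
    mask = [[1 if p > t else 0 for p in row] for row in residual]
    has_defect = any(v == 1 for row in mask for v in row)
    return mask, has_defect

recon = [[100, 100, 100]] * 3                               # "repaired" image
test = [[100, 100, 100], [100, 200, 100], [100, 100, 100]]  # one defect pixel
mask, has_defect = detect_defects(test, recon)
```

In the full method the reconstruction comes from the trained DenoisingGAN generator, and the opening operation removes isolated noise pixels from the mask before the final decision.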
The following describes a method for detecting a defective area of a color textured fabric according to the present invention with specific embodiments:
preparation of experimental apparatus: the hardware configuration of the DenoisingGAN model modeling, training and defect detection experiment is as the CPU: intel (R) core (TM) i7-6850K CPU (3.60 GHz); GPU: NVIDIA GeForce GTX 1080Ti (11G); the memory is 64G. And (3) software environment configuration: the operating system is Ubuntu 16.04.6 LTS; the deep learning framework is Keras 2.1.3 and TensorFlow1.12.0; anaconda 3.
Experimental samples: the data set used in the experiments comes from Guangdong Yida Textile Co., Ltd. and can be divided into three types by pattern complexity: Simple Patterns (SL1–SL19), Stripe Patterns (SP1–SP26), and Complex Patterns (CL1–CL21), 66 color texture fabric samples with different patterns in total. Ten pattern data sets were selected from the three types for training and testing: SL1, SL11, SL15, SL16, SP3, SP5, SP19, SP24, CL5, and CL6. Some samples from the data set are shown in fig. 5 and fig. 6, where fig. 5 shows defect-free color texture fabric samples and fig. 6 shows defective ones.
Experimental evaluation indices: the detection result images were analyzed both qualitatively and quantitatively. The qualitative analysis is a visual presentation of the detected defect areas. The quantitative analysis evaluates the model with five indices: Precision (P), Recall (R), F1-measure (F1), Accuracy (Acc), and mean intersection-over-union (IoU), defined in formulas (10), (11), (12), (13), and (14) respectively:

P = TP / (TP + FP) (10)

F1 = 2·P·R / (P + R) (11)

R = TP / (TP + FN) (12)

Acc = (TP + TN) / (TP + TN + FP + FN) (13)

IoU = TP / (TP + FP + FN) (14)

where TP denotes a positive sample predicted as positive, TN a negative sample predicted as negative, FP a negative sample predicted as positive, and FN a positive sample predicted as negative.
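The five indices of formulas (10) to (14) follow directly from the four confusion counts; the example counts below are illustrative, not the patent's experimental data:

```python
def evaluation_indices(tp, fp, tn, fn):
    """Pixel-level indices of formulas (10)-(14) from confusion counts."""
    p = tp / (tp + fp)                     # precision (10)
    r = tp / (tp + fn)                     # recall (12)
    f1 = 2 * p * r / (p + r)               # F1-measure (11)
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy (13)
    iou = tp / (tp + fp + fn)              # intersection-over-union (14)
    return {"P": p, "R": r, "F1": f1, "Acc": acc, "IoU": iou}

m = evaluation_indices(tp=80, fp=20, tn=890, fn=10)
```

Note that accuracy can look high even when defect pixels are rare, which is why IoU and F1, both insensitive to the large TN count, are also reported.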
Experimental procedure: first, the unsupervised yarn-dyed fabric reconstruction and repair model, the DenoisingGAN model, is established; second, the model is trained with defect-free color texture fabric samples, after which it possesses reconstruction and repair capability; finally, when a color texture fabric image is examined, its defect areas are detected rapidly by analyzing the residual image between the original image and its reconstructed, repaired image.
Qualitative analysis of the experimental results: in the experiments the DenoisingGAN model was trained with defect-free color texture fabric images, after which it had the ability to reconstruct and repair color texture fabric images. Finally, the residual image between each color texture fabric image under test and its reconstruction was computed, and defect areas were detected and located through residual analysis. The experimental results are shown in fig. 7: the DenoisingGAN model of the present application accurately restores color texture fabric images of different patterns while repairing the defect areas in them well. Compared with the ground truth of the defect areas, the DenoisingGAN model achieves good detection results across multiple patterns.
Quantitative analysis of the experimental results: the detection results of the DenoisingGAN model on the defect images of the ten color texture fabric data sets were compared in terms of precision (P), recall (R), F1-measure (F1), accuracy (Acc), and mean intersection-over-union (IoU). Each index ranges from 0 to 1, and larger values indicate better detection results; the results are shown in Table 1:
table 1 detection results of DenoisingGAN model under five evaluation indexes
Experimental summary: the invention provides a method for detecting defect areas of color texture fabric, which is essentially an unsupervised modeling method based on the DenoisingGAN model. The method builds the unsupervised DenoisingGAN model from defect-free samples only, and thus effectively avoids the practical problems of scarce defect samples, the high cost of annotating large-scale data, and the poor generalization of hand-designed defect features. At the same time, the detection precision of the proposed method meets the requirements of the fabric inspection process in color texture fabric production, providing the color texture fabric manufacturing industry with an automatic defect detection scheme for the inspection process that is easy to put into engineering practice.
Claims (10)
1. A method for detecting a defective area of a color texture fabric is characterized by comprising the following steps:
step 1, constructing a DenoisingGAN model based on unsupervised learning, wherein the model consists of a generator G and a discriminator D;
step 2, DenoisingGAN model training
superimposing Gaussian noise on a defect-free color texture fabric image and feeding the noise-superimposed image into the DenoisingGAN model constructed in step 1; the generator G performs feature extraction and recovery on the input image through encoding and decoding operations, while the discriminator D continuously adjusts the gradient and feeds it back to the generator G to guide the training of the generator G; when the number of training iterations reaches the set limit, the trained DenoisingGAN model is obtained;
step 3, inputting the color texture fabric image to be detected into the trained DenoisingGAN model to output a corresponding reconstructed image, and performing detection to determine the defect area.
2. The method according to claim 1, wherein the input layer and the output layer of the generator G in step 1 are both three-channel image structures; the encoder of the generator G comprises, connected in sequence, a convolutional layer with a kernel size of 7 × 7 and a stride of 1 and a convolutional layer with a kernel size of 3 × 3 and a stride of 2; nine ResNet deep residual blocks are added between the encoder and the decoder of the generator G; and the decoder of the generator G comprises, connected in sequence, a deconvolution layer with a kernel size of 3 × 3 and a convolutional layer with a kernel size of 7 × 7 and a stride of 1.
3. The method for detecting the defect area of a color texture fabric according to claim 2, wherein the discriminator D comprises five sequentially connected convolutional layers with a kernel size of 4 × 4 and a stride of 2, which compress the image input to the discriminator D to a size of 16 × 16 while the number of channels gradually grows from 3 to 512; one convolutional layer with a kernel size of 4 × 4 and a stride of 1 changes the number of feature channels from 512 to 1; a Flatten layer flattens the 16 × 16 feature map into a one-dimensional vector of 512; and two fully connected layers output the final discrimination result.
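The claimed downsampling from the input image to a 16 × 16 map through five stride-2 convolutions can be checked with the standard convolution output-size formula. In this sketch the 512 × 512 input resolution and the padding of 1 are assumptions not stated in this excerpt:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution: floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Five 4x4, stride-2 convolutions (padding 1 assumed) halve the map each time.
size = 512  # assumed input resolution
for _ in range(5):
    size = conv_out(size, kernel=4, stride=2, pad=1)
print(size)  # 16: matches the 16 x 16 map described for the discriminator D
```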
4. The method for detecting the defect area of a color textured fabric according to claim 3, wherein superimposing Gaussian noise on the defect-free color textured fabric image in step 2 is specifically shown in formula (1):
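Formula (1) itself is not reproduced in this excerpt. A common form of such noise superposition is additive zero-mean Gaussian noise followed by clipping to the valid intensity range; the following sketch assumes that form, and the function name, σ value and [0, 1] range are illustrative:

```python
import numpy as np

def add_gaussian_noise(image, sigma=0.1, seed=0):
    """Superimpose zero-mean Gaussian noise on an image with values in [0, 1].

    Additive N(0, sigma^2) noise with clipping is an assumed form; the
    exact definition of formula (1) is not shown in this excerpt.
    """
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)
```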
5. The method according to claim 4, wherein the loss function in the training process of the model in step 2 comprises two parts, the adversarial loss L_adv-loss and the content loss L_content-loss; the adversarial loss adopts the WGAN-GP loss, and the content loss adopts the L1 loss function, defined as shown in formula (2) and formula (3) respectively:
wherein X(i) is the original color texture fabric image, G(·) and D(·) respectively denote the results obtained after processing by the generator and the discriminator, X̂(i) is the reconstructed color texture fabric image output by the model, n is the number of training samples, and W and b are the weights and biases in the training process;
the total loss function is as in equation (4):
L_total = L_adv-loss + λ·L_content-loss (4)
where λ is the coefficient that adjusts the weight ratio.
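The L1 content loss and the weighted total loss of formula (4) can be sketched as follows. The WGAN-GP adversarial term is passed in as a precomputed scalar, since its full definition (formula (2)) is not reproduced in this excerpt, and the λ value shown is an assumption:

```python
import numpy as np

def content_loss_l1(x, x_hat):
    """L1 content loss: mean absolute error between the original image
    and the model's reconstruction (formula (3))."""
    return np.mean(np.abs(x - x_hat))

def total_loss(adv_loss, x, x_hat, lam=100.0):
    """Formula (4): L_total = L_adv + lambda * L_content.
    lam is an assumed weight; the patent only calls it a coefficient."""
    return adv_loss + lam * content_loss_l1(x, x_hat)
```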
6. The method for detecting the defective area of the color texture fabric as claimed in claim 5, wherein the step 3 is implemented by the following steps:
step 3.1, carrying out gray processing on the image to be detected and the corresponding reconstructed image;
step 3.2, performing a Gaussian filtering operation, with a Gaussian kernel of size 3 × 3, on the grayed image to be detected and the grayed reconstructed image respectively;
step 3.3, calculating the difference value of the image to be detected and the corresponding reconstructed image after Gaussian filtering in the step 3.2 to obtain a residual image;
step 3.4, performing a binarization operation on the residual image obtained in step 3.3 using an adaptive threshold method;
step 3.5, applying a morphological opening operation to the binarized residual image to obtain a detection result image, and analyzing the value of each pixel in the detection result image to determine whether a defect area exists: if the detection result image shows no difference, i.e. all pixel values in the image are 0, the input color texture fabric has no defect; if both pixel values 0 and 1 exist in the detection result image, the input color texture fabric image contains a defect, and the defect area is the area with pixel value 1.
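The opening operation of step 3.5 (erosion followed by dilation) and the final defect decision can be sketched in NumPy; the 3 × 3 structuring element is an assumption, as the patent does not specify one in this excerpt:

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 structuring element (zero-padded border):
    a pixel survives only if its whole 3x3 neighborhood is 1."""
    padded = np.pad(mask, 1, constant_values=0)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 structuring element: a pixel becomes 1
    if any pixel in its 3x3 neighborhood is 1."""
    padded = np.pad(mask, 1, constant_values=0)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def detect_defect(binary_residual):
    """Opening removes isolated noise pixels; any remaining 1-pixel
    marks a defect area, per step 3.5."""
    opened = dilate(erode(binary_residual))
    return opened, bool(opened.any())
```

Isolated single pixels (typically noise) are erased by the opening, while connected defect regions survive.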
7. The method for detecting the defective area of the color texture fabric as claimed in claim 6, wherein the graying process in step 3.1 is specifically performed as shown in formula (5):
X_gray = 0.2125·X_r + 0.7154·X_g + 0.0721·X_b (5)
in the formula: xgrayThe image is grayed; xr、Xg、XbThe pixel values of the image to be detected and the corresponding reconstructed image under three RGB different color channels are respectively, and the range of the pixel value of the image after graying is 0 to 255.
8. The method for detecting the defect area of a color texture fabric according to claim 7, wherein the Gaussian kernel function in the Gaussian filtering operation in step 3.2 is shown in formula (6):

G(x, y) = (1 / (2π·σ_x·σ_y)) · exp(−(x² / (2σ_x²) + y² / (2σ_y²))) (6)
wherein (x, y) are the pixel coordinates of the grayed image to be detected or the grayed reconstructed image; σ_x is the standard deviation of the pixels along the x-axis of the grayed image to be detected or reconstructed image; and σ_y is the standard deviation of the pixels along the y-axis of the grayed image to be detected or reconstructed image.
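A sampled, normalized 2-D Gaussian kernel with separate σ_x and σ_y can be built as follows; the 3 × 3 size matches step 3.2, while the σ values shown are assumptions:

```python
import numpy as np

def gaussian_kernel(size=3, sigma_x=1.0, sigma_y=1.0):
    """Sample G(x, y) = exp(-(x^2/(2*sx^2) + y^2/(2*sy^2))) / (2*pi*sx*sy)
    on a size x size grid, then normalize so the weights sum to 1
    (the usual convention for a filtering kernel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 / (2 * sigma_x**2) + y**2 / (2 * sigma_y**2)))
    g /= 2 * np.pi * sigma_x * sigma_y
    return g / g.sum()
```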
9. The method for detecting the defective area of the color texture fabric as claimed in claim 8, wherein the residual image in the step 3.3 is obtained according to equation (7):
10. The method for detecting the defect area of a color texture fabric according to claim 9, wherein the binarization operation in step 3.4 is shown in formula (8):

f(p) = 1 if p > T, otherwise 0, with T = μ + k·σ (8)
wherein f(p) is the binarized value, p is a pixel value of the residual image, T is the binarization threshold, μ is the mean value of the residual image, σ is the standard deviation of the residual image, and k is the coefficient of the standard deviation.
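Formulas (7) and (8) are not reproduced in full in this excerpt. Under the common reading of step 3.3 (absolute difference of the two filtered images) and the adaptive threshold T = μ + k·σ suggested by the symbol definitions above, a sketch (the function names and k value are assumptions):

```python
import numpy as np

def residual_image(gray_detected, gray_reconstructed):
    """Assumed form of formula (7): absolute difference of the
    Gaussian-filtered detected and reconstructed images."""
    return np.abs(gray_detected - gray_reconstructed)

def binarize_adaptive(residual, k=3.0):
    """Assumed form of formula (8): threshold T = mu + k*sigma computed
    from the residual's own statistics; pixels above T become 1."""
    t = residual.mean() + k * residual.std()
    return (residual > t).astype(np.uint8)
```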
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111153679.4A CN113838040A (en) | 2021-09-29 | 2021-09-29 | Detection method for defect area of color texture fabric |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113838040A true CN113838040A (en) | 2021-12-24 |
Family
ID=78967452
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728633A (en) * | 2019-09-06 | 2020-01-24 | 上海交通大学 | Multi-exposure high-dynamic-range inverse tone mapping model construction method and device |
CN111340791A (en) * | 2020-03-02 | 2020-06-26 | 浙江浙能技术研究院有限公司 | Photovoltaic module unsupervised defect detection method based on GAN improved algorithm |
CN111402197A (en) * | 2020-02-09 | 2020-07-10 | 西安工程大学 | Detection method for yarn-dyed fabric cut piece defect area |
CN112164033A (en) * | 2020-09-14 | 2021-01-01 | 华中科技大学 | Abnormal feature editing-based method for detecting surface defects of counternetwork texture |
CN112270654A (en) * | 2020-11-02 | 2021-01-26 | 浙江理工大学 | Image denoising method based on multi-channel GAN |
Non-Patent Citations (1)
Title |
---|
ZHANG Hongwei, "Yarn-dyed fabric defect detection based on generative adversarial networks", Journal of Xi'an Polytechnic University, vol. 36, no. 1, pages 1-9 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114565567A (en) * | 2022-02-15 | 2022-05-31 | 清华大学 | Method and device for detecting defects of complex texture lace cloth |
CN114565567B (en) * | 2022-02-15 | 2024-04-09 | 清华大学 | Defect detection method and device for complex texture lace cloth |
CN114723705A (en) * | 2022-03-31 | 2022-07-08 | 海门市恒创织带有限公司 | Cloth flaw detection method based on image processing |
CN114723705B (en) * | 2022-03-31 | 2023-08-22 | 深圳市启灵图像科技有限公司 | Cloth flaw detection method based on image processing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023070911A1 (en) | Self-attention-based method for detecting defective area of color-textured fabric | |
Huang et al. | Fabric defect segmentation method based on deep learning | |
CN111402197B (en) | Detection method for colored fabric cut-parts defect area | |
CN107169956B (en) | Color woven fabric defect detection method based on convolutional neural network | |
CN112070727B (en) | Metal surface defect detection method based on machine learning | |
CN109272500B (en) | Fabric classification method based on adaptive convolutional neural network | |
WO2023050563A1 (en) | Autoencoder-based detection method for defective area of colored textured fabric | |
CN114549522A (en) | Textile quality detection method based on target detection | |
CN110796637A (en) | Training and testing method and device of image defect detection model and storage medium | |
CN111402226A (en) | Surface defect detection method based on cascade convolution neural network | |
CN111223093A (en) | AOI defect detection method | |
CN112837295A (en) | Rubber glove defect detection method based on generation of countermeasure network | |
CN105678788B (en) | A kind of fabric defect detection method based on HOG and low-rank decomposition | |
Li et al. | TireNet: A high recall rate method for practical application of tire defect type classification | |
CN111242185A (en) | Defect rapid preliminary screening method and system based on deep learning | |
CN114119500A (en) | Yarn dyed fabric defect area detection method based on generation countermeasure network | |
CN110827260A (en) | Cloth defect classification method based on LBP (local binary pattern) features and convolutional neural network | |
CN113838040A (en) | Detection method for defect area of color texture fabric | |
CN111080574A (en) | Fabric defect detection method based on information entropy and visual attention mechanism | |
CN115205209A (en) | Monochrome cloth flaw detection method based on weak supervised learning | |
CN115018790A (en) | Workpiece surface defect detection method based on anomaly detection | |
CN114972216A (en) | Construction method and application of texture surface defect detection model | |
CN111862027A (en) | Textile flaw detection method based on low-rank sparse matrix decomposition | |
CN118212196B (en) | Industrial defect detection method based on image restoration | |
CN113902695B (en) | Detection method for colored fabric cut-parts defect area |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||