CN110889838A - Fabric defect detection method and device - Google Patents


Info

Publication number
CN110889838A
CN110889838A (application number CN201911173471.1A)
Authority
CN
China
Prior art keywords
defect
fabric
image
model
resnet50
Prior art date
Legal status
Pending
Application number
CN201911173471.1A
Other languages
Chinese (zh)
Inventor
罗维平
陈永恒
陈军
马双宝
游长莉
Current Assignee
Wuhan Textile University
Original Assignee
Wuhan Textile University
Priority date
Filing date
Publication date
Application filed by Wuhan Textile University filed Critical Wuhan Textile University
Priority application: CN201911173471.1A
Publication: CN110889838A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84: Systems specially adapted for particular applications
    • G01N 21/88: Investigating the presence of flaws or contamination
    • G01N 21/8851: Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84: Systems specially adapted for particular applications
    • G01N 21/88: Investigating the presence of flaws or contamination
    • G01N 21/8851: Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887: Scan or image signal processing specially adapted therefor, based on image processing techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20021: Dividing image into blocks, subimages or windows

Abstract

The invention discloses a fabric defect detection method and device. The method comprises the following steps: acquiring a ResNet50 model; replacing a first classifier of the ResNet50 model with a second classifier; acquiring characteristic values of fabric defect sample images; extracting the weight parameters of the ResNet50 model as initial values; performing transfer learning on the ResNet50 model according to the characteristic values and the initial values to obtain various defect category identification models; and detecting a fabric image to be detected according to the various defect category identification models. The embodiment of the invention uses transfer learning to compensate for the extreme scarcity of defect image samples: specifically, ResNet50 model parameters trained on the large-scale image data set ImageNet are used as the initial parameters of the improved ResNet50 network of the embodiment, and the top-layer convolutional network is then retrained on the fabric defect sample image set, making the model better suited to fabric defect detection.

Description

Fabric defect detection method and device
Technical Field
The invention relates to the technical field of textiles, and in particular to a method and a device for detecting fabric defects.
Background
Fabric defect detection occupies a very important position in textile production: the price of a fabric with defects can drop by 45%-65%. Defect detection is therefore the final step before finished fabric leaves the factory and an important quality-control process in textile production.
At present, fabric defect detection in industry still relies mainly on manual inspection, with a detection speed of only 5-20 m/min. Manual inspection is easily affected by external factors, carries high labor costs, and suffers from low efficiency, false detections and high miss rates, so it can hardly meet the production requirements of modern industry. A fast and accurate defect detection method is therefore urgently needed in the field of fabric defect detection.
In recent years a large number of researchers have proposed various defect detection algorithms, but traditional algorithms have problems such as demanding image-acquisition requirements, low image-processing speed and poor robustness, and are difficult to apply on industrial production sites. With the emergence of deep convolutional neural networks, new directions have appeared in fabric defect detection research. Classical deep-learning image detection models such as AlexNet, GoogLeNet, VGG and ResNet have brought new breakthroughs to fabric defect detection. For example, one approach extracts fabric defect features with a ResNet101 model and identifies defect targets with a Faster RCNN detection network, obtaining high detection accuracy but unsatisfactory real-time performance in terms of recognition speed. Another approach trains a fabric defect detection model with a support vector machine classification algorithm, which handles small-sample, high-dimensional data; however, the GLCM feature values must be computed on a GPU, the computation is heavy and costly, and the input image must be downscaled to raise the computation speed, which weakens the original image feature information to some extent and affects the reliability of the final detection result.
For fabric defect detection, although there are many kinds of fabric defects, many of them are uncommon, so defect image samples are extremely scarce; this scarcity in turn causes the low computational efficiency and low detection-result reliability of the detection algorithms in the prior art.
Disclosure of Invention
In view of this, embodiments of the present invention provide a fabric defect detection method and apparatus, so as to solve the problems of low computational efficiency and low detection-result reliability of the detection algorithms in the prior art.
The embodiment of the invention provides a fabric defect detection method, which comprises the following steps:
acquiring a ResNet50 model;
replacing the first classifier of the ResNet50 model with a second classifier;
acquiring a characteristic value of a fabric defect sample image;
extracting the weight parameter of the ResNet50 model as an initial value;
carrying out transfer learning on the ResNet50 model according to the characteristic value and the initial value to obtain various defect category identification models;
and detecting the fabric image to be detected according to the various defect category identification models.
Optionally, before obtaining the feature value of the fabric defect sample image, the method further comprises:
acquiring a fabric defect sample image at a resolution specification of 2560 x 1920;
cutting the fabric defect sample image by a sliding window with the resolution of 320 × 320 and a sliding step size of 160 to obtain 8 × 6 small-size images;
carrying out image enhancement on the small-size image;
wherein the image enhancement comprises:
randomly rotating the image, wherein the variation parameter range is 0-90 degrees;
translating horizontally and/or vertically, wherein the variation parameter range is 0-30%;
randomly performing a shear transformation, wherein the variation parameter range is 0-90 degrees;
perspective transformation, wherein the range of the variation parameter is 0-20%;
zooming, wherein the variation parameter range is 0-10%;
randomly flipping horizontally and/or vertically.
Optionally, performing transfer learning on the ResNet50 model according to the feature values and the initial values to obtain a plurality of defect category identification models, further comprising:
acquiring learning features of the various defect category identification models:

x_L = x_l + Σ_{i=l}^{L-1} F(x_i, W_i);

acquiring the gradient of the backward process:

∂loss/∂x_l = (∂loss/∂x_L) · ∂x_L/∂x_l = (∂loss/∂x_L) · (1 + ∂(Σ_{i=l}^{L-1} F(x_i, W_i))/∂x_l);

when the dimension of the residual mapping F(x) result differs from the dimension of the skip connection (x), performing a corresponding dimension-raising operation on the skip connection (x);

wherein the residual structure is calculated as

y_l = h(x_l) + F(x_l, W_l);

x_{l+1} = f(y_l);

wherein x_l and x_{l+1} respectively represent the input and output of the l-th residual unit, and each residual unit comprises a multi-layer structure; F(x_l, W_l) is the residual function of layer l; h(x_l) = x_l is the identity mapping function of the l-th layer; f(y_l) converts y_l into the input of the next layer through an activation function; i, l and L are positive integers.
Optionally, before performing the corresponding dimension-raising operation on the skip connection (x) according to the difference between the dimension of the residual mapping F(x) result and the dimension of the skip connection (x), the method further comprises:
performing average pooling on the skip connection (x).
Optionally, after obtaining the plurality of defect category identification models, further comprising:
performing index evaluation on the various defect category identification models; wherein the index evaluation uses accuracy (Acc) as the evaluation index of the model.
Optionally, performing transfer learning on the ResNet50 model according to the feature values and the initial values to obtain a plurality of defect category identification models, further comprising:
and monitoring the training loss rate, the verification loss rate, the training accuracy rate and the verification accuracy rate of the various defect type identification models.
Optionally, performing transfer learning on the ResNet50 model according to the feature values and the initial values to obtain a plurality of defect category identification models, further comprising:
in the training process of the ResNet50 model in continuous 100 rounds of transfer learning, when the verification accuracy reaches the maximum value and tends to be stable, and the loss rate of the verification set reaches the minimum value and tends to be stable, the training is terminated early.
Optionally, before obtaining the feature value of the fabric defect sample image, the method further comprises:
selecting flawless images and defective images at a ratio of 2:1 as a training sample set;
from the flawless images and the defective images, selecting 80% as a training set, 10% as a verification set, and 10% as a test set.
Optionally, performing transfer learning on the ResNet50 model according to the feature values and the initial values to obtain a plurality of defect category identification models, further comprising:
dropout operation is performed on the small-size image.
The embodiment of the invention also provides a device for detecting the fabric defects, which comprises the following components:
the first acquisition module is used for acquiring a ResNet50 model;
a removal module for removing the classifier of the ResNet50 model;
the second acquisition module is used for acquiring the characteristic value of the fabric defect sample image;
the extraction module is used for extracting the weight parameters of the ResNet50 model as initial values;
the third acquisition module is used for carrying out transfer learning on the ResNet50 model according to the characteristic value and the initial value to acquire various defect type identification models;
and the detection module is used for detecting the fabric image to be detected according to the various defect type identification models.
The embodiment of the invention further provides a fabric defect detection device, comprising: a first acquisition module, a second acquisition module, a removal module, an extraction module, a third acquisition module, a detection module, a memory and a processor, wherein the modules, the memory and the processor are communicatively connected with one another, the memory stores computer instructions, and the processor executes the computer instructions so as to perform the fabric defect detection method described above.
The embodiment of the invention has the following beneficial effects:
1. In machine learning, training a model with good robustness requires enough labeled samples and large training and test sets that are identically distributed and mutually independent, which is difficult to satisfy in practice. For fabric defect detection, although there are many kinds of fabric defects, many of them are uncommon, so defect image samples are extremely scarce; the embodiment of the invention uses transfer learning to compensate for this.
ResNet50 model parameters trained on the large-scale image data set ImageNet are used as the initial parameters of the multiple defect category identification models in the embodiment of the invention, and the top-layer convolutional network is then retrained on the fabric defect sample image set, making the model better suited to fabric defect detection.
2. In the embodiment of the invention, image enhancement increases the number of samples from the existing training samples by applying various random transformations that generate plausible images, so as to improve the generalization ability of the model.
3. Average pooling saves parameters, greatly reduces the number of network parameters and helps avoid overfitting; it is also more stable to spatial transformations of the input.
4. A dropout operation is used to prevent overfitting during the training of the multiple defect category identification models.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way, and in which:
FIG. 1 is a flow chart of a fabric defect detection method in an embodiment of the present invention;
FIG. 2 is a block diagram of a fabric defect detecting apparatus according to an embodiment of the present invention;
FIG. 3 is a block diagram of a fabric defect detection terminal in accordance with an embodiment of the present invention;
FIG. 4 is a diagram of a residual block structure in the prior art;
FIG. 5 is a diagram of a standard ResNet50 network architecture;
FIG. 6 is a diagram of a plurality of defect category identification models of a fabric defect detecting method in accordance with an embodiment of the present invention;
FIG. 7 is a training set image of a fabric defect in accordance with an embodiment of the present invention;
FIG. 8 is a partially enlarged view of a hole-wiping defect image after the image enhancement applied in an embodiment of the present invention;
FIG. 9 is a loss curve and accuracy curve for a standard ResNet50 network model training process;
FIG. 10 is a loss curve and an accuracy curve of a training process of a multi-defect category identification model in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a method for detecting a fabric defect, as shown in fig. 1, including:
step S10, obtain ResNet50 model.
In this embodiment, the ResNet50 model trained on the ImageNet dataset is downloaded through a network resource.
Step S20, replace the first classifier of the ResNet50 model with the second classifier.
In this embodiment, the original first classifier (a fully connected layer) of the ResNet50 model is designed to recognize 1000 common object classes and is not suitable for fabric defect detection, so the classifier is replaced to adapt the model to fabric defect detection. In a specific embodiment, the convolutional base parameters trained by the original model (i.e., generic image features) can be reused directly through transfer learning, and a second classifier is then trained for its own classification task, namely fabric defect detection; a sketch of this head replacement is given below.
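As an illustration only, the following Keras/TensorFlow sketch shows one possible way to replace the 1000-class head of a pretrained ResNet50 with a softmax classifier for 9 fabric classes while keeping the convolutional base frozen; the dropout rate, input size and optimizer are assumptions of this sketch, not values prescribed by the embodiment.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# Convolutional base pretrained on ImageNet; the original 1000-class head is dropped.
base = ResNet50(weights="imagenet", include_top=False, input_shape=(320, 320, 3))
base.trainable = False  # reuse the trained convolution-base parameters as-is

# Second classifier: global average pooling + dropout + 9-way softmax
# (7 defect classes + "other defects" + normal fabric, as in the embodiment).
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.5)(x)          # dropout rate is an assumed value
outputs = layers.Dense(9, activation="softmax")(x)

model = models.Model(base.input, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```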
And step S30, acquiring the characteristic value of the fabric defect sample image.
In the present embodiment, the feature values of the fabric defect sample images are the inputs of the improved ResNet50 model, i.e., the input of the multiple defect category identification model.
In step S40, the weight parameters of the ResNet50 model are extracted as initial values.
In this embodiment, the weighting parameters of the ResNet50 model are retained.
And step S50, performing transfer learning on the ResNet50 model according to the characteristic values and the initial values to obtain various defect type identification models.
In machine learning, training a model with good robustness requires enough labeled samples and large training and test sets that are identically distributed and mutually independent, which is difficult to satisfy in practice. For fabric defect detection, although there are many kinds of fabric defects, many of them are uncommon, so defect image samples are extremely scarce; the embodiment of the invention uses transfer learning to compensate for this.
Transfer learning improves a new task by transferring knowledge from related tasks that have already been learned, without starting the learning from scratch. In the deep learning field, if the original image data set is large enough, the spatial hierarchy of features learned by a pre-trained network model can effectively serve as a general model of the visual world, so these features can be used for a variety of computer vision problems, even when the categories of the new classification problems are completely different from those of the original task. In a convolutional neural network model, the generality of the features extracted by a given convolutional layer depends on the depth of that layer in the model: layers closer to the bottom extract local, highly generic feature maps such as visual edges, colors and textures, while layers closer to the top extract more abstract concepts such as the outlines and shapes of specific classes.
And step S60, detecting the fabric image to be detected according to the various defect type identification models.
In this embodiment, the ResNet50 model parameters trained on the large-scale image data set ImageNet are used as the initial parameters of the improved ResNet50 network of the embodiment of the invention, and the top-layer convolutional network is then retrained on the fabric defect sample image set, making the network more suitable for defect detection. A sketch of this fine-tuning step follows.
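Continuing the sketch above, and purely as an illustration, the pretrained weights can be kept as initial values while only the top convolutional stage and the new classifier are retrained. The sketch assumes the layer naming convention of the Keras ResNet50 implementation (top-stage layers prefixed "conv5_"); the learning rate is likewise an assumed value.

```python
from tensorflow.keras.optimizers import Adam

base.trainable = True
for layer in base.layers:
    # Keep the ImageNet weights as initial values; retrain only the top
    # convolutional stage (conv5_*) together with the new classifier.
    layer.trainable = layer.name.startswith("conv5_")

model.compile(optimizer=Adam(learning_rate=1e-4),  # small LR so pretrained weights are only fine-tuned
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```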
As an optional implementation manner, before step S30, the method further includes:
at step S21, fabric defect sample images are acquired at a resolution specification of 2560 × 1920.
At step S22, the fabric defect sample image was cut with a sliding window resolution of 320 × 320 and a sliding step size of 160 to obtain 8 × 6 small size images.
In step S23, image enhancement is performed on the small-size image.
Wherein the image enhancement comprises: randomly rotating the image, with a variation parameter range of 0-90 degrees; translating horizontally or vertically, with a variation parameter range of 0-30%; randomly performing a shear transformation, with a variation parameter range of 0-90 degrees; perspective transformation, with a variation parameter range of 0-20%; zooming, with a variation parameter range of 0-10%; and random horizontal or vertical flipping.
In this embodiment, because existing fabric defect databases are limited, most researchers choose to build their own databases; however, high-definition fabric defect images covering a complete range of varieties are difficult to acquire and the amount of effective data is limited, so only small data sets can be built for certain specific defect types and cloth materials. In addition, the existing TILDA fabric database has little data, few varieties, fewer than one thousand images in total, low pixel resolution and limited defect feature information; even though data enhancement can reduce model overfitting to some extent, the training accuracy of the model is always limited by the small number of original images. The embodiment of the invention is therefore based on a fabric image data set collected from a real production site of the Jiangsu Sunshine Group. Each image has a resolution of 2560 x 1920 with clear feature details, and the images were confirmed by experienced cloth inspectors; each image was then manually annotated with the labeling tool labelImg to generate a corresponding xml information file containing the defect type, position and other information. The data set comprises 4036 ultra-clear fabric images, including 2960 defect-free images and 2676 defect images, and the defect types cover 42 common categories such as hanging bows, hair spots, warp jumping, two-dimensional, knotting, weaving rarity, hole pricking, oil stains, missing warp and missing weft; it is the currently published fabric defect data set with the most complete types, the largest number of images and the highest image resolution. In the embodiment of the invention, the defect classes with more samples in the data set are selected, as shown in FIG. 7, comprising 7 defect classes (hole wiping, rough holes, rarefaction, weft lifting, missing warp, jumping flowers and oil stains) plus normal fabric; since the remaining defect classes have few samples and would affect model accuracy if used for training, the other defect samples are grouped into one class and the normal fabric samples into another class, so that 9 classes of samples in total are used for model training.
There is a data imbalance problem because the data set contains more defect-free images than defect images of each type, and training directly on it would cause the model to overfit. In addition, because of the training hardware limitations (two GTX1080Ti 11G GPUs), the original image resolution is large and training would be computationally expensive, so the training data must be pre-processed. Defect identification requires defect feature details; simply resizing the images would sacrifice detail and reduce the amount of information carried by each image. The embodiment of the invention therefore cuts the images with a sliding window of size 320 x 320 and a sliding step of 160, dividing each 2560 x 1920 original image into 8 x 6 pieces, i.e., 48 small-size images with unchanged image detail. However, a processed defect image may be cut right at the defect edge, or the defect area may occupy only a small proportion of the crop. The embodiment of the invention therefore calculates, from the annotated defect position information, the proportion of the defect area retained in each crop, screens out as valid data only those crops in which the retained defect area exceeds a certain threshold, and selects crops whose ratio exceeds 10% as valid defect images. A sketch of this cropping and screening step follows.
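The following sketch illustrates one way the sliding-window cropping and defect-area screening described above could be implemented; it is an assumption-laden example (NumPy arrays, axis-aligned defect boxes taken from the xml annotations, a 10% area threshold), not the embodiment's exact code. Note that with a 160-pixel step the windows overlap, whereas a 320-pixel step tiles the image into the 8 x 6 grid mentioned above; the step is therefore left as a parameter.

```python
import numpy as np

def sliding_crops(img, win=320, step=320):
    """Cut an H x W x C image into win x win crops with the given step."""
    h, w = img.shape[:2]
    crops = []
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            crops.append(((top, left), img[top:top + win, left:left + win]))
    return crops  # for a 1920 x 2560 image and step 320 this yields 6 x 8 = 48 crops

def retained_defect_ratio(crop_box, defect_box):
    """Fraction of the annotated defect box that falls inside the crop window."""
    (t, l, b, r), (dt, dl, db, dr) = crop_box, defect_box
    ih = max(0, min(b, db) - max(t, dt))
    iw = max(0, min(r, dr) - max(l, dl))
    defect_area = max(1, (db - dt) * (dr - dl))
    return ih * iw / defect_area

def screen_crops(img, defect_box, win=320, step=320, thresh=0.10):
    """Keep only crops that retain more than `thresh` of the defect area."""
    kept = []
    for (top, left), crop in sliding_crops(img, win, step):
        box = (top, left, top + win, left + win)
        if retained_defect_ratio(box, defect_box) > thresh:
            kept.append(crop)
    return kept
```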
In the embodiment of the invention, image enhancement increases the number of samples from the existing training samples by applying various random transformations that generate plausible images, so as to improve the generalization ability of the model. A sketch of such an augmentation configuration is given below.
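As an illustration only, the random transformations listed above map naturally onto a Keras ImageDataGenerator configuration. The parameter values mirror the ranges given in the embodiment, but the generator itself, the directory layout, and the omission of the perspective transform (which ImageDataGenerator does not provide and would require a separate library) are assumptions of this sketch.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random augmentation roughly matching the ranges described above.
# Note: the perspective transform of the embodiment is not available in
# ImageDataGenerator and is omitted from this sketch.
augmenter = ImageDataGenerator(
    rotation_range=90,        # random rotation, 0-90 degrees
    width_shift_range=0.3,    # horizontal translation, up to 30%
    height_shift_range=0.3,   # vertical translation, up to 30%
    shear_range=90,           # shear transformation, 0-90 degrees
    zoom_range=0.1,           # zoom, up to 10%
    horizontal_flip=True,     # random horizontal flip
    vertical_flip=True,       # random vertical flip
    rescale=1.0 / 255,
)

train_gen = augmenter.flow_from_directory(
    "data/train",             # hypothetical directory layout: one sub-folder per class
    target_size=(320, 320),
    batch_size=32,
    class_mode="categorical",
)
```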
As an optional implementation manner, step S50 further includes:
and step S51, acquiring the learning characteristics of the various defect category identification models.
Figure BDA0002289355950000091
Step S52, acquiring the gradient of the backward process.

Wherein the gradient is:

∂loss/∂x_l = (∂loss/∂x_L) · ∂x_L/∂x_l = (∂loss/∂x_L) · (1 + ∂(Σ_{i=l}^{L-1} F(x_i, W_i))/∂x_l). (Formula 2)

In this embodiment, Formula 2 is obtained by the chain rule. The first factor in Formula 2, ∂loss/∂x_L, represents the gradient arriving from the loss function; the 1 in the brackets indicates that the shortcut mechanism can propagate the gradient without loss, while the remaining residual gradient must pass through the weighted layers and is not transmitted directly. The residual gradient will not always equal -1, and even when its value is small, the presence of the 1 prevents the gradient from vanishing, so residual learning is easier.
Step S53, performing a corresponding dimension-raising operation on the skip connection (x) when the dimension of the residual mapping F(x) result differs from the dimension of the skip connection (x).
Wherein the residual structure is calculated as

y_l = h(x_l) + F(x_l, W_l); (Formula 3)

x_{l+1} = f(y_l). (Formula 4)

Here x_l and x_{l+1} respectively represent the input and output of the l-th residual unit, and each residual unit comprises a multi-layer structure; F(x_l, W_l) is the residual function of layer l; h(x_l) = x_l is the identity mapping function of the l-th layer; f(y_l) converts y_l into the input of the next layer through an activation function; i, l and L are positive integers.
In this embodiment, the learning features from a shallow layer l to a deep layer L, i.e., Formula 1, can be obtained from Formulas 3 and 4: with the identity mapping h(x_l) = x_l and an identity activation, x_{l+1} = x_l + F(x_l, W_l), and unrolling this recursion from layer l to layer L gives Formula 1. A minimal sketch of such a residual unit follows.
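For illustration only, a minimal Keras residual (bottleneck) unit implementing Formulas 3 and 4 with an identity shortcut might look as follows. The filter sizes follow the standard ResNet bottleneck design, batch normalization is omitted for brevity, and the input x is assumed to already have 4*filters channels so the identity addition is valid; these are assumptions of the sketch.

```python
from tensorflow.keras import layers

def bottleneck_unit(x, filters):
    """y_l = h(x_l) + F(x_l, W_l);  x_{l+1} = f(y_l), with h the identity mapping."""
    shortcut = x                                                 # h(x_l) = x_l
    y = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    y = layers.Conv2D(4 * filters, 1, padding="same")(y)         # F(x_l, W_l)
    y = layers.Add()([shortcut, y])                              # Formula 3
    return layers.Activation("relu")(y)                          # Formula 4: f is ReLU
```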
As an optional implementation manner, before step S53, the method further includes:
Step S521, performing average pooling on the skip connection (x).
In this embodiment, the last layers of a traditional CNN (convolutional neural network) are fully connected layers, which have a very large number of parameters and easily cause overfitting (as in AlexNet). Unlike a traditional fully connected layer, global average pooling averages each whole feature map, so that every feature map yields one output value. Average pooling saves parameters, greatly reduces the number of network parameters and avoids overfitting; in addition, each feature map corresponds to one output feature, which represents a feature of the output class.
Advantages of using average pooling include:
(1) by enforcing a correspondence between feature maps and categories, the convolution structure is simpler;
(2) no parameters need to be optimized in this layer, so it avoids overfitting;
(3) it aggregates spatial information and is therefore more robust to spatial transformations of the input.
A rough parameter comparison is sketched after this list.
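As a back-of-the-envelope illustration (assuming a 7 x 7 x 2048 ResNet50 feature map and the 9 classes used in this embodiment, both assumptions of the example), the parameter saving of a global-average-pooling head over a flattened fully connected head can be estimated as follows:

```python
# Hypothetical head sizes: 7 x 7 x 2048 feature map, 9 output classes.
feat_h, feat_w, feat_c, classes = 7, 7, 2048, 9

dense_head = feat_h * feat_w * feat_c * classes + classes  # Flatten + Dense
gap_head = feat_c * classes + classes                      # GlobalAveragePooling2D + Dense

print(dense_head)  # 903177 weights
print(gap_head)    # 18441 weights
```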
As an optional implementation manner, after step S50, the method further includes:
and step S54, performing index evaluation on the various defect type identification models. Wherein, the index evaluation adopts Accuracy (Accuracy, Acc) as the evaluation index of the model.
In this embodiment, the classification problem classifies instances into positive and negative classes, and the actual classification result of the model falls into one of four cases: if an instance is positive and the classifier judges it positive, it is a true positive (TP); if the classifier judges it negative, it is a false negative (FN); if an instance is negative and the classifier judges it positive, it is a false positive (FP); if the classifier judges it negative, it is a true negative (TN). The accuracy Acc is therefore calculated as shown in Formula 5:

Acc = (TP + TN) / (TP + TN + FP + FN). (Formula 5)
In addition, the loss value during training is another important index. It is the feedback signal used to evaluate the learned weight tensors, measures whether the current task is being solved, and must be minimized during training. The loss measures the degree of inconsistency between the model's prediction f(x) and the true value Y; it is a non-negative real-valued function, usually written L(Y, f(x)), and the smaller the loss function, the better the robustness of the model. A small worked example of the accuracy metric follows.
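Purely as a worked example of Formula 5, with made-up confusion-matrix counts, accuracy can be computed as:

```python
def acc(tp, tn, fp, fn):
    # Formula 5: Acc = (TP + TN) / (TP + TN + FP + FN)
    return (tp + tn) / (tp + tn + fp + fn)

print(acc(tp=420, tn=460, fp=15, fn=25))  # about 0.9565 on these hypothetical counts
```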
As an optional implementation manner, step S50 further includes:
step S501, monitoring the training loss rate, the verification loss rate, the training accuracy rate and the verification accuracy rate of the various defect type identification models.
In this embodiment, in order to obtain a model with sufficient accuracy and good generalization ability, it is necessary to monitor the training loss and the verification loss, and above all the training accuracy and the verification accuracy, during model training; if the model's performance on the verification data starts to decline, overfitting can be diagnosed.
As an optional implementation manner, step S50 further includes:
step S502, in the training process of the ResNet50 model in continuous 100 rounds of transfer learning, when the verification accuracy reaches the maximum value and tends to be stable, and the loss rate of the verification set reaches the minimum value and tends to be stable, the training is terminated in advance.
In this embodiment, the implementation in the program terminates training early when the verification accuracy has not improved for 100 consecutive training rounds. Through the feedback of the verification process, the hyper-parameters are adjusted repeatedly and different dropout proportions are tried until the model reaches its best performance. The criterion for "no longer improving" is that the verification accuracy has reached its maximum value and stabilized, and the verification-set loss rate has reached its minimum value and stabilized. A sketch of this monitoring and early-stopping setup follows.
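As an illustrative sketch only, the monitoring of step S501 and the early stopping of step S502 map onto standard Keras callbacks. The patience of 100 rounds follows the embodiment, while the generator names (train_gen, val_gen), the checkpoint filename and the epoch budget are assumptions of this example.

```python
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # stop when val_accuracy has not improved for 100 consecutive epochs
    EarlyStopping(monitor="val_accuracy", patience=100, restore_best_weights=True),
    # keep the weights of the best-performing epoch
    ModelCheckpoint("best_defect_model.h5", monitor="val_accuracy", save_best_only=True),
]

history = model.fit(train_gen, validation_data=val_gen,
                    epochs=1000, callbacks=callbacks)

# history.history holds the four monitored curves:
# "loss", "accuracy", "val_loss" and "val_accuracy"
```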
As an optional implementation manner, before step S60, the method further includes:
Step S55, selecting flawless images and defective images at a ratio of 2:1 as the training sample set.
Step S56, from the flawless images and the defective images, selecting 80% as the training set, 10% as the verification set, and 10% as the test set.
In this embodiment, the model training sample set is selected at a ratio of 2:1, namely 10034 flawless images and 5017 defective images. In order to test the training accuracy of the model, 80% of the training sample set is used as the training set, 10% as the verification set and 10% as the test set; a sketch of such a split follows.
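A minimal sketch of such an 80/10/10 split over a list of labeled image paths might look as follows; the fixed seed and the in-memory list of (path, label) pairs are assumptions of the example.

```python
import random

def split_dataset(samples, train=0.8, val=0.1, seed=42):
    """Shuffle and split a list of (path, label) pairs into train/val/test."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train = int(n * train)
    n_val = int(n * val)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

# e.g. 10034 flawless + 5017 defective samples -> about 12040 / 1505 / 1506
```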
As an optional implementation manner, step S50 further includes:
in step S503, a dropout operation is performed on the small-size image.
In this embodiment, dropout operation is employed to prevent overfitting during the training of the multi-defect category identification model.
An embodiment of the present invention further provides a device for detecting a fabric defect, as shown in fig. 2, including: a first obtaining module 201, a removing module 202, a second obtaining module 203, an extracting module 204, a third obtaining module 205 and a detecting module 206, wherein: the first obtaining module 201 is used for obtaining a ResNet50 model; the removal module 202 is used for removing the classifiers of the ResNet50 model; the second obtaining module 203 is used for obtaining the characteristic value of the fabric defect sample image; the extracting module 204 is configured to extract a weight parameter of the ResNet50 model as an initial value; the third obtaining module 205 is configured to perform transfer learning on the ResNet50 model according to the feature value and the initial value, and obtain multiple defect category identification models; the detection module 206 is configured to detect the fabric image to be detected according to the multiple defect category identification models.
Embodiments of the present invention further provide a fabric defect detecting terminal, as shown in fig. 3, which may include a processor 31 and a memory 32, where the processor 31 and the memory 32 may be connected by a bus or by other means, and fig. 3 illustrates the connection by the bus.
The processor 31 may be a Central Processing Unit (CPU). The Processor 31 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, or combinations thereof.
The memory 32, which is a non-transitory computer readable storage medium, may be used for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the fabric defect detection method in the embodiment of the present invention (for example, the first acquiring module 201, the removing module 202, the second acquiring module 203, the extracting module 204, the third acquiring module 205, and the detecting module 206 shown in fig. 2). The processor 31 executes various functional applications and data processing of the processor by executing non-transitory software programs, instructions and modules stored in the memory 32, namely, implements the fabric defect detecting method in the above-described method embodiment.
The memory 32 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 31, and the like. Further, the memory 32 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 32 may optionally include memory located remotely from the processor 31, and these remote memories may be connected to the processor 31 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Said one or more modules are stored in said memory 32 and, when executed by said processor 31, perform a fabric defect detection method as in the embodiment shown in figure 1.
The details of the fabric defect detecting device and the terminal can be understood by referring to the corresponding related description and effects in the embodiment shown in fig. 1, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD) or a Solid State Drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kind described above.
In a specific embodiment, as shown in FIG. 4 to FIG. 6, if the dimension of the residual mapping F(x) result differs from the dimension of the skip connection (x), the two cannot be added, so the dimension of x must first be raised to match. A 1 x 1 convolution kernel with stride 2 is therefore usually added on the skip connection (x), and x is passed through this dimension-raising convolution so that its output matches the output of the convolution block.
Repeatedly stacking residual modules yields network structures of different depths. After trying networks of several depths such as ResNet18, ResNet34, ResNet50, ResNet101 and ResNet152, and weighing experimental effect against computational cost, the 50-layer ResNet50 network is selected; the standard ResNet structures with different numbers of layers are shown in FIG. 5.
As can be seen from FIG. 5, the first layer of every ResNet network is a 7x7 convolutional layer with a large receptive field, which is sufficient for extracting features from the images in the ImageNet database. In the embodiment of the invention, however, fabric defects are numerous and mostly very small, and more effective features must be extracted for accurate defect classification, so the first 7x7 convolutional layer is improved when designing the network. Three stacked 3x3 convolutional layers replace the single 7x7 convolutional layer. On the one hand, the three convolutional layers use more nonlinear activation functions, which makes the decision function more discriminative; on the other hand, the number of parameters is effectively reduced: assuming the input and output feature maps of the convolutional layer have the same channel number Z, the three 3x3 layers have 3 x (3 x 3 x Z) x Z = 27Z^2 parameters, while one 7x7 layer has (7 x 7 x Z) x Z = 49Z^2 parameters. The first layer of the network is thus improved without changing the initial receptive field, which brings better performance to the defect detection model.
In addition, in the defect detection problem a defect occupies a small part of the whole image, so the proportion of useful information is small. To avoid losing defect information, the embodiment of the invention adds a 2x2 average pooling layer (avg-pool) before the stride-2 1x1 convolution kernel used for down-sampling in the residual module, so that spatial information is aggregated first. The average pooling layer has no parameters, so the overall number of parameters is unchanged and this layer cannot overfit; information loss is thus effectively avoided without increasing the computation. A sketch of these two structural changes follows.
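For illustration only, the two modifications described above (a stem of three stacked 3x3 convolutions and an average-pooled down-sampling shortcut) could be sketched in Keras as follows; the strides, filter counts and layer ordering are assumptions of this sketch rather than the exact network of the embodiment.

```python
from tensorflow.keras import layers

def stacked_3x3_stem(x, filters=64):
    """Three 3x3 convolutions replacing the single 7x7 stem convolution.
    Parameter count: 3 * (3*3*Z*Z) = 27*Z^2 vs (7*7*Z*Z) = 49*Z^2."""
    for stride in (2, 1, 1):  # first conv keeps the stride-2 down-sampling of the original stem
        x = layers.Conv2D(filters, 3, strides=stride, padding="same", activation="relu")(x)
    return x

def avgpool_shortcut(x, out_channels):
    """Down-sampling shortcut: 2x2 average pooling, then a 1x1 projection,
    instead of a bare stride-2 1x1 convolution that discards spatial information."""
    s = layers.AveragePooling2D(pool_size=2, strides=2)(x)
    return layers.Conv2D(out_channels, 1, padding="same")(s)
```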
In an exemplary embodiment, some images of the defect-type data set are shown in FIG. 7. Partial images of the common hole defect after data enhancement are shown as k, l, m and n in FIG. 8.
In this embodiment, all the models referred to above are trained, wherein the loss curve and accuracy curve of the standard ResNet50 network model training process are shown in FIG. 9; the loss curve and the accuracy curve of the multi-defect type identification model training process in the embodiment of the invention are shown in FIG. 10.
Using the trained models, each model and the reproduced traditional detection algorithms are experimentally verified on the test set of the training samples; the experimental results are shown in Table 1.
TABLE 1 comparison of test results on training data sets for mainstream Algorithm models and improved models in the examples of the invention
[Table 1 is provided as an image in the original publication.]
From the experimental data in Table 1, the improved ResNet50 network (the multiple defect category identification model) has the fewest weight parameters among the tested models; its training accuracy and verification accuracy are not the best, but its test accuracy is the highest and its test speed is the fastest. Compared with the standard ResNet50 network, the improved ResNet50 network raises the accuracy by 4.2%; compared with the ResNet152 network its accuracy is 0.13% lower, but the network weight is almost halved and the speed is nearly 2 times higher. The defect detection method based on HOG features achieves a test accuracy of only 70.51% on this data set, mainly because that algorithm cuts the original image and processes the image blocks separately without detecting the whole image, so some cut edges are detected as defects. The unsupervised defect detection method based on the LBP feature, which represents the local texture of the fabric and is extracted from the whole fabric image, reaches a detection accuracy of only 69.95% when reproduced in the embodiment of the invention; the main reason is that the algorithm is limited by the size of the sub-window used to extract the LBP feature, which should be about twice the minimum period of the fabric texture, while the period of the tested fabric defects lies between 6 and 11 pixels, so its experimental samples are not representative of the defect sizes found on an actual production site. In contrast, the defect sizes of the test samples in the embodiment of the invention vary, no manual size screening is needed, and the data are representative of industrial field defects. The detection time of the two traditional defect detection algorithms depends on the number of detected defects and their pixel sizes and cannot be effectively measured, whereas the accuracy of the algorithm proposed in the embodiment of the invention is 25.81% and 26.87% higher than the two traditional algorithms respectively; at the same time its detection speed is unaffected by defect size and remains constant and fast.
In conclusion, the fabric defect detection algorithm based on transfer learning and the improved ResNet50 network has obvious advantages over traditional detection algorithms, and compared with common standard deep learning models it achieves higher detection speed and accuracy, making it better suited to an actual industrial production and detection environment.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A method of detecting fabric defects, comprising:
acquiring a ResNet50 model;
replacing a first classifier of the ResNet50 model with a second classifier;
acquiring a characteristic value of a fabric defect sample image;
extracting the weight parameter of the ResNet50 model as an initial value;
performing transfer learning on the ResNet50 model according to the characteristic value and the initial value to obtain various defect category identification models;
and detecting the fabric image to be detected according to the various defect category identification models.
2. A fabric defect detection method according to claim 1 and further comprising, prior to obtaining characteristic values for fabric defect sample images:
acquiring the fabric defect sample image at a resolution specification of 2560 × 1920;
cutting the fabric defect sample image by a sliding window with the resolution of 320 × 320 and a sliding step size of 160 to obtain 8 × 6 small-size images;
performing image enhancement on the small-size image;
wherein the image enhancement comprises:
randomly rotating the image, wherein the variation parameter range is 0-90 degrees;
horizontally or vertically translating, wherein the variation parameter range is 0-30%;
randomly performing a shear transformation, wherein the variation parameter range is 0-90 degrees;
perspective transformation, wherein the range of the variation parameter is 0-20%;
zooming, wherein the variation parameter range is 0-10%;
and randomly turning horizontally or vertically.
3. A fabric defect detection method according to claim 1, wherein said ResNet50 model is transfer learned based on said eigenvalues and said initial values to obtain a plurality of defect category identification models, further comprising:
acquiring the learning features of the various defect category identification models:

x_L = x_l + Σ_{i=l}^{L-1} F(x_i, W_i);

acquiring the gradient of the backward process:

∂loss/∂x_l = (∂loss/∂x_L) · (1 + ∂(Σ_{i=l}^{L-1} F(x_i, W_i))/∂x_l);

according to the difference between the dimension of the residual mapping F(x) result and the dimension of the skip connection (x), performing a corresponding dimension-raising operation on the skip connection (x);

wherein the residual structure is calculated as

y_l = h(x_l) + F(x_l, W_l);

x_{l+1} = f(y_l);

wherein x_l and x_{l+1} respectively represent the input and output of the l-th residual unit, each of said residual units comprising a multi-layer structure; F(x_l, W_l) is the residual function of layer l; h(x_l) = x_l is the identity mapping function of the l-th layer; f(y_l) converts y_l into the input of the next layer through an activation function; i, l and L are positive integers.
4. A fabric defect detection method according to claim 3, wherein before performing the corresponding dimension-raising operation on the skip connection (x) according to the difference between the dimension of the residual mapping F(x) result and the dimension of the skip connection (x), the method further comprises:
performing average pooling on the skip connection (x).
5. A fabric defect detection method according to claim 1 further comprising, after obtaining a plurality of defect category identification models:
performing index evaluation on the various defect category identification models; wherein the index evaluation uses accuracy (Acc) as the evaluation index of the model.
6. A fabric defect detection method according to claim 1, wherein said ResNet50 model is transfer learned based on said eigenvalues and said initial values to obtain a plurality of defect category identification models, further comprising:
and monitoring the training loss rate, the verification loss rate, the training accuracy rate and the verification accuracy rate of the various defect type identification models.
7. A fabric defect detection method according to claim 6, wherein said ResNet50 model is transfer learned based on said eigenvalues and said initial values to obtain a plurality of defect category identification models, further comprising:
in the training process of the ResNet50 model in continuous 100 rounds of transfer learning, when the verification accuracy reaches the maximum value and tends to be stable, and the loss rate of the verification set reaches the minimum value and tends to be stable, the training is terminated early.
8. A fabric defect detection method according to claim 1 or 2, further comprising, prior to obtaining characteristic values for fabric defect sample images:
selecting flawless images and defective images at a ratio of 2:1 as a training sample set;
from the flawless images and the defective images, selecting 80% as a training set, 10% as a verification set, and 10% as a test set.
9. A fabric defect detection method according to claim 2, wherein said ResNet50 model is transfer learned based on said eigenvalues and said initial values to obtain a plurality of defect category identification models, further comprising:
and performing dropout operation on the small-size image.
10. A fabric defect detecting apparatus, comprising:
the first acquisition module is used for acquiring a ResNet50 model;
a removal module for removing the classifiers of the ResNet50 model;
the second acquisition module is used for acquiring the characteristic value of the fabric defect sample image;
the extraction module is used for extracting the weight parameter of the ResNet50 model as an initial value;
a third obtaining module, configured to perform transfer learning on the ResNet50 model according to the feature value and the initial value, and obtain multiple defect category identification models;
and the detection module is used for detecting the fabric image to be detected according to the various defect category identification models.


