CN108520114B - Textile fabric defect detection model and training method and application thereof - Google Patents


Publication number
CN108520114B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810238038.0A
Other languages
Chinese (zh)
Other versions
CN108520114A (en)
Inventor
孙志刚
禹万泓
江湧
王卓
肖力
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201810238038.0A priority Critical patent/CN108520114B/en
Publication of CN108520114A publication Critical patent/CN108520114A/en
Application granted granted Critical
Publication of CN108520114B publication Critical patent/CN108520114B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 2201/00 Features of devices classified in G01N 21/00
    • G01N 2201/12 Circuits of general importance; Signal processing
    • G01N 2201/129 Using chemometrical methods
    • G01N 2201/1296 Using chemometrical methods using neural networks


Abstract

The invention discloses a textile fabric defect detection model, a training method therefor, and an application thereof. The training method comprises: collecting sample textile fabric defect images, establishing a data set, and building the textile fabric defect detection model based on YOLOv2; applying dimension clustering before training, and performing direct coordinate prediction, loss value calculation and back propagation during training to obtain the current network weight parameters; updating the model's network weight parameters with the current ones, then performing the weight calculation and updating repeatedly over the training set to obtain the optimal network weight parameters and hence the trained textile cloth defect detection model. In application, textile cloth images are collected in real time and detected with the trained model to obtain defect detection results for the textile cloth images. The textile fabric defect detection model offers high detection accuracy, strong real-time performance and good universality.

Description

Textile fabric defect detection model and training method and application thereof
Technical Field
The invention belongs to the technical field of deep learning and computer vision, and particularly relates to a textile cloth defect detection model and a training method and application thereof.
Background
Throughout the production and development of the world textile industry, quality inspection of textile cloth has always been a very important link. Traditionally, however, for lack of a good automatic inspection tool, most schemes still rely on manual visual judgment. This depends on the skill level of the workers on the one hand and, on the other, suffers because workers fatigue easily over long shifts, so accuracy is difficult to guarantee. As the output and speed of textile production rapidly increase, manual visual inspection is ever less suited to the needs of the modern textile industry, and a method for automatic, accurate and fast quality or defect detection is urgently needed. At present, domestic textile fabric defect detection methods include statistics-based, frequency-domain-based, model-based and machine-vision-based methods; but because textile fabrics exhibit many defect types and complex textures, these methods are computationally heavy and slow, and can often detect only certain specific defect types.
Therefore, conventional methods for detecting textile cloth defects suffer from the technical problems of low accuracy, poor real-time performance and lack of universality.
Disclosure of Invention
Aiming at the defects or improvement requirements of the prior art, the invention provides a textile cloth defect detection model, a training method and application thereof, so that the technical problems of low accuracy, poor real-time performance and no universality of the existing textile cloth defect detection method are solved.
To achieve the above object, according to one aspect of the present invention, there is provided a training method of a textile cloth defect detection model, comprising:
(1) collecting a sample textile fabric defect image, marking the sample textile fabric defect image to obtain defect types and real frames containing defects, further establishing a data set, and establishing a textile fabric defect detection model based on YOLOv 2;
(2) carrying out dimension clustering on a real frame in a data set to obtain a fixed frame, applying the fixed frame to a textile cloth defect detection model, obtaining a prediction frame by utilizing direct coordinate prediction, carrying out loss value calculation on the basis of the prediction frame by utilizing a loss function to obtain a prediction error, and carrying out back propagation by utilizing the prediction error to obtain a current network weight parameter;
(3) and updating the network weight parameters of the textile cloth defect detection model by using the current network weight parameters, and then performing multiple times of network weight calculation and updating by using the training set to obtain the optimal network weight parameters, thereby obtaining the trained textile cloth defect detection model.
Further, the sample textile cloth defect image includes: broken warp image, broken weft image, broken hole image, foreign matter image, oil stain image and crease image.
Further, the textile cloth defect detection model is based on the YOLOv2 framework and comprises 32 layers in total: 23 Convolutional layers Conv1-Conv23, 5 Maxpool layers Max1-Max5, two Route layers Route1-Route2, one Reorg layer Reorg1 and one Softmax layer Softmax1, cascaded in the order Conv1, Max1, Conv2, Max2, Conv3-Conv5, Max3, Conv6-Conv8, Max4, Conv9-Conv13, Max5, Conv14-Conv20, Route1, Conv21, Reorg1, Route2, Conv22, Conv23, Softmax1.
Further, Conv1-Conv22 in the textile fabric defect detection model perform batch normalization before convolution and use a Leaky ReLU activation function after convolution; Conv23 does not perform batch normalization before convolution and uses a linear activation function after convolution.
Further, the textile cloth defect detection model is trained in a multi-scale mode in the training process.
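In YOLOv2, multi-scale training typically means periodically resizing the network input among multiples of the 32-pixel network stride; the YOLOv2 reference implementation draws a new size from 320-608 every 10 batches. A minimal sketch of that size-selection rule (function name and defaults are illustrative, not from the patent):

```python
import random

def pick_input_size(batch_idx, period=10, low=320, high=608, stride=32):
    """Choose the square network input size for this batch; a new size is
    drawn once per `period` batches from the multiples of `stride` in
    [low, high], as in YOLOv2 multi-scale training."""
    sizes = list(range(low, high + stride, stride))  # 320, 352, ..., 608
    rng = random.Random(batch_idx // period)         # same size within a period
    return rng.choice(sizes)
```

Because the seed is derived from `batch_idx // period`, all batches in the same 10-batch window receive the same input size, and the size changes at period boundaries.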
Further, the step (2) comprises:
(2-1) performing dimension clustering on the real frames in the data set to obtain cluster frames, using the intersection-over-union between cluster frame and real frame to obtain the distance metric d(box, centroid) = 1 − IOU(box, centroid), and obtaining the width and height of the fixed frames when the deviation of the distance metric is less than or equal to the metric threshold of 10⁻⁶;
(2-2) applying the fixed frames to the textile fabric defect detection model and, after obtaining the predicted-center relative parameters and the width and height relative parameters from the fixed-frame width and height, obtaining the center coordinates and the width and height of the prediction frame by direct coordinate prediction;
(2-3) performing loss calculation with the loss function on the basis of the prediction-frame center coordinates and width and height to obtain the prediction error, and back-propagating the prediction error to obtain the current network weight parameters.
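The dimension-clustering step (2-1) can be sketched as k-means over the real-frame (width, height) pairs, with distance d(box, centroid) = 1 − IOU(box, centroid) and the IOU computed with both boxes aligned at a common corner, as is usual for anchor clustering. Function names and the termination rule are illustrative; the metric threshold 10⁻⁶ appears as `tol`:

```python
import random

def iou_wh(a, b):
    """IOU of two boxes (w, h) anchored at the same corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, tol=1e-6, seed=0, iters=1000):
    """k-means on (w, h) pairs using d = 1 - IOU(box, centroid); stops
    when every centroid moves by less than `tol` in the same metric."""
    rng = random.Random(seed)
    centroids = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            # minimal d is maximal IOU
            j = max(range(k), key=lambda c: iou_wh(b, centroids[c]))
            clusters[j].append(b)
        new = []
        for c, cl in zip(centroids, clusters):
            if not cl:
                new.append(c)  # keep an empty cluster's centroid
                continue
            new.append((sum(b[0] for b in cl) / len(cl),
                        sum(b[1] for b in cl) / len(cl)))
        shift = max(1 - iou_wh(c, n) for c, n in zip(centroids, new))
        centroids = new
        if shift <= tol:
            break
    return centroids
```

The resulting k centroids give the fixed-frame (anchor) widths and heights applied to the model.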
Further, the direct coordinate prediction is:

bx = σ(tx) + cx
by = σ(ty) + cy
bw = pw · e^tw
bh = ph · e^th

wherein (bx, by) are the center coordinates of the prediction frame, bw and bh its width and height, (tx, ty) the predicted center relative parameters, tw and th the predicted width and height relative parameters, σ(tx) and σ(ty) the horizontal and vertical distances of the prediction-frame center from the upper-left corner of the cell in which it lies, cx and cy the horizontal and vertical distances of the cell containing the fixed-frame center from the upper-left corner of the sample textile fabric defect image, and pw and ph the width and height of the fixed frame.
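The direct coordinate prediction, decoded for a single prediction frame, can be sketched as follows (a minimal illustration; all quantities are in grid-cell units):

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """YOLOv2 direct coordinate prediction: map raw network outputs
    (tx, ty, tw, th) plus the cell offset (cx, cy) and the fixed-frame
    (anchor) size (pw, ph) to a box center and size."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    bx = sigmoid(tx) + cx   # sigmoid keeps the center inside its cell
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)  # anchor width scaled by exp of the raw output
    bh = ph * math.exp(th)
    return bx, by, bw, bh
```

For example, raw outputs of zero place the center exactly half a cell from the cell's upper-left corner and leave the fixed frame's size unchanged.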
Further, the loss function is:

loss = λnoobj · Σi=0..l.w·l.h Σj=0..l.n 1ij^noobj (Ci − Ĉi)² + λobj · Σi Σj 1ij^obj (Ci − Ĉi)²
     + λclass · Σi Σj 1ij^obj Σc∈classes (pi(c) − p̂i(c))²
     + λcoord · Σi Σj 1ij^obj [(xi − x̂i)² + (yi − ŷi)² + (wi − ŵi)² + (hi − ĥi)²]
     + Σi Σj 1ij^noobj [(pjx − x̂j)² + (pjy − ŷj)² + (pjw − ŵj)² + (pjh − ĥj)²]

where loss is the prediction error. The first line represents the confidence loss for the grids containing defects and those not: Ĉi is the predicted confidence that the ith grid contains a defect; Ci indicates whether there actually is a defect in the ith grid and is 1 or 0; 1ij^noobj traverses, among the j prediction frames of the i grids, those containing no defect, and 1ij^obj traverses those containing a defect. The second line formulates the loss and gradient of the class prediction: p̂i(c) is the class value predicted for the ith grid and pi(c) the true class value of the ith grid. The third line formulates the bounding-box information gradient of the prediction frame: wi and hi are the width and height of the real frame in the ith grid, ŵi and ĥi the width and height of the prediction frame in the ith grid, and (xi, yi) and (x̂i, ŷi) are the center coordinates of the real frame and of the prediction frame in the ith grid. The fourth line represents the gradient of the prediction frames without defects: (pjx, pjy) are the center coordinates and pjw and pjh the width and height of the jth defect-free prediction frame. l.w and l.h are both 13, l.n is 5, λnoobj = 1, λobj = 5, λclass = 1, λcoord = 1.
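A simplified per-cell sketch of the first three lines of this loss (the prior-matching fourth line and the double sum over anchor boxes are omitted for brevity; the dictionary field names are illustrative, not from the patent):

```python
def yolo_loss(preds, truths,
              lambda_noobj=1.0, lambda_obj=5.0,
              lambda_class=1.0, lambda_coord=1.0):
    """Sum-of-squares loss over grid cells, mirroring the confidence,
    class and coordinate lines of the loss function above. Each element
    of `preds`/`truths` describes one cell: confidence "C", class vector
    "p", box (x, y, w, h); truths also carry an "obj" flag marking cells
    that contain a defect."""
    loss = 0.0
    for pred, truth in zip(preds, truths):
        conf_err = (truth["C"] - pred["C"]) ** 2
        if truth["obj"]:
            loss += lambda_obj * conf_err                     # line 1, obj
            loss += lambda_class * sum(                       # line 2
                (t - q) ** 2 for t, q in zip(truth["p"], pred["p"]))
            loss += lambda_coord * sum(                       # line 3
                (t - q) ** 2 for t, q in zip(truth["box"], pred["box"]))
        else:
            loss += lambda_noobj * conf_err                   # line 1, noobj
    return loss
```

A perfectly predicted defect cell contributes zero, while a defect-free cell is penalized only through its confidence term.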
According to another aspect of the present invention, there is provided a textile fabric defect detecting model trained by the above-described training method for a textile fabric defect detecting model.
According to another aspect of the present invention there is provided an application of a textile fabric defect detection model, comprising: collecting textile cloth images in real time, and detecting them with the trained textile cloth defect detection model to obtain defect detection results for the textile cloth images.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) The invention provides a training method for a textile cloth defect detection model that builds the model on the deep-learning framework YOLOv2, extracts and fuses image features through multi-layer convolution operations, and optimizes the network with fixed frames, dimension clustering, direct coordinate prediction, multi-scale training and batch normalization, thereby improving the training effect and allowing data such as accuracy and loss values to be monitored in real time during training.
(2) When the trained textile cloth defect detection model is used for defect detection, common defects on textile cloth such as broken warp, broken weft, holes, foreign matter, creases and oil stains are detected effectively, improving the accuracy, real-time performance and universality of the detection method. Detection is also fast, each picture taking only 12.5 ms, and the precision is high, reaching above 96%.
(3) Instead of the traditional Euclidean distance function, the method uses the intersection-over-union between the cluster frame and the real frame to obtain the distance metric d(box, centroid) = 1 − IOU(box, centroid), so the fixed-frame data obtained are more accurate, which improves the accuracy of the subsequently trained model and hence of defect detection.
(4) The convolutional layers of the textile fabric defect detection model extract the edge features of the image through convolution operations; the more convolutional layers, the more accurate the obtained image features, but too many layers increase the computation and can even cause overfitting. Setting 23 convolutional layers therefore guarantees accuracy while keeping the computation small. The model further includes Maxpool, Route, Reorg and Softmax layers. The Maxpool layers effectively reduce the data volume through downsampling while retaining the useful features of the image. A Route layer, also called a routing layer, splices the features of several layers together, which helps extract and fuse multi-layer features. The Reorg layer matches the size of the input layer to the size of the output feature map, thereby adjusting dimensions.
Drawings
FIG. 1 is a flow chart of a textile fabric defect detection model according to an embodiment of the present invention;
FIG. 2(a) is a variation curve of loss provided in example 1 of the present invention;
FIG. 2(b) is a variation curve of IOU provided in embodiment 1 of the present invention;
FIG. 3(a) is a diagram showing the effect of detecting a warp broken by the textile fabric defect detecting model provided in example 1 of the present invention;
FIG. 3(b) is a diagram showing the effect of detecting broken picks by the textile fabric defect detecting model provided in example 1 of the present invention;
FIG. 3(c) is a diagram showing the detection effect of the defect detection model of textile fabric on holes provided in example 1 of the present invention;
FIG. 3(d) is a diagram showing the effect of detecting a foreign matter in the woven fabric defect detecting model according to example 1 of the present invention;
fig. 3(e) is a diagram illustrating an oil stain detection effect of the textile fabric defect detection model provided in embodiment 1 of the present invention;
FIG. 3(f) is a graph showing the effect of detecting creases by the woven fabric defect detection model provided in example 1 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, an application of a textile cloth defect detection model includes:
(1) Collecting sample textile fabric defect images, comprising: broken warp, broken weft, hole, foreign matter, oil stain and crease images. Marking the sample textile fabric defect images to obtain the defect types and the real frames containing the defects, then establishing the data set and building the textile fabric defect detection model based on YOLOv2. The model is based on the YOLOv2 framework and comprises 32 layers in total: 23 Convolutional layers Conv1-Conv23, 5 Maxpool layers Max1-Max5, two Route layers Route1-Route2, one Reorg layer Reorg1 and one Softmax layer Softmax1, cascaded in the order Conv1, Max1, Conv2, Max2, Conv3-Conv5, Max3, Conv6-Conv8, Max4, Conv9-Conv13, Max5, Conv14-Conv20, Route1, Conv21, Reorg1, Route2, Conv22, Conv23, Softmax1. Conv1-Conv22 perform batch normalization before convolution and use a Leaky ReLU activation function after convolution; Conv23 does not perform batch normalization before convolution and uses a linear activation function after convolution.
(2) Carrying out dimension clustering on a real frame in a data set to obtain a fixed frame, applying the fixed frame to a textile cloth defect detection model, obtaining a prediction frame by utilizing direct coordinate prediction, carrying out loss value calculation on the basis of the prediction frame by utilizing a loss function to obtain a prediction error, and carrying out back propagation by utilizing the prediction error to obtain a current network weight parameter; the method comprises the following steps:
(2-1) Performing dimension clustering on the real frames in the data set to obtain cluster frames, using the intersection-over-union between cluster frame and real frame to obtain the distance metric d(box, centroid) = 1 − IOU(box, centroid), and obtaining the width and height of the fixed frames when the deviation of the distance metric is less than or equal to the metric threshold of 10⁻⁶.
(2-2) Applying the fixed frames to the textile fabric defect detection model and, after obtaining the predicted-center relative parameters and the width and height relative parameters from the fixed-frame width and height, obtaining the center coordinates and the width and height of the prediction frame by direct coordinate prediction.
(2-3) Performing loss calculation with the loss function on the basis of the prediction-frame center coordinates and width and height to obtain the prediction error, and back-propagating the prediction error to obtain the current network weight parameters.
The direct coordinate prediction is:

bx = σ(tx) + cx
by = σ(ty) + cy
bw = pw · e^tw
bh = ph · e^th

wherein (bx, by) are the center coordinates of the prediction frame, bw and bh its width and height, (tx, ty) the predicted center relative parameters, tw and th the predicted width and height relative parameters, σ(tx) and σ(ty) the horizontal and vertical distances of the prediction-frame center from the upper-left corner of the cell in which it lies, cx and cy the horizontal and vertical distances of the cell containing the fixed-frame center from the upper-left corner of the sample textile fabric defect image, and pw and ph the width and height of the fixed frame.
The loss function is:

loss = λnoobj · Σi=0..l.w·l.h Σj=0..l.n 1ij^noobj (Ci − Ĉi)² + λobj · Σi Σj 1ij^obj (Ci − Ĉi)²
     + λclass · Σi Σj 1ij^obj Σc∈classes (pi(c) − p̂i(c))²
     + λcoord · Σi Σj 1ij^obj [(xi − x̂i)² + (yi − ŷi)² + (wi − ŵi)² + (hi − ĥi)²]
     + Σi Σj 1ij^noobj [(pjx − x̂j)² + (pjy − ŷj)² + (pjw − ŵj)² + (pjh − ĥj)²]

where loss is the prediction error. The first line represents the confidence loss for the grids containing defects and those not: Ĉi is the predicted confidence that the ith grid contains a defect; Ci indicates whether there actually is a defect in the ith grid and is 1 or 0; 1ij^noobj traverses, among the j prediction frames of the i grids, those containing no defect, and 1ij^obj traverses those containing a defect. The second line formulates the loss and gradient of the class prediction: p̂i(c) is the class value predicted for the ith grid and pi(c) the true class value of the ith grid. The third line formulates the bounding-box information gradient of the prediction frame: wi and hi are the width and height of the real frame in the ith grid, ŵi and ĥi the width and height of the prediction frame in the ith grid, and (xi, yi) and (x̂i, ŷi) are the center coordinates of the real frame and of the prediction frame in the ith grid. The fourth line exists only for the first 12800 samples and the term is removed once that number of samples has been processed; it represents the gradient of the prediction frames without defects: (pjx, pjy) are the center coordinates and pjw and pjh the width and height of the jth defect-free prediction frame. l.w and l.h are both 13, l.n is 5, λnoobj = 1, λobj = 5, λclass = 1, λcoord = 1.
(3) Updating the network weight parameters of the textile cloth defect detection model by using the current network weight parameters, and then performing multiple times of network weight calculation and updating by using the training set to obtain optimal network weight parameters, namely obtaining the trained textile cloth defect detection model; and the textile cloth defect detection model adopts multi-scale training in the training process.
(4) And collecting textile cloth images in real time, and detecting by using the trained textile cloth defect detection model to obtain defect detection results of the textile cloth images.
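The real-time application step can be sketched as a generic acquisition-and-inference loop. The `frames` iterable, the `detect` callable (standing in for the trained model's inference) and the `on_defect` callback are all hypothetical stand-ins, not part of the patent:

```python
def detection_loop(frames, detect, on_defect):
    """Run trained-model inference over a stream of textile cloth images
    and report every detected defect. `detect(frame)` is assumed to
    return a list of (class_name, box, confidence) tuples; `on_defect`
    is called once per detection. Returns per-class detection counts."""
    counts = {}
    for frame in frames:
        for cls, box, conf in detect(frame):
            counts[cls] = counts.get(cls, 0) + 1
            on_defect(frame, cls, box, conf)
    return counts
```

In practice `frames` would be fed by the industrial camera and `detect` by the trained YOLOv2-based network; any camera SDK and inference runtime could supply them.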
Example 1
The convolutional neural network used by the invention can in principle accept input images of any pixel size; but an image with too many pixels causes small defects to be distorted after resizing, while one with too few pixels prevents some features from being extracted, and the reference resolution given officially for YOLOv2 is 416 × 416. Considering the largest cloth area that can be identified at once and the sufficiency of feature extraction, embodiment 1 of the invention adopts a pixel size of 1216 × 1020.
The application of the textile fabric defect detection model comprises:
(1) Using an industrial camera, collect and select images containing the six defect types of broken warp, broken weft, holes, foreign matter, oil stains and creases as sample textile cloth defect images; these are three-channel color images in jpg format with a resolution of 96 dpi and 1216 × 1020 pixels. Create two new folders, a training-set folder train and a test-set folder test, for storing the training-set and test-set images respectively. Randomly select equal numbers of images of the six defect types and store them in the training-set folder train; in this example 1000 images of each of the six defect types are selected, 6000 in total. Then randomly select images of the six defect types amounting to 10% of the training-set folder and store them in the test-set folder test; in this embodiment 100 images of each defect type are selected, 600 in total. Finally, create a new folder named JPEGImages, copy all images in the folders train and test into it, and number them, saving each as XXXX.jpg, where XXXX is a four-digit number; the training-set images are numbered 0000-5999 and the test-set images 6000-6599, and the numbers of training-set and test-set images must be recorded during this process. Create a new folder Annotations; then label the images with image-labeling software (the labelImg software is chosen in this embodiment), marking the defect type and defect area of each image in the folder JPEGImages; labelImg generates a label file for each image number XXXX, which is stored in the folder Annotations. Then create a folder ImageSets and in it a subdirectory Main, containing a train.txt document and a val.txt document; train.txt stores the paths of all training-set pictures and val.txt the paths of all test-set pictures, in the format YYYY/JPEGImages/XXXX.jpg, where YYYY represents the path of the JPEGImages folder and XXXX the image number. All data in the three folders JPEGImages, Annotations and ImageSets constitute the VOC data set required by the invention.
The textile cloth defect detection model is based on the YOLOv2 framework and comprises 32 layers in total: 23 Convolutional layers Conv1-Conv23, 5 Maxpool layers Max1-Max5, two Route layers Route1-Route2, one Reorg layer Reorg1 and one Softmax layer Softmax1, cascaded in the order Conv1, Max1, Conv2, Max2, Conv3-Conv5, Max3, Conv6-Conv8, Max4, Conv9-Conv13, Max5, Conv14-Conv20, Route1, Conv21, Reorg1, Route2, Conv22, Conv23, Softmax1. Conv1-Conv22 perform batch normalization before convolution and use a Leaky ReLU activation function after convolution; Conv23 does not perform batch normalization before convolution and uses a linear activation function after convolution.
At level Conv 1: the input image size is 416 × 416 × 3, the convolution kernel is 3 × 3, 32 in total, and the step size is 1;
at level Max 1: the input image size is 416 × 416 × 32, the kernel size is 2 × 2, and the step size is 2;
at level Conv 2: the input image size is 208 × 208 × 32, the convolution kernel is 3 × 3, 64 in total, and the step size is 1;
at level Max 2: the input image size is 208 × 208 × 64, the kernel size is 2 × 2, and the step size is 2;
at level Conv 3: the input image size is 104 × 104 × 64, the convolution kernel is 3 × 3, 128 in total, and the step size is 1;
at level Conv 4: the input image size is 104 × 104 × 128, the convolution kernel is 1 × 1, 64 in total, and the step size is 1;
at level Conv 5: the input image size is 104 × 104 × 64, the convolution kernel is 3 × 3, 128 in total, and the step size is 1;
at level Max 3: the input image size is 104 × 104 × 128, the kernel size is 2 × 2, and the step size is 2;
at level Conv 6: the size of the input image is 52 × 52 × 128, the convolution kernel is 3 × 3, 256 in total, and the step size is 1;
at level Conv 7: the size of the input image is 52 × 52 × 256, the convolution kernel is 1 × 1, 128 in total, and the step size is 1;
at level Conv 8: the size of the input image is 52 × 52 × 128, the convolution kernel is 3 × 3, 256 in total, and the step size is 1;
at level Max 4: the input image size is 52 × 52 × 256, the kernel size is 2 × 2, and the step size is 2;
at level Conv 9: the size of the input image is 26 × 26 × 256, the convolution kernel is 3 × 3, 512 in total, and the step size is 1;
at level Conv 10: the input image size is 26 × 26 × 512, the convolution kernel is 1 × 1, 256 in total, and the step size is 1;
at level Conv 11: the size of the input image is 26 × 26 × 256, the convolution kernel is 3 × 3, 512 in total, and the step size is 1;
at level Conv 12: the input image size is 26 × 26 × 512, the convolution kernel is 1 × 1, 256 in total, and the step size is 1;
at level Conv 13: the size of the input image is 26 × 26 × 256, the convolution kernel is 3 × 3, 512 in total, and the step size is 1;
at level Max 5: the input image size is 26 × 26 × 512, the kernel size is 2 × 2, and the step size is 2;
at level Conv 14: the size of the input image is 13 × 13 × 512, the convolution kernel is 3 × 3, 1024 in total, and the step size is 1;
at level Conv 15: the size of the input image is 13 × 13 × 1024, the convolution kernel is 1 × 1, 512 in total, and the step size is 1;
at level Conv 16: the size of the input image is 13 × 13 × 512, the convolution kernel is 3 × 3, 1024 in total, and the step size is 1;
at level Conv 17: the size of the input image is 13 × 13 × 1024, the convolution kernel is 1 × 1, 512 in total, and the step size is 1;
at level Conv 18: the size of the input image is 13 × 13 × 512, the convolution kernel is 3 × 3, 1024 in total, and the step size is 1;
at level Conv 19: the size of the input image is 13 × 13 × 1024, the convolution kernel is 3 × 3, 1024 in total, and the step size is 1;
at level Conv 20: the size of the input image is 13 × 13 × 1024, the convolution kernel is 3 × 3, 1024 in total, and the step size is 1;
at Route1 level: features of the Conv13 layer output are combined;
at level Conv 21: the input image size is 26 × 26 × 512, the convolution kernel is 1 × 1, 64 in total, and the step size is 1;
at the Reorg1 level: the feature map obtained from the upper layer is reorganized with a stride of 2, the input image being 26 × 26 × 64 and the output image 13 × 13 × 256;
at Route2 level: combining the feature map output by the Reorg1 and the feature map output by the Conv20 layer, wherein the size of an output image is 13 × 13 × 1280;
at level Conv 22: the size of the input image is 13 × 13 × 1280, the convolution kernel is 3 × 3, 1024 in total, and the step size is 1;
at level Conv 23: the input image size is 13 × 13 × 1024, the convolution kernel is 1 × 1, 55 in total, and the step size is 1;
The Softmax layer performs classification from the input 13 × 13 × 55 feature maps. The final output has 55 feature maps because k = 5 fixed frames are selected for prediction, and each fixed frame must predict 6 classes, 4 bounding-box parameters, and one confidence, so 55 = 5 × (6 + 4 + 1).
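As a quick sanity check on the arithmetic above, the shape of the detection output can be sketched as follows (a minimal illustration; the variable names are ours, not from the patent's implementation):

```python
import numpy as np

K = 5            # fixed (anchor) frames per grid cell
NUM_CLASSES = 6  # broken warp, broken weft, hole, foreign matter, oil stain, crease
BOX_PARAMS = 4   # bounding-box parameters per frame
CONFIDENCE = 1   # one confidence score per frame

# 55 = 5 x (6 + 4 + 1), matching the 13 x 13 x 55 output described above
channels = K * (NUM_CLASSES + BOX_PARAMS + CONFIDENCE)

# The 13 x 13 x 55 map can equivalently be viewed as 5 per-anchor
# predictions of 11 values at each of the 13 x 13 grid cells.
output = np.zeros((13, 13, channels), dtype=np.float32)
per_anchor = output.reshape(13, 13, K, NUM_CLASSES + BOX_PARAMS + CONFIDENCE)
```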
The pooling mode of the Maxpool layers is maximum pooling, with a 2 × 2 sampling kernel and a step size of 2.
A Leaky-ReLU activation function is used after each of the convolutional layers Conv1 to Conv22; its activation formula is:
f(x)=αx,(x<0)
f(x)=x,(x≥0)
This effectively alleviates the tendency of the standard ReLU activation function to cause neuron death ("dying ReLU"). A linear activation function is used after the Conv23 convolution layer for classification in the linear classification layer.
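A minimal NumPy sketch of this activation (the slope α is left as a parameter; the value 0.1 used in the reference Darknet implementation is assumed here, not stated in the patent):

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    """Leaky-ReLU: f(x) = alpha * x for x < 0, f(x) = x for x >= 0."""
    return np.where(x < 0, alpha * x, x)
```

Unlike the standard ReLU, a negative input still produces a small nonzero gradient, so the corresponding neuron can keep learning.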
Dimension clustering performs K-means clustering on the real frames of the training samples using the K-means clustering method from machine learning, statistically obtaining the number of fixed frames and their widths and heights. The clustering center k is 5; the widths and heights of the five centers are (1.3221, 1.73145), (3.19275, 4.00944), (5.05587, 8.09892), (9.47112, 4.84053), and (11.2364, 10.0071), and the maximum width and height are both 13.
Multi-scale training means that, during the training of the convolutional neural network, the resolution of the input image is changed every 10 training batches: a new resolution is randomly selected from the range 320, 352, 384, …, 608 for training.
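The resolution schedule above can be sketched as follows (illustrative only; the step of 32 between candidate resolutions matches the network's overall downsampling factor and is our assumption):

```python
import random

# Candidate input resolutions 320, 352, ..., 608 in steps of 32
RESOLUTIONS = list(range(320, 609, 32))

def pick_resolution(batch_index, current, rng=random):
    """Draw a fresh resolution every 10 batches, else keep the current one."""
    if batch_index > 0 and batch_index % 10 == 0:
        return rng.choice(RESOLUTIONS)
    return current
```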
In the model, the number of classifier nodes in the Softmax layer is 6, and the class numbers 0, 1, 2, 3, 4, and 5 correspond respectively to the output results for the six defect types: broken warp, broken weft, hole, foreign matter, oil stain, and crease. If none of the six defect types is detected, the fabric is judged to be defect-free.
(2) Using the data set in VOC format, extract the &lt;object&gt; and &lt;bndbox&gt; information from the XXXX.xml label files, and convert the frame information into center-point and width-height information to obtain the input label format required by the YOLOv2 framework, namely: &lt;object-class&gt; &lt;x&gt; &lt;y&gt; &lt;width&gt; &lt;height&gt;; all label information is saved as XXXX label files. Here object-class represents the defect type, x and y represent the position of the defect center point, width and height represent the width and height of the defect, and XXXX represents the image number. The built textile fabric defect detection model, a convolutional neural network based on the YOLOv2 framework, is then trained using the VOC-format data set and all the XXXX label files.
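The conversion in step (2) can be sketched as follows (a hypothetical helper, not the patent's code; the class-name strings in CLASS_IDS are illustrative placeholders for whatever names the XML files actually use):

```python
import xml.etree.ElementTree as ET

# Assumed class-name-to-number mapping, following the numbering in the text
CLASS_IDS = {"broken-warp": 0, "broken-weft": 1, "hole": 2,
             "foreign-matter": 3, "oil-stain": 4, "crease": 5}

def voc_to_yolo(xml_path):
    """Read <object>/<bndbox> entries from a VOC-style XML file and return
    'object-class x y width height' lines with normalized center/size."""
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = CLASS_IDS[obj.find("name").text]
        b = obj.find("bndbox")
        xmin, ymin = float(b.find("xmin").text), float(b.find("ymin").text)
        xmax, ymax = float(b.find("xmax").text), float(b.find("ymax").text)
        x = (xmin + xmax) / 2 / img_w   # normalized center x
        y = (ymin + ymax) / 2 / img_h   # normalized center y
        w = (xmax - xmin) / img_w       # normalized width
        h = (ymax - ymin) / img_h       # normalized height
        lines.append(f"{cls} {x:.6f} {y:.6f} {w:.6f} {h:.6f}")
    return lines
```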
The parameters for model training are configured as follows: input picture width 416, height 416, three channels. Each batch of 64 images is divided into 8 groups for training, for a total of 80200 batches. To make the training samples more sufficient, for each training iteration YOLOv2 generates new training pictures by varying the angle, saturation, exposure, and hue. The learning rate is set to 0.001 with a stepped adjustment strategy: when the batch count reaches 40000 and 60000, the learning rate is multiplied in turn by the values in scales. The network also applies jitter to generate more data, which helps prevent overfitting.
The current training batch and progress can be checked in real time during training, and training can be stopped once the average loss value (avg_loss) and the image intersection-over-union (IOU) are observed to reach suitable values. As shown in FIG. 2(a) and FIG. 2(b), after 80200 batches of training, the average loss value (avg_loss) fluctuates around 2.0 and the image intersection-over-union (IOU) reaches above 0.8, yielding a trained detection network.
With the trained detection network, the test set was evaluated at different total training batch counts:
10000 batches: Recall 83.39%, Precision 83.60%, IOU 64.65%;
20000 batches: Recall 93.85%, Precision 92.82%, IOU 75.04%;
30000 batches: Recall 95.82%, Precision 91.97%, IOU 77.36%;
40000 batches: Recall 96.19%, Precision 92.98%, IOU 79.30%;
50000 batches: Recall 96.43%, Precision 94.23%, IOU 80.59%;
60000 batches: Recall 96.68%, Precision 94.13%, IOU 80.46%;
70000 batches: Recall 96.31%, Precision 93.88%, IOU 80.13%;
80000 batches: Recall 96.31%, Precision 94.22%, IOU 80.59%.
It can be seen that Recall is highest when the training batch count is 60000, so the weights at 60000 batches are finally selected for the model.
(3) Textile fabric images are collected in real time and detected with the trained textile fabric defect detection model, a convolutional neural network based on the YOLOv2 framework, to obtain defect detection results for the textile fabric images. Partial results are shown in FIG. 3: FIG. 3(a), FIG. 3(b), FIG. 3(c), FIG. 3(d), FIG. 3(e), and FIG. 3(f) respectively correspond to the detection results for the defect types broken warp (warp-lacking), broken weft (weft-lacking), hole, foreign matter (coverings), oil stain (oil), and crease.
It can be seen that, to address the shortcomings that traditional manual defect detection is time-consuming, labor-intensive, and inefficient, while existing automatic detection methods are complex, of extremely high computational cost, and difficult to put into industrial practice, the present application establishes a YOLOv2-based textile fabric defect detection model and method with great advantages in accuracy and real-time performance. Recognition is accurate, defect recognition takes only about 12.5 ms per image, and performance is greatly improved over existing detection methods.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A training method of a textile cloth defect detection model is characterized by comprising the following steps:
(1) collecting a sample textile fabric defect image, marking the sample textile fabric defect image to obtain defect types and real frames containing defects, further establishing a data set, and establishing a textile fabric defect detection model based on YOLOv 2;
(2) carrying out dimension clustering on a real frame in a data set to obtain a fixed frame, applying the fixed frame to a textile cloth defect detection model, obtaining a prediction frame by utilizing direct coordinate prediction, carrying out loss value calculation on the basis of the prediction frame by utilizing a loss function to obtain a prediction error, and carrying out back propagation by utilizing the prediction error to obtain a current network weight parameter;
(3) updating the network weight parameters of the textile cloth defect detection model by using the current network weight parameters, and then performing multiple times of network weight calculation and updating by using the training set to obtain optimal network weight parameters, namely obtaining the trained textile cloth defect detection model;
the step (2) comprises the following steps:
(2-1) carrying out dimension clustering on a real frame in a data set to obtain a clustering frame, obtaining a distance measurement center deviation value d (box, centroid) which is 1-IOU (box, centroid) by utilizing the intersection ratio between the clustering frame and the real frame, and obtaining the width and height of a fixed frame when the distance measurement center deviation value is less than or equal to a measurement threshold value;
(2-2) applying the fixing frame to a textile fabric defect detection model, and after obtaining a relative parameter of a prediction center and a relative parameter of width and height according to the width and height of the fixing frame, obtaining a center coordinate of the prediction frame and the width and height of the prediction frame by utilizing direct coordinate prediction;
(2-3) performing loss calculation by using a loss function based on the central coordinate of the prediction frame and the width and the height of the prediction frame to obtain a prediction error, and performing back propagation by using the prediction error to obtain a current network weight parameter;
the loss function is:

loss = λ_noobj · Σ_{i=1..l.w×l.h} Σ_{j=1..l.n} 1_{ij}^{noobj} (Ĉ_i − C_i)² + λ_obj · Σ_{i=1..l.w×l.h} Σ_{j=1..l.n} 1_{ij}^{obj} (Ĉ_i − C_i)²
  + λ_class · Σ_{i=1..l.w×l.h} 1_i^{obj} Σ_{c∈classes} (p̂_i(c) − p_i(c))²
  + λ_coord · Σ_{i=1..l.w×l.h} Σ_{j=1..l.n} 1_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)² + (w_i − ŵ_i)² + (h_i − ĥ_i)²]
  + Σ_j 1_j^{noobj} [(p_jx − x̂_j)² + (p_jy − ŷ_j)² + (p_jw − ŵ_j)² + (p_jh − ĥ_j)²]

where loss is the prediction error. The first line of the loss function represents the confidence loss for the grids containing defects and those not containing them: Ĉ_i is the predicted confidence that the ith grid contains a defect, and C_i indicates whether a defect is actually present in the ith grid (C_i is 1 or 0); 1_{ij}^{noobj} traverses, over the j prediction frames in the i grids, those prediction frames containing no defect, and 1_{ij}^{obj} traverses those containing a defect. The second line expresses the loss and gradient of the class prediction: p̂_i(c) is the class value predicted for the ith grid and p_i(c) is the true class value of the ith grid. The third line expresses the bounding-box information gradient of the prediction frame: w_i and h_i denote the width and height of the real frame in the ith grid, ŵ_i and ĥ_i the width and height of the prediction frame in the ith grid, (x_i, y_i) the center coordinates of the real frame in the ith grid, and (x̂_i, ŷ_i) the center coordinates of the prediction frame in the ith grid. The fourth line represents the gradient for the frames containing no defect prediction: (p_jx, p_jy) are the center coordinates, and p_jw and p_jh the width and height, of the jth prediction frame containing no defect. Here l.w and l.h are both 13, l.n is 5, λ_noobj = 1, λ_obj = 5, λ_class = 1, and λ_coord = 1.
2. A method of training a textile fabric defect detection model according to claim 1, wherein said sample textile fabric defect image comprises: a broken warp image, a broken weft image, a hole image, a foreign matter image, an oil stain image, and a crease image.
3. A method for training a textile fabric defect detection model according to claim 1 or 2, wherein the textile fabric defect detection model, based on the YOLOv2 framework with 32 layers in total, comprises 23 convolutional layers Conv1 to Conv23, 5 Maxpool layers Max1 to Max5, two Route layers Route1 to Route2, a Reorg layer Reorg1, and a Softmax layer Softmax1, and the layers of the textile fabric defect detection model are cascaded in the order Conv1, Max1, Conv2, Max2, Conv3 to Conv5, Max3, Conv6 to Conv8, Max4, Conv9 to Conv13, Max5, Conv14 to Conv20, Route1, Conv21, Reorg1, Route2, Conv22, Conv23, and Softmax1.
4. A method of training a textile fabric defect detection model as claimed in claim 3, wherein Conv1 to Conv22 in said textile fabric defect detection model are batch-normalized before convolution and use a Leaky-ReLU activation function after convolution; Conv23 is not batch-normalized before convolution and uses a linear activation function after convolution.
5. A method of training a textile fabric defect detection model according to claim 1 or 2, wherein said textile fabric defect detection model is trained on a multi-scale basis during the training process.
6. A method of training a textile fabric defect detection model in accordance with claim 1, wherein said direct coordinate prediction is:
bx=σ(tx)+cx
by=σ(ty)+cy
bw=pw·e^tw
bh=ph·e^th
wherein (bx, by) are the center coordinates of the prediction frame, bw and bh are respectively the width and height of the prediction frame, (tx, ty) are the relative parameters of the prediction center, tw and th are respectively the relative parameters of the predicted width and height, σ(tx) and σ(ty) are respectively the horizontal and vertical distances of the center of the prediction frame from the upper-left corner of the cell in which it is located, cx and cy are respectively the horizontal and vertical distances of the cell containing the center of the fixed frame from the upper-left corner of the sample textile fabric defect image, and pw and ph are respectively the width and height of the fixed frame.
7. A textile fabric defect detection apparatus, characterized by comprising a textile fabric defect detection model trained by the training method of a textile fabric defect detection model according to any one of claims 1 to 6.
8. Use of the textile fabric defect detection apparatus according to claim 7, comprising: collecting textile fabric images in real time and detecting them with the textile fabric defect detection apparatus to obtain defect detection results for the textile fabric images.
CN201810238038.0A 2018-03-21 2018-03-21 Textile fabric defect detection model and training method and application thereof Active CN108520114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810238038.0A CN108520114B (en) 2018-03-21 2018-03-21 Textile fabric defect detection model and training method and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810238038.0A CN108520114B (en) 2018-03-21 2018-03-21 Textile fabric defect detection model and training method and application thereof

Publications (2)

Publication Number Publication Date
CN108520114A CN108520114A (en) 2018-09-11
CN108520114B true CN108520114B (en) 2020-05-19

Family

ID=63432938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810238038.0A Active CN108520114B (en) 2018-03-21 2018-03-21 Textile fabric defect detection model and training method and application thereof

Country Status (1)

Country Link
CN (1) CN108520114B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460761A (en) * 2018-10-17 2019-03-12 福州大学 Bank card number detection and recognition methods based on dimension cluster and multi-scale prediction
CN109615610B (en) * 2018-11-13 2023-06-06 浙江师范大学 Medical band-aid flaw detection method based on YOLO v2-tiny
CN110197208A (en) * 2019-05-14 2019-09-03 江苏理工学院 A kind of textile flaw intelligent measurement classification method and device
CN110310259B (en) * 2019-06-19 2021-07-27 江南大学 Improved YOLOv3 algorithm-based knot defect detection method
CN111191648B (en) * 2019-12-30 2023-07-14 飞天诚信科技股份有限公司 Method and device for image recognition based on deep learning network
CN111402226A (en) * 2020-03-13 2020-07-10 浙江工业大学 Surface defect detection method based on cascade convolution neural network
CN111553898A (en) * 2020-04-27 2020-08-18 东华大学 Fabric defect detection method based on convolutional neural network
CN111724377A (en) * 2020-06-22 2020-09-29 创新奇智(上海)科技有限公司 Broken yarn detection method, broken yarn detection device, electronic equipment, storage medium and shutdown system
CN111881774A (en) * 2020-07-07 2020-11-03 上海艾豚科技有限公司 Method and system for identifying foreign matters in textile raw materials
CN112036541B (en) * 2020-10-16 2023-11-17 西安工程大学 Fabric defect detection method based on genetic algorithm optimization neural network
CN112215824A (en) * 2020-10-16 2021-01-12 南通大学 YOLO-v 3-based cloth cover defect detection and auxiliary device and method
CN112270252A (en) * 2020-10-26 2021-01-26 西安工程大学 Multi-vehicle target identification method for improving YOLOv2 model
CN113870870B (en) * 2021-12-02 2022-04-05 自然资源部第一海洋研究所 Convolutional neural network-based real-time recognition method for marine mammal vocalization
CN114372968B (en) * 2021-12-31 2022-12-27 江南大学 Defect detection method combining attention mechanism and adaptive memory fusion network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123107A (en) * 2017-03-24 2017-09-01 广东工业大学 Cloth defect inspection method based on neutral net deep learning
US9760806B1 (en) * 2016-05-11 2017-09-12 TCL Research America Inc. Method and system for vision-centric deep-learning-based road situation analysis
CN107169956A (en) * 2017-04-28 2017-09-15 西安工程大学 Yarn dyed fabric defect detection method based on convolutional neural networks
CN107369155A (en) * 2017-07-24 2017-11-21 广东工业大学 A kind of cloth surface defect detection method and its system based on machine vision

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9760806B1 (en) * 2016-05-11 2017-09-12 TCL Research America Inc. Method and system for vision-centric deep-learning-based road situation analysis
CN107123107A (en) * 2017-03-24 2017-09-01 广东工业大学 Cloth defect inspection method based on neutral net deep learning
CN107169956A (en) * 2017-04-28 2017-09-15 西安工程大学 Yarn dyed fabric defect detection method based on convolutional neural networks
CN107369155A (en) * 2017-07-24 2017-11-21 广东工业大学 A kind of cloth surface defect detection method and its system based on machine vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
基于回归的深度学习的织染布疵点检测的研究;Jun-Feng Jing, et al;《Textile Bioengineering and Informatics Symposium Proceedings (TBIS 2017)》;20170531;正文第1030-1036页 *
基于改进YOLOv2网络的遗留物检测算法;张瑞林 等;《浙江理工大学学报(自然科学版)》;20171211;正文第325-332页 *
室内监控场景的对象检测与行为分析方法研究;习自;《中国优秀硕士学位论文全文数据库 信息科技辑》;20180215;正文第10-29页 *

Also Published As

Publication number Publication date
CN108520114A (en) 2018-09-11

Similar Documents

Publication Publication Date Title
CN108520114B (en) Textile fabric defect detection model and training method and application thereof
CN110827251B (en) Power transmission line locking pin defect detection method based on aerial image
CN111444939B (en) Small-scale equipment component detection method based on weak supervision cooperative learning in open scene of power field
CN111402226A (en) Surface defect detection method based on cascade convolution neural network
CN109285139A (en) A kind of x-ray imaging weld inspection method based on deep learning
CN115731164A (en) Insulator defect detection method based on improved YOLOv7
CN105957082A (en) Printing quality on-line monitoring method based on area-array camera
CN108830285A (en) A kind of object detection method of the reinforcement study based on Faster-RCNN
CN111462051B (en) Cloth defect detection method and system based on deep neural network
CN112037219A (en) Metal surface defect detection method based on two-stage convolution neural network
CN108596203A (en) Optimization method of the pond layer in parallel to pantograph carbon slide surface abrasion detection model
CN107016664A (en) A kind of bad pin flaw detection method of large circle machine
CN111382785A (en) GAN network model and method for realizing automatic cleaning and auxiliary marking of sample
CN111965197B (en) Defect classification method based on multi-feature fusion
CN111738994B (en) Lightweight PCB defect detection method
CN112102224A (en) Cloth defect identification method based on deep convolutional neural network
CN117152484B (en) Small target cloth flaw detection method based on improved YOLOv5s
CN116029979A (en) Cloth flaw visual detection method based on improved Yolov4
CN115937517A (en) Photovoltaic fault automatic detection method based on multispectral fusion
CN116205876A (en) Unsupervised notebook appearance defect detection method based on multi-scale standardized flow
CN111652846A (en) Semiconductor defect identification method based on characteristic pyramid convolution neural network
CN110618129A (en) Automatic power grid wire clamp detection and defect identification method and device
KR101782364B1 (en) Vision inspection method based on learning data
Mirani et al. Object Recognition in Different Lighting Conditions at Various Angles by Deep Learning Method
CN111738310B (en) Material classification method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant