CN110349146B - Method for constructing fabric defect identification system based on lightweight convolutional neural network - Google Patents

Method for constructing fabric defect identification system based on lightweight convolutional neural network Download PDF

Info

Publication number
CN110349146B
CN110349146B (application CN201910623819.6A)
Authority
CN
China
Prior art keywords
convolution
layer
fabric
neural network
lzfnet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910623819.6A
Other languages
Chinese (zh)
Other versions
CN110349146A (en)
Inventor
刘洲峰
李春雷
张驰
丁淑敏
朱永胜
董燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongyuan University of Technology
Original Assignee
Zhongyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongyuan University of Technology filed Critical Zhongyuan University of Technology
Priority to CN201910623819.6A priority Critical patent/CN110349146B/en
Publication of CN110349146A publication Critical patent/CN110349146A/en
Application granted granted Critical
Publication of CN110349146B publication Critical patent/CN110349146B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for constructing a fabric defect identification system based on a lightweight convolutional neural network, comprising the following steps: first, configure the operating environment of the fabric defect identification system and derive a lightweight convolutional neural network from factorized convolution; then collect fabric image sample data, standardize it, and divide the standardized fabric image sample data into a training image set and a test image set; input the training image set into the lightweight convolutional neural network and train it with an asynchronous gradient descent strategy to obtain the LZFNet-Fast model; finally, input the test image set into the LZFNet-Fast model to test and verify its performance. By replacing standard convolution layers with the factorized convolution structure, the method effectively identifies colored fabrics with complex textures, reduces the parameter count and computation of the model, and greatly improves identification efficiency.

Description

Method for constructing fabric defect identification system based on lightweight convolutional neural network
Technical Field
The invention relates to the technical field of textile image recognition, in particular to a method for constructing a fabric defect recognition system based on a lightweight convolutional neural network.
Background
The production cost of finished textile products is strongly affected by the quality of the original grey cloth, and most quality problems in the garment industry are related to fabric defects, which remain one of the main problems facing the textile industry. Fabric defects, often called fabric flaws, are defects in the appearance of the product caused by various adverse factors during the weaving of piece goods. From fiber raw material to finished fabric, many processes such as spinning, weaving, printing and dyeing are generally required, and defects may arise at every processing stage. Manual inspection of fabric defects is too costly and prone to missed detections, so textile manufacturers need an automatic defect identification system to guarantee fabric quality.
At present, existing fabric defect detection methods at home and abroad mainly include histogram feature analysis, local contrast enhancement, the Fourier transform, the wavelet transform, dictionary learning, histograms of oriented gradients, and the like. However, with the continuing development of the textile industry and the rising expectations of consumers, most modern fabrics have complex textures and patterns; traditional visual recognition algorithms adapt to them poorly and have difficulty extracting visual features that help a classifier recognize defects. Traditional fabric defect identification methods are therefore suitable only for single-color grey cloth or fabrics with regular textures, and defect identification for fabrics with complex textures and printing is currently best accomplished with deep learning techniques.
As an important recognition model, the convolutional neural network has developed rapidly in recent years and achieved remarkable results in several fields. However, applying convolutional neural networks to fabric defect identification still faces several problems. First, convolutional models are both compute-intensive and memory-intensive, which makes them difficult to deploy on field-programmable gate arrays and embedded systems with limited hardware resources; existing deep convolutional neural networks usually adopt very complex structures in order to extract high-level semantic features of the target image, which greatly increases computational overhead and hinders real-time identification of fabric defects. Second, the recent trend in convolutional neural network development is to build ever deeper and more complex networks to achieve higher recognition accuracy, yet these techniques for improving recognition accuracy do not necessarily make the system more efficient in running speed or memory footprint.
Disclosure of Invention
Aiming at the technical problems of complex structure and heavy computation in traditional deep convolutional neural networks, the invention provides a fabric defect identification method based on a lightweight convolutional neural network. A convolution module specialized for the field of fabric identification is constructed and fused with an advanced factorized convolution structure: the fabric image is first expanded to 32 channels by a three-dimensional convolution operation and used as the input of the convolution module, the factorized convolution layers then perform spatial filtering, and a global average pooling layer at the top of the network finally compresses the feature maps into a low-dimensional space. The method can be implemented quickly with the artificial neural network library Keras and the tensor-flow machine learning library TFslim within the TensorFlow framework, and is suitable for identifying fabric defects when computing resources are limited.
The technical scheme of the invention is realized as follows:
a method for constructing a fabric defect identification system based on a lightweight convolutional neural network comprises the following steps:
s1, configuring the running environment of the fabric defect identification system;
s2, designing a factorizable convolution structure, and constructing a lightweight convolution neural network LZFNet-Fast by utilizing the factorizable convolution structure;
s3, collecting fabric image sample data, standardizing the fabric image sample data, and dividing the standardized fabric image sample data into a training image set and a test image set;
s4, inputting the training image set into a lightweight convolution neural network LZFNet-Fast for training by using a training strategy of asynchronous gradient descent to obtain an LZFNet-Fast model;
s5, inputting the test image set into the LZFNet-Fast model obtained in the step S4 for testing, and verifying the performance of the LZFNet-Fast model.
The operating environment of the fabric defect identification system in step S1 includes a hardware system and a software system. The processor of the hardware system includes two CPUs and two GPUs; the CPUs are Intel Xeon(R) E5-2650 v4 and the GPUs are NVIDIA Quadro M5000. The software system includes an operating system and a convolution library: the operating system is Windows 10 and the convolution library is the convolutional neural network acceleration library CUDNN 7.0.
The construction method of the lightweight convolutional neural network LZFNet-Fast in step S2 is as follows: define a two-dimensional surface convolution layer with a 3×3 local receptive field and a three-dimensional information fusion layer with a 1×1×n receptive field, compile the two-dimensional surface convolution layer and the three-dimensional information fusion layer into a factorizable convolution structure, and build the lightweight convolutional neural network LZFNet-Fast from this factorizable convolution structure.
The lightweight convolutional neural network LZFNet-Fast comprises one standard convolution layer, nine factorizable convolution structures, nineteen batch regularization layers, a global average pooling layer, a fully connected layer and a Softmax classifier. Each factorizable convolution structure comprises a two-dimensional surface convolution layer and a three-dimensional information fusion layer; the nineteen convolution layers (one standard convolution layer, nine two-dimensional surface convolution layers and nine three-dimensional information fusion layers) correspond one-to-one with the batch regularization layers. The last batch regularization layer is connected to the global average pooling layer, the global average pooling layer is connected to the fully connected layer, and the fully connected layer is connected to the Softmax classifier.
For an input feature map of size F × F, the computation cost C_s of the standard convolution layer is: C_s = F × F × K × K × m × n, where m is the number of input channels, n is the number of output channels, and K × K is the convolution kernel size of the standard convolution layer. The factorizable convolution structure divides the convolution layer into a surface convolution layer and an information fusion layer, where the convolution kernel of the surface convolution layer is K × K and the convolution kernel of the information fusion layer is 1 × 1; the computation cost C_f of the factorizable convolution structure is: C_f = F × F × K × K × m + F × F × m × n. The ratio of the computation costs of the factorizable convolution structure and the standard convolution layer is:
C_f / C_s = (F × F × K × K × m + F × F × m × n) / (F × F × K × K × m × n) = 1/n + 1/K²
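For concreteness, the computation counts above can be checked numerically. The following Python sketch (an illustration added to this edited text, not part of the original disclosure) evaluates C_s and C_f for an example layer shape taken from Table 1; the function names are arbitrary.

def standard_conv_flops(F, K, m, n):
    # C_s = F*F * K*K * m * n
    return F * F * K * K * m * n

def factorized_conv_flops(F, K, m, n):
    # Surface convolution (one K x K filter per input channel) plus
    # the 1 x 1 information-fusion convolution.
    return F * F * K * K * m + F * F * m * n

F, K, m, n = 112, 3, 32, 64           # example: first factorizable structure in Table 1
cs = standard_conv_flops(F, K, m, n)
cf = factorized_conv_flops(F, K, m, n)
print(cf / cs, 1 / n + 1 / K ** 2)    # both are approximately 0.127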
The standard convolution layer and four of the two-dimensional surface convolution layers use stride-2 convolution instead of max pooling; all convolution layers are normalized with batch standardization, the activation function is a rectified linear unit clipped to the value range 0–6, and a Softmax classifier is adopted as the fabric defect decision device at the end of the network.
The fabric image sample data are divided into two classes, normal fabric images and defective fabric images, with roughly equal numbers of each; the training image set accounts for 4/5 of the total number of samples and the test image set for 1/5.
In step S4, the training image set is input into the lightweight convolutional neural network LZFNet-Fast for training by using the asynchronous gradient descent training strategy, and the method for obtaining the LZFNet-Fast model includes:
s41, activating a convolutional neural network acceleration library CUDNN7.0, and activating an artificial neural network library Keras and a tensor flow type machine learning library TFslim;
s42, converting the normal fabric images and the defective fabric images in the training image set into tfrecord-format files and storing them as two separate files;
s43, initializing the initial learning rate Θ and the iteration momentum parameter v_i, setting the iteration count i to 0 and the maximum number of iterations i_max;
S44, inputting the training image set into a lightweight convolutional neural network LZFNet-Fast for training by using an asynchronous gradient descent training strategy:
v_{i+1} = 0.9·v_i − α·Θ·w_i − Θ·〈∂L/∂w|_{w_i}〉_{D_i}
where α is the weight decay, L is the loss function, D_i is the number of training images at the i-th iteration, and w_i are the parameters of the LZFNet-Fast model to be trained;
updating the LZFNet-Fast model parameters: w_{i+1} = w_i + v_{i+1};
S45, increasing the iteration count by 1 and executing step S44 in a loop until the maximum number of iterations i_max is reached, then ending the loop and producing the LZFNet-Fast model.
The beneficial effects of this technical scheme are as follows: the invention builds the LZFNet-Fast model from a factorized convolution structure, which matches the design constraints of field-programmable gate arrays and embedded systems and can identify colored fabrics with complex textures. While preserving recognition accuracy, it reduces the parameter count of the model by 98.4% and its computation by 97.6% compared with the original VGG16 neural network, greatly lowering the dependence on hardware computing power and memory capacity and making the deep neural network easier to run on the industrial floor.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a block diagram of the system of the present invention.
FIG. 2 is a diagram of the factorizable convolution operation of the present invention; (a) a standard three-dimensional convolution layer, (b) a two-dimensional surface convolution layer, and (c) an information fusion layer.
FIG. 3 is a defect map of a fabric used in an embodiment of the present invention; (a) the defects of warp and weft, (b) the scratch defects, (c) the twill defects, and (d) the printing defects.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1, a method for constructing a fabric defect identification system based on a lightweight convolutional neural network includes the following steps:
s1, configuring the running environment of the fabric defect identification system. The operating environment of the fabric defect identification system comprises a hardware system and a software system: the processor of the hardware system comprises two CPUs and two GPUs, the CPUs are Intel Xeon(R) E5-2650 v4 and the GPUs are NVIDIA Quadro M5000; the software system comprises an operating system and a convolution library, the operating system being Windows 10 and the convolution library being the convolutional neural network acceleration library CUDNN 7.0.
S2, designing a factorizable convolution structure and constructing the lightweight convolutional neural network LZFNet-Fast from it. A two-dimensional surface convolution layer with a 3×3 local receptive field and a 1×1×n three-dimensional feature-map information fusion layer are defined, compiled into a factorizable convolution structure, and used to build the lightweight convolutional neural network LZFNet-Fast. The lightweight convolutional neural network LZFNet-Fast comprises one standard convolution layer, nine factorizable convolution structures, nineteen batch regularization layers, a global average pooling layer, a fully connected layer and a Softmax classifier; each factorizable convolution structure comprises a two-dimensional surface convolution layer and a three-dimensional information fusion layer, and the nineteen convolution layers (one standard convolution layer, nine two-dimensional surface convolution layers and nine three-dimensional information fusion layers) correspond one-to-one with the batch regularization layers.
The lightweight convolutional neural network LZFNet-Fast is based on the standard convolution structure. As shown in FIG. 2(a), the convolution layer of the standard structure directly applies a convolution kernel of size K × K × m × n to perform spatial convolution filtering on the input feature matrix, and the computation cost C_s of the standard convolution layer is C_s = F × F × K × K × m × n, where m is the number of input channels, n is the number of output channels, K × K is the convolution kernel size of the standard convolution layer, and F × F is the size of the input feature map. As shown in FIGS. 2(b) and 2(c), the lightweight convolutional neural network LZFNet-Fast uses the factorized convolution structure to split the convolution layer into two parts, a surface convolution layer and an information fusion layer: the surface convolution layer convolves each input feature-map channel with its own single convolution filter, and the information fusion layer then linearly combines the multi-channel feature maps output by the surface convolution layer using a simple 1 × 1 convolution kernel. The computation cost C_f of the factorized convolution structure is C_f = F × F × K × K × m + F × F × m × n; that is, the computation of the factorized convolution structure is the sum of the computation of the surface convolution layer and that of the information fusion layer, and the three-dimensional 1 × 1 convolution of the information fusion layer accounts for the bulk of the computational complexity. Assuming the RAM of the computing device is large enough to store the feature maps and parameters, the memory access cost (number of memory operations, MAC) of the 1 × 1 convolution is MAC = (m + n) × F² + m × n. The terms correspond respectively to the memory accesses of the input/output feature maps and of the kernel weights; the MAC has a lower bound determined by the number of floating-point operations, and this lower bound is reached when the numbers of input and output channels are equal. In practice, however, the RAM on many embedded devices is not large enough, and modern neural network libraries usually adopt complex blocking strategies to make full use of the cache, so the real memory access cost may deviate from the theoretical value.
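The memory-access argument can be illustrated with a short Python sketch (added here for illustration; the layer shapes are arbitrary examples, not values from the patent): for a fixed number of multiply–accumulate operations F²·m·n, the MAC of the 1 × 1 fusion convolution is smallest when m = n.

def mac_1x1(F, m, n):
    # Memory access cost of a 1x1 convolution: input/output feature maps
    # plus kernel weights, MAC = (m + n) * F^2 + m * n.
    return (m + n) * F * F + m * n

F = 56
for m, n in [(64, 256), (128, 128), (256, 64)]:   # identical FLOPs F*F*m*n
    print(m, n, F * F * m * n, mac_1x1(F, m, n))  # MAC is minimal when m == n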
In the GoogLeNet architecture, a "multi-path" structure is widely adopted in each convolution module: many small fragmented operators are used instead of a few large ones. Although this fragmented structure helps improve accuracy, it can reduce computational efficiency because it is unfriendly to devices with strong parallel computing capability such as GPUs, and it introduces extra overhead such as kernel launch and synchronization. In lightweight convolutional neural networks, element-wise operations also take a considerable amount of time, especially on GPUs; element-wise operators include ReLU, tensor addition and bias addition, and the surface convolution can likewise be regarded as an element-wise operator because it also has a high ratio of memory accesses to floating-point operations. By decomposing the standard convolution layer into a surface convolution layer and an information fusion layer, a lightweight convolution module can be constructed easily. The ratio of the computation costs of the factorizable convolution structure and the standard convolution layer is:
C_f / C_s = (F × F × K × K × m + F × F × m × n) / (F × F × K × K × m × n) = 1/n + 1/K²
the LZFNet-Fast is characterized in that an LZFNet is used as a reference network, a standard convolution layer is replaced by a factorizable convolution structure, Fast convolution operation is achieved, the purposes of compressing the neural network volume and reducing calculation consumption of a fabric defect identification system are achieved, and the new network configuration is shown in table 1.
TABLE 1  Network configuration of LZFNet-Fast

Input dimensions | Functional layer              | Stride | Convolution kernel dimensions
224×224×3        | Three-dimensional convolution | 2      | 3×3×3×32
112×112×32       | Surface convolution layer     | 1      | 3×3×32
112×112×32       | Fusion layer                  | 1      | 1×1×32×64
112×112×64       | Surface convolution layer     | 2      | 3×3×64
56×56×64         | Fusion layer                  | 1      | 1×1×64×128
56×56×128        | Surface convolution layer     | 1      | 3×3×128
56×56×128        | Fusion layer                  | 1      | 1×1×128×128
56×56×128        | Surface convolution layer     | 2      | 3×3×128
28×28×128        | Fusion layer                  | 1      | 1×1×128×256
28×28×256        | Surface convolution layer     | 1      | 3×3×256
28×28×256        | Fusion layer                  | 1      | 1×1×256×256
28×28×256        | Surface convolution layer     | 2      | 3×3×256
14×14×256        | Fusion layer                  | 1      | 1×1×256×512
14×14×512        | Surface convolution layer     | 1      | 3×3×512
14×14×512        | Fusion layer                  | 1      | 1×1×512×512
14×14×512        | Surface convolution layer     | 2      | 3×3×512
7×7×512          | Fusion layer                  | 1      | 1×1×512×1024
7×7×1024         | Surface convolution layer     | 1      | 3×3×1024
7×7×1024         | Fusion layer                  | 1      | 1×1×1024×1024
7×7×1024         | Average pooling layer         | -      | 7×7
1×1×1024         | Fully connected layer         | -      | 1024×2
1×1×2            | Classifier                    | -      | -
The standard convolution layer and four of the two-dimensional surface convolution layers use stride-2 convolution instead of max pooling to perform downsampling. The output feature map of the last factorizable convolution structure in the lightweight convolutional neural network LZFNet-Fast has dimensions 7×7×1024, and all feature maps are then fed into the average pooling layer for dimensionality reduction. All convolution layers are normalized with Batch Normalization, the activation function is the rectified linear unit clipped to the range 0–6 (ReLU6), and a Softmax classifier is used as the fabric defect decision device at the end of the network.
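The surface convolution layer described above, which filters each input channel with its own 3 × 3 kernel, corresponds to what TensorFlow calls a depthwise convolution. The following Python/tf.keras sketch of a single factorizable convolution structure (surface convolution, batch normalization and ReLU6, followed by 1 × 1 fusion, batch normalization and ReLU6) is an interpretation added for illustration, not the inventors' code; the function name factorized_conv_block is an assumption.

import tensorflow as tf

def factorized_conv_block(x, out_channels, stride=1):
    # Surface convolution: one 3x3 filter per input channel (depthwise).
    x = tf.keras.layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU(max_value=6.0)(x)
    # Information fusion: 1x1 convolution that linearly combines the channels.
    x = tf.keras.layers.Conv2D(out_channels, 1, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU(max_value=6.0)(x)
    return x

inputs = tf.keras.Input(shape=(112, 112, 32))
outputs = factorized_conv_block(inputs, 64)
print(tf.keras.Model(inputs, outputs).output_shape)   # (None, 112, 112, 64)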
The specific construction method of the lightweight convolutional neural network LZFNet-Fast is as follows (a minimal sketch of this assembly is given after the list):
1) Activate the TensorFlow environment and load the "tf." library.
2) Define the factorized convolution operation: combine the two-dimensional surface convolution layer and the information fusion layer into one function, and assign the convolution kernel dimensions, strides and layer depth (the number of rows in Table 1) according to Table 1.
3) Construct the lightweight convolutional neural network LZFNet-Fast according to the network configuration given in Table 1, setting the hyper-parameters to the following values:
the input image resolution of the fabric image: 224,
the number of categories num_categories: 2,
the network depth multiplier depth_multiplier: 1.0,
the weight decay weight_decay: 0.00004.
4) Save the lightweight convolutional neural network LZFNet-Fast as a file named "LZFNet-Fast".
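A minimal sketch of step 3), assembling the whole network according to Table 1 with tf.keras rather than the TF-slim code the patent describes, is given below. It assumes the surface convolution is a depthwise convolution; names such as build_lzfnet_fast are illustrative, and the parameter total reported by Keras will differ slightly from the figures quoted below because batch-normalization parameters are also counted.

import tensorflow as tf

def build_lzfnet_fast(num_classes=2, input_size=224):
    def block(x, out_channels, stride):
        # Factorizable convolution structure: surface (depthwise) conv + 1x1 fusion.
        x = tf.keras.layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU(max_value=6.0)(x)
        x = tf.keras.layers.Conv2D(out_channels, 1, padding="same", use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU(max_value=6.0)(x)
        return x

    inputs = tf.keras.Input(shape=(input_size, input_size, 3))
    # Standard three-dimensional convolution expanding the image to 32 channels, stride 2.
    x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", use_bias=False)(inputs)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU(max_value=6.0)(x)
    # Nine factorizable structures; (output channels, surface-conv stride) read off Table 1.
    for out_channels, stride in [(64, 1), (128, 2), (128, 1), (256, 2), (256, 1),
                                 (512, 2), (512, 1), (1024, 2), (1024, 1)]:
        x = block(x, out_channels, stride)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)          # 7x7 average pooling
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs, name="LZFNet-Fast")

model = build_lzfnet_fast()
model.summary()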
The surface convolution layers in the lightweight convolutional neural network LZFNet-Fast contain 26208 parameters in total, the information fusion layers contain 2091008 parameters, and the whole model contains only 2120128 weight parameters. The surface convolution operations amount to 13773312 computations and the 1×1 fusion convolutions to 333971456, giving a total model computation of 358584832 operations, only 10.1% of that of the standard convolutional network.
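These totals can be reproduced by simple arithmetic over the layer shapes in Table 1; the short Python check below was added for illustration (batch-normalization parameters are excluded, as they appear to be in the quoted totals).

surface = [(3, c) for c in (32, 64, 128, 128, 256, 256, 512, 512, 1024)]
fusion = [(32, 64), (64, 128), (128, 128), (128, 256), (256, 256),
          (256, 512), (512, 512), (512, 1024), (1024, 1024)]

surface_params = sum(k * k * c for k, c in surface)    # 26208
fusion_params = sum(m * n for m, n in fusion)          # 2091008
other_params = 3 * 3 * 3 * 32 + 1024 * 2               # standard convolution + fully connected layer
print(surface_params, fusion_params,
      surface_params + fusion_params + other_params)   # 26208 2091008 2120128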
S3, collecting fabric image sample data, standardizing it, and dividing the standardized fabric image sample data into a training image set and a test image set. The fabric image samples used in the invention are colored textiles with complex textured backgrounds, as shown in FIG. 3. Standardization of the fabric images is performed as follows: the fabric images in jpg format are converted into the tfrecord format with the tensor-flow machine learning library TFslim, which makes it easier to process large batches of fabric images and speeds up training. The fabric image sample data are divided into two classes, normal fabric images and defective fabric images, with roughly equal numbers of each and 3800 samples in total; the training image set accounts for 4/5 of the total (3000 images) and the test image set for 1/5 (800 images).
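A minimal sketch of the standardization step is given below. It uses plain TensorFlow I/O rather than the TF-slim dataset tooling named in the patent, and the feature keys "image" and "label", the 224 × 224 resizing and the file names in the comments are illustrative assumptions.

import glob
import tensorflow as tf

def jpgs_to_tfrecord(image_glob, label, output_path):
    # Pack the .jpg fabric images of one class into a single .tfrecord file.
    with tf.io.TFRecordWriter(output_path) as writer:
        for path in glob.glob(image_glob):
            img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
            img = tf.cast(tf.image.resize(img, (224, 224)), tf.uint8)
            example = tf.train.Example(features=tf.train.Features(feature={
                "image": tf.train.Feature(bytes_list=tf.train.BytesList(
                    value=[tf.io.encode_jpeg(img).numpy()])),
                "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
            }))
            writer.write(example.SerializeToString())

# e.g. jpgs_to_tfrecord("train/normal/*.jpg", 0, "normal_train.tfrecord")
#      jpgs_to_tfrecord("train/defect/*.jpg", 1, "defect_train.tfrecord")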
S4, inputting the training image set into the lightweight convolutional neural network LZFNet-Fast for training by using the asynchronous gradient descent training strategy to obtain the LZFNet-Fast model (a minimal training sketch is given after these steps); the steps are as follows:
s41, activating a convolutional neural network acceleration library CUDNN7.0, and activating an artificial neural network library Keras and a tensor flow type machine learning library TFslim;
s42, converting the normal fabric images and the defective fabric images in the training image set into tfrecord-format files, with the images resized to 224 × 224 three-channel (red, green, blue) color fabric images, and storing them as two separate files;
s43, importing the LZFNet-Fast file from step 4), initializing the initial learning rate Θ to 0.01 and the iteration momentum parameter to 0.9, setting the initial iteration count i to 0 and the maximum number of iterations i_max to 2000;
S44, inputting the training image set into a lightweight convolutional neural network LZFNet-Fast for training by using an asynchronous gradient descent training strategy:
v_{i+1} = 0.9·v_i − α·Θ·w_i − Θ·〈∂L/∂w|_{w_i}〉_{D_i}
where α is the weight decay, α = 0.00004; L is the loss function; D_i is the number of training images at the i-th iteration, D_i = 64; and w_i are the parameters of the LZFNet-Fast model to be trained. The i-th iteration takes 64 images from the first folder for training and the (i+1)-th iteration takes 64 images from the second folder, the images being drawn from the folders with replacement;
updating the LZFNet-Fast model parameters: w_{i+1} = w_i + v_{i+1};
S45, executing step S44 in a loop until the maximum number of iterations i_max is reached, then ending the loop and producing the LZFNet-Fast model.
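The training procedure of steps S41–S45 can be illustrated with the following single-device Python sketch. It is not the inventors' code: it approximates the asynchronous gradient descent strategy with an ordinary momentum-SGD loop (learning rate 0.01, momentum 0.9, weight decay 0.00004, batch size 64, 2000 iterations, batches drawn alternately from the normal and defective TFRecord files); build_lzfnet_fast refers to the earlier sketch, and the parsing keys and file names are assumptions.

import tensorflow as tf

def make_dataset(tfrecord_path, batch_size=64):
    feature_spec = {"image": tf.io.FixedLenFeature([], tf.string),
                    "label": tf.io.FixedLenFeature([], tf.int64)}
    def parse_example(record):
        parsed = tf.io.parse_single_example(record, feature_spec)
        image = tf.image.decode_jpeg(parsed["image"], channels=3)
        image = tf.cast(tf.image.resize(image, (224, 224)), tf.float32) / 255.0
        return image, parsed["label"]
    return (tf.data.TFRecordDataset(tfrecord_path)
            .map(parse_example).shuffle(1024).repeat().batch(batch_size))

normal_batches = iter(make_dataset("normal_train.tfrecord"))
defect_batches = iter(make_dataset("defect_train.tfrecord"))

model = build_lzfnet_fast(num_classes=2)                 # sketch defined earlier
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
weight_decay = 0.00004

for i in range(2000):                                    # i_max = 2000
    images, labels = next(normal_batches) if i % 2 == 0 else next(defect_batches)
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(images, training=True))
        loss += weight_decay * tf.add_n(
            [tf.nn.l2_loss(w) for w in model.trainable_weights])
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))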
S5, inputting the test image set into the LZFNet-Fast model obtained in the step S4 for testing, and verifying the performance of the LZFNet-Fast model.
To evaluate the performance of the model, the defect-containing images and the normal images were fed separately into the fully trained LZFNet-Fast model and the respective accuracy rates were computed. In the data set containing 400 defective fabric images, 378 were identified as defective and 22 as normal, a missed-detection rate of 5.5%. In the data set containing 400 normal fabric images, 383 were identified as normal and 17 as defective, a false-detection rate of 4.2%. Overall, the correct recognition rate of the entire system was 95.1%, and the average recognition time was 13.2 milliseconds per input image. A detailed performance comparison with large-scale convolutional neural networks is shown in Table 2.
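The reported rates follow directly from the confusion counts above; the short Python check below was added for illustration (the 4.2% false-detection figure corresponds to 17/400 = 4.25% before rounding).

missed = 22 / 400             # defective images classified as normal  -> 0.055
false_alarm = 17 / 400        # normal images classified as defective -> 0.0425
accuracy = (378 + 383) / 800  # overall correct recognition rate      -> 0.95125
print(missed, false_alarm, accuracy)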
The color fabric image library used in the invention contains a limited number of images; if more color fabric images could be provided for model training, even better experimental results could be obtained.
TABLE 2  Comparison of LZFNet-Fast performance with large-scale convolutional networks
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. A method for constructing a fabric defect identification system based on a lightweight convolutional neural network is characterized by comprising the following steps:
s1, configuring the running environment of the fabric defect identification system;
s2, designing a factorizable convolution structure, and constructing a lightweight convolutional neural network LZFNet-Fast by utilizing the factorizable convolution structure; the construction method comprises the following steps: defining a two-dimensional surface convolution layer with a 3×3 local receptive field and a three-dimensional information fusion layer with a 1×1×n receptive field, compiling the two-dimensional surface convolution layer and the three-dimensional information fusion layer into a factorizable convolution structure, and building the lightweight convolutional neural network LZFNet-Fast by utilizing the factorizable convolution structure;
s3, collecting fabric image sample data, standardizing the fabric image sample data, and dividing the standardized fabric image sample data into a training image set and a test image set; the method of normalization is: converting fabric image sample data in a jpg format into a tfrecord format by using a tensor flow type machine learning library TFslim;
s4, inputting the training image set into a lightweight convolution neural network LZFNet-Fast for training by using a training strategy of asynchronous gradient descent to obtain an LZFNet-Fast model; the method comprises the following steps:
s41, activating a convolutional neural network acceleration library CUDNN7.0, and activating an artificial neural network library Keras and a tensor flow type machine learning library TFslim;
s42, converting the normal fabric images and the defective fabric images in the training image set into tfrecord-format files and storing them as two separate files;
s43, initializing the initial learning rate Θ and the iteration momentum parameter v_i, setting the iteration count i to 0 and the maximum number of iterations i_max;
S44, inputting the training image set into a lightweight convolutional neural network LZFNet-Fast for training by using an asynchronous gradient descent training strategy:
v_{i+1} = 0.9·v_i − α·Θ·w_i − Θ·〈∂L/∂w|_{w_i}〉_{D_i}
where α is the weight decay, L is the loss function, D_i is the number of training images at the i-th iteration, and w_i are the parameters of the LZFNet-Fast model to be trained;
updating the LZFNet-Fast model parameters: w_{i+1} = w_i + v_{i+1};
S45, increasing the iteration count by 1 and executing step S44 in a loop until the maximum number of iterations i_max is reached, ending the loop and generating the LZFNet-Fast model;
s5, inputting the test image set into the LZFNet-Fast model obtained in the step S4 for testing, and verifying the performance of the LZFNet-Fast model.
2. The method for constructing the fabric defect identification system based on the lightweight convolutional neural network as claimed in claim 1, wherein the operating environment of the fabric defect identification system in step S1 comprises a hardware system and a software system, a processor of the hardware system comprises two CPUs and two GPUs, the CPUs are Intel Xeon(R) E5-2650 v4 and the GPUs are NVIDIA Quadro M5000; the software system comprises an operating system and a convolution library, the operating system being Windows 10 and the convolution library being the convolutional neural network acceleration library CUDNN 7.0.
3. The method for building the fabric defect identification system based on the lightweight convolutional neural network is characterized in that the lightweight convolutional neural network LZFNet-Fast comprises a standard convolutional layer, nine factorizable convolutional structures, nineteen batch regularization layers, a global average pooling layer, a full connection layer and a Softmax classifier; the factorization convolution structure comprises a two-dimensional surface convolution layer and a three-dimensional information fusion layer, wherein the two-dimensional surface convolution layer is connected with a batch regularization layer, the batch regularization layer is connected with the three-dimensional information fusion layer, and the three-dimensional information fusion layer is connected with the batch regularization layer; and the last batch regularization layer is connected with the global average pooling layer, the global average pooling layer is connected with the full connection layer, and the full connection layer is connected with the Softmax classifier.
4. The method for constructing the fabric defect identification system based on the lightweight convolutional neural network as claimed in claim 3, wherein for an input feature map of size F × F the computation cost C_s of the standard convolution layer is: C_s = F × F × K × K × m × n, where m is the number of input channels, n is the number of output channels, and K × K is the convolution kernel size of the standard convolution layer; the factorizable convolution structure divides the convolution layer into a surface convolution layer and an information fusion layer, the convolution kernel of the surface convolution layer being K × K and the convolution kernel of the information fusion layer being 1 × 1, and the computation cost C_f of the factorizable convolution structure is: C_f = F × F × K × K × m + F × F × m × n; the ratio of the computation costs of the factorizable convolution structure and the standard convolution layer is:
C_f / C_s = (F × F × K × K × m + F × F × m × n) / (F × F × K × K × m × n) = 1/n + 1/K².
5. the method for building the fabric defect identification system based on the lightweight convolutional neural network is characterized in that the standard convolutional layers and the four two-dimensional convolutional layers are all subjected to convolution with the step of 2 instead of maximum pooling, all convolutional layers are subjected to batch standardization, an activation function adopts a modified linear unit with the value range of 0-6, and a Softmax classifier is adopted as a fabric defect judgment device at a terminal.
6. The construction method of the fabric defect identification system based on the lightweight convolutional neural network as claimed in claim 1, wherein the fabric image sample data are divided into two types, namely a normal fabric image and a defective fabric image, and the number of the normal fabric images is similar to that of the defective fabric images; the number of the training image sets accounts for 4/5 of the total number, and the number of the testing image sets accounts for 1/5 of the total number.
CN201910623819.6A 2019-07-11 2019-07-11 Method for constructing fabric defect identification system based on lightweight convolutional neural network Active CN110349146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910623819.6A CN110349146B (en) 2019-07-11 2019-07-11 Method for constructing fabric defect identification system based on lightweight convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910623819.6A CN110349146B (en) 2019-07-11 2019-07-11 Method for constructing fabric defect identification system based on lightweight convolutional neural network

Publications (2)

Publication Number Publication Date
CN110349146A CN110349146A (en) 2019-10-18
CN110349146B true CN110349146B (en) 2020-06-02

Family

ID=68175700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910623819.6A Active CN110349146B (en) 2019-07-11 2019-07-11 Method for constructing fabric defect identification system based on lightweight convolutional neural network

Country Status (1)

Country Link
CN (1) CN110349146B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827263B (en) * 2019-11-06 2021-01-29 创新奇智(南京)科技有限公司 Magnetic shoe surface defect detection system and detection method based on visual identification technology
CN110929603B (en) * 2019-11-09 2023-07-14 北京工业大学 Weather image recognition method based on lightweight convolutional neural network
CN110930387A (en) * 2019-11-21 2020-03-27 中原工学院 Fabric defect detection method based on depth separable convolutional neural network
CN111402203B (en) * 2020-02-24 2024-03-01 杭州电子科技大学 Fabric surface defect detection method based on convolutional neural network
CN111476138B (en) * 2020-03-31 2023-08-18 万翼科技有限公司 Construction method, identification method and related equipment for building drawing component identification model
CN111709429B (en) * 2020-06-01 2023-05-05 江南大学 Woven fabric structural parameter identification method based on convolutional neural network
CN112115986B (en) * 2020-08-31 2024-04-16 南京航空航天大学 Lightweight neural network-based power transmission line scene classification method
CN113298751A (en) * 2020-09-29 2021-08-24 湖南长天自控工程有限公司 Detection method for auxiliary door blockage
CN112418397B (en) * 2020-11-19 2021-10-26 重庆邮电大学 Image classification method based on lightweight convolutional neural network
CN112598657B (en) * 2020-12-28 2022-03-04 锋睿领创(珠海)科技有限公司 Defect detection method and device, model construction method and computer equipment
CN113158968A (en) * 2021-05-10 2021-07-23 苏州大学 Embedded object cognitive system based on image processing
CN113936001B (en) * 2021-12-17 2022-03-04 杭州游画丝界文化艺术发展有限公司 Textile surface flaw detection method based on image processing technology
CN115713533B (en) * 2023-01-10 2023-06-06 佰聆数据股份有限公司 Power equipment surface defect detection method and device based on machine vision
GB2623140A (en) * 2023-03-02 2024-04-10 Imagination Tech Ltd Methods and systems for performing a sparse submanifold convolution using an NNA

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325940A (en) * 2018-09-05 2019-02-12 深圳灵图慧视科技有限公司 Textile detecting method and device, computer equipment and computer-readable medium
CN109613006A (en) * 2018-12-22 2019-04-12 中原工学院 A kind of fabric defect detection method based on end-to-end neural network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010012381A1 (en) * 1997-09-26 2001-08-09 Hamed Sari-Sarraf Vision-based, on-loom fabric inspection system
CN101866427A (en) * 2010-07-06 2010-10-20 西安电子科技大学 Method for detecting and classifying fabric defects
CN104751472B (en) * 2015-04-10 2017-06-23 浙江工业大学 Fabric defect detection method based on B-spline small echo and deep neural network
WO2018208791A1 (en) * 2017-05-08 2018-11-15 Aquifi, Inc. Systems and methods for inspection and defect detection using 3-d scanning
CN107833220B (en) * 2017-11-28 2021-06-11 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN108717568B (en) * 2018-05-16 2019-10-22 陕西师范大学 A kind of image characteristics extraction and training method based on Three dimensional convolution neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325940A (en) * 2018-09-05 2019-02-12 深圳灵图慧视科技有限公司 Textile detecting method and device, computer equipment and computer-readable medium
CN109613006A (en) * 2018-12-22 2019-04-12 中原工学院 A kind of fabric defect detection method based on end-to-end neural network

Also Published As

Publication number Publication date
CN110349146A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN110349146B (en) Method for constructing fabric defect identification system based on lightweight convolutional neural network
CN107169956B (en) Color woven fabric defect detection method based on convolutional neural network
CN210428520U (en) Integrated circuit for deep learning acceleration
Marengoni et al. High level computer vision using opencv
CN110060237A (en) A kind of fault detection method, device, equipment and system
Masci et al. A fast learning algorithm for image segmentation with max-pooling convolutional networks
Li et al. Fabric defect detection based on biological vision modeling
CN109272500B (en) Fabric classification method based on adaptive convolutional neural network
CN109543548A (en) A kind of face identification method, device and storage medium
CN110930387A (en) Fabric defect detection method based on depth separable convolutional neural network
CN113408423B (en) Aquatic product target real-time detection method suitable for TX2 embedded platform
CN109359515A (en) A kind of method and device that the attributive character for target object is identified
CN105678788A (en) Fabric defect detection method based on HOG and low-rank decomposition
Wang et al. A fabric defect detection system based improved yolov5 detector
CN109344898A (en) Convolutional neural networks image classification method based on sparse coding pre-training
CN109376787A (en) Manifold learning network and computer visual image collection classification method based on it
CN116597224A (en) Potato defect detection method based on improved YOLO V8 network model
CN114913379A (en) Remote sensing image small sample scene classification method based on multi-task dynamic contrast learning
An et al. Fabric defect detection using deep learning: An Improved Faster R-approach
CN114332086A (en) Textile defect detection method and system based on style migration and artificial intelligence
CN111709429B (en) Woven fabric structural parameter identification method based on convolutional neural network
Guan et al. Defect detection and classification for plain woven fabric based on deep learning
CN109558803A (en) SAR target discrimination method based on convolutional neural networks Yu NP criterion
Li et al. Research on textile defect detection based on improved cascade R-CNN
CN116385401B (en) High-precision visual detection method for textile defects

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant