CN112270722B - Digital printing fabric defect detection method based on deep neural network - Google Patents

Digital printing fabric defect detection method based on deep neural network

Info

Publication number
CN112270722B
CN112270722B (application CN202011155761.6A)
Authority
CN
China
Prior art keywords
loss function
digital printing
defect
neural network
detection
Prior art date
Legal status
Active
Application number
CN202011155761.6A
Other languages
Chinese (zh)
Other versions
CN112270722A (en)
Inventor
苏泽斌
武静威
李鹏飞
景军锋
张缓缓
Current Assignee
Xian Polytechnic University
Original Assignee
Xian Polytechnic University
Priority date
Filing date
Publication date
Application filed by Xian Polytechnic University
Priority to CN202011155761.6A
Publication of CN112270722A
Application granted
Publication of CN112270722B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30124 Fabrics; Textile; Paper
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a digital printing fabric defect detection method based on a deep neural network, implemented according to the following steps: step 1, acquiring RGB color digital printing fabric defect images with a resolution of 416×416 and establishing a neural network; step 2, extracting and calibrating target information from the color digital printing fabric defect images obtained in step 1 and establishing a digital printing fabric defect sample data set, divided into a training set, a validation set and a test set; step 3, building a loss function for the neural network established in step 1, training the network on the ImageNet training set with this loss function to obtain a pre-trained model, then fine-tuning and validating the pre-trained model with the training and validation sets obtained in step 2; and step 4, evaluating the model with the test set obtained in step 2. The disclosed method achieves real-time, accurate detection of digital printing defects.

Description

Digital printing fabric defect detection method based on deep neural network
Technical Field
The invention belongs to the technical field of textile defect detection methods, and relates to a digital printing fabric defect detection method based on a deep neural network.
Background
Textiles are indispensable products in daily life, and printing is a key process for increasing their added value. Digital printing is a novel printing technology: a print image is loaded into a computer, separated into colors, converted into digital dot-matrix information by dedicated RIP software, and jetted onto the fabric in a fixed direction by nozzles whose aperture precision reaches the micron level, forming the desired high-precision printed pattern. However, faults such as nozzle blockage, motor stepping deviation, unstable ink-jet air pressure, nozzle ink dripping, uneven calibration, and uneven cloth pressing by the equipment can produce defects in the printed product, including PASS lines (pass-through channels), ink leakage, uneven ink jetting and cloth wrinkles, reducing the selling price of the product by 45-65% of its original price. To ensure the quality of digitally printed textile products, defect detection is therefore a central part of quality control in textile production.
Existing printed fabric defect detection methods fall mainly into two categories: methods based on traditional image processing and target detection methods based on deep neural networks. Traditional printed fabric defect detection methods suffer from low detection speed and poor accuracy, and their detection objects are mostly limited to grey cloth, plain dyed cloth, electronic cloth and colored fabrics with simple, uniform textures; for printed products with rich patterns and vivid colors, no mature detection method currently exists. Against this background, target detection methods based on deep neural networks have been widely applied to digital printing fabric defect detection.
Disclosure of Invention
The invention aims to provide a digital printing fabric defect detection method based on a deep neural network, which can realize real-time accurate detection of digital printing defects.
The technical scheme adopted by the invention is a digital printing fabric defect detection method based on a deep neural network, implemented according to the following steps:
step 1, acquiring RGB color digital printing fabric defect images with a resolution of 416×416, and establishing a neural network;
step 2, extracting and calibrating target information from the color digital printing fabric defect images obtained in step 1, and establishing a digital printing fabric defect sample data set to obtain a training set, a validation set and a test set;
step 3, building a loss function for the neural network established in step 1, training the network on the ImageNet training set with this loss function to obtain a pre-trained model, and fine-tuning and validating the pre-trained model with the training and validation sets obtained in step 2;
step 4, evaluating the model with the test set obtained in step 2.
The invention is also characterized in that:
In step 1, the acquisition of the 416×416 RGB color digital printing fabric defect images is implemented as follows: digital printing fabric defect images are obtained with a scanner, and the image resolution is adjusted to 416×416 with a local mean method. The samples cover 4 defect types (PASS lines, ink leakage, cloth wrinkles and uneven ink jetting), 800 images per type, 3200 sample images in total, uniformly named in the #####.jpg format.
The neural network in step 1 is established as follows; its structure is composed of:
1) The submodules of the CSPDarkNet feature extraction network are built. Each submodule adds a Cross Stage Partial (CSP) connection to the original DarkNet submodule, strengthening the semantic expression capability of the network while keeping its structure lightweight. Each submodule downsamples the input feature map by a factor of 2; stacking 5 submodules yields feature maps downsampled by factors of 8, 16 and 32 relative to the input image, which serve as the inputs of the feature fusion network;
2) Spatial pyramid pooling performs feature fusion on the feature maps extracted in 1), so that each pixel of the output feature map has a larger receptive field over the input image and can perform target detection; this increases the semantic information of each pixel and improves the model's detection capability for targets of different scales;
3) The path aggregation network enhances the position information of the top-layer feature maps through a bottom-up path and shortens the information path between low-layer and top-layer features; through the top-down and bottom-up bidirectional feature fusion paths, the strong semantic information of the top layer is fully fused with the strong position information of the bottom layer, improving the detection capability of the digital printing defect detection algorithm for targets of different sizes;
4) Hierarchical prediction realizes layered detection of large, medium and small targets, so that the top layer focuses on detecting large targets and the bottom layer focuses on detecting small targets.
Step 2 is implemented as follows: printing defect information in the color digital printing fabric defect images obtained in step 1 is annotated with the LabelImg tool. The annotation consists of entering the defect label and manually framing the defect, which generates coordinate information referenced to the upper-left corner of the image, together with the image size and storage path; the #####.xml file generated from the annotation corresponds to the #####.jpg image file. For each defect type, 600 samples are randomly selected for training, 50 for validation and 150 for testing.
Step 3 is implemented as follows: model training uses the Python language, the PyTorch deep learning framework and the third-party library distribution Anaconda 3.4.1. The network model is first pre-trained on the ImageNet data set; the pre-trained parameters are saved and used as the initial weights of the digital printing defect detection algorithm, and the model is then fine-tuned with the data set built in step 2. During backpropagation, a momentum stochastic gradient descent algorithm updates the weights and biases of the network, and the optimal digital printing defect detection model is obtained after 53 epochs of training.
The loss function in step 3 is established as follows:
1) Confidence loss function:
The confidence loss function alleviates the severe imbalance between positive and negative samples in single-stage target detection and the difficulty of detecting small target defects; it balances the relation between hard and easy samples, the numbers of positive and negative samples (i.e., whether the fabric contains a defect), and the numbers of large and small defect targets (defect size measured as a ratio of the input image size); equation (1) is the confidence loss function;
L_obj = -α·y·log(y′)·(1-y′)²·(1-β) - (1-α)·(1-y)·log(1-y′)·(y′)²   (1)
In the above formula, y is the ground-truth value indicating that the anchor contains a target, y′ is its corresponding predicted value, the parameter α adjusts the proportion of positive and negative samples, and β is the ratio of the real frame area to the input image area;
2) Category loss function:
The category loss function increases the weight of small target defects and balances the proportions of large and small targets in the loss, so that detection focuses more on small targets; equation (2) is the category loss function;
L_cls = -y·log(y′)·(1-β) - (1-y)·log(1-y′)   (2)
In the above formula, y is the ground-truth value of the anchor's target category, y′ is its corresponding predicted value, and β is the ratio of the real frame area to the input image area;
3) Bounding box regression loss function:
The bounding box regression loss function considers the overlap area, center point distance and aspect ratio of the predicted frame and the target frame; it balances the regression influence of defect targets of different sizes on the bounding box and improves the regression capability for small target bounding boxes; equation (3) is the bounding box regression loss function;
L_CIoU = 1 - IoU + ρ²(b, b^gt)/c² + α·u   (3)
In the above formula, ρ(b, b^gt) is the Euclidean distance between the center points of the predicted frame and the real frame, c is the diagonal length of the minimum enclosing rectangle of the two frames, α is a trade-off parameter, u measures the consistency of the aspect ratio, and λ is the ratio of the real frame area to the input image area;
4) Total loss function:
The total loss function is the superposition of the confidence loss function, the category loss function and the bounding box regression loss function; equation (4) is the total loss function;
L_sum = ε·L_obj + φ·L_cls + γ·L_CIoU   (4)
In the above formula, ε, φ and γ balance the proportions of the confidence loss, category loss and bounding box regression loss in the total loss.
Step 4 is implemented as follows: to evaluate the detection performance of the digital printing defect detection model for different defect types, the AP metric is selected to measure the detection accuracy for each defect type, and the mAP metric comprehensively measures the detection accuracy over all defect types, completing the evaluation of the digital printing fabric defect detection model.
The beneficial effects of the invention are as follows: the digital printing fabric defect detection method based on the deep neural network achieves real-time, accurate detection of digital printing defects. The method inherits the core structure of the YOLOv4 target detection algorithm and retains its high speed and high accuracy. The network model is first pre-trained on the ImageNet data set, the pre-trained parameters are saved as the initial weights of the digital printing defect detection algorithm, and the model is then fine-tuned with the digital printing data set. The loss function fully considers the relation between hard and easy samples, the numbers of positive and negative samples (i.e., whether the fabric contains a defect) and the numbers of large and small defect targets (defect size as a ratio of the input image size), improving the detection accuracy for small target defects (PASS lines, ink leakage and cloth wrinkles).
Drawings
FIG. 1 is a flow chart of the digital printing fabric defect detection method based on a deep neural network according to the present invention;
FIG. 2 is a schematic diagram of a submodule of the CSPDarkNet feature extraction network used in the method;
FIG. 3 is a schematic diagram of the spatial pyramid pooling structure used in the method;
FIG. 4 is a schematic diagram of the path aggregation network used in the method;
FIG. 5 is a schematic diagram of experimental results of the method.
Detailed Description
The invention is described in detail below with reference to the drawings and specific embodiments.
The invention discloses a digital printing fabric defect detection method based on a deep neural network which, as shown in FIG. 1, is implemented according to the following steps:
step 1, acquiring RGB color digital printing fabric defect images with a resolution of 416×416, and establishing a neural network;
In step 1, the acquisition of the 416×416 RGB color digital printing fabric defect images is implemented as follows: digital printing fabric defect images are obtained with a scanner, and the image resolution is adjusted to 416×416 with a local mean method. The samples cover 4 defect types (PASS lines, ink leakage, cloth wrinkles and uneven ink jetting), 800 images per type, 3200 sample images in total, uniformly named in the #####.jpg format.
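For illustration only, the local mean method can be realized as block averaging. Below is a minimal sketch, assuming the scanned image is a NumPy RGB array whose height and width are integer multiples of 416 (a real implementation would pad or interpolate the remainder); the function name is illustrative, not from the patent.

```python
import numpy as np

def local_mean_resize(img: np.ndarray, out_size: int = 416) -> np.ndarray:
    """Downsample an RGB image by averaging local blocks (local mean method)."""
    h, w, c = img.shape
    bh, bw = h // out_size, w // out_size          # block size per output pixel
    img = img[: bh * out_size, : bw * out_size]    # trim any remainder
    blocks = img.reshape(out_size, bh, out_size, bw, c)
    return blocks.mean(axis=(1, 3)).astype(img.dtype)  # average each block
```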
The neural network in step 1 is established as follows; its structure is composed of:
1) The submodules of the CSPDarkNet feature extraction network are built, as shown in FIG. 2. Each submodule adds a Cross Stage Partial (CSP) connection to the original DarkNet submodule, strengthening the semantic expression capability of the network while keeping its structure lightweight. Each submodule downsamples the input feature map by a factor of 2; stacking 5 submodules yields feature maps downsampled by factors of 8, 16 and 32 relative to the input image, which serve as the inputs of the feature fusion network (a code sketch of this submodule follows this list);
2) Spatial pyramid pooling performs feature fusion on the feature maps extracted in 1), as shown in FIG. 3, so that each pixel of the output feature map has a larger receptive field over the input image and can perform target detection; this increases the semantic information of each pixel and improves the model's detection capability for targets of different scales;
3) The path aggregation network, shown in FIG. 4, enhances the position information of the top-layer feature maps through a bottom-up path and shortens the information path between low-layer and top-layer features; through the top-down and bottom-up bidirectional feature fusion paths, the strong semantic information of the top layer is fully fused with the strong position information of the bottom layer, improving the detection capability of the digital printing defect detection algorithm for targets of different sizes;
4) Hierarchical prediction realizes layered detection of large, medium and small targets, so that the top layer focuses on detecting large targets and the bottom layer focuses on detecting small targets.
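As a concrete illustration of item 1), the sketch below shows one plausible PyTorch realization of a CSPDarkNet-style submodule: a stride-2 convolution for the 2× downsampling, followed by a Cross Stage Partial split in which half of the channels pass through residual convolutions while the other half bypasses them, and a 1×1 convolution that fuses the concatenated halves. The channel widths, activation and residual depth are assumptions, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class ConvBnAct(nn.Module):
    """Conv + BatchNorm + LeakyReLU, the basic DarkNet-style unit."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_out),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class CSPBlock(nn.Module):
    """One CSPDarkNet submodule: 2x downsample, then a Cross Stage Partial split."""
    def __init__(self, c_in, c_out, n_res=1):
        super().__init__()
        self.down = ConvBnAct(c_in, c_out, k=3, s=2)      # 2x downsampling
        self.split_a = ConvBnAct(c_out, c_out // 2, k=1)  # dense (residual) path
        self.split_b = ConvBnAct(c_out, c_out // 2, k=1)  # cross-stage bypass path
        self.res = nn.Sequential(*[
            nn.Sequential(ConvBnAct(c_out // 2, c_out // 2, k=1),
                          ConvBnAct(c_out // 2, c_out // 2, k=3))
            for _ in range(n_res)
        ])
        self.fuse = ConvBnAct(c_out, c_out, k=1)          # merge the two paths

    def forward(self, x):
        x = self.down(x)
        a = self.split_a(x)
        a = a + self.res(a)                               # partial residual path
        b = self.split_b(x)                               # bypass path
        return self.fuse(torch.cat([a, b], dim=1))
```

Stacking five such blocks and tapping the outputs of the last three would yield the 8×, 16× and 32× downsampled feature maps that feed the feature fusion network.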
Step 2, extracting and calibrating target information from the color digital printing fabric defect images obtained in step 1, and establishing a digital printing fabric defect sample data set to obtain a training set, a validation set and a test set;
Step 2 is implemented as follows: printing defect information in the color digital printing fabric defect images obtained in step 1 is annotated with the LabelImg tool. The annotation consists of entering the defect label and manually framing the defect, which generates coordinate information referenced to the upper-left corner of the image, together with the image size and storage path; the #####.xml file generated from the annotation corresponds to the #####.jpg image file. For each defect type, 600 samples are randomly selected for training, 50 for validation and 150 for testing.
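The per-class 600/50/150 split can be sketched as follows; the class names and the <class>_####.jpg naming convention are hypothetical placeholders, since the patent only specifies a uniform #####.jpg pattern with matching LabelImg #####.xml files.

```python
import random
from pathlib import Path

def split_dataset(image_dir: str, seed: int = 0):
    """Randomly split 800 images per defect type into 600 train / 50 val / 150 test."""
    classes = ["pass_lane", "ink_leakage", "cloth_wrinkle", "uneven_inkjet"]  # hypothetical names
    rng = random.Random(seed)
    train, val, test = [], [], []
    for cls in classes:
        imgs = sorted(Path(image_dir).glob(f"{cls}_*.jpg"))
        rng.shuffle(imgs)
        train += imgs[:600]
        val += imgs[600:650]
        test += imgs[650:800]
    return train, val, test
```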
Step 3, building a loss function for the neural network established in step 1, training the network on the ImageNet training set with this loss function to obtain a pre-trained model, and fine-tuning and validating the pre-trained model with the training and validation sets obtained in step 2;
Step 3 is implemented as follows: model training uses the Python language, the PyTorch deep learning framework and the third-party library distribution Anaconda 3.4.1. The network model is first pre-trained on the ImageNet data set; the pre-trained parameters are saved and used as the initial weights of the digital printing defect detection algorithm, and the model is then fine-tuned with the data set built in step 2. During backpropagation, a momentum stochastic gradient descent algorithm updates the weights and biases of the network, and the optimal digital printing defect detection model is obtained after 53 epochs of training.
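The fine-tuning stage can be sketched as a standard momentum SGD loop. The learning rate and momentum values below are assumptions (the patent fixes only the 53 epochs), and `model` is assumed to return the total loss of equation (4) for a batch.

```python
import torch

def train(model, train_loader, epochs=53, lr=1e-3, momentum=0.9):
    """Momentum SGD fine-tuning loop: backpropagation updates weights and biases."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    for _ in range(epochs):
        for images, targets in train_loader:
            loss = model(images, targets)  # assumed to return L_sum of equation (4)
            opt.zero_grad()
            loss.backward()                # backpropagation
            opt.step()                     # momentum SGD update
```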
The loss function in step 3 is established as follows:
1) Confidence loss function:
The confidence loss function alleviates the severe imbalance between positive and negative samples in single-stage target detection and the difficulty of detecting small target defects; it balances the relation between hard and easy samples, the numbers of positive and negative samples (i.e., whether the fabric contains a defect), and the numbers of large and small defect targets (defect size measured as a ratio of the input image size); equation (1) is the confidence loss function;
L_obj = -α·y·log(y′)·(1-y′)²·(1-β) - (1-α)·(1-y)·log(1-y′)·(y′)²   (1)
In the above formula, y is the ground-truth value indicating that the anchor contains a target, y′ is its corresponding predicted value, the parameter α adjusts the proportion of positive and negative samples, and β is the ratio of the real frame area to the input image area;
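Equation (1) transcribes directly into PyTorch; in this sketch y, y′ and β are assumed to be tensors aligned per anchor, and the clamp is added only for numerical safety.

```python
import torch

def confidence_loss(y, y_pred, alpha, beta, eps=1e-7):
    """Equation (1): focal-style confidence loss.
    y: 1 if the anchor contains a target, else 0; y_pred: predicted objectness;
    alpha: positive/negative balance; beta: real frame area / input image area."""
    y_pred = y_pred.clamp(eps, 1 - eps)
    pos = -alpha * y * torch.log(y_pred) * (1 - y_pred) ** 2 * (1 - beta)
    neg = -(1 - alpha) * (1 - y) * torch.log(1 - y_pred) * y_pred ** 2
    return pos + neg
```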
2) Category loss function:
The category loss function increases the weight of small target defects and balances the proportions of large and small targets in the loss, so that detection focuses more on small targets; equation (2) is the category loss function;
L_cls = -y·log(y′)·(1-β) - (1-y)·log(1-y′)   (2)
In the above formula, y is the ground-truth value of the anchor's target category, y′ is its corresponding predicted value, and β is the ratio of the real frame area to the input image area;
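Equation (2) transcribes the same way; the (1 - β) factor on the positive term is what shifts weight toward small targets.

```python
import torch

def category_loss(y, y_pred, beta, eps=1e-7):
    """Equation (2): cross entropy whose positive term is scaled by (1 - beta)."""
    y_pred = y_pred.clamp(eps, 1 - eps)
    return -y * torch.log(y_pred) * (1 - beta) - (1 - y) * torch.log(1 - y_pred)
```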
3) Bounding box regression loss function:
The bounding box regression loss function considers the overlap area, center point distance and aspect ratio of the predicted frame and the target frame; it balances the regression influence of defect targets of different sizes on the bounding box and improves the regression capability for small target bounding boxes; equation (3) is the bounding box regression loss function;
L_CIoU = 1 - IoU + ρ²(b, b^gt)/c² + α·u   (3)
In the above formula, ρ(b, b^gt) is the Euclidean distance between the center points of the predicted frame and the real frame, c is the diagonal length of the minimum enclosing rectangle of the two frames, α is a trade-off parameter, u measures the consistency of the aspect ratio, and λ is the ratio of the real frame area to the input image area;
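Below is a sketch of the standard CIoU computation that equation (3) builds on, for corner-format boxes (x1, y1, x2, y2). It follows the formulation implied by the variable definitions above and deliberately omits the area-ratio weighting λ, whose exact placement in the formula the patent text does not spell out.

```python
import math
import torch

def ciou_loss(pred, gt, eps=1e-7):
    """CIoU loss: 1 - IoU + rho^2(b, b_gt)/c^2 + alpha * u."""
    # intersection and union
    iw = (torch.min(pred[2], gt[2]) - torch.max(pred[0], gt[0])).clamp(min=0)
    ih = (torch.min(pred[3], gt[3]) - torch.max(pred[1], gt[1])).clamp(min=0)
    inter = iw * ih
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter + eps)
    # squared center distance rho^2 and enclosing-box diagonal c^2
    rho2 = ((pred[0] + pred[2]) - (gt[0] + gt[2])) ** 2 / 4 \
         + ((pred[1] + pred[3]) - (gt[1] + gt[3])) ** 2 / 4
    cw = torch.max(pred[2], gt[2]) - torch.min(pred[0], gt[0])
    ch = torch.max(pred[3], gt[3]) - torch.min(pred[1], gt[1])
    c2 = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency u and trade-off parameter alpha
    u = (4 / math.pi ** 2) * (torch.atan((gt[2] - gt[0]) / (gt[3] - gt[1]))
                              - torch.atan((pred[2] - pred[0]) / (pred[3] - pred[1]))) ** 2
    alpha = u / (1 - iou + u + eps)
    return 1 - iou + rho2 / c2 + alpha * u
```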
4) Total loss function:
The total loss function is the superposition of the confidence loss function, the category loss function and the bounding box regression loss function; equation (4) is the total loss function;
L_sum = ε·L_obj + φ·L_cls + γ·L_CIoU   (4)
In the above formula, ε, φ and γ balance the proportions of the confidence loss, category loss and bounding box regression loss in the total loss.
Step 4, evaluating the model with the test set obtained in step 2.
Step 4 is implemented as follows: to evaluate the detection performance of the digital printing defect detection model for different defect types, the AP metric is selected to measure the detection accuracy for each defect type, and the mAP metric comprehensively measures the detection accuracy over all defect types, completing the evaluation of the digital printing fabric defect detection model; the experimental results are shown in FIG. 5.
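For concreteness, AP can be computed as the area under the precision-recall curve; the all-point interpolation below is one common convention, since the patent does not specify which AP variant it uses.

```python
import numpy as np

def average_precision(recalls: np.ndarray, precisions: np.ndarray) -> float:
    """AP as the area under the precision-recall curve (all-point interpolation)."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]      # make precision monotone
    idx = np.where(r[1:] != r[:-1])[0]            # points where recall changes
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def mean_ap(ap_per_class: dict) -> float:
    """mAP: mean of the per-class APs over the 4 defect types."""
    return sum(ap_per_class.values()) / len(ap_per_class)
```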
In the digital printing fabric defect detection method based on a deep neural network of the invention, the input image size is 416×416; a series of convolution and pooling layers produces feature maps of sizes 13×13, 26×26 and 52×52 for predicting defect categories and regressing bounding boxes. Taking the 13×13 feature map as an example, the input image is first divided evenly into a 13×13 grid, and each pixel of the feature map is responsible for predicting defects at the corresponding location. Each pixel predicts 3 bounding boxes, and each bounding box has 9 components: the center position (x, y) of the target, its width (w) and height (h), a confidence score, and 4 category scores (uneven ink jetting, PASS lines, ink leakage and cloth wrinkles). Finally, a non-maximum suppression algorithm refines the target positions, further improving digital printing defect detection performance. Experimental results show that, compared with traditional target detection algorithms, the digital printing defect detection algorithm based on a deep neural network achieves real-time, accurate detection of digital printing defects and has practical value.
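The decoding and non-maximum suppression step for the 13×13 head can be sketched as follows. The (13, 13, 3, 9) layout and the absolute-pixel, center-format box encoding are assumptions consistent with the 9 components listed above, and torchvision's `nms` operator stands in for the suppression step.

```python
import torch
from torchvision.ops import nms

def decode_and_nms(pred: torch.Tensor, conf_thresh=0.5, iou_thresh=0.45):
    """pred: (13, 13, 3, 9) tensor -- 3 boxes per grid cell, each holding
    (x, y, w, h), an objectness confidence, and 4 category scores."""
    pred = pred.reshape(-1, 9)
    conf = pred[:, 4]
    keep = conf > conf_thresh                     # drop low-confidence boxes
    boxes_xywh, conf, cls = pred[keep, :4], conf[keep], pred[keep, 5:]
    xy, wh = boxes_xywh[:, :2], boxes_xywh[:, 2:]
    boxes = torch.cat([xy - wh / 2, xy + wh / 2], dim=1)  # center -> corner format
    kept = nms(boxes, conf, iou_thresh)           # class-agnostic suppression
    return boxes[kept], conf[kept], cls[kept].argmax(dim=1)
```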

Claims (4)

1. A digital printing fabric defect detection method based on a deep neural network, characterized by comprising the following steps:
step 1, acquiring RGB color digital printing fabric defect images with a resolution of 416×416, and establishing a neural network;
The establishment of the neural network is implemented as follows, the network being composed of:
1) The submodules of the CSPDarkNet feature extraction network are built. Each submodule adds a Cross Stage Partial (CSP) connection to the original DarkNet submodule, strengthening the semantic expression capability of the network while keeping its structure lightweight. Each submodule downsamples the input feature map by a factor of 2; stacking 5 submodules yields feature maps downsampled by factors of 8, 16 and 32 relative to the input image, which serve as the inputs of the feature fusion network;
2) Spatial pyramid pooling performs feature fusion on the feature maps extracted in 1), so that the pixels of the output feature map have receptive fields over the input image and each pixel of the output feature map can realize target detection; this increases the semantic information of each pixel and improves the model's detection capability for targets of different scales;
3) The path aggregation network enhances the position information of the top-layer feature maps through a bottom-up path and shortens the information path between low-layer and top-layer features; through the top-down and bottom-up bidirectional feature fusion paths, the strong semantic information of the top layer is fully fused with the strong position information of the bottom layer, improving the detection capability of the digital printing defect detection algorithm for targets of different sizes;
4) Hierarchical prediction realizes layered detection of large, medium and small targets, so that the top layer focuses on detecting large targets and the bottom layer focuses on detecting small targets;
step 2, extracting and calibrating target information from the color digital printing fabric defect images obtained in step 1, and establishing a digital printing fabric defect sample data set to obtain a training set, a validation set and a test set;
step 3, building a loss function for the neural network established in step 1, training the network on the ImageNet training set with this loss function to obtain a pre-trained model, and fine-tuning and validating the pre-trained model with the training and validation sets obtained in step 2;
model training uses the Python language, the PyTorch deep learning framework and the third-party library distribution Anaconda 3.4.1; the network model is first pre-trained on the ImageNet data set, the pre-trained parameters are saved and used as the initial weights of the digital printing defect detection algorithm, and the model is then fine-tuned with the data set; during backpropagation, a momentum stochastic gradient descent algorithm updates the weights and biases of the network, and the optimal digital printing defect detection model is obtained after 53 epochs of training;
the loss function in step 3 is established as follows:
1) Confidence loss function:
The confidence loss function alleviates the severe imbalance between positive and negative samples in single-stage target detection and the difficulty of detecting small target defects; it balances the relation between hard and easy samples, the numbers of positive and negative samples, and the numbers of large and small fabric defect targets, the positive and negative samples indicating whether the fabric has defects, that is, positive samples denote fabric without defects and negative samples denote fabric with defects; the size of a fabric defect is expressed as the ratio of the defect size to the input image size; equation (1) is the confidence loss function;
L_obj = -δ·y·log(y′)·(1-y′)²·(1-β) - (1-δ)·(1-y)·log(1-y′)·(y′)²   (1)
In the above formula, y is the ground-truth value indicating that the anchor contains a target, y′ is its corresponding predicted value, the parameter δ adjusts the proportion of samples of fabric with and without defects, and β is the ratio of the real frame area to the input image area;
2) Category loss function:
The category loss function increases the weight of small target defects and balances the proportions of large and small targets in the loss, so that detection focuses more on small targets; equation (2) is the category loss function;
L_cls = -y·log(y′)·(1-β) - (1-y)·log(1-y′)   (2)
In the above formula, y is the ground-truth value of the anchor's target category, y′ is its corresponding predicted value, and β is the ratio of the real frame area to the input image area;
3) Bounding box regression loss function:
The bounding box regression loss function considers the overlap area, center point distance and aspect ratio of the predicted frame and the target frame; it balances the regression influence of defect targets of different sizes on the bounding box and improves the regression capability for small target bounding boxes; equation (3) is the bounding box regression loss function;
L_CIoU = 1 - IoU + ρ²(b, b^gt)/c² + α·u   (3)
In the above formula, ρ(b, b^gt) is the Euclidean distance between the center points of the predicted frame and the real frame, IoU is the intersection-over-union of the predicted frame and the real frame, b is the center point of the predicted frame, b^gt is the center point of the real frame, w^gt and h^gt are the width and height of the real frame, w and h are the width and height of the predicted frame, c is the diagonal length of the minimum enclosing rectangle of the predicted and real frames, α is a trade-off parameter, u = (4/π²)·(arctan(w^gt/h^gt) - arctan(w/h))² measures the consistency of the aspect ratio, and β is the ratio of the real frame area to the input image area;
4) Total loss function:
The total loss function is the superposition of the confidence loss function, the category loss function and the bounding box regression loss function; equation (4) is the total loss function;
L_sum = ε·L_obj + φ·L_cls + γ·L_CIoU   (4)
In the above formula, ε, φ and γ balance the proportions of the confidence loss, category loss and bounding box regression loss in the total loss;
and step 4, evaluating the model with the test set obtained in step 2.
2. The digital printing fabric defect detection method based on a deep neural network according to claim 1, wherein the acquisition of the 416×416 RGB color digital printing fabric defect images in step 1 is implemented as follows: digital printing fabric defect images are obtained with a scanner, and the image resolution is adjusted to 416×416 with a local mean method; the samples cover 4 defect types (PASS lines, ink leakage, cloth wrinkles and uneven ink jetting), 800 images per type, 3200 sample images in total, uniformly named in the #####.jpg format.
3. The digital printing fabric defect detection method based on a deep neural network according to claim 1, wherein step 2 is implemented as follows: printing defect information in the color digital printing fabric defect images obtained in step 1 is annotated with the LabelImg tool; the annotation consists of entering the defect label and manually framing the defect, which generates coordinate information referenced to the upper-left corner of the image, together with the image size and storage path, and the #####.xml file generated from the annotation corresponds to the #####.jpg image file; for each defect type, 600 samples are randomly selected for training, 50 for validation and 150 for testing.
4. The digital printing fabric defect detection method based on a deep neural network according to claim 1, wherein step 4 is implemented as follows: to evaluate the detection performance of the digital printing defect detection model for different defect types, the AP metric is selected to measure the detection accuracy for each defect type, and the mAP metric comprehensively measures the detection accuracy over all defect types, completing the evaluation of the digital printing fabric defect detection model.
CN202011155761.6A 2020-10-26 2020-10-26 Digital printing fabric defect detection method based on deep neural network Active CN112270722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011155761.6A CN112270722B (en) 2020-10-26 2020-10-26 Digital printing fabric defect detection method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011155761.6A CN112270722B (en) 2020-10-26 2020-10-26 Digital printing fabric defect detection method based on deep neural network

Publications (2)

Publication Number Publication Date
CN112270722A (en) 2021-01-26
CN112270722B (en) 2024-05-17

Family

ID=74341424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011155761.6A Active CN112270722B (en) 2020-10-26 2020-10-26 Digital printing fabric defect detection method based on deep neural network

Country Status (1)

Country Link
CN (1) CN112270722B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907529A (en) * 2021-02-09 2021-06-04 南京航空航天大学 Image-based woven preform defect detection method and device
CN112906794A (en) * 2021-02-22 2021-06-04 珠海格力电器股份有限公司 Target detection method, device, storage medium and terminal
CN113192040B (en) * 2021-05-10 2023-09-22 浙江理工大学 Fabric flaw detection method based on YOLO v4 improved algorithm
CN113313694A (en) * 2021-06-05 2021-08-27 西北工业大学 Surface defect rapid detection method based on light-weight convolutional neural network
CN113377356B (en) * 2021-06-11 2022-11-15 四川大学 Method, device, equipment and medium for generating user interface prototype code
CN113313706B (en) * 2021-06-28 2022-04-15 安徽南瑞继远电网技术有限公司 Power equipment defect image detection method based on detection reference point offset analysis
CN113538392B (en) * 2021-07-26 2022-11-11 长江存储科技有限责任公司 Wafer detection method, wafer detection equipment and storage medium
CN113516650B (en) * 2021-07-30 2023-08-25 深圳康微视觉技术有限公司 Circuit board hole plugging defect detection method and device based on deep learning
CN114519803A (en) * 2022-01-24 2022-05-20 东莞理工学院 Small sample target identification method based on transfer learning
CN114397306B (en) * 2022-03-25 2022-07-29 南方电网数字电网研究院有限公司 Power grid grading ring hypercomplex category defect multi-stage model joint detection method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316295A (en) * 2017-07-02 2017-11-03 苏州大学 A kind of fabric defects detection method based on deep neural network
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN111127383A (en) * 2019-03-15 2020-05-08 杭州电子科技大学 Digital printing online defect detection system and implementation method thereof
CN109886359A (en) * 2019-03-25 2019-06-14 西安电子科技大学 Small target detecting method and detection model based on convolutional neural networks
CN110930387A (en) * 2019-11-21 2020-03-27 中原工学院 Fabric defect detection method based on depth separable convolutional neural network
CN111292305A (en) * 2020-01-22 2020-06-16 重庆大学 Improved YOLO-V3 metal processing surface defect detection method
CN111462051A (en) * 2020-03-14 2020-07-28 华中科技大学 Cloth defect detection method and system based on deep neural network
CN111553898A (en) * 2020-04-27 2020-08-18 东华大学 Fabric defect detection method based on convolutional neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research on fabric defect detection based on SSD; 张丽瑶, 王志鹏, 徐功平; Electronic Design Engineering (06); full text *
Fabric surface defect classification method based on convolutional neural network; 景军锋, 刘娆; Measurement & Control Technology (09); full text *
Research on an aramid belt detection algorithm based on deep-learning-optimized YOLOV3; 杨建伟, 涂兴子, 梅峰漳, 李亚宁, 范鑫杰; China Mining Magazine; 2020-04-15 (04); full text *
Rail surface defect detection system based on Bayesian CNN and attention network; 金侠挺, 王耀南, 张辉, 刘理, 钟杭, 贺振东; Acta Automatica Sinica (12); full text *

Also Published As

Publication number Publication date
CN112270722A (en) 2021-01-26

Similar Documents

Publication Publication Date Title
CN112270722B (en) Digital printing fabric defect detection method based on deep neural network
CN109446992A (en) Remote sensing image building extracting method and system, storage medium, electronic equipment based on deep learning
CN108830285A (en) A kind of object detection method of the reinforcement study based on Faster-RCNN
CN109583483A (en) A kind of object detection method and system based on convolutional neural networks
CN107609638A (en) A kind of method based on line decoder and interpolation sampling optimization convolutional neural networks
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
CN110287806A (en) A kind of traffic sign recognition method based on improvement SSD network
CN110045015A (en) A kind of concrete structure Inner Defect Testing method based on deep learning
CN115731164A (en) Insulator defect detection method based on improved YOLOv7
CN108665005A (en) A method of it is improved based on CNN image recognition performances using DCGAN
CN114549507B (en) Improved Scaled-YOLOv fabric flaw detection method
CN115049619B (en) Efficient flaw detection method for complex scene
CN109360190A (en) Building based on image superpixel fusion damages detection method and device
CN115661628A (en) Fish detection method based on improved YOLOv5S model
CN113496480A (en) Method for detecting weld image defects
CN117474863A (en) Chip surface defect detection method for compressed multi-head self-attention neural network
CN116029979A (en) Cloth flaw visual detection method based on improved Yolov4
CN115187544A (en) DR-RSBU-YOLOv 5-based fabric flaw detection method
CN114972780A (en) Lightweight target detection network based on improved YOLOv5
CN109815957A (en) A kind of character recognition method based on color image under complex background
CN109284752A (en) A kind of rapid detection method of vehicle
CN117372332A (en) Fabric flaw detection method based on improved YOLOv7 model
CN115206455B (en) Deep neural network-based rare earth element component content prediction method and system
CN113469984B (en) Method for detecting appearance of display panel based on YOLO structure
CN116309398A (en) Printed circuit board small target defect detection method based on multi-channel feature fusion learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant