CN114240885A - Cloth flaw detection method based on improved Yolov4 network - Google Patents

Cloth flaw detection method based on improved Yolov4 network

Info

Publication number
CN114240885A
CN114240885A (application CN202111549933.2A; granted as CN114240885B)
Authority
CN
China
Prior art keywords
box
loss
cloth
eiou
network
Prior art date
Legal status
Granted
Application number
CN202111549933.2A
Other languages
Chinese (zh)
Other versions
CN114240885B (en)
Inventor
岳希
王庆
唐聃
何磊
刘敦龙
Current Assignee
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date
Filing date
Publication date
Application filed by Chengdu University of Information Technology
Priority to CN202111549933.2A
Publication of CN114240885A
Application granted
Publication of CN114240885B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G06F 18/232: Non-hierarchical techniques
    • G06F 18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses a cloth flaw detection method based on an improved Yolov4 network, which comprises: collecting cloth surface image information; inputting the image information into an improved Yolov4 network model for flaw detection; and outputting the detection result. The method aims to solve the problem that the Yolov4 network of the prior art is difficult to apply directly to cloth flaw detection, and to improve both detection efficiency and detection precision.

Description

Cloth flaw detection method based on improved Yolov4 network
Technical Field
The invention relates to the field of textile production, in particular to a cloth flaw detection method based on an improved Yolov4 network.
Background
The textile industry has long been a traditional pillar of China's national economy and an important livelihood industry, and it is also an industry in which China holds an obvious international competitive advantage, playing an important role in prospering the market, expanding exports, absorbing employment, increasing farmers' income, and promoting urbanization. In cloth production, cloth flaws are a key factor affecting cloth quality. At present, textile and clothing enterprises mainly detect cloth flaws with the traditional manual naked eye, which suffers from high cost, low efficiency, and high rates of missed and false detection. To overcome these problems, the industry has gradually begun to use deep learning methods to improve the efficiency and accuracy of cloth flaw detection.
Popular target detection algorithms in the prior art are generally divided into two categories: two-stage detectors, represented by Faster R-CNN and FPN, which offer high precision but low efficiency; and single-stage detectors, represented by SSD and YOLO, which offer lower precision but high efficiency. Because cloth flaw detection demands high throughput, single-stage detectors better meet the efficiency requirements of this field.
The Yolov4 algorithm is a recent algorithm derived from the traditional YOLO detection architecture, with optimizations of varying degrees in data processing, the backbone network, network training, activation functions, loss functions, and other aspects. However, detecting cloth flaws with the unmodified Yolov4 algorithm still raises several problems. First, when the data volume is small, the Yolov4 network cannot converge well, and the Mosaic data enhancement built into Yolov4 does not solve this. Second, many cloth flaws are very small targets that cannot be predicted well, because the Yolov4 Anchors are defined on the COCO data set. Third, the Yolov4 network applies the same learning weight to every pixel of the image during learning and cannot focus on the most valuable pixel information. Fourth, the Yolov4 loss function defines the aspect ratio ambiguously and offers no effective remedy for the BBox sample imbalance problem.
In summary, the existing Yolov4 network remains difficult to apply directly to visual inspection of cloth flaws.
Disclosure of Invention
The invention provides a cloth flaw detection method based on an improved Yolov4 network, which aims to solve the problem that the Yolov4 network of the prior art is difficult to apply directly to cloth flaw detection, and to improve detection efficiency and detection precision.
The invention is realized by the following technical scheme:
a cloth defect detection method based on an improved Yolov4 network comprises the steps of
S1, collecting the image information of the surface of the cloth;
s2, inputting the image information into an improved Yolov4 network model for flaw detection;
and S3, outputting the detection result.
Aiming at the problem that the prior-art Yolov4 network is difficult to apply directly to cloth defect detection, the invention provides a cloth defect detection method based on an improved Yolov4 network, where the improved Yolov4 network model is established as follows:
S21, acquiring flawed-cloth images as a data set, and performing data enhancement on the data set;
S22, inputting the enhanced data set into a Yolov4 network;
S23, clustering the check boxes in the data set, and replacing the initial check boxes in the Yolov4 network with the clustered check boxes;
S24, adding an attention module;
S25, adding a 104x104 feature output layer to the Yolov4 network;
S26, determining the frame loss function CEIOU_Loss;
and S27, inputting the data set into the network for training, and taking the trained model as the improved Yolov4 network model.
The data enhancement overcomes the problem that the training of the existing Yolov4 network cannot converge correctly when the data volume is small. Its purpose is to expand the original data set to n times its original size as required; for each picture, while the picture is read, every label in the corresponding xml annotation file and the data of the corresponding detection frames are also read, and every operation applied to the picture is applied to the detection frames as well.
The enhanced data are input into the existing Yolov4 network. Because the initial Anchors of the existing Yolov4 network cannot accurately and effectively predict small targets, the application clusters the check boxes in the original cloth data set and replaces the initial check boxes in the Yolov4 network with the clustered ones, thereby achieving the effect of customizing small-target check boxes suited to cloth detection.
The attention module is added to overcome the problem that the existing Yolov4 network applies the same learning weight to every pixel of an image during learning and cannot value the most informative pixels; the attention module makes the network more sensitive to small targets and exerts more attention on the regions containing detection frames.
In extensive experimental research, the inventors found that even with the aforementioned data enhancement, check-box clustering, and attention enhancement, the original Yolov4 network still fails to detect slight, tiny cloth flaws. The reason is that Yolov4 predicts from 3 feature scales, dividing the whole image into 13x13, 26x26, and 52x52 grids, with each grid point responsible for detection in one area; this feature output scheme makes some tiny flaws hard to detect effectively. For cloth inspection, besides raising the degree of automation, a very important goal of detection with deep learning is precisely the efficient detection of tiny flaws that the naked eye cannot quickly identify. If the original Yolov4 network were adopted directly, a large number of tiny flaws on the cloth would still be missed, which defeats the original intention of automatic detection by deep learning, and the prior art offers no effective technical means of overcoming this drawback on the basis of the Yolov4 network. Therefore, the application makes a cloth-specific improvement to the Yolov4 network framework, so that through the 104x104 feature output layer the feature network can specifically identify and detect tiny flaws on the cloth. This solves the problem that tiny flaws are difficult for the existing Yolov4 network to detect, markedly improves the precision of automatic cloth flaw detection, and improves the applicability of deep learning algorithms in the cloth inspection field.
In addition, the loss function used in the existing Yolov4 network is the CIOU loss, which adds the aspect ratio of the frame to the loss on the basis of the traditional DIOU and thereby improves regression precision. However, the CIOU loss defines the aspect-ratio weight ambiguously: the parameter v in its formula reflects only the difference of the aspect ratios, not the actual differences of width and height against their confidences, which sometimes hinders effective optimization of model similarity. Furthermore, for cloth flaw detection the data set contains many detection frames with extreme aspect ratios, to which the CIOU loss is unfriendly. The loss function of the existing Yolov4 network therefore has many disadvantages for detecting cloth flaws. On this basis, the application provides a frame loss CEIOU that is new to the cloth flaw field, whose function value is denoted CEIOU_Loss; it resolves the aspect ratio effectively, and at the same time the larger the deviation between the prediction frame and the real frame, the larger the absolute value of the obtained gradient, which accelerates the training of the network.
Further, the method for performing data enhancement on the data set comprises:
S211, classifying the flaws in the data set into a number of labels;
S212, reading the xml file of each picture in the data set, extracting the label category of each check box, and counting the number of samples corresponding to each category label;
S213, for labels whose sample count is less than or equal to N, applying the following six operations to each sample in sequence; for labels whose sample count is greater than N, randomly selecting four of the following six operations for each sample; where N is 5% of the total number of samples in the data set;
Operation one, DropOut: randomly discard 20% of the picture's pixels and keep the remaining 80% of pixel values;
Operation two, ImpulseNoise: replace 10% of the pixels of the whole picture with impulse noise;
Operation three, Gaussian Blur: apply Gaussian blur to the picture;
Operation four, Fliplr: flip the picture;
Operation five, Multiply: multiply each pixel of the picture by a preset value;
Operation six, Affine: enlarge, reduce, translate, or rotate the picture.
For the situation where the training model cannot converge correctly when the data volume is small, this custom data enhancement can be performed before the data set is input into the network, expanding the original data set to n times its original size as required; for each picture, every label in the corresponding xml annotation file and the data of the corresponding detection frames are read along with the picture, and the detection frames are operated on together with the picture. The application uses 5% of the total number of samples in the data set as the critical point and enhances the labels of different categories accordingly, which markedly reduces computation while preserving the enhancement effect: when the number of samples for a label is at most 5% of the total, emphasis enhancement is needed, avoiding very slow network convergence and extremely poor recognition of those samples during training; for labels with more than 5% of the total, the samples are relatively sufficient and moderate enhancement meets the requirement of fast network convergence.
Emphasis enhancement here means applying DropOut, ImpulseNoise, Gaussian Blur, Fliplr, Multiply, and Affine to the sample in sequence. Moderate enhancement means randomly selecting four of DropOut, ImpulseNoise, Gaussian Blur, Fliplr, Multiply, and Affine, so that different labels are on the whole enhanced differently, which improves the stability and reliability of the data enhancement and ensures that the enhanced labels are distributed more dispersedly. After this data enhancement, the uneven label distribution of the samples is markedly improved and the shortage of samples for some labels is effectively relieved, so the detection capability of the network improves markedly. A sketch of this policy follows.
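The following is a minimal sketch of the frequency-aware enhancement policy of S211-S213, written with the imgaug library; the blur sigma, brightness factor, and affine ranges are illustrative assumptions, while the six operation names and percentages follow the text above.

```python
# Minimal sketch of the frequency-aware enhancement policy (S211-S213), assuming
# the imgaug library; sigma, brightness factor and affine ranges are assumptions.
import random
import imgaug.augmenters as iaa
from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage

OPS = [
    iaa.Dropout(p=0.2),                  # operation one: discard 20% of pixels
    iaa.ImpulseNoise(0.1),               # operation two: impulse noise on 10% of pixels
    iaa.GaussianBlur(sigma=(0.5, 1.5)),  # operation three: Gaussian blur (sigma assumed)
    iaa.Fliplr(1.0),                     # operation four: flip the picture
    iaa.Multiply(1.2),                   # operation five: brighten/darken (factor assumed)
    iaa.Affine(scale=(0.8, 1.2), translate_percent=0.1, rotate=(-15, 15)),  # operation six
]

def enhance_sample(image, boxes, n_label_samples, n_total):
    """All six ops in sequence for rare labels (<= 5% of samples), else four random ops."""
    ops = OPS if n_label_samples <= 0.05 * n_total else random.sample(OPS, 4)
    bbs = BoundingBoxesOnImage(
        [BoundingBox(x1=x1, y1=y1, x2=x2, y2=y2) for x1, y1, x2, y2 in boxes],
        shape=image.shape)
    # imgaug transforms the picture and its detection frames together, as required
    aug_image, aug_bbs = iaa.Sequential(ops)(image=image, bounding_boxes=bbs)
    return aug_image, aug_bbs.clip_out_of_image()
```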
Further, the method for clustering the check boxes in the data set comprises:
S231, reading the check-box information in each xml file of the enhanced data set, and storing all check-box sizes in an array;
S232, setting 9 points as cluster centers and clustering;
and S233, taking the 9 cluster centers obtained as the new check boxes (Anchors) to replace the initial check boxes in the Yolov4 network.
For the problem that the initial Anchors of the Yolov4 network cannot adapt well to small cloth flaw targets, the solution of this scheme is to read the check-box information in each enhanced xml file, store all check-box sizes in an array, set the clustering algorithm to 9 cluster centers, perform the clustering, and replace the Anchors of the traditional Yolov4 with the 9 clustered Anchor boxes dedicated to cloth flaw detection. This effectively solves the problem that small targets cannot be predicted well because cloth flaws are very small while the Yolov4 Anchors are defined on the COCO data set, achieving the effect of customizing small-target check boxes suited to cloth detection, as sketched below.
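The following is a minimal sketch of the check-box clustering of S231-S233 as plain K-means over the (width, height) pairs read from the enhanced annotations; the initialization, iteration count, and Euclidean distance in width-height space are assumptions, since the text fixes only the 9 cluster centers.

```python
# Minimal sketch of the check-box clustering (S231-S233), assuming numpy.
import numpy as np

def cluster_anchors(box_sizes, k=9, iters=100, seed=0):
    """box_sizes: (N, 2) array of check-box (w, h); returns k anchors sorted by area."""
    rng = np.random.default_rng(seed)
    centers = box_sizes[rng.choice(len(box_sizes), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each check box to its nearest cluster center
        dists = np.linalg.norm(box_sizes[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([
            box_sizes[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
            for i in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # sort by area so the 9 anchors map onto the output scales as in Yolov4
    return centers[np.argsort(centers.prod(axis=1))]
```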
Further, the method for adding the attention module comprises:
S241, introducing a CBAM attention module;
S242, generating the channel attention map M_C(F) based on the CBAM attention module:
M_C(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)));
where σ denotes the sigmoid operation, MLP denotes a two-layer convolution operation, AvgPool denotes the average pooling operation, MaxPool denotes the maximum pooling operation, and F denotes the feature map;
S243, generating the spatial attention map M_S(F) based on the CBAM attention module:
M_S(F) = σ(f^{7×7}([AvgPool(F); MaxPool(F)]));
where σ denotes the sigmoid operation, f^{7×7} denotes a convolution with a 7×7 kernel, AvgPool denotes the average pooling operation, MaxPool denotes the maximum pooling operation, and F denotes the feature map.
In the prior art, the Yolov4 network applies the same learning weight to every pixel of the image during learning and cannot focus on the most valuable pixel information.
The CBAM attention module is an attention module for feed-forward convolutional neural networks. In the application it works as follows: given an intermediate feature map, the CBAM module infers attention maps in turn along two independent dimensions (channel and space) and then multiplies the attention maps with the input feature map for adaptive feature refinement. Because CBAM is a lightweight, general-purpose module, its overhead is negligible; it can be integrated seamlessly into the network architecture of the application and trained end to end together with the base CNN.
The channel attention mechanism produces the channel attention map by compressing the feature map in the spatial dimension into a one-dimensional vector before operating on it. When compressing in the spatial dimension, not only Average Pooling but also Max Pooling is considered. Average pooling and max pooling jointly aggregate the spatial information of the feature map, which is sent to a shared network that compresses the spatial dimension of the input feature map, then summed element by element to produce the channel attention map. The spatial attention mechanism, in turn, compresses the channels, performing average pooling and max pooling along the channel dimension: MaxPool extracts the maximum value over the channels, repeated height × width times, and AvgPool extracts the mean over the channels, likewise height × width times. A PyTorch sketch of this module follows.
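The following PyTorch sketch implements the two attention maps M_C(F) and M_S(F) exactly as written above; the channel-reduction ratio of the shared MLP is an assumption not fixed by the text.

```python
# Sketch of the CBAM attention maps M_C(F) and M_S(F); reduction ratio assumed.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # shared two-layer MLP (as 1x1 convolutions) for the channel attention map
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        # 7x7 convolution over the two pooled channel maps for the spatial map
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, f):
        # channel attention: sigma(MLP(AvgPool(F)) + MLP(MaxPool(F)))
        avg = self.mlp(nn.functional.adaptive_avg_pool2d(f, 1))
        mx = self.mlp(nn.functional.adaptive_max_pool2d(f, 1))
        f = f * torch.sigmoid(avg + mx)
        # spatial attention: sigma(f7x7([AvgPool(F); MaxPool(F)])) over the channel axis
        pooled = torch.cat([f.mean(dim=1, keepdim=True),
                            f.max(dim=1, keepdim=True).values], dim=1)
        return f * torch.sigmoid(self.spatial(pooled))
```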
Further, the method for adding the 104x104 feature output layer to the Yolov4 network comprises:
S251, performing two upsamplings of the output features through the PANet module of the Yolov4 network, and combining the two sampling results with the outputs of the second- and third-to-last residual blocks respectively;
S252, performing a third upsampling;
and S253, combining the result of the third upsampling with the output of the fourth-to-last residual block to form the 104x104 feature output layer used in the Yolo_Head prediction.
In this scheme, after PANet performs two upsamplings of the output features, a third upsampling is carried out and combined with the output of the fourth-to-last residual block, forming a 104x104 feature output layer for YOLO_Head prediction, as sketched below.
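The following schematic PyTorch sketch illustrates the extra third upsampling of S251-S253; every channel size and layer name here is an assumption, since the text fixes only the 104x104 resolution and the combination with the fourth-to-last residual output.

```python
# Schematic sketch of the extra 104x104 output branch (S251-S253); channels assumed.
import torch
import torch.nn as nn

class Extra104Head(nn.Module):
    def __init__(self, in_channels=128, skip_channels=128, num_outputs=75):
        # num_outputs = 3 anchors x (20 classes + 4 box + 1 confidence), assumed
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")  # 52x52 -> 104x104
        self.fuse = nn.Conv2d(in_channels + skip_channels, in_channels, kernel_size=1)
        self.yolo_head = nn.Conv2d(in_channels, num_outputs, kernel_size=1)

    def forward(self, p52, c104):
        # p52: 52x52 feature map after the second PANet upsampling
        # c104: 104x104 output of the fourth-to-last residual block of the backbone
        x = torch.cat([self.upsample(p52), c104], dim=1)
        return self.yolo_head(self.fuse(x))  # 104x104 feature layer for Yolo_Head
```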
Further, determining the frame loss function CEIOU_Loss comprises the following steps:
S261, calculating the penalty term L_EIOU:
L_EIOU = IOU - ρ²(b, b^gt)/c² - ρ²(w, w^gt)/C_w² - ρ²(h, h^gt)/C_h²
where b denotes the center point of the prediction box, b^gt denotes the center point of the real box, ρ is the Euclidean distance between the two center points, c denotes the diagonal distance of the minimum closure area that can contain both the prediction box and the real box, C_w denotes the width of the minimum bounding box covering the prediction box and the real box, C_h denotes the height of the minimum bounding box covering the prediction box and the real box, IOU is the intersection-over-union of the real box and the prediction box, w denotes the width of the prediction box, w^gt denotes the width of the real box, h denotes the height of the prediction box, and h^gt denotes the height of the real box;
s262, calculating the loss function EIOULOSS:EIOULoss=1-LEIOU
S263, calculating a frame loss function CEIOULOSS:CEIOULoss=3×ln 3-3×ln(2+EIOULoss)。
The loss function used in the existing Yolov4 network is the CIOU loss, which adds the aspect ratio of the bounding box to the loss on the basis of DIOU and further improves regression precision. However, its aspect-ratio weight design is ambiguous: the difference reflected by v in the formula is not the actual difference of width and height against their confidences, which sometimes hinders effective optimization of model similarity; and because the cloth flaw data set has many detection frames with extreme aspect ratios, the CIOU loss performs poorly on them. Therefore, the application replaces the loss function in the network with EIOU, which resolves the aspect ratio effectively and converges faster; its penalty term is the L_EIOU given by the formula above, from which the value of the loss function EIOU_Loss is obtained.
In further experimental research the inventors found that although EIOU_Loss decreases as EIOU increases, the absolute value of its gradient stays constant during the decrease, so the gradient cannot be adjusted adaptively to speed up convergence during training; used directly for cloth flaw detection, it therefore still suffers from low efficiency and slow speed. For this reason, the application designs the new frame loss function CEIOU so that the larger the deviation between the prediction frame and the real frame, the larger the absolute value of the obtained gradient, thereby accelerating the training of the network; the CEIOU formula is as given above. Through this design of the loss function CEIOU, the greater the difference between the prediction frame and the real frame, the greater the momentum given to network training. Thus, when the real frame and the prediction frame differ greatly, a larger adjustment is obtained, the number of corrections needed to reach a completely overlapping frame is reduced, the network is trained more fully, the likelihood of under-fitting and over-fitting is reduced, and convergence of the network is accelerated.
Further, the method includes step S264, redefining the penalty term as L_Focal-EIOU: L_Focal-EIOU = IOU^γ × L_EIOU, where γ is a parameter controlling the degree of suppression of outliers.
In further experimental research the inventors found that frame regression in the Yolov4 network suffers from unbalanced training samples: the number of high-quality anchor boxes with small regression error in an image is far smaller than the number of low-quality samples with large error, and poor-quality samples can produce excessively large gradients that disturb the training process. To overcome this, the scheme combines Focal Loss with EIOU to obtain a Focal EIOU Loss, separating high-quality anchor boxes from low-quality ones from the perspective of the gradient and updating the penalty term to L_Focal-EIOU, which improves the balance of training samples in frame regression and further reduces the gradient impact. A sketch of these losses follows.
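The following PyTorch sketch computes L_EIOU, EIOU_Loss, CEIOU_Loss, and the Focal-EIOU penalty as stated above. The closed form of L_EIOU is reconstructed from the variable definitions in the text (the publication renders the formula as an image), so treat it as an assumption; the γ value is likewise assumed.

```python
# Sketch of the frame losses (S261-S264), assuming PyTorch; boxes are (cx, cy, w, h).
import math
import torch

def ceiou_loss(pred, target, gamma=0.5):
    """Returns (CEIOU_Loss, L_Focal-EIOU); gamma is an assumption, not given in the text."""
    px, py, pw, ph = pred.unbind(-1)
    tx, ty, tw, th = target.unbind(-1)
    eps = 1e-7
    # IOU of the prediction box and the real box
    iw = (torch.min(px + pw / 2, tx + tw / 2) - torch.max(px - pw / 2, tx - tw / 2)).clamp(min=0)
    ih = (torch.min(py + ph / 2, ty + th / 2) - torch.max(py - ph / 2, ty - th / 2)).clamp(min=0)
    inter = iw * ih
    iou = inter / (pw * ph + tw * th - inter + eps)
    # minimum enclosing box: width C_w, height C_h, squared diagonal c^2
    cw = torch.max(px + pw / 2, tx + tw / 2) - torch.min(px - pw / 2, tx - tw / 2)
    ch = torch.max(py + ph / 2, ty + th / 2) - torch.min(py - ph / 2, ty - th / 2)
    c2 = cw ** 2 + ch ** 2 + eps
    # reconstructed penalty term: IOU minus center, width and height penalties,
    # so that EIOU_Loss = 1 - L_EIOU matches the standard EIOU loss (assumption)
    l_eiou = iou - ((px - tx) ** 2 + (py - ty) ** 2) / c2 \
                 - (pw - tw) ** 2 / (cw ** 2 + eps) \
                 - (ph - th) ** 2 / (ch ** 2 + eps)
    eiou_loss = 1 - l_eiou                                   # S262
    ceiou = 3 * math.log(3) - 3 * torch.log(2 + eiou_loss)   # S263: CEIOU_Loss
    focal_penalty = iou ** gamma * l_eiou                    # S264: L_Focal-EIOU
    return ceiou, focal_penalty
```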
Further, when flaw detection is performed in step S2, the picture is first reduced to 416x416 with a resize function, gray bars are added to the reduced, proportion-distorted picture, and the picture is then input into the improved Yolov4 network model. The purpose of adding the gray bars is to preserve the semantic information of the image. A sketch of this pre-processing follows.
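One common reading of this pre-processing step is the standard letterbox transform sketched below, assuming OpenCV; the gray value 128 is an assumption.

```python
# Sketch of the 416x416 letterbox pre-processing, assuming OpenCV; gray value assumed.
import cv2
import numpy as np

def letterbox(image, size=416, gray=128):
    h, w = image.shape[:2]
    scale = min(size / w, size / h)
    nw, nh = int(w * scale), int(h * scale)
    resized = cv2.resize(image, (nw, nh))
    canvas = np.full((size, size, 3), gray, dtype=np.uint8)  # gray bars keep semantics
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```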
A cloth defect detection method based on an improved Yolov4 network, wherein the improved Yolov4 network comprises a frame loss function CEIOU_Loss determined as follows:
calculating the penalty term L_EIOU:
L_EIOU = IOU - ρ²(b, b^gt)/c² - ρ²(w, w^gt)/C_w² - ρ²(h, h^gt)/C_h²
where b denotes the center point of the prediction box, b^gt denotes the center point of the real box, ρ is the Euclidean distance between the two center points, c denotes the diagonal distance of the minimum closure area that can contain both the prediction box and the real box, C_w denotes the width of the minimum bounding box covering the prediction box and the real box, C_h denotes the height of the minimum bounding box covering the prediction box and the real box, IOU is the intersection-over-union of the real box and the prediction box, w denotes the width of the prediction box, w^gt denotes the width of the real box, h denotes the height of the prediction box, and h^gt denotes the height of the real box;
calculating the loss function EIOU_Loss: EIOU_Loss = 1 - L_EIOU;
calculating the frame loss function CEIOU_Loss: CEIOU_Loss = 3×ln 3 - 3×ln(2 + EIOU_Loss).
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The cloth defect detection method based on the improved Yolov4 network overcomes the problem that the training of the existing Yolov4 network cannot converge correctly when the data volume is small; in addition, the invention uses 5% of the total number of samples in the data set as the critical point for class-wise enhancement of the labels, which markedly reduces computation while preserving the enhancement effect.
2. The cloth defect detection method based on the improved Yolov4 network clusters the check boxes in the original cloth data set and replaces the initial check boxes in the Yolov4 network with the clustered ones, thereby customizing small-target check boxes suited to cloth detection and overcoming the defect that the initial Anchors of the existing Yolov4 network cannot accurately and effectively predict small targets.
3. The cloth defect detection method based on the improved Yolov4 network uses the attention module to overcome the problem that the existing Yolov4 network applies the same learning weight to every pixel of the image during learning and cannot value the most informative pixels, making the network more sensitive to small targets and exerting more attention on the regions containing detection frames.
4. The cloth defect detection method based on the improved Yolov4 network makes a cloth-specific improvement to the Yolov4 network framework, so that through the 104x104 feature output layer the feature network can specifically identify and detect tiny flaws on the cloth; this solves the problem that tiny flaws are difficult for the existing Yolov4 network to detect, markedly improves the precision of automatic cloth flaw detection, and improves the applicability of deep learning algorithms in the cloth inspection field.
5. The cloth defect detection method based on the improved Yolov4 network overcomes the problems of the loss function of the existing Yolov4 network in cloth flaw detection, namely that it hinders effective optimization of model similarity, is unfriendly to detection frames with extreme aspect ratios, and cannot adjust gradient values adaptively, and provides the new frame loss function CEIOU_Loss, which effectively resolves the aspect-ratio interference while making the absolute value of the gradient grow with the deviation between the prediction frame and the real frame, thereby accelerating the convergence and training of the network.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 is a diagram illustrating a network model backbone according to an embodiment of the present invention;
FIG. 3 is a label distribution diagram before data enhancement in an embodiment of the present invention;
FIG. 4 is a graph of a label distribution after data enhancement in an embodiment of the present invention;
FIG. 5 is a graph illustrating a reduction in the loss value at the thawing stage in accordance with an embodiment of the present invention;
FIG. 6 illustrates network prediction results in an embodiment of the present invention;
FIG. 7 illustrates network prediction results in accordance with an embodiment of the present invention;
FIG. 8 is a comparison of the bezel loss function and EIOU loss for an embodiment of the present invention;
FIG. 9 is a cross-sectional view of a piece goods defect detection apparatus in accordance with an embodiment of the present invention;
FIG. 10 is a schematic view of a marking mechanism according to an embodiment of the present invention;
FIG. 11 is a cross-sectional view of a marking mechanism in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention. In the description of the present application, it is to be understood that the terms "front", "back", "left", "right", "upper", "lower", "vertical", "horizontal", "high", "low", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus should not be construed as limiting the scope of the present application.
Example 1:
A cloth defect detection method based on an improved Yolov4 network comprises the following steps:
collecting a picture of the cloth surface;
reducing the picture to 416x416 with a resize function, adding gray bars to the reduced, proportion-distorted picture, performing normalization, and then inputting the normalized picture into the improved Yolov4 network model for flaw detection;
and outputting the detection result.
The improved Yolov4 network in this embodiment comprises a frame loss function CEIOU_Loss determined as follows:
calculating the penalty term L_EIOU:
L_EIOU = IOU - ρ²(b, b^gt)/c² - ρ²(w, w^gt)/C_w² - ρ²(h, h^gt)/C_h²
where b denotes the center point of the prediction box, b^gt denotes the center point of the real box, ρ is the Euclidean distance between the two center points, c denotes the diagonal distance of the minimum closure area that can contain both the prediction box and the real box, C_w denotes the width of the minimum bounding box covering the prediction box and the real box, C_h denotes the height of the minimum bounding box covering the prediction box and the real box, IOU is the intersection-over-union of the real box and the prediction box, w denotes the width of the prediction box, w^gt denotes the width of the real box, h denotes the height of the prediction box, and h^gt denotes the height of the real box;
calculating the loss function EIOU_Loss: EIOU_Loss = 1 - L_EIOU;
calculating the frame loss function CEIOU_Loss: CEIOU_Loss = 3×ln 3 - 3×ln(2 + EIOU_Loss).
Example 2:
A cloth defect detection method based on an improved Yolov4 network, shown in FIG. 1; on the basis of embodiment 1, the Yolov4 network is improved by the following method:
(1) acquiring a flaw cloth image as a data set, and performing data enhancement on the data set;
the method for enhancing the data of the data set comprises the following steps:
classifying defects in the dataset into a number of labels;
reading an xml file of each picture in the data set, extracting the label category of each inspection box, and counting the number of samples corresponding to each category label;
for the labels with the number of samples less than or equal to N, sequentially performing data enhancement on each sample by using the following six operations; for labels with the number of samples larger than N, four of the following six operations are randomly selected to perform data enhancement on each sample; wherein, N is 5% of the total number of samples in the data set;
operation one, DropOut operation: randomly discarding 20% of pixels of the original picture, and reserving the remaining 80% of pixel values; the working principle is that a Bernoulli distribution with probability of p randomly generates 0,1 values which are the same as the number of picture pixel points, and partial pixels are shielded after the values are multiplied by input;
operation two, impulsechoice operation: replacing 10% of pixels of the whole picture with impulse noise;
operation three, Gayssian Blur operation: carrying out Gaussian blur processing on the picture; the method has the effects of reducing image noise and detail level, and the principle is to convolute the image and normal distribution;
operation four, Fliplr operation: turning the image of the picture;
operation five, multiplex operation: multiplying each pixel of the picture by a preset value to make the picture look brighter or darker; wherein
Operation six and Affinine operation: the picture is enlarged or reduced or translated or rotated.
(2) Inputting the enhanced data set into a Yolov4 network;
(3) Clustering the check boxes in the data set, and replacing the initial check boxes in the Yolov4 network with the clustered check boxes;
The method for clustering the check boxes in the data set comprises:
reading the check-box information in each xml file of the enhanced data set, and storing all check-box sizes in an array;
setting 9 points as cluster centers and clustering;
and taking the 9 cluster centers obtained as the new check boxes (Anchors) to replace the initial check boxes in the Yolov4 network.
(4) Adding an attention module:
Generating the channel attention map M_C(F) based on the CBAM attention module:
M_C(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)));
where σ denotes the sigmoid operation, MLP denotes a two-layer convolution operation, AvgPool denotes the average pooling operation, MaxPool denotes the maximum pooling operation, and F denotes the feature map;
Generating the spatial attention map M_S(F) based on the CBAM attention module:
M_S(F) = σ(f^{7×7}([AvgPool(F); MaxPool(F)]));
where σ denotes the sigmoid operation, f^{7×7} denotes a convolution with a 7×7 kernel, AvgPool denotes the average pooling operation, MaxPool denotes the maximum pooling operation, and F denotes the feature map.
(5) Adding the 104x104 feature output layer to the Yolov4 network:
performing two upsamplings of the output features through the PANet module of the Yolov4 network, combining the two sampling results with the outputs of the second- and third-to-last residual blocks respectively;
performing a third upsampling;
and combining the result of the third upsampling with the output of the fourth-to-last residual block to form the 104x104 feature output layer used in the Yolo_Head prediction.
(6) Determining the frame loss function CEIOU_Loss:
calculating the penalty term L_EIOU:
L_EIOU = IOU - ρ²(b, b^gt)/c² - ρ²(w, w^gt)/C_w² - ρ²(h, h^gt)/C_h²
where b denotes the center point of the prediction box, b^gt denotes the center point of the real box, ρ is the Euclidean distance between the two center points, c denotes the diagonal distance of the minimum closure area that can contain both the prediction box and the real box, C_w denotes the width of the minimum bounding box covering the prediction box and the real box, C_h denotes the height of the minimum bounding box covering the prediction box and the real box, IOU is the intersection-over-union of the real box and the prediction box, w denotes the width of the prediction box, w^gt denotes the width of the real box, h denotes the height of the prediction box, and h^gt denotes the height of the real box;
calculating the loss function EIOU_Loss: EIOU_Loss = 1 - L_EIOU;
calculating the frame loss function CEIOU_Loss: CEIOU_Loss = 3×ln 3 - 3×ln(2 + EIOU_Loss).
FIG. 8 compares the frame loss function with the EIOU loss in this embodiment: the straight line is EIOU_Loss and the curve is CEIOU_Loss. The x-axis of FIG. 8 is the value of EIOU; the smaller the x value, the larger the difference between the generated prediction frame and the real frame, and x = 1 means the prediction frame and the real frame overlap perfectly. The EIOU_Loss value is a straight line, meaning that during network training, no matter how large the difference between the real frame and the prediction frame, training is adjusted with a constant momentum, like an arithmetic progression. For CEIOU, by contrast, the larger the difference between the prediction frame and the real frame, the larger the momentum (the slope of the curve) given to network training, so a larger adjustment is obtained when the frames differ greatly, and the number of corrections needed to reach a completely overlapping frame is reduced.
In addition, considering that training samples are unbalanced in BBox regression, that is, the number of high-quality anchor boxes with small regression error in an image is far smaller than the number of low-quality samples with large error, and poor-quality samples can produce excessively large gradients that disturb the training process, this embodiment proposes a Focal EIOU Loss by combining Focal Loss with EIOU, separating high-quality anchor boxes from low-quality ones from the perspective of the gradient. The penalty term formula is: L_Focal-EIOU = IOU^γ × L_EIOU, where γ is a parameter controlling the degree of suppression of outliers.
(7) Reducing the pictures in the data set to 416x416 with a resize function, adding gray bars to the reduced, proportion-distorted pictures, and inputting them into the improved Yolov4 network model for training; the trained model is taken as the improved Yolov4 network model.
Example 3:
In this embodiment, an experiment is performed with the cloth flaw detection method described in embodiment 2. The data set used is a group of 453 annotated flawed-cloth pictures found on the Internet, whose flaws are divided into 20 types. After reading the category of each detection frame from each picture's xml file, the category distribution is plotted as FIG. 3. As FIG. 3 shows, the label distribution is very unbalanced, and some labels have fewer than 5% of the total samples available for training the network, which would make network convergence very slow during training and recognition of those samples extremely poor.
After enhancement with the data enhancement method of embodiment 2, the label distribution obtained is shown in FIG. 4. As FIG. 4 shows, the uneven label distribution of the samples is obviously improved and the shortage of some labels is solved, so the detection capability of the network improves markedly.
For the clustering in this embodiment, a program reads the detection-frame information in each enhanced xml file, stores all detection-frame sizes in an array, sets the cluster centers of the K-means algorithm to 9 points, clusters the detection frames, and replaces the original Yolov4 Anchors with the 9 clustered Anchors dedicated to cloth flaw detection. In this embodiment, the Anchors change from (12,16, 19,36, 40,28, 36,75, 76,55, 72,146, 142,110, 192,243, 459,401) to (6,11, 9,16, 13,22, 14,29, 18,31, 23,42, 33,57, 53,85, 129,136).
After the model is built, each enhanced picture is resized to 416x416, gray bars are added to the reduced, proportion-distorted picture to keep the semantic information of the image, and the picture is then input into the network for training.
The target detection network used in training is the improved Yolov4 network described in the application, shown in FIG. 2; the feature extraction backbone is CSPDarknet + CBAM. Training proceeds by first freezing the backbone and training 100 epochs with a BatchSize of 16, with the learning rate initially set to 1e-3 and an early-stop mechanism added: if Val_loss shows no significant decline over multiple rounds of training, training stops directly and the next stage begins. After the freeze training, the thaw-stage training is carried out with a BatchSize of 4 for 50 epochs, the other training parameters matching the freeze stage; after training ends, the trained model is saved for prediction. The decline of the loss value during the thaw training stage is shown in FIG. 5.
The trained model is stored and used for prediction. For a single picture, prediction only requires the path of the picture to be predicted, the Yolov4 network weight file to load, and the Anchors file to use; the predicted picture is saved under the corresponding folder. For predicting multiple pictures at once, the program automatically reads the information of all pictures in the folder, predicts them one by one, and saves the results in the corresponding folder; partial prediction results are shown in FIG. 6 and FIG. 7. An illustrative sketch of such a loop follows.
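The following is an illustrative sketch of the folder-level prediction loop; Yolov4Detector and detect_image are hypothetical stand-ins, since the publication does not name its prediction API, and the file names are placeholders.

```python
# Illustrative folder-level prediction loop; Yolov4Detector and detect_image are
# hypothetical stand-ins, and all file names below are placeholders.
from pathlib import Path

detector = Yolov4Detector(weights="improved_yolov4.pth",   # trained weight file
                          anchors="cloth_anchors.txt")     # clustered Anchors file

in_dir, out_dir = Path("to_predict"), Path("predicted")
out_dir.mkdir(exist_ok=True)
for img_path in sorted(in_dir.glob("*.jpg")):
    result = detector.detect_image(img_path)   # draws the predicted flaw boxes
    result.save(out_dir / img_path.name)       # stored under the corresponding folder
```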
Example 4:
A cloth flaw detection device, which performs feature recognition with the detection method of any one of the above embodiments, comprises a workbench 1, a groove 2 formed in the surface of the workbench 1, an unwinding roller 3 and a winding roller 4 arranged in the groove 2, a first portal frame 5 and a second portal frame 6 erected over the groove 2, and an image acquisition device 7 arranged at the bottom of the first portal frame 5, as shown in FIG. 9. The first portal frame 5 is located between the unwinding roller 3 and the second portal frame 6; the bottom of the second portal frame 6 is provided with a sliding groove 8, a sliding block 9 is in sliding fit with the sliding groove 8 and is driven to slide by a first driving device, and a marking mechanism is connected below the sliding block 9. The image acquisition device 7 is connected with a processor, which is used to execute the cloth flaw detection method of any one of embodiments 1-3.
The marking mechanism, shown in FIG. 10 and FIG. 11, comprises a sleeve 10 fixed relative to the sliding block 9, a paint container 11 inserted into the sleeve 10 from the top end, an inner cylinder 12 in axial sliding fit with the inner wall of the sleeve 10, and an annular inkpad sponge 13 arranged on the inner wall of the inner cylinder 12. The bottom of the paint container 11 is provided with a plurality of discharge holes 14 distributed in a ring, each discharge hole 14 facing the top end of the annular inkpad sponge 13. The inner cylinder 12 is driven by a second driving device 15 to slide up and down.
Preferably, the top of the paint container 11 is provided with a filling opening 16 with a plug.
Preferably, the sleeve 10 is fixedly connected with the sliding block 9 through a connecting rod 17; wherein the connecting rod 17 can be L-shaped or C-shaped.
Preferably, the paint container 11 is welded or removably attached to the sleeve 10.
Preferably, the second driving device 15 is a motor-driven gear mechanism, a rack engaged with the gear mechanism is fixed on the outer wall of the inner cylinder 12, and the rotation output by the motor through the gear mechanism drives the rack, and thus the inner cylinder, up and down.
Preferably, the bottom of the inner wall of the inner cylinder 12 is provided with an annular boss extending inwards, and the annular inkpad sponge 13 is placed on the annular boss.
Preferably, a light source 18 and a bearing platform 19 are further arranged at the bottom of the groove 2, the light source 18 is positioned right below the first portal frame 5, and the bearing platform 19 is positioned right below the marking mechanism.
Preferably, in order to increase the marking range, the sliding groove and the sliding block may be replaced by an XY-axis moving mechanism, such as an existing cross slide or the like.
In existing cloth flaw detection equipment, detected flaw locations are marked by spraying paint so that they can be cut out or corrected later, but such equipment has the following defects: the sprayed paint sits directly on the flaw, which makes subsequent manual review of the flaw type harder; and the spraying of the nozzle is uncontrollable, so non-defective areas are easily over-contaminated. On this basis, the present embodiment improves the cloth flaw detection equipment; its specific working process is as follows:
the cloth is released from the releasing roll 3 and recovered to the winding roll 4, during the process, the cloth is firstly released through the first portal frame 5, the image acquisition device 7 shoots a picture on the surface of the cloth and transmits the picture to the processor, the processor detects the defect through the cloth defect detection method according to any one of the embodiments 1-3, if the defect is detected, when the defect part moves to the second portal frame 6 to be released, the releasing roll 3 and the winding roll 4 are controlled to pause, the first driving device drives the sliding block 9 to move, so that the marking mechanism reaches the position above the defect, at the moment, the second driving device drives the inner cylinder to move downwards, the annular inkpad sponge 13 is driven to synchronously move downwards to be shown in the graph 10, an annular mark is covered around the defect to serve as a mark, and the bearing platform 19 can serve as a bearing component at the bottom of the inner cylinder in the process to ensure that the mark is fully covered.
In the normal state of the equipment, the inner cylinder is retracted inside the sleeve so that the top end of the annular inkpad sponge 13 contacts the bottom of the paint container 11, allowing paint to be replenished through the discharge holes 14 at any time. Of course, the paint used by the device during operation is a high-viscosity material that will not dry quickly, such as the stamp-pad ink used for seals in the prior art, which only slowly soaks the annular inkpad sponge 13. When not in use, the paint container 11 can also be removed from the sleeve and stored separately.
Preferably, a touch sensor or a pressure sensor can be arranged at the bottom of the inner cylinder, so that when the bottom of the inner cylinder contacts the bearing platform, the second driving device moves the inner cylinder up and resets it to the state shown in FIG. 11.
The detection equipment of this embodiment solves the problem of the flaw being covered and hidden by the nozzle-spraying approach of the prior art: the applied mark is annular, which markedly reduces the probability of covering the flaw, facilitates subsequent manual review of the flaw type by workers, and provides a more accurate basis for optimizing the production line. The effect is especially notable for small flaws such as knots, broken holes, and lint balls. In addition, the marked area in this embodiment is controllable, overcoming the oversized area of the spraying approach and the tendency of traditional spraying to over-contaminate non-defective areas.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, the term "connected" used herein may be directly connected or indirectly connected via other components without being particularly described.

Claims (10)

1. A cloth defect detection method based on an improved Yolov4 network, characterized by comprising the following steps:
S1, collecting cloth surface image information;
S2, inputting the image information into an improved Yolov4 network model for flaw detection;
and S3, outputting the detection result.
2. The cloth defect detection method based on an improved Yolov4 network according to claim 1, wherein the improved Yolov4 network model is established as follows:
S21, acquiring flawed-cloth images as a data set, and performing data enhancement on the data set;
S22, inputting the enhanced data set into a Yolov4 network;
S23, clustering the check boxes in the data set, and replacing the initial check boxes in the Yolov4 network with the clustered check boxes;
S24, adding an attention module;
S25, adding a 104x104 feature output layer to the Yolov4 network;
S26, determining the frame loss function CEIOU_Loss;
and S27, inputting the data set into the network for training, and taking the trained model as the improved Yolov4 network model.
3. The cloth defect detection method based on the improved Yolov4 network according to claim 2, wherein the data enhancement of the data set comprises:
S211, classifying the defects in the data set into a plurality of labels;
S212, reading the xml file of each picture in the data set, extracting the label category of each annotation box, and counting the number of samples corresponding to each category label;
S213, for labels whose number of samples is less than or equal to N, applying each of the following six operations to every sample in turn; for labels whose number of samples is greater than N, randomly selecting four of the following six operations for each sample; where N is 5% of the total number of samples in the data set;
Operation one, Dropout: randomly discarding 20% of the pixels of the picture and retaining the remaining 80% of the pixel values;
Operation two, ImpulseNoise: replacing 10% of the pixels of the whole picture with impulse noise;
Operation three, GaussianBlur: applying Gaussian blur to the picture;
Operation four, Fliplr: flipping the picture horizontally;
Operation five, Multiply: multiplying each pixel of the picture by a preset value;
Operation six, Affine: scaling, translating or rotating the picture.
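A minimal sketch of the per-label augmentation rule in claim 3 (S213), assuming the imgaug library; the blur sigma, multiply factor and affine ranges are illustrative values not fixed by the claim:

```python
import random
import imgaug.augmenters as iaa

# The six operations named in claim 3; parameter ranges beyond the stated
# 20%/10% pixel fractions are assumptions.
OPS = [
    iaa.Dropout(p=0.2),                              # operation one: drop 20% of pixels
    iaa.ImpulseNoise(0.1),                           # operation two: impulse noise on 10% of pixels
    iaa.GaussianBlur(sigma=(0.5, 1.5)),              # operation three: Gaussian blur
    iaa.Fliplr(1.0),                                 # operation four: horizontal flip
    iaa.Multiply(1.2),                               # operation five: multiply each pixel by a preset value
    iaa.Affine(scale=(0.8, 1.2), rotate=(-15, 15)),  # operation six: scale/translate/rotate
]

def augment_dataset(images_by_label):
    """images_by_label: dict mapping label -> list of HxWxC uint8 images."""
    total = sum(len(v) for v in images_by_label.values())
    threshold = 0.05 * total  # N = 5% of the total number of samples
    augmented = {}
    for label, images in images_by_label.items():
        # Rare labels get all six operations; frequent labels get four at random.
        ops = OPS if len(images) <= threshold else random.sample(OPS, 4)
        augmented[label] = [op(image=img) for img in images for op in ops]
    return augmented
```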
4. The cloth defect detection method based on the improved Yolov4 network according to claim 2, wherein the clustering of the annotation boxes in the data set comprises:
S231, reading the annotation-box information in each xml file of the enhanced data set, and storing the sizes of all annotation boxes in an array;
S232, setting 9 cluster centers and clustering the box sizes;
S233, taking the 9 cluster centers obtained by clustering as the new anchor boxes (Anchors), and replacing the initial anchor boxes of the Yolov4 network with them.
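Claim 4 does not fix a distance metric for the clustering; a common choice for YOLO-family anchors is k-means over (width, height) pairs with a 1 − IoU distance. A minimal numpy sketch under that assumption:

```python
import numpy as np

def iou_wh(boxes, centers):
    # IoU between (w, h) pairs, treating all boxes as sharing one corner.
    inter = (np.minimum(boxes[:, None, 0], centers[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centers[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centers[:, 0] * centers[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=300, seed=0):
    # boxes: (N, 2) float array of annotation-box widths and heights.
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the center with the highest IoU (lowest 1 - IoU).
        assign = np.argmax(iou_wh(boxes, centers), axis=1)
        new_centers = np.array([
            boxes[assign == i].mean(axis=0) if np.any(assign == i) else centers[i]
            for i in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # Sort the 9 anchors by area, since Yolov4 assigns anchors to scales by size.
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]
```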
5. The cloth defect detection method based on the improved Yolov4 network according to claim 2, wherein adding the attention module comprises:
S241, introducing a CBAM attention module;
S242, generating a channel attention map M_C(F) based on the CBAM attention module:
M_C(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)));
where σ denotes the sigmoid operation, MLP denotes a two-layer perceptron implemented with convolutions, AvgPool denotes average pooling, MaxPool denotes max pooling, and F denotes the feature map;
S243, generating a spatial attention map M_S(F) based on the CBAM attention module:
M_S(F) = σ(f^{7×7}([AvgPool(F); MaxPool(F)]));
where σ denotes the sigmoid operation, f^{7×7} denotes a convolution with a 7×7 kernel, AvgPool denotes average pooling, MaxPool denotes max pooling, and F denotes the feature map.
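A minimal PyTorch sketch of the CBAM module in claim 5; the channel-reduction ratio of 16 follows the original CBAM paper and is an assumption here:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, per the claim 5 formulas."""
    def __init__(self, channels, reduction=16):  # reduction ratio is an assumption
        super().__init__()
        # Shared two-layer MLP, implemented with 1x1 convolutions.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, f):
        # M_C(F) = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F)))
        avg = self.mlp(torch.mean(f, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(f, dim=(2, 3), keepdim=True))
        f = f * torch.sigmoid(avg + mx)
        # M_S(F) = sigmoid(f7x7([AvgPool(F); MaxPool(F)])), pooled over channels.
        avg_s = torch.mean(f, dim=1, keepdim=True)
        max_s = torch.amax(f, dim=1, keepdim=True)
        return f * torch.sigmoid(self.spatial(torch.cat([avg_s, max_s], dim=1)))
```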
6. The cloth defect detection method based on the improved Yolov4 network according to claim 2, wherein adding the 104x104 feature output layer to the Yolov4 network comprises:
S251, upsampling the output features twice through the PANet module of the Yolov4 network, and combining the two upsampling results with the outputs of the second-to-last and third-to-last residual blocks, respectively;
S252, performing a third upsampling;
S253, combining the result of the third upsampling with the output of the fourth-to-last residual block, forming a 104x104 feature output layer for use in the Yolo_Head prediction.
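A schematic PyTorch fragment of the third upsampling step in claim 6; the tensor names and channel counts are illustrative assumptions (for a 416x416 input, the Yolov4 backbone yields 52x52, 26x26 and 13x13 maps, so one further upsampling of the 52x52 branch reaches 104x104):

```python
import torch
import torch.nn as nn

class Extra104Branch(nn.Module):
    """Adds a 104x104 branch by upsampling the 52x52 PANet feature once more
    and concatenating it with the fourth-from-last residual-block output."""
    def __init__(self, c_panet=128, c_backbone=64):  # channel counts assumed
        super().__init__()
        self.reduce = nn.Conv2d(c_panet, c_backbone, kernel_size=1)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, p52, backbone104):
        # p52: 52x52 PANet feature; backbone104: 104x104 residual-block output.
        x = self.up(self.reduce(p52))              # third upsampling -> 104x104
        return torch.cat([x, backbone104], dim=1)  # fed to a Yolo_Head
```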
7. The cloth defect detection method based on the improved Yolov4 network according to claim 2, wherein the bounding-box loss function CEIOU_Loss is determined as follows:
S261, calculating the penalty term L_EIOU:
L_EIOU = IOU − ρ²(b, b^gt)/C² − ρ²(w, w^gt)/C_w² − ρ²(h, h^gt)/C_h²
where b denotes the center point of the predicted box and b^gt the center point of the ground-truth box; ρ is the Euclidean distance between the two center points; C denotes the diagonal length of the smallest enclosing box that covers both the predicted box and the ground-truth box; C_w and C_h denote the width and height of that smallest enclosing box; IOU is the intersection-over-union of the ground-truth box and the predicted box; w and h denote the width and height of the predicted box; and w^gt and h^gt denote the width and height of the ground-truth box;
S262, calculating the loss function EIOU_Loss: EIOU_Loss = 1 − L_EIOU;
S263, calculating the bounding-box loss function CEIOU_Loss:
CEIOU_Loss = 3×ln 3 − 3×ln(2 + EIOU_Loss).
8. The cloth defect detection method based on the improved Yolov4 network according to claim 7, further comprising S264, redefining the penalty term as L_Focal-EIOU:
L_Focal-EIOU = IOU^γ × L_EIOU;
where γ is a parameter controlling the degree to which outliers are suppressed.
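A minimal PyTorch sketch of the loss computation in claims 7 and 8, assuming boxes given as (x1, y1, x2, y2) tensors; L_EIOU follows the reconstruction above, and γ is left as a free parameter:

```python
import math
import torch

def ceiou_loss(pred, gt, gamma=0.5, eps=1e-7):
    """pred, gt: (N, 4) tensors of (x1, y1, x2, y2) boxes."""
    w, h = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wg, hg = gt[:, 2] - gt[:, 0], gt[:, 3] - gt[:, 1]

    # Intersection-over-union of predicted and ground-truth boxes.
    iw = (torch.min(pred[:, 2], gt[:, 2]) - torch.max(pred[:, 0], gt[:, 0])).clamp(0)
    ih = (torch.min(pred[:, 3], gt[:, 3]) - torch.max(pred[:, 1], gt[:, 1])).clamp(0)
    inter = iw * ih
    iou = inter / (w * h + wg * hg - inter + eps)

    # Smallest enclosing box: width C_w, height C_h, squared diagonal C^2.
    cw = torch.max(pred[:, 2], gt[:, 2]) - torch.min(pred[:, 0], gt[:, 0])
    ch = torch.max(pred[:, 3], gt[:, 3]) - torch.min(pred[:, 1], gt[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Squared Euclidean distance rho^2 between the two box centers.
    rho2 = ((pred[:, 0] + pred[:, 2] - gt[:, 0] - gt[:, 2]) ** 2 +
            (pred[:, 1] + pred[:, 3] - gt[:, 1] - gt[:, 3]) ** 2) / 4

    # Penalty term L_EIOU and EIOU_Loss = 1 - L_EIOU (claim 7, S261-S262).
    l_eiou = iou - rho2 / c2 - (w - wg) ** 2 / (cw ** 2 + eps) \
                             - (h - hg) ** 2 / (ch ** 2 + eps)
    eiou_loss = 1 - l_eiou

    # CEIOU_Loss = 3*ln(3) - 3*ln(2 + EIOU_Loss) (claim 7, S263).
    ceiou = 3 * math.log(3.0) - 3 * torch.log(2 + eiou_loss)

    # Focal variant of the penalty term (claim 8): IOU^gamma * L_EIOU.
    l_focal_eiou = iou.clamp(min=eps) ** gamma * l_eiou
    return ceiou, l_focal_eiou
```

Since each ratio in L_EIOU lies in [0, 1], EIOU_Loss stays within [0, 4], so the argument of the logarithm is always positive.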
9. The cloth defect detection method based on the improved Yolov4 network according to claim 1, wherein in step S2, before the image is input to the improved Yolov4 network model, it is first scaled toward 416x416 with the resize function while preserving its aspect ratio, and gray bars are then added to pad the scaled image to 416x416 so that it is not distorted.
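A minimal letterbox sketch of the preprocessing in claim 9, assuming the Pillow library; the gray value 128 and bicubic resampling are assumptions:

```python
from PIL import Image

def letterbox(img, target=416, fill=128):
    """Scale img to fit a target x target square, padding with gray bars."""
    w, h = img.size
    scale = min(target / w, target / h)          # preserve the aspect ratio
    nw, nh = int(w * scale), int(h * scale)
    resized = img.resize((nw, nh), Image.BICUBIC)
    canvas = Image.new("RGB", (target, target), (fill, fill, fill))
    canvas.paste(resized, ((target - nw) // 2, (target - nh) // 2))
    return canvas

# Usage: letterbox(Image.open("cloth.jpg")).save("cloth_416.jpg")
```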
10. The cloth defect detection method based on the improved Yolov4 network according to claim 1, wherein the improved Yolov4 network comprises a bounding-box loss function CEIOU_Loss determined as follows:
calculating the penalty term L_EIOU:
L_EIOU = IOU − ρ²(b, b^gt)/C² − ρ²(w, w^gt)/C_w² − ρ²(h, h^gt)/C_h²
where b denotes the center point of the predicted box and b^gt the center point of the ground-truth box; ρ is the Euclidean distance between the two center points; C denotes the diagonal length of the smallest enclosing box that covers both the predicted box and the ground-truth box; C_w and C_h denote the width and height of that smallest enclosing box; IOU is the intersection-over-union of the ground-truth box and the predicted box; w and h denote the width and height of the predicted box; and w^gt and h^gt denote the width and height of the ground-truth box;
calculating the loss function EIOU_Loss: EIOU_Loss = 1 − L_EIOU;
calculating the bounding-box loss function CEIOU_Loss: CEIOU_Loss = 3×ln 3 − 3×ln(2 + EIOU_Loss).
CN202111549933.2A 2021-12-17 2021-12-17 Cloth flaw detection method based on improved Yolov4 network Active CN114240885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111549933.2A CN114240885B (en) 2021-12-17 2021-12-17 Cloth flaw detection method based on improved Yolov4 network

Publications (2)

Publication Number Publication Date
CN114240885A true CN114240885A (en) 2022-03-25
CN114240885B CN114240885B (en) 2022-08-16

Family

ID=80758009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111549933.2A Active CN114240885B (en) 2021-12-17 2021-12-17 Cloth flaw detection method based on improved Yolov4 network

Country Status (1)

Country Link
CN (1) CN114240885B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490874A (en) * 2019-09-04 2019-11-22 河海大学常州校区 Weaving cloth surface flaw detecting method based on YOLO neural network
CN111612751A (en) * 2020-05-13 2020-09-01 河北工业大学 Lithium battery defect detection method based on Tiny-yolov3 network embedded with grouping attention module
US20210370993A1 (en) * 2020-05-27 2021-12-02 University Of South Carolina Computer vision based real-time pixel-level railroad track components detection system
CN111681240A (en) * 2020-07-07 2020-09-18 福州大学 Bridge surface crack detection method based on YOLO v3 and attention mechanism
CN112170233A (en) * 2020-09-01 2021-01-05 燕山大学 Small part sorting method and system based on deep learning
CN112766188A (en) * 2021-01-25 2021-05-07 浙江科技学院 Small-target pedestrian detection method based on improved YOLO algorithm
CN112819804A (en) * 2021-02-23 2021-05-18 西北工业大学 Insulator defect detection method based on improved YOLOv5 convolutional neural network
CN113076842A (en) * 2021-03-26 2021-07-06 烟台大学 Method for improving identification precision of traffic sign in extreme weather and environment
CN113034478A (en) * 2021-03-31 2021-06-25 太原科技大学 Weld defect identification and positioning method and system based on deep learning network
CN113192040A (en) * 2021-05-10 2021-07-30 浙江理工大学 Fabric flaw detection method based on YOLO v4 improved algorithm
CN113362285A (en) * 2021-05-21 2021-09-07 同济大学 Steel rail surface damage fine-grained image classification and detection method
CN113393439A (en) * 2021-06-11 2021-09-14 重庆理工大学 Forging defect detection method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AMUSI (CVer): "NeurIPS 2021 | Boosting YOLOv5's accuracy! Alpha-IoU: a unification of IoU losses", HTTPS://BLOG.CSDN.NET/AMUSI1994/ARTICLE/DETAILS/121240415 *
YI-FAN ZHANG等: "Focal and Efficient IOU Loss for Accurate Bounding Box Regression", 《ARXIV》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114926842A (en) * 2022-04-29 2022-08-19 黄颢 Dongba pictograph recognition method and device
CN114723750A (en) * 2022-06-07 2022-07-08 南昌大学 Transmission line strain clamp defect detection method based on improved YOLOX algorithm
CN115294556A (en) * 2022-09-28 2022-11-04 西南石油大学 Improved YOLOv 5-based method for detecting abnormal flow state fluid on closed vibrating screen
CN115294556B (en) * 2022-09-28 2022-12-13 西南石油大学 Improved YOLOv 5-based method for detecting abnormal flow state fluid on closed vibrating screen
CN116228754A (en) * 2023-05-08 2023-06-06 山东锋士信息技术有限公司 Surface defect detection method based on deep learning and global difference information
CN116228754B (en) * 2023-05-08 2023-08-25 山东锋士信息技术有限公司 Surface defect detection method based on deep learning and global difference information

Also Published As

Publication number Publication date
CN114240885B (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN114240885B (en) Cloth flaw detection method based on improved Yolov4 network
CN110310259B (en) Improved YOLOv3 algorithm-based knot defect detection method
CN110853015A (en) Aluminum profile defect detection method based on improved Faster-RCNN
CN109509187B (en) Efficient inspection algorithm for small defects in large-resolution cloth images
CN105844621A (en) Method for detecting quality of printed matter
CN112270722A (en) Digital printing fabric defect detection method based on deep neural network
CN109544522A (en) A kind of Surface Defects in Steel Plate detection method and system
CN112037219A (en) Metal surface defect detection method based on two-stage convolution neural network
CN110910339B (en) Logo defect detection method and device
CN110889838A (en) Fabric defect detection method and device
CN110135430A (en) A kind of aluminium mold ID automatic recognition system based on deep neural network
CN110264457A (en) Weld seam autonomous classification method based on rotary area candidate network
CN107273933A (en) The construction method of picture charge pattern grader a kind of and apply its face tracking methods
CN112232263A (en) Tomato identification method based on deep learning
CN114445707A (en) Intelligent visual fine detection method for defects of bottled water labels
An et al. Fabric defect detection using deep learning: An Improved Faster R-approach
CN114882216A (en) Garment button quality detection method, system and medium based on deep learning
CN110111358B (en) Target tracking method based on multilayer time sequence filtering
CN116934685A (en) Steel surface defect detection algorithm based on Focal module and deformable convolution
CN116994049A (en) Full-automatic flat knitting machine and method thereof
CN111340783A (en) Real-time cloth defect detection method
CN114612506B (en) Simple, efficient and anti-interference high-altitude parabolic track identification and positioning method
CN113610831B (en) Wood defect detection method based on computer image technology and transfer learning
CN115937095A (en) Printing defect detection method and system integrating image processing algorithm and deep learning
CN115546788A (en) Concrete bubble detection method based on improved YOLOv5

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant