CN112200790A - Cloth defect detection method, device and medium


Info

Publication number
CN112200790A
Authority
CN
China
Prior art keywords
picture
cloth
data
defect
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011110696.5A
Other languages
Chinese (zh)
Other versions
CN112200790B (en)
Inventor
Cheng Jie (程洁)
Mao Xinyue (茅心悦)
Hu Xiaowei (胡晓伟)
Chen Chengcai (陈成才)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinghu Shanghai Intelligent Technology Co ltd
Shanghai Xiaoi Robot Technology Co Ltd
Original Assignee
Jinghu Shanghai Intelligent Technology Co ltd
Shanghai Xiaoi Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinghu Shanghai Intelligent Technology Co., Ltd. and Shanghai Xiaoi Robot Technology Co., Ltd.
Priority to CN202011110696.5A
Publication of CN112200790A
Application granted
Publication of CN112200790B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G01N 21/8983: Investigating the presence of flaws or contamination in moving material; irregularities in textured or patterned surfaces, for testing textile webs
    • G06N 3/045: Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 2207/10004: Image acquisition modality; still image; photographic image
    • G06T 2207/20081: Special algorithmic details; training; learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30124: Subject of image; industrial image inspection; fabrics; textile; paper
    • Y02P 90/30: Climate change mitigation technologies in the production or processing of goods; computing systems specially adapted for manufacturing

Abstract

An embodiment of the invention provides a cloth defect detection method, device and medium. The detection method comprises the following steps: obtaining a detected cloth picture; processing the detected cloth picture with a first convolutional neural network to obtain spatial feature data; processing the detected cloth picture with a second convolutional neural network to obtain detail feature data, the second convolutional neural network being shallower in hierarchy and wider in channel than the first convolutional neural network; fusing the spatial feature data and the detail feature data to obtain picture data; and determining defect information of the cloth based on the picture data. The embodiment of the invention can improve detection precision.

Description

Cloth defect detection method, device and medium
Technical Field
The embodiment of the invention relates to the technical field of image recognition, and in particular to a cloth defect detection method, detection device and detection medium.
Background
During processing of fabric material, flaws are easily formed on the surface of the cloth. Fig. 1 and fig. 2 show two schematic views of cloth bearing flaws: in fig. 1, a long patchwork defect 10 runs across the surface of the cloth, while in fig. 2 the cloth surface carries a less pronounced stain defect 20.
In the prior art, the cloth is imaged during fabric processing to obtain an input image, which is then analyzed to detect flaws. Existing flaw detection, however, suffers from insufficient precision. Specifically, the rectangular frame 30 in fig. 2 is the defect detection result output by a conventional detection method, yet the stain 20 on the cloth does not lie within the rectangular frame 30; that is, the method fails to detect the stain 20 accurately.
Disclosure of Invention
The invention aims to provide a detection method, device and medium that improve detection precision.
The technical scheme of the invention provides a cloth defect detection method comprising the following steps: obtaining a detected cloth picture; processing the detected cloth picture with a first convolutional neural network to obtain spatial feature data; processing the detected cloth picture with a second convolutional neural network to obtain detail feature data, the second convolutional neural network being shallower in hierarchy and wider in channel than the first convolutional neural network; fusing the spatial feature data and the detail feature data to obtain picture data; and determining defect information of the cloth based on the picture data.
Optionally, fusing the spatial feature data and the detail feature data to obtain picture data comprises: fusing the spatial feature data and the detail feature data based on a preset weight.
Optionally, before the detected cloth picture is obtained, the cloth defect detection method further comprises a modeling step comprising: obtaining a sample picture; performing the first convolutional neural network processing on the sample picture to obtain sample spatial feature data; performing the second convolutional neural network processing on the sample picture to obtain sample detail feature data; fusing the sample spatial feature data and the sample detail feature data based on an initial weight to obtain sample picture data, completing one round of training; and adjusting the initial weight over multiple rounds of training, taking the adjusted weight as the preset weight once the loss of the sample picture data meets a specification value.
Optionally, obtaining a detected cloth picture comprises: obtaining an original picture; and cutting the original picture into equal-size pieces to obtain a plurality of detected cloth pictures. Determining the defect information of the cloth based on the picture data then comprises: merging the picture data corresponding to the plurality of detected cloth pictures, and determining the positions and/or types of defects from the merged data.
Optionally, the second convolutional neural network comprises a VGG network and the first convolutional neural network comprises a MobileNet V2 network; or the first convolutional neural network comprises a MobileNet V2 network and a feature image pyramid for processing the data output by the MobileNet V2 network; or the first convolutional neural network comprises a ResNet 50 network, a feature image pyramid and a fully convolutional network.
Correspondingly, an embodiment of the invention further provides a cloth defect detection data processing method comprising: acquiring defect information of the cloth according to any of the above detection methods; and determining a defect handling mode at least according to the defect information.
Optionally, when the defect information includes defect positions, the distance between every two adjacent defect reference lines is calculated from the position information; a material-breaking area is determined according to a preset rule, namely that an area formed by consecutive defects whose spacing is smaller than a preset threshold is taken as a material-breaking area; an edge reference line is obtained for each material-breaking area and for each isolated defect outside the material-breaking areas; and the material-breaking positions of the cloth are determined from the edge reference lines, as sketched in the code below.
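For illustration, a minimal sketch of the stated rule follows, assuming the defect reference lines are given as one-dimensional coordinates along the cloth; the function name, the data layout and the threshold units are assumptions, not taken from the patent.

```python
def find_break_regions(defect_lines, threshold):
    """Group consecutive defect reference lines whose spacing is below the
    threshold into material-breaking regions; the rest remain isolated
    defects. Returns ([(first_edge, last_edge), ...], [isolated, ...])."""
    def flush(run, regions, isolated):
        if len(run) > 1:
            regions.append((run[0], run[-1]))   # edge reference lines
        elif run:
            isolated.append(run[0])

    lines = sorted(defect_lines)
    regions, isolated, run = [], [], lines[:1]
    for prev, cur in zip(lines, lines[1:]):
        if cur - prev < threshold:
            run.append(cur)          # spacing below threshold: same region
        else:
            flush(run, regions, isolated)
            run = [cur]
    flush(run, regions, isolated)
    return regions, isolated

print(find_break_regions([0.1, 0.2, 0.3, 5.0, 9.0, 9.05], threshold=0.5))
# ([(0.1, 0.3), (9.0, 9.05)], [5.0])
```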
Accordingly, an embodiment of the present invention further provides a medium storing computer instructions which, when executed, perform the steps of the method of the embodiments of the present invention.
Correspondingly, an embodiment of the present invention further provides an apparatus comprising a defect detection module, the defect detection module further comprising: a first picture acquisition unit for obtaining a detected cloth picture; a semantic unit for processing the detected cloth picture with the first convolutional neural network to obtain spatial feature data; a detail unit for processing the detected cloth picture with the second convolutional neural network to obtain detail feature data, the second convolutional neural network being shallower in hierarchy and wider in channel than the first convolutional neural network; a fusion unit for fusing the spatial feature data and the detail feature data to obtain picture data; and a determining unit for determining the defect information of the cloth from the picture data.
Optionally, the apparatus further comprises: a sampling device for photographing the detected cloth; and a transport platform for transporting the detected cloth, the sampling device being arranged on the transport platform. The type and/or position of defects on the detected cloth is determined from the pictures of the detected cloth obtained by the sampling device.
Compared with the prior art, the technical scheme of the invention has the following advantages:
in the technical scheme of the invention, the detected cloth picture is recognized by a convolutional neural network (CNN) deep learning method to obtain the defect information on it, with the first and second convolutional neural networks processing one detected cloth picture to obtain spatial and detail feature data respectively; the fused picture data therefore contains the spatial information without losing the detail information, so the detection method of the embodiment of the invention achieves higher defect detection precision while maintaining processing efficiency.
Drawings
FIG. 1 is a schematic view of a cloth with a flaw;
FIG. 2 is a schematic view of another cloth with a flaw;
FIG. 3 is a schematic flow chart of an embodiment of the detection method of the present invention;
FIG. 4 is a diagram illustrating the detected picture 101 obtained in step S1 of FIG. 3;
FIG. 5 is a schematic diagram of a first convolutional neural network of step S2 in FIG. 3;
FIG. 6 is a schematic diagram of another first convolutional neural network of step S2 in FIG. 3;
FIG. 7 is a schematic diagram of the fusing step S4 in FIG. 3;
FIG. 8 compares, in panels a and b, the output of a prior-art detection method with that of the detection method of the present invention;
FIG. 9 is a functional block diagram of an embodiment of a detection system of the present invention;
FIG. 10 is a functional block diagram of an embodiment of the apparatus of the present invention.
Detailed Description
As described in the background, defect detection of cloth in the prior art suffers from low detection accuracy; the image-processing problems of the prior art are analyzed below with reference to fig. 1 and 2.
In the prior art, the image processing applied to an input image includes algorithms such as Gaussian filtering, whose parameters must be tuned for different pictures. The patchwork defect 10 in fig. 1 is large, while the stain defect 20 in fig. 2 is small; flaws of different sizes require different parameter settings before they can be detected accurately.
In an actual cloth-processing procedure, the kinds of flaws that will appear on the cloth cannot be known in advance, so image processing is usually performed with a single parameter setting, which easily causes targets to be missed: the patchwork defect 10 in fig. 1 is large and easily detected, while the stain defect 20 in fig. 2 is small and easily missed, giving rise to missed defects.
In addition, to reduce computational load, the prior art also constrains the input image by cropping or scaling, which easily loses spatial detail, with losses at the borders being especially severe; detection accuracy therefore tends to drop.
To solve these technical problems, embodiments of the present invention provide a detection method in which the detected picture is recognized by a convolutional neural network (CNN) deep learning method to obtain the defect information on it, and spatial and detail feature data are obtained for one detected picture through first and second convolutional neural network processing respectively; the fused picture data therefore contains the spatial information without losing the detail information, so the detection method achieves higher defect detection precision while maintaining processing efficiency.
Referring to fig. 3, a flow chart of an embodiment of the detection method of the present invention is schematically shown. The detection method comprises the following steps:
step S1, obtaining a detected picture;
step S2, processing the detected picture with a first convolutional neural network to obtain spatial feature data;
step S3, processing the detected picture with a second convolutional neural network to obtain detail feature data, the second convolutional neural network being shallower in hierarchy and wider in channel than the first convolutional neural network;
step S4, fusing the spatial feature data and the detail feature data to obtain picture data;
step S5, determining defect information based on the picture data.
The following describes each step of the above detection method in detail.
As shown in fig. 4, step S1 is executed to obtain the detected picture 101, i.e., a picture that can be recognized and processed by a convolutional neural network.
In this embodiment, the detection method detects cloth, and the defect information is defect information on the cloth; the detected picture is therefore a cloth picture, that is, the detected picture includes a detected cloth picture.
In an actual cloth-processing procedure, the cloth moves rapidly on the production line; for flaw detection, the cloth surface is photographed by a camera (or another image sensor) to obtain an original picture of the surface. After the original picture is obtained, this embodiment further performs equal-size segmentation on it to obtain a plurality of detected pictures 101, because the size of an original picture from a common industrial camera does not meet the processing requirements of the convolutional neural network and the picture must be cut.
For example, a 4096 × 500 original picture is cut equally into 500 × 500 detected pictures, which are then sent to the convolutional neural network for processing.
In other embodiments, the cut size can be chosen according to the convolutional neural network's requirements on the detected picture; alternatively, if the original picture already meets the processing requirements of the convolutional neural network, no equal-size segmentation is needed.
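As an illustration of the equal-size segmentation, here is a minimal sketch; zero-padding the leftover strip up to the tile size is an assumed handling of originals that do not divide evenly, not a choice specified by the text.

```python
import numpy as np

def slice_image(original, tile=500):
    """Cut an original picture (H x W x C) into tile x tile pieces,
    zero-padding any leftover strip at the right/bottom edge."""
    h, w = original.shape[:2]
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            piece = original[y:y + tile, x:x + tile]
            ph, pw = piece.shape[:2]
            if ph < tile or pw < tile:   # pad the leftover strip
                piece = np.pad(piece, ((0, tile - ph), (0, tile - pw), (0, 0)))
            tiles.append(piece)
    return tiles

# A 4096 x 500 original yields 9 tiles of 500 x 500 (the ninth padded).
frame = np.zeros((500, 4096, 3), dtype=np.uint8)
print(len(slice_image(frame)))  # 9
```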
To facilitate identification of the defect information, this embodiment further includes: after the original picture is obtained and before the equal-size segmentation, preprocessing the original picture to enhance its feature information. Since the embodiment of the invention identifies image features that stand out abnormally from the cloth background, strengthening the feature information in the picture makes the abnormal image information associated with defects more salient, benefiting the accuracy of subsequent defect identification.
Specifically, the preprocessing of this embodiment is Gaussian filtering of the original picture. By smoothing the data, Gaussian filtering is very effective at suppressing normally distributed noise, yielding an image with a high signal-to-noise ratio that reflects the real image information.
In other embodiments, the preprocessing may further include picture dilation (Dilation) or picture erosion (Erosion). Dilation strengthens the feature information of the picture, while erosion weakens noise so as to highlight the feature information; both therefore serve to strengthen the picture's feature information.
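A sketch of this preprocessing with OpenCV; the kernel sizes and the order of the operations are assumptions for illustration.

```python
import cv2
import numpy as np

def preprocess(original):
    """Suppress noise and strengthen feature information before slicing."""
    # Gaussian filtering smooths normally distributed noise.
    smoothed = cv2.GaussianBlur(original, ksize=(5, 5), sigmaX=0)
    # Optional morphology: dilation strengthens feature information,
    # erosion weakens noise around it.
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(smoothed, kernel, iterations=1)
    return cv2.erode(dilated, kernel, iterations=1)
```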
Step S2 is then executed: the detected picture is processed by the first convolutional neural network to obtain spatial feature data.
The detected picture obtained in step S1 can be represented by data; specifically, the data input to the network is a pixel-value matrix whose elements are pixel values representing the different gray levels in the picture.
Features in the pixel-value matrix can be extracted and learned by a convolutional neural network to obtain the image information of the detected picture (such as a flat background, a small object on the background, or the edge of a large object on the background).
Feature extraction mainly consists of convolving the pixel-value matrix with different convolution kernels (filters, usually 3 × 3 or 5 × 5) to obtain different feature maps; based on the feature maps and subsequent processing (e.g., sampling), image learning and recognition are realized.
In the embodiment of the invention, the first convolutional neural network processing is performed by a convolutional neural network with narrow channels (Channel) and a deep hierarchy, yielding a first matrix that embodies the spatial information and serves as the spatial feature data.
During the first convolutional neural network processing, as the number of downsampling or convolution operations increases, the receptive fields over the pixel matrix gradually grow and their overlap keeps increasing; the information obtained then describes a region, i.e., feature information within the current region or between adjacent regions. Enlarging the receptive field therefore yields high-level semantics, and in turn the spatial feature data.
Because the first convolutional neural network has narrow channels (e.g., Channel = 32 or 64), the number of convolution kernels is correspondingly small, which reduces the computation required for image processing.
It should be noted that the deeper the hierarchy of a deep convolutional network, i.e., the more convolutions it performs, the more easily the gradients between layers diverge and errors arise.
This embodiment obtains the spatial feature data through a convolutional neural network with a direct-connection (Shortcut) structure, which reduces the errors introduced by network processing by letting the input data carry the residual, thereby optimizing the training effect.
Specifically, the first convolutional neural network may be a MobileNet V2 network. The main framework of MobileNet V2 combines the units of MobileNet V1 with the residual structure of ResNet and adopts an ascend-then-descend dimension scheme, sequentially performing expansion, convolutional feature extraction and compression. MobileNet V2 is a lightweight network with narrow channels and deep layers, which speeds up the network's processing of the detected picture 101.
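For illustration, a sketch of one MobileNet V2-style unit in PyTorch showing the expand, depthwise-convolve and compress sequence with a shortcut; the channel count and expansion factor are illustrative assumptions, not the patent's configuration.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Ascend dimension (1x1), extract features (depthwise 3x3), then
    descend dimension (1x1), with a direct-connection (Shortcut) path."""

    def __init__(self, channels=32, expand=6):
        super().__init__()
        hidden = channels * expand
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),        # expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),              # feature extraction
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),        # compression
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)   # residual keeps gradients well-behaved

out = InvertedResidual()(torch.randn(1, 32, 64, 64))  # shape preserved
```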
In other embodiments, other convolutional neural networks may be used to process the detected picture to obtain high-level semantics. Referring to fig. 5, a schematic diagram of a first convolutional neural network of step S2 in fig. 3 is shown. The first convolutional neural network includes: the MobileNet network 201 and a feature image pyramid 202 (Feature Pyramid Network, FPN) for further processing the data output by the MobileNet network 201.
Specifically, FPN is a method of fusing features of different resolutions: the feature map at each resolution is added element-wise to an upsampled lower-resolution feature, strengthening the features at every level and markedly improving target-detection performance. Since the FPN only adds cross-layer connections and low-resolution feature additions on top of the MobileNet network, it increases the computation only slightly compared with the embodiment using the MobileNet V2 network alone, balancing efficiency and precision.
Specifically, as shown in fig. 5, the FPN uses 4 levels of features (e.g., C2, C3, C4, C5 extract the picture at scales 32, 64, 128, 256 respectively), and each level fuses in the lower-resolution features (e.g., C4 incorporates the C5 features). By fusing features of different scales, the FPN extracts features from the pixel matrix at multiple scales, preventing as far as possible the loss of defect targets in the detected picture 101.
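A minimal sketch of the top-down fusion just described; treating the quoted 32/64/128/256 as per-level channel widths is an assumption, as is the common 64-channel projection width.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    """Project each level to a common width, then add it element-wise to
    the upsampled lower-resolution level (C5 down to C2)."""

    def __init__(self, in_channels=(32, 64, 128, 256), width=64):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, width, 1) for c in in_channels)

    def forward(self, c2, c3, c4, c5):
        p5 = self.lateral[3](c5)
        p4 = self.lateral[2](c4) + F.interpolate(p5, scale_factor=2)
        p3 = self.lateral[1](c3) + F.interpolate(p4, scale_factor=2)
        p2 = self.lateral[0](c2) + F.interpolate(p3, scale_factor=2)
        return p2, p3, p4, p5

feats = [torch.randn(1, c, s, s)
         for c, s in zip((32, 64, 128, 256), (64, 32, 16, 8))]
p2, p3, p4, p5 = TinyFPN()(*feats)
```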
Referring to fig. 6, a schematic diagram of another first convolutional neural network of step S2 in fig. 3 is shown. The first convolutional neural network of this embodiment consists of a ResNet 50 network, a feature image pyramid and a fully convolutional network. Processing by this first convolutional neural network further includes fusing the feature data produced by the network through the native module 301, the segmentation module 302 and the fusion module 303.
Specifically, as shown in fig. 6, after the detected picture 101 is input into the network, the feature data is processed by first descending in dimension (e.g., the block1-to-block5 stages) and then ascending (e.g., the up4-to-up1 stages).
The native module 301 outputs first feature data obtained from the complete descend-then-ascend pass, while the segmentation module 302 outputs second feature data obtained from the descending first half alone.
The fusion module 303 fuses the first feature data and the second feature data to obtain the spatial feature data.
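The sketch below mirrors that layout with stand-in layers: the segmentation module taps the feature after the descending half only, the native module runs the full descend-then-ascend path, and a fusion module adds the two. It is only an illustration of how the three modules relate, not the patent's ResNet 50 configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DownUpBranch(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.down = nn.Sequential(          # stand-in for block1..block5
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(            # stand-in for up4..up1
            nn.ConvTranspose2d(channels, channels, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(channels, channels, 2, stride=2), nn.ReLU(),
        )
        self.project = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        second = self.down(x)               # segmentation-module output
        first = self.up(second)             # native-module output
        return first + F.interpolate(       # fusion module adds the two
            self.project(second), size=first.shape[2:])

spatial = DownUpBranch()(torch.randn(1, 3, 64, 64))
```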
Step S3 is executed: the detected picture is processed by the second convolutional neural network to obtain detail feature data; the second convolutional neural network is shallower in hierarchy and wider in channel than the first convolutional neural network.
In the embodiment of the invention, the second convolutional neural network is complementary to the first, so complementary feature information can be obtained; moreover, the second convolutional neural network processes the detected picture in parallel with the first, which improves the processing efficiency of the detection method.
The second convolutional neural network processing is performed by a network with wide channels and shallow layers, yielding a second matrix that embodies the detail information and serves as the detail feature data.
Specifically, because the second convolutional neural network is shallow, its receptive field is correspondingly small, so the finally output feature map embodies finer-grained feature information.
The second convolutional neural network has wide channels (e.g., Channel = 512) and can process the three RGB channels, so more convolution kernels capture more detail information.
In this embodiment, the second convolutional neural network adopts a VGG network structure.
Specifically, each layer of the VGG structure comprises a convolutional layer, batch normalization (Batch Normalization) and an activation function.
The stride of the first layer of each stage can be set to 2, so the feature map output by the second convolutional neural network is 1/8 of the original input, giving fine granularity and retaining detail information.
In practical applications, the stride and the convolution-kernel size can be adjusted according to the required computation speed and precision.
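A sketch of such a detail path, assuming three stages whose first layer has stride 2 so the output is 1/8 of the input; the channel progression toward the stated width of 512 is an assumption.

```python
import torch
import torch.nn as nn

def vgg_stage(cin, cout):
    """One stage: a stride-2 layer then a stride-1 layer, each being
    convolution + batch normalization + activation."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

# Three stride-2 stages reduce an RGB input to 1/8 of its size.
detail_path = nn.Sequential(vgg_stage(3, 64), vgg_stage(64, 256),
                            vgg_stage(256, 512))
print(detail_path(torch.randn(1, 3, 500, 500)).shape)
# torch.Size([1, 512, 63, 63]), roughly 1/8 of 500
```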
It should be noted that, compared with the second convolutional neural network of step S3, the first convolutional neural network of step S2 has a residual structure, which reduces errors introduced by network processing; it also performs cross-layer feature fusion, retaining more features.
Furthermore, with the same number of convolutional layers, the first convolutional neural network has fewer parameters than the second, reducing computation; this holds mainly for the MobileNet V2 base network.
Step S4 is executed: the spatial feature data and the detail feature data are fused to obtain picture data.
Fusion is the element-wise addition, at corresponding positions, of the first matrix representing the spatial information and the second matrix representing the detail information; the preset weight is the ratio between the spatial feature data and the detail feature data in this addition.
The weights of the spatial feature data and the detail feature data each lie between 0 and 1, and the two weights sum to 1.
The spatial and detail feature data complement each other; by fusing them, the resulting picture data contains the spatial information without loss of detail information, guaranteeing high detection precision while retaining a certain processing speed.
Specifically, the fusing step comprises: fusing the spatial feature data and the detail feature data based on a preset weight to obtain the picture data.
In practical applications, the preset weight may be set to 1:1, i.e., the picture data is obtained by simply adding the spatial feature data and the detail feature data; this processing is simple and computationally cheap.
The two kinds of feature data can also be fused in other ways. Referring to fig. 7, a schematic diagram of one way of fusing in step S4 is shown. The fusing step comprises: processing the spatial feature data through two different convolutions to obtain first and second spatial data; processing the detail feature data through two different convolutions to obtain first and second detail data; and, during fusion, combining the first spatial data with the first (or second) detail data and the second spatial data with the first (or second) detail data, giving four combinations. With these multiple combinations, the preset weight can be adjusted so that the resulting picture data loses less relative to the original picture, reflecting the original information more faithfully and improving the accuracy of defect judgment.
In other embodiments, more paths or combinations of paths may be used to configure the weights to perform the fusing step.
The fusing step superimposes the spatial feature data learned in step S2 and the detail feature data learned in step S3 to obtain the picture data, completing the learning process on the detected picture.
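At its simplest, the fusion is the weighted element-wise addition sketched below; w = 0.5 reproduces the 1:1 case, and a trained value would come from the modeling step described later.

```python
import torch

def fuse(spatial, detail, w=0.5):
    """Element-wise weighted addition of the two complementary matrices;
    w and 1 - w are the preset weights (each in [0, 1], summing to 1).
    Both tensors must share one shape, which projection/upsampling in a
    full pipeline would have to guarantee."""
    return w * spatial + (1.0 - w) * detail

picture_data = fuse(torch.randn(1, 64, 63, 63), torch.randn(1, 64, 63, 63))
```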
Step S5 determines the defect information based on the picture data.
The picture data obtained by machine learning on the detected picture is compared with pre-stored defect information to determine the position and/or type of the defect.
In this embodiment, the detected picture underwent equal-size segmentation before being input to the network; correspondingly, determining the defect information based on the picture data comprises: merging the picture data corresponding to the plurality of detected pictures, and determining the position or type of the defect from the merged data.
During merging, each detected picture is restored to its position at cutting time, yielding the picture data of the entire original picture and facilitating accurate localization of the defect positions.
It should be noted that the picture data here is equivalent to a matrix whose elements indicate, for each position, whether a defect exists and its type: positions without flaws have element value 0, while defective positions have element values 1, 2, 3, ..., each representing a different defect type.
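To illustrate the merge-and-decode step, a sketch assuming the per-tile outputs are class matrices laid out row-major in a known grid; the grid bookkeeping is an assumed implementation detail.

```python
import numpy as np

def merge_tiles(tiles, grid):
    """Restore per-tile class matrices (0 = no defect, 1, 2, ... = defect
    types) to their positions before slicing, giving one label map for
    the whole original picture."""
    rows, cols = grid
    th, tw = tiles[0].shape
    full = np.zeros((rows * th, cols * tw), dtype=tiles[0].dtype)
    for i, t in enumerate(tiles):
        r, c = divmod(i, cols)
        full[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = t
    return full

def defect_report(label_map):
    """Read positions and types back out of the merged matrix."""
    ys, xs = np.nonzero(label_map)
    return [(int(y), int(x), int(label_map[y, x])) for y, x in zip(ys, xs)]

tiles = [np.zeros((500, 500), dtype=np.uint8) for _ in range(8)]
tiles[2][10, 20] = 1                      # a spot defect in the third tile
print(defect_report(merge_tiles(tiles, grid=(1, 8))))  # [(10, 1020, 1)]
```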
Returning to the flowchart of fig. 3: in this embodiment, a modeling step is also required before actual detection, mainly to configure the preset weight. During modeling, the convolutional neural networks carry out feature learning, which is also the learning of the defect features.
Specifically, the modeling step comprises: obtaining a sample picture; performing the first convolutional neural network processing on the sample picture to obtain sample spatial feature data; performing the second convolutional neural network processing on the sample picture to obtain sample detail feature data; fusing the sample spatial feature data and the sample detail feature data based on an initial weight to obtain sample picture data, completing one round of training; and adjusting the initial weight over multiple rounds of training, taking the adjusted weight as the preset weight once the loss of the sample picture data meets a specification value.
The modeling step performs the same processing as the detection method; the difference lies in the data fed into the network. During modeling the input is a sample picture, and from learning on the sample pictures the network both learns the defect features in the pictures and configures the preset weights of the two kinds of feature data used in the fusing step.
At the start of learning, the initial weight is set randomly; the weight is adjusted in each round of learning to reduce the loss of the picture data, and once the loss of the sample picture data meets the specification value, the adjusted weight is taken as the preset weight.
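A hypothetical sketch of this modeling loop: two stand-in branch networks, a randomly initialized fusion weight adjusted every round, and a stop once the loss meets an assumed specification value. The networks, loss, data and the value 0.05 are all placeholders.

```python
import torch
import torch.nn as nn

# Stand-ins for the two branches (MobileNet V2-based and VGG-based above).
spatial_net = nn.Conv2d(3, 8, 3, padding=1)
detail_net = nn.Conv2d(3, 8, 3, padding=1)

w = nn.Parameter(torch.rand(1))            # randomly set initial weight
params = [w] + list(spatial_net.parameters()) + list(detail_net.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)
criterion = nn.MSELoss()
SPEC_VALUE = 0.05                          # assumed specification value

sample = torch.randn(4, 3, 64, 64)         # synthetic sample pictures
target = torch.zeros(4, 8, 64, 64)         # synthetic supervision signal

for _ in range(100):                       # each pass is one training round
    fused = w * spatial_net(sample) + (1 - w) * detail_net(sample)
    loss = criterion(fused, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() <= SPEC_VALUE:          # loss meets the specification
        break

preset_weight = float(w.detach())          # adjusted weight becomes the preset
```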
Thereafter, the actual detection process uses the preset weight obtained in the modeling process.
The modeling step further differs from the detection method in that, after an original sample picture is obtained, it is converted into a mask picture; the mask picture is gray-scaled to obtain a gray-scale image, and the gray-scale image and the original sample picture together serve as the sample pictures.
Training on the gray-scaled sample picture keeps the data volume small while still reflecting the defect-edge information, which aids the learning of defect features.
It should be noted that both the original sample pictures of the modeling step and the original pictures actually detected are equally cut pictures. The gray-scale image and the original sample picture are trained as a pair; the original sample picture establishes the association between the gray-scale image and the original picture, so that in subsequent detection, inputting the original picture alone suffices for defect detection.
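A sketch of the pairing, assuming the mask picture already exists as a color image; how the mask is produced from the annotations is outside this snippet, and the helper name is hypothetical.

```python
import cv2
import numpy as np

def to_training_pair(original, mask_bgr):
    """Gray-scale the mask picture and pair it with the original sample
    picture; the gray-scale image is small in data volume yet keeps the
    defect-edge information."""
    gray = cv2.cvtColor(mask_bgr, cv2.COLOR_BGR2GRAY)
    return gray, original

original = np.zeros((500, 500, 3), dtype=np.uint8)
mask = np.zeros((500, 500, 3), dtype=np.uint8)
gray, orig = to_training_pair(original, mask)
```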
Referring to fig. 8, panels a and b compare the output of a prior-art detection method with that of the detection method of the present invention.
For the stain defect 501, the detection box 502 of the prior-art method in panel a is not aligned with the defect's location. As shown in panel b, the detection method of the present invention accurately marks the location of the stain defect 501 and determines the defect type to be a spot with a defect count of 1.
In other embodiments, other types of defects may also be detected, such as pins, snags, patchwork and so forth.
It should further be noted that the detection method of the embodiment of the present invention completes the processing of one cloth picture in under 0.1 s and achieves a detection precision (mIoU) of 0.7 or higher for the defect information; the method thus takes both processing speed and detection precision into account.
Accordingly, the present invention further provides a detection device and a detection system; the device includes a defect detection module, which in turn comprises the units of the system. Referring to fig. 9, a functional block diagram of an embodiment of the detection system of the present invention, the detection system includes:
a first picture acquisition unit 601, configured to obtain a detected picture, the detected picture including a detected cloth picture;
a semantic unit 602, configured to process the detected picture with the first convolutional neural network to obtain spatial feature data;
a detail unit 603, configured to process the detected picture with the second convolutional neural network to obtain detail feature data, the second convolutional neural network being shallower in hierarchy and wider in channel than the first convolutional neural network;
a fusion unit 604, configured to fuse the spatial feature data and the detail feature data to obtain picture data;
a determining unit 605, configured to determine defect information from the picture data; the defect information may be defect information of the cloth, including the size, position or coordinates of a defect.
The detection system of the embodiment of the invention recognizes the detected picture by a CNN deep learning method to obtain the defect information on it, obtaining spatial and detail feature data for one detected picture through the first and second convolutional neural networks respectively; the fused picture data therefore contains the spatial information without losing the detail information, so the detection system achieves higher defect detection precision while maintaining processing efficiency.
The various elements and modules of the detection system are described in detail below with reference to the figures.
With reference to fig. 4, the first picture acquisition unit 601 is configured to obtain the detected picture 101, i.e., a picture that can be recognized and processed by a convolutional neural network.
In this embodiment, the detection system detects cloth, and the defect information is defect information on the cloth; the detected picture is therefore a cloth picture.
In an actual cloth-processing procedure, the cloth moves rapidly on the production line; for flaw detection, the cloth surface is photographed by a camera (or another image sensor) to obtain an original picture of the surface. After the original picture is obtained, this embodiment further performs equal-size segmentation on it to obtain a plurality of detected pictures 101, because the size of an original picture from a common industrial camera does not meet the processing requirements of the convolutional neural network and the picture must be cut.
For example, a 4096 × 500 original picture is cut equally into 500 × 500 detected pictures, the leftover part being enlarged to 500 × 500; alternatively, it is cut into 512 × 500 detected pictures. The pieces are then sent to the convolutional neural network for processing.
In other embodiments, the cut size can be chosen according to the convolutional neural network's requirements on the detected picture; alternatively, if the original picture already meets those requirements, no equal-size cutting is needed.
It should be noted that many types of defects can occur on cloth, in widely varying sizes. To facilitate identification of the defect information, the first picture acquisition unit 601 of this embodiment is further configured to preprocess the original picture before the equal-size segmentation, so as to enhance the feature information of the picture. Since the embodiment of the invention identifies image features that stand out abnormally from the cloth background, strengthening the feature information makes the abnormal image information associated with defects more salient, benefiting the accuracy of subsequent defect identification.
The first picture acquisition unit 601 may apply dilation or erosion preprocessing to the original picture: dilation strengthens the feature information of the picture, while erosion weakens noise so as to highlight the feature information, so both serve to strengthen the picture's feature information.
Alternatively, the first picture acquisition unit 601 applies Gaussian filtering to the original picture. By smoothing the data, Gaussian filtering is very effective at suppressing normally distributed noise, yielding an image with a high signal-to-noise ratio that reflects the real image information.
The semantic unit 602 is configured to process the detected picture with the first convolutional neural network to obtain spatial feature data.
The detected picture obtained by the first picture acquisition unit 601 can be represented by data, specifically a pixel-value matrix whose elements are pixel values representing different gray levels; that is, the input processed by the first convolutional neural network is a pixel-value matrix.
Features in the pixel-value matrix can be extracted and learned by a convolutional neural network to obtain the image information of the detected picture (such as a flat background, a small object on the background, or the edge of a large object on the background).
Feature extraction mainly consists of convolving the pixel-value matrix with different convolution kernels (filters, usually 3 × 3 or 5 × 5) to obtain different feature maps; based on the feature maps and subsequent processing (e.g., sampling), image learning and recognition are realized.
In the embodiment of the invention, the first convolutional neural network processing is performed by a convolutional neural network with narrow channels and a deep hierarchy, yielding a first matrix that embodies the spatial information, namely the spatial feature data.
As the hierarchy of the first convolutional neural network deepens with more downsampling or convolution operations, the receptive fields over the pixel matrix gradually grow and their overlap keeps increasing; the information obtained then describes a region, i.e., feature information within the region or between adjacent regions. Enlarging the receptive field therefore yields high-level semantics, and in turn the spatial feature data.
Because the first convolutional neural network has narrow channels (e.g., Channel = 32 or 64), the number of convolution kernels is correspondingly small, reducing the computation required for image processing.
It should be noted that the deeper the hierarchy of a deep convolutional network, i.e., the more convolutions it performs, the more easily the gradients between layers diverge and errors arise. This embodiment obtains the spatial feature data through a convolutional neural network with a direct-connection (Shortcut) structure, which reduces the errors introduced by network processing by letting the input data carry the residual, thereby optimizing the training effect.
Specifically, the first convolutional neural network may be a MobileNet V2 network. The main framework of MobileNet V2 combines the units of MobileNet V1 with the residual structure of ResNet and adopts an ascend-then-descend dimension scheme, sequentially performing expansion, convolutional feature extraction and compression. MobileNet V2 is a lightweight network with narrow channels and deep layers, which speeds up the network's processing of the detected picture 101.
In other embodiments, other convolutional neural networks may be used to process the detected picture to obtain high-level semantics. As shown in fig. 5, the first convolutional neural network may include the MobileNet network 201 and a feature image pyramid 202 (Feature Pyramid Network, FPN) for further processing the data output by the MobileNet network 201.
Specifically, FPN is a method of fusing features of different resolutions: the feature map at each resolution is added element-wise to an upsampled lower-resolution feature, strengthening the features at every level and markedly improving target-detection performance. Since the FPN only adds cross-layer connections and low-resolution feature additions on top of the MobileNet network, it increases the computation only slightly compared with using the MobileNet V2 network alone, balancing efficiency and precision.
As shown in fig. 5, the FPN uses 4 levels of features (e.g., C2, C3, C4, C5 extract the picture at scales 32, 64, 128, 256 respectively), and each level fuses in the lower-resolution features (e.g., C4 incorporates the C5 features). By fusing features of different scales, the FPN extracts features from the pixel matrix at multiple scales, preventing as far as possible the loss of defect targets in the detected picture 101.
As shown in fig. 6, the first convolutional neural network may instead consist of a ResNet 50 network, a feature image pyramid and a fully convolutional network. Processing by this first convolutional neural network further includes fusing the feature data produced by the network through the native module 301, the segmentation module 302 and the fusion module 303.
Specifically, as shown in fig. 6, after the detected picture 101 is input into the network, the feature data is processed by first descending in dimension (e.g., the block1-to-block5 stages) and then ascending (e.g., the up4-to-up1 stages).
The native module 301 outputs first feature data obtained from the complete descend-then-ascend pass, while the segmentation module 302 outputs second feature data obtained from the descending first half alone.
The fusion module 303 fuses the first feature data and the second feature data to obtain the spatial feature data.
The detection system of the embodiment of the invention further includes the detail unit 603, configured to process the detected picture with the second convolutional neural network to obtain detail feature data, the second convolutional neural network being shallower in hierarchy and wider in channel than the first convolutional neural network.
The second convolutional neural network of the detail unit 603 is complementary to the first, so complementary feature information can be obtained; moreover, it processes the detected picture in parallel with the first network, improving processing efficiency.
The second convolutional neural network processing is performed by a network with wide channels and shallow layers, yielding a second matrix that embodies the detail information, namely the detail feature data.
Specifically, because the second convolutional neural network is shallow, its receptive field is correspondingly small, so the finally output feature map embodies finer-grained feature information.
The second convolutional neural network has wide channels (e.g., Channel = 512), so more convolution kernels capture more detail information.
In this embodiment, the second convolutional neural network adopts a VGG network structure.
Specifically, each layer of the VGG structure comprises a convolutional layer, batch normalization and an activation function.
The stride of the first layer of each stage can be set to 2, so the feature map output by the second convolutional neural network is 1/8 of the original input, giving fine granularity and retaining detail information.
It should be noted that, compared with the second convolutional neural network, the first convolutional neural network has a residual structure, which reduces errors introduced by network processing; it also performs cross-layer feature fusion, retaining more features.
Furthermore, with the same number of convolutional layers, the first convolutional neural network can have fewer parameters than the second, reducing computation.
The detection system of the embodiment of the invention further includes the fusion unit 604, configured to fuse the spatial feature data and the detail feature data according to a preset weight to obtain picture data.
Fusion is the element-wise addition, at corresponding positions, of the first matrix representing the spatial information and the second matrix representing the detail information; the preset weight is the ratio between the spatial feature data and the detail feature data in this addition.
The weights of the spatial feature data and the detail feature data each lie between 0 and 1, and the two weights sum to 1.
The spatial and detail feature data complement each other; by fusing them, the resulting picture data contains the spatial information without loss of detail information, guaranteeing high detection precision while retaining a certain processing speed.
In practical applications, the preset weight may be set to 1:1, i.e., the picture data is obtained by simply adding the spatial feature data and the detail feature data; this processing is simple and computationally cheap.
The two kinds of feature data can also be fused in other ways. As shown in fig. 7, the fusion unit 604 may process the spatial feature data through two different convolutions to obtain first and second spatial data, and process the detail feature data through two different convolutions to obtain first and second detail data; during fusion it combines the first spatial data with the first (or second) detail data, and the second spatial data with the first (or second) detail data, giving four combinations. With these multiple combinations, the preset weight can be adjusted so that the resulting picture data loses less relative to the original picture, reflecting the original information more faithfully and improving the accuracy of defect judgment.
In other embodiments, more paths or combinations of paths may be used to configure the weights and perform the fusion.
The fusion unit 604 superimposes the separately learned spatial feature data and detail feature data to obtain the picture data, completing the learning process on the detected picture.
As shown in fig. 9, the detection system further includes the determining unit 605, configured to determine defect information from the picture data.
The determining unit 605 compares the picture data obtained by machine learning on the detected picture with pre-stored defect information to determine the position and/or type of the defect.
In this embodiment, the detected picture underwent equal-size segmentation before being input to the network; correspondingly, the determining unit 605 is further configured to merge the picture data corresponding to the plurality of detected pictures and to determine the position or type of the defect from the merged data.
During merging, the determining unit 605 restores each detected picture to its position at cutting time, yielding the picture data of the entire original picture and facilitating accurate localization of the defect positions.
It should be noted that the picture data here is equivalent to a matrix whose elements indicate, for each position, whether a defect exists and its type: positions without flaws have element value 0, while defective positions have element values 1, 2, 3, ..., each representing a different defect type.
In the above description, the functions and the connection relationships of the modules of the detection system during actual detection are described, and in practical application, before detection, the deep convolutional neural network training process for the modules of the detection system is further included. And in the training process, the method is mainly used for configuring the preset weight so as to realize modeling. In addition, the detection system also completes defect feature learning in the training process so as to compare and judge the defect information in the subsequent detection process.
With continued reference to the functional block diagram of the detection system shown in fig. 9, the detection system further includes a second picture obtaining unit 701 configured to obtain a sample picture. The semantic unit 602 is further configured to perform the first convolutional neural network processing on the sample picture to obtain sample spatial feature data; the detail unit 603 is further configured to perform the second convolutional neural network processing on the sample picture to obtain sample detail feature data; and the fusion unit 604 is further configured to fuse the sample spatial feature data and the sample detail feature data according to an initial weight to obtain sample picture data, completing one round of training. The initial weight is adjusted over multiple rounds of training, and when the loss of the sample picture data meets the specification value, the adjusted weight is taken as the preset weight.
With continued reference to fig. 9, the second picture obtaining unit 701 includes: a first picture processing unit 7011 configured to obtain an original sample picture and convert it into a mask picture; and a second picture processing unit 7012 configured to perform grayscale processing on the mask picture to obtain a grayscale image, the grayscale image and the original sample picture together serving as the sample picture.
Training with the grayscale-processed sample picture keeps the data volume small on the one hand and, on the other, still reflects the defect edge information, which facilitates the learning of defect features.
It should be noted that the original sample picture obtained by the first picture processing unit 7011 and the detected picture obtained by the first picture obtaining unit 601 are both equally divided pictures. The grayscale image and the original sample picture are trained in pairs as the sample picture; through the original sample picture, the association between the grayscale image and the original picture is established, so that in the subsequent detection process defect detection can be performed with only the original picture as input.
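A short OpenCV sketch of this sample-pair preparation is given below, assuming the mask is stored as an image file; the file paths and the mask format are assumptions for illustration.

```python
# Build one training pair: the original picture and its grayscale mask.
import cv2

def make_sample_pair(original_path, mask_path):
    original = cv2.imread(original_path, cv2.IMREAD_COLOR)
    mask = cv2.imread(mask_path, cv2.IMREAD_COLOR)
    gray_mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)  # grayscale processing
    return original, gray_mask  # trained together as one sample
```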
During the training that realizes the modeling, the data the detection system feeds into the network differs from that used in detection: the sample picture is input into the network, and, based on learning from the sample picture, the network on the one hand learns the defect features in the picture and on the other hand configures the preset weights of the two kinds of feature data used in the fusion step.
At the start of learning, the initial weight is set randomly; the weight is then adjusted in each round of learning to reduce the loss of the picture data, and when the loss of the sample picture data meets the specification value, the adjusted weight is taken as the preset weight. In the subsequent actual detection process, detection is performed with the preset weight obtained during modeling.
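A minimal training-loop sketch under these assumptions follows; the model, data loader, loss function, optimizer, and learning rate are placeholders, and the stopping test against the specification value is simplified to the loss of the last batch.

```python
# Adjust the (randomly initialized) fusion weight until the loss meets
# the specification value; all components here are placeholders.
import torch

def train_to_spec(model, loader, loss_fn, spec_value, max_epochs=100):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(max_epochs):
        for pictures, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(pictures), targets)
            loss.backward()
            optimizer.step()  # each step adjusts the fusion weight too
        if loss.item() <= spec_value:  # loss meets the specification value
            break
    return model  # its adjusted fusion weight serves as the preset weight
```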
Referring to fig. 10, a schematic diagram of an embodiment of the apparatus of the present invention is shown.
The equipment comprises the detection system provided by the embodiment of the invention and is used for judging the type and position of defects on the cloth according to the detected picture of the cloth to be detected.
The apparatus further comprises a sampling device 30 for photographing the object to be detected; the first picture obtaining unit in the detection system obtains the detected picture from the sampling device.
Specifically, the sampling device 30 is a camera (another type of image sensor may also be used). During cloth processing, the cloth moves rapidly on the production line. During detection, the device photographs the surface of the cloth with the camera to obtain a picture of the cloth surface, then processes the picture through the detection system to judge whether the surface has defects and, further, to analyze the defect information, for example the position and type of the defect.
The camera can photograph the moving cloth at a high shooting rate (tens of thousands of exposures per second), thereby maintaining the production efficiency of the line.
The apparatus further comprises a transportation platform 40 for transporting the cloth to be detected; the camera is arranged on the transportation platform and photographs the cloth to be detected.
The apparatus may further include a marking device for marking the defects on the cloth according to the defect types and positions judged by the detection system.
According to the defect type and the product quality requirements, the cloth manufacturer can then decide whether a defective section of cloth is discarded, or is still used as a qualified product after the defect is removed by cleaning.
In some embodiments, after the defect information is obtained, the defect data needs further processing. The cloth defect information comprises at least one of defect type, defect size, and defect position. The defect types include many kinds, such as missed stitches, holes, oil stains, hooked yarns, horizontal bars, dyeing defects, skewed lines, wind tunnels, dropped edges, and so on, which are not exhaustively listed here. A defect processing mode is then determined at least according to the defect type. In some embodiments, the defect processing mode may include at least one of material breaking, repairing, and cleaning, and the preset conditions may include cloth usage and cloth attributes, where the cloth attributes include cloth price, cloth cost, and the like.
For example, when the defect type is oil stain or dyeing, manual or automatic cleaning may be chosen; when the defect is small and does not affect subsequent processing of the cloth, it may be left untreated; and when the defect is a hooked yarn or a hole, material breaking and repair may be chosen. The defect processing mode is determined according to the preset conditions. In one specific example, if the cloth is used for making clothes, the defective part may be cut off and the cut piece used for parts that need little material, such as collars, cuffs, or decorations; if the cloth is used for making bed sheets, whether the defective part can be left untreated may be decided according to its size. In short, the defect processing mode can be determined from the defect information and the preset conditions in the specific application environment and for the specific application purpose.
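As an illustration only, a simple dispatch from defect information to a defect processing mode along the lines of these examples might look as follows; the type names and the size threshold are assumptions, not values fixed by this disclosure.

```python
# Illustrative mapping of defect information to a processing mode.
def decide_processing_mode(defect_type, defect_size_mm, size_threshold_mm=5.0):
    if defect_type in ("oil stain", "dyeing"):
        return "clean"                     # manual or automatic cleaning
    if defect_size_mm < size_threshold_mm:
        return "leave untreated"           # small defects may be ignored
    if defect_type in ("hooked yarn", "hole"):
        return "break material and repair"
    return "break material"                # conservative default
```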
Further, in some embodiments, when the defect positions have been detected and material breaking is decided on, the following steps may be followed.
Step 1: acquire the detection data of the cloth, the detection data including defect position information. In the embodiment of the application, the defect information of the cloth can be detected through the neural network and includes the defect position information. Specifically, the transverse direction of the cloth may be taken as the abscissa and the longitudinal direction as the ordinate, from which the defect position information is obtained. Optionally, the defect information may further include defect size information and defect type information.
Step 2: calculate the distance between two adjacent defect reference lines according to the defect position information. In the embodiment of the application, a defect reference line parallel to the abscissa may be drawn at the position of each defect; it can be understood that the reference lines of some defects coincide, and the lines are parallel to one another. The distance between two adjacent defect reference lines can then be calculated from the defect position information.
Step 3: determine the material breaking regions according to a preset condition, the condition being that a region formed by consecutive defects whose spacing is smaller than a preset threshold is taken as one material breaking region. In the embodiment of the application, suppose the cloth contains several defects in front-to-back order, each corresponding to a defect reference line, and the distances between adjacent reference lines are calculated: between defect 1 and defect 2, defect 2 and defect 3, defect 3 and defect 4, defect 4 and defect 5, and defect 5 and defect 6. Each calculated distance is compared with the preset threshold. If the distances between defects 1 and 2, between defects 2 and 3, and between defects 4 and 5 are smaller than the threshold, while the distances between defects 3 and 4 and between defects 5 and 6 are greater than or equal to it, then the region formed by the consecutive defects 1, 2, and 3 is taken as material breaking region 1, the region formed by the consecutive defects 4 and 5 as material breaking region 2, and the remaining defect 6 as an isolated defect. It can be understood that where the reference lines of several defects coincide, one reference line contains multiple defects; if the distances between that reference line and both of its neighbors are greater than or equal to the preset threshold, all the defects on it are treated as isolated defects. The preset threshold can be set according to the length of material required for garment production: if the required length is short, a small threshold makes full use of the cloth between two adjacent defects; if it is long, a large threshold saves material breaking time and improves production efficiency.
Step 4: obtain the two edge reference lines of each material breaking region and of each isolated defect outside the material breaking regions. In the embodiment of the application, the two edge reference lines of a material breaking region may be obtained by shifting the first defect reference line in the region upward and the last defect reference line downward, so that the defects in the region lie between the two edge reference lines. The two edge reference lines of an isolated defect are obtained by shifting its defect reference line upward and downward respectively, so that the isolated defect lies between them.
Step 5: determine the material breaking position information of the cloth according to the edge reference lines. In the embodiment of the application, each edge reference line is a material breaking position, and the material breaking position information of the cloth is determined accordingly.
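A compact sketch of steps 1 to 5 is given below, assuming each defect reference line is represented by its ordinate and the list is sorted and non-empty; the preset threshold and the edge margin are illustrative assumptions.

```python
# Group defect reference lines into material breaking regions, then shift
# the first line of each group up and the last line down by a margin to
# obtain the two edge reference lines (the material breaking positions).
def material_breaking_edges(defect_ys, threshold, margin):
    regions, group = [], [defect_ys[0]]
    for y in defect_ys[1:]:
        if y - group[-1] < threshold:  # consecutive defects close together
            group.append(y)
        else:
            regions.append(group)
            group = [y]
    regions.append(group)  # isolated defects end up as one-element groups
    return [(g[0] - margin, g[-1] + margin) for g in regions]

# Defects 1-3 merge into one region, 4-5 into another, 6 stays isolated.
print(material_breaking_edges([5, 8, 12, 40, 44, 90], threshold=10, margin=2))
# [(3, 14), (38, 46), (88, 92)]
```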
According to the example provided by this embodiment, when material breaking is carried out according to the material breaking information generated in the above steps, the number of cuts is far smaller than in the traditional approach of cutting at every flaw, so this scheme saves processing passes and time in the material breaking step. Moreover, if two adjacent defects are very close and the region around each were cut separately, the strip remaining between them would be too narrow to serve as material for subsequent garment production and would end up as waste, which amounts to useless work.

The application also provides a cloth defect detection data processing system. Its detection data acquisition module is used to acquire the defect information of the cloth, the defect information comprising at least one of defect type, defect size, and defect position. Its defect processing mode determining module is used to determine the defect processing mode according to the defect information and/or preset conditions, the preset conditions comprising at least one of cloth usage and cloth attributes, and the defect processing mode comprising at least one of material breaking, repairing, and cleaning. The detection data acquisition module may obtain the data directly from the detection system or receive the detection data in other wireless or wired ways. The defect processing mode determining module may decide how to handle a defect according to any combination of the defect information and the preset conditions, or according to user-set parameters combined with the actual usage scenario.
The device can detect surface defects of the product in real time during production, so that defective products are found in time or the defects are marked. Because the device comprises the detection system described above, it achieves higher detection precision and higher detection efficiency.
Correspondingly, an embodiment of the invention also provides a medium on which computer instructions are stored; when the computer instructions are executed, the steps of the detection method of the invention are performed.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A cloth defect detection method is characterized by comprising the following steps:
obtaining a detected cloth picture;
performing first convolution neural network processing on the detected cloth picture to obtain spatial characteristic data;
performing second convolutional neural network processing on the detected cloth picture to obtain detail characteristic data, wherein the second convolutional neural network has fewer layers than the first convolutional neural network and wider channels than the first convolutional neural network;
fusing the spatial feature data and the detail feature data to obtain picture data;
and judging the defect information of the cloth based on the picture data.
2. The cloth defect detection method of claim 1, wherein the step of fusing the spatial feature data and the detail feature data to obtain picture data comprises:
and fusing the spatial feature data and the detail feature data based on a preset weight to obtain picture data.
3. The cloth defect detection method of claim 2, wherein before obtaining the detected cloth picture, the method further comprises a modeling step comprising:
obtaining a sample picture;
performing the first convolution neural network processing on the sample picture to obtain sample spatial feature data;
performing the second convolutional neural network processing on the sample picture to obtain sample detail characteristic data;
fusing the sample spatial feature data and the sample detail feature data based on an initial weight to obtain sample picture data, completing one round of training;
and adjusting the initial weight through multiple rounds of training, and when the loss of the sample picture data meets a specification value, taking the adjusted weight as the preset weight.
4. The cloth defect detection method of claim 1, wherein the step of obtaining the detected cloth picture comprises: obtaining an original picture; and cutting the original picture to obtain multiple detected cloth pictures;
and the step of judging the defect information of the cloth based on the picture data comprises: merging the picture data corresponding to the multiple detected cloth pictures, and judging the position and/or type of the defect based on the merged data.
5. The cloth defect detection method of any of claims 1-4, wherein the second convolutional neural network comprises a VGG network, and the first convolutional neural network comprises a MobileNet V2 network; or,
the first convolutional neural network comprises a MobileNet V2 network and a feature image pyramid, the feature image pyramid being used for processing the data output by the MobileNet V2 network; or,
the first convolutional neural network comprises a ResNet 50 network, a feature image pyramid, and a fully convolutional network.
6. A cloth defect detection data processing method is characterized by comprising the following steps:
acquiring defect information of the cloth according to the method of any one of claims 1 to 5;
and determining a defect processing mode according to at least the defect information.
7. The cloth defect detection data processing method of claim 6, wherein, when the defect information includes defect positions, the method further comprises:
calculating the distance between two adjacent defect reference lines according to the defect position information;
determining material breaking regions according to a preset condition, the preset condition being that a region formed by consecutive defects whose spacing is smaller than a preset threshold is taken as a material breaking region;
respectively obtaining two edge reference lines of each material breaking region and of each isolated defect outside the material breaking regions;
and determining the material breaking position information of the cloth according to the edge reference lines.
8. A medium having stored thereon computer instructions, characterized in that the computer instructions, when executed, perform the steps of the method according to any of claims 1-7.
9. An apparatus, comprising a defect detection module, the defect detection module comprising:
the first picture acquisition unit is used for acquiring a detected cloth picture;
the semantic unit is used for carrying out first convolution neural network processing on the detected cloth picture to obtain spatial characteristic data;
the detail unit is used for performing second convolutional neural network processing on the detected cloth picture to obtain detail characteristic data, the second convolutional neural network having fewer layers than the first convolutional neural network and wider channels than the first convolutional neural network;
the fusion unit is used for fusing the spatial feature data and the detail feature data to obtain picture data;
and the judging unit is used for judging the defect information of the cloth according to the picture data.
10. The apparatus of claim 9, wherein the apparatus further comprises:
the sampling device is used for photographing the cloth to be detected;
the transportation platform is used for transporting the cloth to be detected, the sampling device being arranged on the transportation platform to photograph the cloth to be detected;
and the defect detection module is used for judging the defect type and/or position of the cloth to be detected according to the picture of the cloth obtained by the sampling device.
CN202011110696.5A 2020-10-16 2020-10-16 Cloth defect detection method, device and medium Active CN112200790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011110696.5A CN112200790B (en) 2020-10-16 2020-10-16 Cloth defect detection method, device and medium

Publications (2)

Publication Number Publication Date
CN112200790A true CN112200790A (en) 2021-01-08
CN112200790B CN112200790B (en) 2023-04-07

Family

ID=74009219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011110696.5A Active CN112200790B (en) 2020-10-16 2020-10-16 Cloth defect detection method, device and medium

Country Status (1)

Country Link
CN (1) CN112200790B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133943A (en) * 2017-04-26 2017-09-05 贵州电网有限责任公司输电运行检修分公司 A kind of visible detection method of stockbridge damper defects detection
CN109492537A (en) * 2018-10-17 2019-03-19 桂林飞宇科技股份有限公司 A kind of object identification method and device
CN109670979A (en) * 2018-11-30 2019-04-23 深圳灵图慧视科技有限公司 Cloth detection data processing method, device and equipment
CN109859163A (en) * 2018-12-19 2019-06-07 重庆邮电大学 A kind of LCD defect inspection method based on feature pyramid convolutional neural networks
CN110334865A (en) * 2019-07-05 2019-10-15 上海交通大学 A kind of electrical equipment fault rate prediction technique and system based on convolutional neural networks
CN111260621A (en) * 2020-01-14 2020-06-09 湖南大学 Method for positioning and identifying surface defects of printed circuit board
CN111754513A (en) * 2020-08-07 2020-10-09 腾讯科技(深圳)有限公司 Product surface defect segmentation method, defect segmentation model learning method and device
CN111768388A (en) * 2020-07-01 2020-10-13 哈尔滨工业大学(深圳) Product surface defect detection method and system based on positive sample reference
CN112150460A (en) * 2020-10-16 2020-12-29 上海智臻智能网络科技股份有限公司 Detection method, detection system, device, and medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950584A (en) * 2021-03-01 2021-06-11 哈尔滨工程大学 Coating surface defect identification method based on deep learning
CN113205110A (en) * 2021-03-19 2021-08-03 哈工大机器人(中山)无人装备与人工智能研究院 Panel defect classification model establishing method and panel defect classification method
CN113205110B (en) * 2021-03-19 2024-03-19 哈工大机器人(中山)无人装备与人工智能研究院 Method for establishing panel defect classification model and panel defect classification method
CN113610848A (en) * 2021-10-09 2021-11-05 阿里巴巴(中国)有限公司 Digital cloth processing system, cloth flaw detection method, device and medium
CN113610848B (en) * 2021-10-09 2022-04-12 阿里巴巴(中国)有限公司 Digital cloth processing system, cloth flaw detection method, device and medium

Also Published As

Publication number Publication date
CN112200790B (en) 2023-04-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant