CN115239615A - Cloth defect detection method based on CTPN - Google Patents
- Publication number: CN115239615A (application CN202210528722.9A)
- Authority
- CN
- China
- Prior art keywords
- cloth
- ctpn
- defect
- data set
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30124—Fabrics; Textile; Paper
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Quality & Reliability (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biodiversity & Conservation Biology (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
Abstract
The invention discloses a cloth defect detection method based on CTPN, comprising the following steps: S1: acquiring image information of a cloth data set; S2: dividing the image samples of the cloth data set; S3: extracting the features of the cloth data set images; S4: constructing a CTPN-based model of the cloth data set: (1) an image preprocessing module; (2) optimizing the network framework, replacing the original feature extractor with a MobileNetV2 network; (3) using bidirectional LSTM; (4) a specific anchor design module; (5) a post-processing module; S5: detecting defects of the cloth data set. The method is stable and reliable, has strong generalization capability, and can be popularized directly; it completely abandons the traditional algorithms, increases the robustness of the model, and greatly improves the accuracy of cloth defect localization. It not only meets the precision requirements of cloth defect detection but also cuts inference time by three quarters, accelerating the deployment of cloth inspection algorithms in industrial-grade textile mills.
Description
Technical Field
The invention relates to the technical field of cloth detection, in particular to a cloth defect detection method based on CTPN.
Background
The detection of defects in cloth is a very important step in the textile industry. At present, manual inspection is still the main quality-control method in the field of cloth defect detection. In recent years, owing to rising labor costs and the problems of manual inspection (low speed, high miss rate, poor consistency, and high staff turnover), more and more factories are replacing manual quality inspection with machines to improve production efficiency and save labor costs. Cloth defect detection technology is therefore of great importance in the textile industry.
At present, defect detection on industrial cloth mostly relies on traditional algorithms such as edge detection and region segmentation. Cloth defects (for example, broken warp) are highly variable, so the generalization ability of these methods is extremely poor; they are also very sensitive to the natural environment and similar factors, which can affect the overall result. Such methods therefore remain experimental in traditional textile enterprises and cannot be popularized at scale.
Disclosure of Invention
The invention discloses a cloth defect detection method based on CTPN, which aims to solve the technical problem in the background technology.
In order to achieve the purpose, the invention adopts the following technical scheme:
the cloth defect detection method based on the CTPN comprises the following steps:
s1: acquiring image information of the cloth data set: collecting a plurality of cloth defect pictures by using a camera;
s2: dividing the image samples of the cloth data set;
s3: extracting the characteristics of the cloth data set image;
s4: constructing a CTPN-based model of the cloth data set:
(1) The image preprocessing module: under different lighting conditions such as strong or dim light, the brightness of the whole picture is normalized, using histogram equalization to adjust it to a uniform level;
(2) Optimizing the network framework: to meet the requirements of edge-side deployment of the algorithm, the network framework is further optimized by replacing the original VGG16 with a MobileNetV2 network as the feature extractor;
(3) Using bidirectional LSTM: the CTPN algorithm is used for detecting a text box, is transferred to the defect detection of cloth, and mainly takes the key role of the bidirectional LSTM in the time sequence characteristic detection into consideration;
(4) Specific anchor design module: observing the defect condition of the cloth, wherein a vertical direction anchor in a CTPN algorithm is completely applied to cloth detection, and a group of 10 anchors with equal width are used for positioning the position of the defect of the cloth;
(5) A post-processing module: the method is used for defect detection, a defect detection image of one piece of cloth is preprocessed through histogram equalization of an image, then a trained model is loaded, and the model outputs classification branches, vertical coordinates of a defect frame and offset training of side-redefinition;
s5: defect detection of the cloth data set:
(1) The pictures are preprocessed; after histogram equalization the cloth defects are more conspicuous, which benefits model training;
(2) A picture is fed into the backbone network of the CTPN algorithm, which extracts features and produces an N×C×H×W feature map. A 3×3 convolution slides over this map, followed by an im2col operation, so each sliding position yields a 3×3×C feature vector and a new N×9C×H×W feature map is produced. This map is fed into a BLSTM (bidirectional LSTM) for sequence feature extraction and then into a fully connected layer for further feature extraction. Three fully connected branches follow, predicting the vertical coordinate regression, the classification score, and the horizontal translation (side-refinement) regression, respectively. Finally, a graph-based text-line construction algorithm assembles the elongated rectangular boxes.
In a preferred scheme, in S2, the collected pictures are divided into independent and non-repeating verification sets and test sets in a certain proportion by means of random sampling.
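The random-sampling split described above can be sketched as follows. The split ratios, seed, and file names are illustrative assumptions, since the patent only specifies "a certain proportion":

```python
import random

def split_dataset(image_paths, val_ratio=0.2, test_ratio=0.2, seed=42):
    """Randomly split collected pictures into disjoint verification and
    test sets (the remainder is used for training), as in step S2.
    Ratios and seed are illustrative assumptions."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_val = int(n * val_ratio)
    n_test = int(n * test_ratio)
    val = paths[:n_val]
    test = paths[n_val:n_val + n_test]
    train = paths[n_val + n_test:]
    return train, val, test

# Hypothetical file names for illustration only
train, val, test = split_dataset([f"cloth_{i:04d}.jpg" for i in range(100)])
```

Because the three slices never overlap, the verification and test sets are guaranteed to be independent and non-repeating, as the patent requires.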
In a preferred scheme, in S3, the detection image features are extracted, an identification model of the detection image data set is constructed on the verification set, parameters of the identification model are determined, then the identification effect is detected by using the test set, and the performance of the model is verified.
In a preferred scheme, in S4, MobileNetV2 is combined with a 1x1xC1 convolution and an LSTM: the feature map output by MobileNetV2 is convolved by a specially designed ConvLSTM, and the resulting feature map is then fed into the RPN network for learning.
In a preferred scheme, in S4, the main implementation of the bidirectional LSTM is as follows: the bottom layer uses VGG16 features, namely the WxHxC feature map of Conv5. A 3x3 spatial window is slid over the feature map of the last convolutional layer (Conv5 of VGG16), and the sequential windows in each row are connected recurrently through a BLSTM, with the 3x3xC convolutional feature of each window serving as the BLSTM input. This realizes the bidirectional LSTM and strengthens the learning of sequence context; the feature map output by the last convolutional layer of VGG is thereby converted into vector form for the subsequent BLSTM training.
In a preferred scheme, in S4, regarding the specific anchor design: because of the network optimization scheme, the width and height of the last feature map output by the feature extractor are 1/16 of the width and height of the input image. This feature map is flattened into a column of vectors for the subsequent BLSTM training, the BLSTM output is passed to the FC layer, and the cloth detection model is learned from the errors between the three predicted outputs of the network and the ground-truth values.
In a preferred embodiment, in S4, the phase training model of the post-processing module is mainly divided into three phases: the method comprises a first stage of data preprocessing, a second stage of network frame training, wherein a Mobilenetv2 frame is used as a feature extractor, and a third stage of specific anchor training.
In a preferred scheme, after the post-processing module trains the cloth defect detection models, when the models are deployed, the following flow steps need to be called:
A. collecting an RGB cloth defect picture from a camera;
B. preprocessing the picture;
C. then sent to a modified mobilenetv2 network framework;
D. the output of the network framework optimization module is sent to the BLSTM module to learn the sequential features of the cloth; the resulting features are input to the FC layer, and the model finally outputs the classification branch, the vertical coordinates of the defect box, and the side-refinement offsets.
Therefore, the cloth defect detection method based on CTPN is stable and reliable, has strong generalization capability, and can be popularized directly. It completely abandons the traditional algorithms in favor of deep learning, which improves generalization ability and accuracy and reduces false detections. Applying the CTPN text-detection framework to cloth defect detection lets the model learn relations among sequential features, which increases the robustness of the model and greatly improves the accuracy of cloth defect localization. The optimization of the network framework not only meets the precision requirements of cloth defect detection but also cuts inference time by three quarters, accelerating the deployment of the cloth inspection algorithm in industrial-grade textile mills.
Drawings
Fig. 1 is a flowchart of a CTPN-based cloth defect detection method according to the present invention.
Fig. 2 is an optimization diagram of a network framework of the CTPN-based cloth defect detection method according to the present invention.
Fig. 3 is a cloth defect original drawing of the CTPN-based cloth defect detection method according to the present invention.
Fig. 4 is a cloth defect original image equalization diagram of the cloth defect detection method based on CTPN according to the present invention.
Fig. 5 is a label segmentation diagram of a cloth defect map of the cloth defect detection method based on CTPN according to the present invention.
Fig. 6 is a design diagram of anchor of the CTPN-based cloth defect detection method of the present invention.
Fig. 7 is a cloth defect testing effect diagram of the cloth defect detecting method based on CTPN according to the present invention.
Fig. 8 is a cloth defect detection diagram of the cloth defect detection method based on CTPN according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Referring to fig. 1 to 8, the CTPN-based cloth defect detecting method includes the steps of:
s1: acquiring image information of a cloth data set: collecting a plurality of cloth defect pictures by using a camera;
s2: dividing the image samples of the cloth data set;
s3: extracting the characteristics of the cloth data set image;
s4: constructing a piece goods data set model based on CTPN:
(1) The image preprocessing module: under different lighting conditions such as strong or dim light, the brightness of the whole picture is normalized, using histogram equalization to adjust it to a uniform level;
for example, histogram equalization is applied to the overly dark pictures of figs. 3-5, adjusting them to good brightness so that they can be used for subsequent analysis.
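The histogram equalization used by the preprocessing module can be sketched in plain NumPy for an 8-bit grayscale image. This is a minimal stand-in; a production pipeline would more likely call a library routine such as OpenCV's equalizeHist:

```python
import numpy as np

def equalize_histogram(gray):
    # Map each intensity through the normalized cumulative histogram so
    # the brightness of the whole picture spreads to a uniform level.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first non-zero CDF value
    if cdf[-1] == cdf_min:               # flat image: nothing to equalize
        return gray.copy()
    # Standard equalization formula, rescaled to the full [0, 255] range
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]
```

After this mapping, an overly dark picture has its intensities stretched across the full range, which is exactly why the cloth defects become more conspicuous for training.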
(2) Optimizing the network framework: to meet the requirements of edge-side deployment of the algorithm, the network framework is further optimized by replacing the original VGG16 with a MobileNetV2 network as the feature extractor;
the VGG16 framework used by the original CTPN text box detection algorithm cannot meet the real-time requirements of industrial-grade cloth defect detection. The ordinary LSTM plays an important and positive role on two-dimensional sequential data, but a three-dimensional image carries rich spatial information, and each pixel is strongly correlated with its neighbors, so the ordinary LSTM performs only moderately there. ConvLSTM, which adds convolution operations on top of LSTM to capture spatial features, is more effective for extracting image features.
(3) Using bidirectional LSTM: the CTPN algorithm was designed for text box detection; it is transferred here to cloth defect detection mainly because of the key role the bidirectional LSTM plays in detecting sequential features;
(4) Specific anchor design module: based on observation of cloth defect patterns, the vertical-direction anchor scheme of the CTPN algorithm is applied unchanged to cloth detection, using a group of 10 equal-width anchors to localize the position of cloth defects;
the backbone is an improved MobileNetV2 whose compressed model is very small, which resolves most of the speed and precision issues. Adding ConvLSTM gives the extracted features stronger spatial correlation, which helps the BLSTM module learn the sequential features and achieves excellent results in cloth defect detection;
(5) A post-processing module for defect detection: a defect image of a piece of cloth is preprocessed by histogram equalization, the trained model is then loaded, and the model outputs the classification branch, the vertical coordinates of the defect box, and the side-refinement offsets;
s5: defect detection of the cloth data set:
(1) The pictures are preprocessed; after histogram equalization the cloth defects are more conspicuous, which benefits model training;
(2) A picture is fed into the backbone network of the CTPN algorithm, which extracts features and produces an N×C×H×W feature map. A 3×3 convolution slides over this map, followed by an im2col operation, so each sliding position yields a 3×3×C feature vector and a new N×9C×H×W feature map is produced. This map is fed into a BLSTM (bidirectional LSTM) for sequence feature extraction and then into a fully connected layer for further feature extraction. Three fully connected branches follow, predicting the vertical coordinate regression, the classification score, and the horizontal translation (side-refinement) regression, respectively. Finally, a graph-based text-line construction algorithm assembles the elongated rectangular boxes.
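The shape bookkeeping of this step (an N×C×H×W map, a 3×3 sliding window, im2col into N×9C×H×W, then rows flattened into BLSTM input sequences) can be traced with a NumPy sketch. The toy dimensions below are illustrative, not the real backbone sizes:

```python
import numpy as np

# Toy stand-in for a backbone feature map: N x C x H x W
N, C, H, W = 1, 3, 4, 8
fmap = np.random.rand(N, C, H, W).astype(np.float32)

# 3x3 sliding window (stride 1, zero pad 1) + im2col: every spatial
# position becomes a 3x3xC = 9C feature vector, giving N x 9C x H x W.
padded = np.pad(fmap, ((0, 0), (0, 0), (1, 1), (1, 1)))
cols = np.empty((N, 9 * C, H, W), dtype=np.float32)
for i in range(3):
    for j in range(3):
        patch = padded[:, :, i:i + H, j:j + W]   # one window offset
        cols[:, (i * 3 + j) * C:(i * 3 + j + 1) * C] = patch

# Each image row becomes a sequence of W steps with 9C features each,
# which is the layout the BLSTM consumes before the FC branches.
seq = cols.transpose(0, 2, 3, 1).reshape(N * H, W, 9 * C)
```

The center offset (i = 1, j = 1) of the im2col result reproduces the original map, which is a quick sanity check that the window extraction is aligned correctly.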
In a preferred embodiment, in S2, the collected pictures are divided into independent and non-repetitive verification sets and test sets according to a certain proportion by means of random sampling.
In a preferred embodiment, in S3, the detection picture features are extracted, an identification model of the detection picture data set is constructed on the verification set, parameters of the identification model are determined, and then the test set is used to detect the identification effect and verify the performance of the model.
In a preferred embodiment, in S4, MobileNetV2 is combined with a 1x1xC1 convolution and an LSTM: the feature map output by MobileNetV2 is convolved by a specially designed ConvLSTM, and the resulting feature map is then fed into the RPN network for learning.
In a preferred embodiment, in S4, the main implementation of the bidirectional LSTM is as follows: the bottom layer uses VGG16 features, namely the WxHxC feature map of Conv5. A 3x3 spatial window is slid over the feature map of the last convolutional layer (Conv5 of VGG16), and the sequential windows in each row are connected recurrently through a BLSTM, with the 3x3xC convolutional feature of each window serving as the BLSTM input. This realizes the bidirectional LSTM and strengthens the learning of sequence context; the feature map output by the last convolutional layer of VGG is thereby converted into vector form for the subsequent BLSTM training.
In a preferred embodiment, in S4, regarding the specific anchor design: because of the network optimization scheme, the width and height of the last feature map output by the feature extractor are 1/16 of the width and height of the input image. This feature map is flattened into a column of vectors for the subsequent BLSTM training, the BLSTM output is passed to the FC layer, and the cloth detection model is learned from the errors between the three predicted outputs of the network and the ground-truth values.
As shown by the small squares in fig. 6, the anchors have width = 16 and heights = [11, 16, 23, 33, 48, 68, 97, 139, 198, 283].
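A minimal sketch of generating that group of 10 equal-width anchors around one feature-map cell. The centering convention is an assumption, since the patent gives only the widths and heights:

```python
def make_anchors(cx, cy,
                 widths=(16,),
                 heights=(11, 16, 23, 33, 48, 68, 97, 139, 198, 283)):
    """Build the fixed-width anchor group from fig. 6 around a cell
    center (cx, cy), returned as (x1, y1, x2, y2) boxes. Centering the
    boxes on the cell is an illustrative assumption."""
    boxes = []
    for w in widths:
        for h in heights:
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

# One anchor group at an arbitrary cell center
anchors = make_anchors(8, 8)
```

Because every anchor shares the same width, only the vertical extent varies, which matches the CTPN idea of localizing a defect by stacking fixed-width vertical proposals.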
As shown in the experimental diagram of fig. 7, the line marks a defect (broken portion) in the test set. Training parameter settings: the learning rate is reduced stage by stage over a total of 100 epochs, multiplied by 0.1 at each milestone, with milestones at epochs [45, 90]; the initial learning rate is 0.001 and the final learning rate is 10^-5. A partial structure diagram of the network is shown in fig. 8.
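The stage-wise schedule above can be written as a small helper; note that 0.001 multiplied by 0.1 at epochs 45 and 90 does indeed end at 10^-5 by the final epoch:

```python
def step_lr(epoch, base_lr=0.001, milestones=(45, 90), gamma=0.1):
    """Stage-wise learning-rate schedule from the experiment: start at
    1e-3 and multiply by 0.1 once each milestone epoch is reached, so
    the last stage of a 100-epoch run trains at 1e-5."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

In a framework such as PyTorch the same schedule would typically be expressed with a multi-step scheduler rather than computed by hand; this helper only makes the arithmetic of the patent's numbers explicit.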
In a preferred embodiment, in S4, the phase training model of the post-processing module is mainly divided into three phases: the method comprises a first stage of data preprocessing, a second stage of network frame training, wherein a Mobilenetv2 frame is used as a feature extractor, and a third stage of specific anchor training.
In a preferred embodiment, after the training of the cloth defect detection models, the post-processing module needs to call the following flow steps when deploying the models:
A. collecting an RGB cloth defect picture from a camera;
B. preprocessing the picture;
C. then, sending the data into a network frame of improved mobilenetv 2;
D. the output of the network framework optimization module is sent to the BLSTM module to learn the sequential features of the cloth; the resulting features are input to the FC layer, and the model finally outputs the classification branch, the vertical coordinates of the defect box, and the side-refinement offsets.
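The A-D deployment flow can be sketched with stub stages. Every function below is a hypothetical placeholder that only reproduces the tensor shapes of each step; none of it is the trained MobileNetV2/BLSTM network itself:

```python
import numpy as np

def preprocess(rgb):
    # B: stand-in for grayscale conversion + histogram equalization
    return rgb.mean(axis=2, keepdims=True) / 255.0

def backbone(x):
    # C: stand-in for the improved MobileNetV2; output at 1/16 resolution
    h, w = x.shape[0] // 16, x.shape[1] // 16
    return np.zeros((64, h, w), dtype=np.float32)

def blstm_fc(fmap):
    # D: stand-in for BLSTM + FC heads, 10 anchors per feature-map cell
    c, h, w = fmap.shape
    k = h * w * 10
    return {
        "scores": np.zeros((k, 2)),    # classification branch
        "vertical": np.zeros((k, 2)),  # vertical coordinates of the defect box
        "side": np.zeros((k, 1)),      # side-refinement offsets
    }

frame = np.zeros((256, 512, 3), dtype=np.uint8)  # A: RGB picture from the camera
out = blstm_fc(backbone(preprocess(frame)))
```

Chaining the stubs makes the data flow of the deployment steps concrete: one camera frame yields H/16 x W/16 x 10 anchor predictions across the three output heads.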
The above description covers only preferred embodiments of the present invention, but the scope of the present invention is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed herein, according to the technical solutions and the inventive concept thereof, shall fall within the scope of the present invention.
Claims (8)
1. The cloth defect detection method based on the CTPN is characterized by comprising the following steps:
s1: acquiring image information of a cloth data set: collecting a plurality of cloth defect pictures by using a camera;
s2: dividing the image samples of the cloth data set;
s3: extracting the characteristics of the cloth data set image;
s4: constructing a CTPN-based model of the cloth data set:
(1) The image preprocessing module: under different lighting conditions such as strong or dim light, the brightness of the whole picture is normalized, using histogram equalization to adjust it to a uniform level;
(2) Optimizing the network framework: to meet the requirements of edge-side deployment of the algorithm, the network framework is further optimized by replacing the original VGG16 with a MobileNetV2 network as the feature extractor;
(3) Using bidirectional LSTM: the CTPN algorithm was designed for text box detection; it is transferred here to cloth defect detection mainly because of the key role the bidirectional LSTM plays in detecting sequential features;
(4) Specific anchor design module: based on observation of cloth defect patterns, the vertical-direction anchor scheme of the CTPN algorithm is applied unchanged to cloth detection, using a group of 10 equal-width anchors to localize the position of cloth defects;
(5) A post-processing module for defect detection: a defect image of a piece of cloth is preprocessed by histogram equalization, the trained model is then loaded, and the model outputs the classification branch, the vertical coordinates of the defect box, and the side-refinement offsets;
s5: defect detection of the cloth data set:
(1) The pictures are preprocessed; after histogram equalization the cloth defects are more conspicuous, which benefits model training;
(2) A picture is fed into the backbone network of the CTPN algorithm, which extracts features and produces an N×C×H×W feature map. A 3×3 convolution slides over this map, followed by an im2col operation, so each sliding position yields a 3×3×C feature vector and a new N×9C×H×W feature map is produced. This map is fed into a BLSTM for sequence feature extraction and then into a fully connected layer for further feature extraction. Three fully connected branches follow, predicting the vertical coordinate regression, the classification score, and the horizontal translation (side-refinement) regression, respectively. Finally, a graph-based text-line construction algorithm assembles the elongated rectangular boxes.
2. The CTPN-based cloth defect detecting method of claim 1, wherein in S2, the collected pictures are divided into independent and non-repeating verification sets and test sets according to a certain proportion by means of random sampling.
3. The CTPN-based cloth defect detection method of claim 1, wherein in S3, the detection picture features are extracted, an identification model of the detection picture data set is constructed on the verification set, parameters of the identification model are determined, and then a test set is used to detect the identification effect and verify the model performance.
4. The CTPN-based cloth defect detection method as recited in claim 1, wherein in S4, MobileNetV2 is combined with a 1x1xC1 convolution and an LSTM: the feature map output by MobileNetV2 is convolved by a specially designed ConvLSTM, and the resulting feature map is then fed into the RPN network for learning.
5. The CTPN-based cloth defect detecting method of claim 1, wherein in S4, the main implementation of the bidirectional LSTM is as follows: the bottom layer uses VGG16 features, namely the WxHxC feature map of Conv5. A 3x3 spatial window is slid over the feature map of the last convolutional layer (Conv5 of VGG16), and the sequential windows in each row are connected recurrently through a BLSTM, with the 3x3xC convolutional feature of each window serving as the BLSTM input. This realizes the bidirectional LSTM and strengthens the learning of sequence context; the feature map output by the last convolutional layer of VGG is thereby converted into vector form for the subsequent BLSTM training.
6. The CTPN-based cloth defect detection method as claimed in claim 1, wherein in S4, regarding the specific anchor design: because of the network optimization scheme, the width and height of the last feature map output by the feature extractor are 1/16 of the width and height of the input image. This feature map is flattened into a column of vectors for the subsequent BLSTM training, the BLSTM output is passed to the FC layer, and the cloth detection model is learned from the errors between the three predicted outputs of the network and the ground-truth values.
7. The cloth defect detecting method based on CTPN according to claim 1, characterized in that in S4, the stage training model of the post-processing module is mainly divided into three stages: the method comprises a first stage of data preprocessing, a second stage of network frame training, wherein a Mobilenetv2 frame is used as a feature extractor, and a third stage of specific anchor training.
8. The CTPN-based cloth defect inspection method of claim 7, wherein after the cloth defect inspection models are trained, the post-processing module needs to call the following process steps when deploying the models:
A. collecting an RGB cloth defect picture from a camera;
B. preprocessing the picture;
C. then sent to a modified mobilenetv2 network framework;
D. the output of the network framework optimization module is sent to the BLSTM module to learn the sequential features of the cloth; the resulting features are input to the FC layer, and the model finally outputs the classification branch, the vertical coordinates of the defect box, and the side-refinement offsets.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210528722.9A CN115239615A (en) | 2022-05-16 | 2022-05-16 | Cloth defect detection method based on CTPN |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210528722.9A CN115239615A (en) | 2022-05-16 | 2022-05-16 | Cloth defect detection method based on CTPN |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115239615A true CN115239615A (en) | 2022-10-25 |
Family
ID=83667913
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210528722.9A Pending CN115239615A (en) | 2022-05-16 | 2022-05-16 | Cloth defect detection method based on CTPN |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115239615A (en) |
- 2022
  - 2022-05-16: CN CN202210528722.9A patent/CN115239615A/en active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117115147A (en) * | 2023-10-19 | 2023-11-24 | 山东华盛创新纺织科技有限公司 | Textile detection method and system based on machine vision |
CN117115147B (en) * | 2023-10-19 | 2024-01-26 | 山东华盛创新纺织科技有限公司 | Textile detection method and system based on machine vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111223088B (en) | Casting surface defect identification method based on deep convolutional neural network | |
CN110866907A (en) | Full convolution network fabric defect detection method based on attention mechanism | |
CN113160123B (en) | Leather defect detection method, system and device based on YOLOv5 | |
CN106355579A (en) | Defect detecting method of cigarette carton surface wrinkles | |
CN111047655A (en) | High-definition camera cloth defect detection method based on convolutional neural network | |
CN112233067A (en) | Hot rolled steel coil end face quality detection method and system | |
CN110009622B (en) | Display panel appearance defect detection network and defect detection method thereof | |
CN111178177A (en) | Cucumber disease identification method based on convolutional neural network | |
CN113191216B (en) | Multi-user real-time action recognition method and system based on posture recognition and C3D network | |
CN114549507B (en) | Improved Scaled-YOLOv fabric flaw detection method | |
CN111080574A (en) | Fabric defect detection method based on information entropy and visual attention mechanism | |
CN115775236A (en) | Surface tiny defect visual detection method and system based on multi-scale feature fusion | |
CN115239615A (en) | Cloth defect detection method based on CTPN | |
CN116402769A (en) | High-precision intelligent detection method for textile flaws considering size targets | |
CN114972246A (en) | Die-cutting product surface defect detection method based on deep learning | |
CN114549489A (en) | Carved lipstick quality inspection-oriented instance segmentation defect detection method | |
CN110618129A (en) | Automatic power grid wire clamp detection and defect identification method and device | |
CN109596620A (en) | Product surface shape defect detection method and system based on machine vision | |
CN117495836A (en) | Plain-color fabric defect detection method | |
Liu et al. | Defect detection of fabrics with generative adversarial network based flaws modeling | |
CN212846839U (en) | Fabric information matching system | |
CN116228708A (en) | Industrial defect detection method and system based on visual cognition calculation | |
CN111882545B (en) | Fabric defect detection method based on bidirectional information transmission and feature fusion | |
CN113642473A (en) | Mining coal machine state identification method based on computer vision | |
CN117078608B (en) | Double-mask guide-based high-reflection leather surface defect detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||