CN111028250A - Real-time intelligent cloth inspecting method and system - Google Patents


Info

Publication number: CN111028250A
Application number: CN201911373207.2A
Authority: CN (China)
Prior art keywords: cloth, image, defect, real, preprocessed
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 黄泽, 陈锐桐, 陈冰
Original and current assignee: Alnnovation Guangzhou Technology Co ltd (the listed assignees may be inaccurate)
Application filed by Alnnovation Guangzhou Technology Co ltd
Priority to CN201911373207.2A
Publication of CN111028250A

Classifications

    • G06T7/0004 — Industrial image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06F18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/2415 — Pattern recognition: classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06T5/40 — Image enhancement or restoration by the use of histogram techniques
    • G06T7/11 — Image analysis: region-based segmentation
    • G06T2207/30108 — Indexing scheme: industrial image inspection
    • G06T2207/30124 — Indexing scheme: fabrics; textile; paper

Abstract

The invention discloses a real-time intelligent cloth inspecting method and system. The method comprises the following steps: acquiring real-time images of rapidly moving cloth with a line-scan camera to obtain a plurality of real-time cloth images; preprocessing each real-time cloth image to obtain preprocessed cloth images; predicting each preprocessed cloth image with a pre-generated binary classification model to obtain its cloth defect probability; and, when the cloth defect probability is not less than a preset probability threshold, predicting the corresponding preprocessed cloth image with a pre-generated target detection model to obtain its cloth defect region and the cloth defect type of that region. The invention meets the real-time requirement of detection; by combining classification with detection, it exploits the fact that defects occur far less frequently than normal samples, effectively improving the detection accuracy of the models.

Description

Real-time intelligent cloth inspecting method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a real-time intelligent cloth inspecting method and system.
Background
In the field of garment production and manufacturing, quality inspectors use a cloth inspecting machine to identify various flaws and defects in cloth, such as stains, holes, ladders, mixed yarns and wrinkles, score the different defects, and finally judge whether the batch of cloth is qualified. This manual approach suffers from low detection speed and low efficiency, and human eyes cannot sustain inspection of moving cloth for long periods.
A feasible approach is to add a light source and an optical imaging device to an existing cloth inspecting machine for image acquisition, use hardware such as an encoder and a PLC (programmable logic controller) for automatic control, and then apply machine learning or deep learning to classify the acquired images. In practice the cloth inspecting machine travels at up to 1 m/s with a cloth width close to 2 m, and existing deep-learning-based methods introduce substantial latency. The main reason is that existing solutions either cast the whole defect-detection task as a classification task, which cannot describe the precise position of a defect even though an industrial scene may require detecting defects of more than 2-3 mm; or cast it as a target detection and image segmentation task, where, because some defects are very small, a two-stage deep network cannot meet the real-time requirement. The latter approach also fails to account for the fact that, in practical projects, the proportion of defective cloth is very low.
Disclosure of Invention
The invention aims to provide a real-time intelligent cloth inspecting method and a real-time intelligent cloth inspecting system.
In order to achieve the purpose, the invention adopts the following technical scheme:
the real-time intelligent cloth inspecting method comprises the following steps:
step S1, acquiring real-time images of the fast moving cloth by using a line scanning camera to obtain a plurality of real-time cloth images;
step S2, carrying out image preprocessing on each real-time cloth image to obtain a preprocessed cloth image;
step S3, predicting each preprocessed cloth image according to a pre-generated binary model to obtain the cloth defect probability of each preprocessed cloth image, and comparing the cloth defect probability with a preset probability threshold:
if the cloth defect probability is smaller than the probability threshold, exit (the image is judged defect-free);
if the cloth defect probability is not less than the probability threshold, turning to step S4;
and step S4, predicting the preprocessed cloth image corresponding to the cloth defect probability according to a pre-generated target detection model to obtain a cloth defect area of each preprocessed cloth image and a cloth defect type corresponding to the cloth defect area.
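Steps S3-S4 can be sketched as a single dispatch function. This is an illustrative sketch only, not the patented implementation: `classifier` and `detector` stand in for the trained binary classification and target detection models.

```python
import numpy as np

def inspect_tile(tile, classifier, detector, threshold=0.5):
    """Two-stage inspection of one preprocessed cloth tile (steps S3-S4).

    `classifier` returns a defect probability for the tile; `detector`
    returns a list of (bbox, defect_type) pairs. Both are placeholders
    for the trained models described in the patent.
    """
    p_defect = classifier(tile)
    if p_defect < threshold:          # step S3: below threshold, exit early
        return None
    return detector(tile)             # step S4: locate and classify defects

# Stage two runs only on the small fraction of tiles the classifier flags.
```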
As a preferable embodiment of the present invention, the step S2 specifically includes:
step S21, performing image cutting on each real-time cloth image to obtain a first cut image;
step S22, performing image cutting on each first cut image to obtain a second cut image;
and step S23, performing image enhancement on each second cut image to obtain a preprocessed cloth image.
In a preferred embodiment of the present invention, in step S23, the method used for image enhancement is histogram equalization.
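The histogram equalization named in step S23 can be sketched in a few lines of NumPy. Libraries such as OpenCV provide this directly (`cv2.equalizeHist`); the function below is an illustrative reimplementation for 8-bit grayscale images, not the patent's code.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image (step S23).

    Maps each gray level through the normalized cumulative histogram so
    the occupied levels spread over the full 0-255 range.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]             # first nonzero CDF value
    if cdf[-1] == cdf_min:                # flat image: nothing to spread
        return img.copy()
    # Classic equalization formula; maps cdf_min -> 0 and cdf[-1] -> 255.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```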
As a preferable aspect of the present invention, after step S4 is executed, the method further includes post-processing each preprocessed cloth image containing a cloth defect region and its corresponding cloth defect type, specifically:
restoring each preprocessed cloth image to its position according to the relative positional relationship between the preprocessed cloth images before image cutting, to obtain a restored cloth image, and obtaining from the restored cloth image any cloth defect region larger than the second cut image.
As a preferred embodiment of the present invention, the method further includes a process of generating the binary classification model in advance, specifically including:
step A1, acquiring a plurality of defect cloth images containing defect areas, labeling the defect areas on each defect cloth image respectively to obtain a defect cloth label image containing a real label area and a corresponding defect type, and adding each defect cloth label image into a first data set;
step A2, cutting each defect cloth label image in the first data set to obtain a plurality of image blocks, marking the image blocks containing the real label area and the corresponding defect type as positive samples, and marking the image blocks not containing the real label area and the corresponding defect type as negative samples to obtain a second data set containing the positive samples and the negative samples;
and A3, training according to the second data set to obtain the binary model.
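Step A2 can be illustrated with a small sketch. The grid split and the overlap rule for marking a tile positive are illustrative assumptions; the patent does not fix an exact criterion.

```python
def label_tiles(image_size, tile, boxes):
    """Split an annotated image into tiles and mark each as a positive
    or negative sample (a sketch of step A2).

    `boxes` holds (x0, y0, x1, y1, defect_type) ground-truth regions;
    a tile that overlaps any labelled region becomes a positive sample,
    all other tiles become negative samples.
    """
    h, w = image_size
    samples = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            hit = [t for (x0, y0, x1, y1, t) in boxes
                   if x < x1 and x0 < x + tile and y < y1 and y0 < y + tile]
            samples.append(((x, y), 'pos' if hit else 'neg', hit))
    return samples
```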
As a preferred embodiment of the present invention, the step a2 further includes performing data enhancement on the positive samples so that the number of the positive samples and the negative samples reaches a preset ratio.
As a preferred embodiment of the present invention, the value range of the preset ratio is [0.5, 1].
As a preferred embodiment of the present invention, the method further includes a method for generating the target detection model in advance, and specifically includes:
step B1, acquiring a plurality of defect cloth images containing defect areas, labeling the defect areas on each defect cloth image respectively to obtain a defect cloth label image containing a real label area and a corresponding defect type, and adding each defect cloth label image into a third data set;
and B2, training according to the third data set to obtain the target detection model.
A real-time intelligent cloth inspecting system applies any one of the real-time intelligent cloth inspecting methods above, and specifically comprises:
the image acquisition module is used for acquiring real-time images of the fast moving cloth by adopting the line scanning camera to obtain a plurality of real-time cloth images;
the image preprocessing module is connected with the image acquisition module and used for preprocessing the images of the real-time cloth to obtain preprocessed cloth images;
a first prediction module connected to the image pre-processing module, the first prediction module comprising:
the prediction unit is used for predicting each preprocessed cloth image according to a pre-generated binary model to obtain the cloth defect probability of each preprocessed cloth image;
the comparison unit is connected with the prediction unit and used for comparing the cloth defect probability with a preset probability threshold value and outputting a corresponding comparison result when the cloth defect probability is not less than the probability threshold value;
and the second prediction module is connected with the first prediction module and used for predicting the preprocessed cloth image corresponding to the cloth defect probability according to the comparison result and a pre-generated target detection model to obtain a cloth defect area of each preprocessed cloth image and a cloth defect type corresponding to the cloth defect area.
As a preferred aspect of the present invention, the image preprocessing module specifically includes:
the first cutting unit is used for respectively carrying out image cutting on each real-time cloth image to obtain a first cutting image;
the second cutting unit is connected with the first cutting unit and used for respectively carrying out image cutting on each first cutting image to obtain a second cutting image;
and the image enhancement unit is connected with the second cutting unit and is used for respectively carrying out image enhancement on each second cutting image to obtain a preprocessed cloth image.
As a preferable scheme of the present invention, the present invention further includes an image post-processing module, respectively connected to the second prediction module and the image preprocessing module, configured to restore each of the preprocessed cloth images according to a relative position relationship between the preprocessed cloth images before the image cutting is performed, to obtain a restored cloth image, and obtain the cloth defect region with a size larger than that of the second cut image according to the restored cloth image.
As a preferred embodiment of the present invention, the present invention further includes a first model generation module, connected to the first prediction module, where the first model generation module specifically includes:
the device comprises a first data acquisition unit, a second data acquisition unit and a third data acquisition unit, wherein the first data acquisition unit is used for acquiring a plurality of defect cloth images containing defect areas, marking the defect areas on the defect cloth images respectively to obtain defect cloth marked images containing real marked areas and corresponding defect types, and adding the defect cloth marked images into a first data set;
the second data acquisition unit is connected with the first data acquisition unit and used for respectively cutting each defect cloth label image in the first data set to obtain a plurality of image blocks, recording the image blocks containing the real label area and the corresponding defect type as positive samples, and recording the image blocks not containing the real label area and the corresponding defect type as negative samples to obtain a second data set containing the positive samples and the negative samples;
and the first model training unit is connected with the second data acquisition unit and used for training according to the second data set to obtain the binary classification model.
As a preferred embodiment of the present invention, the system further includes a second model generation module, connected to the second prediction module, where the second model generation module specifically includes:
the third data acquisition unit is used for acquiring a plurality of defect cloth images containing defect areas, labeling the defect areas on the defect cloth images respectively to obtain defect cloth labeled images containing real labeled areas and corresponding defect types, and adding the defect cloth labeled images into a third data set;
and the second model training unit is connected with the third data acquisition unit and used for training according to the third data set to obtain the target detection model.
The invention has the beneficial effects that:
1) on a single 2080 GPU, defect detection of an 8192 × 8192-pixel image can be completed every second, meeting the real-time requirement of detection;
2) by combining classification with detection, the method exploits the fact that defects occur far less frequently than normal samples, effectively improving the detection accuracy of the models.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flow chart of a real-time intelligent cloth inspecting method according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating an image preprocessing method according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating a process of generating a two-class model in advance according to an embodiment of the present invention.
Fig. 4 is a flowchart illustrating a process of generating a target detection model in advance according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a real-time intelligent cloth inspecting system according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
The drawings are for illustration only, are not in actual form or to scale, and are not to be construed as limiting the present patent; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if the terms "upper", "lower", "left", "right", "inner", "outer", etc. are used for indicating the orientation or positional relationship based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not indicated or implied that the referred device or element must have a specific orientation, be constructed in a specific orientation and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes and are not to be construed as limitations of the present patent, and the specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the description of the present invention, unless otherwise explicitly specified or limited, the term "connected" or the like, if appearing to indicate a connection relationship between the components, is to be understood broadly, for example, as being fixed or detachable or integral; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or may be connected through one or more other components or may be in an interactive relationship with one another. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Aiming at the problems in the prior art, the invention provides a real-time intelligent cloth inspecting method, which specifically comprises the following steps as shown in fig. 1:
step S1, acquiring real-time images of the fast moving cloth by using a line scanning camera to obtain a plurality of real-time cloth images;
step S2, carrying out image preprocessing on each real-time cloth image to obtain a preprocessed cloth image;
step S3, predicting each preprocessed cloth image according to the pre-generated binary model to obtain the cloth defect probability of each preprocessed cloth image, and comparing the cloth defect probability with a preset probability threshold:
if the cloth defect probability is smaller than the probability threshold, exit (the image is judged defect-free);
if the cloth defect probability is not less than the probability threshold, turning to step S4;
and step S4, predicting the preprocessed cloth images corresponding to the cloth defect probability according to the pre-generated target detection model to obtain cloth defect areas of the preprocessed cloth images and cloth defect types corresponding to the cloth defect areas.
Specifically, in this embodiment, the present invention provides a method for real-time defect detection of fast-moving cloth, supporting cloth about 2 m wide traveling at 1 m/s. Based on the observation that defective and normal samples are unbalanced, with far fewer defective samples than normal ones, the method is divided into two stages using two models:
stage one: the classification method is used for quick recall, only two classifications are carried out, namely a defect sample and a normal sample, the algorithm in the stage focuses on the execution speed and the recall rate, the closer the recall rate is to 1, the better the recall rate is, and the misjudgment of the non-defect sample is allowed; the stage can fully utilize defective marking samples and normal various cloth samples;
and a second stage: and (3) filtering at least more than 95% of pictures of the samples divided into the defects in the first stage by using a target detection method, and performing multi-classification and defect positioning, wherein the attention point of the algorithm in the stage is the accuracy.
The stage-one classification model can use a resnet18 network followed by a softmax layer to obtain the cloth defect probability; the stage-two target detection model can use an anchor-free method, such as an fcos model with resnet50 as the backbone, to adapt to defects of different sizes. In the training phase, the two models use different divisions of the training samples.
More specifically, a line-scan camera acquires images of the fast-moving cloth in real time; for example, an 8K line-scan camera yields real-time cloth images of 8192 × 8192 pixels. During image preprocessing, each real-time cloth image is preferably cut into 1024 × 1024-pixel crops, each crop is further divided into 256 × 256-pixel tiles, and image enhancement, for example histogram equalization, is applied to obtain the preprocessed cloth images.
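The two-level cut described here (8192 → 1024 → 256 pixels) can be sketched with NumPy slicing. This is an illustrative sketch of the tiling only, not the patented pipeline.

```python
import numpy as np

def tile_image(img, size):
    """Cut an image into non-overlapping size x size tiles, row-major.

    Applied twice in this embodiment: 8192-px frames -> 1024-px crops,
    then each 1024-px crop -> 256-px tiles fed to the classifier.
    """
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h, size)
            for x in range(0, w, size)]

frame = np.zeros((8192, 8192), dtype=np.uint8)   # one 8K line-scan frame
crops = tile_image(frame, 1024)                  # 64 crops of 1024 x 1024
tiles = tile_image(crops[0], 256)                # 16 tiles of 256 x 256
```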
Then the trained binary classification model predicts on the 256 × 256-pixel tiles, i.e. the preprocessed cloth images, with a batch size of 16 to obtain the cloth defect probability; tiles whose probability reaches the probability threshold preset during the test phase are passed to the next stage as possibly defective.
Finally, the anchor-free fcos_r_50_fpn_2X method is preferably used, balancing inference speed and accuracy; other two-stage target detection models such as Faster R-CNN can also be used to localize the cloth defect type and cloth defect region.
As a preferable scheme of the invention, after the detection result is obtained from the target detection model, the method further includes a post-processing step. In the classification stage the pictures are divided down to 256 × 256 pixels; since 1 mm is about 10 pixels, a defect exceeding 25 mm, such as a filament possibly longer than 60 mm, cannot fit in a single tile. Therefore, in a real scene, the defect position can be restored by post-processing: adjacent preprocessed cloth images containing cloth defect regions and their defect types are checked for adjacency, and adjacent cloth defect regions of the same type are merged, yielding defect positions larger than 256 pixels.
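The merging of adjacent defect regions described above can be sketched as follows. The grid-keyed input format and the touch-or-overlap merge rule are illustrative assumptions; the patent does not specify them.

```python
def merge_defects(tile_boxes, tile=256):
    """Post-processing sketch: lift per-tile detections into frame
    coordinates and merge touching boxes of the same defect type, so a
    defect longer than one 256-px tile is reported as a single region.

    `tile_boxes` maps tile grid position (col, row) to a list of
    (x0, y0, x1, y1, defect_type) boxes in tile-local pixels.
    """
    # Lift every box from tile-local to frame coordinates.
    boxes = [(cx * tile + x0, cy * tile + y0,
              cx * tile + x1, cy * tile + y1, t)
             for (cx, cy), bs in tile_boxes.items()
             for (x0, y0, x1, y1, t) in bs]
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            # Same type and touching/overlapping: grow the merged box.
            if box[4] == m[4] and (box[0] <= m[2] and m[0] <= box[2]
                                   and box[1] <= m[3] and m[1] <= box[3]):
                merged[i] = (min(box[0], m[0]), min(box[1], m[1]),
                             max(box[2], m[2]), max(box[3], m[3]), m[4])
                break
        else:
            merged.append(box)
    return merged
```

A single pass like this may miss long chains of boxes; a production version would iterate to a fixed point or use a union-find structure.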
As a preferable embodiment of the present invention, as shown in fig. 2, step S2 specifically includes:
step S21, performing image cutting on each real-time cloth image to obtain a first cut image;
step S22, performing image cutting on each first cut image to obtain a second cut image;
and step S23, respectively carrying out image enhancement on each second cutting image to obtain a preprocessed cloth image.
In a preferred embodiment of the present invention, in step S23, the method used for image enhancement is histogram equalization.
As a preferred embodiment of the present invention, after step S4 is executed, the method further includes post-processing each preprocessed cloth image containing a cloth defect region and its corresponding cloth defect type, specifically:
restoring each preprocessed cloth image to its position according to the relative positional relationship between the preprocessed cloth images before image cutting, to obtain a restored cloth image, and obtaining from the restored cloth image any cloth defect region larger than the second cut image.
As a preferred embodiment of the present invention, the method further includes a process of generating a two-class model in advance, as shown in fig. 3, specifically including:
step A1, acquiring a plurality of defect cloth images containing defect areas, labeling the defect areas on each defect cloth image respectively to obtain a defect cloth label image containing a real label area and a corresponding defect type, and adding each defect cloth label image into a first data set;
step A2, cutting each defect cloth label image in the first data set respectively to obtain a plurality of image blocks, recording the image blocks containing the real label area and the corresponding defect type as positive samples, and recording the image blocks not containing the real label area and the corresponding defect type as negative samples to obtain a second data set containing the positive samples and the negative samples;
and step A3, training according to the second data set to obtain a second classification model.
Specifically, in this embodiment, a data set must first be prepared before generating the binary classification model. Considering the actual defect sizes, the sample set collects cloth pieces of about 1 m × 1 m at the actually acquired resolution; with an 8K line-scan camera, this yields pictures of 8K × 8K resolution. After each picture is divided into 1024 × 1024-pixel crops, the defect regions and defect types are labelled to obtain a first data set containing all the defects.
The binary classification model performs poorly on pictures that are too large; preferably, each 1024 × 1024-pixel crop is further divided into 16 tiles of 256 × 256 pixels, generating more positive and negative samples: tiles containing any of the different defects are taken as positive samples, while backgrounds or other normal regions outside the defect areas are randomly sampled as negative samples, giving the second data set. Preferably, grayscale conversion, brightness, contrast and saturation changes are applied to the positive samples, together with up-down and left-right flips and physically simulated augmentation, to supplement and enlarge the data until the ratio of positive to negative samples is between 1:1 and 1:2.
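The flip-based part of this augmentation, run until the positive-to-negative ratio reaches a target in [0.5, 1], can be sketched as below. The random-sampling scheme is an assumption for illustration; the embodiment also uses grayscale, brightness, contrast and saturation changes, which are omitted here.

```python
import numpy as np

def augment_positives(positives, negatives, target_ratio=1.0, seed=0):
    """Enlarge the positive set with flipped copies until the
    positive-to-negative ratio reaches `target_ratio` (1.0 means 1:1;
    0.5 means 1:2, the other end of the embodiment's range).
    """
    rng = np.random.default_rng(seed)
    augmented = list(positives)
    flips = [lambda im: im[::-1],        # up-down flip
             lambda im: im[:, ::-1]]     # left-right flip
    while len(augmented) < target_ratio * len(negatives):
        src = positives[rng.integers(len(positives))]
        augmented.append(flips[rng.integers(2)](src))
    return augmented
```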
Finally, a convolutional neural network is used: the backbone is preferably resnet18, followed by a fully connected layer and a softmax layer; the binary classification model is then trained with emphasis on recall rate and running speed.
As a preferred embodiment of the present invention, step a2 further includes performing data enhancement on the positive samples so that the number of the positive samples and the negative samples reaches a preset ratio.
As a preferred scheme of the invention, the value range of the preset proportion is [0.5, 1 ].
As a preferred embodiment of the present invention, the method further includes a process of generating a target detection model in advance, as shown in fig. 4, specifically including:
step B1, acquiring a plurality of defect cloth images containing defect areas, labeling the defect areas on each defect cloth image respectively to obtain a defect cloth label image containing a real label area and a corresponding defect type, and adding each defect cloth label image into a third data set;
and step B2, training according to the third data set to obtain a target detection model.
Specifically, in the present embodiment, a data set must first be prepared. Considering the actual defect sizes, the sample set collects cloth pieces of about 1 m × 1 m at the actually acquired resolution; with an 8K line-scan camera, this yields pictures of 8K × 8K resolution. After each picture is divided into 1024 × 1024-pixel crops, the defect regions and defect types are labelled to obtain a third data set containing all the defects.
Then a subset of the third data set is divided directly to train the target detection model. The method is not limited to models such as fcos_r_50_fpn_2X; it balances speed and precision, and the main problems to solve are defects of different scales and sample imbalance.
A real-time intelligent cloth inspecting system applies any one of the real-time intelligent cloth inspecting methods, as shown in fig. 5, the real-time intelligent cloth inspecting system specifically comprises:
the image acquisition module 1 is used for acquiring real-time images of the fast moving cloth by adopting a line scanning camera to obtain a plurality of real-time cloth images;
the image preprocessing module 2 is connected with the image acquisition module 1 and is used for preprocessing each real-time cloth image to obtain a preprocessed cloth image;
the first prediction module 3 is connected with the image preprocessing module 2, and the first prediction module 3 comprises:
the prediction unit 31 is configured to predict each preprocessed cloth image according to a pre-generated binary model to obtain a cloth defect probability of each preprocessed cloth image;
the comparison unit 32 is connected with the prediction unit 31 and is used for comparing the cloth defect probability with a preset probability threshold value and outputting a corresponding comparison result when the cloth defect probability is not less than the probability threshold value;
and the second prediction module 4 is connected with the first prediction module 3 and used for predicting the preprocessed cloth images corresponding to the cloth defect probability according to the comparison result and the pre-generated target detection model to obtain cloth defect areas of the preprocessed cloth images and cloth defect types corresponding to the cloth defect areas.
As a preferred embodiment of the present invention, the image preprocessing module 2 specifically includes:
the first cutting unit 21 is configured to perform image cutting on each real-time cloth image to obtain a first cut image;
the second cutting unit 22 is connected with the first cutting unit 21 and is used for respectively carrying out image cutting on each first cutting image to obtain a second cutting image;
and the image enhancement unit 23 is connected with the second cutting unit 22 and is used for respectively carrying out image enhancement on each second cutting image to obtain a preprocessed cloth image.
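The preprocessing chain of units 21-23 (coarse cut, fine cut, per-patch enhancement) can be sketched as below. The tile sizes are illustrative assumptions, and the enhancement uses the standard histogram-equalization CDF remapping (the technique method claim 3 names), implemented by hand here rather than via any particular library:

```python
import numpy as np

def cut(img, tile_h, tile_w):
    """Cut a 2-D image into non-overlapping tiles, row-major order."""
    h, w = img.shape
    return [img[y:y + tile_h, x:x + tile_w]
            for y in range(0, h, tile_h)
            for x in range(0, w, tile_w)]

def equalize(img):
    """Histogram equalization via the standard CDF remapping."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    denom = cdf.max() - cdf.min()
    if denom == 0:               # flat image: nothing to equalize
        return img.copy()
    lut = (cdf - cdf.min()) * 255 // denom
    return lut.astype(np.uint8)[img]

def preprocess(frame, first=(4, 4), second=(2, 2)):
    """First cut, second cut, then enhance each final patch.
    Tile sizes here are toy values for illustration only."""
    patches = [p for strip in cut(frame, *first) for p in cut(strip, *second)]
    return [equalize(p) for p in patches]
```

The two-level cut mirrors the module structure: the first cut yields strips sized for the classifier, the second yields the smaller patches the models actually consume.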
As a preferred embodiment of the present invention, the system further includes an image post-processing module 5, which is respectively connected to the second prediction module 4 and the image preprocessing module 2, and is configured to restore each preprocessed cloth image according to the relative position relationship between the preprocessed cloth images before image cutting, to obtain a restored cloth image, and to obtain from the restored cloth image a cloth defect region with a size larger than that of the second cut image.
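The post-processing module's job is the inverse of the cutting: each patch-local detection is shifted by its patch's offset in the original frame, and boxes that touch across patch boundaries are unioned, which is how a defect region larger than a single second cut image can be recovered. A sketch under assumed axis-aligned (x0, y0, x1, y1) boxes:

```python
def to_global(box, offset):
    """Map a patch-local box (x0, y0, x1, y1) into full-image coordinates."""
    x0, y0, x1, y1 = box
    ox, oy = offset
    return (x0 + ox, y0 + oy, x1 + ox, y1 + oy)

def touches(a, b):
    """True if two boxes overlap or share an edge."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def merge(boxes):
    """Repeatedly union touching boxes until a fixed point is reached."""
    boxes = list(boxes)
    changed = True
    while changed:
        changed = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if touches(boxes[i], boxes[j]):
                    boxes[i] = union(boxes[i], boxes.pop(j))
                    changed = True
                    break
            if changed:
                break
    return boxes
```

For example, a defect straddling two 100-pixel-wide patches produces one box ending at the right edge of the first patch and one starting at the left edge of the second; after shifting by the patch offsets, the two boxes touch and merge into a single region wider than either patch.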
As a preferred embodiment of the present invention, the system further includes a first model generation module 6 connected to the first prediction module 3, wherein the first model generation module 6 specifically includes:
the first data acquisition unit 61 is configured to acquire a plurality of defect cloth images including a defect region, label the defect region on each defect cloth image to obtain a defect cloth label image including a real label region and a corresponding defect type, and add each defect cloth label image into a first data set;
the second data acquisition unit 62 is connected to the first data acquisition unit 61, and is configured to cut each defect cloth label image in the first data set to obtain a plurality of image blocks, record an image block including a real label area and a corresponding defect type as a positive sample, and record an image block not including the real label area and the corresponding defect type as a negative sample to obtain a second data set including the positive sample and the negative sample;
and the first model training unit 63 is connected to the second data acquisition unit 62, and is configured to train the binary classification model according to the second data set.
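Units 61-63 build the classifier's training set by grid-cutting each labelled image and calling a block a positive sample exactly when it overlaps an annotated defect region. A sketch with assumed rectangle coordinates (the block size and the strict-overlap test are illustrative choices, not values from the patent):

```python
def intersects(a, b):
    """Strict rectangle overlap for (x0, y0, x1, y1) boxes."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def label_blocks(img_w, img_h, block, regions):
    """Grid-cut the labelled image; a block is a positive sample iff it
    overlaps at least one annotated defect region, otherwise negative."""
    positives, negatives = [], []
    for y in range(0, img_h, block):
        for x in range(0, img_w, block):
            b = (x, y, x + block, y + block)
            if any(intersects(b, r) for r in regions):
                positives.append(b)
            else:
                negatives.append(b)
    return positives, negatives
```

Because defects are rare, this split is heavily skewed toward negatives; in practice the positives are then augmented until the positive-to-negative count ratio lands in the preset range (claim 7 gives [0.5, 1]).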
As a preferred embodiment of the present invention, the system further includes a second model generation module 7 connected to the second prediction module 4, where the second model generation module 7 specifically includes:
a third data obtaining unit 71, configured to obtain a plurality of defect cloth images including a defect region, label the defect region on each defect cloth image to obtain a defect cloth label image including a real label region and a corresponding defect type, and add each defect cloth label image to a third data set;
and the second model training unit 72 is connected to the third data acquisition unit 71, and is used for obtaining the target detection model according to the third data set training.
It should be understood that the above-described embodiments are merely preferred embodiments of the invention, illustrating the technical principles applied. Those skilled in the art will appreciate that various modifications, equivalents, and changes can be made to the present invention; such variations remain within the scope of the invention as long as they do not depart from its spirit. In addition, certain terms used in the specification and claims of the present application are not limiting, but are used merely for convenience of description.

Claims (13)

1. A real-time intelligent cloth inspecting method is characterized by comprising the following steps:
step S1, acquiring real-time images of the fast-moving cloth by using a line scanning camera to obtain a plurality of real-time cloth images;
step S2, carrying out image preprocessing on each real-time cloth image to obtain a preprocessed cloth image;
step S3, predicting each preprocessed cloth image according to a pre-generated binary classification model to obtain the cloth defect probability of each preprocessed cloth image, and comparing the cloth defect probability with a preset probability threshold:
if the cloth defect probability is smaller than the probability threshold, the process exits;
if the cloth defect probability is not less than the probability threshold, proceeding to step S4;
and step S4, predicting the preprocessed cloth image corresponding to the cloth defect probability according to a pre-generated target detection model to obtain a cloth defect area of each preprocessed cloth image and a cloth defect type corresponding to the cloth defect area.
2. The real-time intelligent cloth inspecting method of claim 1, wherein said step S2 specifically comprises:
step S21, performing image cutting on each real-time cloth image to obtain a first cut image;
step S22, performing image cutting on each first cut image to obtain a second cut image;
and step S23, performing image enhancement on each second cut image to obtain a preprocessed cloth image.
3. The real-time intelligent cloth inspecting method of claim 2, wherein in said step S23, said image enhancement adopts histogram equalization.
4. The real-time intelligent cloth inspecting method according to claim 2, wherein after the step S4 is executed, the method further includes a post-processing procedure for each of the preprocessed cloth images including the cloth defect region and the corresponding cloth defect type, specifically including:
and restoring each preprocessed cloth image according to the relative position relation between the preprocessed cloth images before the image cutting to obtain a restored cloth image, and acquiring the cloth defect area with the size larger than that of the second cut image according to the restored cloth image.
5. The real-time intelligent cloth inspecting method of claim 1, further comprising a process of generating the binary classification model in advance, specifically comprising:
step A1, acquiring a plurality of defect cloth images containing defect areas, labeling the defect areas on each defect cloth image respectively to obtain a defect cloth label image containing a real label area and a corresponding defect type, and adding each defect cloth label image into a first data set;
step A2, cutting each defect cloth label image in the first data set to obtain a plurality of image blocks, marking the image blocks containing the real label area and the corresponding defect type as positive samples, and marking the image blocks not containing the real label area and the corresponding defect type as negative samples to obtain a second data set containing the positive samples and the negative samples;
and step A3, training according to the second data set to obtain the binary classification model.
6. The real-time intelligent cloth inspecting method of claim 5, wherein said step A2 further comprises data enhancement of said positive samples, so that the numbers of said positive samples and said negative samples reach a preset ratio.
7. The real-time intelligent cloth inspecting method of claim 6, wherein the value range of said preset ratio is [0.5, 1].
8. The real-time intelligent cloth inspecting method of claim 1, further comprising a method for generating the target detection model in advance, specifically comprising:
step B1, acquiring a plurality of defect cloth images containing defect areas, labeling the defect areas on each defect cloth image respectively to obtain a defect cloth label image containing a real label area and a corresponding defect type, and adding each defect cloth label image into a third data set;
and B2, training according to the third data set to obtain the target detection model.
9. A real-time intelligent cloth inspecting system, characterized in that the real-time intelligent cloth inspecting method of any one of claims 1 to 8 is applied, the real-time intelligent cloth inspecting system specifically comprising:
the image acquisition module is used for acquiring real-time images of the fast-moving cloth by adopting the line scanning camera to obtain a plurality of real-time cloth images;
the image preprocessing module is connected with the image acquisition module and used for preprocessing each real-time cloth image to obtain a preprocessed cloth image;
a first prediction module connected to the image pre-processing module, the first prediction module comprising:
the prediction unit is used for predicting each preprocessed cloth image according to a pre-generated binary model to obtain the cloth defect probability of each preprocessed cloth image;
the comparison unit is connected with the prediction unit and used for comparing the cloth defect probability with a preset probability threshold value and outputting a corresponding comparison result when the cloth defect probability is not less than the probability threshold value;
and the second prediction module is connected with the first prediction module and used for predicting the preprocessed cloth image corresponding to the cloth defect probability according to the comparison result and a pre-generated target detection model to obtain a cloth defect area of each preprocessed cloth image and a cloth defect type corresponding to the cloth defect area.
10. The real-time intelligent cloth inspecting system of claim 9, wherein said image preprocessing module specifically comprises:
the first cutting unit is used for respectively carrying out image cutting on each real-time cloth image to obtain a first cutting image;
the second cutting unit is connected with the first cutting unit and used for respectively carrying out image cutting on each first cutting image to obtain a second cutting image;
and the image enhancement unit is connected with the second cutting unit and is used for respectively carrying out image enhancement on each second cutting image to obtain a preprocessed cloth image.
11. The real-time intelligent cloth inspecting system according to claim 10, further comprising an image post-processing module, respectively connected to the second prediction module and the image preprocessing module, for restoring each of the preprocessed cloth images according to a relative positional relationship between the preprocessed cloth images before the image cutting, to obtain a restored cloth image, and obtaining the cloth defect region having a size larger than that of the second cut image according to the restored cloth image.
12. The real-time intelligent cloth inspecting system of claim 9, further comprising a first model generation module, connected to said first prediction module, said first model generation module specifically comprising:
the device comprises a first data acquisition unit, a second data acquisition unit and a third data acquisition unit, wherein the first data acquisition unit is used for acquiring a plurality of defect cloth images containing defect areas, marking the defect areas on the defect cloth images respectively to obtain defect cloth marked images containing real marked areas and corresponding defect types, and adding the defect cloth marked images into a first data set;
the second data acquisition unit is connected with the first data acquisition unit and used for respectively cutting each defect cloth label image in the first data set to obtain a plurality of image blocks, recording the image blocks containing the real label area and the corresponding defect type as positive samples, and recording the image blocks not containing the real label area and the corresponding defect type as negative samples to obtain a second data set containing the positive samples and the negative samples;
and the first model training unit is connected with the second data acquisition unit and used for training according to the second data set to obtain the binary classification model.
13. The real-time intelligent cloth inspecting system of claim 9, further comprising a second model generation module, connected to said second prediction module, said second model generation module specifically comprising:
the third data acquisition unit is used for acquiring a plurality of defect cloth images containing defect areas, labeling the defect areas on the defect cloth images respectively to obtain defect cloth labeled images containing real labeled areas and corresponding defect types, and adding the defect cloth labeled images into a third data set;
and the second model training unit is connected with the third data acquisition unit and used for training according to the third data set to obtain the target detection model.
CN201911373207.2A 2019-12-27 2019-12-27 Real-time intelligent cloth inspecting method and system Pending CN111028250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911373207.2A CN111028250A (en) 2019-12-27 2019-12-27 Real-time intelligent cloth inspecting method and system

Publications (1)

Publication Number Publication Date
CN111028250A true CN111028250A (en) 2020-04-17

Family

ID=70214130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911373207.2A Pending CN111028250A (en) 2019-12-27 2019-12-27 Real-time intelligent cloth inspecting method and system

Country Status (1)

Country Link
CN (1) CN111028250A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801965A (en) * 2021-01-21 2021-05-14 中南大学 Sintering belt foreign matter monitoring method and system based on convolutional neural network
CN115797349A (en) * 2023-02-07 2023-03-14 广东奥普特科技股份有限公司 Defect detection method, device and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106996935A (en) * 2017-02-27 2017-08-01 华中科技大学 A kind of multi-level fuzzy judgment Fabric Defects Inspection detection method and system
CN109191430A (en) * 2018-07-27 2019-01-11 江苏理工学院 A kind of plain color cloth defect inspection method based on Laws texture in conjunction with single classification SVM
CN109509171A (en) * 2018-09-20 2019-03-22 江苏理工学院 A kind of Fabric Defects Inspection detection method based on GMM and image pyramid
CN109509187A (en) * 2018-11-05 2019-03-22 中山大学 A kind of efficient check algorithm for the nibs in big resolution ratio cloth image
CN110175988A (en) * 2019-04-25 2019-08-27 南京邮电大学 Cloth defect inspection method based on deep learning

Similar Documents

Publication Publication Date Title
Shipway et al. Automated defect detection for fluorescent penetrant inspection using random forest
CN111325713A (en) Wood defect detection method, system and storage medium based on neural network
CN111951249A (en) Mobile phone light guide plate defect visual detection method based on multitask learning network
CN107966454A (en) A kind of end plug defect detecting device and detection method based on FPGA
CN111127448B (en) Method for detecting air spring fault based on isolated forest
CN109307675A (en) A kind of product appearance detection method and system
CN111986195B (en) Appearance defect detection method and system
CN110349125A (en) A kind of LED chip open defect detection method and system based on machine vision
CN104483320A (en) Digitized defect detection device and detection method of industrial denitration catalyst
CN111591715A (en) Belt longitudinal tearing detection method and device
CN111028250A (en) Real-time intelligent cloth inspecting method and system
CN104048966B (en) The detection of a kind of fabric defect based on big law and sorting technique
CN113111903A (en) Intelligent production line monitoring system and monitoring method
CN116228651A (en) Cloth defect detection method, system, equipment and medium
CN115205209A (en) Monochrome cloth flaw detection method based on weak supervised learning
Kulkarni et al. An automated computer vision based system for bottle cap fitting inspection
JP3572750B2 (en) Automatic evaluation method for concrete defects
CN111986145A (en) Bearing roller flaw detection method based on fast-RCNN
Kunze et al. Efficient deployment of deep neural networks for quality inspection of solar cells using smart labeling
CN103926255A (en) Method for detecting surface defects of cloth based on wavelet neural network
CN114331961A (en) Method for defect detection of an object
Hashmi et al. Computer-vision based visual inspection and crack detection of railroad tracks
CN112836724A (en) Object defect recognition model training method and device, electronic equipment and storage medium
CN115937555A (en) Industrial defect detection algorithm based on standardized flow model
CN113592859B (en) Deep learning-based classification method for defects of display panel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200417