CN112270687A - Cloth flaw identification model training method and cloth flaw detection method - Google Patents

Cloth flaw identification model training method and cloth flaw detection method

Info

Publication number
CN112270687A
Authority
CN
China
Prior art keywords
data
contour
training
cloth
defect
Prior art date
Legal status
Pending
Application number
CN202011112313.8A
Other languages
Chinese (zh)
Inventor
程洁 (Cheng Jie)
王晓珂 (Wang Xiaoke)
胡晓伟 (Hu Xiaowei)
陈成才 (Chen Chengcai)
Current Assignee
Jinghu Shanghai Intelligent Technology Co ltd
Shanghai Xiaoi Robot Technology Co Ltd
Original Assignee
Jinghu Shanghai Intelligent Technology Co ltd
Shanghai Xiaoi Robot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jinghu Shanghai Intelligent Technology Co ltd, Shanghai Xiaoi Robot Technology Co Ltd filed Critical Jinghu Shanghai Intelligent Technology Co ltd
Priority to CN202011112313.8A priority Critical patent/CN112270687A/en
Publication of CN112270687A publication Critical patent/CN112270687A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper

Abstract

The application provides a method for training a cloth flaw identification model, comprising: providing base data for a plurality of samples, the base data corresponding to the contour of a known cloth flaw in each sample; obtaining a plurality of training data from the base data of each sample; and training a neural network model on the samples carrying the plurality of training data, so that, given a sample, the neural network model can output a plurality of contour recognition data corresponding respectively to the plurality of training data. Because the base data of a sample corresponds to the contour of the known cloth flaw and each training datum corresponds to a training contour derived from that contour, a single sample carries a plurality of training references. Training the neural network model on such samples therefore gives the contour recognition model the ability to take one sample as input and output a plurality of contour recognition data, one for each training datum.

Description

Cloth flaw identification model training method and cloth flaw detection method
Technical Field
The application relates to the technical field of computers, in particular to a training method of a cloth defect identification model, a cloth defect detection method, electronic equipment and a computer readable storage medium.
Background
Object detection is a classic problem in the field of computer vision; its aim is to detect the position, contour, type and other attributes of a target object in an input image. In prior-art target detection methods, an image to be detected is convolved with convolution kernels representing different target categories, and the position, contour and type of the target object are obtained through classification.
Disclosure of Invention
In view of this, embodiments of the present application provide a training method for a fabric defect identification model, a fabric defect detection method, an electronic device, and a computer-readable storage medium, so as to solve the technical problems in the prior art that fabric defects cannot be identified accurately and that detection is time-consuming.
According to an aspect of the present application, an embodiment of the present application provides a method for training a fabric defect identification model, including: providing base data of a plurality of samples, wherein the base data corresponds to the profile of the known cloth defects in the samples; obtaining a plurality of training data based on the base data for each of the samples; and training a neural network model based on the sample with the plurality of training data so that the neural network model can output a plurality of contour recognition data respectively corresponding to the plurality of training data based on the sample.
According to another aspect of the present application, an embodiment of the present application provides a method for detecting a fabric defect, including: extracting characteristic information of an image to be detected, wherein the image to be detected comprises one or more cloth flaws; inputting the characteristic information into a contour recognition model obtained by training in any one of the above methods to obtain a plurality of contour prediction data of the image to be detected; performing feature superposition on the plurality of contour prediction data to obtain prediction superposition data; and obtaining profile data of the cloth defect based on the predicted superposition data.
According to yet another aspect of the present application, an embodiment of the present application provides an electronic device, including: a processor; a memory; and computer program instructions stored in the memory, which when executed by the processor, cause the processor to perform the method of any one of the above.
According to yet another aspect of the present application, an embodiment of the present application provides a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the method as described in any of the preceding.
According to another aspect of the present application, an embodiment of the present application provides a cloth defect detection data processing method, including: acquiring flaw information of the cloth according to any one of the detection methods; and determining a defect processing mode according to at least the defect information.
According to the training method for the contour recognition model provided by the embodiment of the application, a plurality of training data are obtained from the base data of a sample and the neural network model is trained on samples carrying the plurality of training data, so that, given a sample, the neural network model can output a plurality of contour recognition data corresponding respectively to the plurality of training data. Through this training method, the contour recognition model gains the ability to take one sample as input and output a plurality of contour recognition data, one for each training datum. Specifically, the base data of a sample corresponds to the contour of the known fabric defect in the sample and each training datum corresponds to a training contour derived from that contour; since one sample carries a plurality of training data, training the neural network model on such samples enables it to output contour recognition data corresponding respectively to the training data.
According to the cloth defect detection method provided by the embodiment of the application, the feature information of an image to be detected is extracted and input into the contour recognition model obtained by the above training method to obtain a plurality of contour prediction data of the image; the plurality of contour prediction data are superimposed to obtain prediction superposition data, and the contour data of the cloth defect are obtained from the prediction superposition data. The contour recognition model obtained by the training method yields a plurality of contour prediction data for the image to be detected, and superimposing them makes the contours of one or more cloth defects more accurate and clear, so the contour data of the cloth defects are obtained efficiently and precisely. Specifically, since the contour recognition model produces a plurality of contour prediction data that reflect, over a plurality of size ranges, the contour of one or more cloth defects in the image to be detected, a cloth defect (or several overlapping cloth defects) that cannot be predicted in one size range can still be predicted in another. Superimposing the plurality of contour prediction data, which represent the cloth-defect contours predicted over the plurality of size ranges, yields prediction superposition data that clearly reflect the detection result of a single defect or of overlapping defects, and the contour data of the cloth defect are finally obtained from this superposition.
Drawings
Fig. 1 is a schematic flowchart illustrating a method for training a contour recognition model according to an embodiment of the present application.
Fig. 2 is a schematic flowchart illustrating a process of training a neural network model based on a sample with a plurality of training data in a training method for a contour recognition model according to an embodiment of the present application.
Fig. 3 is a schematic flowchart illustrating a process of training a neural network model based on a sample with a plurality of training data in a training method for a contour recognition model according to an embodiment of the present application.
Fig. 4 is a schematic flowchart illustrating a method for detecting a target object according to an embodiment of the present application.
Fig. 5 is a schematic flowchart illustrating a process of obtaining prediction superposition data by performing feature superposition on a plurality of contour prediction data in a target object detection method according to an embodiment of the present application.
Fig. 6 is a schematic flowchart illustrating a method for detecting a target object according to an embodiment of the present application.
Fig. 7 is a schematic flowchart illustrating a process of obtaining contour data of a target object based on predicted overlay data in a target object detection method according to an embodiment of the present application.
Fig. 8 is a schematic flowchart illustrating a process of determining contour data of a target object based on predicted overlay data and category contour prediction data in a target object detection method according to an embodiment of the present application.
Fig. 9 is a schematic flow chart illustrating a process of extracting feature information of an image to be detected in a target object detection method according to an embodiment of the present application.
Fig. 10 is a schematic flowchart illustrating a process of acquiring an image to be detected in a target object detection method according to an embodiment of the present application.
Fig. 11a is a schematic flowchart illustrating a method for detecting defects on a fabric image according to an embodiment of the present disclosure.
Fig. 11b is a schematic flowchart illustrating a method for detecting defects on a fabric image according to an embodiment of the present disclosure.
Fig. 12 is a schematic structural diagram of a training apparatus for contour recognition models according to an embodiment of the present application.
Fig. 13 is a schematic structural diagram of a target object detection apparatus according to an embodiment of the present application.
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As described above, object detection is a classic problem in the field of computer vision whose aim is to detect the position, contour, type and other attributes of a target object in an input image. In the prior art, a target detection method convolves the image to be detected with convolution kernels representing different target classes and obtains the position, contour and type of the target object through classification. However, this method has low detection accuracy for a single target object or for overlapping target objects: it cannot accurately detect the contour of a target object, cannot accurately segment overlapping target objects, and is slow and time-consuming.
In view of the above technical problems, the basic concept of the present application is a training method for a contour recognition model in which a plurality of training data are obtained from the base data of a sample and a neural network model is trained on samples carrying the plurality of training data, so that the contour recognition model gains the ability to take one sample as input and output a plurality of contour recognition data corresponding respectively to the plurality of training data. Based on this contour recognition model, the detection method inputs the feature information of an image to be detected into the model to obtain a plurality of contour prediction data that reflect, over a plurality of size ranges, the contour of one or more target objects in the image; superimposing the plurality of contour prediction data makes the contour of the one or more target objects clearer and more accurate, so that the contour data of the target object are finally obtained efficiently and accurately.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart illustrating a method for training a contour recognition model according to an embodiment of the present application. As shown in fig. 1, the training method includes the following steps:
step 101: Base data for a plurality of samples is provided, the base data corresponding to the contour of the recognition object in each sample.
Specifically, the position, contour and class of the recognition object in the sample are known and serve as references for training the contour recognition model. The base data in the sample corresponds to the contour of the recognition object in the sample. The sample may be a picture of cloth in which the positions and contours of cloth defects are known. The contour of the recognition object includes the contour of a known cloth defect, and the contour recognition model includes a cloth defect contour recognition model.
Step 102: a plurality of training data is acquired based on the base data for each sample.
Specifically, the basic data corresponds to the contour of the recognition object in the sample, and the training data corresponds to a training contour obtained based on the contour of the recognition object in the sample. The sample has basic data and training data.
Step 103: the neural network model is trained based on the sample with the plurality of training data, so that the neural network model can output a plurality of contour recognition data respectively corresponding to the plurality of training data based on the sample.
Specifically, a sample carrying a plurality of training data is input to the neural network model, and the neural network model is trained so that it gains the ability to output a plurality of contour recognition data corresponding respectively to the plurality of training data.
In one embodiment, the contour recognition model may be a neural network model such as MobileNetV2, Fast-RCNN, R-FCN, YOLO (You Only Look Once) or SSD (Single Shot MultiBox Detector). It should be understood that the contour recognition model may also be any other neural network model capable of outputting a plurality of contour recognition data corresponding respectively to a plurality of training data.
In the embodiment of the application, a plurality of training data are obtained from the base data of a sample and the neural network model is trained on samples carrying the plurality of training data, so that, given a sample, the neural network model can output a plurality of contour recognition data corresponding respectively to the plurality of training data. Through this training method, the contour recognition model gains the ability to take one sample as input and output a plurality of contour recognition data, one for each training datum. Specifically, the base data of the sample corresponds to the contour of the recognition object in the sample and each training datum corresponds to a training contour derived from that contour; since one sample carries a plurality of training data, training the neural network model on such samples enables it to output a plurality of contour recognition data corresponding respectively to the training data.
Fig. 2 is a schematic flowchart illustrating a process of training a neural network model based on a sample with a plurality of training data in a training method for a contour recognition model according to an embodiment of the present application. As shown in fig. 2, training the neural network model based on the sample with a plurality of training data specifically includes the following steps:
step 2031: and extracting characteristic information of the sample.
In particular, the feature information of the sample is used to characterize the sample. The sample is input into a feature extraction model to obtain its feature information. The feature extraction model may be a convolutional neural network model such as MobileNetV2, SqueezeNet or ShuffleNet; the pixels of the sample are convolved with a plurality of different convolution kernels to obtain a plurality of feature layers corresponding to the sample. Basic feature layers are then screened from the plurality of feature layers and superimposed to obtain the feature information of the sample.
In one embodiment, when the feature extraction model is MobileNetV2, the 3rd, 7th, 14th and 19th of its 19 feature layers may be taken as basic feature layers; the feature matrices of these layers have dimensions 1/2, 1/4, 1/8 and 1/16 of the original image, and the basic feature layers are superimposed to obtain the feature information of the sample.
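As an illustration only and not the patent's reference implementation, the following Python sketch (assuming PyTorch/torchvision and the layer indices mentioned above) shows how the per-layer outputs of a MobileNetV2-style backbone could be collected and the 3rd, 7th, 14th and 19th layers kept as basic feature layers.

```python
import torch
import torchvision

# Sketch only: a MobileNetV2 backbone whose per-layer outputs are collected so the
# 3rd, 7th, 14th and 19th feature layers (1-based, per the embodiment above) can be kept.
backbone = torchvision.models.mobilenet_v2(weights=None).features
BASIC_LAYER_IDS = {3, 7, 14, 19}

def extract_basic_features(sample: torch.Tensor) -> list:
    feats, x = [], sample
    for idx, layer in enumerate(backbone, start=1):
        x = layer(x)
        if idx in BASIC_LAYER_IDS:
            feats.append(x)   # basic feature layers to be superimposed later
    return feats

basic_features = extract_basic_features(torch.randn(1, 3, 256, 256))  # a 256x256 cloth patch
```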
Step 2032: the feature information is input to the neural network model to acquire a plurality of contour recognition data corresponding to the plurality of training data, respectively.
Specifically, feature information of one sample is input to a neural network model, and the neural network model outputs a plurality of contour recognition data corresponding to a plurality of training data.
Step 2033: a loss result is obtained based on the plurality of contour recognition data and the plurality of training data.
Specifically, since the plurality of contour recognition data correspond one-to-one with the plurality of training data, the loss between each contour recognition datum and its corresponding training datum is calculated with the training datum as the reference, and the loss result is obtained.
Step 2034: based on the loss result, parameters of the neural network model are adjusted.
Specifically, the training data of the sample input to the neural network model serve as reference values and the contour recognition data corresponding to the training data are the output values. If the output values differ from the reference values, the parameters of the neural network model are adjusted, and the adjustment stops once the loss result falls within a preset range. The loss result being within the preset range indicates that the difference between the plurality of contour recognition data and the corresponding plurality of training data is within the preset range.
In the embodiment of the application, the feature information of the sample is extracted and input into the neural network model to obtain a plurality of contour recognition data corresponding to the plurality of training data; the loss between each contour recognition datum and its corresponding training datum is calculated with the training datum as the reference to obtain the loss result; the parameters of the neural network model are adjusted based on the loss result, and the adjustment stops once the loss result is within a preset range. The training data thus act as reference values, and training drives the difference between the output values of the neural network model and the reference values into the preset range.
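Steps 2031-2034 can be summarized by the following minimal PyTorch-style training step; it is a sketch under the assumption that each sample carries N training contour masks and that `feature_extractor` and `contour_head` (hypothetical names) together form the neural network model.

```python
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()   # assumption: per-pixel loss between outputs and float 0/1 masks

def train_step(feature_extractor, contour_head, optimizer, sample, training_masks):
    # training_masks: (batch, N, H, W) float masks of the N training contours of the sample
    features = feature_extractor(sample)                          # step 2031: feature information
    outputs = contour_head(features)                              # step 2032: N contour recognition maps
    loss = sum(criterion(outputs[:, i], training_masks[:, i])     # step 2033: loss per training datum
               for i in range(outputs.shape[1]))
    optimizer.zero_grad()
    loss.backward()                                               # step 2034: adjust model parameters
    optimizer.step()
    return loss.item()
```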
In one embodiment, the contour range corresponding to each of the plurality of training data is less than or equal to the contour range of the recognition object corresponding to the base data. Because the contour range of each training datum is no larger than that of the corresponding recognition object, the plurality of contour recognition data output by the contour recognition model reflect the recognition object over different size ranges, and the parameters of the contour recognition model are adjusted from those different ranges, making its recognition more accurate.
In one embodiment, obtaining the plurality of training data from the base data of a sample comprises: taking the geometric center of the contour of the recognition object corresponding to the base data as the scaling center, and scaling that contour by M different scaling factors to obtain M training data.
The M training data obtained by such shrinking reflect the contour of the recognition object in the sample at different proportions, so the reference values are defined at each proportion and the output contour recognition data reflect the recognition object at different proportions. The contour recognition model thereby gains the ability to recognize the recognition object at different scales, and the recognition becomes more comprehensive, accurate and fast.
In a further embodiment, M may be any integer from 2 to 8, and the scaling factor may range from 0.5 to 1.
In a further embodiment, M is 6, and the 6 different scaling factors take values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0. And taking the geometric center of the outline of the recognition object corresponding to the basic data as a scaling center, and reducing the outline of the recognition object corresponding to the basic data according to scaling multiples of 0.5, 0.6, 0.7, 0.8, 0.9 and 1.0 to obtain 6 training data. The contour of the recognition object in the sample is reflected from 6 different scales, and the contour recognition model is trained by the sample with 6 training data, so that the contour recognition model has the capability of recognizing the recognition object from 6 different scales.
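A possible way to generate the M training data from one annotated contour, sketched in Python under the assumption that the contour is stored as a polygon (an array of x, y points); this is an illustration, not the patent's implementation.

```python
import numpy as np

SCALES = (0.5, 0.6, 0.7, 0.8, 0.9, 1.0)   # the M = 6 scaling factors of this embodiment

def make_training_contours(contour_xy: np.ndarray, scales=SCALES):
    """Shrink a defect contour (K x 2 array of points) about its geometric center."""
    center = contour_xy.mean(axis=0)                      # geometric center used as scaling center
    return [center + s * (contour_xy - center) for s in scales]
```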
Fig. 3 is a schematic flowchart illustrating a process of training a neural network model based on a sample with a plurality of training data in a training method for a contour recognition model according to an embodiment of the present application. As shown in fig. 3, inputting the feature information into the neural network model to obtain a plurality of contour recognition data corresponding to the plurality of training data respectively includes the following steps:
step 30321: convolving the feature information with a convolution layer whose kernel size is 1 × 1 and whose number of output channels is N to obtain the plurality of contour recognition data, where N is the number of training data.
Specifically, because the convolution kernel size is 1 × 1, the recognition object in the sample is handled at the pixel scale, so its contour is acquired more accurately; the N output channels correspond to the N training data, so the recognition object is recognized at different scales, making the result more comprehensive and accurate, while recognizing contour data at the different scales in a single layer keeps the computation efficient and saves time.
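Assuming PyTorch and a fused feature map with `in_channels` channels, the contour recognition layer of step 30321 can be sketched as a single 1 × 1 convolution with N output channels, one channel per training datum:

```python
import torch.nn as nn

def make_contour_head(in_channels: int, n_training_data: int = 6) -> nn.Module:
    # 1x1 kernel: per-pixel prediction; one output channel per training contour scale.
    return nn.Conv2d(in_channels, n_training_data, kernel_size=1)
```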
Fig. 4 is a schematic flowchart illustrating a method for detecting a target object according to an embodiment of the present application. As shown in fig. 4, the detection method includes the following steps:
step 401: and extracting the characteristic information of the image to be detected, wherein the image to be detected comprises one or more target objects.
Specifically, the number, positions and contours of the target objects in the image to be detected are unknown and need to be detected by this detection method. The target object includes a cloth flaw; the image to be detected may be a cloth picture, i.e. an image of the cloth to be inspected. The feature extraction model used to extract the feature information of the image to be detected may be a convolutional neural network model such as MobileNetV2, SqueezeNet or ShuffleNet.
In one embodiment, the feature extraction model for extracting the feature information of the image to be detected is the same as the feature extraction model for extracting the feature information of the sample in the training method for training the contour recognition model.
Step 402: inputting the feature information into the contour recognition model trained by any of the above methods to obtain a plurality of contour prediction data of the image to be detected.
Specifically, the contour recognition model trained by the above method takes the feature information of one sample and outputs a plurality of contour prediction data corresponding respectively to the plurality of training data derived from the contour of the recognition object in that sample, so the plurality of contour prediction data represent predicted contours of the target object over different size ranges. Given the feature information of one image to be detected, the model therefore outputs a plurality of contour prediction data that predict the contour of the target object in the image over a plurality of size ranges.
In one embodiment, the contour recognition model may be a neural network model such as MobileNetV2, Fast-RCNN, R-FCN, YOLO (You Only Look Once) or SSD (Single Shot MultiBox Detector) trained by the above method.
Step 403: and performing feature superposition on the plurality of contour prediction data to obtain prediction superposition data.
Specifically, for a single target object in the image to be detected, or for several target objects overlapping one another, contour prediction data obtained from only one size range (as in the related art) may fail to predict the object or to separate the overlapping objects clearly. Here the plurality of contour prediction data predict the contour of the target object over a plurality of size ranges, so a target object, or several objects overlapping one another, that cannot be predicted in one size range can still be predicted in another. Superimposing the plurality of contour prediction data, which represent the contours of target objects predicted over the plurality of size ranges, yields prediction superposition data that clearly reflect the detection result of the target object or of the overlapping target objects.
Step 404: based on the predicted overlay data, contour data of the target object is obtained.
The predicted superimposition data can clearly reflect the detection result of the target object or a plurality of target objects superimposed together, obtain contour data of the target object based on the predicted superimposition data, determine the position and contour of the target object, and output the detection result. The outlines of the overlapped target objects can be clearly separated, and the outline of each target object can be accurately predicted.
In the embodiment of the application, the contour recognition model obtained by the above training method yields a plurality of contour prediction data for the image to be detected, and superimposing them makes the contours of one or more target objects more accurate and clear, so the contour data of the target object are finally obtained efficiently and precisely. Since the contour recognition model produces a plurality of contour prediction data that reflect, over a plurality of size ranges, the contour of one or more target objects in the image to be detected, a target object (or several overlapping target objects) that cannot be predicted in one size range can still be predicted in another. Superimposing the plurality of contour prediction data, which represent the target-object contours predicted over the plurality of size ranges, yields prediction superposition data that clearly reflect the detection result of the target object or of the overlapping target objects, and the contour data of the target object are finally obtained from this superposition.
In an embodiment, the area within the contour range corresponding to a contour prediction datum is a target area and the area outside the contour range is a background area. Fig. 5 is a schematic flowchart of obtaining prediction superposition data by performing feature superposition on a plurality of contour prediction data in a target object detection method according to an embodiment of the present application. As shown in fig. 5, performing feature superposition on the plurality of contour prediction data to obtain the prediction superposition data includes the following steps:
step 5031: when the same pixel is located in the target area in all of the plurality of contour prediction data, that pixel in the prediction superposition data belongs to the target area of the first target object.
Specifically, since the plurality of contour prediction data reflect predictions of the target object's contour over a plurality of size ranges, a pixel located in the target area in all of the contour prediction data is regarded as belonging to the target object at every scale, and therefore that pixel in the prediction superposition data belongs to the target area of the first target object.
Step 5032: when the same pixel is located in the background area in all of the plurality of contour prediction data, that pixel in the prediction superposition data is background.
A pixel located in the background area in all of the contour prediction data is regarded as background at every scale, so that pixel in the prediction superposition data is treated as background.
Step 5033: when the same pixel is located in the target area in some of the contour prediction data and in the background area in others, that pixel in the prediction superposition data belongs to the target area of the second target object.
When the same pixel is in the target area in some contour prediction data and in the background area in others, the pixel is predicted to be part of a target object at certain scales and part of the background at other scales; such a pixel in the prediction superposition data is assigned to the target area of the second target object, so that overlapping target objects are separated across the different scale ranges and the contour of each target object is obtained clearly.
It should be understood that the second target object is a target object different from the first target object. A pixel located in the target area in all of the contour prediction data belongs to the target object at every scale; if every pixel of a predicted region behaves this way, the region corresponds to a single target object rather than to several overlapping ones. A pixel located in the background area in all of the contour prediction data belongs to the background at every scale and to no target object. A pixel that is treated as background at some scales and as target at others arises where the target areas of two target objects overlap: some pixels in the target area of one object are mistaken for background at certain scales, and such pixels are assigned to a second target object different from the first, so that the second target object is separated from the first.
In the embodiment of the application, the same pixels on the plurality of contour prediction data are judged to be positioned in the target area or the background area or both the target area and the background area, so that the target objects can be detected or the plurality of target objects overlapped together can be clearly segmented.
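A sketch of the per-pixel rule of steps 5031-5033, assuming the N contour prediction data have already been thresholded into boolean target/background masks (a NumPy illustration, not the patent's implementation):

```python
import numpy as np

BACKGROUND, FIRST_OBJECT, SECOND_OBJECT = 0, 1, 2

def superpose(pred_masks: np.ndarray) -> np.ndarray:
    """pred_masks: (N, H, W) bool, True where a pixel lies in the target area of one prediction."""
    in_all = pred_masks.all(axis=0)          # target at every scale      -> first target object (5031)
    in_none = ~pred_masks.any(axis=0)        # background at every scale  -> background (5032)
    mixed = ~in_all & ~in_none               # target only at some scales -> second target object (5033)
    overlay = np.full(pred_masks.shape[1:], BACKGROUND, dtype=np.uint8)
    overlay[in_all] = FIRST_OBJECT
    overlay[mixed] = SECOND_OBJECT
    return overlay
```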
In one embodiment, obtaining the contour data of the target object based on the prediction superposition data comprises: taking the prediction superposition data as the contour data of the target object. In this embodiment, a single target object can be detected, or a plurality of overlapping target objects can be clearly segmented. In some embodiments, this is particularly effective for recognizing small target objects, including flaws in the size range of 1 pixel to 256 × 256 pixels.
Fig. 6 is a schematic flowchart illustrating a method for detecting a target object according to an embodiment of the present application. As shown in fig. 6, the detection method further includes the following steps:
step 605: convolving the feature information of the image to be detected with a convolution layer whose kernel size is 1 × 1 and whose number of output channels equals the number of preset target categories, to obtain a plurality of preliminary target classification data of the image to be detected, the preliminary target classification data comprising class probability values indicating that a pixel of the image to be detected belongs to one of the preset target categories;
Specifically, because the convolution kernel is 1 × 1 and the number of output channels equals the number of preset target categories, convolving with this layer outputs the plurality of preliminary target classification data of the image to be detected, giving for each pixel the probability that it belongs to each target category. The target categories include target flaw categories. For example, if the convolution kernel is 1 × 1 and the number of output channels is 3, the target objects are divided into three classes; when the image to be detected is a cloth image and the target objects are defects on it, the preset target categories may be cut line, damage and spot, and the convolution determines, for each pixel of the cloth image, the probability of the defect category it belongs to.
Step 606: taking the maximum of the plurality of class probability values corresponding to each pixel in the plurality of preliminary target classification data, and determining the target category to which each pixel of the image to be detected belongs.
Specifically, the above convolution gives the probability that the same pixel belongs to each of the different preset target categories; taking the maximum probability value assigns the pixel to the target category it most probably belongs to.
Step 607: and obtaining a target classification result corresponding to the predicted superposition data based on the predicted superposition data.
Specifically, the prediction superposition data yield the contour of the target object more accurately and clearly but cannot determine its category. The category of each pixel is obtained through the above steps, the target category of each pixel on the contour predicted in the prediction superposition data is then determined, and the target classification result corresponding to the prediction superposition data is thereby determined.
In one embodiment, obtaining the target classification result corresponding to the prediction superposition data may mean directly reading the category of the pixels of the prediction superposition data that correspond to the predicted contour, or obtaining the category of every pixel in the prediction superposition data and then determining the category of the pixels corresponding to the predicted contour.
In the embodiment of the application, a plurality of preliminary target classification data of the image to be detected are obtained by convolution, the target category of each pixel is obtained by taking the maximum probability value, and, once the target classification result corresponding to the prediction superposition data is determined, the position, contour and type of the target object are obtained accurately and efficiently.
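Steps 605 and 606 can be sketched as follows (PyTorch assumed; the three example categories follow the description above):

```python
import torch
import torch.nn as nn

class ClassHead(nn.Module):
    """1x1 convolution with one output channel per preset target category, then per-pixel argmax."""
    def __init__(self, in_channels: int, num_classes: int = 3):   # e.g. cut line, damage, spot
        super().__init__()
        self.conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        probs = self.conv(features).softmax(dim=1)   # step 605: class probability values per pixel
        return probs.argmax(dim=1)                   # step 606: target category of each pixel
```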
Fig. 7 is a schematic flowchart illustrating a process of obtaining contour data of a target object based on predicted overlay data in a target object detection method according to an embodiment of the present application. As shown in fig. 7, obtaining contour data of the target object based on the predicted overlay data includes the steps of:
step 7041: determining the category contour prediction data of the image to be detected based on the target category to which each pixel of the image to be detected belongs.
Specifically, the target category of each pixel of the image to be detected is obtained; whichever category a pixel belongs to, it lies within the contour of some target object, and the category contour prediction data of the image to be detected are determined from all pixels of the image whose target category has been determined.
Step 7042: determining contour data of the target object based on the predicted overlay data and the category contour prediction data.
Although the prediction superposition data, obtained by superimposing the contour prediction data over a plurality of scale ranges, give an accurate contour of the target object, the category to which that contour belongs cannot be obtained from them alone. Combining the category contour prediction data with the prediction superposition data gives the position, contour and category of the target object.
In the embodiment of the application, the category contour prediction data of the image to be detected are determined from the target category of each pixel in the image, and the position, contour and category of the target object are obtained accurately by combining the prediction superposition data with the category contour prediction data.
Fig. 8 is a schematic flowchart illustrating a process of determining contour data of a target object based on predicted overlay data and category contour prediction data in a target object detection method according to an embodiment of the present application. As shown in fig. 8, determining the contour data of the target object based on the predicted overlay data and the category contour prediction data specifically includes:
step 80421: when the contour line corresponding to the category contour prediction data divides the contour region corresponding to the prediction superposition data, the contour region is divided into two contour sub-regions respectively corresponding to the two target objects based on the contour line.
Specifically, although the prediction superposition data can segment the contours of overlapping target objects, they cannot determine the category of each object. When a contour region of the prediction superposition data is divided by a contour line of the category contour prediction data, the region contains two categories, and two contour sub-regions corresponding respectively to two target objects are obtained.
In the embodiment of the application, the contour region corresponding to the prediction superposition data is divided by the category contour prediction data, so a more accurate category is obtained on top of the already obtained contour.
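One way to realize step 80421 is sketched below, under the assumption that SciPy is available and that the per-pixel category map (the category contour prediction data) and one contour region of the prediction superposition data are given as arrays; pixels of the region are grouped by category and by connected component, so a region crossed by a category contour line splits into two sub-regions.

```python
import numpy as np
from scipy import ndimage

def split_region_by_category(region_mask: np.ndarray, category_map: np.ndarray) -> np.ndarray:
    """region_mask: (H, W) bool, one contour region of the prediction superposition data.
    category_map: (H, W) int, target category of every pixel.
    Returns an int label image: one label per contour sub-region."""
    labels = np.zeros(region_mask.shape, dtype=np.int32)
    offset = 0
    for cls in np.unique(category_map[region_mask]):
        part = region_mask & (category_map == cls)     # region pixels of one category
        comp, n = ndimage.label(part)                  # connected sub-regions of that category
        labels[part] = comp[part] + offset
        offset += n
    return labels
```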
Fig. 9 is a schematic flow chart illustrating a process of extracting feature information of an image to be detected in a target object detection method according to an embodiment of the present application. As shown in fig. 9, extracting the feature information of the image to be detected includes the following steps:
step 9011: carrying out feature extraction processing on an image to be detected to obtain a plurality of output feature information of the image to be detected;
specifically, an image to be detected is input into a feature extraction model to obtain feature information of a sample. The feature extraction model can be a convolution neural network model such as MobileNetv2, SqueezeNet, ShuffleNet and the like, and the convolution processing is carried out on the image to be detected through a plurality of different convolution kernels to output a plurality of output feature information; for example, the feature extraction model is MobileNetv2, and 19 pieces of output feature information can be obtained.
Step 9012: screening a plurality of basic characteristic information from the plurality of output characteristic information; the method comprises the steps that a plurality of pieces of basic characteristic information which are sequenced according to a preset sequence are subjected to up-sampling to enable the number of channels of the plurality of pieces of basic characteristic information to be the same;
specifically, the 3 rd, 7 th, 14 th, and 19 th feature information may be selected from the 19 pieces of output feature information as basic feature information, and the dimensions of the feature matrix of each selected layer may be 1/2, 1/4, 1/8, and 1/16 of the original image, respectively. In order to facilitate feature fusion, the number of channels of a plurality of basic feature information is made the same by utilizing upsampling.
Step 9013: and performing characteristic fusion on the plurality of basic characteristic information after the upsampling to obtain the characteristic information of the image to be detected.
Specifically, feature fusion is performed on the plurality of pieces of basic feature information after upsampling through feature connection or feature superposition, and features of the image to be detected are deepened.
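A sketch of steps 9012-9013 under the assumption that "the same number of channels" is achieved with 1 × 1 convolutions and that the basic feature maps are resized to a common spatial size before fusion (PyTorch illustration; the channel count of 64 is arbitrary):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Align the screened basic feature maps and fuse them by channel concatenation."""
    def __init__(self, in_channels_list, out_channels: int = 64):
        super().__init__()
        self.align = nn.ModuleList([nn.Conv2d(c, out_channels, kernel_size=1)
                                    for c in in_channels_list])

    def forward(self, basic_feats):
        size = basic_feats[0].shape[-2:]                 # upsample everything to the largest map
        aligned = [F.interpolate(conv(f), size=size, mode="bilinear", align_corners=False)
                   for conv, f in zip(self.align, basic_feats)]
        return torch.cat(aligned, dim=1)                 # fused feature information of the image
```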
Fig. 10 is a schematic flowchart illustrating a process of acquiring an image to be detected in a target object detection method according to an embodiment of the present application. As shown in fig. 10, acquiring the image to be detected includes the following steps:
step 1001: acquiring an original image;
it should be understood that the original image is obtained as long as the original image can be obtained, and the obtaining manner of the original image in the embodiment of the present application is not limited.
Step 1002: and carrying out sliding window processing on the original image to obtain an image set to be detected, wherein the image set to be detected comprises a plurality of images to be detected.
In the embodiment of the application, a plurality of images to be detected are obtained through sliding-window processing; dividing the original image into a plurality of images to be detected that are processed simultaneously reduces the amount of computation of each pass and, through the parallel processing, increases the overall speed.
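A sketch of the sliding-window step; the window size and stride below are illustrative values, not taken from the patent.

```python
import numpy as np

def sliding_windows(original: np.ndarray, window: int = 256, stride: int = 256):
    """Cut the original cloth image (H, W, C) into a set of images to be detected."""
    h, w = original.shape[:2]
    crops = []
    for top in range(0, max(h - window, 0) + 1, stride):
        for left in range(0, max(w - window, 0) + 1, stride):
            crops.append(original[top:top + window, left:left + window])
    return crops
```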
In one embodiment, the target object is a flaw on the cloth image.
Fig. 11a is a schematic flowchart illustrating a method for detecting defects on a fabric image according to an embodiment of the present disclosure. As shown in fig. 11a, the method for detecting defects on a fabric image includes: acquiring an original cloth image (step 11001 in fig. 11a) and performing sliding-window processing on it to obtain a set of cloth images to be detected, the set comprising a plurality of cloth images to be detected (step 11002 in fig. 11a). Using MobileNetV2 as the feature extraction model, 19 pieces of cloth output feature information are obtained (step 11011 in fig. 11a). The 3rd, 7th, 14th and 19th pieces of cloth output feature information are screened out as cloth basic feature information, and up-sampling is used to give the cloth basic feature information the same number of channels (step 11012 in fig. 11a). The up-sampled cloth basic feature information is fused by feature connection or feature superposition to obtain the feature information of the cloth image to be detected (step 11013 in fig. 11a). The feature information of the cloth image to be detected is input into the contour recognition model trained by any of the above methods; a convolution layer in the contour recognition model with kernel size 1 × 1 and N output channels convolves the feature information to obtain N defect contour prediction data (step 1102 in fig. 11a). When the same pixel is located in a defect area in all N defect contour prediction data, the pixel is regarded as belonging to the defect at every scale, and that pixel in the prediction superposition data belongs to the contour area of the first defect. When the same pixel is located in the qualified-cloth area in all N defect contour prediction data, the contour prediction model regards the pixel as qualified cloth at every scale, and that pixel in the prediction superposition data is qualified cloth (step 11032 in fig. 11a). When the same pixel is located in a defect area in some of the defect contour prediction data and in the qualified-cloth area in others, the pixel is predicted as a defect at certain scales and as qualified cloth at other scales; that pixel in the prediction superposition data belongs to the contour area of the second defect, and overlapping defects are thus separated across the different scale ranges (step 11033 in fig. 11a). The first defect contour and the second defect contour are taken as the defect data (step 1104 in fig. 11a). The first defect and the second defect refer to two defects whose contours can be separated; two defects with separated contours are thereby detected on the cloth image. In the embodiment of the application, small flaws in the size range of 1 pixel to 256 × 256 pixels on the cloth image can be detected, and a plurality of overlapping flaws can be clearly segmented.
Fig. 11b is a schematic flowchart illustrating a method for detecting defects on a fabric image according to an embodiment of the present disclosure. Although the prediction superposition data give the defect contour more accurately and clearly, they cannot determine the defect category. As shown in fig. 11b, the method for detecting defects on a fabric image further includes: convolving the feature information of the image to be detected with a convolution layer whose kernel size is 1 × 1 and whose number of output channels equals the number of preset defect categories, to obtain a plurality of preliminary defect classification data of the image to be detected, the preliminary defect classification data comprising class probability values indicating that a pixel of the image belongs to one of the preset defect categories (step 1105); taking the maximum of the class probability values of each pixel to determine the defect category to which the pixel belongs (step 1106); and, having obtained the defect category of every pixel in the image to be detected, determining the category contour prediction data of the image from all pixels whose defect category has been determined, whichever category each pixel belongs to (step 11041). When a contour line of the prediction superposition data is divided by the category contour prediction data, the contour belongs to two defect categories, and two contour sub-regions corresponding respectively to two defects are obtained. The contour region corresponding to the prediction superposition data is divided by the category contour prediction data, and a more accurate defect category is obtained on top of the obtained defect contour (step 11042).
In some embodiments, after the defect information is obtained, the defect data need further processing. The fabric defect information includes at least one of defect type, defect size and defect position, where the defect types include several kinds, such as missed stitch, broken hole, oil stain, yarn hooking, horizontal bar, dyeing, skewed line, wind tunnel or dropped edge, which are not all listed here. A defect handling manner is then determined based at least on the defect type. In some embodiments, the defect handling manner may also be determined according to the defect type and/or a preset condition, where the preset condition includes at least one of the intended use of the fabric and a property of the fabric (such as its price or cost), and the defect handling manner includes at least one of material breaking, repair and cleaning.
For example, when the defect type is oil stain or dyeing, manual or automatic cleaning may be chosen; when the defect is small enough not to affect subsequent processing of the cloth, it may be left untreated; and when the defect is yarn hooking or a broken hole, material breaking and repair may be chosen. Determining the handling manner according to the preset condition can be understood, in a specific example, as follows: when the cloth is used for making clothes, the defect may be cut off and the cut-off part used for pieces that need little material, such as neckline or sleeve trims or decorations; if the cloth is used for making bed sheets, whether the defect may be kept can be judged from its size. In summary, the defect handling manner can be determined from the defect information and the preset condition in the specific application environment and for the specific purpose.
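The mapping from defect information and preset conditions to a handling manner can be written as a simple rule table; the rules and the size threshold below are illustrative assumptions only, not values from the patent.

```python
def decide_handling(defect_type: str, defect_size: float, fabric_use: str) -> str:
    """Return one of 'cleaning', 'repair', 'material breaking', 'untreated' (illustrative rules)."""
    if defect_type in ("oil stain", "dyeing"):
        return "cleaning"
    if defect_size < 4.0:                       # assumed size (mm) below which the defect is tolerable
        return "untreated"
    if defect_type in ("yarn hooking", "broken hole"):
        return "material breaking" if fabric_use == "clothes" else "repair"
    return "repair"
```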
Further, in some embodiments, when a defect location has been detected and a material break is determined, the following steps may be followed.
Step 1: obtaining detection data of the cloth, the detection data including defect position information. In the embodiment of the application, the defect information of the cloth can be detected by the neural network and includes defect position information. Specifically, the transverse direction of the fabric may be taken as the abscissa and the longitudinal direction as the ordinate, from which the defect position information is obtained. Optionally, the defect information may further include defect size information and defect type information.
Step 2: and calculating the distance between two adjacent defect reference lines according to the defect position information. In the embodiment of the present application, a defect reference line parallel to the abscissa may be made at the position of each defect, and it can be understood that the defect reference lines of some defects are coincident and parallel to each other. And calculating the distance between two adjacent defect reference lines according to the defect position information.
And step 3: and determining a material breaking area according to a preset condition, wherein the preset condition is that an area formed by the combination of the continuous flaws of which the distance is smaller than a preset threshold value is used as the material breaking area. In the embodiment of the present application, for example, the cloth includes a plurality of defects in a front-back order, the plurality of defects correspond to a plurality of defect reference lines, and distances between two adjacent defect reference lines, such as distances between defects 1 and 2, between defects 2 and 3, between defects 3 and 4, between defects 4 and 5, and between defects 5 and 6, are calculated. And comparing the calculated distance with a preset threshold value, judging whether the calculated distance is smaller than the preset threshold value, if the distance between the flaws 1 and 2, the distance between the flaws 2 and 3, and the distance between the flaws 4 and 5 are smaller than the preset threshold value, and the distance between the flaws 3 and 4, and the distance between the flaws 5 and 6 are greater than or equal to the preset threshold value, taking the area formed by combining the continuous flaws 123 as a material breaking area 1, taking the area formed by combining the continuous flaws 45 as a material breaking area 2, and taking the remaining flaws 6 as isolated flaws. It can be understood that the defect reference lines of some defects are overlapped to make one defect reference line contain a plurality of defects, and if the distances between the defect reference line and the adjacent defect reference lines before and after the defect reference line are both greater than or equal to the preset threshold, the plurality of defects on the defect reference line are all regarded as isolated defects. In this application embodiment, preset threshold value can be set for according to the demand that is used as the materials of production clothes, if the yardstick of the materials that is used for producing the clothes is shorter, then preset threshold value can set for a little, can make full use of the cloth between two adjacent flaws like this, if the yardstick of the materials that is used for producing the clothes is longer, then preset threshold value can set for a little, can save disconnected material time like this, improves production efficiency.
And 4, step 4: and respectively obtaining two edge reference lines of each material breaking area and each isolated flaw outside the material breaking area. In this embodiment, the two edge reference lines of the material breakage region may be obtained by moving up the first defect reference line and the last defect reference line in the material breakage region, so that the defect in the material breakage region is located between the two edge reference lines. The two edge reference lines of the isolated defect can be obtained by respectively moving up and down the defect reference line where the isolated defect is located, so that the isolated defect is located between the two edge reference lines.
And 5: and determining the material breaking position information of the cloth according to the edge reference line. In the embodiment of the application, the edge reference line is a material breaking position, and the material breaking position information of the cloth is determined according to the edge reference line.
According to the example provided in the embodiment of the present application, when the cloth is cut according to the material breaking information generated in the above steps, the number of cuts is far smaller than in the traditional approach of cutting out each defect separately, so the scheme of this embodiment saves processing operations and time in the cutting step. Moreover, if two adjacent defects are very close together and the regions containing them are cut out separately, the strip of cloth remaining between the two cuts is too narrow to be used as material for subsequent garment production and ends up as waste, which amounts to wasted effort.
The present application further provides a cloth defect detection data processing system. A detection data acquisition module is configured to acquire defect information of the cloth according to the cloth defect detection method described above, the defect information including at least one of the defect type, the defect size or the defect position; a defect handling determination module is configured to determine a defect handling manner based on at least the defect information, the defect handling manner including at least one of material breaking, repairing or cleaning. The detection data acquisition module may acquire data directly from the detection system, or acquire the detection data in other wireless or wired ways. The defect handling determination module may decide which manner to use according to any combination of the defect information and the preset conditions, or according to parameters set by the user and the actual usage scenario.
Fig. 12 is a schematic structural diagram of a training apparatus for a contour recognition model according to an embodiment of the present application. As shown in fig. 12, the apparatus 1200 includes: an input module 1201 configured to provide base data of a plurality of samples, where the base data corresponds to the contour of the object to be recognized in each sample; a labeling module 1202 configured to obtain a plurality of training label data based on the base data of each sample; and a training module 1203 configured to train a neural network model on the samples with the plurality of training label data, so that the neural network model can output, for a sample, a plurality of predicted contour data corresponding respectively to the plurality of training label data.
In one embodiment, the training module 1203 further includes: a feature extraction submodule 12301 configured to extract feature information of the sample; a recognition submodule 12302 configured to input the feature information into the neural network model to acquire a plurality of contour recognition data corresponding respectively to the plurality of training data; a loss calculation submodule 12303 configured to obtain a loss result based on the plurality of contour recognition data and the plurality of training data; and an adjustment submodule 12304 configured to adjust the parameters of the neural network model based on the loss result.
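The interaction of these submodules corresponds to an ordinary supervised training iteration. The sketch below, written with PyTorch as an assumed framework, shows one possible arrangement; the loss choice (binary cross-entropy over the N contour maps) and the variable names are assumptions made for illustration, not prescriptions of this application.

```python
import torch
import torch.nn as nn

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               sample: torch.Tensor, training_labels: torch.Tensor) -> float:
    """One training iteration: predict N contour maps for the sample, compare
    them with the N training label maps, and adjust the model parameters.
    `training_labels` has shape (B, N, H, W), one channel per training label."""
    optimizer.zero_grad()
    contour_logits = model(sample)                       # (B, N, H, W) contour recognition data
    loss = nn.functional.binary_cross_entropy_with_logits(
        contour_logits, training_labels)                 # loss over all N maps
    loss.backward()                                      # gradients for parameter adjustment
    optimizer.step()
    return loss.item()
```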
In one embodiment, the range of the contour corresponding to each of the plurality of training data is less than or equal to the range of the contour of the recognition object corresponding to the basic data.
In one embodiment, the labeling module 1202 is further configured to: take the geometric center of the contour of the recognition object corresponding to the base data as the scaling center, and scale that contour according to M different scaling factors to obtain M training data.
In a further embodiment, M may be any integer from 2 to 8, and the scaling factor may range from 0.5 to 1.
In a further embodiment, M is 6, and the 6 different scaling factors take values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0.
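Generating the M training label maps from a single annotated contour can be done with a simple geometric scaling about the contour's centroid. The sketch below assumes the contour is given as a (K, 2) array of pixel coordinates and uses OpenCV only to rasterize each scaled polygon into a binary mask; the function name and default factors (taken from the embodiment above) are illustrative.

```python
import numpy as np
import cv2

def scaled_label_masks(contour: np.ndarray, image_shape: tuple,
                       factors=(0.5, 0.6, 0.7, 0.8, 0.9, 1.0)) -> np.ndarray:
    """Scale a contour about its geometric center by each factor and
    rasterize every scaled contour into a binary mask of shape image_shape."""
    center = contour.mean(axis=0, keepdims=True)          # geometric center, shape (1, 2)
    masks = []
    for f in factors:
        scaled = (contour - center) * f + center          # scale about the center
        mask = np.zeros(image_shape, dtype=np.uint8)
        cv2.fillPoly(mask, [scaled.astype(np.int32)], 1)  # filled contour region
        masks.append(mask)
    return np.stack(masks)                                # (M, H, W) training label maps
```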
In one embodiment, the recognition submodule 12302 is further configured to convolve the feature information with a convolution layer whose kernel size is 1 × 1 and whose number of output channels is N, so as to obtain the plurality of contour recognition data, where N is the number of the training data.
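This contour head can be written as a single 1 × 1 convolution; the sketch below assumes PyTorch, and the incoming channel count is an illustrative value.

```python
import torch.nn as nn

N = 6                   # number of training label maps (e.g. six scaling factors)
feature_channels = 256  # illustrative channel count of the fused feature map

# 1x1 convolution producing one contour recognition map per training label map.
contour_head = nn.Conv2d(feature_channels, N, kernel_size=1)
```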
Fig. 13 is a schematic structural diagram of a target object detection apparatus according to an embodiment of the present application. As shown in fig. 13, the detection device includes:
an extraction module 1301 configured to extract feature information of an image to be detected; a contour recognition module 1302, containing a contour recognition model trained by the training method described above and configured to obtain a plurality of predicted contour data of the image to be detected based on the input feature information of the image to be detected; an overlay module 1303 configured to perform feature superposition on the plurality of predicted contour data to obtain predicted overlay data; and a recognition module 1304 configured to obtain contour data of the target object based on the predicted overlay data.
In one embodiment, the area within the contour range corresponding to the contour prediction data is a target area, the area outside the contour range is a background area, and the overlay module 1303 is further configured to: when the same pixel in the plurality of contour prediction data is located in a target area, predicting that the pixel in the superimposed data is the target area of the first target object; when the same pixel in the plurality of contour prediction data is located in a background area, predicting the pixel in the superposition data as the background area; and when the same pixel in the plurality of contour prediction data is respectively located in the target area and the background area, predicting that the pixel in the superposition data is the target area of the second target object.
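The superposition rule of the overlay module 1303 can be written as a few array operations. This is a minimal sketch assuming the predicted contour data are already binary masks stacked along the first axis; the labels 1 and 2 for the first and second target objects are choices made for the example.

```python
import numpy as np

def overlay(contour_masks: np.ndarray) -> np.ndarray:
    """Combine N binary contour masks of shape (N, H, W) into one map:
    0 = background, 1 = first target object (every mask marks the pixel as target),
    2 = second target object (the masks disagree for that pixel)."""
    masks = contour_masks.astype(bool)
    all_target = masks.all(axis=0)          # pixel is in the target area of every mask
    any_target = masks.any(axis=0)          # pixel is in the target area of at least one mask
    result = np.zeros(masks.shape[1:], dtype=np.uint8)
    result[all_target] = 1                  # first target object's area
    result[any_target & ~all_target] = 2    # mixed pixels -> second target object's area
    return result
```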
In one embodiment, the detection apparatus further comprises: the class probability acquiring module 1305 is configured to convolve the feature information of the image to be detected by using convolution layers with convolution kernels of 1 × 1 and the number of output channels equal to the number of preset target classes to obtain a plurality of preliminary target classification data of the image to be detected, where the preliminary target classification data includes a class probability value for representing that a pixel on the image to be detected belongs to one of the preset target classes; a category determination module 1306, configured to perform maximum probability dereferencing on a plurality of category probability values corresponding to each pixel in the plurality of preliminary target classification data, and determine target categories to which each pixel in the image to be detected belongs; a classification module 1307 configured to obtain a target classification result corresponding to the predicted overlay data.
In one embodiment, the recognition module 1304 further includes: a category recognition submodule 13041 configured to determine category contour prediction data of the image to be detected based on the target categories to which the pixels of the image to be detected respectively belong, as acquired by the category determination module 1306; and an integrated recognition submodule 13042 configured to determine the contour data of the target object based on the predicted overlay data and the category contour prediction data.
In one embodiment, the integrated recognition submodule 13042 is further configured to, when the contour line corresponding to the category contour prediction data divides the contour region corresponding to the predicted overlay data, divide that contour region along the contour line into two contour sub-regions corresponding respectively to two target objects.
In one embodiment, the extraction module 1301 is further configured to: perform feature extraction on the image to be detected to obtain a plurality of output feature information of the image to be detected; screen a plurality of basic feature information from the plurality of output feature information; up-sample the plurality of basic feature information, ordered according to a preset order, so that the numbers of channels of the plurality of basic feature information become the same; and perform feature fusion on the up-sampled basic feature information to obtain the feature information of the image to be detected.
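A fusion of this kind resembles a feature-pyramid-style combination of backbone outputs. The sketch below is one possible arrangement under the assumption that the selected basic feature maps come from a ResNet-like backbone and that a 1 × 1 convolution equalizes their channel counts before up-sampling and element-wise fusion; none of these specific choices is prescribed by this application.

```python
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Bring several backbone feature maps to a common channel count,
    up-sample them to a common resolution, and fuse them by addition."""
    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels)

    def forward(self, basic_features):
        # basic_features: list of maps ordered from highest to lowest resolution.
        target_size = basic_features[0].shape[-2:]
        fused = 0
        for feat, lat in zip(basic_features, self.lateral):
            x = lat(feat)                                           # unify channel count
            x = F.interpolate(x, size=target_size, mode="nearest")  # up-sample to common size
            fused = fused + x                                       # element-wise feature fusion
        return fused
```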
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 14, the electronic device 1400 includes: one or more processors 1401 and memory 1402; and computer program instructions stored in memory 1402 which, when executed by processor 1401, cause processor 1401 to perform any of the methods described above.
The processor 1401 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
Memory 1402 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 1401 to implement the steps of any of the methods described above and/or other desired functions.
In one example, the electronic device 1400 may further include: an input device 1403 and an output device 1404, which are interconnected by a bus system and/or other form of connection mechanism (not shown in fig. 14).
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of any of the methods described above.
The computer program product may write program code for carrying out operations of embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps of the training method according to any of the above-mentioned embodiments of the present application or the detection method of any of the above-mentioned embodiments, described in the section "training method of an exemplary contour recognition model" or "detection method of an exemplary target object" above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The words "such as" are used herein to mean, and are used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modifications, equivalents and the like that are within the spirit and principle of the present application should be included in the scope of the present application.

Claims (10)

1. A training method of a cloth defect recognition model is characterized by comprising the following steps:
providing base data of a plurality of samples, wherein the base data corresponds to the profile of the known cloth defects in the samples;
obtaining a plurality of training data based on the base data for each of the samples; and
training a neural network model based on the sample with the plurality of training data so that the neural network model can output a plurality of contour recognition data corresponding to the plurality of training data, respectively, based on the sample.
2. The training method of claim 1, wherein the training a neural network model based on the sample with the plurality of training data comprises:
extracting characteristic information of the sample;
inputting the feature information into the neural network model to acquire a plurality of contour recognition data corresponding to the plurality of training data, respectively;
obtaining a loss result based on the plurality of contour recognition data and the plurality of training data; and
adjusting parameters of the neural network model based on the loss result.
3. The training method according to claim 2, wherein the inputting the feature information into a neural network model to obtain a plurality of contour recognition data corresponding to the plurality of training data, respectively, comprises:
performing convolution on the feature information by using a convolution layer with a convolution kernel size of 1 × 1 and N output channels to obtain the plurality of contour recognition data, where N is the number of the training data.
4. A cloth flaw detection method is characterized by comprising the following steps:
extracting characteristic information of a cloth image to be detected, wherein the cloth image to be detected comprises one or more cloth flaws;
inputting the characteristic information into a cloth defect contour recognition model obtained by training according to the method of any one of claims 1 to 3, and obtaining a plurality of contour prediction data of the cloth image to be detected;
performing feature superposition on the plurality of contour prediction data to obtain prediction superposition data; and
obtaining the contour data of the cloth flaws based on the predicted superposition data.
5. The detection method according to claim 4, wherein an area within a contour range corresponding to the contour prediction data is a target area, and an area outside the contour range is a background area, wherein the performing feature superposition on the plurality of contour prediction data to obtain prediction superposition data includes:
when the same pixel in the plurality of contour prediction data is located in a target area, predicting that the pixel in the superposed data is the target area of the first cloth defect;
when the same pixel in the plurality of contour prediction data is located in a background area, predicting the pixel in the superposition data as the background area;
and when the same pixel in the plurality of contour prediction data is respectively located in the target area and the background area, predicting that the pixel in the superposed data is the target area of the second cloth defect.
6. The detection method according to claim 5, further comprising:
performing convolution on the characteristic information of the cloth image to be detected by utilizing convolution layers with convolution kernels of 1 x 1 and the number of output channels being the same as that of preset target defect categories to obtain a plurality of preliminary target classification data of the cloth image to be detected, wherein the preliminary target classification data comprise category probability values for representing that pixels on the cloth image to be detected belong to one of the preset target defect categories;
carrying out maximum probability value obtaining on a plurality of class probability values corresponding to each pixel in the plurality of preliminary target classification data, and determining target defect classes to which each pixel in the cloth image to be detected belongs; and
obtaining, based on the predicted superposition data, a target classification result corresponding to the predicted superposition data.
7. A cloth flaw detection data processing method is characterized by comprising the following steps:
acquiring flaw information of the cloth according to the method of any one of claims 4 to 6;
and determining a defect processing mode according to at least the defect information.
8. The detection data processing method of claim 7, wherein, when the defect information includes defect position information:
calculating the distance between two adjacent defect reference lines according to the defect position information;
determining a material breaking area according to a preset condition, wherein the preset condition is that a region formed by combining consecutive flaws whose distance is smaller than a preset threshold is taken as the material breaking area;
respectively obtaining edge reference lines of each material breaking area and of each isolated flaw outside the material breaking areas; and
and determining the material breaking position information of the cloth according to the edge reference line.
9. An electronic device, comprising:
a processor; and
memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the method of any of claims 1-8.
10. A computer readable storage medium having computer program instructions stored thereon, which, when executed by a processor, cause the processor to perform the method of any one of claims 1-8.
CN202011112313.8A 2020-10-16 2020-10-16 Cloth flaw identification model training method and cloth flaw detection method Pending CN112270687A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011112313.8A CN112270687A (en) 2020-10-16 2020-10-16 Cloth flaw identification model training method and cloth flaw detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011112313.8A CN112270687A (en) 2020-10-16 2020-10-16 Cloth flaw identification model training method and cloth flaw detection method

Publications (1)

Publication Number Publication Date
CN112270687A true CN112270687A (en) 2021-01-26

Family

ID=74338258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011112313.8A Pending CN112270687A (en) 2020-10-16 2020-10-16 Cloth flaw identification model training method and cloth flaw detection method

Country Status (1)

Country Link
CN (1) CN112270687A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807434A (en) * 2021-09-16 2021-12-17 中国联合网络通信集团有限公司 Defect recognition method and model training method for cloth
CN116309586A (en) * 2023-05-22 2023-06-23 杭州百子尖科技股份有限公司 Defect detection method, device, equipment and medium based on convolutional neural network
CN117576088A (en) * 2024-01-15 2024-02-20 平方和(北京)科技有限公司 Intelligent liquid impurity filtering visual detection method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766314A (en) * 2015-03-27 2015-07-08 上海和鹰机电科技股份有限公司 Fabric reading machine and method with defect marking function
CN107301637A (en) * 2017-05-22 2017-10-27 南京理工大学 Nearly rectangle plane shape industrial products surface flaw detecting method
CN107607554A (en) * 2017-09-26 2018-01-19 天津工业大学 A kind of Defect Detection and sorting technique of the zinc-plated stamping parts based on full convolutional neural networks
CN109670979A (en) * 2018-11-30 2019-04-23 深圳灵图慧视科技有限公司 Cloth detection data processing method, device and equipment
CN110838149A (en) * 2019-11-25 2020-02-25 创新奇智(广州)科技有限公司 Camera light source automatic configuration method and system
CN110992311A (en) * 2019-11-13 2020-04-10 华南理工大学 Convolutional neural network flaw detection method based on feature fusion
CN111080622A (en) * 2019-12-13 2020-04-28 熵智科技(深圳)有限公司 Neural network training method, workpiece surface defect classification and detection method and device
CN111223078A (en) * 2019-12-31 2020-06-02 河南裕展精密科技有限公司 Method for determining defect grade and storage medium
CN111652098A (en) * 2020-05-25 2020-09-11 四川长虹电器股份有限公司 Product surface defect detection method and device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766314A (en) * 2015-03-27 2015-07-08 上海和鹰机电科技股份有限公司 Fabric reading machine and method with defect marking function
CN107301637A (en) * 2017-05-22 2017-10-27 南京理工大学 Nearly rectangle plane shape industrial products surface flaw detecting method
CN107607554A (en) * 2017-09-26 2018-01-19 天津工业大学 A kind of Defect Detection and sorting technique of the zinc-plated stamping parts based on full convolutional neural networks
CN109670979A (en) * 2018-11-30 2019-04-23 深圳灵图慧视科技有限公司 Cloth detection data processing method, device and equipment
CN110992311A (en) * 2019-11-13 2020-04-10 华南理工大学 Convolutional neural network flaw detection method based on feature fusion
CN110838149A (en) * 2019-11-25 2020-02-25 创新奇智(广州)科技有限公司 Camera light source automatic configuration method and system
CN111080622A (en) * 2019-12-13 2020-04-28 熵智科技(深圳)有限公司 Neural network training method, workpiece surface defect classification and detection method and device
CN111223078A (en) * 2019-12-31 2020-06-02 河南裕展精密科技有限公司 Method for determining defect grade and storage medium
CN111652098A (en) * 2020-05-25 2020-09-11 四川长虹电器股份有限公司 Product surface defect detection method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807434A (en) * 2021-09-16 2021-12-17 中国联合网络通信集团有限公司 Defect recognition method and model training method for cloth
CN113807434B (en) * 2021-09-16 2023-07-25 中国联合网络通信集团有限公司 Cloth flaw identification method and model training method
CN116309586A (en) * 2023-05-22 2023-06-23 杭州百子尖科技股份有限公司 Defect detection method, device, equipment and medium based on convolutional neural network
CN117576088A (en) * 2024-01-15 2024-02-20 平方和(北京)科技有限公司 Intelligent liquid impurity filtering visual detection method and device
CN117576088B (en) * 2024-01-15 2024-04-05 平方和(北京)科技有限公司 Intelligent liquid impurity filtering visual detection method and device

Similar Documents

Publication Publication Date Title
CN112270687A (en) Cloth flaw identification model training method and cloth flaw detection method
US10878283B2 (en) Data generation apparatus, data generation method, and data generation program
CN107403424B (en) Vehicle loss assessment method and device based on image and electronic equipment
KR102058427B1 (en) Apparatus and method for inspection
US7409081B2 (en) Apparatus and computer-readable medium for assisting image classification
WO2020238256A1 (en) Weak segmentation-based damage detection method and device
JP2003100826A (en) Inspecting data analyzing program and inspecting apparatus and inspecting system
US20130163869A1 (en) Apparatus and method for extracting edge in image
CN114998324A (en) Training method and device for semiconductor wafer defect detection model
CN115690670A (en) Intelligent identification method and system for wafer defects
CN113160161A (en) Method and device for detecting defects at edge of target
CN112149693A (en) Training method of contour recognition model and detection method of target object
CN115136209A (en) Defect detection system
CN116542984A (en) Hardware defect detection method, device, computer equipment and storage medium
CN117392042A (en) Defect detection method, defect detection apparatus, and storage medium
CN116342525A (en) SOP chip pin defect detection method and system based on Lenet-5 model
CN114972880A (en) Label identification method and device, electronic equipment and storage medium
CN115170501A (en) Defect detection method, system, electronic device and storage medium
CN114677348A (en) IC chip defect detection method and system based on vision and storage medium
CN117372424B (en) Defect detection method, device, equipment and storage medium
WO2014103617A1 (en) Alignment device, defect inspection device, alignment method, and control program
CN116091503B (en) Method, device, equipment and medium for discriminating panel foreign matter defects
CN115311273A (en) Training method of detection model, defect detection method, device and storage medium
JP2010019561A (en) Flaw inspection device and flaw inspection method
CN112288066B (en) Graphic code coding and identifying method, device and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210126

RJ01 Rejection of invention patent application after publication