CN110136126A - Cloth textured flaw detection method based on full convolutional neural networks - Google Patents
- Publication number
- CN110136126A (application number CN201910414690.8A)
- Authority
- CN
- China
- Prior art keywords
- network
- region
- interest
- flaw
- full convolutional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30124—Fabrics; Textile; Paper
Abstract
The invention discloses a method for detecting texture flaws in circular-loom fabric based on a fully convolutional neural network, belonging to the technical field of computer vision. To handle the four classes of flaws found in hose weaving while meeting real-time detection requirements, the invention replaces the ResNet-101 backbone used in the region-based fully convolutional network (R-FCN) with ResNeXt-50, which halves the floating-point operation count while offering comparable feature-extraction ability. Because ribbon-fabric feature maps are small, the last downsampling stage of the network is removed, and dilated convolutions replace ordinary convolutions to enlarge the receptive field, so that each convolution output covers a wider range of information. Multi-scale alternating training and online hard example mining ensure that the network is trained more thoroughly. The detection scheme designed by the invention meets real-time requirements and achieves effective detection of texture flaws on ribbon fabric.
Description
Technical field
The present invention relates to the technical field of computer vision, and in particular to a cloth texture flaw detection method based on a fully convolutional neural network.
Background art
High-pressure water/oil delivery hose can convey gas, water, oil, and other media, and is widely used for laying temporary conveying lines. Its manufacturing process first weaves high-tenacity polyester fiber yarn on a circular loom into cloth tubes 50 to 300 meters long, then forms an inner and an outer layer of high-quality polyurethane to guarantee bearing capacity. The product has high added value: the 300 mm bore high-pressure water/oil hose exported to the United States for shale gas exploitation must withstand a working pressure of up to 3 MPa. However, if structural defects such as broken warps, broken wefts, or extra warps and wefts (the black-boxed marks in the figures) occur in the hose cloth, the hose is prone to burst during liquid delivery after the polyurethane layers are coated on its inside and outside.
Quality assurance has become an indispensable link in modern liquid-delivery hose production. At present, quality inspection of hose fabric is completed mainly by hand. The problem is that the quantity of fabric is enormous and the workers' workload heavy; prolonged work leads to fatigue and a declining detection rate. On average, one worker is responsible for several circular looms and must also replace polyester bobbins, so oversights are inevitable. Replacing manual inspection of cloth flaws with automatic machine-vision detection is a more reasonable choice: it not only saves labor cost but also works more quickly and effectively.
In recent years, the emergence of deep learning has reshaped target detection, improving its precision and robustness. Because deep neural networks can autonomously learn features at different levels, detection models based on deep learning learn richer features with stronger representational power than traditional hand-designed features. Current deep-learning object detection methods fall into two classes: models based on region proposals and models based on regression. Region-proposal models first extract candidate regions from the detection area in preparation for subsequent feature extraction and classification; typical representatives are R-CNN, Fast R-CNN, Faster R-CNN, and R-FCN. Regression-based models instead delimit default boxes in advance in a certain way and train on the relationship between predicted boxes, default boxes, and ground-truth boxes; typical representatives are YOLO and SSD.
Summary of the invention
To solve the above problems, the present invention provides a cloth texture flaw detection method based on a fully convolutional neural network. The invention uses deep learning and realizes detection of circular-loom cloth texture flaws through a fast R-FCN object detection algorithm. Deep learning is a data-driven learning process: it does not require complicated hand-designed features and can learn target features directly from data. The invention improves on the high-accuracy R-FCN network and proposes a fast R-FCN network for flaw detection on ribbon fabric. The ResNet-101 used by the R-FCN network is replaced with ResNeXt-50, which has a smaller floating-point operation count and comparable feature-extraction ability. Because ribbon-fabric feature maps are small, the downsampling in the Conv5 stage is removed and dilated convolutions replace ordinary convolutions. Multi-scale alternating training and online hard example mining ensure that the network is trained more thoroughly. To this end, the invention provides a cloth texture flaw detection method based on a fully convolutional neural network, which uses deep learning and a region-based fully convolutional object detection network to detect circular-loom fabric flaws. The method includes the following steps:
(1) First compute deep features of the input image to obtain candidate region-of-interest locations;
(2) Pass the extracted regions of interest through subsequent convolutional layers, obtain response scores using position-sensitive region-of-interest pooling, and perform classification and position regression.
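The online hard example mining mentioned above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name and the batch size of 2 in the example are assumptions for demonstration. The idea is simply to rank per-RoI losses after a forward pass and backpropagate only through the hardest examples.

```python
import numpy as np

def ohem_select(roi_losses, batch_size=128):
    """Online hard example mining: keep only the highest-loss RoIs.

    roi_losses : 1-D array of per-RoI losses from a forward pass.
    batch_size : number of hard examples kept for the backward pass
                 (the default of 128 is an illustrative assumption).
    """
    order = np.argsort(roi_losses)[::-1]   # sort losses, largest first
    keep = order[:batch_size]              # indices of the hard examples
    mask = np.zeros_like(roi_losses)
    mask[keep] = 1.0                       # gradients flow only here
    return keep, mask

losses = np.array([0.05, 2.3, 0.01, 0.9, 1.7])
keep, mask = ohem_select(losses, batch_size=2)
print(keep)   # [1 4] -- the two hardest RoIs
```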
As a further improvement of the present invention, computing deep features of the input image in step (1) to obtain candidate region-of-interest locations comprises the following specific steps:
Step 2.1: Train a region-based fully convolutional neural network whose backbone is a 50-layer enhanced residual network (ResNeXt-50). Remove the last fully connected layer of the original residual network and retain the first 49 layers. After the 40th layer (the Conv4 stage), split the computed feature map into two branches: one is fed into the subsequent 9-layer feature extraction network, and the other into the region proposal network to compute regions that may contain flaws;
Step 2.2: The core idea of the region proposal network is to generate regions of interest with a CNN. Concretely, 256 convolution kernels of size 3*3 slide over each position of the feature map and convolve with the feature layer, producing 256 numbers that form a 256-dimensional vector. The center of the sliding window corresponds to anchors in the input image with scales {32, 64, 128} and aspect ratios {1:1, 1:2, 2:1}, combined into 9 kinds of anchor boxes. Box prediction is realized by learning the relative position (t_x, t_y, t_w, t_h) between the box and the anchor box;
t_x = (x - x_a)/w_a
t_y = (y - y_a)/h_a
t_w = log(w/w_a)
t_h = log(h/h_a)
where (x_a, y_a, w_a, h_a) are the center abscissa, center ordinate, width, and height of the anchor box, and (x, y, w, h) are the position coordinates actually predicted. The loss function of the region proposal network is:
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)
where i is the index of the anchor box and p_i is the probability that the i-th anchor box is predicted to be foreground. When the i-th anchor box is a positive sample, p_i* is 1; otherwise it is 0. Positive and negative samples are divided according to the calibrated box positions in each supervision sample: the anchor box with the largest overlap with a calibrated box is a positive sample; among the remaining anchor boxes, one whose overlap ratio with some calibrated box exceeds 0.7 is marked positive, and one whose overlap ratio with every calibrated box is below 0.3 is marked negative. t_i and t_i* are the offsets of the prediction and of the true position relative to the anchor box. Classification error is computed with the cross-entropy loss function and position error with the smooth L1 loss function; the two errors are normalized by dividing by the training region-of-interest batch size N_cls and the number of anchor centers N_reg respectively, with λ = 10 adjusting the weight. After the region proposal network obtains candidate boxes, they are sorted by region-of-interest foreground probability, and the top 100 are chosen as regions of interest and input into the next network.
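The 9-anchor construction and the regression targets (t_x, t_y, t_w, t_h) of step 2.2 can be sketched as follows. The patent specifies only the scales {32, 64, 128} and ratios {1:1, 1:2, 2:1}; the exact width/height parameterization of each scale-ratio pair below is an assumption.

```python
import numpy as np

def make_anchors(cx, cy, scales=(32, 64, 128), ratios=(1.0, 0.5, 2.0)):
    """Build the 9 anchor boxes (3 scales x 3 aspect ratios) centered
    at (cx, cy). Each anchor is (cx, cy, w, h); r = h/w."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s / np.sqrt(r)
            h = s * np.sqrt(r)
            anchors.append((cx, cy, w, h))
    return np.array(anchors)

def regression_targets(box, anchor):
    """Offsets (t_x, t_y, t_w, t_h) of a box relative to an anchor,
    matching the four equations above."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha,
                     np.log(w / wa), np.log(h / ha)])

anchors = make_anchors(100, 100)
t = regression_targets((110, 100, 64, 64), anchors[3])  # 64x64 1:1 anchor
```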
As a further improvement of the present invention, detecting ribbon cloth texture flaws in step (2) comprises the following specific steps:
Step 3.1: Train a region-based fully convolutional neural network whose backbone is a 50-layer enhanced residual network. Remove the last fully connected layer of the original residual network and retain the first 49 layers. Remove the downsampling at the 41st layer, replace the subsequent 9 ordinary convolution layers with dilated convolutions, and then connect a 1*1*1024 fully convolutional layer; the output of the network is W*H*1024;
Step 3.2: About 100 candidate boxes are computed by the region proposal network. Meanwhile, k*k*(C+1) convolution kernels of size 1024*1*1 convolve the feature map output by the residual network to obtain k*k*(C+1) position-sensitive score maps of size W*H. With k = 3, a region of interest is divided into 3*3 blocks; for the k*k = 9 blocks, each block (W*H*(C+1)) represents the probability that a target exists at different locations, so there are k*k*(C+1) feature maps in total. z(i, j, c) is the c-th map in the (i + k(j-1))-th stereo block (1 ≤ i, j ≤ 3): (i, j) determines one of the 9 positions, say the upper-left position (i = j = 1), and c determines the class, say the flaw class. If some pixel on the feature map z(i, j, c) has position (x, y) and value v, then v represents the probability that the corresponding position (x, y) in the original image belongs to a flaw (c = 'flaw') and is the upper-left part (i = j = 1) of that flaw;
Step 3.3: Pool the picture regions of interest inside the candidate boxes. The idea is that however large a region of interest is, a grid of n*n bins is drawn over it and all pixel values in each grid cell are pooled once, so that no matter how large the image is, the pooled region-of-interest feature dimension is always n*n. Position-sensitive region-of-interest pooling is done on each feature map individually, not across multiple channels together. If there are C classes and the region of interest has width W' and height H', then the input of the pooling operation is the stereo block of size k*k*(C+1)*W'*H' corresponding to that region of interest on the score maps, which forms a new k*k*(C+1)*W'*H' stereo block: each (C+1) stereo block contributes only the bin at its corresponding position, these k*k bins form a new stereo block of size (C+1)*W'*H', and the output of the pooling is a stereo block of size (C+1)*k*k;
For the (i, j)-th bin, position-sensitive region-of-interest pooling is carried out only on the (i, j)-th score map:
r_c(i, j) = Σ_{(x,y) ∈ bin(i,j)} z(i, j, c)(x + x_0, y + y_0) / n
where (x_0, y_0) is the top-left corner of the region of interest and n is the number of pixels in the bin.
After pooling, the k*k bins vote for classification by direct summation, giving a score for each class; a softmax then yields the final score of each class, used to compute the loss. For each region of interest:
r_c = Σ_{i,j} r_c(i, j)
The softmax response over the classes is:
s_c = e^{r_c} / Σ_{c'=0..C} e^{r_{c'}}
The loss function consists of a classification loss and a regression loss; classification uses the cross-entropy loss and regression uses the L1-smooth loss:
L(s, t_{x,y,w,h}) = L_cls(s_{c*}) + λ[c* > 0] L_reg(t, t*)
After the region-based fully convolutional neural network is trained with the above method, the texture flaw detection results in the picture are output.
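The detection-head loss L(s, t) = L_cls(s_{c*}) + λ[c* > 0] L_reg(t, t*) above can be sketched as follows. This is a minimal per-RoI illustration; the λ default of 1.0 is an assumption, and the patent does not prescribe these function names.

```python
import numpy as np

def smooth_l1(t, t_star):
    """Smooth L1 (L1-smooth) regression loss over the box offsets."""
    d = np.abs(t - t_star)
    return np.sum(np.where(d < 1.0, 0.5 * d * d, d - 0.5))

def head_loss(scores, c_star, t, t_star, lam=1.0):
    """L(s, t) = L_cls(s_{c*}) + lam * [c* > 0] * L_reg(t, t*).

    scores : softmax class scores for one RoI (index 0 = background).
    c_star : ground-truth class label.
    """
    l_cls = -np.log(scores[c_star])                      # cross-entropy
    l_reg = smooth_l1(t, t_star) if c_star > 0 else 0.0  # foreground only
    return l_cls + lam * l_reg

s = np.array([0.1, 0.9])          # background vs. flaw
loss = head_loss(s, 1, np.array([0.2, 0.0, 0.0, 0.0]), np.zeros(4))
```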
Beneficial effects: the cloth texture flaw detection method based on a fully convolutional neural network provided by the invention uses deep learning and realizes fabric flaw detection through a fast R-FCN object detection algorithm. Deep-learning-based detection of circular-loom fabric texture flaws greatly reduces labor cost and workload and benefits long-term maintenance. Deep learning improves the precision and robustness of target detection and can autonomously learn features at different levels; compared with traditional hand-designed features, the learned features are richer and have stronger representational power. The designed detection scheme effectively improves detection speed and efficiency while reducing cost.
Brief description of the drawings
Fig. 1 is the flow chart of the circular-loom cloth texture flaw detection method based on a fully convolutional neural network;
Fig. 2 is the fast R-FCN network structure of the invention;
Fig. 3 is the flaw detection effect picture for a 20%-width fabric ribbon of the invention;
Fig. 4 is the flaw detection effect picture for a 30%-width fabric ribbon of the invention;
Fig. 5 is a circular-loom cloth texture flaw detection effect picture of the invention;
Fig. 6 is another circular-loom cloth texture flaw detection effect picture of the invention.
Specific embodiments
The present invention is further described in detail below with specific embodiments in conjunction with the accompanying drawings:
The present invention provides a cloth texture flaw detection method based on a fully convolutional neural network. The invention uses deep learning and realizes detection of circular-loom cloth texture flaws through a fast R-FCN object detection algorithm. Deep learning is a data-driven learning process: it does not require complicated hand-designed features and can learn target features directly from data. The invention improves on the high-accuracy R-FCN network and proposes a fast R-FCN network for flaw detection on ribbon fabric. The ResNet-101 used by the R-FCN network is replaced with ResNeXt-50, which has a smaller floating-point operation count and comparable feature-extraction ability. Because ribbon-fabric feature maps are small, the downsampling in the Conv5 stage is removed and dilated convolutions replace ordinary convolutions. Multi-scale alternating training and online hard example mining ensure that the network is trained more thoroughly.
The following takes flaw detection in an actual circular-loom fabric production process as an example and describes a specific embodiment of the cloth texture flaw detection method based on a fully convolutional neural network in further detail in conjunction with the drawings. The flow chart of the circular-loom cloth texture flaw detection method based on a fully convolutional neural network is shown in Fig. 1;
Step 1: Train a region-based fully convolutional neural network whose backbone is a 50-layer enhanced residual network (ResNeXt-50). Remove the last fully connected layer of the original residual network and retain the first 49 layers. After the 40th layer (the Conv4 stage), split the computed feature map into two branches: one is fed into the subsequent 9-layer feature extraction network, and the other into the region proposal network to compute regions that may contain flaws.
Overall network structure is as shown in Figure 2.
Step 2: The core idea of the region proposal network is to generate regions of interest with a CNN. Concretely, 256 convolution kernels of size 3*3 slide over each position of the feature map and convolve with the feature layer, producing 256 numbers that form a 256-dimensional vector. The center of the sliding window corresponds to anchors in the input image with scales {32, 64, 128} and aspect ratios {1:1, 1:2, 2:1}, combined into 9 kinds of anchor boxes. Box prediction is realized by learning the relative position (t_x, t_y, t_w, t_h) between the box and the anchor box.
t_x = (x - x_a)/w_a
t_y = (y - y_a)/h_a
t_w = log(w/w_a)
t_h = log(h/h_a)
where (x_a, y_a, w_a, h_a) are the center abscissa, center ordinate, width, and height of the anchor box, and (x, y, w, h) are the position coordinates actually predicted. The loss function of the region proposal network is:
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)
where i is the index of the anchor box and p_i is the probability that the i-th anchor box is predicted to be foreground. When the i-th anchor box is a positive sample, p_i* is 1; otherwise it is 0. Positive and negative samples are divided according to the calibrated box positions in each supervision sample: the anchor box with the largest overlap with a calibrated box is a positive sample; among the remaining anchor boxes, one whose overlap ratio with some calibrated box exceeds 0.7 is marked positive, and one whose overlap ratio with every calibrated box is below 0.3 is marked negative. t_i and t_i* are the offsets of the prediction and of the true position relative to the anchor box. Classification error is computed with the cross-entropy loss function and position error with the smooth L1 loss function; the two errors are normalized by dividing by the training region-of-interest batch size N_cls and the number of anchor centers N_reg respectively, with λ = 10 adjusting the weight. After the region proposal network obtains candidate boxes, they are sorted by region-of-interest foreground probability, and the top 100 are chosen as regions of interest and input into the next network.
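The top-100 proposal selection described above — sort candidates by foreground probability and keep the highest-scoring ones — can be sketched as follows. The mock probabilities and boxes are placeholders standing in for real region proposal network outputs.

```python
import numpy as np

def select_proposals(fg_probs, boxes, top_n=100):
    """Sort candidate boxes by foreground probability, descending,
    and keep the top_n, as described for the RPN output."""
    order = np.argsort(fg_probs)[::-1][:top_n]
    return boxes[order], fg_probs[order]

rng = np.random.default_rng(0)
probs = rng.random(300)           # mock foreground probabilities
boxes = rng.random((300, 4))      # mock (x, y, w, h) candidates
kept_boxes, kept_probs = select_proposals(probs, boxes)
print(kept_boxes.shape)   # (100, 4)
```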
Step 3: Train a region-based fully convolutional neural network whose backbone is a 50-layer enhanced residual network. Remove the last fully connected layer of the original residual network and retain the first 49 layers. Remove the downsampling at the 41st layer, replace the subsequent 9 ordinary convolution layers with dilated convolutions, and then connect a 1*1*1024 fully convolutional layer; the output of the network is W*H*1024.
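The substitution in step 3 — dilated convolution in place of downsampled ordinary convolution — can be sketched as follows. This is a single-channel toy illustration in plain numpy, not the network layer itself: a 3x3 kernel with dilation 2 covers a 5x5 window, enlarging the receptive field while the spatial size of the feature map is preserved.

```python
import numpy as np

def dilated_conv2d(x, k, dilation=2):
    """3x3 convolution with dilated (atrous) sampling and 'same' padding.

    With dilation=2 the 3x3 kernel reads pixels two apart, covering a
    5x5 window without downsampling the input."""
    kh, kw = k.shape
    pad = dilation * (kh // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + dilation * kh:dilation,
                       j:j + dilation * kw:dilation]
            out[i, j] = np.sum(patch * k)
    return out

x = np.arange(64.0).reshape(8, 8)
y = dilated_conv2d(x, np.ones((3, 3)))
print(y.shape)   # (8, 8): spatial size is preserved
```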
Step 4: About 100 candidate boxes are computed by the region proposal network. Meanwhile, k*k*(C+1) convolution kernels of size 1024*1*1 convolve the feature map output by the residual network to obtain k*k*(C+1) position-sensitive score maps of size W*H. With k = 3, a region of interest is divided into 3*3 blocks; for the k*k = 9 blocks, each block (W*H*(C+1)) represents the probability that a target exists at different locations, so there are k*k*(C+1) feature maps in total. z(i, j, c) is the c-th map in the (i + k(j-1))-th stereo block (1 ≤ i, j ≤ 3): (i, j) determines one of the 9 positions, say the upper-left position (i = j = 1), and c determines the class, say the flaw class. If some pixel on the feature map z(i, j, c) has position (x, y) and value v, then v represents the probability that the corresponding position (x, y) in the original image belongs to a flaw (c = 'flaw') and is the upper-left part (i = j = 1) of that flaw.
Step 5: Pool the picture regions of interest inside the candidate boxes. The idea is that however large a region of interest is, a grid of n*n bins is drawn over it and all pixel values in each grid cell are pooled once, so that no matter how large the image is, the pooled region-of-interest feature dimension is always n*n. Position-sensitive region-of-interest pooling is done on each feature map individually, not across multiple channels together. If there are C classes and the region of interest has width W' and height H', then the input of the pooling operation is the stereo block of size k*k*(C+1)*W'*H' corresponding to that region of interest on the score maps, which forms a new k*k*(C+1)*W'*H' stereo block: each (C+1) stereo block contributes only the bin at its corresponding position, these k*k bins form a new stereo block of size (C+1)*W'*H', and the output of the pooling is a stereo block of size (C+1)*k*k;
For the (i, j)-th bin, position-sensitive region-of-interest pooling is carried out only on the (i, j)-th score map:
r_c(i, j) = Σ_{(x,y) ∈ bin(i,j)} z(i, j, c)(x + x_0, y + y_0) / n
where (x_0, y_0) is the top-left corner of the region of interest and n is the number of pixels in the bin.
After pooling, the k*k bins vote for classification by direct summation, giving a score for each class; a softmax then yields the final score of each class, used to compute the loss. For each region of interest:
r_c = Σ_{i,j} r_c(i, j)
The softmax response over the classes is:
s_c = e^{r_c} / Σ_{c'=0..C} e^{r_{c'}}
The loss function consists of a classification loss and a regression loss; classification uses the cross-entropy loss and regression uses the L1-smooth loss:
L(s, t_{x,y,w,h}) = L_cls(s_{c*}) + λ[c* > 0] L_reg(t, t*)
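The pooling, voting, and softmax of step 5 can be sketched together as follows. This is a minimal numpy illustration under stated assumptions: the score maps are taken as already cropped to one region of interest, a zero-based (i + k·j) map-ordering convention stands in for the text's one-based i + k(j-1), and average pooling follows the text's "all pixel values in each grid do a pooling".

```python
import numpy as np

def psroi_pool(score_maps, k=3, C=1):
    """Position-sensitive RoI pooling over one cropped RoI.

    score_maps : shape (k*k*(C+1), H', W'), the k*k*(C+1) score maps
                 already cropped to one region of interest.
    Bin (i, j) is average-pooled only on its own (i, j)-th map; the
    bins are summed to vote, and a softmax gives the class scores.
    """
    _, H, W = score_maps.shape
    maps = score_maps.reshape(k * k, C + 1, H, W)
    hs, ws = H // k, W // k
    votes = np.zeros(C + 1)
    for i in range(k):
        for j in range(k):
            # pool bin (i, j) on score-map block i + k*j only
            bin_vals = maps[i + k * j, :,
                            i * hs:(i + 1) * hs,
                            j * ws:(j + 1) * ws]
            votes += bin_vals.mean(axis=(1, 2))   # r_c(i, j)
    e = np.exp(votes - votes.max())
    return e / e.sum()                            # softmax scores s_c

maps = np.random.default_rng(1).random((9 * 2, 6, 6))  # k=3, C=1
s = psroi_pool(maps)
print(s.sum())   # 1.0
```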
After the region-based fully convolutional neural network is trained with the above method, the texture flaw detection results in the picture are output.
Using the above method, the model was trained for 6 hours with 1317 annotated pictures on a server configured with 2 Intel Xeon Gold 6132 processors, 2 NVIDIA Tesla P100 graphics cards, and 128 GB of memory. Feeding circular-loom ribbon flaw images into the trained model yields the ribbon fabric texture flaw detection results for different widths shown in Fig. 3 and Fig. 4. To meet engineering application requirements, the UI was written in C++ with the Qt 5.6.3 framework; the fast R-FCN detection algorithm for circular-loom cloth texture flaws was written in Python with the TensorFlow 1.4.0 framework; acquisition and saving of video data were completed with OpenCV 3.3.1; image processing and display were completed with OpenCV 3.3.1 combined with OpenGL. The system runs stably; the detection effect when the software runs is shown in Fig. 5 and Fig. 6. It can be seen that the system accurately detects flaws on the texture and stops the circular loom by controlling a relay, and the fabric flaw most recently detected is displayed in the lower-right corner for workers to compare.
The above is only a preferred embodiment of the present invention and does not limit the present invention in any other form; any modification or equivalent variation made according to the technical essence of the present invention still falls within the scope claimed by the present invention.
Claims (3)
1. A cloth texture flaw detection method based on a fully convolutional neural network, characterized in that deep learning is used and detection of circular-loom fabric flaws is realized through a region-based fully convolutional object detection network, the method comprising the following steps:
(1) first computing deep features of the input image to obtain candidate region-of-interest locations;
(2) passing the extracted regions of interest through subsequent convolutional layers, obtaining response scores using position-sensitive region-of-interest pooling, and performing classification and position regression.
2. The cloth texture flaw detection method based on a fully convolutional neural network according to claim 1, characterized in that computing deep features of the input image in step (1) to obtain candidate region-of-interest locations comprises the following specific steps:
Step 2.1: training a region-based fully convolutional neural network whose backbone is a 50-layer enhanced residual network; removing the last fully connected layer of the original residual network and retaining the first 49 layers; after the 40th layer (the Conv4 stage), splitting the computed feature map into two branches, one fed into the subsequent 9-layer feature extraction network and the other into the region proposal network to compute regions that may contain flaws;
Step 2.2: the core idea of the region proposal network is to generate regions of interest with a CNN; concretely, 256 convolution kernels of size 3*3 slide over each position of the feature map and convolve with the feature layer, producing 256 numbers that form a 256-dimensional vector; the center of the sliding window corresponds to anchors in the input image with scales {32, 64, 128} and aspect ratios {1:1, 1:2, 2:1}, combined into 9 kinds of anchor boxes; box prediction is realized by learning the relative position (t_x, t_y, t_w, t_h) between the box and the anchor box;
t_x = (x - x_a)/w_a
t_y = (y - y_a)/h_a
t_w = log(w/w_a)
t_h = log(h/h_a)
where (x_a, y_a, w_a, h_a) are the center abscissa, center ordinate, width, and height of the anchor box, and (x, y, w, h) are the position coordinates actually predicted; the loss function of the region proposal network is:
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)
where i is the index of the anchor box and p_i is the probability that the i-th anchor box is predicted to be foreground; when the i-th anchor box is a positive sample, p_i* is 1, otherwise it is 0; positive and negative samples are divided according to the calibrated box positions in each supervision sample: the anchor box with the largest overlap with a calibrated box is a positive sample; among the remaining anchor boxes, one whose overlap ratio with some calibrated box exceeds 0.7 is marked positive, and one whose overlap ratio with every calibrated box is below 0.3 is marked negative; t_i and t_i* are the offsets of the prediction and of the true position relative to the anchor box; classification error is computed with the cross-entropy loss function and position error with the smooth L1 loss function, the two errors being normalized by dividing by the training region-of-interest batch size N_cls and the number of anchor centers N_reg respectively, with λ = 10 adjusting the weight; after the region proposal network obtains candidate boxes, they are sorted by region-of-interest foreground probability, and the top 100 are chosen as regions of interest and input into the next network.
3. the cloth textured flaw detection method according to claim 1 based on full convolutional neural networks, it is characterised in that:
The band-like cloth textured flaw of detector bar, specific steps in the step (2) are as follows:
Step 3.1: train the region-based fully convolutional neural network. The backbone uses a 50-layer enhanced residual network: the last fully connected layer of the original residual network is removed and the first 49 layers are retained; the down-sampling in the 41st layer is removed, and the 9 ordinary convolutional layers after it are replaced with dilated convolutions; a 1*1*1024 fully convolutional layer is then attached, so the output of the network is W*H*1024;
Step 3.2: about 100 candidate boxes are computed by the region proposal network; at the same time, k*k*(C+1) convolution kernels of size 1024*1*1 convolve the feature map output by the residual network to obtain k*k*(C+1) position-sensitive score maps of size W*H, with k = 3, meaning that a region of interest is divided into 3*3 blocks. For the k*k = 9 blocks, each block (W*H*(C+1)) represents the probability values that a target is present at a different position, giving k*k*(C+1) feature maps in total. Let z(i, j, c) be the c-th map (1 ≤ i, j ≤ 3) of the (i + k(j−1))-th stereo block: (i, j) selects one of the 9 positions, say the upper-left position (i = j = 1), and c selects the class, say the flaw class. If a pixel on the feature map z(i, j, c) has position (x, y) and value v, then v represents the probability that the corresponding position (x, y) on the original image may be a flaw (c = 'flaw') and be the upper-left position (i = j = 1) of that flaw;
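The indexing z(i, j, c) described above can be made concrete with a small helper. The position-major channel layout assumed here is an illustrative choice, since the claim fixes the block numbering (i + k(j−1)) but not the memory order of the maps:

```python
def score_map_channel(i, j, c, k=3, num_classes_plus_bg=2):
    """Index of the map z(i, j, c) in a stack of k*k*(C+1) score maps:
    the (i + k*(j-1))-th stereo block (1-based, as in the claim) holds
    (C+1) maps, and the 0-based class id c selects one of them."""
    block = (i + k * (j - 1)) - 1        # 0-based position-block index
    return block * num_classes_plus_bg + c
```

For k = 3 and C+1 = 2 (background plus one flaw class), z(1, 1, 0) maps to channel 0 and z(3, 3, 1) to channel 17, the last of the k*k*(C+1) = 18 maps.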
Step 3.3: the image inside each candidate box undergoes region-of-interest pooling. The idea is that, no matter how large the region of interest is, a grid of n*n bins is laid over it and all pixel values inside each grid cell are pooled into one value, so that whatever the image size, the pooled region-of-interest feature dimension is always n*n. Region-of-interest pooling is done on each feature map individually, not jointly over multiple channels. If there are C classes and the region of interest has width W' and height H', the input of the region-of-interest pooling operation is the stereo block of size k*k*(C+1)*W'*H' that corresponds to this region of interest on the score maps; from each of its k*k stereo blocks of size (C+1)*W'*H', only the bin at the corresponding position is plucked out, and these k*k bins are assembled into a new stereo block of size (C+1)*W'*H'; the output of region-of-interest pooling is then a stereo block of size (C+1)*k*k;
For bin (i, j), position-sensitive region-of-interest pooling pools only over the (i, j)-th score map:
r_c(i, j | Θ) = Σ_{(x, y) ∈ bin(i, j)} z(i, j, c)(x + x_0, y + y_0 | Θ) / n,
where (x_0, y_0) denotes the top-left corner of the region of interest and n is the number of pixels in the bin.
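A minimal NumPy sketch of the position-sensitive pooling described in step 3.3, assuming average pooling within each bin and a score-map block already cropped to the region of interest with shape (k*k, C+1, H', W') — both assumptions for illustration, not stated verbatim in the claim:

```python
import numpy as np

def ps_roi_pool(score_maps, k=3):
    """Position-sensitive RoI average pooling over one region of interest.

    score_maps: array of shape (k*k, C+1, H', W'), the crop of the
    k*k*(C+1) score maps corresponding to the region of interest.
    For bin (i, j), pooling reads ONLY its own position block (0-based
    index j*k + i here) and averages the pixels falling inside that bin.
    Returns an array of shape (C+1, k, k).
    """
    _, c1, hp, wp = score_maps.shape
    out = np.zeros((c1, k, k))
    for j in range(k):
        for i in range(k):
            y0, y1 = j * hp // k, (j + 1) * hp // k
            x0, x1 = i * wp // k, (i + 1) * wp // k
            block = score_maps[j * k + i]            # (C+1, H', W')
            out[:, j, i] = block[:, y0:y1, x0:x1].mean(axis=(1, 2))
    return out
```

If every position block is filled with a distinct constant, the output grid simply reads back those constants, one per bin, which shows that each bin indeed consults only its own map.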
After pooling, the k*k bins vote for the classification: they are summed directly to obtain the score of each class, and a softmax then gives the final score of each class, which is used to compute the loss. For each region of interest:
r_c(Θ) = Σ_{i,j} r_c(i, j | Θ)
The softmax response for classification is:
s_c(Θ) = e^{r_c(Θ)} / Σ_{c'} e^{r_{c'}(Θ)}
The loss function consists of a classification loss and a regression loss; classification uses the cross-entropy loss and regression uses the smooth L1 loss:
L(s, t_{x,y,w,h}) = L_cls(s_{c*}) + λ [c* > 0] L_reg(t, t*)
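The voting, softmax, and combined loss of this step can be sketched as follows; this is an illustrative NumPy version in which λ = 1 and the helper names are assumptions, not the patent's code:

```python
import numpy as np

def vote_and_softmax(pooled):
    """Sum the k*k pooled bins per class (the vote), then apply a
    softmax to obtain the final per-class score. pooled: (C+1, k, k)."""
    r = pooled.sum(axis=(1, 2))          # r_c = sum over (i, j) of r_c(i, j)
    e = np.exp(r - r.max())              # numerically stable softmax
    return e / e.sum()

def detection_loss(scores, c_star, t, t_star, lam=1.0):
    """Cross-entropy on the ground-truth class c* plus smooth-L1 box
    regression, the latter applied only when c* > 0 (non-background)."""
    cls = -np.log(scores[c_star] + 1e-12)
    d = np.abs(t - t_star)
    reg = np.where(d < 1.0, 0.5 * d * d, d - 0.5).sum()
    return cls + lam * (reg if c_star > 0 else 0.0)
```

The softmax output sums to 1 over classes, and when the predicted box offsets match the targets the loss reduces to the cross-entropy term alone.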
After the region-based fully convolutional neural network has been trained by the above method, the texture flaw detection results for the image are output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910414690.8A CN110136126A (en) | 2019-05-17 | 2019-05-17 | Cloth textured flaw detection method based on full convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110136126A true CN110136126A (en) | 2019-08-16 |
Family
ID=67571193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910414690.8A Pending CN110136126A (en) | 2019-05-17 | 2019-05-17 | Cloth textured flaw detection method based on full convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110136126A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106599939A (en) * | 2016-12-30 | 2017-04-26 | 深圳市唯特视科技有限公司 | Real-time target detection method based on region convolutional neural network |
CN107316295A (en) * | 2017-07-02 | 2017-11-03 | 苏州大学 | A kind of fabric defects detection method based on deep neural network |
CN108345911A (en) * | 2018-04-16 | 2018-07-31 | 东北大学 | Surface Defects in Steel Plate detection method based on convolutional neural networks multi-stage characteristics |
CN109543716A (en) * | 2018-10-23 | 2019-03-29 | 华南理工大学 | A kind of K line morphology image-recognizing method based on deep learning |
CN109613006A (en) * | 2018-12-22 | 2019-04-12 | 中原工学院 | A kind of fabric defect detection method based on end-to-end neural network |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110490874A (en) * | 2019-09-04 | 2019-11-22 | 河海大学常州校区 | Weaving cloth surface flaw detecting method based on YOLO neural network |
CN110827277A (en) * | 2019-11-26 | 2020-02-21 | 山东浪潮人工智能研究院有限公司 | Cloth flaw detection method based on yolo3 network |
CN110992361A (en) * | 2019-12-25 | 2020-04-10 | 创新奇智(成都)科技有限公司 | Engine fastener detection system and detection method based on cost balance |
CN111210417A (en) * | 2020-01-07 | 2020-05-29 | 创新奇智(北京)科技有限公司 | Cloth defect detection method based on convolutional neural network |
CN111210417B (en) * | 2020-01-07 | 2023-04-07 | 创新奇智(北京)科技有限公司 | Cloth defect detection method based on convolutional neural network |
CN111260614A (en) * | 2020-01-13 | 2020-06-09 | 华南理工大学 | Convolutional neural network cloth flaw detection method based on extreme learning machine |
CN111507990A (en) * | 2020-04-20 | 2020-08-07 | 南京航空航天大学 | Tunnel surface defect segmentation method based on deep learning |
CN111507990B (en) * | 2020-04-20 | 2022-02-11 | 南京航空航天大学 | Tunnel surface defect segmentation method based on deep learning |
CN111986199A (en) * | 2020-09-11 | 2020-11-24 | 征图新视(江苏)科技股份有限公司 | Unsupervised deep learning-based wood floor surface flaw detection method |
CN111986199B (en) * | 2020-09-11 | 2024-04-16 | 征图新视(江苏)科技股份有限公司 | Method for detecting surface flaws of wood floor based on unsupervised deep learning |
CN112233090A (en) * | 2020-10-15 | 2021-01-15 | 浙江工商大学 | Film flaw detection method based on improved attention mechanism |
CN112233090B (en) * | 2020-10-15 | 2023-05-30 | 浙江工商大学 | Film flaw detection method based on improved attention mechanism |
CN113180688A (en) * | 2020-12-14 | 2021-07-30 | 上海交通大学 | Coronary heart disease electrocardiogram screening system and method based on residual error neural network |
CN113180688B (en) * | 2020-12-14 | 2022-11-29 | 上海交通大学 | Coronary heart disease electrocardiogram screening system and method based on residual error neural network |
CN113139520A (en) * | 2021-05-14 | 2021-07-20 | 杭州旭颜科技有限公司 | Equipment diaphragm performance monitoring method for industrial Internet |
CN113139520B (en) * | 2021-05-14 | 2022-07-29 | 江苏中天互联科技有限公司 | Equipment diaphragm performance monitoring method for industrial Internet |
CN114549507A (en) * | 2022-03-01 | 2022-05-27 | 浙江理工大学 | Method for detecting fabric defects by improving Scaled-YOLOv4 |
CN114549507B (en) * | 2022-03-01 | 2024-05-24 | 浙江理工大学 | Improved Scaled-YOLOv fabric flaw detection method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110136126A (en) | Cloth textured flaw detection method based on full convolutional neural networks | |
CN109711474B (en) | Aluminum product surface defect detection algorithm based on deep learning | |
CN110490874A (en) | Weaving cloth surface flaw detecting method based on YOLO neural network | |
CN110490858B (en) | Fabric defective pixel level classification method based on deep learning | |
CN106875373B (en) | Mobile phone screen MURA defect detection method based on convolutional neural network pruning algorithm | |
CN103440654B (en) | A kind of LCD foreign body defect detection method | |
CN106156781A (en) | Sequence convolutional neural networks construction method and image processing method and device | |
CN110070536A (en) | A kind of pcb board component detection method based on deep learning | |
CN109584248A (en) | Infrared surface object instance dividing method based on Fusion Features and dense connection network | |
CN110349146A (en) | The building method of fabric defect identifying system based on lightweight convolutional neural networks | |
CN108346144A (en) | Bridge Crack based on computer vision monitoring and recognition methods automatically | |
CN106530284A (en) | Welding spot type detection and device based on image recognition | |
CN108272154A (en) | A kind of garment dimension measurement method and device | |
CN104346818B (en) | A kind of threads per unit length method for automatic measurement | |
CN108921819A (en) | A kind of cloth examination device and method based on machine vision | |
CN108389180A (en) | A kind of fabric defect detection method based on deep learning | |
CN110458201A (en) | A kind of remote sensing image object-oriented classification method and sorter | |
CN109272487A (en) | The quantity statistics method of crowd in a kind of public domain based on video | |
CN114119361B (en) | Method, system, equipment and medium for reconstructing downhole image based on TESRGAN network super-resolution | |
CN109685793A (en) | A kind of pipe shaft defect inspection method and system based on three dimensional point cloud | |
CN109712117A (en) | Lightweight TFT-LCD mould group scratch detection method based on convolutional neural networks | |
CN108664986A (en) | Based on lpThe multi-task learning image classification method and system of norm regularization | |
CN109902755A (en) | A kind of multi-layer information sharing and correcting method for XCT slice | |
CN106023098A (en) | Image repairing method based on tensor structure multi-dictionary learning and sparse coding | |
CN113222992A (en) | Crack characteristic characterization method and system based on multi-fractal spectrum |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||