CN110222792A - A kind of label defects detection algorithm based on twin network - Google Patents
- Publication number: CN110222792A
- Application number: CN201910538938.1A
- Authority
- CN
- China
- Prior art keywords
- label
- convolution
- section
- network
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8883—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8887—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
Abstract
The invention discloses a label defect detection algorithm based on a Siamese (twin) network, comprising: step S1, acquisition of the training set and test set; step S2, network construction and training; step S3, test-set verification. With the technical solution of the present invention, a Siamese-network label defect detection system is built, a label data set is acquired for training, and classification is then performed with softmax. Only a few types of labels need to be trained; even when the label type under test does not appear in the training set, defect detection can still be carried out. The preparation workload can thus be effectively reduced, detection efficiency improved, and cost lowered.
Description
Technical field
The present invention relates to the field of label manufacturing technology, and in particular to a label defect detection algorithm based on a Siamese network, which can be used to improve the management of labels.
Background technique
As carriers of product information, commercial labels contain a large amount of information and play an important role in product management. However, commercial labels can suffer from printing defects such as missing, smeared, or broken characters and physical damage, which severely affect product management. Labels circulate in the market by the hundreds of millions, and label quality is essential, so quality inspection before labels enter the market is highly necessary.
At present, label inspection mostly relies on manual detection, which clearly suffers from low efficiency and accuracy. Existing quality-inspection instruments on the market are bulky, expensive, and hard to maintain, and support only offline inspection. Popular OCR schemes pursue general applicability, so their efficiency and precision are inevitably limited. Current schemes that use a camera as the image acquisition device and recognize bar codes, QR codes, and characters to realize dynamic label detection are mainly template matching, background difference, and frequency-domain analysis. Template matching cannot cope with rotation and scaling, and the label to be detected must be perfectly aligned with its reference label, which imposes high requirements on the system. Background difference requires the training samples to be matched one by one against background templates, so sample preparation is complex, and, as with template matching, the reference template of the label to be detected must already exist in the trained library. Methods based on frequency-domain analysis are prone to false detection when normal and defective images are close in frequency content.
Summary of the invention
Aiming at the shortcomings of traditional image processing techniques, the present invention devises a label defect detection algorithm based on a Siamese network, in order to alleviate the training-data problem of label detection.
The technical scheme of the present invention is realized as follows:
A label defect detection algorithm based on a Siamese network, comprising the following steps:
Step S1: obtain a label data set containing various label defect types, and divide it into a training set, a verification set, and a test set;
Step S2: build a Siamese network and train it with the training-set data;
Step S3: verify the trained Siamese network on the test set, simulating labels of different types by using characters of different types;
wherein the step S1 further comprises:
Step S11: collect labels and crop the character regions to obtain N pictures, both stained and unstained, used as simulated labels; of the N simulated labels, take N1 as the verification set, N2 as the test set, and N3 as the training set, where N1+N2+N3=N, and in each of the three data sets the quantities of normal labels and the M kinds of defective labels obey a uniform distribution;
Step S12: convert the pictures in the training set to tfrecord format and attach a corresponding "label" to each picture according to whether it is stained, in order to determine the y value of the cost-function formula; afterwards, shuffle the training set and build batches in sequence, so that training can be carried out batch by batch;
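The shuffle-then-batch step described in S12 can be sketched as follows (a minimal pure-Python stand-in for the tfrecord pipeline; the file names and the batch size of 32 are illustrative assumptions, not values fixed by the patent):

```python
import random

def make_batches(samples, batch_size, seed=0):
    """Shuffle (image, y) pairs, then cut them into fixed-size batches,
    mirroring step S12: y = 1 for an unstained/normal label, y = 0 for
    a stained one (the value used later by the contrastive cost)."""
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    # drop the ragged tail so every batch has exactly batch_size items
    n_full = len(shuffled) // batch_size
    return [shuffled[i * batch_size:(i + 1) * batch_size]
            for i in range(n_full)]

# toy data set: 100 simulated labels, alternately stained (0) / normal (1)
data = [("label_%03d.png" % i, i % 2) for i in range(100)]
batches = make_batches(data, batch_size=32)
print(len(batches), len(batches[0]))   # 3 full batches of 32
```

In a real pipeline the same shuffle-then-batch behavior would come from the tfrecord reader, but the decision logic is the same.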
The step S2 further comprises:
Step S21: Siamese network construction, using 2 VGG16 convolutional-neural-network models as the network framework, wherein each VGG16 model comprises 5 convolution sections, 13 convolutional layers, 5 max-pooling layers, and 3 fully connected layers; features are extracted with 3*3 convolution kernels;
n pairs of images of size 224 × 224 are input to the two weight-sharing VGG16 models, which output two 1000-dimensional features f1, f2; these are used to predict the identity of each input image via the identification label d, and a square layer then compares the high-dimensional features f1, f2 so that together they predict the verification label s;
Step S22: training of the Siamese network; the training set is fed into the Siamese network for training; since the two VGG16 networks share weights, the two inputs learn extracted features through the same convolutional neural network; VGGNet consists of 5 convolution sections, each containing 2–3 convolutional layers, with a max-pooling layer after the last convolutional layer of each section to compress the picture and remove redundancy through dimensionality reduction; all convolutional layers within a section have the same number of kernels, and with each successive section the number of kernels per layer doubles; a VGG16 network is structured as follows:
First convolution section: the input image to the first convolutional layer of the VGG network is 224 × 224 × 3; it is convolved with 64 kernels of size 3*3*3 at stride 1, producing a 224*224*64 feature map; the input and output sizes of the second convolutional layer are both 224*224*64. A 2*2 max-pooling layer with stride 2 then yields a 112*112*64 feature map.
Second convolution section: this section has the same structure as the first, but after its two convolutional layers the number of output channels becomes 128, giving an output of 112*112*128. Its max-pooling layer is the same as that of the first section, so the output size becomes 56*56*128.
Third convolution section: the number of convolutional layers in this section grows from two to 3, and the number of output channels becomes 256, giving an output of 56*56*256; after a pooling layer identical to the previous ones, the output size becomes 28*28*256.
Fourth convolution section: the channel count of the convolutional layers doubles again to 512, and after the pooling layer the output size becomes 14*14*512.
Fifth convolution section: the channel count of this section's convolutional layers stays at 512; after max pooling, the output becomes 7*7*512.
After the 13 convolutional layers, the VGG16 network enters 3 fully connected layers; the first and second have 4096 units each, and the third has 1000 units, finally outputting a feature of dimension 1000.
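The five-section shape progression above can be checked with a small sketch (pure Python, no deep-learning framework; it only traces sizes, assuming the stated 3×3/stride-1 convolutions preserve height and width and each 2×2/stride-2 max pool halves them):

```python
def vgg16_shapes(h=224, w=224):
    """Trace the feature-map size after each of VGG16's five
    convolution sections: the convs keep H and W, the section-ending
    max pool halves them, channels go 64 -> 128 -> 256 -> 512 -> 512."""
    shapes = []
    for ch in [64, 128, 256, 512, 512]:
        h, w = h // 2, w // 2   # 2x2 / stride-2 max pool halves H and W
        shapes.append((h, w, ch))
    return shapes

print(vgg16_shapes())
# section outputs: 112x112x64, 56x56x128, 28x28x256, 14x14x512, 7x7x512
```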
Step S23: the computed loss function serves as the reference for training the convolutional neural network by back-propagation, optimizing parameters such as the corresponding weights and biases so that the network extracts features better, finally achieving an ideal fit to the training samples. The loss function (loss) used is the contrastive loss, shown below:
L = (1/2N) Σ_{n=1}^{N} [ y·d_n² + (1−y)·max(margin − d_n, 0)² ]
wherein L is the loss value;
y indicates the similarity of the input picture labels: y=0 indicates that input samples input1 and input2 are dissimilar, while y=1 indicates that they are similar;
d_n indicates the Euclidean distance between the two input samples, where:
d_n = ||G_w(X_1) − G_w(X_2)||
margin is used to define a boundary on G_w(x): when a positive sample and a negative sample are input, only distances smaller than this value contribute to the loss function.
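A minimal sketch of this contrastive loss in plain Python (the margin value of 1.0 is an illustrative assumption; the patent does not fix it):

```python
import math

def euclidean(f1, f2):
    """d_n = ||Gw(X1) - Gw(X2)|| between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def contrastive_loss(f1, f2, y, margin=1.0):
    """Per-pair contrastive term: y = 1 (similar pair) is penalized by
    d^2, pulling features together; y = 0 (dissimilar pair) is penalized
    by max(margin - d, 0)^2, so only pairs closer than `margin` push
    the features apart."""
    d = euclidean(f1, f2)
    return y * d ** 2 + (1 - y) * max(margin - d, 0.0) ** 2

# similar pair that is already close -> small loss
print(contrastive_loss([0.1, 0.2], [0.1, 0.25], y=1))
# dissimilar pair farther apart than the margin -> zero loss
print(contrastive_loss([0.0, 0.0], [2.0, 0.0], y=0))
```

The batch loss in the formula is simply the mean of these per-pair terms (with the 1/2 factor folded in).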
The step S3 further comprises:
Step S31: one of the dual inputs is a known correct label;
Step S32: the other channel inputs the label under test;
Step S33: construct a function f(z, x) to calculate the similarity, wherein z is the template image used for comparison, x is a candidate image of the same size, φ is the convolutional embedding function, and b indicates a bias that differs at each position;
the output of the Siamese framework is a score map: if the two images describe the same object, a high score is returned, otherwise a low score is returned;
Step S35: the classification accuracy of the network on the training set and the verification set is monitored in real time through softmax, outputting an average accuracy every 100 iterations. The network trains 64 (32*2) pictures per batch; after N iterations the loss stabilizes near 0 with only slight fluctuation, the accuracy on the training set stabilizes at 100%, and the classification accuracy on the verification set also stabilizes above 99%, completing the training.
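The softmax-based two-class monitoring can be sketched as follows (pure Python; treating the network head as two logits for "defective" vs "normal" is an assumption, since the text only says softmax distinguishes good from bad labels):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def batch_accuracy(logit_pairs, labels):
    """Fraction of samples whose argmax over the two softmax
    probabilities (class 0 = defective, class 1 = normal) matches
    the ground-truth label; this is the quantity monitored every
    100 iterations during training."""
    correct = 0
    for logits, y in zip(logit_pairs, labels):
        probs = softmax(logits)
        pred = probs.index(max(probs))
        correct += int(pred == y)
    return correct / len(labels)

logits = [[0.2, 2.1], [3.0, -1.0], [0.5, 0.4]]
labels = [1, 0, 0]
print(batch_accuracy(logits, labels))  # all three predictions match: 1.0
```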
In the above technical scheme, the specific implementation process is as follows:
(1) Siamese network construction. The present invention uses 2 VGG16 convolutional-neural-network models as the network framework, wherein each VGG16 model comprises 5 convolution sections, 13 convolutional layers, 5 max-pooling layers, and 3 fully connected layers, and features are extracted with 3*3 convolution kernels; the only difference between convolution sections is the number of kernels each uses. n pairs of images of size 224 × 224 are input to the two weight-sharing VGG16 models, which output two 1000-dimensional features f1, f2; these are used to predict the identity of each input image via the identification label d, and a square layer then compares the high-dimensional features f1, f2 so that together they predict the verification label s.
(2) Siamese network composition. Since the Siamese network consists of 2 VGG16 models that share weights, the two inputs can be regarded as learning extracted features through the same convolutional neural network. VGGNet consists of 5 convolution sections, each containing 2–3 convolutional layers, with a max-pooling layer after the last convolutional layer of each section to compress the picture and remove redundancy. All convolutional layers within a section have the same number of kernels; the kernel count starts at 64 and doubles with each successive section, up to 512.
(3) Network training. The training set, consisting of n pairs of 224 × 224 images, is fed into the Siamese network for training; since the two VGG16 networks share weights, the two inputs can be regarded as passing through the same VGG16 framework, learning extracted features and normalizing dimensions so that the contrastive loss can be computed in feature space. To shorten training time and improve training quality, before training the label-defect network, VGG pre-training parameters for the MNIST handwritten-digit data set are loaded first.
(4) The computed loss function (loss) serves as the reference for training the convolutional neural network by back-propagation, optimizing parameters such as the corresponding weights and biases so that the network extracts features better, finally achieving an ideal fit to the training samples.
(5) During test-set verification, labels of different types are simulated using characters of different types. One of the dual inputs is a known correct label, and the other channel inputs the label under test; the label type may be absent from the training set. For example, in the experimental simulation, simplified and traditional Chinese characters, Japanese, Latin script, and so on that are absent from the training set appear in the test set.
(6) Classifying commercial labels. Since the Siamese structure solves the target-detection problem with a similarity-learning method, a function f(z, x) is proposed that compares the template image z with a candidate image x of the same size; if the two images describe the same object, a high score is returned, otherwise a low score.
(7) The classification accuracy of the network on the training set and the verification set (a two-class distinction between good and bad labels) is monitored in real time through softmax, outputting an average accuracy every 100 iterations. The network trains 64 (32*2) pictures per batch; after N iterations the loss stabilizes near 0 with only slight fluctuation, the accuracy on the training set stabilizes at 100%, and the classification accuracy on the verification set also stabilizes above 99%, completing the training.
In the above technical scheme, image features are extracted by the convolutional neural network, and the sample under test is classified by the distance metric between the reference sample and the sample under test in feature space.
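The feature-space distance decision described here can be sketched as follows (pure Python; the feature vectors and the 0.5 threshold are illustrative assumptions, and in practice the threshold would be tuned on the verification set):

```python
import math

def feature_distance(f_ref, f_test):
    """Euclidean distance between the reference label's feature
    vector and the feature vector of the label under test."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f_ref, f_test)))

def classify(f_ref, f_test, threshold=0.5):
    """Small distance: the label under test matches the reference,
    so it is 'normal'; large distance: it deviates, so 'defective'."""
    if feature_distance(f_ref, f_test) < threshold:
        return "normal"
    return "defective"

print(classify([0.9, 0.1, 0.0], [0.88, 0.12, 0.01]))  # close -> normal
print(classify([0.9, 0.1, 0.0], [0.1, 0.9, 0.3]))     # far -> defective
```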
Compared with the prior art, the present invention has the following technical effects:
(1) The proposed model makes full use of the Siamese network's ability to perform one-shot learning: without expanding the training data substantially, the defect types to be detected and their number can be customized and extended.
(2) Since the present invention extracts defect features with a neural network, the label under test is subject to no limitation such as a required spectral signature; the image input to the network need only carry defect features, or be a normal label, with no specific requirement on picture content. For example, in label defect detection, when the label under test is a Chinese label, the reference label may be a Japanese label, an English label, or even a bar code. The method therefore has a higher degree of freedom of choice and a wider range of application.
(3) The most notable feature of a label defect detection system built on a Siamese network is its dual image input, with an internal structure of two identical networks sharing weights. Weight sharing between the sub-networks means that training needs fewer parameters and thus less data, and is less prone to over-fitting, so large numbers of training samples need not be prepared; this solves the data-shortage problem common in neural-network training and helps obtain the desired results on the test set. Moreover, compared with template-matching methods, it effectively reduces the preparation workload, improves detection efficiency, and lowers cost.
(4) The Siamese network outputs a contrastive loss after extracting the key features of the two input images; in other words, it can extract target similarities and connections, making subtle differences easier to catch. Label types number in the tens of thousands; if every label type to be detected had to enter training, the workload would be excessive. With the Siamese network, only a few types of labels need to be trained; even when the label type under test is absent from the training set, defect detection can still be carried out.
(5) Compared with the Siamese network, traditional industrial label detection often performs template matching through background subtraction or analyzes high-frequency signals with the Fourier transform. The former needs a large number of templates, raising cost, while requirements such as controlled lighting and consistent picture size add further technical and cost burdens; the latter performs poorly in label defect detection. Carrying out label defect detection by building a Siamese network with dual inputs greatly reduces the equipment requirements compared with traditional detection, and does not require strict control of variables such as lighting and size, giving higher freedom, fault tolerance, and accuracy: classifying qualified labels against four kinds of defective labels works well, with accuracy above 99%.
Brief description of the drawings
Fig. 1 is a training flow chart of one embodiment of the invention;
Fig. 2 is the VGG16 network structure of one embodiment of the invention;
Fig. 3 is the Siamese-network detection structure diagram of one embodiment of the invention;
Fig. 4 is the Siamese network structure of one embodiment of the invention;
Fig. 5 is the system test structure chart of one embodiment of the invention;
Fig. 6 is a detected-label schematic diagram of one embodiment of the invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is further described below in conjunction with specific examples and with reference to the accompanying figures. It should be appreciated that those skilled in the art can make various modifications and improvements without departing from the inventive concept; these all fall within the scope of protection of the present invention.
The label defect detection algorithm of the present invention based on a Siamese network builds a Siamese network, puts the training set into the network for training, calculates the loss function and similarity, and distinguishes good labels from bad ones with a two-class classification; the classification accuracy of the network on the training set and the verification set is monitored in real time through softmax, outputting an average accuracy every 100 iterations, and when the classification accuracy on the verification set also stabilizes above 99%, the training is complete.
As shown in Fig. 1, the present invention comprises 3 major steps. Step S1: acquisition of the training set and test set; step S2: network construction and training; step S3: test-set verification. The detailed process of label defect detection is described as follows:
Step S1: acquisition of the training set and test set. As the data for training the Siamese network, the label defect types in the training set need to be as rich as possible. Since the trained network can extract the edge features of scratches, burrs, ink dots, and stains, the requirements on the type and content of the known correct label are not high, and it need not be identical to the label type under test in the test set.
Step S11: collect labels and crop the character regions to obtain N pictures, both stained and unstained, used as simulated labels. Of the N simulated labels, take N1 as the verification set, N2 as the test set, and N3 as the training set, where N1+N2+N3=N, and in each of the three data sets the quantities of normal labels and the M kinds of defective labels obey a uniform distribution.
Step S12: convert the pictures in the training set to tfrecord format and attach a corresponding "label" to each picture according to whether it is stained, in order to determine the y value of the cost-function formula. Afterwards, shuffle the training set and build batches in sequence, so that training can be carried out batch by batch.
Step S2: network construction and training.
Step S21: Siamese network construction. The present invention uses 2 VGG16 convolutional-neural-network models as the network framework, wherein each VGG16 model comprises 5 convolution sections, 13 convolutional layers, 5 max-pooling layers, and 3 fully connected layers, as illustrated in Fig. 2.
The network extracts features with 3*3 convolution kernels; two 3 × 3 kernels in series are equivalent to one 5 × 5 kernel in receptive field but use fewer parameters, while also strengthening the CNN's ability to learn features. The only difference between convolution sections is the number of kernels each uses.
As illustrated in Fig. 3, n pairs of images of size 224 × 224 are input to the two weight-sharing VGG16 models, which output two 1000-dimensional features f1, f2; these are used to predict the identity of each input image via the identification label d, and a square layer then compares the high-dimensional features f1, f2 so that together they predict the verification label s.
Step S22: training of the Siamese network. The training set is fed into the Siamese network for training, as illustrated in Fig. 4. Since the two VGG16 networks share weights, the two inputs can be regarded as learning extracted features through the same convolutional neural network. VGGNet consists of 5 convolution sections, each containing 2–3 convolutional layers, with a max-pooling layer after the last convolutional layer of each section to compress the picture and remove redundancy. All convolutional layers within a section have the same number of kernels, and with each successive section the kernel count doubles. A VGG16 network and its input and output image features are structured as follows:
First convolution section: the input image to the first convolutional layer of the VGG network is 224 × 224 × 3; it is convolved with 64 kernels of size 3*3*3 at stride 1, producing a 224*224*64 feature map; the input and output sizes of the second convolutional layer are both 224*224*64. A 2*2 max-pooling layer with stride 2 then yields a 112*112*64 feature map.
Second convolution section: this section has the same structure as the first, but after its two convolutional layers the number of output channels becomes 128, giving an output of 112*112*128. Its max-pooling layer is the same as that of the first section, so the output size becomes 56*56*128.
Third convolution section: the number of convolutional layers in this section grows from two to 3, and the number of output channels becomes 256, giving an output of 56*56*256; after a pooling layer identical to the previous ones, the output size becomes 28*28*256.
Fourth convolution section: the channel count of the convolutional layers doubles again to 512, and after the pooling layer the output size becomes 14*14*512.
Fifth convolution section: the channel count of this section's convolutional layers stays at 512; after max pooling, the output becomes 7*7*512.
After the 13 convolutional layers, the VGG16 network enters 3 fully connected layers; the first and second have 4096 units each, and the third has 1000 units, finally outputting a feature of dimension 1000.
Step S23: the computed loss function serves as the reference for training the convolutional neural network by back-propagation, optimizing parameters such as the corresponding weights and biases so that the network extracts features better, finally achieving an ideal fit to the training samples. The loss function (loss) used is the contrastive loss, shown below:
L = (1/2N) Σ_{n=1}^{N} [ y·d_n² + (1−y)·max(margin − d_n, 0)² ]
wherein L is the loss value;
y indicates the similarity of the input picture labels: y=0 indicates that input samples input1 and input2 are dissimilar, while y=1 indicates that they are similar;
d_n indicates the Euclidean distance between the two input samples, where:
d_n = ||G_w(X_1) − G_w(X_2)||
margin is used to define a boundary on G_w(x): when a positive sample and a negative sample are input, only distances smaller than this value contribute to the loss function.
Step S3: test-set verification. As shown in Fig. 5, labels of different types are simulated using characters of different types.
Step S31: one of the dual inputs is a known correct label. Since the trained network can extract the edge features of scratches, burrs, ink dots, and stains, the requirements on the type and content of the known correct label are not high, and it need not be identical to the label type under test; in the experimental simulation it can be digits, English characters, or Chinese characters, as shown in Fig. 6.
Step S32: the other channel inputs the label under test, likewise without restriction on content or type; the label type may even be absent from the training set. For example, in the experimental simulation, simplified and traditional Chinese characters, Japanese, Latin script, and so on that are absent from the training set appear in the test set.
Step S33: since the Siamese structure solves the target-detection problem with a similarity-learning method, a function f(z, x) is proposed that compares the template image z with a candidate image x of the same size; if the two images describe the same object, a high score is returned, otherwise a low score.
By introducing a fully convolutional network, a larger search image can be input to the network instead of a candidate image the same size as the template. The fully convolutional Siamese framework can evaluate all sub-windows of the search image on a dense grid in one pass after full convolution, calculating the similarity. To obtain the similarity, f(z, x) is calculated by combining the cross-correlation of the convolved feature maps with the convolutional embedding function φ:
f(z, x) = φ(z) * φ(x) + b
The output of the Siamese framework is not a single similarity score but a score map, and b indicates a bias that differs at each position.
Step S35: the classification accuracy of the network on the training set and the verification set (a two-class distinction between good and bad labels) is monitored in real time through softmax, outputting an average accuracy every 100 iterations. The network trains 64 (32*2) pictures per batch; after N iterations the loss stabilizes near 0 with only slight fluctuation, the accuracy on the training set stabilizes at 100%, and the classification accuracy on the verification set also stabilizes above 99%, completing the training.
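The dense cross-correlation that turns the template and search feature maps into a score map (step S33) can be sketched as follows (NumPy, single-channel valid-mode correlation; treating b as one scalar added at every position is a simplifying assumption):

```python
import numpy as np

def score_map(phi_z, phi_x, b=0.0):
    """Slide the template feature map phi_z over the larger search
    feature map phi_x and record the inner product at every offset,
    i.e. a dense f(z, x) = phi(z) * phi(x) + b score map."""
    th, tw = phi_z.shape
    out = np.empty((phi_x.shape[0] - th + 1, phi_x.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(phi_z * phi_x[i:i + th, j:j + tw])
    return out + b

template = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
search = np.zeros((4, 4))
search[1:3, 1:3] = template        # embed the template at offset (1, 1)
scores = score_map(template, search)
print(scores)                      # peak of 2.0 at position (1, 1)
```

The argmax of the score map marks where the search image best matches the template; a high peak indicates the same object, a flat low map a mismatch.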
In summary, it can be seen from the embodiments that the present invention can perform defect detection on labels with less data than other detection networks require, solving the data-shortage problem common in neural networks. Moreover, only a few types of data need be trained; at test time, even when the label type under test is absent from the training set, defect detection can still be carried out. Compared with traditional industrial label detection, the equipment requirements are greatly reduced, and strict control of variables such as lighting and size is not needed, giving higher freedom, fault tolerance, and accuracy: classifying qualified labels against four kinds of defective labels works well, with accuracy above 99%.
The specific embodiments of the present invention have been described above so that those skilled in the art can realize or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein can be realized in other embodiments without departing from the spirit or scope of the present invention. It is to be understood that the present invention is not limited to the above specific embodiments; those skilled in the art can make various deformations or amendments within the scope of the claims without affecting the essence of the present invention.
Claims (1)
1. A label defect detection algorithm based on a twin network, characterized by comprising the following steps:
Step S1: obtain a label data set containing various label defect types, and divide it into a training set, a validation set, and a test set;
Step S2: build a twin network and train the twin network with the training set data;
Step S3: verify the trained twin network with the test set, simulating different label types by using different types of characters;
Wherein, the step S1 further comprises:
Step S11: collect labels and crop the character regions, obtaining N pictures, both stained and unstained, to serve as simulated labels;
from the simulated labels, take N1 pictures as the validation set, N2 as the test set, and N3 as the training set, where N1+N2+N3=N, and in all three data sets the quantities of normal labels and of the M kinds of defective labels obey a uniform distribution;
Step S12: convert the pictures in the training set into tfrecord format, attaching a corresponding "label" to each picture according to whether or not it is stained, so as to determine the y value of the cost function formula; then shuffle the training set and build batches in order, so that batch training can be realized;
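A minimal sketch of the stratified split described in step S11 follows; the function name, per-class counts, and file-name scheme are illustrative assumptions, not the patent's implementation:

```python
import random

def stratified_split(pictures, n1, n2, seed=0):
    """Split (picture, class) pairs into a validation set (n1 per class),
    a test set (n2 per class), and a training set (the rest), so that
    normal labels and each defect class stay uniformly distributed."""
    by_class = {}
    for pic, cls in pictures:
        by_class.setdefault(cls, []).append(pic)
    rng = random.Random(seed)
    val, test, train = [], [], []
    for cls, pics in by_class.items():
        rng.shuffle(pics)
        val += [(p, cls) for p in pics[:n1]]
        test += [(p, cls) for p in pics[n1:n1 + n2]]
        train += [(p, cls) for p in pics[n1 + n2:]]
    return val, test, train

# 5 classes (normal + 4 defect types), 60 simulated labels each
data = [(f"img_{c}_{i}.png", c) for c in range(5) for i in range(60)]
val, test, train = stratified_split(data, n1=6, n2=12)
print(len(val), len(test), len(train))  # 30 60 210
```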
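The patent serializes to TensorFlow's tfrecord format; as a library-agnostic sketch of the same labelling/shuffle/batch step (the file-name convention for "stained" and the helper name are assumptions), the y values and batches could be prepared as:

```python
import random

def make_batches(filenames, batch_size=32, seed=0):
    """Attach a binary "label" y (1 = clean, 0 = stained) inferred from
    the file name, shuffle, and group into batches; the y value later
    feeds the cost function. The tfrecord serialization itself is omitted."""
    examples = [(f, 0 if "stain" in f else 1) for f in filenames]
    rng = random.Random(seed)
    rng.shuffle(examples)
    return [examples[i:i + batch_size]
            for i in range(0, len(examples), batch_size)]

files = ([f"clean_{i}.png" for i in range(40)]
         + [f"stain_{i}.png" for i in range(24)])
batches = make_batches(files)
print(len(batches), len(batches[0]))  # 2 32
```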
The step S2 further comprises:
Step S21: twin network construction, using two VGG16 convolutional neural network models as the convolutional neural network framework, wherein each VGG16 model comprises 5 convolution sections, 13 convolutional layers, 5 max-pooling layers, and 3 fully connected layers, and extracts features with 3*3 convolution kernels;
input n pairs of images of size 224 × 224 into the two weight-sharing VGG16 models, outputting two 1000-dimensional features f1, f2, which are used, via an identification label d, to predict the identity of each input image respectively; then compare the high-dimensional features f1, f2 with a square layer so that they jointly predict a verification label s;
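The square-layer comparison can be sketched as below; the patent does not specify the classifier on top of the squared difference, so the linear map w, bias b, and sigmoid are assumptions for illustration:

```python
import numpy as np

def square_layer_verify(f1, f2, w, b):
    """Compare two 1000-d VGG16 features with a "square layer":
    element-wise squared difference, an (assumed) learned linear map,
    and a sigmoid producing the verification label s."""
    sq = (f1 - f2) ** 2
    logit = float(sq @ w + b)
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(1)
f1 = rng.standard_normal(1000)
w = -np.ones(1000) * 0.01   # toy weights: large differences push s toward 0
s_same = square_layer_verify(f1, f1.copy(), w, b=2.0)
s_diff = square_layer_verify(f1, rng.standard_normal(1000), w, b=2.0)
print(s_same > s_diff)  # identical features score higher: True
```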
Step S22: training of the twin network; the training set is fed into the twin network for training; since weights are shared between the two VGG16 networks, the two inputs learn to extract features through the same convolutional neural network; VGGNet consists of 5 convolution sections, each comprising 2–3 convolutional layers, with a max-pooling layer after the last convolutional layer of each section to compress the picture and reduce dimensionality and redundancy; the convolutional layers within a section all have the same number of convolution kernels, and with each successive section the number of kernels per convolutional layer doubles; a VGG16 network structure is as follows:
In the first convolution section, the input image of the first convolutional layer of the VGG network has size 224 × 224 × 3; it is convolved with 64 3*3*3 kernels at stride 1, outputting a 224*224*64 feature map; the input and output sizes of the second convolutional layer are both 224*224*64; a 2*2 max-pooling layer with stride 2 then yields 112*112*64 feature maps;
The second convolution section is similar to the first, but the output channels after this section's two convolutional layers become 128, giving an output of 112*112*128; its max-pooling layer is the same as that of the first section, so the output size becomes 56*56*128;
In the third convolution section, the number of convolutional layers increases from two to three, the output channels become 256, and the output is 56*56*256; after a pooling layer consistent with the previous ones, the output size becomes 28*28*256;
In the fourth convolution section, the channel count of the convolutional layers doubles again to 512, and after the pooling layer the output size becomes 14*14*512;
In the fifth convolution section, the channel count of the convolutional layers remains 512, and after the max-pooling layer the output becomes 7*7*512;
After the 13 convolutional layers, the VGG16 network enters 3 fully connected layers; the first and second fully connected layers have 4096 units each, the third has 1000 units, finally outputting a feature of dimension 1000;
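The shape bookkeeping of the five sections above can be checked with a short walk-through; it assumes, as the text implies, that the 3*3 stride-1 convolutions preserve height and width ("same" padding) and that each 2*2 stride-2 pool halves them:

```python
def vgg16_shapes(h=224, w=224):
    """Walk the 5 convolution sections: each section sets the channel
    count (64, 128, 256, 512, 512) while its 3x3 stride-1 convolutions
    keep H x W, and its 2x2 stride-2 max pool halves H and W."""
    shapes = []
    for channels in (64, 128, 256, 512, 512):
        h, w = h // 2, w // 2  # the section's max-pooling layer
        shapes.append((h, w, channels))
    fc = [4096, 4096, 1000]    # the 3 fully connected layers
    return shapes, fc

shapes, fc = vgg16_shapes()
print(shapes[-1], fc[-1])  # (7, 7, 512) 1000
```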
Step S23: by calculating the loss function and training the convolutional neural network with the back-propagation algorithm, using the loss as a reference to optimize parameters such as the corresponding weights and biases, the convolutional neural network is made to extract features better, finally achieving a close fit to the training samples; the loss function (loss) used is shown below:
L = (1/(2N)) Σn [ y·dn² + (1−y)·max(margin − dn, 0)² ]
wherein, L is the loss function value;
y indicates the similarity of the input picture labels: y=0 indicates that input samples input1 and input2 are dissimilar, while y=1 indicates that they are similar;
dn indicates the Euclidean distance between the two input samples, where:
dn = ||Gw(X1) − Gw(X2)||
margin is used to define a bound on Gw(x): when a positive sample and a negative sample are input, only distances smaller than this value have an impact on the loss function;
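The contrastive loss defined above can be sketched per pair as follows (the 1/(2N) batch average reduces to a factor 1/2 for a single pair; the toy vectors are assumptions):

```python
import numpy as np

def contrastive_loss(f1, f2, y, margin=1.0):
    """Contrastive loss for one pair, matching the definitions above:
    y = 1 similar, y = 0 dissimilar, dn the Euclidean distance between
    the two embeddings; dissimilar pairs contribute only while dn < margin."""
    d = np.linalg.norm(f1 - f2)
    return 0.5 * (y * d ** 2 + (1 - y) * max(margin - d, 0.0) ** 2)

a = np.array([0.0, 0.0])
b = np.array([0.3, 0.4])                       # dn = 0.5
print(contrastive_loss(a, b, y=1))             # similar pair penalised by dn^2
print(contrastive_loss(a, b, y=0, margin=1.0)) # dissimilar pair inside the margin
```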
The step S3 further comprises:
Step S31: one channel of the dual input receives a known correct label;
Step S32: the other channel inputs the label to be tested;
Step S33: the constructed function f(z, x) calculates the similarity,
f(z, x) = φ(z) * φ(x) + b
wherein, z is the comparison template image, x is a candidate image of the same size, φ is the convolutional embedding function, and b denotes the bias applied at each position;
the output of the twin framework is a score map; if the two images describe the same object, a high score is returned, otherwise a low score is returned;
Step S35: monitor in real time, via softmax, the classification accuracy of the network on the training set and the validation set, outputting an average accuracy every 100 iterations; the network trains 64 (32*2) pictures per batch; after N iterations the loss stabilizes near 0 with slight fluctuations, the accuracy on the training set stabilizes at 100%, and the classification accuracy on the validation set also stabilizes above 99%, completing the training.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910538938.1A CN110222792A (en) | 2019-06-20 | 2019-06-20 | A kind of label defects detection algorithm based on twin network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110222792A true CN110222792A (en) | 2019-09-10 |
Family
ID=67813956
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910538938.1A Pending CN110222792A (en) | 2019-06-20 | 2019-06-20 | A kind of label defects detection algorithm based on twin network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110222792A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292333A (en) * | 2017-06-05 | 2017-10-24 | 浙江工业大学 | A kind of rapid image categorization method based on deep learning |
CN108198200A (en) * | 2018-01-26 | 2018-06-22 | 福州大学 | The online tracking of pedestrian is specified under across camera scene |
CN108388927A (en) * | 2018-03-26 | 2018-08-10 | 西安电子科技大学 | Small sample polarization SAR terrain classification method based on the twin network of depth convolution |
CN108805200A (en) * | 2018-06-08 | 2018-11-13 | 中国矿业大学 | Optical remote sensing scene classification method and device based on the twin residual error network of depth |
CN109117744A (en) * | 2018-07-20 | 2019-01-01 | 杭州电子科技大学 | A kind of twin neural network training method for face verification |
CN109446889A (en) * | 2018-09-10 | 2019-03-08 | 北京飞搜科技有限公司 | Object tracking method and device based on twin matching network |
CN109508655A (en) * | 2018-10-28 | 2019-03-22 | 北京化工大学 | The SAR target identification method of incomplete training set based on twin network |
CN109840556A (en) * | 2019-01-24 | 2019-06-04 | 浙江大学 | A kind of image classification recognition methods based on twin network |
Non-Patent Citations (5)
Title |
---|
Shi Lulu et al.: "Object Tracking Based on a Tiny Darknet Fully Convolutional Siamese Network", Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition) *
Zhang Xueqin et al.: "Fast Plant Image Recognition Based on Deep Learning", Journal of East China University of Science and Technology (Natural Science Edition) *
Wen Changbao et al.: "Theory and Applications of Artificial Neural Networks", 31 March 2019 *
Li Peixiu et al.: "Applied Research on Label Defect Detection Based on the Caffe Deep Learning Framework", Journal of China Academy of Electronics and Information Technology *
Chen Shoubing et al.: "Person Re-identification Based on Siamese Network and Re-ranking", Journal of Computer Applications *
Cited By (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113051962B (en) * | 2019-12-26 | 2022-11-04 | 四川大学 | Pedestrian re-identification method based on a twin Margin-Softmax network combined with an attention mechanism |
CN113051962A (en) * | 2019-12-26 | 2021-06-29 | 四川大学 | Pedestrian re-identification method based on a twin Margin-Softmax network combined with an attention mechanism |
CN111105411A (en) * | 2019-12-30 | 2020-05-05 | 创新奇智(青岛)科技有限公司 | Magnetic shoe surface defect detection method |
CN111179251A (en) * | 2019-12-30 | 2020-05-19 | 上海交通大学 | Defect detection system and method based on twin neural network and by utilizing template comparison |
CN111179251B (en) * | 2019-12-30 | 2021-04-02 | 上海交通大学 | Defect detection system and method based on twin neural network and by utilizing template comparison |
CN111105411B (en) * | 2019-12-30 | 2023-06-23 | 创新奇智(青岛)科技有限公司 | Magnetic shoe surface defect detection method |
CN111402203A (en) * | 2020-02-24 | 2020-07-10 | 杭州电子科技大学 | Fabric surface defect detection method based on convolutional neural network |
CN111402203B (en) * | 2020-02-24 | 2024-03-01 | 杭州电子科技大学 | Fabric surface defect detection method based on convolutional neural network |
CN111275137B (en) * | 2020-03-26 | 2023-07-18 | 南京工业大学 | Tea verification method based on exclusive twin network model |
CN111275137A (en) * | 2020-03-26 | 2020-06-12 | 南京工业大学 | Tea true-checking method based on exclusive twin network model |
CN111428800A (en) * | 2020-03-30 | 2020-07-17 | 南京工业大学 | Tea true-checking method based on 0-1 model |
CN111428800B (en) * | 2020-03-30 | 2023-07-18 | 南京工业大学 | Tea verification method based on 0-1 model |
CN111598839A (en) * | 2020-04-22 | 2020-08-28 | 浙江工业大学 | Wrist bone grade classification method based on twin network |
CN111612763A (en) * | 2020-05-20 | 2020-09-01 | 重庆邮电大学 | Mobile phone screen defect detection method, device and system, computer equipment and medium |
CN111612763B (en) * | 2020-05-20 | 2022-06-03 | 重庆邮电大学 | Mobile phone screen defect detection method, device and system, computer equipment and medium |
CN111709920A (en) * | 2020-06-01 | 2020-09-25 | 深圳市深视创新科技有限公司 | Template defect detection method |
CN111768387A (en) * | 2020-06-30 | 2020-10-13 | 创新奇智(青岛)科技有限公司 | Flaw detection method, twin neural network training method and device and electronic equipment |
CN111754513B (en) * | 2020-08-07 | 2024-03-22 | 腾讯科技(深圳)有限公司 | Product surface defect segmentation method, defect segmentation model learning method and device |
CN111754513A (en) * | 2020-08-07 | 2020-10-09 | 腾讯科技(深圳)有限公司 | Product surface defect segmentation method, defect segmentation model learning method and device |
CN111986186B (en) * | 2020-08-25 | 2024-03-22 | 华中科技大学 | Quantitative in-furnace PCB patch defect online detection system and method |
CN111986186A (en) * | 2020-08-25 | 2020-11-24 | 华中科技大学 | Quantitative on-line detection system and method for defects of PCB (printed Circuit Board) patches in front of furnace |
CN112183224A (en) * | 2020-09-07 | 2021-01-05 | 北京达佳互联信息技术有限公司 | Model training method for image recognition, image recognition method and device |
CN112232269A (en) * | 2020-10-29 | 2021-01-15 | 南京莱斯网信技术研究院有限公司 | Twin network-based intelligent ship identity identification method and system |
CN112232269B (en) * | 2020-10-29 | 2024-02-09 | 南京莱斯网信技术研究院有限公司 | Ship identity intelligent recognition method and system based on twin network |
CN112308148A (en) * | 2020-11-02 | 2021-02-02 | 创新奇智(青岛)科技有限公司 | Defect category identification and twin neural network training method, device and storage medium |
CN112417918B (en) * | 2020-11-13 | 2022-03-18 | 珠海格力电器股份有限公司 | Two-dimensional code identification method and device, storage medium and electronic equipment |
CN112417918A (en) * | 2020-11-13 | 2021-02-26 | 珠海格力电器股份有限公司 | Two-dimensional code identification method and device, storage medium and electronic equipment |
CN112330666B (en) * | 2020-11-26 | 2022-04-29 | 成都数之联科技股份有限公司 | Image processing method, system, device and medium based on improved twin network |
CN112330666A (en) * | 2020-11-26 | 2021-02-05 | 成都数之联科技有限公司 | Image processing method, system, device and medium based on improved twin network |
CN112465045A (en) * | 2020-12-02 | 2021-03-09 | 东莞理工学院 | Supply chain exception event detection method based on twin neural network |
CN113128518A (en) * | 2021-03-30 | 2021-07-16 | 西安理工大学 | Sift mismatch detection method based on twin convolution network and feature mixing |
CN113128518B (en) * | 2021-03-30 | 2023-04-07 | 西安理工大学 | Sift mismatch detection method based on twin convolution network and feature mixing |
CN113112483A (en) * | 2021-04-16 | 2021-07-13 | 合肥科大智能机器人技术有限公司 | Rigid contact net defect detection method and system based on similarity measurement |
CN112990234A (en) * | 2021-04-28 | 2021-06-18 | 广东西尼科技有限公司 | Method for detecting super-resolution small sample data based on improved twin network |
CN113160200A (en) * | 2021-04-30 | 2021-07-23 | 聚时科技(上海)有限公司 | Industrial image defect detection method and system based on multitask twin network |
CN113160200B (en) * | 2021-04-30 | 2024-04-12 | 聚时科技(上海)有限公司 | Industrial image defect detection method and system based on multi-task twin network |
CN113324993B (en) * | 2021-05-19 | 2022-09-27 | 吉林大学 | Omnibearing medicine bottle appearance defect detection method |
CN113324993A (en) * | 2021-05-19 | 2021-08-31 | 吉林大学 | Omnibearing medicine bottle appearance defect detection method |
CN113255611A (en) * | 2021-07-05 | 2021-08-13 | 浙江师范大学 | Twin network target tracking method based on dynamic label distribution and mobile equipment |
CN113612733B (en) * | 2021-07-07 | 2023-04-07 | 浙江工业大学 | Twin network-based few-sample false data injection attack detection method |
CN113612733A (en) * | 2021-07-07 | 2021-11-05 | 浙江工业大学 | Twin network-based few-sample false data injection attack detection method |
CN114049507A (en) * | 2021-11-19 | 2022-02-15 | 国网湖南省电力有限公司 | Distribution network line insulator defect identification method, equipment and medium based on twin network |
CN113903043B (en) * | 2021-12-11 | 2022-05-06 | 绵阳职业技术学院 | Method for identifying printed Chinese character font based on twin metric model |
CN113903043A (en) * | 2021-12-11 | 2022-01-07 | 绵阳职业技术学院 | Method for identifying printed Chinese character font based on twin metric model |
CN114757900B (en) * | 2022-03-31 | 2023-04-07 | 绍兴柯桥奇诺家纺用品有限公司 | Artificial intelligence-based textile defect type identification method |
CN114757900A (en) * | 2022-03-31 | 2022-07-15 | 启东新朋莱纺织科技有限公司 | Artificial intelligence-based textile defect type identification method |
CN114898472A (en) * | 2022-04-26 | 2022-08-12 | 华南理工大学 | Signature identification method and system based on twin vision Transformer network |
CN114898472B (en) * | 2022-04-26 | 2024-04-05 | 华南理工大学 | Signature identification method and system based on twin vision transducer network |
CN115375691A (en) * | 2022-10-26 | 2022-11-22 | 济宁九德半导体科技有限公司 | Image-based semiconductor diffusion paper source defect detection system and method thereof |
CN116128798B (en) * | 2022-11-17 | 2024-02-27 | 台州金泰精锻科技股份有限公司 | Finish forging method for bell-shaped shell forging face teeth |
CN116128798A (en) * | 2022-11-17 | 2023-05-16 | 台州金泰精锻科技股份有限公司 | Finish forging process for bell-shaped shell forged surface teeth |
CN117455852A (en) * | 2023-10-19 | 2024-01-26 | 北京闪电侠科技有限公司 | Image quality judging and processing method and system based on twin network |
CN117541563A (en) * | 2023-11-22 | 2024-02-09 | 泸州老窖股份有限公司 | Image defect detection method, device, computer equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110222792A (en) | A kind of label defects detection algorithm based on twin network | |
Huang et al. | Development and validation of a deep learning algorithm for the recognition of plant disease | |
CN105825511B (en) | A kind of picture background clarity detection method based on deep learning | |
CN105488536B (en) | A kind of agricultural pests image-recognizing method based on multiple features depth learning technology | |
CN102609681B (en) | Face recognition method based on dictionary learning models | |
CN104992223A (en) | Dense population estimation method based on deep learning | |
CN109063649A (en) | Pedestrian's recognition methods again of residual error network is aligned based on twin pedestrian | |
CN109886947A (en) | The high-tension bus-bar defect inspection method of convolutional neural networks based on region | |
US11605163B2 (en) | Automatic abnormal cell recognition method based on image splicing | |
CN110533057A (en) | A kind of Chinese character method for recognizing verification code under list sample and few sample scene | |
CN107203606A (en) | Text detection and recognition methods under natural scene based on convolutional neural networks | |
CN111540006A (en) | Plant stomata intelligent detection and identification method and system based on deep migration learning | |
CN109086826A (en) | Wheat Drought recognition methods based on picture depth study | |
CN108985360A (en) | Hyperspectral classification method based on expanding morphology and Active Learning | |
CN110287806A (en) | A kind of traffic sign recognition method based on improvement SSD network | |
CN109614866A (en) | Method for detecting human face based on cascade deep convolutional neural networks | |
CN107679453A (en) | Weather radar electromagnetic interference echo recognition methods based on SVMs | |
CN112949517B (en) | Plant stomata density and opening degree identification method and system based on deep migration learning | |
CN114898472A (en) | Signature identification method and system based on twin vision Transformer network | |
CN113838009A (en) | Abnormal cell detection false positive inhibition method based on semi-supervision mechanism | |
CN113989536A (en) | Tomato disease identification method based on cuckoo search algorithm | |
CN108509953A (en) | A kind of TV station symbol detection recognition method | |
CN108230313A (en) | Based on the adaptively selected SAR image object detection method with discrimination model of component | |
Fu et al. | Complementarity-aware Local-global Feature Fusion Network for Building Extraction in Remote Sensing Images | |
Peng et al. | Application of deep residual neural network to water meter reading recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20190910 |
|