CN109711433A - A fine-grained classification method based on meta-learning - Google Patents
A fine-grained classification method based on meta-learning
- Publication number
- CN109711433A (application CN201811451465.3A)
- Authority
- CN
- China
- Prior art keywords
- training
- sample
- convolutional neural network
- fine-grained classification
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The present invention discloses a fine-grained classification method based on meta-learning, comprising the steps of: establishing an external data set and dividing it into a training set, a validation set and a test set, where the sample classes of the three sets are mutually disjoint and the test set contains fewer classes than the training set; applying data augmentation to the samples in the data set; building a convolutional neural network whose input is a color image and whose output is the class of that image, where the length of the classification layer equals the number of classes in the external data set and the loss function is the softmax loss; training the fine-grained classification network on the training set; and testing the pre-trained convolutional neural network on the test set and fine-tuning it according to the test results. The method quickly produces a good general initialization model that achieves good classification performance with only a few samples on related but previously unseen test classes, solving the fine-grained classification problem when no large data set is available.
Description
Technical field
The invention belongs to the technical field of computing, in particular the computer-vision field of fine-grained classification, and relates to a fine-grained classification method based on meta-learning.
Background art
Fine-grained image recognition is a challenging task in image classification: the goal is to correctly identify a target among the many subclasses of a single major class. In general, fine-grained image classification works by locating subtle local regions and classifying the original image using the features of those regions. However, current fine-grained classification algorithms essentially all rely on generic models (such as VGG16), which limits both the structure of the classification model and its transferability. We therefore use a meta-learning method to quickly produce a good general initialization model and then perform fine-grained classification on top of this initialization model.
Existing fine-grained classification algorithms are all trained on large data sets, from which they obtain relatively good results. In that setting, a large data set must supply the deep neural network with a large number of samples so that it can extract rich features. If we instead need to perform fine-grained classification on a small-sample data set, traditional fine-grained classification methods cannot be applied.
Deep neural networks improve recognition accuracy, but training them takes a substantial amount of time and computing resources, so in general a conventionally pre-trained neural network is fine-tuned instead. The structure of such a conventional network is fixed, however, which is unfavorable to diverse model choices. On the other hand, fine-grained image classification differs from conventional classification problems in that it focuses on distinguishing local regions of an image, so its requirements on the model differ from those of general classification algorithms. A simple and accurate learning method is needed to better solve the fine-grained image classification problem, and this invention was conceived for that purpose.
Summary of the invention
The purpose of the present invention is to provide a fine-grained classification method based on meta-learning that can quickly produce a good general initialization model, achieve good classification performance with only a few samples on related but previously unseen test classes, and solve the fine-grained classification problem when no large data set is available.
In order to achieve the above objectives, the solution of the invention is as follows:
A fine-grained classification method based on meta-learning, comprising the following steps:
Step 1: establish an external data set from a publicly available fine-grained classification database or from data collected independently; divide the data set into a training set, a validation set and a test set, where the test set contains fewer sample classes than the training set and the sample classes of the training set, validation set and test set are mutually disjoint;
Step 2: apply data augmentation to the samples in the data set, using at least one of the following modes: translation, scaling, rotation, flipping;
Step 3: build a convolutional neural network whose input is a color image and whose output is the class of that image, where the length of the classification layer equals the number of classes in the external data set and the loss function is the softmax loss; train the fine-grained classification network on the training set;
Step 4: test the convolutional neural network pre-trained in step 3 on the test set, and fine-tune the convolutional neural network according to the test results.
In step 1 above, the fine-grained classification database may be the Caltech-UCSD Birds 200 data set or DogNet.
In step 1 above, all pictures in the data set are scaled to the input size of the convolutional neural network.
In step 3 above, the detailed procedure for training the fine-grained classification network on the training set is: sample the training set, randomly collecting N*K samples as one sample set; update the weights by stochastic gradient descent; feed the updated weights into a new sample set and compute the error; and apply reverse gradient descent to the summed error with the Adam optimizer. The loss decreases steadily with back-propagation of the error while the training accuracy rises; when the loss converges and no longer decreases, save the convolutional neural network model, obtaining the pre-trained convolutional neural network.
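The softmax loss named in step 3 is the standard softmax cross-entropy. As a rough sketch (the function name, shapes, and toy values below are illustrative, not taken from the patent), it can be computed as follows:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Softmax loss over a batch.

    logits: (batch, num_classes) raw classification-layer outputs.
    labels: (batch,) integer class indices.
    Returns the mean negative log-likelihood.
    """
    # Shift by the row max for numerical stability.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy example: 2 samples, 3 classes, both predicted fairly confidently.
logits = np.array([[2.0, 0.5, 0.1], [0.2, 3.0, 0.3]])
labels = np.array([0, 1])
loss = softmax_cross_entropy(logits, labels)
```

The length of the classification layer (here `num_classes`) would equal the number of classes in the external data set, as the method specifies.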
In step 4 above, when sampling the test set, K samples are collected from each of N distinct classes and fed into the pre-trained convolutional neural network for testing.
The convolutional neural network uses a modular top-down and bottom-up structure. Each top-down and bottom-up module realizes down-sampling followed by up-sampling through four consecutive convolutions, and then fuses the resulting features with the feature map of the preceding module. The feature map output by the last top-down and bottom-up module is fed into a fully connected layer to obtain the feature vector that is input to the classification layer.
The top-down, bottom-up module comprises two convolutions and two deconvolutions: the input feature map is first passed through two convolutional layers of stride 2, yielding a 2*2 output feature map, and then through two deconvolutional layers of stride 2, yielding a 6*6 output feature map.
When the data classes to be tested need to change, a test set whose sample classes are entirely different from those of the training set is first fed into the pre-trained convolutional neural network; the learning rate of the meta-learner is then set to 0 while the learning rate of the task learner is kept unchanged; the convolutional neural network model is fine-tuned by stochastic gradient descent; and finally the test-set data are fed into the fine-tuned convolutional neural network for testing. This makes it possible to change the test-set sample classes online without retraining the trained convolutional neural network.
After adopting the above scheme, the invention has the following advantages:
(1) The invention proposes a method that quickly produces an initialization model and tests it on samples of different classes. A meta-learner dynamically controls a task learner, and by sharing weights between the two learners the method produces a general initialization model quickly and accurately; for the test set, the task learner fine-tunes the trained model. Compared with conventional methods that retrain on the samples, this is simple to operate, computationally light, and able to solve the small-sample fine-grained classification problem;
(2) The invention performs feature extraction with a top-down, bottom-up convolutional neural network; the top-down, bottom-up modules are built by connecting several strided convolutional layers, which improves network performance, reduces the number of parameters and the computation, and makes the network more robust and more widely applicable.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 is a structural schematic diagram of the top-down, bottom-up module.
Specific embodiment
The present invention provides a fine-grained classification method based on meta-learning, comprising the following steps:
Establish an external data set: build the external data set from a publicly available fine-grained classification database or from data collected independently; for example, the fine-grained classification database may be the Caltech-UCSD Birds 200 (CUB-200) data set or DogNet. Every picture should carry an identity label indicating which class it belongs to. As many individuals as possible should be collected, each with as many samples as possible, while keeping the number of mislabeled samples in the data set low: increasing the number of samples and classes improves training precision. The test set contains fewer sample classes than the training set, and its sample classes are completely disjoint from those of the training set. One large data set may be divided into a training set, a test set and a validation set with completely disjoint classes, or a larger data set may be used as the training set and a related smaller data set selected as the test set.
Data augmentation: training a deep neural network on a small-sample data set leads to over-fitting, and the number of training samples is generally far smaller than the number needed; manual data augmentation can reduce over-fitting. Four augmentation methods are commonly used to expand the data set: translation, scaling, rotation, flipping.
Train the model: a convolutional neural network serves as the feature extractor; its input is a color image, its output is the class of the picture, the length of the classification layer equals the number of classes in the external data set, and the loss function can be the softmax loss. Note that the meta-learning network is trained on the training set. Assuming N-way, K-shot classification, sample the training set, randomly collecting N*K samples as one sample set; update the weights by simple stochastic gradient descent; feed the updated weights into a new sample set and compute the error; and apply reverse gradient descent to the summed error with the Adam optimizer. The loss decreases steadily with back-propagation of the error while the training accuracy rises; when the loss converges and no longer decreases, save the convolutional neural network model.
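The training loop described above (an inner SGD update on one sample set, then an outer update driven by the error on a new sample set) can be sketched in a first-order form on a toy regression task. This is only a minimal stand-in: the scalar model, the hyperparameters, and the replacement of the Adam optimizer by plain gradient steps are all assumptions for illustration, not the patent's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A toy stand-in for an N-way/K-shot task: fit y = a*x for a random slope a."""
    return rng.uniform(0.5, 2.0)

def sample_set(a, n=10):
    xs = rng.uniform(-1.0, 1.0, size=n)
    return xs, a * xs

def mse_grad(w, xs, ys):
    # d/dw of mean((w*x - y)^2) for the scalar model y_hat = w*x.
    return 2.0 * np.mean((w * xs - ys) * xs)

w_meta = 0.0                       # the shared "general initialization model"
inner_lr, outer_lr = 0.5, 0.1
for step in range(500):
    a = sample_task()
    xs_s, ys_s = sample_set(a)     # first sample set: inner SGD weight update
    w_task = w_meta - inner_lr * mse_grad(w_meta, xs_s, ys_s)
    xs_q, ys_q = sample_set(a)     # new sample set: its error drives the outer update
    # First-order outer step (plain SGD standing in for the Adam optimizer).
    w_meta = w_meta - outer_lr * mse_grad(w_task, xs_q, ys_q)

# After meta-training, a single inner step should already fit a new task well.
a = sample_task()
xs, ys = sample_set(a)
w = w_meta - inner_lr * mse_grad(w_meta, xs, ys)
adapted_mse = np.mean((w * xs - ys) ** 2)
```

The point of the sketch is the structure, not the numbers: the initialization `w_meta` converges to a value from which one gradient step adapts well to any task in the family, mirroring the "good general initialization model" the method aims for.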
Transfer learning: sample the test set in the same way as the training set, collecting K samples from each of N distinct classes and feeding them into the pre-trained neural network for testing; fine-tune the pre-trained network with the test-set sample set by applying simple stochastic gradient descent to the test samples, which already yields good results.
To obtain a more efficient convolutional neural network, top-down and bottom-up structures are used in the network. Each top-down and bottom-up module realizes down-sampling followed by up-sampling through four consecutive convolutions, then fuses the resulting features with the feature map of the preceding module; the feature map output by the last top-down and bottom-up module is fed into a fully connected layer to obtain the feature vector that is input to the classification layer.
The technical solution of the present invention is described in detail below with reference to the drawings and specific embodiments.
Fig. 1 gives the flow chart of fine-grained classification according to the present invention, which comprises the following steps.
Step 1: establish an external data set. Using the DogNet database as the external data set, first divide it into a training set, a validation set and a test set whose sample classes are mutually disjoint: the 120-class data set is divided into three sample sets, of which the training set contains 80 classes, the validation set 20 classes, and the test set 20 classes. All pictures are scaled to the input size of the convolutional neural network. If the external data set is obtained from other data sources, the same processing applies: the training-set and test-set sample classes must be completely disjoint and the pictures must meet the input-size requirement of the neural network.
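The class-disjoint 80/20/20 split described in this step can be sketched as follows; the class names are placeholders, with only the 120-class structure mirrored:

```python
import random

classes = [f"class_{i:03d}" for i in range(120)]  # 120 classes, as in the DogNet example
random.seed(0)
random.shuffle(classes)

train_classes = set(classes[:80])     # 80 classes for meta-training
val_classes   = set(classes[80:100])  # 20 classes for validation
test_classes  = set(classes[100:])    # 20 classes for testing

# The splits must be disjoint at the *class* level, so every test class
# is entirely unseen during training — the key requirement of step 1.
disjoint = (train_classes.isdisjoint(val_classes)
            and train_classes.isdisjoint(test_classes)
            and val_classes.isdisjoint(test_classes))
```

Splitting by class rather than by sample is what distinguishes this setup from an ordinary train/test split: it forces the model to generalize to new categories, not merely new images.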
Step 2: data augmentation. Each picture in the data set obtained above is rotated by 90, 180 and 270 degrees. Training a deep neural network on a small-sample data set leads to over-fitting, and the number of training samples is generally far smaller than the number needed; manual data augmentation can reduce over-fitting, so rotation is used here for data augmentation.
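The rotation augmentation above (each picture kept plus its 90-, 180- and 270-degree rotations, quadrupling the set) can be sketched with NumPy; the image here is a stand-in array, not real data:

```python
import numpy as np

def augment_with_rotations(image):
    """Return the original image plus its 90/180/270-degree rotations."""
    return [np.rot90(image, k) for k in range(4)]  # k=0 is the unrotated original

# Stand-in for one 84x84 color picture (height, width, channels).
image = np.arange(84 * 84 * 3).reshape(84, 84, 3)
augmented = augment_with_rotations(image)
```

Because the inputs are square, every rotated copy keeps the 84*84 size the network expects, so no re-scaling is needed after augmentation.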
Step 3: build the convolutional neural network and train the fine-grained classification network with the training set as the sample set. The invention involves a more efficient convolutional neural network, shown in Fig. 2. The input of the neural network is an 84*84-pixel color image. The color picture first passes through three convolutional layers of stride 1, producing an 84*84 feature map, which is then passed through four convolutional layers of stride 2 to produce a 6*6 feature map; this 6*6 feature map is the input of the top-down, bottom-up module. The top-down, bottom-up module comprises two convolutions and two deconvolutions: the input feature map is first passed through two convolutional layers of stride 2, yielding a 2*2 output feature map, which is then passed through two deconvolutional layers of stride 2, yielding a 6*6 output feature map; this output feature map is added to the input feature map of the top-down, bottom-up module to give the final output map. The resulting 32-channel 6*6 feature map is fed into a fully connected layer, yielding a vector whose dimension equals the number of test-set classes to be classified. During training, the classification layer outputs the class of the training picture and the error is computed and back-propagated; during testing, the model is fine-tuned on the test set, the fine-tuned model classifies the test set, and the classification accuracy is computed.
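The feature-map sizes quoted in this step (84*84 in, 6*6 into the module, 2*2 at its bottleneck, 6*6 out) can be checked with the standard convolution size formulas. The patent does not give kernel sizes or paddings, so the values below are assumptions chosen to reproduce the quoted sizes:

```python
def conv_out(n, k, s, p):
    """Spatial size after a convolution: floor((n + 2p - k)/s) + 1."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s, p, op=0):
    """Spatial size after a transposed convolution ('deconvolution')."""
    return (n - 1) * s - 2 * p + k + op

n = 84
for _ in range(3):                 # three stride-1 convs keep 84*84
    n = conv_out(n, k=3, s=1, p=1)
size_after_stride1 = n             # 84

for _ in range(4):                 # four stride-2 convs: 84 -> 42 -> 21 -> 11 -> 6
    n = conv_out(n, k=3, s=2, p=1)
module_in = n                      # 6

for _ in range(2):                 # module: two stride-2 convs: 6 -> 3 -> 2
    n = conv_out(n, k=3, s=2, p=1)
bottleneck = n                     # 2

n = deconv_out(n, k=3, s=2, p=1)        # 2 -> 3
n = deconv_out(n, k=3, s=2, p=1, op=1)  # 3 -> 6 (output_padding assumed)
module_out = n                     # 6, matching the module input for the residual add
```

The 6*6 module output matching the 6*6 module input is what allows the element-wise addition described above; any kernel/padding combination with the same size arithmetic would serve equally well.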
The present invention also provides a simple method for changing the test-set sample classes: each time fine-grained classification is performed on the test set, the number of classes to be classified can be set dynamically. Each time a new test set is to be classified, the classes to be classified are first set, the test set is fed into the trained convolutional neural network, the model is fine-tuned with a simple stochastic gradient descent algorithm, and the fine-tuned model is then used to test the test set.
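Freezing the meta-learner (learning rate 0) while the task learner keeps its learning rate, as described earlier, amounts to a few plain SGD steps on the new test episode. The sketch below shows this on a scalar toy model; the pretrained value, both learning rates, and the episode itself are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def mse(w, xs, ys):
    return np.mean((w * xs - ys) ** 2)

def mse_grad(w, xs, ys):
    return 2.0 * np.mean((w * xs - ys) * xs)

# Stand-in for the saved, meta-trained initialization.
w = 1.2
meta_lr = 0.0   # meta-learner frozen: learning rate set to 0, so it never moves
task_lr = 0.3   # task learner keeps its learning rate

# An unseen test "episode": K samples from a new class (here a new slope).
a = 1.9
xs = rng.uniform(-1.0, 1.0, size=8)
ys = a * xs

loss_before = mse(w, xs, ys)
for _ in range(20):                       # a few fine-tuning SGD steps
    w = w - task_lr * mse_grad(w, xs, ys)
    # (the frozen meta-learner would update here with meta_lr, i.e. not at all)
loss_after = mse(w, xs, ys)
```

Only the cheap task-learner update runs at test time, which is why the method can change the test classes online without any retraining of the saved network.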
In summary, the fine-grained classification method based on meta-learning of the present invention first uses a smaller sample set as the training set and quickly produces a good initialization model by meta-learning; it then fine-tunes this trained initialization model with a small-sample test set whose classes do not overlap those of the training samples at all, and finally tests the model on the test set, obtaining good results. The model is well suited to situations where the sample set to be tested is small.
The present invention produces a good initialization model from the training set: a meta-learner and a task learner alternate weight updates through a weight-sharing method, so that the model converges quickly. Test-set samples whose classes are completely disjoint from those of the training set are then fed into the trained initialization model, the network is fine-tuned by simple stochastic gradient descent, and the trained network is used for class testing on the test set.
With the present invention, the number of classes of the samples to be tested can be changed dynamically: when performing initialization training with the training set, the number of classes can be set manually according to the number of classes of the samples to be tested, and the model is then trained by meta-learning. To test with this pre-trained model, it is only necessary to fine-tune it on the test set.
The fine-grained classifier includes a top-down, bottom-up structure. Fine-grained classification differs from the conventional image classification problem in paying more attention to differences between local regions, so the top-down, bottom-up structure is used to strengthen local regions of the image and improve classification performance.
The above embodiments only illustrate the technical idea of the present invention and do not limit its scope of protection; any change made on the basis of the technical scheme according to the technical idea provided by the invention falls within the scope of the present invention.
Claims (9)
1. A fine-grained classification method based on meta-learning, characterized by comprising the following steps:
Step 1: establish an external data set from a publicly available fine-grained classification database or from data collected independently; divide the data set into a training set, a validation set and a test set, where the test set contains fewer sample classes than the training set and the sample classes of the training set, validation set and test set are mutually disjoint;
Step 2: apply data augmentation to the samples in the data set;
Step 3: build a convolutional neural network whose input is a color image and whose output is the class of that image, where the length of the classification layer equals the number of classes in the external data set and the loss function is the softmax loss; train the fine-grained classification network on the training set;
Step 4: test the convolutional neural network pre-trained in step 3 on the test set, and fine-tune the convolutional neural network according to the test results.
2. The fine-grained classification method based on meta-learning of claim 1, characterized in that: in step 1, the fine-grained classification database is the Caltech-UCSD Birds 200 data set or DogNet.
3. The fine-grained classification method based on meta-learning of claim 1, characterized in that: in step 1, all pictures in the data set are scaled to the input size of the convolutional neural network.
4. The fine-grained classification method based on meta-learning of claim 1, characterized in that: in step 3, the detailed procedure for training the fine-grained classification network on the training set is: sample the training set, randomly collecting N*K samples as one sample set; update the weights by stochastic gradient descent; feed the updated weights into a new sample set and compute the error; apply reverse gradient descent to the summed error with the Adam optimizer; the loss decreases steadily with back-propagation of the error while the training accuracy rises, and when the loss converges and no longer decreases, save the convolutional neural network model, obtaining the pre-trained convolutional neural network.
5. The fine-grained classification method based on meta-learning of claim 4, characterized in that: in step 4, when sampling the test set, K samples are collected from each of N distinct classes and fed into the pre-trained convolutional neural network for testing.
6. The fine-grained classification method based on meta-learning of claim 1, characterized in that: the convolutional neural network uses a modular top-down and bottom-up structure; each top-down and bottom-up module realizes down-sampling followed by up-sampling through four consecutive convolutions, then fuses the resulting features with the feature map of the preceding module; the feature map output by the last top-down and bottom-up module is fed into a fully connected layer to obtain the feature vector that is input to the classification layer.
7. The fine-grained classification method based on meta-learning of claim 6, characterized in that: the top-down, bottom-up module comprises two convolutions and two deconvolutions; the input feature map is first passed through two convolutional layers of stride 2, yielding a 2*2 output feature map, and then through two deconvolutional layers of stride 2, yielding a 6*6 output feature map.
8. The fine-grained classification method based on meta-learning of claim 1, characterized in that: when the data classes to be tested need to change, a test set whose sample classes are entirely different from those of the training set is first fed into the pre-trained convolutional neural network; the learning rate of the meta-learner is then set to 0 while the learning rate of the task learner is kept unchanged; the convolutional neural network model is fine-tuned by stochastic gradient descent; and finally the test-set data are fed into the fine-tuned convolutional neural network for testing.
9. The fine-grained classification method based on meta-learning of claim 1, characterized in that: in step 2, data augmentation is performed using at least one of the following modes: translation, scaling, rotation, flipping.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811451465.3A CN109711433A (en) | 2018-11-30 | 2018-11-30 | A fine-grained classification method based on meta-learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109711433A (en) | 2019-05-03 |
Family
ID=66255276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811451465.3A Pending CN109711433A (en) | 2018-11-30 | 2018-11-30 | A fine-grained classification method based on meta-learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109711433A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112116002A (en) * | 2020-09-18 | 2020-12-22 | 北京旋极信息技术股份有限公司 | Determination method, verification method and device of detection model |
CN112508004A (en) * | 2020-12-18 | 2021-03-16 | 北京百度网讯科技有限公司 | Character recognition method and device, electronic equipment and storage medium |
CN112766388A (en) * | 2021-01-25 | 2021-05-07 | 深圳中兴网信科技有限公司 | Model acquisition method, electronic device and readable storage medium |
CN113052934A (en) * | 2021-03-16 | 2021-06-29 | 南开大学 | Nuclear magnetic resonance image motion artifact correction based on convolutional neural network |
CN113408554A (en) * | 2020-03-16 | 2021-09-17 | 阿里巴巴集团控股有限公司 | Data processing method, model training method, device and equipment |
CN116152525A (en) * | 2023-04-20 | 2023-05-23 | 北京大数据先进技术研究院 | Image classification model selection system based on remote procedure call and meta learning |
CN117688455A (en) * | 2024-02-04 | 2024-03-12 | 湘江实验室 | Meta-task small sample classification method based on data quality and reinforcement learning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106485324A (en) * | 2016-10-09 | 2017-03-08 | 成都快眼科技有限公司 | A kind of convolutional neural networks optimization method |
CN107316063A (en) * | 2017-06-26 | 2017-11-03 | 厦门理工学院 | Multiple labeling sorting technique, device, medium and computing device |
- 2018-11-30: CN CN201811451465.3A patent/CN109711433A/en active Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190503 |