CN108647702A - Large-scale food material image classification method based on transfer learning - Google Patents

Large-scale food material image classification method based on transfer learning

Info

Publication number
CN108647702A
CN108647702A (application CN201810332217.0A)
Authority
CN
China
Prior art keywords
food materials
image
classification
tree
transfer learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810332217.0A
Other languages
Chinese (zh)
Other versions
CN108647702B (en)
Inventor
肖光意
刘欢
刘毅
吴淇
黄宗杰
陈浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University
Priority to CN201810332217.0A
Publication of CN108647702A
Application granted
Publication of CN108647702B
Legal status: Active
Anticipated expiration

Classifications

    • G06F18/2411 - Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24323 - Tree-organised classifiers
    • G06N3/08 - Neural networks; Learning methods


Abstract

The invention discloses a large-scale food material image classification method based on transfer learning, comprising the following steps: an original image is input and, using the knowledge obtained through transfer learning, two kinds of information are output, one being a recognition result for the receiving environment and the other being a recognition result for the food material type; after forward propagation through the multi-task Prior-Tree CNN model, the joint feature produced by the last fully connected layer is fed into the different classification tasks. In the large-scale food material image classification method based on transfer learning of the present invention, the convolutional neural network model is improved on the basis of transfer learning, and a Prior-Tree CNN model is proposed to learn the class structure and the classifier parameters; the improved model offers high efficiency, high accuracy and broad applicability.

Description

Large-scale food material image classification method based on transfer learning
Technical field
The present invention relates to a large-scale food material image classification method based on transfer learning, and belongs to the technical field of food material image classification.
Background art
With the rapid development of deep learning, image classification has become one of the most active research directions in computer vision at home and abroad and is being applied in more and more fields. Starting from real application scenarios, how to build an image classification method with high accuracy and good robustness is a key problem that image classification technology must solve on its way to practical application.
Training a classifier requires a large number of samples, which becomes a problem when few training examples are available, especially under severe class imbalance. For such cases, the Priori-Tree CNN model is presented, which exploits the inherent structure of a group of classes in a real receiving-environment data set. For example, suppose four food materials need to be purchased: big red pepper and screw pepper, with millet pepper related to pointed pepper, but only one image of screw pepper is available; it is then hard to train a classifier to distinguish screw pepper from the hundreds of pictures of the other three categories. With the labeled examples of these related categories, the model can learn the screw-pepper task more easily from a single example in a zero-shot manner: by transferring "knowledge" from related categories to the new category, only the characteristic features of screw pepper need to be understood. Traditional image classification methods include KNN and SVM. KNN classifies by measuring the distance between feature values; its drawbacks include a heavy computational load and a large prediction bias when the samples are imbalanced. SVM is a supervised two-class model whose basic form is the linear classifier with the maximum margin in feature space; its drawbacks are that the high-dimensional mapping induced by the kernel function, especially the radial basis function, is hard to interpret, and an algorithm designed only for binary classification has difficulty with multi-class problems. In view of these problems in the prior art, the present invention improves the convolutional neural network (CNN) model on the basis of transfer learning and proposes the Prior-Tree CNN model to learn the class structure and the classifier parameters; it makes full use of the large amount of existing labeled data of related categories and efficiently transfers the "knowledge" learned from related categories, so that the classifier trained on a large sample set improves the accuracy and robustness of object classification on small sample sets. A Meal-53 small-sample data set is also established, the aim being to improve the classification accuracy when only a few samples are available.
Summary of the invention
To solve the above problems, the present invention proposes a large-scale food material image classification method based on transfer learning, in which the convolutional neural network (CNN) model is improved on the basis of transfer learning and a Prior-Tree CNN model is proposed to learn the class structure and the classifier parameters; the improved model offers high efficiency, high accuracy and broad applicability.
In the large-scale food material image classification method based on transfer learning of the present invention, the classification method comprises the following steps: an original image is input and, using the knowledge obtained through transfer learning, two kinds of information are output, one being a recognition result for the receiving environment and the other being a recognition result for the food material type; after forward propagation through the multi-task Prior-Tree CNN model, the joint feature produced by the last fully connected layer is fed into the different classification tasks.
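This two-branch structure can be illustrated with a minimal sketch: a shared backbone produces the joint feature of the last fully connected layer, and two heads consume it, a binary head for the receiving-environment result and a K-way head for the food material type. The sketch uses PyTorch and torchvision purely for illustration (the experiments described later use a modified caffe framework), and the class names, the feature dimension and the default K = 51 are assumptions.

```python
# Illustrative sketch only: a shared backbone with two task heads, assuming
# PyTorch/torchvision; the patent itself uses a modified caffe implementation.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskPriorTreeCNN(nn.Module):
    def __init__(self, num_food_classes: int = 51, feat_dim: int = 512):
        super().__init__()
        backbone = models.resnet50(weights=None)   # pre-trained weights may be loaded here
        backbone.fc = nn.Identity()                # keep the 2048-d pooled feature
        self.backbone = backbone
        self.joint_fc = nn.Linear(2048, feat_dim)  # last fully connected "joint feature"
        self.env_head = nn.Linear(feat_dim, 2)     # clean / non-clean receiving environment
        self.food_head = nn.Linear(feat_dim, num_food_classes)  # food material type

    def forward(self, x):
        feat = torch.relu(self.joint_fc(self.backbone(x)))
        return self.env_head(feat), self.food_head(feat)
```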
Further, the classification method specifically comprises the following steps. A manually designed tree structure is added on top of the multi-task classification CNN model: the class nodes are first arranged according to this tree structure, so that a newly added subclass can directly obtain the features of its parent class before being retrained; in this way, even with little data, a newly added subclass can achieve good performance. The model can also be viewed as a set of task-specific softmax losses. After the joint loss layer and the shared visual features are optimized, the CNN model propagates the relevant parameters during back-propagation. Each collected food material image is annotated with a label c ∈ {0, 1} indicating whether it is a picture taken in a clean receiving environment, and with a multi-class label k ∈ {1, ..., K} indicating the food material type, where K is the number of all categories. Finally, the whole network and its parameters are trained iteratively until convergence.
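A training step consistent with this description could combine a softmax cross-entropy term for the clean/non-clean label c with one for the food material label k and back-propagate through the shared parameters; the equal 1:1 weighting of the two losses and the helper name train_step are assumptions rather than values stated in the text.

```python
# Sketch of one joint training step for the two labels c in {0,1} and k in {1..K};
# the 1:1 loss weighting is an assumption.
import torch.nn.functional as F

def train_step(model, optimizer, images, env_labels, food_labels):
    optimizer.zero_grad()
    env_logits, food_logits = model(images)
    loss = F.cross_entropy(env_logits, env_labels) + F.cross_entropy(food_logits, food_labels)
    loss.backward()      # gradients flow into the shared backbone and the joint feature
    optimizer.step()
    return loss.item()
```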
In practice, the learning process is almost identical to ordinary gradient descent. At test time, it is first verified whether the image is a clean receiving-environment picture; if the image is predicted to be non-clean, the system takes a new image and repeats the process until a clean image is found within the given time. Secondly, if the image is predicted to belong to the vegetable category, the model filters out the other labels outside the vegetable branch in a given order. Finally, by combining the Priori Tree CNN model with information such as the order and the weight, the clean image yields the food material category and its prediction score.
Still further, the Prior-Tree CNN model is formulated as follows.
It is first assumed that a priori tree is available and that all food material categories form a three-layer tree. The priori tree has K + 1 leaf nodes in total, corresponding to the K food material class labels and 1 ignored label. These leaf nodes are connected to the 7 class nodes of the second layer: the K food material categories have 4 parent classes, namely S1, S2, S3 and S4, and the receiving-environment categories are divided into 3 groups, namely N1, N2 and N3. With this hierarchical classification, if an input image is classified as a vegetable with high confidence, the other categories are filtered out in order before prediction, so that categories such as meat and aquatic products need not be considered, which further improves the accuracy of the food material type recognition task.
To simplify the Priori Tree model, only the labels of the K classes and their relationship to the four parent nodes are considered. Before the softmax function, each leaf label node k is associated with a weight vector β_k, and each superclass node s is associated with a vector α_s, where s ∈ {1, 2, 3, 4}; for example, β_cabbage and β_carrot record the deviation from α_vegetable. The following generative model is defined for β:
This prior expresses the relationship between the categories, and the conditional distribution over k is written as:
The values of {W, β, α} are inferred by MAP estimation, i.e., the following is maximized:
log P(k | I, t, W, β) + log P(k | I, W, β) + log P(W) + log P(β | α) + log P(α);   (3)
From the perspective of the loss function, the following is minimized:
Here, if the value of α is fixed at 0, the loss function reduces to the standard loss function. Let C_s = {k | parent(k) = s}; formula (5) then gives the update of α. The loss function in formula (4) is therefore optimized by iterating the following two steps: first, W and β are optimized with α fixed, using standard stochastic gradient descent (SGD) on the standard loss function; secondly, with β fixed, α is maximized according to formula (5).
Compared with the prior art, in the large-scale food material image classification method based on transfer learning of the present invention, the convolutional neural network model is improved on the basis of transfer learning and a Prior-Tree CNN model is proposed to learn the class structure and the classifier parameters. The improved model has the following advantages. High efficiency: a multi-task CNN model is proposed to classify trustworthy/untrustworthy receiving-environment images, which reduces the noise in the Meal-53 data set and makes more effective use of the features of the training samples. High accuracy: the improved model can still recognize a category correctly when its training data are scarce. Broad applicability: the improved CNN model solves problems by learning and is not limited to one specific problem; it can automatically build a model according to the problem at hand and solve similar problems.
Description of the drawings
Fig. 1 is a schematic diagram of the overall framework of the Prior-Tree CNN of the present invention, in which (a) shows the Prior-Tree CNN model and (b) shows the tree structure.
Fig. 2 is a schematic diagram of filtering the predicted labels by superclass in the present invention, in which (a) shows the priori tree and (b) shows the order-based classification.
Fig. 3 shows the accuracy verified with 1-crop on the Mealcome data set of the present invention.
Fig. 4 shows the accuracy verified with 10-crops on the Mealcome data set of the present invention.
Fig. 5 shows example samples predicted by the Priori Tree model of the present invention.
Detailed description of the embodiments
In the large-scale food material image classification method based on transfer learning of the present invention, the classification method comprises the following steps. To enhance the classification performance of the model, and to enable the model to recognize a category correctly even when its training data are scarce, the Prior-Tree CNN model is proposed. As shown in Fig. 1, an original image I_i is input and, using the knowledge obtained through transfer learning, two kinds of information are output, one being a recognition result for the receiving environment and the other being a recognition result for the food material type; after forward propagation through the multi-task Prior-Tree CNN model, the joint feature produced by the last fully connected layer is fed into the different classification tasks.
The classification method specifically comprises the following steps. A manually designed tree structure is added on top of the multi-task classification CNN model: the class nodes are first arranged according to this tree structure, so that a newly added subclass can directly obtain the features of its parent class before being retrained; in this way, even with little data, a newly added subclass can achieve good performance. The model can be viewed as a set of task-specific softmax losses. After the joint loss layer and the shared visual features are optimized, the CNN model propagates the relevant parameters during back-propagation. Each collected food material image is annotated with a label c ∈ {0, 1} indicating whether it is a picture taken in a clean receiving environment, and with a multi-class label k ∈ {1, ..., K} indicating the food material type, where K is the number of all categories. Finally, the whole network and its parameters are trained iteratively until convergence.
In practice, the learning process is almost identical to ordinary gradient descent. At test time, it is first verified whether the image is a clean receiving-environment picture; if the image is predicted to be non-clean, the system takes a new image and repeats the process until a clean image is found within the given time. Secondly, if the image is predicted to belong to the vegetable category, the model filters out the other labels outside the vegetable branch in a given order. Finally, by combining the Priori Tree CNN model with information such as the order and the weight, the clean image yields the food material category and its prediction score.
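The test-time procedure just described can be sketched as a simple loop: keep taking pictures until one is predicted clean (or a time budget runs out), then predict the food material with superclass-based filtering. In the sketch, capture_image() and predict_superclass() are hypothetical placeholders for the camera interface and the superclass decision, and filter_by_superclass() is sketched after the tree description below.

```python
# Illustrative test-time loop; the helper functions passed in are hypothetical
# placeholders, not part of the patent's specification.
import time
import torch

def predict_food(model, capture_image, predict_superclass, filter_by_superclass,
                 time_budget_s=30.0):
    deadline = time.time() + time_budget_s
    while time.time() < deadline:
        image = capture_image()                          # take a new receiving-environment photo
        with torch.no_grad():
            env_logits, food_logits = model(image.unsqueeze(0))
        if env_logits.argmax(dim=1).item() != 1:         # assumed convention: index 1 = clean picture
            continue                                     # non-clean image: shoot again
        superclass = predict_superclass(food_logits[0])  # e.g. "S1_vegetable" with high confidence
        scores = filter_by_superclass(food_logits[0], superclass)
        k = int(scores.argmax())
        return k, float(scores[k])                       # food material class and its prediction score
    return None                                          # no clean image found within the given time
```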
The Prior-Tree CNN model is formulated as follows.
It is first assumed that a priori tree is available and that all food material categories form a three-layer tree. As shown in Fig. 2, the priori tree has K + 1 leaf nodes in total, corresponding to the K food material class labels and 1 ignored label. These leaf nodes are connected to the 7 class nodes of the second layer: the K food material categories have 4 parent classes, i.e., they are divided into 4 groups (S1, S2, S3, S4), and the receiving-environment categories are divided into 3 groups (N1, N2, N3). With this hierarchical classification, if an input image is classified as a vegetable with high confidence, the other categories are filtered out in order before prediction, so that categories such as meat and aquatic products need not be considered, which further improves the accuracy of the food material type recognition task.
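One simple way to realize this three-layer priori tree, and the filter_by_superclass step used in the previous sketch, is to store for each superclass the set of leaf labels beneath it and to mask the logits of every other leaf once a superclass has been chosen with high confidence. The group names and index ranges below are illustrative only (the class counts follow the Meal-53 composition of 32 vegetable, 15 poultry/egg, 3 aquatic and 1 bean category described later); the patent does not prescribe a data structure.

```python
# Sketch of the priori tree as a superclass -> leaf-label mapping, plus logit masking.
# The concrete label indices per group are placeholders, not the patent's assignment.
import torch

PRIORI_TREE = {
    "S1_vegetable": list(range(0, 32)),     # e.g. 32 vegetable classes
    "S2_poultry_egg": list(range(32, 47)),  # e.g. 15 poultry/egg classes
    "S3_aquatic": list(range(47, 50)),      # e.g. 3 aquatic-product classes
    "S4_bean": list(range(50, 51)),         # e.g. 1 bean/bean-product class
}

def filter_by_superclass(food_logits: torch.Tensor, superclass: str) -> torch.Tensor:
    """Keep only the leaves under the chosen superclass; mask the rest to -inf."""
    masked = torch.full_like(food_logits, float("-inf"))
    idx = torch.tensor(PRIORI_TREE[superclass])
    masked[idx] = food_logits[idx]
    return masked
```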
To simplify the Priori Tree model, only the labels of the K classes and their relationship to the four parent nodes are considered. Before the softmax function, each leaf label node k is associated with a weight vector β_k, and each superclass node s is associated with a vector α_s, where s ∈ {1, 2, 3, 4}; for example, β_cabbage and β_carrot record the deviation from α_vegetable. The following generative model is defined for β:
This prior expresses the relationship between the categories, and the conditional distribution over k is written as:
The values of {W, β, α} are inferred by MAP estimation, i.e., the following is maximized:
log P(k | I, t, W, β) + log P(k | I, W, β) + log P(W) + log P(β | α) + log P(α);   (3)
From the perspective of the loss function, the following is minimized:
Here, if the value of α is fixed at 0, the loss function reduces to the standard loss function. Let C_s = {k | parent(k) = s}; formula (5) then gives the update of α. The loss function in formula (4) is therefore optimized by iterating the following two steps: first, W and β are optimized with α fixed, using standard stochastic gradient descent (SGD) on the standard loss function; secondly, with β fixed, α is maximized according to formula (5).
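Since equations (1), (2), (4) and (5) appear above only by number, the LaTeX sketch below writes out one standard tree-based-prior formulation that is consistent with the surrounding text, namely Gaussian deviations of β_k around α_parent(k), a softmax over the K leaves, and a closed-form update of α from the child β vectors. The variances σ1 and σ2 and the exact regularization terms are assumptions, not values taken from the patent.

```latex
% Hedged reconstruction of a tree-based prior consistent with the text;
% \sigma_1, \sigma_2 and the regularisers are assumptions.
\begin{align}
\alpha_s &\sim \mathcal{N}(0,\sigma_1^2 I), \qquad
\beta_k \sim \mathcal{N}\!\bigl(\alpha_{\mathrm{parent}(k)},\sigma_2^2 I\bigr)
  && \text{(generative model for } \beta\text{, cf. (1))} \\
P(k \mid I, W, \beta) &=
  \frac{\exp\!\bigl(\beta_k^{\top}\phi_W(I)\bigr)}
       {\sum_{j=1}^{K}\exp\!\bigl(\beta_j^{\top}\phi_W(I)\bigr)}
  && \text{(conditional distribution over } k\text{, cf. (2))} \\
\mathcal{L}(W,\beta,\alpha) &=
  -\sum_{i}\log P\bigl(k_i \mid I_i, W, \beta\bigr)
  + \frac{1}{2\sigma_2^2}\sum_{k}\bigl\lVert \beta_k-\alpha_{\mathrm{parent}(k)}\bigr\rVert^2
  + \frac{1}{2\sigma_1^2}\sum_{s}\lVert \alpha_s\rVert^2
  && \text{(loss to minimise, cf. (4))} \\
\alpha_s &= \frac{1}{\lvert C_s\rvert + \sigma_2^2/\sigma_1^2}\sum_{k\in C_s}\beta_k,
  \qquad C_s=\{k \mid \mathrm{parent}(k)=s\}
  && \text{(closed-form update of } \alpha\text{, cf. (5))}
\end{align}
```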
Embodiment 1:
A ResNet-50 multi-task experiment is trained as the baseline experiment, and the order information, the weight features and the transfer learning technique are then used to progressively improve the recognition accuracy of the food materials. The experimental method of the Prior-Tree CNN model is as follows.
A. Data set and experimental environment
The data set comes from Mealcome, one of the largest chain food-supply platforms in China (www.mealcome.com), which serves nearly 1000 restaurants. 15020 clean receiving-environment pictures and 15557 dirty pictures were chosen to build the Meal-53 data set. The clean pictures are divided into 51 food material categories and the dirty pictures into 3 categories; all labels were added manually. Among the clean pictures, the number of images per category ranges from 106 to 895. For the dirty pictures, the "dark" category is merged into "other", so the dirty pictures are divided into two parts, "unopened bag" and "other", with 11382 and 4157 images respectively. The training, test and validation sets account for 70%, 15% and 15% of the data. Because of the imbalance of the data set, oversampling is used when training the ResNet models so that each class has 500 training images and 100 test images. Both the clean and the dirty image data are used to train the multi-task deep CNN model.
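The oversampling used to balance the classes can be sketched as resampling each class with replacement up to the quota of 500 training images mentioned above; the (path, class) list representation, the helper name and the fixed random seed are illustrative assumptions.

```python
# Illustrative class balancing by oversampling with replacement to a fixed quota.
import random
from collections import defaultdict

def oversample(samples, quota=500, seed=0):
    """samples: list of (image_path, class_id); returns a list with at least `quota` items per class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, cls in samples:
        by_class[cls].append((path, cls))
    balanced = []
    for cls, items in by_class.items():
        balanced.extend(items)
        need = max(quota - len(items), 0)
        balanced.extend(rng.choice(items) for _ in range(need))  # duplicate random images of this class
    return balanced
```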
The clean pictures in the Meal-53 data set carry order information; there are 5026 orders, each containing between 10 and 35 food material pictures, which is also imbalanced. Within each order, the food material images likewise carry weight information. The 5026 orders are also divided into training, test and validation sets in the same ratios as the data set above. These data sets are used to train the order-based models (Order Dropout, Order Weighting and Priori Tree); during training, testing and validation, the order and weight information is merged into the image labels.
In the experiments, a modified version of the caffe deep-learning framework is used, and ResNet-50, which has been pre-trained and can therefore reduce the training time and improve the accuracy, is chosen as the deep-learning model. In all models, the weight lr_mult of the last InnerProduct layer is set to 10.0 and that of the bias term to 20.0 during training; the batch size for training and testing is 16 and the momentum is 0.9. All experiments are run on an Intel(R) i7 CPU with 32 GB RAM and a GeForce GTX 1080Ti.
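The stated layer-wise multipliers (weight lr_mult 10.0 and bias 20.0 on the last InnerProduct layer) and the momentum of 0.9 map directly onto optimizer parameter groups; the sketch below expresses this outside caffe for the two-head model sketched earlier, with the base learning rate of 0.001 chosen as an assumption.

```python
# Sketch of the fine-tuning setup: 10x / 20x learning rate on the final layer,
# momentum 0.9. base_lr = 0.001 and the choice of food_head as the "last layer"
# are assumptions tied to the earlier model sketch.
import torch

def build_optimizer(model, base_lr=0.001):
    head = model.food_head                           # final classification layer
    head_params = {id(p) for p in head.parameters()}
    shared = [p for p in model.parameters() if id(p) not in head_params]
    return torch.optim.SGD(
        [
            {"params": shared},
            {"params": [head.weight], "lr": base_lr * 10.0},  # lr_mult 10.0 for the weight
            {"params": [head.bias], "lr": base_lr * 20.0},    # lr_mult 20.0 for the bias
        ],
        lr=base_lr,
        momentum=0.9,
    )
```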
B. Baseline and evaluation method
The multi-task deep-learning models ResNet and CNN, which complete the two tasks simultaneously, are used as the baseline.
The data shown in Fig. 3 are mean accuracies. For the receiving-environment recognition task, the recognition ratios of clean and dirty pictures can be obtained, namely the integrity ratio and the non-integrity ratio, both of which are true positive rates (TPR). For the food material classification task, the probability of each food material category can be obtained; the top-k hit rate and the recall rate are evaluated with the following formula,
where N_i, i ∈ {0, 1, ..., 50}, is the number of images of the i-th food material category, N_ki is the top-k probability of the i-th food material category on the test set, and n equals 50.
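Since the evaluation formula itself appears above only by reference, the sketch below computes a per-class top-k hit rate and averages it over the classes, which is one plausible reading of the N_i and N_ki symbols defined above; the exact aggregation used in the patent may differ.

```python
# Per-class top-k hit rate averaged over classes; one plausible reading of the
# N_i / N_ki definitions, not necessarily the exact formula of the patent.
import torch

def mean_topk_hit_rate(logits: torch.Tensor, labels: torch.Tensor, k: int = 1) -> float:
    """logits: (N, C) scores, labels: (N,) ground-truth class ids."""
    topk = logits.topk(k, dim=1).indices                    # (N, k) predicted classes
    hit = (topk == labels.unsqueeze(1)).any(dim=1).float()  # 1 if the true class is in the top k
    rates = []
    for c in labels.unique():
        mask = labels == c
        rates.append(hit[mask].mean())                      # per-class hit rate, N_kc / N_c
    return float(torch.stack(rates).mean())
```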
C. Experimental results and analysis
The experimental results of ResNet, Baseline, the Order Weighting model and the Priori Tree model are shown in Fig. 3 and Fig. 4. Fig. 3 shows the results obtained with the 1-crop verification method. It can be concluded that, in the receiving-environment recognition task, the Priori Tree model further improves the integrity ratio and the non-integrity ratio to 94.41% and 92.86% respectively. Likewise, the Priori Tree model achieves the best Top-1 and Top-3 accuracies of 99.12% and 99.94%, while the Top-5 accuracy reaches almost the same result of 100% on all the models. The results in Fig. 4 are obtained with the 10-crops verification method; it can be seen that this method performs better than the 1-crop verification method, with the integrity ratio and the non-integrity ratio of the Priori Tree model further increased to 94.84% and 94.56% respectively. Fig. 5 shows the recognition results of the baseline and the Priori Tree model on some food materials, where the "Baseline" tag under each sample image denotes the recognition result of the baseline and "Priori Tree" denotes that of the priori tree model. The above results show that the Priori Tree CNN with transfer learning improves the recognition accuracy of food materials.
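The 10-crops verification mentioned above averages predictions over ten crops of each test image (four corners, the center, and their horizontal flips); the sketch below shows this with torchvision's TenCrop transform, where the resize to 256, the crop size of 224 and the omission of normalization are assumptions.

```python
# Illustrative 10-crop evaluation: average softmax scores over the ten crops.
import torch
from torchvision import transforms

ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),  # 4 corners + center, plus horizontal flips
    transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(c) for c in crops])),
])

def predict_10crop(model, pil_image):
    crops = ten_crop(pil_image)                    # (10, 3, 224, 224)
    with torch.no_grad():
        _, food_logits = model(crops)              # reuse the two-head model sketch
    return food_logits.softmax(dim=1).mean(dim=0)  # averaged class probabilities
```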
In the large-scale food material image classification method based on transfer learning of the present invention, aimed at categories with only a few labeled samples in a food material data set, the convolutional neural network (CNN) model is improved on the basis of transfer learning and a Prior-Tree CNN model is proposed to learn the food material category structure and the classifier parameters. The method makes full use of the labeled data of the large number of existing related categories, efficiently transfers the knowledge learned from related categories, and thereby uses the classifier trained on a large sample set to improve the accuracy and robustness of object classification on small sample sets. The Meal-53 small-sample data set is established, which contains 32 vegetable categories, 15 poultry and egg categories, 3 aquatic-product categories and 1 bean/bean-product category, as well as 3 categories of dirty pictures, namely unopened bag, dark and other.
The convolutional neural network (CNN) model is improved on the basis of transfer learning, and a Prior-Tree CNN model is proposed to learn the class structure and the classifier parameters. The improved model has the following advantages. High efficiency: a multi-task CNN model is proposed to classify trustworthy/untrustworthy receiving-environment images, which reduces the noise in the Meal-53 data set and makes more effective use of the features of the training samples. High accuracy: the improved model can still recognize a category correctly when its training data are scarce. Broad applicability: the improved CNN model solves problems by learning and is not limited to one specific problem; it can automatically build a model according to the problem at hand and solve similar problems.
The above embodiment is only a preferred embodiment of the present invention; therefore, all equivalent changes or modifications made according to the construction, features and principles described in the scope of the present patent application are included within the scope of the present patent application.

Claims (3)

1. A large-scale food material image classification method based on transfer learning, characterized in that the classification method comprises the following steps: an original image is input and, using the knowledge obtained through transfer learning, two kinds of information are output, one being a recognition result for the receiving environment and the other being a recognition result for the food material type; after forward propagation through the multi-task Prior-Tree CNN model, the joint feature produced by the last fully connected layer is fed into the different classification tasks.
2. The large-scale food material image classification method based on transfer learning according to claim 1, characterized in that the classification method specifically comprises the following steps: a manually designed tree structure is added on top of the multi-task classification CNN model; according to this tree structure, the class nodes are first arranged, so that a newly added subclass can directly obtain the features of its parent class; after the joint loss layer and the shared visual features are optimized, the CNN model propagates the relevant parameters during back-propagation; each collected food material image is annotated with a label c ∈ {0, 1} indicating whether it is a picture taken in a clean receiving environment, and with a multi-class label k ∈ {1, ..., K} indicating the food material type, where K is the number of all categories; finally, the whole network and its parameters are trained iteratively until convergence;
at test time, it is first verified whether the image is a clean receiving-environment picture; if the image is predicted to be non-clean, the system takes a new image and repeats the process until a clean image is found within the given time; secondly, if the image is predicted to belong to the vegetable category, the model filters out the other labels outside the vegetable branch in a given order; finally, by combining the Priori Tree CNN model with information such as the order and the weight, the clean image yields the food material category and its prediction score.
3. The large-scale food material image classification method based on transfer learning according to claim 1 or 2, characterized in that the Prior-Tree CNN model is formulated as follows:
it is first assumed that a priori tree is available and that all food material categories form a three-layer tree; the priori tree has K + 1 leaf nodes in total, corresponding to the K food material class labels and 1 ignored label; these leaf nodes are connected to the 7 class nodes of the second layer, the K food material categories having 4 parent classes, namely S1, S2, S3 and S4, and the receiving-environment categories being divided into 3 groups, namely N1, N2 and N3; with this
hierarchical classification, if an input image is classified as a vegetable with high confidence, the other categories are filtered out in order before prediction, so that categories such as meat and aquatic products need not be considered;
before the softmax function, each leaf label node k is associated with a weight vector β_k, and each superclass node s is associated with a vector α_s, where s ∈ {1, 2, 3, 4}; for example, β_cabbage and β_carrot record the deviation from α_vegetable; the following generative model is defined for β:
this prior expresses the relationship between the categories, and the conditional distribution over k is written as:
the values of {W, β, α} are inferred by MAP estimation, i.e., the following is maximized:
log P(k | I, t, W, β) + log P(k | I, W, β) + log P(W) + log P(β | α) + log P(α);   (3)
from the perspective of the loss function, the following is minimized:
here, if the value of α is fixed at 0, the loss function reduces to the standard loss function; let C_s = {k | parent(k) = s}, with formula (5) then giving the update of α; the loss function in formula (4) is therefore optimized by iterating the following two steps: first, W and β are optimized with α fixed, using standard stochastic gradient descent on the standard loss function; secondly, with β fixed, α is maximized according to formula (5).
CN201810332217.0A - filed 2018-04-13 (priority 2018-04-13) - Large-scale food material image classification method based on transfer learning - Active - granted as CN108647702B

Priority Applications (1)

CN201810332217.0A (granted as CN108647702B) - Large-scale food material image classification method based on transfer learning


Publications (2)

CN108647702A - published 2018-10-12
CN108647702B - published 2021-06-01

Family

ID=63746074

Family Applications (1)

CN201810332217.0A - filed 2018-04-13, priority 2018-04-13 - Active - granted as CN108647702B - Large-scale food material image classification method based on transfer learning

Country Status (1)

Country Link
CN (1) CN108647702B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409442A (en) * 2018-11-21 2019-03-01 电子科技大学 Convolutional neural networks model selection method in transfer learning
CN110598636A (en) * 2019-09-09 2019-12-20 哈尔滨工业大学 Ship target identification method based on feature migration
CN110807472A (en) * 2019-10-12 2020-02-18 北京达佳互联信息技术有限公司 Image recognition method and device, electronic equipment and storage medium
CN112001315A (en) * 2020-08-25 2020-11-27 中国人民解放军海军军医大学第一附属医院 Bone marrow cell classification and identification method based on transfer learning and image texture features
WO2020238293A1 (en) * 2019-05-30 2020-12-03 华为技术有限公司 Image classification method, and neural network training method and apparatus
CN112200249A (en) * 2020-10-13 2021-01-08 湖南大学 Autonomous updating solution of unmanned intelligent container
CN112257761A (en) * 2020-10-10 2021-01-22 天津大学 Method for analyzing food nutrient components in image based on machine learning


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170032221A1 (en) * 2015-07-29 2017-02-02 Htc Corporation Method, electronic apparatus, and computer readable medium of constructing classifier for disease detection
CN105894041A (en) * 2016-04-26 2016-08-24 国网山东省电力公司经济技术研究院 Method of extracting substation information in power distribution system based on hyperspectral remote sensing images
CN107563439A (en) * 2017-08-31 2018-01-09 湖南麓川信息科技有限公司 A kind of model for identifying cleaning food materials picture and identification food materials class method for distinguishing
CN107391772A (en) * 2017-09-15 2017-11-24 国网四川省电力公司眉山供电公司 A kind of file classification method based on naive Bayesian
CN107679582A (en) * 2017-10-20 2018-02-09 深圳市唯特视科技有限公司 A kind of method that visual question and answer are carried out based on multi-modal decomposition model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Eric Tzeng et al.: "Simultaneous Deep Transfer Across Domains and Tasks", IEEE International Conference on Computer Vision *
Ding Shuyan et al.: "Research on Traffic Sign Classification Algorithm Based on Tchebichef Invariant Moments and SVM", Mechatronics *
Shang Chaoxuan et al.: "Feature-Level Fusion Recognition Algorithm Based on Class Decision Tree Classification", Control and Decision *
Ma Ying et al.: "Software Defect Prediction Technology Based on Data Mining", Xiamen University Press *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409442A (en) * 2018-11-21 2019-03-01 电子科技大学 Convolutional neural networks model selection method in transfer learning
WO2020238293A1 (en) * 2019-05-30 2020-12-03 华为技术有限公司 Image classification method, and neural network training method and apparatus
CN110598636A (en) * 2019-09-09 2019-12-20 哈尔滨工业大学 Ship target identification method based on feature migration
CN110598636B (en) * 2019-09-09 2023-01-17 哈尔滨工业大学 Ship target identification method based on feature migration
CN110807472A (en) * 2019-10-12 2020-02-18 北京达佳互联信息技术有限公司 Image recognition method and device, electronic equipment and storage medium
CN110807472B (en) * 2019-10-12 2022-08-12 北京达佳互联信息技术有限公司 Image recognition method and device, electronic equipment and storage medium
CN112001315A (en) * 2020-08-25 2020-11-27 中国人民解放军海军军医大学第一附属医院 Bone marrow cell classification and identification method based on transfer learning and image texture features
CN112001315B (en) * 2020-08-25 2024-01-19 中国人民解放军海军军医大学第一附属医院 Bone marrow cell classification and identification method based on migration learning and image texture characteristics
CN112257761A (en) * 2020-10-10 2021-01-22 天津大学 Method for analyzing food nutrient components in image based on machine learning
CN112200249A (en) * 2020-10-13 2021-01-08 湖南大学 Autonomous updating solution of unmanned intelligent container

Also Published As

CN108647702B - published 2021-06-01

Similar Documents

CN108647702A - Large-scale food material image classification method based on transfer learning
Pérez-Rúa et al. - MFAS: Multimodal Fusion Architecture Search
CN108564029B - Face attribute recognition method based on a cascaded multi-task learning deep neural network
CN113159095B - Model training method, image retrieval method and device
CN106203523B - Hyperspectral image classification method based on gradient-boosted decision trees fused with a semi-supervised algorithm
CN107301380A - Pedestrian re-identification method for video surveillance scenes
CN109961089A - Few-shot and zero-shot image classification method based on metric learning and meta-learning
CN108764308A - Pedestrian re-identification method based on a convolutional recurrent network
CN108537136A - Pedestrian re-identification method based on pose-normalized image generation
CN106779087A - General-purpose machine learning data analysis platform
CN104933428B - Face recognition method and device based on tensor description
CN110221965A - Test case generation and test method, device, equipment and system
CN108334849A - Pedestrian re-identification method based on Riemannian manifolds
CN104992142A - Pedestrian recognition method combining deep learning and attribute learning
CN107766933A - Visualization method for interpreting convolutional neural networks
CN109598186A - Pedestrian attribute recognition method based on multi-task deep learning
CN108734184A - Method and device for analyzing sensitive images
CN109492075B - Transfer learning ranking method based on a cycle-generative adversarial network
CN108416314A - Method for detecting important faces in pictures
CN110377727A - Multi-label text classification method and device based on multi-task learning
CN107633038A - Tea leaf recognition method and system based on image recognition technology
CN104966075B - Face recognition method and system based on two-dimensional discriminative features
CN113239801B - Cross-domain action recognition method based on multi-scale feature learning and multi-level domain alignment
CN110210550A - Fine-grained image recognition method based on an ensemble learning strategy
CN109145944A - Classification method based on longitudinal deep learning features of three-dimensional images

Legal Events

PB01 - Publication
SE01 - Entry into force of request for substantive examination
GR01 - Patent grant