CN104992191B - The image classification method of feature and maximum confidence path based on deep learning - Google Patents
- Publication number
- CN104992191B (application CN201510438236.8A)
- Authority
- CN
- China
- Prior art keywords
- layer
- image
- node
- class
- tree
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
Abstract
An image classification method based on deep-learning features and the maximum-confidence path, belonging to the field of pattern recognition. A convolutional neural network is trained on a sufficiently large image library; image features are extracted with the trained convolutional neural network model; the mean vector of each class is calculated; spectral clustering is applied iteratively to the mean vectors representing each class to build a visual tree; an SVM is trained for each non-leaf node of the tree; a given test image is processed top-down, the probability that the test image belongs to each child node is judged, and the leaf node with the maximum path probability is taken as the final target class. Extracting image features with a CNN yields good discriminability and robustness; a distance formula between two classes is given whose computational complexity is greatly reduced by derivation, yielding the similarity between classes, from which the visual tree is built by iterative spectral clustering; exploiting the visual relations between classes gives good results for large-scale image classification.
Description
Technical field
The invention belongs to the field of pattern recognition, and more particularly relates to an image classification method based on deep-learning features and the maximum-confidence path, applicable to large-scale image classification.
Background art
In the field of computer vision, image classification is an extremely important and very classical research problem. However, as the number of images and the number of image categories grow, large-scale image classification remains a very challenging task. With more images, the amount of computation grows, the time required grows, and the hardware requirements rise; if a single multi-class classifier is still trained by conventional methods as the final classification basis, a series of problems in computational complexity and accuracy will arise. It is therefore necessary to design a new classification framework and classification method.
Compared with the traditional image classification task, the difficulties of large-scale image classification are: (1) as the number of images and categories grows, the amount of computation grows with them, raising the hardware requirements; (2) identifying one target class among many target classes is much harder than identifying one among a few, because as the number of classes grows a phenomenon necessarily appears: some classes are very similar while others differ greatly, and these similar classes severely harm classification accuracy. Existing methods fall broadly into two categories. One builds a deep convolutional neural network with deep learning: a model is first constructed and its parameters are then tuned on a large amount of training data; such methods need large training sets, are computationally heavy, demand strong engineering skill, cannot expose the relations between categories, lack friendly visualization, and can only output classification results. The other builds a tree structure and classifies hierarchically; it visualizes the classification well, but because the tree structure, i.e. the relations between classes, is not fully exploited, no good scoring mechanism is provided. In addition, image feature dimensionality is high and the features lack discriminability and robustness, so the classification results are unsatisfactory.
Ning Zhou and Jianping Fan, in the document "Jointly Learning Visually Correlated Dictionaries for Large-scale Visual Recognition Applications", propose building a visual tree together with joint dictionary learning: similar classes are grouped by building the visual tree, and different nodes learn different dictionaries to increase the discriminability of the image representation. During classification, however, they do not fully exploit the relations between tree nodes; each layer simply propagates the single highest-scoring result downward, so any classification error in an upper layer makes the final result wrong. Moreover, the dictionaries learned by this method are not very discriminative, and the final accuracy falls far short of deep convolutional neural networks. The present invention therefore exploits the highly discriminative features of deep convolutional neural networks and, combining the relations between classes, constructs a good scoring mechanism that improves classification accuracy.
Summary of the invention
The object of the invention is to address the large computational cost and low classification accuracy of large-scale image classification by providing an image classification method based on deep-learning features and the maximum-confidence path.
The present invention comprises the following steps:
(1) Pre-train the convolutional neural network AlexNet on the ILSVRC2012 image library. The input of the network is an RGB image of size 227*227; the network consists of 5 convolutional layers, two fully connected layers, and an output layer. The first layer consists of a convolutional layer, a rectified linear activation layer (ReLU), and a pooling layer. Its convolution parameters (96, 11, 4, 0) denote 96 convolution kernels of size 11*11 with stride 4; the last element of the four-tuple, 0, means no zero padding, i.e. the original image size is kept. The feature maps formed after convolution pass through the ReLU layer, giving 96 feature maps of size 55*55, denoted 55*55*96; the pooling layer uses a 3*3 pooling kernel with stride 2, followed by normalization with size 5. The convolution parameters of the second layer are (256, 5, 1, 2); after the ReLU layer comes a pooling layer with a 3*3 kernel and stride 2, again followed by normalization with size 5. The convolution parameters of the third layer are (384, 3, 1, 1), followed by a ReLU layer into the fourth layer; the structure of the fourth layer is identical to the third. The convolution parameters of the fifth layer are (256, 3, 1, 1); after the ReLU layer comes a pooling layer with a 3*3 kernel and stride 2. The sixth and seventh layers are fully connected layers, each outputting a 4096-dimensional vector. The eighth layer is the output layer, also fully connected, outputting a 1000-dimensional vector that represents the probabilities of 1000 categories.
(2) For any large-scale image library, extract the features of all images in the library with the AlexNet trained in step (1); each image is represented by the output of the seventh, fully connected, layer of the network;
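As a quick sanity check on the layer sizes listed in step (1), the standard output-size formula (size + 2*pad - kernel)/stride + 1 reproduces the 55*55, 27*27, 13*13 and 6*6 spatial dimensions. The sketch below is illustrative only; `conv_out` is a helper defined here, not part of the patent.

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

s = conv_out(227, 11, 4, 0)   # conv1 (96,11,4,0) -> 55
assert s == 55
s = conv_out(s, 3, 2, 0)      # pool1, 3x3 kernel, stride 2 -> 27
s = conv_out(s, 5, 1, 2)      # conv2 (256,5,1,2)           -> 27
s = conv_out(s, 3, 2, 0)      # pool2                       -> 13
s = conv_out(s, 3, 1, 1)      # conv3 (384,3,1,1)           -> 13
s = conv_out(s, 3, 1, 1)      # conv4 (identical to conv3)  -> 13
s = conv_out(s, 3, 1, 1)      # conv5 (256,3,1,1)           -> 13
s = conv_out(s, 3, 2, 0)      # pool5                       -> 6
print(s * s * 256)            # 9216 inputs feeding the 4096-d sixth layer
```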
(3) For any class C_i in the library, with N_i sample images whose l-th image has feature I_l^i extracted by AlexNet, compute the mean vector Q_i of class i and the variance σ_i² of class i;
(4) Compute the distance between every two classes, forming a symmetric distance matrix D;
(5) Compute the similarity matrix A from the distance matrix D;
(6) Using the similarity matrix A, apply spectral clustering iteratively to build the visual tree T;
(7) Train a support vector machine (SVM) classifier for each cluster; together, all SVM classifiers form a tree-structured classifier;
(8) For any test image, apply the SVM classifiers layer by layer starting from the root node of the tree. Each SVM classifier gives a confidence score judging the probability that the test image belongs to each child node of the current node, down to the leaf nodes. The confidence scores of the nodes along the path between a leaf node and the root node are multiplied to give the confidence value of that path, with the probability at the root node set to 1. To accelerate this, each layer of the tree is filtered once, retaining only the K nodes with the highest confidence scores.
In step (3), the class mean vector Q_i is computed as

Q_i = (1/N_i) * Σ_{l=1}^{N_i} I_l^i

and the variance σ_i² of class i as

σ_i² = (1/N_i) * Σ_{l=1}^{N_i} ||I_l^i − Q_i||².

In step (4), the distance between two classes is computed as

D(C_i, C_j) = (1/(N_i·N_j)) * Σ_{l=1}^{N_i} Σ_{m=1}^{N_j} ||I_l^i − I_m^j||²

or, equivalently,

D(C_i, C_j) = ||Q_i − Q_j||² + σ_i² + σ_j²,

the latter formula being derived from the former.

In step (5), the inter-class similarity of the similarity matrix A is computed as

A(i, j) = exp(−D(C_i, C_j) / σ),

where σ is taken as the dimension of the image feature.
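The equivalence of the two forms of the class distance in step (4) — the O(N_i·N_j) pairwise average reducing to a closed form in the means and variances — can be verified numerically. The sketch below uses random toy features; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))        # class i: 40 samples, 8-d features
Y = rng.normal(size=(50, 8)) + 1.0  # class j: 50 samples, shifted mean

# brute force: average squared distance over all N_i * N_j pairs
brute = np.mean([[np.sum((x - y) ** 2) for y in Y] for x in X])

# closed form: ||Q_i - Q_j||^2 + sigma_i^2 + sigma_j^2
Qi, Qj = X.mean(axis=0), Y.mean(axis=0)
vi = np.mean(np.sum((X - Qi) ** 2, axis=1))   # sigma_i^2
vj = np.mean(np.sum((Y - Qj) ** 2, axis=1))   # sigma_j^2
closed = np.sum((Qi - Qj) ** 2) + vi + vj

assert np.allclose(brute, closed)  # identical up to floating-point error
```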
In step (6), the specific method of building the visual tree T by applying spectral clustering iteratively to the similarity matrix A may be: first apply spectral clustering to the similarity matrix A of all categories, forming K clusters, each containing several similar classes; then continue applying spectral clustering to the similarity sub-matrix of each cluster, stopping only when the maximum-depth condition of the tree or the minimum-class-membership condition of a cluster is met. A cluster corresponds to a non-leaf node of the tree and consists of several target classes; the leaf nodes of the tree are the target classes.
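The iterative clustering of step (6) can be sketched as recursive bisection. The patent uses full spectral clustering; in the NumPy-only sketch below, a Fiedler-vector split of the graph Laplacian stands in as a minimal approximation, and the toy similarity matrix is invented for illustration.

```python
import numpy as np

def spectral_bisect(A):
    """Split one cluster in two via the Fiedler vector of L = D - A
    (a minimal stand-in for full spectral clustering)."""
    L = np.diag(A.sum(axis=1)) - A
    _, v = np.linalg.eigh(L)
    f = v[:, 1]                       # eigenvector of 2nd-smallest eigenvalue
    return f >= np.median(f)

def build_visual_tree(A, classes, max_depth=3, min_size=2, depth=0):
    """Nested lists: inner lists are non-leaf nodes, ints are target classes."""
    if depth >= max_depth or len(classes) <= min_size:
        return list(classes)          # leaf: a small group of target classes
    mask = spectral_bisect(A)
    children = []
    for side in (mask, ~mask):
        idx = np.flatnonzero(side)
        children.append(build_visual_tree(A[np.ix_(idx, idx)],
                                          [classes[i] for i in idx],
                                          max_depth, min_size, depth + 1))
    return children

def leaves(t):
    out = []
    for x in t:
        if isinstance(x, list):
            out.extend(leaves(x))
        else:
            out.append(x)
    return out

# toy similarity: two tight groups of three classes each
A = np.full((6, 6), 0.05)
A[:3, :3] = A[3:, 3:] = 0.9
np.fill_diagonal(A, 1.0)
vt = build_visual_tree(A, list(range(6)), max_depth=2)
assert sorted(leaves(vt)) == list(range(6))  # every class ends in some leaf
```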
In step (8), one-versus-rest support vector machine classifiers are used, performing the following steps:
(8.1) When an SVM divides, it gives the confidence distance of the test image to each node of the layer. If the distance to node c_i at some layer is d, a logistic function maps the distance to a probability value between 0 and 1, computed as

P(c_i | parent(c_i)) = 1 / (1 + e^(−d)),

where parent(c_i) is the path up to the father node of c_i;
(8.2) The probability that the test image is assigned to node c_i is obtained through a Bayesian network as the probability of the path traversed from the root node to that node, computed as

P(c_i) = P(c_i | parent(c_i)) * P(parent(c_i))

where P(c_i) is the final probability of the path up to node c_i, and P(parent(c_i)) is the probability of the path parent(c_i) up to the father node of c_i;
(8.3) To accelerate computation and avoid traversing all paths, each layer of the tree keeps only the K intermediate nodes of highest probability.
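Steps (8.1)-(8.3) amount to a beam search that keeps the top-K nodes per layer while multiplying logistic-mapped SVM margins down the tree. The toy tree, node names, and margin values below are invented for illustration.

```python
import math

# toy tree: node -> children; nodes absent from the dict are leaves
tree = {'root': ['a', 'b'], 'a': ['cat', 'dog'], 'b': ['car', 'bus']}
# assumed SVM margin d of each child at its parent's classifier
margin = {'a': 1.2, 'b': -0.4, 'cat': 2.0, 'dog': -1.0, 'car': 0.3, 'bus': -0.3}

def sigmoid(d):
    return 1.0 / (1.0 + math.exp(-d))   # logistic map: margin -> (0, 1)

def classify(tree, margin, K=2):
    beam = [('root', 1.0)]              # P(root) = 1
    while any(node in tree for node, _ in beam):
        expanded = []
        for node, p in beam:
            if node not in tree:        # leaf already reached: carry forward
                expanded.append((node, p))
                continue
            for child in tree[node]:    # P(c) = P(c | parent) * P(parent)
                expanded.append((child, sigmoid(margin[child]) * p))
        beam = sorted(expanded, key=lambda t: -t[1])[:K]   # keep top-K
    return beam

best = classify(tree, margin, K=2)
assert best[0][0] == 'cat'              # maximum-confidence path root->a->cat
```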
The invention exploits the advantages of deep learning, extracting the output of the last fully connected layer of the convolutional neural network AlexNet as the image feature, building a visual tree, training the corresponding classifiers, and providing a corresponding scoring mechanism. The invention has the following outstanding advantages:
1. Image features are extracted with the convolutional neural network AlexNet and have good discriminability and robustness.
2. A distance formula between two classes is given that takes every sample into account, and its computational complexity is greatly reduced by derivation; the similarity between classes is then obtained, from which the visual tree is built by iterative spectral clustering.
3. An efficient scoring mechanism is given that fully exploits the visual relations between classes; experimental results show that the method of the invention performs well on large-scale image classification and has clear advantages over currently popular methods.
Brief description of the drawings
Fig. 1 is a flowchart of feature extraction with the convolutional neural network AlexNet according to the invention.
Fig. 2 is a flowchart of classifying a test image according to the invention.
Embodiment
With reference to Figs. 1 and 2, the implementation of the invention comprises three parts: extracting image features, building the visual tree and training the corresponding classifiers, and classifying test images according to the scoring mechanism proposed by the invention.
Step 1: train a convolutional neural network AlexNet.
Download a large image library, such as the ImageNet 2012 image classification challenge library, and train a convolutional neural network AlexNet.
Step 2: extract features.
Extract features from all images in the experimental database with the AlexNet trained in step 1, i.e. take the output of the seventh layer of the network as the image feature, for use in later computation.
Step 3: compute the similarity matrix.
(3a) Compute the mean vector Q_i = (1/N_i) Σ_{l=1}^{N_i} I_l^i and variance σ_i² = (1/N_i) Σ_{l=1}^{N_i} ||I_l^i − Q_i||² of each class, where I_l^i is the feature of the l-th image of the i-th class.
(3b) Use the formula D(C_i, C_j) = ||Q_i − Q_j||² + σ_i² + σ_j² to compute the distance between every two classes; after all distances are computed, a symmetric distance matrix can be constructed whose main-diagonal values are all 0.
(3c) Compute the similarity between two classes from their distance, with calculation formula A(i, j) = exp(−D(C_i, C_j)/σ), where σ is chosen as the image feature dimension, thereby constructing a symmetric similarity matrix A.
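Steps (3a)-(3c) can be sketched end-to-end: class means and variances, the closed-form distance matrix with a zeroed diagonal, and the exponential similarity with σ set to the feature dimension. The data and dimensions below are toy values for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
feats = [rng.normal(loc=c, size=(30, 16)) for c in range(5)]  # 5 toy classes

Q = np.stack([f.mean(axis=0) for f in feats])                  # class means
var = np.array([np.mean(np.sum((f - f.mean(axis=0)) ** 2, axis=1))
                for f in feats])                               # class variances

# D(i, j) = ||Q_i - Q_j||^2 + var_i + var_j, with the diagonal set to zero
D = np.sum((Q[:, None] - Q[None]) ** 2, axis=-1) + var[:, None] + var[None]
np.fill_diagonal(D, 0.0)

sigma = 16.0                        # bandwidth taken as the feature dimension
A = np.exp(-D / sigma)              # symmetric similarity matrix
assert np.allclose(A, A.T) and np.allclose(np.diag(A), 1.0)
```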
Step 4: construct the visual tree.
(4a) Using the similarity matrix obtained in step 3, apply spectral clustering to group similar classes: the N classes are clustered into K clusters, each gathering several similar classes;
(4b) judge whether the stopping condition for clustering is reached, i.e. whether the maximum tree height has been reached or the number of classes in a cluster is below the minimum threshold; otherwise go to (4c);
(4c) apply spectral clustering again to each cluster produced by the previous round; the corresponding similarity matrix is the sub-matrix of A formed by the rows and columns of the classes in that cluster;
(4d) repeat steps (4b) and (4c) to complete the construction of the visual tree.
Step 5: train the classifiers.
For each non-leaf node of the tree, train an SVM classifier that divides a test image among the node's child nodes and gives a corresponding score.
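The per-node classifier of step 5 can be sketched with a minimal one-versus-rest linear trainer on toy 2-d data. A perceptron-style update stands in here for the hinge-loss SVMs the patent actually trains; all data and hyperparameters are illustrative.

```python
import numpy as np

def train_one_vs_rest(X, y, n_classes, epochs=50, lr=0.1):
    """Perceptron-style one-vs-rest trainer (stand-in for per-node SVMs)."""
    W = np.zeros((n_classes, X.shape[1] + 1))
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append a bias column
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            t = np.where(np.arange(n_classes) == yi, 1.0, -1.0)
            mask = t * (W @ xi) <= 0                 # misclassified margins
            W[mask] += lr * t[mask, None] * xi
    return W

def margins(W, x):
    return W @ np.append(x, 1.0)     # one signed score per child node

rng = np.random.default_rng(2)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(40, 2)) for c in centers])
y = np.repeat(np.arange(3), 40)

W = train_one_vs_rest(X, y, n_classes=3)
pred = np.array([np.argmax(margins(W, x)) for x in X])
assert (pred == y).mean() > 0.9     # well-separated toy classes
```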
Step 6: classify.
(6a) Feed a given test image to the classifier at the root node of the visual tree, classify and score it, and output the k highest-scoring child nodes.
(6b) Judge whether the current k nodes are all leaf nodes; if so, stop; otherwise go to step (6c).
(6c) For each non-leaf node among the k new nodes, score the test image with its classifier and divide the image among its child nodes; multiply each child's score by the score of its father node to obtain that child's final score, then select the k highest-scoring nodes among all newly generated nodes.
(6d) Repeat steps (6b) and (6c) until classification finishes, outputting k target classes and their corresponding scores.
The advantages and validity of the invention are demonstrated by the following experiments.
1. Experimental conditions:
The experiments used a desktop computer with a Tesla C2050 GPU (3 GB memory), 16 Intel(R) Xeon(R) X5647 CPU cores at 2.93 GHz, and 32 GB of RAM, running 64-bit Ubuntu 12.04, with Caffe and Python 2.7 as the experimental platform.
The experiments use the large-scale image classification method based on convolutional neural network features and the maximum-confidence path proposed by the invention; the training method of the convolutional neural network AlexNet follows the reference "Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks [C]// Advances in Neural Information Processing Systems. 2012: 1097-1105."
2. Experimental results and analysis:
Table 1 compares the invention with six other currently popular methods on the ImageNet 2010 image classification challenge library. The results show that the invention has a large advantage. Top-1 accuracy is the rate at which the single classification result given is correct; Top-5 accuracy is the rate at which one of the 5 classification results given is correct.
Table 1
Model | Top-1 accuracy | Top-5 accuracy
---|---|---
Sparse coding [1] | 52.9% | 71.8%
SIFT+FV [2] | 54.3% | 74.3%
JDL+AP Clustering [3] | 38.9% | N/A
Fisher Vector [4] | 45.7% | 65.9%
NEC [5] | 52.9% | 71.8%
Visual forest [6] | 41.1% | N/A
The present invention | 61.2% | 81.7%
Bibliography:
[1] Berg, A., Deng, J., Fei-Fei, L.: Large scale visual recognition challenge 2010. www.image-net.org (2010).
[2] Sánchez, J., Perronnin, F.: High-dimensional signature compression for large-scale image classification. In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1665-1672. IEEE (2011).
[3] Zhou, N., Fan, J.: Jointly learning visually correlated dictionaries for large-scale visual recognition applications. Pattern Analysis and Machine Intelligence, IEEE Transactions on 36, 715-730 (2014).
[4] Perronnin, F., Akata, Z., Harchaoui, Z., Schmid, C.: Towards good practice in large-scale learning for image classification. In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 3482-3489. IEEE (2012).
[5] Lin, Y., Lv, F., Zhu, S., Yang, M., Cour, T., Yu, K., Cao, L., Huang, T.: Large-scale image classification: fast feature extraction and SVM training. In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1689-1696. IEEE (2011).
[6] Fan, J., Zhang, J., Mei, K., Peng, J., Gao, L.: Cost-sensitive learning of hierarchical tree classifiers for large-scale image classification and novel category detection. Pattern Recognition (2014).
The invention mainly solves the problems of low classification accuracy and high computational complexity in large-scale image classification caused by the large number of image categories and the large data volume. The main steps of the invention are: 1) train a convolutional neural network on a sufficiently large image library; 2) extract image features with the trained convolutional neural network model; 3) compute the mean vector of each class; 4) apply spectral clustering iteratively to the mean vectors representing each class to build a visual tree; 5) train an SVM for each non-leaf node of the tree; 6) process a given test image top-down, judging the probability that the test image belongs to each child node, and take the leaf node with the maximum path probability as the final target class. The invention can be used for large-scale image classification.
Claims (6)
1. An image classification method based on deep-learning features and the maximum-confidence path, characterized by comprising the following steps:
(1) pre-training the convolutional neural network AlexNet on the ILSVRC2012 image library, the input of the network being an RGB image of size 227×227, the network consisting of 5 convolutional layers, two fully connected layers, and an output layer; the first layer consists of a convolutional layer, a rectified linear activation layer ReLU, and a pooling layer; its convolution parameters (96, 11, 4, 0) denote 96 convolution kernels of size 11×11 with stride 4, the last element 0 of the four-tuple denoting that the original image size is kept without zero padding; the feature maps formed after convolution pass through the ReLU layer, giving 96 feature maps of size 55×55, denoted 55×55×96; the pooling layer uses a 3×3 pooling kernel with stride 2, followed by normalization of size 5; the convolution parameters of the second layer are (256, 5, 1, 2); after the ReLU layer comes a pooling layer with a 3×3 kernel and stride 2, followed by normalization of size 5; the convolution parameters of the third layer are (384, 3, 1, 1), followed by a ReLU layer into the fourth layer, whose structure is identical to the third; the convolution parameters of the fifth layer are (256, 3, 1, 1), followed by a ReLU layer and a pooling layer with a 3×3 kernel and stride 2; the sixth and seventh layers are fully connected layers, each outputting a 4096-dimensional vector; the eighth layer is the output layer, also fully connected, outputting a 1000-dimensional vector representing the probabilities of 1000 categories;
(2) for any large-scale image library, extracting the features of all images in the library with the AlexNet trained in step (1), each image being represented by the output of the seventh, fully connected, layer of the network;
(3) for any class C_i in the library, with N_i sample images whose l-th image has feature I_l^i extracted by AlexNet, computing the mean vector Q_i of class i and the variance σ_i² of class i;
(4) computing the distance between every two classes, forming a symmetric distance matrix D;
(5) computing the similarity matrix A from the distance matrix D;
(6) using the similarity matrix A, applying spectral clustering iteratively to build the visual tree T;
(7) training a support vector machine classifier for each cluster, all SVM classifiers together forming a tree-structured classifier;
(8) for any test image, applying the SVM classifiers layer by layer starting from the root node of the tree, each SVM classifier giving a confidence score judging the probability that the test image belongs to each child node of the node, down to the leaf nodes; multiplying the confidence scores of the nodes along the path between a leaf node and the root node to obtain the confidence value of the path, with the probability at the root node set to 1; and, to accelerate the computation, retaining in each layer of the tree only the K nodes with the highest confidence scores.
2. The image classification method based on deep-learning features and the maximum-confidence path of claim 1, characterized in that in step (3) the class mean vector Q_i is computed as Q_i = (1/N_i) Σ_{l=1}^{N_i} I_l^i, and the variance σ_i² of class i is computed as σ_i² = (1/N_i) Σ_{l=1}^{N_i} ||I_l^i − Q_i||².
3. The image classification method based on deep-learning features and the maximum-confidence path of claim 2, characterized in that in step (4) the distance between two classes is computed as D(C_i, C_j) = (1/(N_i·N_j)) Σ_{l=1}^{N_i} Σ_{m=1}^{N_j} ||I_l^i − I_m^j||², or, equivalently, D(C_i, C_j) = ||Q_i − Q_j||² + σ_i² + σ_j², the latter formula being derived from the former.
4. The image classification method based on deep-learning features and the maximum-confidence path of claim 3, characterized in that in step (5) the inter-class similarity of the similarity matrix A is computed as A(i, j) = exp(−D(C_i, C_j)/σ), where σ is taken as the image feature dimension.
5. The image classification method based on deep-learning features and the maximum-confidence path of claim 4, characterized in that in step (6) the specific method of building the visual tree T by applying spectral clustering iteratively to the similarity matrix A is: first applying spectral clustering to the similarity matrix A of all categories, forming K clusters, each containing several similar classes; then continuing to apply spectral clustering to the similarity sub-matrix of each cluster, stopping only when the maximum-depth condition of the tree or the minimum-class-membership condition of a cluster is met; a cluster corresponds to a non-leaf node of the tree and consists of several target classes; the leaf nodes of the tree are the target classes.
6. The image classification method based on deep-learning features and the maximum-confidence path of claim 5, characterized in that the one-versus-rest support vector machine classifiers used in step (8) perform the following steps:
(8.1) when an SVM divides, giving the confidence distance of the test image to each node of the layer; if the distance to node c_i at some layer is d, mapping the distance through a logistic function to a probability value between 0 and 1, computed as P(c_i | parent(c_i)) = 1/(1 + e^(−d)), where parent(c_i) is the path up to the father node of c_i;
(8.2) obtaining through a Bayesian network the probability that the test image is assigned to node c_i, namely the probability of the path traversed from the root node to that node, computed as P(c_i) = P(c_i | parent(c_i)) * P(parent(c_i)), where P(c_i) is the final probability of the path up to node c_i, and P(parent(c_i)) is the probability of the path parent(c_i) up to the father node of c_i;
(8.3) to accelerate computation and avoid traversing all paths, keeping in each layer of the tree only the K intermediate nodes of highest probability.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510438236.8A CN104992191B (en) | 2015-07-23 | 2015-07-23 | The image classification method of feature and maximum confidence path based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104992191A CN104992191A (en) | 2015-10-21 |
CN104992191B true CN104992191B (en) | 2018-01-26 |
Legal Events

Code | Title | Description
---|---|---
C06 / PB01 | Publication |
C10 / SE01 | Entry into substantive examination |
GR01 | Patent grant |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20180126; Termination date: 20200723