CN105046272A - Image classification method based on concise unsupervised convolutional network - Google Patents
Image classification method based on concise unsupervised convolutional network
- Publication number
- CN105046272A (application CN201510368991.3A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- image
- pooling
- histogram
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides an image classification method based on a concise unsupervised convolutional network, belonging to the technical fields of image processing and deep learning. The method applies the classic unsupervised clustering algorithm K-means to the image patches of a training image set; each resulting cluster centre serves directly as a convolution kernel of the network model, discarding the time-consuming iterative learning of convolution kernels by stochastic gradient descent used in conventional convolutional networks. In addition, the invention proposes a probability pooling method to strengthen the network's robustness to image deformation. The concise unsupervised deep convolutional network classification model provided by the invention shortens model training time markedly while improving the model's ability to recognise highly variable scene pictures.
Description
Technical field
The invention belongs to the technical fields of image processing and deep learning, relates to efficient image classification, and in particular relates to an implementation of image classification based on a concise unsupervised convolutional network.
Background art
In recent years, image classification has attracted wide attention and found applications in fields such as industry, manufacturing, the military, and medicine. Although the field is developing rapidly, the steadily broadening coverage of practical applications has brought massive image data with it: both the scale of image databases and the diversity of picture content have reached unprecedented levels, overwhelming traditional image processing methods. Faced with such a flood of image information, how to classify images accurately has become a research hotspot in the related areas.
In the field of pattern recognition, deep learning is gaining momentum, with deep convolutional neural network models as its foremost representative, achieving breakthrough results on large-scale image classification tasks. The success of deep convolutional neural networks is owed to their ability to learn intermediate image representations rather than relying on hand-designed low-level image features. Yet despite this success, their training procedure, based on the stochastic gradient descent algorithm, is very inefficient and cannot cope with large-scale image classification tasks.
Summary of the invention
The present invention aims to simplify the traditional deep convolutional network model, greatly reducing the number of network parameters and the complexity of network training. The simplified deep network model is applied to image classification tasks and improves classification accuracy.
To overcome the high complexity, large parameter count, and difficult training of traditional deep convolutional neural network models, as well as their strict requirement for labelled image data, the present invention studies how a simple unsupervised algorithm can reduce model complexity while exploiting the large amounts of existing unlabelled images for network training. The technical scheme proposed to solve this problem is as follows: the classic unsupervised clustering algorithm K-means is applied to the image patches of the training image set, and each resulting cluster centre serves directly as a convolution kernel of the network model, discarding the time-consuming iterative acquisition of kernels by stochastic gradient descent used in traditional convolutional networks. Generating the convolution kernels by the K-means algorithm is highly efficient, places no hard requirement on scarce labelled image data, and the kernels obtained are highly discriminative. To improve the network model's robustness to image deformation, the invention proposes a probability-based pooling method. Compared with the max pooling and average pooling generally adopted by traditional convolutional networks, the proposed probability pooling takes the contribution of every neuron into account while weighting the magnitude of each neuron's response, and is therefore more robust. At the output layer of the network, the invention computes histograms at different scales and then max-pools the histograms at each scale, selecting the most competitive image features. Multi-scale histogram statistics maximise the geometric invariance of the model, simply and efficiently. Finally, the image features obtained at the output layer are fed into an SVM classifier to perform image classification.
The image classification method based on a concise unsupervised convolutional network provided by the invention, whose framework is shown in Figure 1, comprises the following steps:
Step 1: divide each training picture of the training image set into multiple image patches; the whole training image set comprises T patches in total.
Step 2: pre-processing: normalise and then whiten these T patches.
Step 3: apply the K-means algorithm to the pre-processed patches to obtain the K1 convolution kernels of the first network layer.
Steps 4 to 8 are then performed on each training picture Xn.
Step 4: for each training picture Xn of size W × H, convolve Xn with the convolution kernels obtained in Step 3, where ⊗ denotes the convolution operation; this yields K1 feature maps, each obtained by convolving Xn with one of the kernels.
Step 5: activate the neurons of each feature map obtained in Step 4 with the Rectified Linear Units (ReLU) function.
Step 6: apply probability pooling to each activated feature map; this pooling is denoted first-layer probability pooling.
Step 7: perform Steps 1 to 6 on each feature map obtained after the first-layer pooling of Step 6, producing the feature maps after the second-layer pooling; that is, each first-layer-pooled feature map yields K2 feature maps after the second layer. Binarise each of these K2 feature maps, then superimpose the set of binarised feature maps into one new feature map I according to the following formula:
I = Σi Bi, i = 1, …, K2
where Bi is the i-th binarised feature map of the set. In this way each first-layer-pooled feature map, k1 ∈ [1, K1], is turned into a corresponding new feature map.
Step 8: compute histograms over the new feature map with an overlapping sliding window: set the window size to R × R and the sliding step to s; place the window at one corner of the feature map and slide it in steps of s until the whole feature map has been traversed. Each window position yields one histogram of the current window, giving H histograms in total, each containing B bin values.
The feature map is then divided at scales q = 0, 1, 2 into 2^q × 2^q blocks, 21 blocks in all, as shown in Figure 2. For scale q = 0, build one histogram of B bin values whose value at position b (b = 1, 2, …, B) is the maximum of the values at position b across the H sliding-window histograms. Each block at scales q = 1 and q = 2 obtains its histogram in the same way, for 21 histograms in total. Finally, the feature of each picture is the vector obtained by concatenating these 21 histogram vectors, so the final feature dimensionality of each picture is 21 × K1 × (K2 + 1).
The feature of every training picture in the training image set is obtained in the manner above.
Step 9: input the features of the training image set into the SVM classifier and train the SVM.
Step 10: input the test image set into the trained SVM model to classify the images.
Beneficial effects of the invention:
The invention simplifies the traditional convolutional network model and improves image classification accuracy. Compared with the prior art, the invention has the following advantages:
1. The unsupervised kernel learning process is extremely concise, dispensing with the initialisation and tuning of thousands of parameters and removing the traditional convolutional network's bottleneck of requiring labelled images.
2. Probability pooling considers the contribution of every neuron while weighting the differing magnitudes of those contributions, improving the network model's robustness to image deformation.
3. Computing histograms within sliding windows preserves the spatial information of the image and improves the geometric invariance of the network model.
Brief description of the drawings
Fig. 1 is the model framework diagram of the concise unsupervised convolutional network classification method proposed by the invention.
Fig. 2 is a schematic diagram of the division of the feature map in Step 8 of the method.
Embodiment
The concrete implementation steps adopted by the invention to solve its technical problem are as follows:
Step 1: divide each training picture of the training image set into image patches of size w × h; the pixels of each patch form a vector of dimension M in R^M, where M = w × h × d and d is the number of image channels (d = 3 for RGB pictures, d = 1 for greyscale pictures). The whole training image set comprises T patches in total, and all T patch vectors form the matrix P = {p1, …, pt, …, pT}, where t = 1, …, T and pt ∈ R^M.
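Step 1 can be sketched as follows; the patch size, stride, and function name are illustrative assumptions (the patent fixes only that patches are w × h with d channels and are flattened to M = w × h × d dimensions):

```python
import numpy as np

def extract_patches(images, w=6, h=6, stride=2):
    """images: list of arrays shaped (H, W, d). Returns the matrix P with one
    M-dimensional patch vector per row, M = w*h*d."""
    patches = []
    for img in images:
        H, W = img.shape[:2]
        for i in range(0, H - h + 1, stride):
            for j in range(0, W - w + 1, stride):
                patches.append(img[i:i + h, j:j + w].reshape(-1))
    return np.asarray(patches, dtype=np.float64)

# toy example: 2 greyscale (d = 1) training pictures of size 12 x 12
rng = np.random.default_rng(0)
imgs = [rng.random((12, 12, 1)) for _ in range(2)]
P = extract_patches(imgs, w=6, h=6, stride=2)   # 16 patches per picture
```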
Step 2: pre-process the T patches. Normalise according to formula (1) and whiten according to formulas (2), (3), (4):
(1) p̂t = (pt − mean(pt)) / sqrt(var(pt))
(2) C = cov(P)
(3) [L, U] = Eig(C)
(4) pt ← U · diag((λi + ε)^(−1/2)) · U^T · p̂t
where mean(·) is the mean of a vector, var(·) the variance of a vector, cov(·) the covariance matrix of the vectors, Eig(·) returns the eigenvalue vector L and eigenvector matrix U, λi is the i-th eigenvalue, and ε is a small positive constant.
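A minimal numpy sketch of formulas (1)-(4); the ZCA form of the whitening and the value of ε are assumptions reconstructed from the variable definitions (mean, var, cov, Eig, λi), not fixed by the patent:

```python
import numpy as np

def normalize_and_whiten(P, eps=1e-5):
    """Per-patch contrast normalisation (1) followed by ZCA-style whitening
    (2)-(4). P holds one patch vector per row."""
    # (1): subtract each patch's mean, divide by its standard deviation
    P = (P - P.mean(axis=1, keepdims=True)) / np.sqrt(P.var(axis=1, keepdims=True) + eps)
    C = np.cov(P, rowvar=False)          # (2): covariance matrix of the patches
    L, U = np.linalg.eigh(C)             # (3): eigenvalues L, eigenvectors U
    W = U @ np.diag(1.0 / np.sqrt(L + eps)) @ U.T
    return P @ W                         # (4): rotate, rescale, rotate back

rng = np.random.default_rng(1)
P = rng.random((500, 36))                # 500 stand-in patches, M = 36
Pw = normalize_and_whiten(P)
```

After whitening, the per-dimension variances of `Pw` are close to 1, which is the point of the operation.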
Step 3: after pre-processing, cluster the resulting set of image patches with the K-means algorithm to obtain the K1 convolution kernels of the first network layer.
Steps 4 to 8 are then performed on each training picture Xn.
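Step 3 can be sketched with plain Lloyd iterations; the initialisation, iteration count, and kernel size below are illustrative assumptions. Each of the K1 cluster centres is used directly as a convolution kernel, which is what replaces gradient-based kernel learning:

```python
import numpy as np

def kmeans_kernels(P, K1, iters=20, seed=0):
    """Plain K-means (Lloyd's algorithm) over the whitened patch rows of P;
    returns the K1 cluster centres."""
    rng = np.random.default_rng(seed)
    centers = P[rng.choice(len(P), size=K1, replace=False)].copy()
    for _ in range(iters):
        # squared Euclidean distance of every patch to every centre
        dists = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for k in range(K1):
            members = P[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers

rng = np.random.default_rng(2)
P = rng.random((1000, 36))                          # stand-in whitened patches
K1 = 8
kernels = kmeans_kernels(P, K1).reshape(K1, 6, 6)   # 6x6 kernels for d = 1
```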
Step 4: for each training picture Xn of size W × H, convolve Xn with the convolution kernels obtained in Step 3, where ⊗ denotes the convolution operation; this yields K1 feature maps, each obtained by convolving Xn with one of the kernels.
Step 5: activate the neurons of each feature map obtained in Step 4 with the Rectified Linear Units (ReLU) function, namely f(x) = max{0, x}.
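Steps 4 and 5 together can be sketched as below; the naive "valid" convolution and the toy sizes are illustrative assumptions:

```python
import numpy as np

def conv2d_valid(X, k):
    """Naive 'valid'-mode 2-D correlation of picture X with kernel k."""
    H, W = X.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (X[i:i + h, j:j + w] * k).sum()
    return out

def relu(x):
    return np.maximum(x, 0.0)            # Step 5: f(x) = max{0, x}

rng = np.random.default_rng(3)
X = rng.standard_normal((32, 32))        # one training picture, W = H = 32
kernels = rng.standard_normal((4, 6, 6)) # K1 = 4 kernels from Step 3
feature_maps = np.stack([relu(conv2d_valid(X, k)) for k in kernels])
```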
Step 6: apply first-layer probability pooling to each activated feature map. Let the pooling region have size w2 × h2, i.e. contain w2 × h2 neurons; the probability pooling operation is given by formula (5):
(5) s = Σi Σj (ai,j / sum(ai,j)) · ai,j
where ai,j is the neuron at position (i, j) of the current feature map, i = 1, …, w2, j = 1, …, h2, and sum(ai,j) denotes the sum of the neuron values in the pooling region.
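Formula (5) can be sketched as follows. The exact form (each neuron weighted by its share of the pooling-region sum) is reconstructed from the description, so treat it as an assumption, as are the non-overlapping regions and the zero-sum fallback:

```python
import numpy as np

def probability_pool(fmap, w2=2, h2=2):
    """Probability pooling of formula (5): each pooled value is
    sum_ij (a_ij / sum(a)) * a_ij over a w2-by-h2 pooling region."""
    H, W = fmap.shape
    out = np.empty((H // h2, W // w2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            a = fmap[i * h2:(i + 1) * h2, j * w2:(j + 1) * w2]
            s = a.sum()
            out[i, j] = (a * a).sum() / s if s > 0 else 0.0
    return out

fmap = np.array([[1.0, 3.0], [0.0, 0.0]])
pooled = probability_pool(fmap, w2=2, h2=2)   # (1*1 + 3*3) / 4 = 2.5
```

Note how, unlike max pooling, the small activation still contributes, and unlike average pooling, the large activation dominates, which is the robustness argument made above.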
Step 7: perform Steps 1 to 6 on each feature map obtained after the first-layer pooling of Step 6, producing the feature maps after the second-layer pooling; that is, each first-layer-pooled feature map yields K2 feature maps after the second layer. Binarise each of these K2 feature maps according to formula (6):
(6) bi,j ← 1 if bi,j > 0, otherwise bi,j ← 0
where bi,j is the neuron at position (i, j) of the current feature map, i = 1, …, w2, j = 1, …, h2. Then superimpose the set of binarised feature maps into one new feature map I according to formula (7):
(7) I = Σi Bi, i = 1, …, K2
where Bi is the i-th binarised feature map of the set. In this way each first-layer-pooled feature map, k1 ∈ [1, K1], is turned into a corresponding new feature map.
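Formulas (6) and (7) can be sketched as below. The threshold-at-zero binarisation and the plain sum in (7) are reconstructions: a simple sum gives each pixel of I one of K2 + 1 possible values, which matches the stated final feature dimensionality 21 × K1 × (K2 + 1):

```python
import numpy as np

def binarize_and_stack(maps):
    """Formula (6): threshold each second-layer map at 0;
    formula (7): sum the K2 binary maps into one integer-valued map I
    whose pixels take values in [0, K2]."""
    B = (np.asarray(maps) > 0).astype(int)
    return B.sum(axis=0)

rng = np.random.default_rng(4)
maps = rng.standard_normal((5, 8, 8))   # K2 = 5 second-layer feature maps
I = binarize_and_stack(maps)
```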
Step 8: compute histograms over the new feature map with an overlapping sliding window: set the window size to R × R and the sliding step to s; place the window at one corner of the feature map and slide it in steps of s until the whole feature map has been traversed. Each window position yields one histogram of the current window, giving H histograms in total, each containing B bin values.
Extracting picture features directly from the window sliding and histogram statistics above may introduce redundancy and a curse of dimensionality, so the invention selects features at different scales (see reference [1]); the concrete operation is as follows:
the feature map is divided at scales q = 0, 1, 2 into 2^q × 2^q blocks, 21 blocks in all, as shown in Figure 2. For scale q = 0, build one histogram of B bin values whose value at position b (b = 1, 2, …, B) is the maximum of the values at position b across the H sliding-window histograms. Each block at scales q = 1 and q = 2 obtains its histogram in the same way, for 21 histograms in total. Finally, the feature of each picture is the vector obtained by concatenating these 21 histogram vectors, so the final feature dimensionality of each picture is 21 × K1 × (K2 + 1).
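The sliding-window histograms and the 21-block max pyramid of Step 8 can be sketched as below; assigning each window to a block by its top-left corner is an assumption, as are the toy values of R, s, and B:

```python
import numpy as np

def pyramid_max_histograms(I, B, R=4, s=2):
    """Slide an R-by-R window with step s over the integer map I, take a B-bin
    histogram per window, then for each of the 21 pyramid blocks (scales
    q = 0, 1, 2) keep the element-wise max over the windows in that block."""
    H_, W_ = I.shape
    wins = []                            # (row, col, histogram) per window
    for i in range(0, H_ - R + 1, s):
        for j in range(0, W_ - R + 1, s):
            hist = np.bincount(I[i:i + R, j:j + R].ravel(), minlength=B)
            wins.append((i, j, hist))
    feats = []
    for q in (0, 1, 2):                  # 1 + 4 + 16 = 21 blocks in total
        n = 2 ** q
        for bi in range(n):
            for bj in range(n):
                block = [h for (i, j, h) in wins
                         if bi * H_ / n <= i < (bi + 1) * H_ / n
                         and bj * W_ / n <= j < (bj + 1) * W_ / n]
                feats.append(np.max(block, axis=0) if block
                             else np.zeros(B, dtype=int))
    return np.concatenate(feats)         # length 21 * B

I = np.random.default_rng(5).integers(0, 6, size=(16, 16))  # values in [0, K2]
f = pyramid_max_histograms(I, B=6)
```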
The feature of every training picture in the training image set is obtained in the manner above.
Step 9: input the features of the training image set into the SVM classifier and train the SVM.
Step 10: input the test image set into the trained SVM model to classify the images.
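Steps 9 and 10 use an off-the-shelf SVM; as a self-contained stand-in, a minimal Pegasos-style linear SVM is sketched below on toy features (the solver, hyper-parameters, and toy data are assumptions, not the patent's method):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal Pegasos-style linear SVM: stochastic subgradient descent on
    the L2-regularised hinge loss. Labels y must be +1 / -1."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:            # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# linearly separable toy features in place of the 21*K1*(K2+1)-dim vectors
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(2, 0.5, (20, 5)), rng.normal(-2, 0.5, (20, 5))])
y = np.array([1] * 20 + [-1] * 20)
w = train_linear_svm(X, y)
acc = (np.sign(X @ w) == y).mean()
```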
The reference cited in this embodiment is:
[1] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
Claims (4)
1. An image classification method based on a concise unsupervised convolutional network, comprising the following steps:
Step 1: divide each training picture of the training image set into multiple image patches, the pixels of each patch forming a vector in R^M; the whole training image set comprises T patches in total.
Step 2: pre-processing: normalise and then whiten these T patches.
Step 3: apply the K-means algorithm to the pre-processed patches to obtain the K1 convolution kernels of the first network layer.
Steps 4 to 8 are then performed on each training picture Xn.
Step 4: for each training picture Xn of size W × H, n = 1, …, N, convolve Xn with the convolution kernels obtained in Step 3, where ⊗ denotes the convolution operation; this yields K1 feature maps, each obtained by convolving Xn with one of the kernels.
Step 5: activate the neurons of each feature map obtained in Step 4 with the Rectified Linear Units (ReLU) function.
Step 6: apply probability pooling to each activated feature map; this pooling is denoted first-layer probability pooling.
Step 7: perform Steps 1 to 6 on each feature map obtained after the first-layer pooling of Step 6, producing the feature maps after the second-layer pooling; that is, each first-layer-pooled feature map yields K2 feature maps after the second layer. Binarise each of these K2 feature maps, then superimpose the set of binarised feature maps into one new feature map I according to the following formula:
I = Σi Bi, i = 1, …, K2
where Bi is the i-th binarised feature map of the set. In this way each first-layer-pooled feature map, k1 ∈ [1, K1], is turned into a corresponding new feature map.
Step 8: compute histograms over the new feature map with an overlapping sliding window: set the window size to R × R and the sliding step to s; place the window at one corner of the feature map and slide it in steps of s until the whole feature map has been traversed. Each window position yields one histogram of the current window, giving H histograms in total, each containing B bin values.
The feature map is then divided at scales q = 0, 1, 2 into 2^q × 2^q blocks, 21 blocks in all. For scale q = 0, build one histogram of B bin values whose value at position b (b = 1, 2, …, B) is the maximum of the values at position b across the H sliding-window histograms. Each block at scales q = 1 and q = 2 obtains its histogram in the same way, for 21 histograms in total. Finally, the feature of each picture is the vector obtained by concatenating these 21 histogram vectors, so the final feature dimensionality of each picture is 21 × K1 × (K2 + 1).
The feature of every training picture in the training image set is obtained in the manner above.
Step 9: input the features of the training image set into the SVM classifier and train the SVM.
Step 10: input the test image set into the trained SVM model to classify the images.
2. The image classification method based on a concise unsupervised convolutional network according to claim 1, characterised in that, in the pre-processing of Step 2, normalisation is performed according to formula (1) and whitening according to formulas (2), (3), (4):
(1) p̂t = (pt − mean(pt)) / sqrt(var(pt))
(2) C = cov(P)
(3) [L, U] = Eig(C)
(4) pt ← U · diag((λi + ε)^(−1/2)) · U^T · p̂t
where the T patch vectors form the matrix P = {p1, …, pt, …, pT}, t = 1, …, T, mean(·) is the mean of a vector, var(·) the variance of a vector, cov(·) the covariance matrix of the vectors, Eig(·) returns the eigenvalue vector L and eigenvector matrix U, λi is the i-th eigenvalue, and ε is a small positive constant.
3. The image classification method based on a concise unsupervised convolutional network according to claim 1, characterised in that the pooling of Step 6 is as follows: let the pooling region have size w2 × h2, i.e. contain w2 × h2 neurons; the probability pooling operation is given by formula (5):
(5) s = Σi Σj (ai,j / sum(ai,j)) · ai,j
where ai,j is the neuron at position (i, j) of the current feature map, i = 1, …, w2, j = 1, …, h2, and sum(ai,j) denotes the sum of the neuron values in the pooling region.
4. The image classification method based on a concise unsupervised convolutional network according to claim 3, characterised in that the binarisation of Step 7 is given by formula (6):
(6) bi,j ← 1 if bi,j > 0, otherwise bi,j ← 0
where bi,j is the neuron at position (i, j) of the current feature map, i = 1, …, w2, j = 1, …, h2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510368991.3A CN105046272B (en) | 2015-06-29 | 2015-06-29 | A kind of image classification method based on succinct non-supervisory formula convolutional network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105046272A true CN105046272A (en) | 2015-11-11 |
CN105046272B CN105046272B (en) | 2018-06-19 |
Family
ID=54452801
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510368991.3A Active CN105046272B (en) | 2015-06-29 | 2015-06-29 | A kind of image classification method based on succinct non-supervisory formula convolutional network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105046272B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104036253A (en) * | 2014-06-20 | 2014-09-10 | 智慧城市系统服务(中国)有限公司 | Lane line tracking method and lane line tracking system |
CN104408405A (en) * | 2014-11-03 | 2015-03-11 | 北京畅景立达软件技术有限公司 | Face representation and similarity calculation method |
CN104408435A (en) * | 2014-12-05 | 2015-03-11 | 浙江大学 | Face identification method based on random pooling convolutional neural network |
CN104463172A (en) * | 2014-12-09 | 2015-03-25 | 中国科学院重庆绿色智能技术研究院 | Face feature extraction method based on face feature point shape drive depth model |
US20150110381A1 (en) * | 2013-09-22 | 2015-04-23 | The Regents Of The University Of California | Methods for delineating cellular regions and classifying regions of histopathology and microanatomy |
- 2015-06-29: application CN201510368991.3A filed; granted as CN105046272B (status: Active)
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631479A (en) * | 2015-12-30 | 2016-06-01 | 中国科学院自动化研究所 | Imbalance-learning-based depth convolution network image marking method and apparatus |
CN105631479B (en) * | 2015-12-30 | 2019-05-17 | 中国科学院自动化研究所 | Depth convolutional network image labeling method and device based on non-equilibrium study |
CN105894046A (en) * | 2016-06-16 | 2016-08-24 | 北京市商汤科技开发有限公司 | Convolutional neural network training and image processing method and system and computer equipment |
CN105894046B (en) * | 2016-06-16 | 2019-07-02 | 北京市商汤科技开发有限公司 | Method and system, the computer equipment of convolutional neural networks training and image procossing |
CN106127747A (en) * | 2016-06-17 | 2016-11-16 | 史方 | Car surface damage classifying method and device based on degree of depth study |
CN106127747B (en) * | 2016-06-17 | 2018-10-16 | 史方 | Car surface damage classifying method and device based on deep learning |
WO2018076130A1 (en) * | 2016-10-24 | 2018-05-03 | 中国科学院自动化研究所 | Method for establishing object recognition model, and object recognition method |
CN106845528A (en) * | 2016-12-30 | 2017-06-13 | 湖北工业大学 | A kind of image classification algorithms based on K means Yu deep learning |
CN106919980A (en) * | 2017-01-24 | 2017-07-04 | 南京大学 | A kind of increment type target identification system based on neuromere differentiation |
CN106919980B (en) * | 2017-01-24 | 2020-02-07 | 南京大学 | Incremental target identification system based on ganglion differentiation |
CN106874956A (en) * | 2017-02-27 | 2017-06-20 | 陕西师范大学 | The construction method of image classification convolutional neural networks structure |
CN107563493A (en) * | 2017-07-17 | 2018-01-09 | 华南理工大学 | A kind of confrontation network algorithm of more maker convolution composographs |
CN107832794A (en) * | 2017-11-09 | 2018-03-23 | 车智互联(北京)科技有限公司 | A kind of convolutional neural networks generation method, the recognition methods of car system and computing device |
Also Published As
Publication number | Publication date |
---|---|
CN105046272B (en) | 2018-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105046272A (en) | Image classification method based on concise unsupervised convolutional network | |
CN105184309B (en) | Classification of Polarimetric SAR Image based on CNN and SVM | |
CN103984959B (en) | A kind of image classification method based on data and task-driven | |
CN103942564B (en) | High-resolution remote sensing image scene classifying method based on unsupervised feature learning | |
CN109034224B (en) | Hyperspectral classification method based on double branch network | |
CN105426919A (en) | Significant guidance and unsupervised feature learning based image classification method | |
CN111695467A (en) | Spatial spectrum full convolution hyperspectral image classification method based on superpixel sample expansion | |
CN104268593A (en) | Multiple-sparse-representation face recognition method for solving small sample size problem | |
CN105678278A (en) | Scene recognition method based on single-hidden-layer neural network | |
CN106295694A (en) | A kind of face identification method of iteration weight set of constraints rarefaction representation classification | |
CN107273853A (en) | A kind of remote sensing images transfer learning method alignd based on the class heart and covariance | |
CN106156798B (en) | Scene image classification method based on annular space pyramid and Multiple Kernel Learning | |
CN102314614A (en) | Image semantics classification method based on class-shared multiple kernel learning (MKL) | |
CN103824272A (en) | Face super-resolution reconstruction method based on K-neighboring re-recognition | |
CN107977660A (en) | Region of interest area detecting method based on background priori and foreground node | |
CN112949738B (en) | Multi-class unbalanced hyperspectral image classification method based on EECNN algorithm | |
CN104050680B (en) | Based on iteration self-organizing and the image partition method of multi-agent genetic clustering algorithm | |
CN101833667A (en) | Pattern recognition classification method expressed based on grouping sparsity | |
CN105631469A (en) | Bird image recognition method by multilayer sparse coding features | |
CN104182767A (en) | Active learning and neighborhood information combined hyperspectral image classification method | |
CN115909052A (en) | Hyperspectral remote sensing image classification method based on hybrid convolutional neural network | |
CN107451594A (en) | A kind of various visual angles Approach for Gait Classification based on multiple regression | |
CN101515328A (en) | Local projection preserving method facing identification and having statistical noncorrelation | |
CN104408731A (en) | Region graph and statistic similarity coding-based SAR (synthetic aperture radar) image segmentation method | |
CN107909120A (en) | Based on alternative label K SVD and multiple dimensioned sparse hyperspectral image classification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||