CN106845528A - An image classification algorithm based on K-means and deep learning - Google Patents
An image classification algorithm based on K-means and deep learning
- Publication number
- CN106845528A (Application No. CN201611259889.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- sample
- label
- cluster centre
- normalized
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image classification algorithm based on K-means and deep learning, comprising the steps of: 1) taking unlabeled images as input and randomly extracting image blocks of equal size to form an unlabeled image set; 2) extracting optimal cluster centres using the K-means algorithm; 3) constructing a feature mapping function and extracting image features of the unlabeled image set; 4) performing pooling and normalization; 5) extracting secondary optimal cluster centres using the K-means algorithm, applying a convolution operation to extract final image features, and normalizing the final image features; 6) classifying the normalized final image features with a classifier. The invention is simple and efficient, requires few training parameters, works well for classifying massive high-dimensional images, and preprocesses the input images to improve the classification effect and accuracy.
Description
Technical field
The invention belongs to the field of information technology and specifically relates to an image classification algorithm based on K-means and deep learning. It is applicable to the classification of massive high-dimensional image data on the Internet, and can also be used for image data classification in fields such as web image retrieval, video retrieval, remote sensing image classification, interactive entertainment, and intelligent robotics.
Background art
In the field of massive image data processing, deep learning is a widely used approach. Proposed by Hinton in 2006, deep learning has been broadly recognized and applied. Its essence is to learn useful, abstract features from large-scale training data by building artificial neural network models with multiple hidden layers, ultimately improving the accuracy of image classification. Deep learning is therefore well suited to massive image data processing. Hinton proposed the DBN network and confirmed that (1) deep network structures have better feature learning ability than shallow network structures, and (2) layer-wise training enables deep network structures to be trained well. Since then, many more deep learning models have been proposed, and these models have further confirmed Hinton's viewpoint.
Traditional neural networks are mainly trained with the back-propagation (BP) algorithm: the parameters are randomly initialized, the current network output is computed iteratively, and the parameters between layers are continually adjusted according to the difference between the current predicted labels and the actual labels until the whole model converges. As a supervised learning algorithm, traditional BP suffers from gradient vanishing, insufficient training samples, and local optima. At the same time, the number of unlabeled images on the Internet is growing explosively, and traditional BP cannot currently meet the demand for classifying massive unlabeled images.
Summary of the invention
The purpose of the present invention is to address the gradient vanishing, insufficient training samples, and local optimum problems of traditional neural network training by proposing an image classification algorithm based on K-means and deep learning.
To achieve the above object, the image classification algorithm based on K-means and deep learning designed by the present invention comprises the following steps:
1) taking unlabeled images as input and randomly extracting image blocks of equal size to form an unlabeled image set;
2) extracting optimal cluster centres using the K-means algorithm;
3) constructing a feature mapping function and extracting image features of the unlabeled image set;
4) performing pooling and normalization;
5) extracting secondary optimal cluster centres using the K-means algorithm, applying a convolution operation to extract final image features, and normalizing the final image features;
6) classifying the normalized final image features with a classifier.
Preferably, step 2) specifically comprises:
21) Set k initial cluster centres {μ_1, μ_2, μ_3, …, μ_k}, k a natural number, and establish the initial criterion function

J = Σ_{i=1}^{n} ||x^(i) − μ_j||²

where μ_j is the cluster centre corresponding to each sample x^(i), j = 1~k, i is a natural number and i > j; x^(i) denotes one of the n samples in the unlabeled image set, and n, a natural number, is the number of samples in the unlabeled image set.
22) For each sample x^(i) in turn, find the initial cluster centre in {μ_1, μ_2, μ_3, …, μ_k} at minimum distance, record it as the class label c^(i) of the sample, and assign x^(i) to class c^(i); then recompute the cluster centres according to the class labels c^(i) to obtain intermediate cluster centres μ'_j, j = 1~k:

c^(i) = argmin_j ||x^(i) − μ'_j||

23) Substitute all intermediate cluster centres μ'_j into the criterion function and judge whether it has converged; if not, return to step 22), otherwise proceed to step 24).
24) Take the intermediate cluster centres μ'_j as the optimal cluster centres {μ'_1, μ'_2, μ'_3, …, μ'_k}, assign each sample x^(i) to its nearest cluster centre, recorded as x_j^(i), and record the class label of each sample x^(i) with respect to its nearest cluster centre as c_j^(i).
Preferably, step 3) specifically comprises: defining the feature mapping function of sample x^(i) and extracting the feature vector y^(i):

y^(i) = f_k(x) = max{0, h(z) − z_j^(i)}
z_j^(i) = ||x^(i) − μ_j||_2

where h(z) is the average distance of all samples in each class to the cluster centre, and z_j^(i) is the distance of each sample x^(i) in the class to its corresponding cluster centre.
Preferably, the normalization formula in step 4) is:

ŷ^(i) = (y^(i) − mean(y^(i))) / sqrt(var(y^(i)) + σ)

where y^(i) is the feature vector of a sample image block, var and mean denote the variance and the mean, σ is a denoising constant, and ŷ^(i) is the feature vector of the image block after normalization.
Preferably, step 5) specifically comprises:
51) Set p initial cluster centres {μ_1, μ_2, μ_3, …, μ_p}, p a natural number, and repeat step 2) to obtain the secondary optimal cluster centres {μ'_1, μ'_2, μ'_3, …, μ'_p}.
52) Apply a convolution operation to extract the final image features:

f_l^(i) = ŷ^(i) * μ'_l, l = 1~p

where f_l^(i) is the final image feature, * denotes convolution, and μ'_l is a secondary optimal cluster centre.
53) Normalize the final image features; the normalization formula is the same as in step 4).
Preferably, a preprocessing step of normalizing and whitening the unlabeled image set is further included between step 1) and step 2).
Preferably, the whitening process comprises:
A) Compute the covariance matrix Σ:

Σ = (1/n) Σ_{i=1}^{n} x̃^(i) (x̃^(i))^T

where x̃^(i) denotes a normalized sample of the unlabeled image set and n is the number of samples in the unlabeled image set.
B) Let the eigenvector matrix of the covariance matrix Σ be U = [u_1, u_2, …, u_n] with U^T U = I; the eigenvectors u_1, u_2, …, u_n form a basis for mapping the data. The rotated image x_rot^(i), i.e. each normalized input sample x̃^(i) expressed in the basis U, is

x_rot^(i) = U^T x̃^(i)

C) Let the eigenvalues of the covariance matrix Σ be λ_1, λ_2, …, λ_n. The PCA-whitened image is then

x_PCAwhite,j^(i) = x_rot,j^(i) / sqrt(λ_j + ε)

where ε is a constant that smooths the image block.
Advantages of the present invention include:
(1) The unsupervised K-means algorithm is used as the training method for the deep network structure, avoiding the training of numerous parameters; only the dictionary (i.e. the cluster centres) needs to be trained, so the training process is simple and time-efficient. The method is simple and efficient, has few training parameters, and works well for classifying massive high-dimensional images.
(2) The input images are preprocessed, improving the image classification effect and the classification accuracy.
(3) Normalization is applied to enhance image contrast and reduce the influence of illumination.
(4) Since image data exhibit correlations, the present invention applies PCA whitening to eliminate redundancy between images.
(5) Average pooling is applied to reduce the dimensionality of the feature vectors and integrate the image features; after pooling, the pooled image features are normalized to balance the influence of each feature component and improve the classification performance of the subsequent Softmax classifier on the image features.
Brief description of the drawings
Fig. 1 is a flowchart of the image classification algorithm based on K-means and deep learning of the present invention;
Fig. 2 is a flowchart of the K-means algorithm;
Fig. 3 is a schematic diagram of feature extraction by convolution;
Fig. 4a plots classification accuracy against iteration count for the three models on the MNIST data set;
Fig. 4b plots classification accuracy against iteration count for the three models on the Cifar-10 data set;
Fig. 4c plots classification accuracy against iteration count for the three models on The four-vehicle data set.
Specific embodiment
The present invention is described in further detail below in conjunction with the drawings and specific embodiments.
As shown in Fig. 1 and Fig. 2, the image classification algorithm based on K-means and deep learning of the present invention comprises the following steps:
1) Unlabeled images are taken as input, and image blocks of equal size are randomly extracted to form an unlabeled image set. In this embodiment the unlabeled image set contains 100,000 samples, each an image block of size 12×12×3.
The unlabeled image set is preprocessed by normalization and whitening.
The normalization is:

x̃ = (x − mean(x)) / sqrt(var(x) + σ)

where x is an input sample of the unlabeled image set, x̃ denotes the normalized sample, var and mean denote the variance and the mean respectively, and σ is a denoising constant that prevents the denominator from being 0 and denoises the image.
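The per-sample normalization above can be sketched in NumPy (an illustrative sketch, not the patent's code; the function name `normalize_patches` and the example patch shapes are assumptions):

```python
import numpy as np

def normalize_patches(X, sigma=10.0):
    """Per-patch normalization: subtract each patch's mean and divide by
    sqrt(variance + sigma). The constant sigma keeps the denominator away
    from zero and suppresses noise in low-contrast patches."""
    X = X.astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    var = X.var(axis=1, keepdims=True)
    return (X - mean) / np.sqrt(var + sigma)

# Example: 5 random patches, each flattened from a 12x12x3 block (432 values)
rng = np.random.default_rng(0)
patches = rng.random((5, 432)) * 255
norm = normalize_patches(patches)
```

After this step each row has zero mean; σ = 10 here is an arbitrary illustrative value, since the patent does not specify one.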
The whitening process is:
A) Compute the covariance matrix Σ:

Σ = (1/n) Σ_{i=1}^{n} x̃^(i) (x̃^(i))^T

where x̃^(i) denotes a normalized sample of the unlabeled image set and n is the number of samples in the unlabeled image set, n = 100,000 in this example.
B) Let the eigenvector matrix of the covariance matrix Σ be U = [u_1, u_2, …, u_n] with U^T U = I; the eigenvectors u_1, u_2, …, u_n form a basis for mapping the data. The rotated image x_rot^(i), i.e. each normalized input sample x̃^(i) expressed in the basis U, is

x_rot^(i) = U^T x̃^(i)

C) Let the eigenvalues of the covariance matrix Σ be λ_1, λ_2, …, λ_n. The PCA-whitened image is then

x_PCAwhite,j^(i) = x_rot,j^(i) / sqrt(λ_j + ε)

where ε is a constant that smooths the image block.
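Steps A)–C) correspond to standard PCA whitening; a minimal NumPy sketch (the function name and toy data are illustrative, not taken from the patent):

```python
import numpy as np

def pca_whiten(X, eps=0.1):
    """PCA whitening: compute the covariance of the (already normalized)
    data, rotate into its eigenbasis, and rescale each component by
    1/sqrt(eigenvalue + eps) to decorrelate the inputs."""
    cov = np.cov(X, rowvar=False)          # covariance matrix Sigma
    eigvals, U = np.linalg.eigh(cov)       # columns of U are eigenvectors
    X_rot = X @ U                          # rotated data, x_rot = U^T x
    return X_rot / np.sqrt(eigvals + eps)  # divide by sqrt(lambda + eps)

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))        # stand-in for flattened patches
Xw = pca_whiten(X)
```

After whitening, the feature covariance is close to diagonal with entries λ/(λ+ε), i.e. the redundancy between pixels has been removed.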
2) Optimal cluster centres are extracted using the K-means algorithm.
The preprocessed unlabeled image set is used as the K-means clustering data, and the network is trained with the K-means clustering algorithm to obtain the cluster centres, i.e. the dictionary. As an unsupervised learning algorithm, K-means clustering avoids the training of numerous parameters; only the dictionary needs to be trained, so the training process is simple and time-efficient.
Let the preprocessed unlabeled image set be {x^(1), x^(2), x^(3), …, x^(n)}, x^(i) ∈ R^n (where n is the number of samples in the image set, i = 1~n, n = 100,000 in this example; x^(i) denotes one of the n samples and R^n denotes an n-dimensional vector). The unlabeled image set is clustered with the K-means clustering algorithm.
21) Set k initial cluster centres {μ_1, μ_2, μ_3, …, μ_k}, k a natural number, k = 1600 in this example, and establish the initial criterion function

J = Σ_{i=1}^{n} ||x^(i) − μ_j||²

where μ_j is the cluster centre corresponding to each sample x^(i), j = 1~k, i and j are natural numbers and i > j; x^(i) denotes one of the n samples in the unlabeled image set.
22) For each sample x^(i) in turn, find the initial cluster centre in {μ_1, μ_2, μ_3, …, μ_k} at minimum distance, record it as the class label c^(i) of the sample, and assign x^(i) to class c^(i); then recompute the cluster centres according to the class labels c^(i) to obtain intermediate cluster centres μ'_j, j = 1~k:

c^(i) = argmin_j ||x^(i) − μ'_j||

23) Substitute all intermediate cluster centres μ'_j into the criterion function and judge whether it has converged; if not, return to step 22), otherwise proceed to step 24).
24) Take the intermediate cluster centres μ'_j as the optimal cluster centres {μ'_1, μ'_2, μ'_3, …, μ'_k}, assign each sample x^(i) to its nearest cluster centre, recorded as x_j^(i), and record the class label of each sample x^(i) with respect to its nearest cluster centre as c_j^(i).
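Steps 21)–24) are the classical Lloyd iteration; a compact NumPy sketch, with a small k and synthetic 2-D data for illustration (the patent uses k = 1600 on image patches):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's K-means (steps 21-24): assign every sample to its nearest
    centre, c_i = argmin_j ||x_i - mu_j||, then move each centre to the
    mean of its assigned samples; stop when the assignments no longer
    change, i.e. the criterion function has converged."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = None
    for _ in range(n_iter):
        # distance of every sample to every centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break                      # assignments stable: converged
        labels = new_labels
        for j in range(k):
            if np.any(labels == j):    # skip empty clusters
                centres[j] = X[labels == j].mean(axis=0)
    return centres, labels

# Two well-separated synthetic clusters instead of image patches
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centres, labels = kmeans(X, k=2)
```

The convergence test here (unchanged assignments) is a simplification of "the criterion function converges", but the two coincide for Lloyd's algorithm.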
3) A feature mapping function is constructed to extract the image features of the unlabeled image set.
The feature mapping function of each sample x^(i) is defined and the feature vector y^(i) is extracted:

y^(i) = f_k(x) = max{0, h(z) − z_j^(i)}
z_j^(i) = ||x^(i) − μ_j||_2

where h(z) is the average distance of all samples in each class to the cluster centre and z_j^(i) is the distance of each sample in the class to its corresponding cluster centre.
When the feature mapping function outputs 0, the feature's distance to the cluster centre exceeds the "average". For each input image of size 64×64, with stride s = 1, sample blocks with a receptive field of 12×12 (i.e. the area selected within the 64×64 image) are taken as the input of the feature mapping function and mapped to a k-dimensional feature representation, so that a feature of size (64−12+1)×(64−12+1)×k is obtained for each image. This ensures that most feature values are 0, giving the representation sparsity; such sparse representations are widely used in computer vision.
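The mapping y^(i) = max{0, h(z) − z_j^(i)} is a "triangle"-style encoding; a sketch under the assumption that h(z) is the mean distance of a patch to all k centres (the helper name and toy sizes are illustrative):

```python
import numpy as np

def triangle_features(X, centres):
    """Map each patch to a k-dimensional sparse code: compute its
    distance z_j to every cluster centre, subtract z_j from the mean
    distance h(z), and clamp negatives to 0. Centres farther than
    average contribute 0, so most coordinates are zero (sparsity)."""
    z = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    h = z.mean(axis=1, keepdims=True)   # h(z): mean distance per patch
    return np.maximum(0.0, h - z)       # y = max(0, h(z) - z_j)

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 432))        # 10 preprocessed patches
centres = rng.standard_normal((16, 432))  # a toy dictionary, k = 16
Y = triangle_features(X, centres)
```

Because the maximum distance is always at least the mean, every patch gets at least one zero coordinate, which is what makes the code sparse.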
4) Pooling and normalization are carried out.
Next, the dimensionality of the feature vectors is reduced by pooling.
Because the image feature vectors produced by the feature mapping function are of very high dimension, they are unfavourable for classification by the Softmax classifier and prone to overfitting. Average pooling is therefore applied to the extracted image features to reduce the dimensionality of the feature vectors and integrate the image features. After pooling, the pooled image features are normalized in order to balance the influence of each feature component and improve the classification performance of the subsequent Softmax classifier.
The normalization formula is:

ŷ^(i) = (y^(i) − mean(y^(i))) / sqrt(var(y^(i)) + σ)

where ŷ^(i) is the feature vector of the image block after normalization, y^(i) is the feature vector of the sample image block, var and mean denote the variance and the mean, and σ is a denoising constant.
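The average pooling of step 4) can be sketched as follows (the block size and the toy feature map are illustrative choices, not values from the patent):

```python
import numpy as np

def average_pool(fmap, pool=2):
    """Average pooling: partition the feature map into pool x pool
    blocks per channel and replace each block by its mean, shrinking
    the feature vector while integrating neighbouring responses."""
    h, w, k = fmap.shape
    h2, w2 = h // pool, w // pool
    trimmed = fmap[:h2 * pool, :w2 * pool, :]   # drop ragged edges
    return trimmed.reshape(h2, pool, w2, pool, k).mean(axis=(1, 3))

# A tiny 4x4 feature map with 2 channels
fmap = np.arange(4 * 4 * 2, dtype=float).reshape(4, 4, 2)
pooled = average_pool(fmap, pool=2)
```

The output has pool² times fewer spatial entries per channel, which is exactly the dimensionality reduction the text describes.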
5) Secondary optimal cluster centres are extracted using the K-means algorithm, and a convolution operation is applied to extract the final image features, which are then normalized. The image block features of size a×a obtained by training are used as convolution kernels and convolved with the input image to extract image features, as shown in Fig. 3.
51) Set p initial cluster centres {μ_1, μ_2, μ_3, …, μ_p}, p a natural number, p = 2000 in this example; repeat steps 21)~23) with k replaced by p to obtain the secondary optimal cluster centres {μ'_1, μ'_2, μ'_3, …, μ'_p}.
52) Apply a convolution operation: convolve the image features of the input image blocks with the extracted cluster centres to extract the final image features:

f_l^(i) = ŷ^(i) * μ'_l, l = 1~p

where f_l^(i) is the final image feature, * denotes convolution, and μ'_l is a secondary optimal cluster centre.
53) Normalize the final image features to balance the influence of each feature component once more; the normalization formula is the same as in step 4).
6) The normalized final image features are classified by a Softmax classifier.
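The classifier of step 6) is Softmax; below is a minimal multinomial logistic regression trained by batch gradient descent — a stand-in sketch, since the patent does not give the classifier's training details (bias terms and regularization are omitted, and the data are synthetic):

```python
import numpy as np

def softmax(scores):
    """Softmax over class scores, shifted for numerical stability."""
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, n_classes, lr=0.5, epochs=200):
    """Multinomial logistic regression: minimize cross-entropy between
    the softmax outputs and one-hot labels by gradient descent."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)      # gradient of cross-entropy
    return W

# Toy stand-in for pooled/normalized image features: two separable classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.2, (30, 4)), rng.normal(1, 0.2, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
W = train_softmax(X, y, n_classes=2)
pred = softmax(X @ W).argmax(axis=1)
```

In the patent's pipeline, X would be the normalized final image features and y the class labels of a labelled training subset.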
Experimental data:
The proposed image classification algorithm based on K-means and deep learning (denoted KDL in the tables below) was compared with the classical SAE and Stacked SAE algorithms. The experimental results are shown in Tables 1 to 3.
As can be seen from Table 1, in classifying the MNIST data set, the proposed KDL model reaches its maximum classification accuracy of 97.52% at 300 iterations, which is 4.52% and 16.8% higher than the sparse autoencoder model (SAE) and the stacked autoencoder (Stacked SAE) respectively. As shown in Fig. 4a, the classification accuracy of the KDL model on MNIST remains consistently well above that of the SAE and Stacked SAE models as the number of iterations increases. Because the SAE model, with its single-layer network structure, cannot express image features effectively and accurately, the multi-layer K-means network structure used here has a better ability to express image features. The experimental data show that the overall classification performance of the KDL model is better than that of the SAE model, which is consistent with the expected analysis. Compared with the Stacked SAE model, the KDL model is simpler and more efficient in both training and classification and has fewer training parameters, so its classification performance is better. The experimental data likewise confirm that the classification accuracy of the KDL model is far above that of the Stacked SAE model.
Table 1. Classification accuracy of the three models on the MNIST data set
As can be seen from Table 2, in classifying the Cifar-10 data set, the classification accuracy of the KDL model at 100 iterations is 61.34%, which is 1.65% and 31.02% higher than the sparse autoencoder model (SAE) and the stacked autoencoder (Stacked SAE) respectively. As shown in Fig. 4b, the classification accuracy of all three models grows slowly as the number of iterations increases; at that point the recognition rate of the KDL model is less than 1% below that of the SAE model and 22.2% above that of the Stacked SAE model. Since experimental results inevitably contain errors, and identical experimental parameters were used when classifying different data sets, some deviation in the results is possible. Overall, the experimental data show that the classification accuracy of the KDL model on the Cifar-10 data set is higher than that of the other two models.
Table 2. Classification accuracy of the three models on the Cifar-10 data set
As can be seen from Table 3, in classifying The four-vehicle data set, the recognition accuracy of the KDL model reaches its maximum of 80.87% at 100 iterations, which is 2.32% and 16.9% higher than the sparse autoencoder model (SAE) and the stacked autoencoder (Stacked SAE) respectively. As shown in Fig. 4c, the recognition accuracy of the KDL model is consistently higher than that of the SAE and Stacked SAE models as the number of iterations increases. The accuracy of the SAE model first rises and then falls with increasing iterations, reaching its maximum of 79.47% at 400 iterations; that of the Stacked SAE model likewise first rises and then falls, reaching its maximum of 67.31% at 200 iterations. From the above analysis, the recognition performance of the KDL model is better than that of the SAE and Stacked SAE models.
Table 3. Classification accuracy of the three models on The four-vehicle data set
From the classification accuracies of the three models on the three data sets, the proposed KDL image classification model based on K-means and deep learning is not only better than the SAE and Stacked SAE models in classification accuracy, but also combines the simplicity, efficiency, and small number of learning parameters of the K-means clustering algorithm with deep learning's ability to handle large-scale images.
Besides these examples, the present invention may have other implementations; all schemes formed by equivalent substitution or equivalent transformation fall within the protection scope claimed by this patent.
The content not described in detail in this specification belongs to the prior art known to those skilled in the art.
Claims (7)
1. An image classification algorithm based on K-means and deep learning, characterized in that it comprises the following steps:
1) taking unlabeled images as input and randomly extracting image blocks of equal size to form an unlabeled image set;
2) extracting optimal cluster centres using the K-means algorithm;
3) constructing a feature mapping function and extracting image features of the unlabeled image set;
4) performing pooling and normalization;
5) extracting secondary optimal cluster centres using the K-means algorithm, applying a convolution operation to extract final image features, and normalizing the final image features;
6) classifying the normalized final image features with a classifier.
2. The image classification algorithm based on K-means and deep learning according to claim 1, characterized in that:
21) k initial cluster centres {μ_1, μ_2, μ_3, …, μ_k} are set, k being a natural number, and the initial criterion function is established as

J = Σ_{i=1}^{n} ||x^(i) − μ_j||²

where μ_j is the cluster centre corresponding to each sample x^(i), j = 1~k, i is a natural number and i > j; x^(i) denotes one of the n samples in the unlabeled image set, and n, a natural number, is the number of samples in the unlabeled image set;
22) for each sample x^(i) in turn, the initial cluster centre in {μ_1, μ_2, μ_3, …, μ_k} at minimum distance is found and recorded as the class label c^(i) of the sample, x^(i) is assigned to class c^(i), and the cluster centres are recomputed according to the class labels c^(i), giving intermediate cluster centres μ'_j, j = 1~k:

c^(i) = argmin_j ||x^(i) − μ'_j||

23) all intermediate cluster centres μ'_j are substituted into the criterion function and it is judged whether the criterion function has converged; if not, return to step 22), otherwise proceed to step 24);
24) the intermediate cluster centres μ'_j are taken as the optimal cluster centres {μ'_1, μ'_2, μ'_3, …, μ'_k}, each sample x^(i) is assigned to its nearest cluster centre, recorded as x_j^(i), and the class label of each sample x^(i) with respect to its nearest cluster centre is recorded as c_j^(i).
3. The image classification algorithm based on K-means and deep learning according to claim 2, characterized in that step 3) specifically comprises: defining the feature mapping function of sample x^(i) and extracting the feature vector y^(i):

y^(i) = f_k(x) = max{0, h(z) − z_j^(i)}
z_j^(i) = ||x^(i) − μ_j||_2

where h(z) is the average distance of all samples in each class to the cluster centre, and z_j^(i) is the distance of each sample x^(i) in the class to its corresponding cluster centre.
4. The image classification algorithm based on K-means and deep learning according to claim 3, characterized in that the normalization formula in step 4) is:

ŷ^(i) = (y^(i) − mean(y^(i))) / sqrt(var(y^(i)) + σ)

where y^(i) is the feature vector of a sample image block, var and mean denote the variance and the mean, σ is a denoising constant, and ŷ^(i) is the feature vector of the image block after normalization.
5. The image classification algorithm based on K-means and deep learning according to claim 4, characterized in that step 5) specifically comprises:
51) setting p initial cluster centres {μ_1, μ_2, μ_3, …, μ_p}, p being a natural number, and repeating step 2) to obtain the secondary optimal cluster centres {μ'_1, μ'_2, μ'_3, …, μ'_p};
52) applying a convolution operation to extract the final image features:

f_l^(i) = ŷ^(i) * μ'_l, l = 1~p

where f_l^(i) is the final image feature, * denotes convolution, and μ'_l is a secondary optimal cluster centre;
53) normalizing the final image features, the normalization formula being the same as in step 4).
6. The image classification algorithm based on K-means and deep learning according to claim 1, characterized in that a preprocessing step of normalizing and whitening the unlabeled image set is further included between step 1) and step 2).
7. The image classification algorithm based on K-means and deep learning according to claim 6, characterized in that the whitening process comprises:
A) computing the covariance matrix Σ:

Σ = (1/n) Σ_{i=1}^{n} x̃^(i) (x̃^(i))^T

where x̃^(i) denotes a normalized sample of the unlabeled image set and n is the number of samples in the unlabeled image set;
B) letting the eigenvector matrix of the covariance matrix Σ be U = [u_1, u_2, …, u_n] with U^T U = I, the eigenvectors u_1, u_2, …, u_n forming a basis for mapping the data; the rotated image x_rot^(i), i.e. each normalized input sample x̃^(i) expressed in the basis U, is

x_rot^(i) = U^T x̃^(i)

C) letting the eigenvalues of the covariance matrix Σ be λ_1, λ_2, …, λ_n, the PCA-whitened image is then

x_PCAwhite,j^(i) = x_rot,j^(i) / sqrt(λ_j + ε)

where ε is a constant that smooths the image block.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201611259889.0A | 2016-12-30 | 2016-12-30 | An image classification algorithm based on K-means and deep learning |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201611259889.0A | 2016-12-30 | 2016-12-30 | An image classification algorithm based on K-means and deep learning |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN106845528A | 2017-06-13 |
Family
ID=59113687
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201611259889.0A | CN106845528A (Pending) | 2016-12-30 | 2016-12-30 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106845528A |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107341510A (en) * | 2017-07-05 | 2017-11-10 | 西安电子科技大学 | Image clustering method based on sparse orthogonal digraph Non-negative Matrix Factorization |
CN107609638A (en) * | 2017-10-12 | 2018-01-19 | 湖北工业大学 | A kind of method based on line decoder and interpolation sampling optimization convolutional neural networks |
CN107871011A (en) * | 2017-11-21 | 2018-04-03 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN108304920A (en) * | 2018-02-02 | 2018-07-20 | 湖北工业大学 | A method of multiple dimensioned learning network is optimized based on MobileNets |
CN108734653A (en) * | 2018-05-07 | 2018-11-02 | 商汤集团有限公司 | Image style conversion method and device |
CN109034248A (en) * | 2018-07-27 | 2018-12-18 | 电子科技大学 | A kind of classification method of the Noise label image based on deep learning |
CN109085181A (en) * | 2018-09-14 | 2018-12-25 | 河北工业大学 | A kind of surface defect detection apparatus and detection method for pipeline connecting parts |
CN109165309A (en) * | 2018-08-06 | 2019-01-08 | 北京邮电大学 | Negative training sample acquisition method, device and model training method, device |
CN109522973A (en) * | 2019-01-17 | 2019-03-26 | 云南大学 | Medical big data classification method and system based on production confrontation network and semi-supervised learning |
CN109727195A (en) * | 2018-12-25 | 2019-05-07 | 成都元点智库科技有限公司 | A kind of image super-resolution reconstructing method |
CN109829433A (en) * | 2019-01-31 | 2019-05-31 | 北京市商汤科技开发有限公司 | Facial image recognition method, device, electronic equipment and storage medium |
CN109948659A (en) * | 2019-02-23 | 2019-06-28 | 天津大学 | A method of promoting polar plot bitmap classification accuracy |
CN111126470A (en) * | 2019-12-18 | 2020-05-08 | 创新奇智(青岛)科技有限公司 | Image data iterative clustering analysis method based on depth metric learning |
2016-12-30: Application filed as CN201611259889.0A; published as CN106845528A (en); status: Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015036939A (en) * | 2013-08-15 | 2015-02-23 | 富士ゼロックス株式会社 | Feature extraction program and information processing apparatus |
CN104036293A (en) * | 2014-06-13 | 2014-09-10 | 武汉大学 | Rapid binary encoding based high resolution remote sensing image scene classification method |
CN104573731A (en) * | 2015-02-06 | 2015-04-29 | 厦门大学 | Rapid target detection method based on convolutional neural network |
CN105046272A (en) * | 2015-06-29 | 2015-11-11 | 电子科技大学 | Image classification method based on concise unsupervised convolutional network |
CN105809121A (en) * | 2016-03-03 | 2016-07-27 | 电子科技大学 | Multi-feature synergistic traffic sign detection and recognition method |
CN106023221A (en) * | 2016-05-27 | 2016-10-12 | 哈尔滨工业大学 | Remote sensing image segmentation method based on non-negative low-rank sparse correlation graph |
CN106096605A (en) * | 2016-06-02 | 2016-11-09 | 史方 | Image blur region detection method and device based on deep learning |
CN106096561A (en) * | 2016-06-16 | 2016-11-09 | 重庆邮电大学 | Infrared pedestrian detection method based on deep learning features of image blocks |
Non-Patent Citations (2)
Title |
---|
何俐珺: "Research on Weed Recognition Based on K-means Feature Learning", China Master's Theses Full-text Database, Information Science and Technology Series * |
何鹏程: "Research on an Improved Convolutional Neural Network Model and Its Applications", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107341510A (en) * | 2017-07-05 | 2017-11-10 | 西安电子科技大学 | Image clustering method based on sparse orthogonal dual-graph non-negative matrix factorization |
CN107341510B (en) * | 2017-07-05 | 2020-04-14 | 西安电子科技大学 | Image clustering method based on sparse orthogonal dual-graph non-negative matrix factorization |
CN107609638B (en) * | 2017-10-12 | 2019-12-10 | 湖北工业大学 | Method for optimizing a convolutional neural network based on a linear decoder and interpolation sampling |
CN107609638A (en) * | 2017-10-12 | 2018-01-19 | 湖北工业大学 | Method for optimizing a convolutional neural network based on a linear decoder and interpolation sampling |
CN107871011A (en) * | 2017-11-21 | 2018-04-03 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN107871011B (en) * | 2017-11-21 | 2020-04-24 | Oppo广东移动通信有限公司 | Image processing method, image processing device, mobile terminal and computer readable storage medium |
CN108304920A (en) * | 2018-02-02 | 2018-07-20 | 湖北工业大学 | Method for optimizing a multi-scale learning network based on MobileNets |
CN108304920B (en) * | 2018-02-02 | 2020-03-10 | 湖北工业大学 | Method for optimizing multi-scale learning network based on MobileNet |
CN108734653A (en) * | 2018-05-07 | 2018-11-02 | 商汤集团有限公司 | Image style conversion method and device |
CN108734653B (en) * | 2018-05-07 | 2022-05-13 | 商汤集团有限公司 | Image style conversion method and device |
CN109034248A (en) * | 2018-07-27 | 2018-12-18 | 电子科技大学 | Deep learning-based classification method for images with noisy labels |
CN109034248B (en) * | 2018-07-27 | 2022-04-05 | 电子科技大学 | Deep learning-based classification method for noise-containing label images |
CN109165309A (en) * | 2018-08-06 | 2019-01-08 | 北京邮电大学 | Negative training sample acquisition method and device, and model training method and device |
CN109165309B (en) * | 2018-08-06 | 2020-10-16 | 北京邮电大学 | Negative example training sample acquisition method and device and model training method and device |
CN109085181A (en) * | 2018-09-14 | 2018-12-25 | 河北工业大学 | Surface defect detection apparatus and method for pipeline connecting parts |
CN109727195A (en) * | 2018-12-25 | 2019-05-07 | 成都元点智库科技有限公司 | Image super-resolution reconstruction method |
CN109522973A (en) * | 2019-01-17 | 2019-03-26 | 云南大学 | Medical big data classification method and system based on generative adversarial networks and semi-supervised learning |
CN109829433A (en) * | 2019-01-31 | 2019-05-31 | 北京市商汤科技开发有限公司 | Face image recognition method and device, electronic equipment and storage medium |
CN109829433B (en) * | 2019-01-31 | 2021-06-25 | 北京市商汤科技开发有限公司 | Face image recognition method and device, electronic equipment and storage medium |
CN109948659A (en) * | 2019-02-23 | 2019-06-28 | 天津大学 | Method for improving classification accuracy of vector graphics and bitmaps |
CN111126470A (en) * | 2019-12-18 | 2020-05-08 | 创新奇智(青岛)科技有限公司 | Iterative cluster analysis method for image data based on deep metric learning |
CN111126470B (en) * | 2019-12-18 | 2023-05-02 | 创新奇智(青岛)科技有限公司 | Iterative cluster analysis method for image data based on deep metric learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106845528A (en) | Image classification algorithm based on K-means and deep learning | |
Zahisham et al. | Food recognition with ResNet-50 | |
CN112308158B (en) | Multi-source field self-adaptive model and method based on partial feature alignment | |
US7362892B2 (en) | Self-optimizing classifier | |
CN101447020B (en) | Pornographic image recognizing method based on intuitionistic fuzzy | |
CN103955702A (en) | SAR image terrain classification method based on deep RBF network | |
CN106778921A (en) | Person re-identification method based on a deep learning encoding model | |
CN109871885A (en) | Plant identification method based on deep learning and plant taxonomy | |
Sabrol et al. | Fuzzy and neural network based tomato plant disease classification using natural outdoor images | |
CN106709528A (en) | Vehicle re-identification method and device based on multi-objective-function deep learning | |
CN104809469A (en) | Indoor scene image classification method facing service robot | |
Pinto et al. | Crop disease classification using texture analysis | |
US7233692B2 (en) | Method and computer program product for identifying output classes with multi-modal dispersion in feature space and incorporating multi-modal structure into a pattern recognition system | |
CN109255339B (en) | Classification method based on self-adaptive deep forest human gait energy map | |
CN110263174A (en) | Subject category analysis method based on points of focus | |
CN110298434A (en) | Integrated deep belief network based on fuzzy partitioning and fuzzy weighting | |
CN105894035B (en) | SAR image classification method based on SAR-SIFT and DBN | |
Nga et al. | Combining binary particle swarm optimization with support vector machine for enhancing rice varieties classification accuracy | |
US7164791B2 (en) | Method and computer program product for identifying and incorporating new output classes in a pattern recognition system during system operation | |
CN106326914A (en) | SVM-based pearl multi-classification method | |
CN106570514A (en) | Automobile wheel hub classification method based on bag-of-words model and support vector machine | |
Gunawan et al. | Classification of rice leaf diseases using artificial neural network | |
CN113221913A (en) | Agriculture and forestry disease and pest fine-grained identification method and device based on Gaussian probability decision-level fusion | |
Cho et al. | Fruit ripeness prediction based on DNN feature induction from sparse dataset | |
CN111310838A (en) | Drug efficacy image classification and recognition method based on deep Gabor network | |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2017-06-13 |