CN107392243A - Image classification method for supervised semantic-space learning based on null-space LDA - Google Patents

Image classification method for supervised semantic-space learning based on null-space LDA

Info

Publication number
CN107392243A
CN107392243A
Authority
CN
China
Prior art keywords
semantic space
sample
matrix
lda
scatter matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710586578.3A
Other languages
Chinese (zh)
Inventor
Jinyu Tian
Xia Zhang
Taiping Zhang
Zhaowei Shang
Yuanyan Tang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201710586578.3A priority Critical patent/CN107392243A/en
Publication of CN107392243A publication Critical patent/CN107392243A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/245 Classification techniques relating to the decision surface
    • G06F18/2451 Classification techniques relating to the decision surface; linear, e.g. hyperplane
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns; Bootstrap methods characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to an image classification method for supervised semantic-space learning based on null-space LDA, belonging to the field of image classification. The method comprises the following steps: collecting samples from an image database Z; on the original representation of the data Z, computing the within-class scatter matrix, the between-class scatter matrix, and the total scatter matrix; performing eigenvalue decomposition on the total scatter matrix S_t; terminating the iteration when the stopping condition ||YQ^(k+1) − X^(k+1)||_F ≤ ε holds, otherwise setting k ← k+1; obtaining the semantic-space construction vectors X of the samples, projecting the samples Z into the semantic space to obtain their semantic representation, and applying a KNN classifier to the semantic representation to obtain the labels L. The present invention solves the small-sample-size problem, caused by the sample dimension exceeding the number of samples, that traditional LDA cannot handle effectively, and achieves a considerable improvement in classification accuracy over the DLDA, PCA+LDA, and SRC algorithms.

Description

Image classification method for supervised semantic-space learning based on null-space LDA
Technical field
The invention belongs to the field of image classification and relates to an image classification method for supervised semantic-space learning based on null-space LDA.
Background art
Linear discriminant analysis (Linear Discriminant Analysis, LDA) is a representative semantic-space classification algorithm that can be traced back to 1936, when the famous statistician R. A. Fisher proposed the Fisher criterion. LDA is in fact a generalization of the Fisher criterion: it realizes classification by finding a semantic space onto which the original feature representation of the data is projected. Specifically, LDA projects the original representation of the samples into the semantic space such that, in the projection, the scatter between samples of the same class is minimized while the scatter between samples of different classes is maximized.
In the early stage of LDA's development in artificial intelligence and data mining, it proved efficient and stable when handling small-scale, low-dimensional data. However, the arrival of the big-data era has greatly increased both the quantity of data and its feature dimensionality, so the samples available to an algorithm are subject to the following two limitations. First, although large-scale data can be obtained very easily, the data themselves often lack definite class labels, so little label information is available for the algorithm to learn from. Second, with the development of acquisition devices, data representations have become increasingly complex and their feature dimensionality ever larger. A high-definition picture, for example, when represented in pixel form, easily exhibits a feature dimensionality in the hundreds of thousands or even millions, whereas the number of such pictures is comparatively very small. Achieving classification and recognition with data of this kind is known as the small-sample-size learning problem. Traditional algorithms perform poorly on this problem, so improving them to better handle the small-sample setting has deep practical significance.
Traditional LDA cannot effectively adapt to the small-sample-size problem. When the sample dimension is too large, the vector group of the data set may be of low rank, in which case the within-class scatter matrix S_w is singular. A generalized inverse can be substituted, but the effect is unsatisfactory. To better adapt to high-dimensional classification problems, LDA algorithms based on the null space already exist. This class of algorithms narrows the search range of the semantic-space basis vectors W from the whole space down to the null space of the scatter matrix S_w. In essence, LDA finds the correct semantic space such that the projection of the samples onto it minimizes the variance between samples of the same class while maximizing the variance between samples of different classes. The idea of null-space LDA is therefore equivalent to projecting the original data onto the quotient space induced by the null space of S_w; in this quotient space, the data of each category collapse to a single point. It then suffices to select an appropriate basis that maximizes the variance of the points in the quotient space, i.e., to apply PCA in the quotient space.
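This null-space construction can be sketched in a few lines of NumPy. The sketch below is ours, under simplifying assumptions (function names, the tolerance, and the dense eigendecomposition are illustrative, not part of the patent): compute the scatter matrices, project onto the range of the total scatter S_t, then take the eigenvectors of the projected S_w with (near-)zero eigenvalues as a basis of N(S'_w).

```python
import numpy as np

def scatter_matrices(Z, labels):
    """Within-class (Sw) and between-class (Sb) scatter of the columns of Z (d x n)."""
    mu = Z.mean(axis=1, keepdims=True)
    d = Z.shape[0]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Zc = Z[:, labels == c]
        mc = Zc.mean(axis=1, keepdims=True)
        Sw += (Zc - mc) @ (Zc - mc).T                 # spread around class mean
        Sb += Zc.shape[1] * (mc - mu) @ (mc - mu).T   # spread of class means
    return Sw, Sb

def null_space_basis(Z, labels, tol=1e-10):
    """Basis Y of N(S'_w) inside the range of the total scatter St."""
    Sw, Sb = scatter_matrices(Z, labels)
    St = Sw + Sb
    vals, vecs = np.linalg.eigh(St)
    U = vecs[:, vals > tol * vals.max()]   # keep range of St (nonzero eigenvalues)
    Sw_p = U.T @ Sw @ U
    w_vals, w_vecs = np.linalg.eigh(Sw_p)
    Y = w_vecs[:, w_vals < tol * max(w_vals.max(), 1.0)]  # ~zero eigenvalues
    return U, Y
```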
In addition, the development of compressive-sensing theory has injected new vitality into LDA. In traditional LDA the discriminant vectors (i.e., the semantic-space construction vectors) are obtained without any sparsity constraint, so different discriminant vectors may carry the same information, which degrades the classification result. When facing high-dimensional data with few samples, existing techniques therefore struggle to solve the classification and recognition problem well.
Summary of the invention
In view of this, it is an object of the present invention to provide an image classification method for supervised semantic-space learning based on null-space LDA, in which orthogonality and sparsity are fused into the LDA algorithm as constraints on the semantic-space construction: orthogonality guarantees that the decision vectors are as independent of one another as possible, while sparsity guarantees that each decision vector carries only partial information of the whole training set, thereby boosting the algorithm's classification performance on high-dimensional, few-sample problems.
To achieve the above object, the present invention provides the following technical scheme:
An image classification method for supervised semantic-space learning based on null-space LDA, comprising the following steps:
S1: Collect samples from the image database Z; on the original representation of the data Z, compute the within-class scatter matrix S_w, the between-class scatter matrix S_b, and the total scatter matrix S_t = S_w + S_b;
S2: Perform eigenvalue decomposition on the total scatter matrix S_t and project all samples onto the eigen-subspace U corresponding to the nonzero eigenvalues, obtaining the new scatter matrices S'_w = U^T S_w U and S'_b = U^T S_b U;
S3: Obtain any basis Y of the null space N(S'_w) of the new within-class scatter matrix S'_w; initialize the parameters k ← 0, error ε ≥ 0, 0 < ρ1 < ρ2 < 1, and the initial orthogonal matrix Q^(0) as an orthonormal vector group, where ρ1, ρ2 are the step-size correction parameters based on the Armijo-Wolfe conditions, k denotes the iteration count, and ε is the given termination error;
S4: Update X according to the iterative formula X^(k+1) = S_μ[YQ^(k)], which solves the subproblem min_X μ||X||_1 + (1/2)||X − YQ^(k)||_F², where S_μ is the soft-threshold operator and Q^(k) is the orthogonal matrix produced by the k-th iteration;
S5: Update Q according to the iterative formula Q^(k+1) = (I + (τ/2)W)^(-1)(I − (τ/2)W)Q^(k), where W = (U − U^T) and U = [Q^(k)(X^(k+1))^T − Y^T(X^(k+1))(X^(k+1))^T]; this solves the subproblem min_Q ||X^(k+1) − YQ||_F² s.t. Q^T Q = I, where τ/2 is the Cayley transform parameter;
S6: Terminate the iteration when the stopping condition ||YQ^(k+1) − X^(k+1)||_F ≤ ε holds; otherwise set k ← k+1 and return to S4. Here ||YQ^(k+1) − X^(k+1)||_F is the Frobenius norm of the matrix, which measures the difference between the two matrices;
S7: Obtain the semantic-space construction vectors X of the samples; project the samples Z into the semantic space to obtain their semantic representation, and apply a KNN classifier to the semantic representation to obtain the labels L.
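Step S7 admits a very small sketch. The following is an illustrative assumption on our part (scikit-learn's KNN and the variable names are not prescribed by the patent): with the construction vectors stacked as columns of X, the semantic representation of a sample set is simply X^T Z, and a KNN classifier is fitted on it.

```python
from sklearn.neighbors import KNeighborsClassifier

def classify_in_semantic_space(X, Z_train, y_train, Z_test, k=1):
    """Project raw samples (columns of Z) into the semantic space and run KNN."""
    E_train = X.T @ Z_train   # semantic representation of training samples
    E_test = X.T @ Z_test     # semantic representation of test samples
    knn = KNeighborsClassifier(n_neighbors=k).fit(E_train.T, y_train)
    return knn.predict(E_test.T)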
Further, the iteration is an orthogonality-preserving iteration comprising the following steps:
S401: Give an initial orthogonal matrix X_0 satisfying X_0^T X_0 = I, i.e., a point on the Stiefel manifold;
S402: Initialize the iteration count k ← 0, the error ε ≥ 0, and 0 < ρ1 < ρ2 < 1;
S403: Compute the gradient G_k of the objective function and the skew-symmetric matrix W_k = G_k X_k^T − X_k G_k^T; Λ = G^T X is the Lagrange multiplier;
S404: Select a step size τ_k satisfying the Armijo-Wolfe conditions: F(Y_W(τ_k)) ≤ F(Y_W(0)) + ρ1 τ_k F'_τ(Y_W(0)) and F'_τ(Y_W(τ_k)) ≥ ρ2 F'_τ(Y_W(0));
S405: Update X_{k+1} = Y_{W_k}(τ_k) according to the formula Y_W(τ) = (I + (τ/2)W)^(-1)(I − (τ/2)W)X_k;
S406: Terminate the iteration according to the first-order optimality condition ||G_k − X_k Λ_k||_F ≤ ε.
The beneficial effects of the present invention are as follows: compared with DLDA, PCA+LDA, and SRC, the proposed method obtains better classification results under the same conditions; it solves the small-sample-size problem, caused by the sample dimension exceeding the number of samples, that traditional LDA cannot handle effectively, and achieves a considerable improvement in classification accuracy over the DLDA, PCA+LDA, and SRC algorithms.
Brief description of the drawings
To make the purpose, technical scheme, and beneficial effects of the present invention clearer, the present invention provides the following drawings for explanation:
Fig. 1 shows sparse orthogonal eigenfaces;
Fig. 2 shows principal-component eigenfaces;
Fig. 3 shows the 72 different viewing angles of one object class in COIL20;
Fig. 4 shows the 40 different faces in ORL;
Fig. 5 shows the 10 different expressions of one subject in ORL.
Detailed description of the embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
(1) System overall framework
The algorithm is built on the existing null-space LDA algorithm and imposes sparse orthogonality constraints on the decision vectors, so that the semantic-space construction of the samples acquires sparsity. Using the alternating-direction method, the problem is converted into a sparsity-constrained subproblem and an orthogonality-constrained subproblem, which are solved in turn.
(2) Basic theory
1. LDA and the small-sample-size problem
The basic idea of LDA is as follows: the projection W^T X of the sample set X onto the semantic space W should make samples of the same class as similar as possible and samples of different classes as different as possible; correct classification is then achieved by minimizing the within-class variance while maximizing the between-class variance.
The following expression measures the compactness of same-class samples in the semantic space; the smaller its value, the denser the same-class samples are there:

tr(W^T S_w W),  with S_w = Σ_c Σ_{x ∈ class c} (x − μ_c)(x − μ_c)^T

The difference between classes is measured by representing each class by the mean of its samples: the larger the scatter of the class-mean points, the larger the difference between classes. Specifically:

tr(W^T S_b W),  with S_b = Σ_c n_c (μ_c − μ)(μ_c − μ)^T

Solving for the optimal semantic-space basis vectors W can then be cast as the following optimization problem:

max_W tr(W^T S_b W) / tr(W^T S_w W)   (2.1)

The above formula is equivalent to the following generalized eigenvalue problem:

S_w^(-1) S_b W = λW   (2.2)
Property 1: The scatter matrices satisfy S_t = S_b + S_w, and the null space N(S_t) satisfies N(S_t) = N(S_b) ∩ N(S_w).
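To make the small-sample failure concrete, here is a minimal sketch (ours, not the patent's) of classical LDA via the generalized eigenproblem (2.2). scipy.linalg.eigh requires the second matrix to be positive definite, so it fails exactly in the regime the patent targets, where the dimension exceeds the sample count and S_w is singular:

```python
import numpy as np
from scipy.linalg import eigh

def classical_lda(Sw, Sb, p):
    """Solve Sb w = lambda Sw w; raises LinAlgError when Sw is singular."""
    vals, vecs = eigh(Sb, Sw)                      # generalized symmetric solver
    order = np.argsort(vals)[::-1]                 # largest eigenvalues first
    return vecs[:, order[:p]]                      # top-p discriminant vectors
```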
2. Orthogonality-preserving optimization
The optimization problem with an orthogonality constraint takes the following concrete form:

min_{X ∈ R^{n×p}} F(X)  s.t. X^T X = I   (2.3)

where I is the p × p identity matrix and F(X) is a differentiable function on R^{n×p}. In this problem, the feasible region {X ∈ R^{n×p} : X^T X = I} is called the Stiefel manifold; when p = 1, it degenerates to the unit sphere in n-dimensional space.
Existing literature uses the Cayley transform of a matrix to choose a descent direction of the objective function that stays on the Stiefel manifold. Given an n-th order skew-symmetric matrix W, its Cayley transform is defined as K(W) = (I + W)^(-1)(I − W), which is an n-th order special orthogonal matrix. In the orthogonality-preserving optimization algorithm, the skew-symmetric matrix W = GX^T − XG^T is defined, where G = ∂F(X)/∂X is the gradient of the objective function. One can then prove that the iterative formula

Y_W(τ) = (I + (τ/2)W)^(-1)(I − (τ/2)W)X   (2.4)

preserves orthogonality. First consider the Lagrangian of optimization problem (2.3):

L(X, Λ) = F(X) − (1/2)tr(Λ(X^T X − I))
where Λ is the Lagrange multiplier, a symmetric matrix. For a locally optimal solution of optimization problem (2.3), the following first-order optimality condition must hold:

∇_X L = G − XΛ = 0

Theorem 1 (optimality condition of orthogonality-preserving optimization): if X is a locally optimal solution of problem (2.3), then, with the Lagrange multiplier Λ = G^T X, the first-order optimality condition holds, i.e.:

G − XG^T X = 0
Therefore we choose the Lagrange multiplier Λ = G^T X. When the initial value X_0 satisfies X_0^T X_0 = I, the iterative formula (2.4) obviously satisfies the orthogonality constraint. Theorem 2 further shows that at τ = 0 the curve Y_W(τ) is a descent direction of the objective function.
Theorem 2: for a suitable initial orthogonal matrix X, formula (2.4) satisfies Y_W(τ)^T Y_W(τ) = I; and at τ = 0 we have F'_τ(Y_W(0)) < 0, i.e., Y_W(τ) is a descent direction.
By Theorems 1 and 2, the iterate Y_W(τ) of formula (2.4) falls on the Stiefel manifold at every iteration, and a descent direction exists. Although Y_W(τ) is a descent direction at τ = 0, convergence cannot be guaranteed, much as with gradient descent under a fixed step size. Analogous to the one-dimensional line search in unconstrained optimization, existing literature therefore imposes the Armijo-Wolfe conditions

F(Y_W(τ_k)) ≤ F(Y_W(0)) + ρ1 τ_k F'_τ(Y_W(0))   (2.5a)
F'_τ(Y_W(τ_k)) ≥ ρ2 F'_τ(Y_W(0))   (2.5b)

and gives a curve-search algorithm with a corrected step size.
Based on the above brief discussion, the basic steps of the orthogonality-preserving iterative method are as follows:
1. Give an initial orthogonal matrix X_0 with X_0^T X_0 = I.
2. Initialize the iteration count k ← 0, the error ε ≥ 0, and 0 < ρ1 < ρ2 < 1.
3. Compute the gradient G_k of the objective and the skew-symmetric matrix W_k = G_k X_k^T − X_k G_k^T.
4. Select a step size τ_k that satisfies the Armijo-Wolfe conditions (2.5a), (2.5b).
5. Update X_{k+1} = Y_{W_k}(τ_k) according to formula (2.4).
6. Terminate the iteration according to the first-order optimality condition ||G_k − X_k Λ_k||_F ≤ ε. A minimal code sketch of this procedure is given below.
2 Orthogonal sparse null-space linear discriminant analysis (SONLDA)
2.1 Main idea of SONLDA
Suppose that, by the null-space LDA algorithm, a decision vector group W has been selected in the null space of the given scatter matrix S. For each sample x ∈ X, its semantic-space representation is
y = W^T x = {⟨w_1, x⟩, ..., ⟨w_p, x⟩}
Evidently, the semantic-space representation of a sample is determined by the inner products (correlations) between the sample's original representation and the decision vectors w. Therefore, if the decision vectors are mutually independent, i.e., W is an orthogonal vector group, the discriminability of samples in the semantic space improves. More importantly, when each decision vector w contains only partial information of the image data set, i.e., W is a sparse matrix, the classification ability of samples in the semantic space is also markedly improved. Fig. 1 shows the sparse orthogonal decision vectors obtained by the SONLDA algorithm on a face data set, displayed as eigenfaces; Fig. 2 shows the decision vectors obtained by principal component analysis. As can be seen, the eigenface information under the orthogonal-sparse constraints is evenly distributed and overlaps little, whereas in Fig. 2 the information concentrates on parts of the face and lacks detail.
Suppose samples x_i, x_j belong to the same category and carry common information I_1, while x_k belongs to another category and carries information I_2. When correlations with each eigenface are computed, the sparsity of the eigenfaces means that only finitely many eigenfaces carry the information I_1 or I_2, i.e., only the nonzero components of finitely many decision vectors constitute I_1 or I_2. Suppose w_i, w_j carry information I_1 and w_k carries information I_2; the semantic-space representations of x_i, x_j, x_k then take correspondingly sparse forms, with i ≠ j ≠ k. Clearly, when the decision vector group W is sparse, the classification ability of samples in the semantic space improves significantly.
2.2 The SONLDA model
The SONLDA model can be described as determining a sparse orthogonal vector group W in the null space N(A) of a given matrix A:

min_W ||W||_1  s.t. AW = 0, W^T W = I

By introducing any orthogonal basis Y of N(A), the above problem can be converted into the following equivalent problem:

min_Q ||YQ||_1  s.t. Q^T Q = I

Since the objective function ||YQ||_1 is not globally differentiable, a variable X ≈ YQ is introduced to convert the problem into the following form:

min_{X,Q} μ||X||_1 + (1/2)||X − YQ||_F²  s.t. Q^T Q = I   (2.6)

Using the alternating-direction algorithm, problem (2.6) can be converted into two subproblems:

X^(k+1) = argmin_X μ||X||_1 + (1/2)||X − YQ^(k)||_F²   (2.7)
Q^(k+1) = argmin_Q ||X^(k+1) − YQ||_F²  s.t. Q^T Q = I   (2.8)

Problem (2.7) ensures that the vector group X falls within the null space of the given matrix A and is sparse. Problem (2.8) imposes orthogonality on the coordinates Q of the vector group X under the basis Y, thereby ensuring the orthogonality of X.
We solve problem (2.7) with the soft-threshold algorithm; its solution can be expressed as

X^(k+1) = S_μ[YQ^(k)]   (2.9)

where S_μ is the soft-threshold operator, satisfying S_μ[x] = sign(x) · max{|x| − μ, 0}.
Since the objective function of (2.8) is differentiable, we solve it with the orthogonality-preserving iterative algorithm introduced in Section 2, specifically with the update

Q^(k+1) = (I + (τ/2)W)^(-1)(I − (τ/2)W)Q^(k)   (2.10)

where W = (U − U^T) and U = [Q^(k)(X^(k+1))^T − Y^T(X^(k+1))(X^(k+1))^T].
The basic steps of the SONLDA algorithm are as follows:
Step 1: On the original representation of the data Z, compute the within-class scatter matrix S_w, the between-class scatter matrix S_b, and the total scatter matrix S_t = S_w + S_b.
Step 2: Perform eigenvalue decomposition on S_t and project all samples onto the eigen-subspace U corresponding to the nonzero eigenvalues, obtaining the new scatter matrices S'_w = U^T S_w U and S'_b = U^T S_b U.
Step 3: Obtain any basis Y of the null space N(S'_w) of the new within-class scatter matrix S'_w; initialize k ← 0, the error ε ≥ 0, 0 < ρ1 < ρ2 < 1, and the initial orthogonal matrix Q^(0) as an orthonormal vector group.
Step 4: Solve subproblem (2.7) according to the iterative formula (2.9), i.e., X^(k+1) = S_μ[YQ^(k)].
Step 5: Solve subproblem (2.8) according to the iterative formula (2.10).
Step 6: Terminate the iteration when the stopping condition ||YQ^(k+1) − X^(k+1)||_F ≤ ε holds; otherwise set k ← k+1 and return to Step 4.
Step 7: The above steps yield the semantic-space construction vectors X of the samples. Project the samples Z into the semantic space to obtain their semantic representation, and apply a KNN classifier to it to obtain the labels L.
(3) Experimental design
3.1 Analysis of the comparison algorithms
Direct linear discriminant analysis (DLDA) classifies by removing the null space of the between-class scatter matrix; principal component analysis + linear discriminant analysis (PCA+LDA) first reduces the dimensionality of the original samples with PCA and then classifies with LDA; the classification algorithm based on sparse coding (SRC) can likewise be viewed as finding a semantic-space representation of the samples and then classifying in that semantic space. This experiment therefore chooses the above three algorithms as the comparison algorithms for SONLDA.
3.2 Experimental procedure
This experiment uses the Columbia object image library (COIL20, Fig. 3) and the Olivetti face database (ORL, Figs. 4 and 5) to test the classification accuracy of SONLDA and its comparison algorithms. In the SONLDA and comparison experiments, the image vectors of a database are stacked into a data-set matrix, giving a data matrix X of order 4096 × 1440. Clearly the sample dimension far exceeds the degrees of freedom of this group of pictures, so obtaining the decision vectors by inverting the within-class scatter matrix S_w is infeasible. We therefore classify X with the SONLDA algorithm. The ORL data set is divided into training and test sets at the ratios 2:8, 4:6, 6:4, 8:2, and 9:1; the COIL20 data set at the ratios 14:58, 29:43, 43:29, 58:14, and 65:7. On both data sets, 50 experiments are carried out for each division, and the mean of the results is recorded. Finally, classification is evaluated with the simplest matching rate, i.e., the proportion of test samples whose predicted label is correct, as sketched below.
3.3 Statistics of experimental results
Table 1: Classification results on COIL20
Table 2: Classification results on ORL
Tables 1 and 2 give the performance of SONLDA and the comparison algorithms on the data sets COIL20 and ORL respectively. The experimental results show that SONLDA holds an obvious advantage over the comparison algorithms under the different training/test division ratios. Compared with PCA+LDA and DLDA, the good performance of SONLDA comes from the orthogonal-sparse constraints imposed on the decision vectors in the semantic space, which enhance the discriminability of samples there. Although the semantic-space construction of the SRC classification algorithm likewise uses a sparsity constraint, the dimension of its semantic space equals the number of samples, namely 1400 and 400 on the two data sets respectively, whereas the dimension of the SONLDA semantic space equals the dimension of the null space of the within-class scatter matrix S_w. As J. Yang, Y. Yu, and W. Kunz point out, the dimension of N(S'_w) is in general K − 1, where K is the number of classes in the data set. Hence, on ORL and COIL20, the dimensions of the decision space are 39 and 19 respectively, far smaller than 1400 and 400. This shows that although the SRC semantic space is sparse, its excessive dimensionality makes the semantic representation of the samples carry redundant information, which impairs the classification ability.
Finally, it is noted that the above preferred embodiments merely illustrate the technical scheme of the present invention and do not restrict it. Although the present invention has been described in detail through the above preferred embodiments, those skilled in the art should understand that various changes may be made to it in form and in detail without departing from the scope defined by the claims of the present invention.

Claims (2)

  1. An image classification method for supervised semantic-space learning based on null-space LDA, characterized in that the method comprises the following steps:
    S1: Collect samples from the image database Z; on the original representation of the data Z, compute the within-class scatter matrix S_w, the between-class scatter matrix S_b, and the total scatter matrix S_t = S_w + S_b;
    S2: Perform eigenvalue decomposition on the total scatter matrix S_t and project all samples onto the eigen-subspace U corresponding to the nonzero eigenvalues, obtaining the new scatter matrices S'_w = U^T S_w U and S'_b = U^T S_b U;
    S3: Obtain any basis Y of the null space N(S'_w) of the new within-class scatter matrix S'_w; initialize the parameters k ← 0, error ε ≥ 0, 0 < ρ1 < ρ2 < 1, and the initial orthogonal matrix Q^(0) as an orthonormal vector group, where ρ1, ρ2 are the step-size correction parameters based on the Armijo-Wolfe conditions, k denotes the iteration count, and ε is the given termination error;
    S4: Update X according to the iterative formula X^(k+1) = S_μ[YQ^(k)], solving the subproblem min_X μ||X||_1 + (1/2)||X − YQ^(k)||_F², where S_μ is the soft-threshold operator and Q^(k) is the orthogonal matrix produced by the k-th iteration;
    S5: Update Q according to the iterative formula Q^(k+1) = (I + (τ/2)W)^(-1)(I − (τ/2)W)Q^(k), where W = (U − U^T) and U = [Q^(k)(X^(k+1))^T − Y^T(X^(k+1))(X^(k+1))^T], solving the subproblem min_Q ||X^(k+1) − YQ||_F² s.t. Q^T Q = I, where τ/2 is the Cayley transform parameter;
    S6: Terminate the iteration when the stopping condition ||YQ^(k+1) − X^(k+1)||_F ≤ ε holds; otherwise set k ← k+1 and return to S4, where ||YQ^(k+1) − X^(k+1)||_F is the Frobenius norm of the matrix, measuring the difference between the two matrices;
    S7: Obtain the semantic-space construction vectors X of the samples; project the samples Z into the semantic space to obtain their semantic representation, and apply a KNN classifier to the semantic representation to obtain the labels L.
  2. The image classification method for supervised semantic-space learning based on null-space LDA according to claim 1, characterized in that the iteration is an orthogonality-preserving iteration comprising the following steps:
    S401: Give an initial orthogonal matrix X_0 satisfying X_0^T X_0 = I, i.e., a point on the Stiefel manifold;
    S402: Initialize the iteration count k ← 0, the error ε ≥ 0, and 0 < ρ1 < ρ2 < 1;
    S403: Compute the gradient G_k of the objective function and the skew-symmetric matrix W_k = G_k X_k^T − X_k G_k^T; Λ = G^T X is the Lagrange multiplier;
    S404: Select a step size τ_k satisfying the Armijo-Wolfe conditions: F(Y_W(τ_k)) ≤ F(Y_W(0)) + ρ1 τ_k F'_τ(Y_W(0)) and F'_τ(Y_W(τ_k)) ≥ ρ2 F'_τ(Y_W(0));
    S405: Update X_{k+1} = Y_{W_k}(τ_k) according to the formula Y_W(τ) = (I + (τ/2)W)^(-1)(I − (τ/2)W)X_k;
    S406: Terminate the iteration according to the first-order optimality condition ||G_k − X_k Λ_k||_F ≤ ε.
CN201710586578.3A 2017-07-18 2017-07-18 Image classification method for supervised semantic-space learning based on null-space LDA Pending CN107392243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710586578.3A CN107392243A (en) 2017-07-18 2017-07-18 Image classification method for supervised semantic-space learning based on null-space LDA

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710586578.3A CN107392243A (en) 2017-07-18 2017-07-18 Image classification method for supervised semantic-space learning based on null-space LDA

Publications (1)

Publication Number Publication Date
CN107392243A true CN107392243A (en) 2017-11-24

Family

ID=60340931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710586578.3A Pending CN107392243A (en) Image classification method for supervised semantic-space learning based on null-space LDA

Country Status (1)

Country Link
CN (1) CN107392243A (en)


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Jinyu Tian et al., "Sparse Null Space LDA for Object Recognition", 2017 2nd IEEE International Conference on Cloud Computing and Big Data Analysis *
Qing Qu et al., "Finding a sparse vector in a subspace: Linear sparsity using alternating directions", IEEE Transactions on Information Theory *
Zaiwen Wen et al., "A Feasible Method for Optimization with Orthogonality Constraints", Mathematical Programming *
Miao Chunyu, "Analysis and Research on Improved Linear Discriminant Analysis Algorithms", China Master's Theses Full-text Database, Information Science and Technology *
Pu Lijuan, "Research on Subspace Analysis Methods for Pattern Discrimination", China Doctoral Dissertations Full-text Database, Basic Sciences *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171792A (en) * 2018-01-15 2018-06-15 深圳市云之梦科技有限公司 Method and system for 3D human model recovery based on semantic parameters
CN109919056A (en) * 2019-02-26 2019-06-21 桂林理工大学 Face recognition method based on discriminative principal component analysis
CN109919056B (en) * 2019-02-26 2022-05-31 桂林理工大学 Face recognition method based on discriminant principal component analysis
CN112364372A (en) * 2020-10-27 2021-02-12 重庆大学 Privacy protection method with supervised matrix completion

Similar Documents

Publication Publication Date Title
Wu et al. Unsupervised Deep Hashing via Binary Latent Factor Models for Large-scale Cross-modal Retrieval.
Ying et al. Graph convolutional neural networks for web-scale recommender systems
Shao et al. Online multi-view clustering with incomplete views
Wang et al. Online feature selection with group structure analysis
Li et al. Unsupervised streaming feature selection in social media
Liu et al. Extreme support vector machine classifier
CN110348579A Domain-adaptive transfer feature method and system
WO2018194812A1 (en) Hybrid approach to approximate string matching using machine learning
CN104616029B (en) Data classification method and device
Chadha et al. An improved K-means clustering algorithm: a step forward for removal of dependency on K
Zhu et al. 10,000+ times accelerated robust subset selection
CN107392243A (en) A kind of image classification method of the semantic space supervised learning based on kernel LDA
Gao et al. Multi-label active learning by model guided distribution matching
Cucuringu et al. An MBO scheme for clustering and semi-supervised clustering of signed networks
CN110188825A (en) Image clustering method, system, equipment and medium based on discrete multiple view cluster
Xia et al. Self-supervised contrastive attributed graph clustering
Chapel et al. Partial gromov-wasserstein with applications on positive-unlabeled learning
Zhang et al. A bayesian discrete optimization algorithm for permutation based combinatorial problems
Yang et al. A class of manifold regularized multiplicative update algorithms for image clustering
Bandyopadhyay et al. Integrating network embedding and community outlier detection via multiclass graph description
Zhang et al. EMD metric learning
Xu et al. Semi-supervised learning algorithm based on linear lie group for imbalanced multi-class classification
Zhu et al. FAST SPECTRAL CLUSTERING WITH SELF-WEIGHTED FEATURES.
CN114399653A (en) Fast multi-view discrete clustering method and system based on anchor point diagram
Nakis et al. A Hierarchical Block Distance Model for Ultra Low-Dimensional Graph Representations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171124