CN107886106A - An improved two-step linear discriminant analysis method - Google Patents

An improved two-step linear discriminant analysis method Download PDF

Info

Publication number
CN107886106A
CN107886106A (application CN201610878094.1A)
Authority
CN
China
Prior art keywords
space
matrix
projector
improved
tslda
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610878094.1A
Other languages
Chinese (zh)
Inventor
陈亚瑞
陶鑫
熊聪聪
杨巨成
赵希
张晓曼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Science and Technology
Original Assignee
Tianjin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University of Science and Technology
Priority to CN201610878094.1A priority Critical patent/CN107886106A/en
Publication of CN107886106A publication Critical patent/CN107886106A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an improved two-step linear discriminant analysis (TSLDA) method. The method proceeds as follows: the high computational complexity of the TSLDA algorithm is addressed by an approximate-matrix technique, in which the original eigenvalue matrix is replaced by an approximate eigenvalue matrix to eliminate the singularity of S_w and S_b. At the same time, a screening-and-compression method is proposed to extract the feature vectors in the TSLDA projection space that are beneficial to classification: a single-feature Fisher information measure is introduced to select the qualifying feature vectors, which form the optimal projection space. This compresses the projection space while guaranteeing the effectiveness of all retained feature information. The improved two-step linear discriminant analysis method has good application value in image recognition fields such as face recognition, fingerprint recognition, and handwritten-character recognition.

Description

An improved two-step linear discriminant analysis method
Technical field
The present invention relates to machine learning and the field of image recognition, and in particular proposes a fast, high-classification-performance solution to the small-sample-size problem in image recognition.
Background technology
Linear discriminant analysis (LDA) is a classical dimensionality-reduction and feature-extraction algorithm, widely used in face recognition, fingerprint recognition, gait recognition, and related fields. The core idea of LDA is, given a set of linearly separable data, to find an optimal projection space (transition matrix) W ∈ R^(d×h) (h < d) such that, after the samples are projected from d dimensions down to h dimensions, the within-class scatter of the samples is minimized and the between-class scatter is maximized; that is, the samples have maximum separability in the projection space. This can be expressed as the Fisher criterion:

W* = argmax_W |W^T S_b W| / |W^T S_w W|

where S_w ∈ R^(d×d) is the within-class scatter matrix before projection and S_b ∈ R^(d×d) is the between-class scatter matrix before projection. When S_w is nonsingular, traditional LDA obtains the optimal projection space W by computing the eigenvectors corresponding to the nonzero eigenvalues of S_w^(-1) S_b. However, in many practical applications such as face recognition and image retrieval, the sample dimension is often much larger than the number of samples, so the within-class scatter matrix S_w is singular and traditional LDA cannot compute the eigenvectors of S_w^(-1) S_b directly. This is known as the small sample size (SSS) problem.
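The rank argument behind the SSS problem can be checked numerically. The sketch below (toy data; all names are illustrative and not from the patent) builds S_w and S_b for d = 100 dimensions and only n = 20 samples, and confirms that S_w is rank-deficient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of the SSS problem: d = 100 feature dimensions but only
# n = 20 samples in c = 2 classes, so the within-class scatter matrix S_w
# (rank at most n - c) is necessarily singular.
d, n_per_class, c = 100, 10, 2
X = np.vstack([rng.normal(loc=k, size=(n_per_class, d)) for k in range(c)])
y = np.repeat(np.arange(c), n_per_class)

overall_mean = X.mean(axis=0)
S_w = np.zeros((d, d))
S_b = np.zeros((d, d))
for k in range(c):
    Xk = X[y == k]
    mk = Xk.mean(axis=0)
    S_w += (Xk - mk).T @ (Xk - mk)                       # within-class scatter
    S_b += len(Xk) * np.outer(mk - overall_mean, mk - overall_mean)

rank_Sw = np.linalg.matrix_rank(S_w)
print(rank_Sw)  # 18 = n - c, far below d = 100, so S_w^{-1} S_b cannot be formed
```

With rank(S_w) ≤ n − c ≪ d, inverting S_w fails, which is exactly the situation the extended algorithms below are designed to handle.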
The extended LDA algorithms that address the SSS problem fall roughly into two classes: methods that directly eliminate the singularity of S_w, and linear-subspace methods. Methods that directly eliminate the singularity of S_w include several typical algorithms: FisherFace, regularized LDA (RLDA), direct regularized LDA (DRLDA), and approximate LDA (ALDA). FisherFace first applies PCA to the samples so that the total scatter matrix S_t reaches full rank, then obtains the optimal projection space by LDA. However, this method may lose part of the feature information during dimensionality reduction, and it cannot fully guarantee that S_w becomes nonsingular. RLDA eliminates the singularity of S_w by adding a regularization term αI (α a coefficient, I the identity matrix), then applies the traditional LDA procedure to obtain the optimal projection space. This algorithm preserves the completeness of the sample features while resolving the matrix singularity, but its drawback is that obtaining the regularization coefficient α by cross-validation is computationally expensive and time-consuming in practical applications. DRLDA improves on this deficiency: it solves for the regularization coefficient directly by maximizing the Fisher information, avoiding the cross-validation process of RLDA and improving recognition efficiency. ALDA abandons heuristic searches for a regularization coefficient altogether: it eliminates the singular matrix by replacing the original matrix with an invertible approximation of S_w, and achieves high recognition performance. All of the above algorithms follow the idea of directly eliminating the singularity of S_w to solve the SSS problem, whereas the linear-subspace methods solve the SSS problem by separately analyzing the null spaces and range spaces of S_w and S_b and retaining the important feature subspaces as the optimal projection space. Such methods mainly include: null-space LDA (NLDA), direct LDA (DLDA), and two-step LDA (TSLDA). NLDA first projects the training samples onto the null space of S_w, where the between-class scatter matrix becomes S'_b, then finds the projection directions that maximize S'_b as the optimal projection space. DLDA first projects the training samples onto the range space of S_b, where the within-class scatter matrix S'_w is nonsingular, then finds the projection directions that minimize S'_w as the optimal projection space. The difference between the two algorithms is that NLDA retains the null space of S_w and the range space of S_b, while DLDA retains the range space of S_w and the range space of S_b; the deficiency is that both discard part of the subspaces, even though the range spaces and null spaces of both S_w and S_b may all contain feature information. TSLDA was proposed to solve this problem: it retains all four feature subspaces, namely the null space and range space of S_w and the null space and range space of S_b, and fuses the four subspaces into the optimal projection space, resolving the singularity of S_w and S_b and achieving good feature-extraction results on some datasets. However, this method determines the regularization coefficient by cross-validation when eliminating the singular matrices, which increases algorithmic complexity and training time. Moreover, the feature information contained in the four subspaces is not necessarily fully effective: there may be feature vectors (or noise information) that contribute nothing to classification performance, or even harm it. How to select the feature vectors that contribute most to classification performance is a problem that needs to be solved.
The present invention estimates the inverses of S_w and S_b by approximating the inverse of the original eigenvalue matrix with an approximate eigenvalue matrix, eliminating their singularity, effectively reducing algorithmic complexity, and shortening the training time. At the same time, a screening-and-compression method is proposed to extract the feature vectors in the TSLDA projection space that are beneficial to classification: a single-feature Fisher information measure is introduced, and the qualifying feature vectors are selected to form the optimal projection space, improving recognition performance.
Summary of the invention
The object of the present invention is to provide an improved two-step linear discriminant analysis method that shortens the training time on image samples and improves classification performance.
To achieve the above object, the technical scheme of the present invention is an improved two-step linear discriminant analysis method comprising the following steps:
Step A: Preprocessing: apply PCA dimensionality reduction to the original samples to simplify computation;
Step B: Eliminating the singular matrices by the approximate-matrix method: eliminate the singularity of the reduced within-class scatter matrix and between-class scatter matrix using the approximate-matrix method;
Step C: Projection-space analysis: solve for and merge the two projection spaces;
Step D: Screening and compressing the projection space: select from the projection space W the feature vectors whose single-feature Fisher information is largest to form the optimal projection space, rejecting noise feature information that may harm classification performance.
Further, in step A above, PCA dimensionality reduction is applied to the original samples, giving the reduced within-class scatter matrix, denoted S̃_w, and the reduced between-class scatter matrix, denoted S̃_b.
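Step A can be sketched as follows. This is a minimal illustration assuming the standard PCA-by-total-scatter construction; function and variable names are not from the patent:

```python
import numpy as np

def pca_preprocess(X):
    """Sketch of step A: eigen-decompose the total scatter matrix S_t and
    keep the eigenvectors of its nonzero eigenvalues (the range space U_TR)
    as the transition matrix, reducing d dimensions to r_t = rank(S_t)."""
    Xc = X - X.mean(axis=0)
    S_t = Xc.T @ Xc                          # total scatter matrix
    vals, vecs = np.linalg.eigh(S_t)         # eigenvalues in ascending order
    r_t = np.linalg.matrix_rank(S_t)         # typically n - 1
    U_tr = vecs[:, ::-1][:, :r_t]            # range space of S_t (d x r_t)
    return X @ U_tr, U_tr

# After this step the reduced scatter matrices are
# S_w_tilde = U_tr.T @ S_w @ U_tr and S_b_tilde = U_tr.T @ S_b @ U_tr.
```

For n samples in d dimensions with d > n, the returned samples live in r_t = n − 1 dimensions, matching the rank of the total scatter matrix.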
Further, in step B above, the singularity of S̃_w is eliminated by the approximate-matrix method: an SVD S̃_w = Ũ_w Λ̃_w Ũ_w^T is computed, where Ũ_w is the feature space and Λ̃_w is the eigenvalue matrix, so that the inverse can be expressed as S̃_w^(-1) = Ũ_w Λ̃_w^(-1) Ũ_w^T. Since Λ̃_w is singular, a nonsingular approximate eigenvalue matrix Λ̂_w is formed from Λ̃_w and the identity matrix I. Because Λ̂_w is nonsingular, Λ̂_w^(-1) is used to approximate Λ̃_w^(-1), giving the approximate inverse of S̃_w.
The same treatment is applied to S̃_b: Λ̂_b^(-1) is used to approximate Λ̃_b^(-1), giving the approximate inverse of S̃_b.
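Step B might be sketched as below. The patent's exact substitution for the singular eigenvalue matrix is given by formulas not reproduced in this text, so replacing (near-)zero eigenvalues with a small positive floor is an assumption standing in for it:

```python
import numpy as np

def approximate_inverse(S, floor=1e-6):
    """Sketch of step B: eliminate the singularity of a symmetric PSD scatter
    matrix by replacing its singular eigenvalue matrix with a nonsingular
    approximation. Flooring the eigenvalues is an assumption; the patent
    builds its approximate eigenvalue matrix from the identity matrix in a
    formula not reproduced here."""
    U, s, _ = np.linalg.svd(S)               # for symmetric PSD S: S = U diag(s) U^T
    s_hat = np.where(s > floor, s, floor)    # nonsingular eigenvalue matrix
    return U @ np.diag(1.0 / s_hat) @ U.T    # approximates S^{-1}
```

On a nonsingular input this reduces to the exact inverse; on a singular input it returns a finite approximation instead of failing.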
Further, in step C above, the two projection spaces are solved for and merged. First, an eigenvalue decomposition of the first approximated criterion matrix gives a feature space whose range-space part is chosen as projection space W1. An eigenvalue decomposition of the second criterion matrix is then performed, and the space formed by the first r_b column vectors of its feature space is chosen as projection space W2 = E_RL. Fusing the two subspaces gives the total projection space W:
W=[W1, W2]
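The fusion of the two projection spaces can be sketched as follows. Here M1 and M2 are placeholders for the two criterion matrices the patent eigen-decomposes; their exact construction is not reproduced in this text:

```python
import numpy as np

def merge_projection_spaces(M1, M2, r_b):
    """Sketch of step C: take the leading r_b eigenvectors of each of two
    (approximated) criterion matrices as W1 and W2, then fuse them into the
    total projection space W = [W1, W2]. M1 and M2 are illustrative
    placeholders for the patent's criterion matrices."""
    def leading_eigvecs(M, k):
        vals, vecs = np.linalg.eigh(M)       # ascending eigenvalue order
        return vecs[:, ::-1][:, :k]          # top-k eigenvectors
    W1 = leading_eigvecs(M1, r_b)
    W2 = leading_eigvecs(M2, r_b)
    return np.hstack([W1, W2])               # 2 * r_b columns in total
```

The fused W has 2·r_b columns, which is why the screening in step D indexes the Fisher scores up to 2r_b.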
Further, in step D above, screening and compressing the projection space means first defining the single-feature Fisher information d_i.
Substituting each feature column vector W_·i of W into this definition yields the set of single-feature Fisher information values D = {d_1, d_2, ..., d_(2r_b)}. The admissible value range of the elements of D is then restricted, and the qualifying elements form a new set D' = {d_1, d_2, ..., d_g}. According to the screened values {d_1, d_2, ..., d_g}, the corresponding feature column vectors are found in W to form the optimal projection space W_opt:
W_opt = [W_·1, W_·2, ..., W_·g], 0 < g ≤ 2r_b
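Step D might look like the following sketch. It takes the single-feature Fisher information of a column w_i to be (w_i^T S_b w_i) / (w_i^T S_w w_i), which is an assumption, since the defining formula is not reproduced in this text:

```python
import numpy as np

def screen_projection_space(W, S_b, S_w, d_min):
    """Sketch of step D: score each column of the projection space W by its
    single-feature Fisher information (between-class over within-class
    scatter along that direction; an assumed form) and keep only the columns
    whose score reaches d_min, forming the compressed space W_opt."""
    scores = np.array([(w @ S_b @ w) / (w @ S_w @ w) for w in W.T])
    keep = scores >= d_min
    return W[:, keep], scores
```

Columns whose score falls below the threshold are rejected as noise information; the survivors form the optimal projection space.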
Compared with the prior art, the beneficial effects of the invention are as follows: compared with existing algorithms for the small-sample-size problem, the present invention substantially improves both classification performance and training speed while retaining the advantages of traditional TSLDA. In summary, the algorithm of the present invention is an efficient improvement of the TSLDA method.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 shows example face images from the five experimental databases used in the experiments on the method of the present invention.
Fig. 3 shows the test-accuracy curves of the method of the present invention and the comparison algorithms.
Fig. 4 gives the information on the five face recognition databases used to test the method of the present invention and the comparison algorithms.
Fig. 5 gives the test accuracy of the method of the present invention and the comparison algorithms on all image databases.
Fig. 6 gives the number of feature vectors in the projection spaces of the method of the present invention and the comparison algorithms.
Fig. 7 gives the training time of the method of the present invention and the comparison algorithms on all image databases.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.
Fig. 1 is the flow chart of the improved two-step linear discriminant analysis method of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step A: Preprocessing: apply PCA dimensionality reduction to the original samples to simplify computation. Specifically, an SVD S_t = U_t Λ_t U_t^T is applied to the total scatter matrix S_t,
where Λ_t is its eigenvalue matrix, r_t = n - 1 is the rank of S_t, and U_t = [U_TR, U_TN] is its feature space, with U_TR being the range space of S_t and U_TN the null space of S_t. U_TR is chosen as the transition matrix to project all samples, reducing them from d dimensions to r_t (d > r_t); the within-class scatter matrix after projection is S̃_w = U_TR^T S_w U_TR and the between-class scatter matrix is S̃_b = U_TR^T S_b U_TR.
Step B: eliminating the singular matrices by the approximate-matrix method. Specifically, the singularity of the reduced within-class scatter matrix S̃_w and between-class scatter matrix S̃_b is eliminated by the approximate-matrix method. From the relation between the ranks of the matrices involved, it can be seen that after all samples are reduced to r_t dimensions, S̃_w and S̃_b remain singular. An SVD is therefore applied to each: S̃_w = Ũ_w Λ̃_w Ũ_w^T,
where Ũ_w is the feature space and Λ̃_w is the eigenvalue matrix, so that the inverse can be expressed as S̃_w^(-1) = Ũ_w Λ̃_w^(-1) Ũ_w^T.
Since Λ̃_w is singular, a nonsingular approximate eigenvalue matrix Λ̂_w is formed from Λ̃_w and the identity matrix I, and Λ̂_w^(-1) is used to approximate Λ̃_w^(-1), giving the approximate inverse of S̃_w.
The same treatment is applied to S̃_b: Λ̂_b^(-1) is used to approximate Λ̃_b^(-1), giving the approximate inverse of S̃_b.
Step C: projection-space analysis: solve for and merge the two projection spaces. Specifically, as in the analysis of the traditional TSLDA algorithm, eigenvalue decompositions are performed on the approximated criterion matrices to obtain their feature spaces. The range-space part of the first feature space is chosen as projection space W1; an eigenvalue decomposition of the second criterion matrix is then performed, and the space formed by the first r_b column vectors of its feature space is chosen as projection space W2 = E_RL. Fusing the two subspaces gives the total projection space W:
W = [W1, W2]
Step D: screening and compressing the projection space: select from the projection space W the feature vectors whose single-feature Fisher information is largest to form the optimal projection space, rejecting noise feature information that may harm classification performance. Specifically, define the single-feature Fisher information d_i.
Substituting each feature column vector W_·i of W into this definition yields the set of single-feature Fisher information values D. To select the feature vectors of W that contribute most to classification performance, the admissible value range of the elements of D is restricted; the qualifying elements form a new set D' = {d_1, d_2, ..., d_g}. According to the screened values {d_1, d_2, ..., d_g}, the corresponding feature column vectors W_·i are found in W, forming the feature space with the greatest contribution to classification performance.
W_opt is the required optimal projection space; the samples in this space satisfy the principle of maximizing the ratio of between-class scatter to within-class scatter and thus have maximum separability.
However, the optimal value of the projection coefficient g differs across databases, and traversing all of its possible values would be computationally too expensive for practical use. Improved TSLDA therefore determines g by combining random sampling with key-point selection. Random sampling draws candidate values from the admissible range of d_i subject to the defined conditions, constructing three projection spaces in total. The purpose of key-point selection is to choose two key values, one corresponding to the projection space W1 and the other corresponding to the full projection space W, constructing two further projection spaces. The projection space with the best training accuracy is retained as the optimal projection space.
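The random-sampling and key-point strategy could be sketched as candidate-threshold generation. The patent's bounds and key values come from formulas not reproduced in this text, so placing the key values at the ends of the score range is an assumption:

```python
import numpy as np

def candidate_thresholds(scores, n_random=3, seed=0):
    """Sketch of the g-selection strategy: draw a few random thresholds from
    the range of the single-feature Fisher scores (three random candidate
    projection spaces) and add two 'key' thresholds at the range ends (an
    assumption standing in for the patent's key values, whose projection
    spaces are W1 and the full W). A caller would then keep whichever
    threshold yields the best training accuracy; that classifier-dependent
    loop is omitted here."""
    rng = np.random.default_rng(seed)
    lo, hi = float(scores.min()), float(scores.max())
    random_cands = rng.uniform(lo, hi, size=n_random)   # three random spaces
    key_cands = np.array([lo, hi])                      # two key-point spaces
    return np.concatenate([random_cands, key_cands])
```

Each candidate threshold induces one projection space via the screening function of step D, so five candidate spaces are compared in total, matching the three-plus-two construction described above.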
For Improved TSLDA, the time complexities of the individual steps are O(dc), O(dn^2), O(n^3), O(n^3), O(c), and O(dn^2), so the overall time complexity of Improved TSLDA can be estimated as O(dn^2). The time complexity of TSLDA can be estimated as O(d^2 n); ALDA, NLDA, and FisherFace have the same asymptotic complexity, which can also be estimated as O(dn^2). Since d >> n and n > c, comparison shows O(dn^2) << O(d^2 n), so the time complexity of Improved TSLDA is significantly lower than that of TSLDA. Theoretical derivation therefore indicates that the proposed algorithm improves the training speed of the original algorithm.
In this embodiment, Improved TSLDA, TSLDA, ALDA, NLDA, and FisherFace are compared on five classical face recognition databases: ORL, YALE, AR, FERET, and CMU-PIE. Fig. 2 gives example face images from the five experimental databases. Details of the databases are shown in Fig. 4, including the database name, sample dimension, number of sample classes, and the corresponding numbers of training and test samples. The experimental platform was as follows. CPU: Intel(R) Core(TM) i7-3520M @ 2.90 GHz; memory: 8 GB; operating system: Mac OS X 10.11; tool: MATLAB 2014a.
In the experiments, all image samples are first preprocessed, including luminance normalization and rescaling of the images to 65×51. Improved TSLDA, TSLDA, ALDA, NLDA, and FisherFace are then each used for feature extraction, and finally a nearest-neighbor classifier is used for classification. For the Improved TSLDA algorithm, obtaining the optimal projection coefficient g requires restricting the value range of the single-feature Fisher information, i.e. retaining only the qualifying feature column vectors to form the optimal projection space W_opt. Using the random-sampling and key-point-selection methods of the previous section, the optimal values corresponding to ORL, YALE, AR, FERET, and CMU-PIE are obtained. The test-accuracy curves of each algorithm are shown in Fig. 3, the test-accuracy results in Fig. 5, the number of feature vectors in each algorithm's projection space in Fig. 6, and the training time of each algorithm in Fig. 7.
The following can be seen from Fig. 5. (1) Overall, the Improved TSLDA algorithm has the highest test accuracy of all algorithms on the five databases. Compared with TSLDA, Improved TSLDA improves recognition accuracy by 7% on the YALE and FERET databases and by about 3% and 1% on ORL and CMU-PIE respectively, and remains unchanged on the AR database, showing that the proposed algorithm improves face recognition accuracy. (2) According to Fig. 5, NLDA and TSLDA are linear-subspace methods; on the AR and FERET databases the test accuracy of TSLDA is higher than that of NLDA, while on the ORL and CMU-PIE databases it is slightly below NLDA. ALDA and FisherFace are methods that directly eliminate the singularity of S_w; the test accuracy of ALDA is above that of FisherFace on every database. From these results, among the linear-subspace methods, although TSLDA retains more feature information than NLDA, it achieves better feature extraction only on some datasets, indicating that the TSLDA projection space may contain ineffective feature vectors (noise information). Among the methods that directly eliminate the singularity of S_w, ALDA eliminates the singular matrix by directly replacing the original eigenvalue matrix with an approximate eigenvalue matrix; its algorithmic complexity is low, it avoids loss of feature information, and its classification performance is close to that of TSLDA and NLDA. FisherFace eliminates the singular matrix by PCA dimensionality reduction, discarding important feature information, so its test accuracy is below that of the other three algorithms. (3) Improved TSLDA combines the advantages of the TSLDA and ALDA algorithms: it eliminates the singular matrices using ALDA's approximate-matrix idea, reducing algorithmic complexity and improving training speed; it then extracts, by the screening-and-compression method, the feature vectors of the TSLDA projection space that contribute most to classification performance, forming the optimal projection space W_opt, so that its test accuracy is optimal on every database.
To analyze the advantage of the Improved TSLDA algorithm in depth, Fig. 6 lists the number of feature vectors contained in each algorithm's projection space. For the face databases ORL, YALE, AR, FERET, and CMU-PIE, the numbers of feature vectors in the projection spaces of ALDA, NLDA, and FisherFace are 39, 14, 14, 193, and 67 respectively; those of the TSLDA projection spaces are 78, 28, 28, 386, and 134; and those of the Improved TSLDA projection spaces W_opt are 28, 14, 28, 63, and 108. The following can be seen: (1) For the ORL and FERET databases, the number of feature vectors of W_opt is below the corresponding numbers of the other algorithms (28 < 39 < 78 and 63 < 193 < 386), showing that the W_opt obtained by Improved TSLDA selects only part of the feature space of W1 and discards the remaining feature space as noise information; the effective feature information lies in part of W1. (2) For the YALE database, the number of feature vectors of W_opt equals c - 1 (14 = 14 < 28), showing that the W_opt obtained by Improved TSLDA selects all of the feature information of W1 and discards W2 as noise feature information; the effective feature information lies in the W1 space. At the same time, the test accuracy of Improved TSLDA equals that of ALDA, and the projection spaces of Improved TSLDA and ALDA are identical. (3) For the CMU-PIE database, the number of feature vectors of W_opt (108) is greater than c - 1 and less than 2(c - 1) (67 < 108 < 134), showing that the W_opt obtained by Improved TSLDA contains all of the feature information of W1 and part of that of W2; the effective feature information lies in all of W1 and part of W2. (4) For the AR database, the test accuracies of Improved TSLDA and TSLDA are identical, and the number of feature vectors of W_opt (28) equals 2(c - 1) (28 = 28 > 14), showing that the W_opt obtained by Improved TSLDA retains the feature information of the range spaces and null spaces of S_w and S_b; the feature information in the four subspaces is entirely effective, and the projection space is identical to that of TSLDA. Combined with Fig. 5, this shows that, for different databases, the feature information in the TSLDA projection space is not necessarily fully effective and may contain noise information: features that contribute nothing to classification performance and may even lower the test accuracy. Improved TSLDA extracts the feature vectors of the four subspaces that contribute most to classification performance while removing noise information, forming the optimal projection space W_opt. On the ORL, FERET, and CMU-PIE databases, the test accuracy of Improved TSLDA is above that of the other comparison algorithms. On the YALE database, the test accuracies of Improved TSLDA and ALDA are both optimal and their projection spaces are identical; on the AR database, the test accuracies of Improved TSLDA and TSLDA are optimal and their projection spaces are identical. This shows that the proposed algorithm automatically selects the optimal feature-extraction behavior and retains the effective feature information of the four subspaces, so that the classification performance of Improved TSLDA is optimal on all databases.
Fig. 7 gives the training time of each comparison algorithm, from which the following conclusion can be drawn: in training time, Improved TSLDA, ALDA, NLDA, and FisherFace are close, while TSLDA takes about 12 times as long as these algorithms on ORL, YALE, and AR, and about 3 times as long on FERET and CMU-PIE. This shows that TSLDA's cross-validation search for the optimal regularization coefficient greatly increases computational complexity, while the Improved TSLDA algorithm eliminates the singular matrices by the approximate-matrix method, significantly reducing algorithmic complexity and training time compared with TSLDA. This confirms the earlier mathematical derivation that the time complexity of Improved TSLDA is significantly lower than that of TSLDA, further verifying the effectiveness of the Improved TSLDA algorithm.
In summary, Improved TSLDA automatically retains, by the screening-and-compression method, the feature vectors of the four subspaces with the best contribution to classification performance, forming the optimal projection space W_opt, and eliminates the singular matrices by the approximate-matrix method, solving the SSS problem in image processing. The experiments show that on the ORL, FERET, and CMU-PIE databases the classification performance of Improved TSLDA is above that of the other comparison algorithms; on the YALE database the optimal projection space of Improved TSLDA coincides with that of ALDA and achieves the optimal test accuracy; and on the AR database the optimal projection space of Improved TSLDA coincides with that of TSLDA and achieves the optimal test accuracy. This shows that the proposed algorithm automatically selects the optimal feature-extraction behavior and retains the effective feature information of the four subspaces. The experiments prove that Improved TSLDA not only achieves optimal classification performance but also effectively reduces the algorithmic complexity of TSLDA, and is an efficient improvement of the TSLDA method.
The foregoing is only a preferred embodiment of the present invention; all changes and modifications made within the scope defined by the claims of the invention belong to the protection scope of the present invention.

Claims (5)

1. An improved two-step linear discriminant analysis method, the method comprising the following steps:
Step A: Preprocessing: apply PCA dimensionality reduction to the original samples to simplify computation.
Step B: Eliminating the singular matrices by the approximate-matrix method: eliminate the singularity of the reduced within-class scatter matrix and between-class scatter matrix using the approximate-matrix method.
Step C: Projection-space analysis: solve for and merge the two projection spaces.
Step D: Screening and compressing the projection space: select from the projection space W the feature vectors whose single-feature Fisher information is largest to form the optimal projection space, rejecting noise feature information that may harm classification performance.
2. The improved two-step linear discriminant analysis method according to claim 1, characterized in that:
in step A above, PCA dimensionality reduction is applied to the original samples, giving the reduced within-class scatter matrix S̃_w and the reduced between-class scatter matrix S̃_b.
3. The improved two-step linear discriminant analysis method according to claim 1, characterized in that:
in step B above, the singularity of the reduced scatter matrices is eliminated by the approximate-matrix method: an SVD is applied to each reduced scatter matrix, yielding its feature space and its eigenvalue matrix, and the inverse is expressed through the eigenvalue matrix. Since the eigenvalue matrix is singular, a nonsingular approximate eigenvalue matrix is formed from it and the identity matrix I, and the inverse of the approximate eigenvalue matrix is used to approximate the inverse of the original one; the same treatment is applied to the between-class scatter matrix.
4. The improved two-step linear discriminant analysis method according to claim 1, characterized in that:
in step C above, the two projection spaces are solved for and merged: an eigenvalue decomposition of the first approximated criterion matrix gives a feature space whose range-space part is chosen as projection space W1; an eigenvalue decomposition of the second criterion matrix is then performed, and the space formed by the first r_b column vectors of its feature space is chosen as projection space W2 = E_RL. Fusing the two subspaces gives the total projection space W:
W = [W1, W2].
5. The improved two-step linear discriminant analysis method according to claim 1, characterized in that:
in step D above, screening and compressing the projection space means first defining the single-feature Fisher information d_i.
Substituting each feature column vector W_·i of W into this definition yields the set of single-feature Fisher information values D = {d_1, d_2, ..., d_(2r_b)}. The admissible value range of the elements d_i of D is restricted, the qualifying elements are selected to form a new set D' = {d_1, d_2, ..., d_g}, 1 < g ≤ 2r_b, and, according to the screened values {d_1, d_2, ..., d_g}, the corresponding feature column vectors W_·i are found in W to form the optimal projection space W_opt:
W_opt = [W_·1, W_·2, ..., W_·g], 0 < g ≤ 2r_b
However, the optimal value of the projection coefficient g differs across databases, and traversing all of its possible values is computationally too expensive for practical use. Improved TSLDA therefore determines g by combining random sampling with key-point selection. Random sampling draws values of d_i from its admissible range subject to the defined conditions, with a = max(d_i), constructing three projection spaces in total. The purpose of key-point selection is to choose two key values, one corresponding to the projection space W1 and the other corresponding to the full projection space W, constructing two further projection spaces. The projection space with the best training accuracy is retained as the optimal projection space W_opt.
CN201610878094.1A 2016-09-29 2016-09-29 A kind of improved two steps linear discriminant analysis method Pending CN107886106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610878094.1A CN107886106A (en) 2016-09-29 2016-09-29 A kind of improved two steps linear discriminant analysis method

Publications (1)

Publication Number Publication Date
CN107886106A true CN107886106A (en) 2018-04-06

Family

ID=61769724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610878094.1A Pending CN107886106A (en) 2016-09-29 2016-09-29 A kind of improved two steps linear discriminant analysis method

Country Status (1)

Country Link
CN (1) CN107886106A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919056A (en) * 2019-02-26 2019-06-21 桂林理工大学 A kind of face identification method based on discriminate principal component analysis
CN109919056B (en) * 2019-02-26 2022-05-31 桂林理工大学 Face recognition method based on discriminant principal component analysis


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180406
