CN106250929A - Design method of an elastic-net-constrained self-explanatory sparse representation classifier - Google Patents

Design method of an elastic-net-constrained self-explanatory sparse representation classifier

Info

Publication number
CN106250929A
CN106250929A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610620582.2A
Other languages
Chinese (zh)
Inventor
王立
刘宝弟
韩丽莎
王延江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN201610620582.2A
Publication of CN106250929A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The present invention relates to a design method for an elastic-net-constrained self-explanatory sparse representation classifier, comprising the following steps: read the training samples and apply a nonlinear transformation that maps them into a high-dimensional kernel space; in that kernel space, learn from each class of training samples the contribution (i.e., the weight) that each individual sample makes to constructing the subspace of its class, the product of the class training samples and the weight matrix constituting the dictionary; use the trained sparse representation dictionary to obtain the elastic-net coding coefficients of a test sample in kernel space; finally, fit the test sample with each class dictionary and its corresponding elastic-net sparse code, compute the fitting error, and assign the test sample to the class with the smallest fitting error. The invention combines the advantages of ridge regression and lasso regression, so that the sparse coding features of a sample are both sparse and have a small fitting error, which effectively reduces classification error and improves the recognition performance of the classifier.

Description

Design method of an elastic-net-constrained self-explanatory sparse representation classifier
Technical field
The present invention belongs to the technical field of pattern recognition and specifically relates to a design method for an elastic-net-constrained self-explanatory sparse representation classifier.
Background art
Classifier design (Classifier Design) is an important research branch of pattern recognition. Feature extraction is an essential component of a pattern recognition system and a precondition for pattern classification, but classifying the extracted features to the greatest possible extent is the ultimate purpose of pattern recognition and the core unit of a pattern recognition system. From the perspective of classification decisions, the discriminative classification rule is the principal factor in reducing the misrecognition rate and improving recognition accuracy.
At present, the main classifier design methods are the following.
1. Support vector machine methods (English: Support Vector Machine)
The support vector machine was first proposed by Corinna Cortes and Vapnik in 1995 and aims to build an optimal separating surface by maximizing the classification margin. Such methods show many distinctive advantages in small-sample, nonlinear, and high-dimensional pattern recognition. However, only a small number of boundary points (the support vectors) participate in constructing the separating surface, so if the boundary points are poorly distributed, classification suffers.
2. Classifier design methods based on sparse representation (English: Sparse Representation based Classifier)
The sparse-representation-based classifier was proposed by J. Wright et al. in 2009. This design method first codes a test sample sparsely over all training samples and then determines the classification result according to the class that produces the smallest coding error. It has achieved great success in multi-class classification; however, it has no learning (training) stage: each class of training samples directly constructs its own subspace, without considering the contribution of each individual sample to that subspace, which easily produces a large fitting error.
3. Classifier design methods based on collaborative representation (English: Collaborative Representation based Classifier)
The collaborative-representation-based classifier was proposed by Zhang et al. in 2011. It first codes a test sample collaboratively over all training samples and then determines the classification result according to the class producing the smallest coding error. On some datasets it outperforms the sparse-representation-based classifier. Like that method, however, it has no learning stage: each class of training samples directly constructs its own subspace, which easily produces a large fitting error and limits classification performance.
4. Classifier design methods based on dictionary learning
The dictionary-learning-based classifier was proposed by Yang et al. in 2010. It compensates for the large fitting error, and hence low classification accuracy, of the traditional sparse-representation-based classifier; however, it can only operate in Euclidean space and can hardly handle data with a nonlinear structure, so its range of application is limited.
In summary, the existing classifier design methods all suffer from a large fitting error that leads to low classification accuracy.
Summary of the invention
Aiming at the above deficiencies of classifiers designed by existing methods, namely a large fitting error and low accuracy, the present invention provides a design method for an elastic-net-constrained self-explanatory sparse representation classifier. A classifier designed by the method of the present invention has a small fitting error and high recognition accuracy.
The technical scheme of the present invention is a design method for an elastic-net-constrained self-explanatory sparse representation classifier, comprising the following steps:
Step 1: design the classifier, as follows:
(1) Read the training samples. The training samples comprise $C$ classes in total. Define $X=[X_1,X_2,\dots,X_c,\dots,X_C]\in R^{D\times N}$ to represent the training samples, where $D$ is the face feature dimension and $N$ is the total number of training samples; $X_1,X_2,\dots,X_c,\dots,X_C$ denote the samples of classes $1,2,\dots,c,\dots,C$ respectively. Define $N_1,N_2,\dots,N_c,\dots,N_C$ as the number of training samples of each class, so that $N=N_1+N_2+\dots+N_c+\dots+N_C$.
(2) Apply two-norm normalization to the training samples to obtain normalized training samples;
(3) Take out each class of training samples in turn and train a dictionary for that class. The dictionary training process is:
(1) Take out the samples $X_c$ of class $c$ and map $X_c$ into the kernel space, $\phi(X_c)$;
(2) Train the dictionary $B^c$ from $\phi(X_c)$ with a sparse coding algorithm, where $B^c$ denotes the dictionary learned from the class-$c$ samples. The training must satisfy a constraint whose objective function is:
$$f(S^c)=\|\phi(X_c)-\phi(X_c)W^cS^c\|_F^2+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2,\quad\text{s.t. }\|\phi(X_c)W^c_{\cdot k}\|_2^2\le 1,\ \forall k=1,2,\dots,K\qquad(1)$$
where $\alpha$ is the penalty coefficient of the sparsity term in the sparse coding algorithm, $\beta$ is the penalty coefficient of the collaborative (ridge) term, $S^c$ is the sparse representation matrix of the class-$c$ kernel-space training samples, $K$ is the size of the learned dictionary, and $W^c\in R^{N_c\times K}$ is a weight matrix whose columns express the contribution of each kernel-space sample to the corresponding dictionary atom; the dictionary is $B^c=\phi(X_c)W^c$.
(3) Solve the objective function of the constraint in step (2), i.e., solve formula (1). The procedure is: fix $W^c$ and update $S^c$. Randomly generate the matrix $W^c$ and substitute it into the objective function; the objective then becomes an elastic-net regularized least squares problem, i.e., it is converted into:
$$f(S^c)=\|\phi(X_c)-\phi(X_c)W^cS^c\|_F^2+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2\qquad(2)$$
Formula (2) can be expanded as:
$$\begin{aligned}
f(S^c)&=\operatorname{trace}\{\phi(X_c)^T\phi(X_c)-2\,\phi(X_c)^T\phi(X_c)W^cS^c\}+\operatorname{trace}\{S^{cT}(W^{cT}\phi(X_c)^T\phi(X_c)W^c)S^c\}+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2\\
&=\operatorname{trace}\{\kappa(X_c,X_c)\}-2\operatorname{trace}\{\kappa(X_c,X_c)W^cS^c\}+\operatorname{trace}\{S^{cT}(W^{cT}\kappa(X_c,X_c)W^c)S^c\}+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2\\
&=\operatorname{trace}\{\kappa(X_c,X_c)\}-2\sum_{n=1}^{N_c}[\kappa(X_c,X_c)W^c]_{n\cdot}S^c_{\cdot n}+\sum_{n=1}^{N_c}S^{cT}_{\cdot n}[W^{cT}\kappa(X_c,X_c)W^c]S^c_{\cdot n}+2\alpha\sum_{k=1}^{K}\sum_{n=1}^{N_c}|S^c_{kn}|+\beta\sum_{k=1}^{K}\sum_{n=1}^{N_c}(S^c_{kn})^2
\end{aligned}\qquad(3)$$
Formula (3) is then decomposed into a series of subproblems, solving for each element of $S^c$ in turn. Discarding the terms irrelevant to the element being solved, formula (3) reduces to:
$$f(S^c_{kn})=-2[\kappa(X_c,X_c)W^c]_{nk}S^c_{kn}+(S^c_{kn})^2[W^{cT}\kappa(X_c,X_c)W^c]_{kk}+2\sum_{l=1,\,l\ne k}^{K}[W^{cT}\kappa(X_c,X_c)W^c]_{lk}S^c_{ln}S^c_{kn}+2\alpha|S^c_{kn}|+\beta(S^c_{kn})^2\qquad(4)$$
According to the theory of the parabola (quadratic function), the solution of formula (4) is easily obtained; and since the sample points are independent, $S^c$ can be solved one row at a time, with the following update formula:
$$S^c_{k\cdot}=\frac{1}{1+\beta}\min\{[W^{cT}\kappa(X_c,X_c)]_{k\cdot}-[E\overline{S^c}^{\,k}]_{k\cdot},\,-\alpha\}+\frac{1}{1+\beta}\max\{[W^{cT}\kappa(X_c,X_c)]_{k\cdot}-[E\overline{S^c}^{\,k}]_{k\cdot},\,\alpha\}\qquad(5)$$
where $\overline{S^c}^{\,k}$ denotes $S^c$ with its $k$-th row set to zero, $E=W^{cT}\kappa(X_c,X_c)W^c+\beta I$, and $I$ is the identity matrix;
Traversing every row of $S^c$ in this way completes one update of $S^c$;
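A one-line identity makes explicit why update (5) is an elastic-net step: for any scalar $x$ and threshold $\alpha>0$,
$$\min\{x,-\alpha\}+\max\{x,\alpha\}=\operatorname{sign}(x)\max\{|x|-\alpha,\,0\}=\begin{cases}x-\alpha,&x>\alpha\\0,&|x|\le\alpha\\x+\alpha,&x<-\alpha\end{cases}$$
so each row of $S^c$ in (5) is the soft-thresholded coordinate-wise least squares solution, additionally shrunk by the ridge factor $1/(1+\beta)$.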
(4) Fix the $S^c$ updated in step (3) and update $W^c$; the objective function of the constraint now becomes an $\ell_2$-norm-constrained least squares problem, i.e., it is converted into:
$$f(W^c)=\|\phi(X_c)-\phi(X_c)W^cS^c\|_F^2,\quad\text{s.t. }\|\phi(X_c)W^c_{\cdot k}\|_2^2\le 1,\ \forall k=1,2,\dots,K\qquad(6)$$
Formula (6) is solved by the method of Lagrange multipliers, finally yielding the solution for $W^c_{\cdot k}$:
$$W^c_{\cdot k}=\frac{(S^c_{k\cdot})^T-[\overline{W^c}^{\,k}F]_{\cdot k}}{\sqrt{\big((S^c_{k\cdot})^T-[\overline{W^c}^{\,k}F]_{\cdot k}\big)^T\kappa(X_c,X_c)\big((S^c_{k\cdot})^T-[\overline{W^c}^{\,k}F]_{\cdot k}\big)}}\qquad(7)$$
where $F=S^cS^{cT}$ and $\overline{W^c}^{\,k}$ denotes $W^c$ with its $k$-th column set to zero;
(5) Alternately iterate step (3) and step (4), finally obtaining the optimal sparse coding dictionary $B^c=\phi(X_c)W^c$;
(6) Obtain the optimal sparse coding dictionary of every class according to steps (1) to (5), and put the per-class optimal sparse coding dictionaries together to obtain the dictionary $B=[B_1,\dots,B_c,\dots,B_C]$;
Step 2: classify a sample, as follows:
(1) Read the image feature of the test sample to be recognized and apply two-norm normalization to it; define $y\in R^{D\times 1}$ to represent the image feature of a test sample to be recognized;
(2) Map the test image feature $y$ into the kernel space, $\phi(y)$;
(3) Using the dictionary $B$ obtained in Step 1, fit $\phi(y)$ with the fitting function:
$$f(s)=\|\phi(y)-Bs\|_2^2+2\alpha\|s\|_1+\beta\|s\|_2^2\qquad(8)$$
where $s$ denotes the sparse code of the test image feature $y$ in kernel space;
(4) Solve the fitting function in step (3); the solution is:
$$s_k=\frac{1}{1+\beta}\min\{[W^{cT}\kappa(X_c,y)]_k-[W^{cT}\kappa(X_c,X_c)W^c\bar{s}^{\,k}]_k,\,-\alpha\}+\frac{1}{1+\beta}\max\{[W^{cT}\kappa(X_c,y)]_k-[W^{cT}\kappa(X_c,X_c)W^c\bar{s}^{\,k}]_k,\,\alpha\}\qquad(9)$$
where $\bar{s}^{\,k}$ denotes $s$ with its $k$-th entry set to zero, and $s=[s^1,\dots,s^c,\dots,s^C]$;
(5) Compute the fitting error of $\phi(y)$ against the subspace constructed by each class of samples, denoted $r(c)$, whose expression is:
$$r(c)=\|\phi(y)-B_cs^c\|_2^2=\|\phi(y)-\phi(X_c)W^cs^c\|_2^2\qquad(10)$$
(6) comparing the error of fitting of nuclear space φ (y) and every class sample, image to be identified then belongs to error of fitting minimum That classification.
The beneficial effects of the invention are as follows. The present invention combines the kernel trick with dictionary learning and introduces the elastic-net constraint into the design of a self-explanatory sparse representation classifier. The samples in the original feature space are first mapped by a kernel function into a high-dimensional kernel space, where the kernel trick is applied to nonlinear analysis and the optimal elastic-net-constrained sparse representation dictionary is solved. The invention thus effectively extracts the nonlinear structure hidden in the sample features while using the elastic-net constraint to learn the sparse representation dictionary; the elastic net absorbs the advantages of both the ridge constraint and the lasso constraint, so that the trained dictionary has a small fitting error while remaining sparse, thereby improving the classification accuracy of the classifier. The classifier of the present invention first reads the training samples, applies a nonlinear transformation into a high-dimensional kernel space, and learns from each class of training samples the contribution (i.e., the weight) that each individual makes to constructing the class subspace, the product of the class training samples and the weight matrix constituting the dictionary; the trained sparse representation dictionary is then used to obtain the elastic-net coding coefficients of a test sample in kernel space; finally, the test sample is fitted with each class dictionary and its corresponding elastic-net sparse code, the fitting error is computed, and the class with the smallest fitting error is the class of the test sample. Compared with the prior art, the present invention adopts the elastic-net constraint and combines the advantages of ridge regression and lasso regression, so that the sparse coding features of a sample are both sparse and have a small fitting error, which effectively reduces classification error and improves the recognition performance of the classifier.
Brief description of the drawings
Fig. 1 is a flowchart of designing the classifier according to an embodiment of the present invention.
Fig. 2 is a flowchart of classifying a sample according to an embodiment of the present invention.
Detailed description of the embodiments
The present invention is further illustrated below with reference to a simulation example and the accompanying drawings.
A design method for an elastic-net-constrained self-explanatory sparse representation classifier comprises the following steps:
Step 1: see Fig. 1; design the classifier as follows:
(1) Read the training samples. The training samples comprise $C$ classes in total. Define $X=[X_1,X_2,\dots,X_c,\dots,X_C]\in R^{D\times N}$ to represent the training samples, where $D$ is the face feature dimension and $N$ is the total number of training samples; $X_1,X_2,\dots,X_c,\dots,X_C$ denote the samples of classes $1,2,\dots,c,\dots,C$ respectively. Define $N_1,N_2,\dots,N_c,\dots,N_C$ as the number of training samples of each class, so that $N=N_1+N_2+\dots+N_c+\dots+N_C$.
(2) Apply two-norm normalization to the training samples to obtain normalized training samples;
(3) Take out each class of training samples in turn and train a dictionary for that class. The dictionary training process is:
(1) Take out the samples $X_c$ of class $c$ and map $X_c$ into the kernel space, $\phi(X_c)$; a minimal kernel-computation sketch is given below.
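Since the mapping $\phi$ is never formed explicitly, an implementation only ever needs Gram matrices $\kappa(A,B)=\phi(A)^T\phi(B)$. The following Python sketch covers the two-norm normalization of step (2) and the three kernels used in the embodiments below (linear, Hellinger, polynomial); the function names, the polynomial degree, and the non-negativity handling in the Hellinger kernel are illustrative assumptions, not prescribed by the patent.

```python
import numpy as np

def l2_normalize_columns(X):
    """Two-norm normalization of each column (each sample), as in step (2)."""
    return X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)

def kernel(A, B, kind="linear", degree=2):
    """Gram matrix kappa(A, B) = phi(A)^T phi(B) for column-sample matrices
    A (D x Na) and B (D x Nb)."""
    if kind == "linear":
        return A.T @ B
    if kind == "hellinger":
        # Hellinger kernel: inner product of element-wise square roots
        # (features are assumed non-negative; abs() guards small negatives)
        return np.sqrt(np.abs(A)).T @ np.sqrt(np.abs(B))
    if kind == "poly":
        return (A.T @ B + 1.0) ** degree  # degree is an illustrative choice
    raise ValueError(f"unknown kernel: {kind}")
```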
(2) Train the dictionary $B^c$ from $\phi(X_c)$ with a sparse coding algorithm, where $B^c$ denotes the dictionary learned from the class-$c$ samples. The training must satisfy a constraint whose objective function is:
$$f(S^c)=\|\phi(X_c)-\phi(X_c)W^cS^c\|_F^2+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2,\quad\text{s.t. }\|\phi(X_c)W^c_{\cdot k}\|_2^2\le 1,\ \forall k=1,2,\dots,K\qquad(1)$$
where $\alpha$ is the penalty coefficient of the sparsity term in the sparse coding algorithm, $\beta$ is the penalty coefficient of the collaborative (ridge) term, $S^c$ is the sparse representation matrix of the class-$c$ kernel-space training samples, $K$ is the size of the learned dictionary, and $W^c\in R^{N_c\times K}$ is a weight matrix whose columns express the contribution of each kernel-space sample to the corresponding dictionary atom; the dictionary is $B^c=\phi(X_c)W^c$.
(3) Solve the objective function of the constraint in step (2), i.e., solve formula (1). The procedure is: fix $W^c$ and update $S^c$. Randomly generate the matrix $W^c$ and substitute it into the objective function; the objective then becomes an elastic-net regularized least squares problem, i.e., it is converted into:
$$f(S^c)=\|\phi(X_c)-\phi(X_c)W^cS^c\|_F^2+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2\qquad(2)$$
Formula (2) can be expanded as:
$$\begin{aligned}
f(S^c)&=\operatorname{trace}\{\phi(X_c)^T\phi(X_c)-2\,\phi(X_c)^T\phi(X_c)W^cS^c\}+\operatorname{trace}\{S^{cT}(W^{cT}\phi(X_c)^T\phi(X_c)W^c)S^c\}+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2\\
&=\operatorname{trace}\{\kappa(X_c,X_c)\}-2\operatorname{trace}\{\kappa(X_c,X_c)W^cS^c\}+\operatorname{trace}\{S^{cT}(W^{cT}\kappa(X_c,X_c)W^c)S^c\}+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2\\
&=\operatorname{trace}\{\kappa(X_c,X_c)\}-2\sum_{n=1}^{N_c}[\kappa(X_c,X_c)W^c]_{n\cdot}S^c_{\cdot n}+\sum_{n=1}^{N_c}S^{cT}_{\cdot n}[W^{cT}\kappa(X_c,X_c)W^c]S^c_{\cdot n}+2\alpha\sum_{k=1}^{K}\sum_{n=1}^{N_c}|S^c_{kn}|+\beta\sum_{k=1}^{K}\sum_{n=1}^{N_c}(S^c_{kn})^2
\end{aligned}\qquad(3)$$
Formula (3) is then decomposed into a series of subproblems, solving for each element of $S^c$ in turn. Discarding the terms irrelevant to the element being solved, formula (3) reduces to:
$$f(S^c_{kn})=-2[\kappa(X_c,X_c)W^c]_{nk}S^c_{kn}+(S^c_{kn})^2[W^{cT}\kappa(X_c,X_c)W^c]_{kk}+2\sum_{l=1,\,l\ne k}^{K}[W^{cT}\kappa(X_c,X_c)W^c]_{lk}S^c_{ln}S^c_{kn}+2\alpha|S^c_{kn}|+\beta(S^c_{kn})^2\qquad(4)$$
According to the theory of the parabola (quadratic function), the solution of formula (4) is easily obtained; and since the sample points are independent, $S^c$ can be solved one row at a time, with the following update formula:
$$S^c_{k\cdot}=\frac{1}{1+\beta}\min\{[W^{cT}\kappa(X_c,X_c)]_{k\cdot}-[E\overline{S^c}^{\,k}]_{k\cdot},\,-\alpha\}+\frac{1}{1+\beta}\max\{[W^{cT}\kappa(X_c,X_c)]_{k\cdot}-[E\overline{S^c}^{\,k}]_{k\cdot},\,\alpha\}\qquad(5)$$
where $\overline{S^c}^{\,k}$ denotes $S^c$ with its $k$-th row set to zero, $E=W^{cT}\kappa(X_c,X_c)W^c+\beta I$, and $I$ is the identity matrix;
Traversing every row of $S^c$ in this way completes one update of $S^c$; a sketch of this update follows.
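In matrix terms, $[E\overline{S^c}^{\,k}]_{k\cdot}=E_{k\cdot}S^c-E_{kk}S^c_{k\cdot}$, so each row update of (5) needs only precomputed Gram products. A minimal NumPy sketch under those definitions; the array shapes and the in-place update are implementation assumptions:

```python
def update_S(K_cc, W, S, alpha, beta):
    """One full pass of update (5): fix W^c, refresh S^c row by row.
    K_cc: kappa(Xc, Xc), shape (Nc, Nc); W: (Nc, K); S: (K, Nc)."""
    E = W.T @ K_cc @ W + beta * np.eye(W.shape[1])  # E = W^cT kappa W^c + beta*I
    WtK = W.T @ K_cc                                # [W^cT kappa(Xc, Xc)]
    for k in range(S.shape[0]):
        # [E * S-bar^k]_{k.} = E_{k.} S - E_{kk} S_{k.}  (row k of S zeroed)
        x = WtK[k] - (E[k] @ S - E[k, k] * S[k])
        # min{x, -alpha} + max{x, alpha} is soft-thresholding at alpha
        S[k] = (np.minimum(x, -alpha) + np.maximum(x, alpha)) / (1.0 + beta)
    return S
```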
(4) Fix the $S^c$ updated in step (3) and update $W^c$; the objective function of the constraint now becomes an $\ell_2$-norm-constrained least squares problem, i.e., it is converted into:
$$f(W^c)=\|\phi(X_c)-\phi(X_c)W^cS^c\|_F^2,\quad\text{s.t. }\|\phi(X_c)W^c_{\cdot k}\|_2^2\le 1,\ \forall k=1,2,\dots,K\qquad(6)$$
Formula (6) is solved by the method of Lagrange multipliers, finally yielding the solution for $W^c_{\cdot k}$:
$$W^c_{\cdot k}=\frac{(S^c_{k\cdot})^T-[\overline{W^c}^{\,k}F]_{\cdot k}}{\sqrt{\big((S^c_{k\cdot})^T-[\overline{W^c}^{\,k}F]_{\cdot k}\big)^T\kappa(X_c,X_c)\big((S^c_{k\cdot})^T-[\overline{W^c}^{\,k}F]_{\cdot k}\big)}}\qquad(7)$$
where $F=S^cS^{cT}$ and $\overline{W^c}^{\,k}$ denotes $W^c$ with its $k$-th column set to zero; a column-update sketch follows.
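Using $[\overline{W^c}^{\,k}F]_{\cdot k}=W^cF_{\cdot k}-W^c_{\cdot k}F_{kk}$, update (7) normalizes each new column in the kernel-space norm so that the constraint $\|\phi(X_c)W^c_{\cdot k}\|_2^2\le 1$ holds with equality. A sketch under the same shape assumptions as above:

```python
def update_W(K_cc, W, S):
    """One full pass of update (7): fix S^c, refresh W^c column by column."""
    F = S @ S.T
    for k in range(W.shape[1]):
        # v = (S_{k.})^T - [W-bar^k F]_{.k}  (column k of W zeroed)
        v = S[k] - (W @ F[:, k] - W[:, k] * F[k, k])
        # kernel-space norm ||phi(Xc) v||_2 = sqrt(v^T kappa(Xc,Xc) v)
        W[:, k] = v / np.sqrt(max(v @ K_cc @ v, 1e-12))
    return W
```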
(5) Alternately iterate step (3) and step (4), finally obtaining the optimal sparse coding dictionary $B^c=\phi(X_c)W^c$. The alternation might be organized as in the sketch below.
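Because $B^c=\phi(X_c)W^c$ cannot be stored explicitly, an implementation keeps the pair $(X_c,W^c)$. A sketch of the per-class alternation; the initialization, the fixed iteration count, and the dictionary size are illustrative assumptions (the patent does not fix them):

```python
def train_class_dictionary(K_cc, dict_size, alpha, beta, n_iter=50, seed=0):
    """Alternate updates (5) and (7) for one class; returns W^c, which together
    with Xc implicitly represents the dictionary B^c = phi(Xc) W^c."""
    rng = np.random.default_rng(seed)
    Nc = K_cc.shape[0]
    W = rng.standard_normal((Nc, dict_size))   # random W^c, as in step (3)
    for k in range(dict_size):                 # enforce ||phi(Xc) W_{.k}||_2 = 1
        W[:, k] /= np.sqrt(max(W[:, k] @ K_cc @ W[:, k], 1e-12))
    S = np.zeros((dict_size, Nc))
    for _ in range(n_iter):                    # alternating iteration
        S = update_S(K_cc, W, S, alpha, beta)
        W = update_W(K_cc, W, S)
    return W
```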
(6) Obtain the optimal sparse coding dictionary of every class according to steps (1) to (5), and put the per-class optimal sparse coding dictionaries together to obtain the dictionary $B=[B_1,\dots,B_c,\dots,B_C]$;
Step 2: see Fig. 2; classify a sample as follows:
(1) Read the image feature of the test sample to be recognized and apply two-norm normalization to it; define $y\in R^{D\times 1}$ to represent the image feature of a test sample to be recognized;
(2) Map the test image feature $y$ into the kernel space, $\phi(y)$;
(3) Using the dictionary $B$ obtained in Step 1, fit $\phi(y)$ with the fitting function:
$$f(s)=\|\phi(y)-Bs\|_2^2+2\alpha\|s\|_1+\beta\|s\|_2^2\qquad(8)$$
where $s$ denotes the sparse code of the test image feature $y$ in kernel space;
(4) Solve the fitting function in step (3); the solution is:
$$s_k=\frac{1}{1+\beta}\min\{[W^{cT}\kappa(X_c,y)]_k-[W^{cT}\kappa(X_c,X_c)W^c\bar{s}^{\,k}]_k,\,-\alpha\}+\frac{1}{1+\beta}\max\{[W^{cT}\kappa(X_c,y)]_k-[W^{cT}\kappa(X_c,X_c)W^c\bar{s}^{\,k}]_k,\,\alpha\}\qquad(9)$$
where $\bar{s}^{\,k}$ denotes $s$ with its $k$-th entry set to zero, and $s=[s^1,\dots,s^c,\dots,s^C]$; a coding sketch follows.
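Since the solution (9) is written entirely in class-$c$ quantities, the sketch below codes $y$ against one class dictionary at a time, which is exactly what the class-wise residual (10) consumes; coding against the concatenated $B$ instead would add cross-class Gram blocks. The coordinate-descent loop and iteration count are implementation assumptions:

```python
def code_test_sample(K_cy, K_cc, W, alpha, beta, n_iter=50):
    """Elastic-net code of phi(y) against B^c = phi(Xc) W^c, following (9).
    K_cy: kappa(Xc, y) as a length-Nc vector; K_cc: kappa(Xc, Xc)."""
    G = W.T @ K_cc @ W                 # Gram matrix of the dictionary atoms
    b = W.T @ K_cy                     # [W^cT kappa(Xc, y)]
    s = np.zeros(W.shape[1])
    for _ in range(n_iter):
        for k in range(s.size):
            # [W^cT kappa W^c s-bar^k]_k with entry k of s zeroed
            x = b[k] - (G[k] @ s - G[k, k] * s[k])
            s[k] = (min(x, -alpha) + max(x, alpha)) / (1.0 + beta)
    return s
```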
(5) Compute the fitting error of $\phi(y)$ against the subspace constructed by each class of samples, denoted $r(c)$, whose expression is:
$$r(c)=\|\phi(y)-B_cs^c\|_2^2=\|\phi(y)-\phi(X_c)W^cs^c\|_2^2\qquad(10)$$
(6) comparing the error of fitting of nuclear space φ (y) and every class sample, image to be identified then belongs to error of fitting minimum That classification.
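Expanding (10) removes the explicit $\phi(y)$: $r(c)=\kappa(y,y)-2\,\kappa(y,X_c)W^cs^c+s^{cT}W^{cT}\kappa(X_c,X_c)W^cs^c$. A sketch of the decision rule under that expansion; `class_data`, a list of per-class $(X_c,W^c)$ pairs, is a hypothetical container:

```python
def classify(y, class_data, alpha, beta, kind="linear"):
    """Assign y to the class with the smallest kernel-space fitting error (10)."""
    errors = []
    for Xc, W in class_data:
        K_cc = kernel(Xc, Xc, kind)
        K_cy = kernel(Xc, y[:, None], kind).ravel()
        s = code_test_sample(K_cy, K_cc, W, alpha, beta)
        Ws = W @ s
        k_yy = kernel(y[:, None], y[:, None], kind)[0, 0]
        # r(c) = k(y,y) - 2 k(y,Xc) W^c s^c + s^cT W^cT k(Xc,Xc) W^c s^c
        errors.append(k_yy - 2.0 * (K_cy @ Ws) + Ws @ K_cc @ Ws)
    return int(np.argmin(errors))      # index of the minimum-error class
```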
In one embodiment of the present invention, a classifier designed with the proposed design method (abbreviated KECSSC) is used in classification experiments on the CMU PIE database and compared with classifiers designed by other methods. Each image is cropped to 32×32 pixels, the cropped pixels are arranged into a column vector, and in this embodiment the column vector is normalized with the L2-norm.
In this embodiment, the proposed design method is compared with five classifier design methods: the nearest neighbor classifier (NN), the collaborative-representation-based classifier (CRC), the sparse-representation-based classifier (SRC), the self-explanatory sparse representation classifier design method (CSDL), and the support-vector-machine-based classifier (SVM). For fairness of comparison, 5 images per class are randomly selected as test images and 10 images per class as training images.
In the design method of this embodiment, $\alpha$, the penalty coefficient of the sparsity term in the sparse coding algorithm, is set to $2^{-9}$, and $\beta$, the penalty coefficient of the collaborative term, is set to $2^{-4}$. These settings plug into the sketches above as shown below.
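For concreteness, an end-to-end usage sketch with this embodiment's penalty settings; `classes` (a list of per-class D×N_c feature matrices), `y_raw` (a test feature vector), the dictionary size, and the kernel choice are hypothetical placeholders:

```python
alpha, beta = 2.0 ** -9, 2.0 ** -4     # penalty settings of this embodiment

class_data = []
for Xc in classes:                      # `classes`: hypothetical per-class matrices
    Xc = l2_normalize_columns(Xc)
    K_cc = kernel(Xc, Xc, "hellinger")
    W = train_class_dictionary(K_cc, dict_size=10, alpha=alpha, beta=beta)
    class_data.append((Xc, W))

y = l2_normalize_columns(y_raw[:, None]).ravel()
label = classify(y, class_data, alpha, beta, "hellinger")
```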
The experimental results of the design method of this embodiment on the CMU PIE database are as follows:
TABLE I
RECOGNITION RATE ON THE CMU PIE DATASET (%).
In the experiments of this embodiment, three kernel methods (the linear kernel, the Hellinger kernel, and the polynomial kernel) are used to design the classifier. As TABLE I shows, over repeated experiments on the CMU PIE database, the recognition rate with all three kernels is clearly higher than that of the other five classifier design methods. Specifically, the proposed design method achieves a recognition rate of 79.89% ± 1.98 with the linear kernel, 5.43% higher than CSDL; 81.78% ± 1.77 with the Hellinger kernel, 7% higher than CSDL; and 79.84% ± 1.64 with the polynomial kernel, 6.35% higher than CSDL.
In another embodiment of the present invention, a classifier designed with the proposed design method (KECSSC) is used in classification experiments on the Extended YaleB database and compared with classifiers designed by other methods. Each image is cropped to 32×32 pixels, the cropped pixels are arranged into a column vector, and the column vector is normalized with the L2-norm.
As before, the proposed design method is compared with five classifier design methods (NN, CRC, SRC, CSDL, SVM); for fairness of comparison, 5 images per class are randomly selected as test images and 10 images per class as training images.
As before, $\alpha$ is set to $2^{-9}$ and $\beta$ to $2^{-4}$.
The experimental results of the design method of this embodiment on the Extended YaleB database are as follows:
TABLE II
RECOGNITION RATE ON THE EXTENDED YALEB DATASET (%).
Again, three kernel methods (linear, Hellinger, and polynomial) are used to design the classifier. As TABLE II shows, over repeated experiments on the Extended YaleB database, the recognition rate with all three kernels is clearly higher than that of the other five classifier design methods. Specifically, the proposed design method achieves a recognition rate of 79.09% ± 1.87 with the linear kernel, 0.54% higher than CSDL; 91.22% ± 1.51 with the Hellinger kernel, 2.24% higher than CSDL; and 80.13% ± 2.04 with the polynomial kernel, 0.45% higher than CSDL.
In yet another embodiment of the present invention, a classifier designed with the proposed design method (KECSSC) is used in classification experiments on the AR database and compared with classifiers designed by other methods. Each image is cropped to 32×32 pixels, the cropped pixels are arranged into a column vector, and the column vector is normalized with the L2-norm.
As before, the proposed design method is compared with five classifier design methods (NN, CRC, SRC, CSDL, SVM); for fairness of comparison, 5 images per class are randomly selected as test images and 10 images per class as training images.
As before, $\alpha$ is set to $2^{-9}$ and $\beta$ to $2^{-4}$.
The experimental results of this method on the AR database are as follows:
TABLE III
RECOGNITION RATE ON THE AR DATASET (%).
Again, three kernel methods (linear, Hellinger, and polynomial) are used to design the classifier. As TABLE III shows, over repeated experiments on the AR database, the recognition rate with all three kernels is clearly higher than that of the other five classifier design methods. Specifically, the proposed design method achieves a recognition rate of 94.11% ± 1.16 with the linear kernel, 2.99% higher than CSDL; 93.45% ± 0.84 with the Hellinger kernel, 3.68% higher than CSDL; and 93.48% ± 0.96 with the polynomial kernel, 4.46% higher than CSDL.
The experimental results of the above embodiments show that the design method of the present invention yields a smaller classification error and higher recognition performance than the other current classifier design methods, and that adopting the proposed method can markedly improve the recognition performance of a classifier.
The embodiments provided above are only for convenience of illustrating the present invention and do not limit its scope of protection; various simple variations and modifications made within the technical scheme of the present invention by a person of ordinary skill in the art shall all be covered by the following claims.

Claims (1)

1. A design method for an elastic-net-constrained self-explanatory sparse representation classifier, characterized by comprising the following steps:
Step 1: design the classifier, as follows:
(1) Read the training samples. The training samples comprise $C$ classes in total. Define $X=[X_1,X_2,\dots,X_c,\dots,X_C]\in R^{D\times N}$ to represent the training samples, where $D$ is the face feature dimension and $N$ is the total number of training samples; $X_1,X_2,\dots,X_c,\dots,X_C$ denote the samples of classes $1,2,\dots,c,\dots,C$ respectively. Define $N_1,N_2,\dots,N_c,\dots,N_C$ as the number of training samples of each class, so that $N=N_1+N_2+\dots+N_c+\dots+N_C$.
(2) Apply two-norm normalization to the training samples to obtain normalized training samples;
(3) Take out each class of training samples in turn and train a dictionary for that class. The dictionary training process is:
(1) Take out the samples $X_c$ of class $c$ and map $X_c$ into the kernel space, $\phi(X_c)$;
(2) Train the dictionary $B^c$ from $\phi(X_c)$ with a sparse coding algorithm, where $B^c$ denotes the dictionary learned from the class-$c$ samples. The training must satisfy a constraint whose objective function is:
$$f(S^c)=\|\phi(X_c)-\phi(X_c)W^cS^c\|_F^2+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2,\quad\text{s.t. }\|\phi(X_c)W^c_{\cdot k}\|_2^2\le 1,\ \forall k=1,2,\dots,K\qquad(1)$$
where $\alpha$ is the penalty coefficient of the sparsity term in the sparse coding algorithm, $\beta$ is the penalty coefficient of the collaborative (ridge) term, $S^c$ is the sparse representation matrix of the class-$c$ kernel-space training samples, $K$ is the size of the learned dictionary, and $W^c\in R^{N_c\times K}$ is a weight matrix whose columns express the contribution of each kernel-space sample to the corresponding dictionary atom; the dictionary is $B^c=\phi(X_c)W^c$.
(3) Solve the objective function of the constraint in step (2), i.e., solve formula (1). The procedure is: fix $W^c$ and update $S^c$. Randomly generate the matrix $W^c$ and substitute it into the objective function; the objective then becomes an elastic-net regularized least squares problem, i.e., it is converted into:
$$f(S^c)=\|\phi(X_c)-\phi(X_c)W^cS^c\|_F^2+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2\qquad(2)$$
Formula (2) can be expanded as:
$$\begin{aligned}
f(S^c)&=\operatorname{trace}\{\phi(X_c)^T\phi(X_c)-2\,\phi(X_c)^T\phi(X_c)W^cS^c\}+\operatorname{trace}\{S^{cT}(W^{cT}\phi(X_c)^T\phi(X_c)W^c)S^c\}+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2\\
&=\operatorname{trace}\{\kappa(X_c,X_c)\}-2\operatorname{trace}\{\kappa(X_c,X_c)W^cS^c\}+\operatorname{trace}\{S^{cT}(W^{cT}\kappa(X_c,X_c)W^c)S^c\}+2\alpha\sum_{n=1}^{N_c}\|S^c_{\cdot n}\|_1+\beta\|S^c\|_F^2\\
&=\operatorname{trace}\{\kappa(X_c,X_c)\}-2\sum_{n=1}^{N_c}[\kappa(X_c,X_c)W^c]_{n\cdot}S^c_{\cdot n}+\sum_{n=1}^{N_c}S^{cT}_{\cdot n}[W^{cT}\kappa(X_c,X_c)W^c]S^c_{\cdot n}+2\alpha\sum_{k=1}^{K}\sum_{n=1}^{N_c}|S^c_{kn}|+\beta\sum_{k=1}^{K}\sum_{n=1}^{N_c}(S^c_{kn})^2
\end{aligned}\qquad(3)$$
Formula (3) is then decomposed into a series of subproblems, solving for each element of $S^c$ in turn. Discarding the terms irrelevant to the element being solved, formula (3) reduces to:
$$f(S^c_{kn})=-2[\kappa(X_c,X_c)W^c]_{nk}S^c_{kn}+(S^c_{kn})^2[W^{cT}\kappa(X_c,X_c)W^c]_{kk}+2\sum_{l=1,\,l\ne k}^{K}[W^{cT}\kappa(X_c,X_c)W^c]_{lk}S^c_{ln}S^c_{kn}+2\alpha|S^c_{kn}|+\beta(S^c_{kn})^2\qquad(4)$$
According to the theory of the parabola (quadratic function), the solution of formula (4) is easily obtained; and since the sample points are independent, $S^c$ can be solved one row at a time, with the following update formula:
$$S^c_{k\cdot}=\frac{1}{1+\beta}\min\{[W^{cT}\kappa(X_c,X_c)]_{k\cdot}-[E\overline{S^c}^{\,k}]_{k\cdot},\,-\alpha\}+\frac{1}{1+\beta}\max\{[W^{cT}\kappa(X_c,X_c)]_{k\cdot}-[E\overline{S^c}^{\,k}]_{k\cdot},\,\alpha\}\qquad(5)$$
where $\overline{S^c}^{\,k}$ denotes $S^c$ with its $k$-th row set to zero, $E=W^{cT}\kappa(X_c,X_c)W^c+\beta I$, and $I$ is the identity matrix;
Traversing every row of $S^c$ in this way completes one update of $S^c$;
(4) Fix the $S^c$ updated in step (3) and update $W^c$; the objective function of the constraint now becomes an $\ell_2$-norm-constrained least squares problem, i.e., it is converted into:
$$f(W^c)=\|\phi(X_c)-\phi(X_c)W^cS^c\|_F^2,\quad\text{s.t. }\|\phi(X_c)W^c_{\cdot k}\|_2^2\le 1,\ \forall k=1,2,\dots,K\qquad(6)$$
Formula (6) is solved by the method of Lagrange multipliers, finally yielding the solution for $W^c_{\cdot k}$:
$$W^c_{\cdot k}=\frac{(S^c_{k\cdot})^T-[\overline{W^c}^{\,k}F]_{\cdot k}}{\sqrt{\big((S^c_{k\cdot})^T-[\overline{W^c}^{\,k}F]_{\cdot k}\big)^T\kappa(X_c,X_c)\big((S^c_{k\cdot})^T-[\overline{W^c}^{\,k}F]_{\cdot k}\big)}}\qquad(7)$$
where $F=S^cS^{cT}$ and $\overline{W^c}^{\,k}$ denotes $W^c$ with its $k$-th column set to zero;
(5) Alternately iterate step (3) and step (4), finally obtaining the optimal sparse coding dictionary $B^c=\phi(X_c)W^c$;
(6) Obtain the optimal sparse coding dictionary of every class according to steps (1) to (5), and put the per-class optimal sparse coding dictionaries together to obtain the dictionary $B=[B_1,\dots,B_c,\dots,B_C]$;
Step 2: classify a sample, as follows:
(1) Read the image feature of the test sample to be recognized and apply two-norm normalization to it; define $y\in R^{D\times 1}$ to represent the image feature of a test sample to be recognized;
(2) Map the test image feature $y$ into the kernel space, $\phi(y)$;
(3) Using the dictionary $B$ obtained in Step 1, fit $\phi(y)$ with the fitting function:
$$f(s)=\|\phi(y)-Bs\|_2^2+2\alpha\|s\|_1+\beta\|s\|_2^2\qquad(8)$$
where $s$ denotes the sparse code of the test image feature $y$ in kernel space;
(4) Solve the fitting function in step (3); the solution is:
$$s_k=\frac{1}{1+\beta}\min\{[W^{cT}\kappa(X_c,y)]_k-[W^{cT}\kappa(X_c,X_c)W^c\bar{s}^{\,k}]_k,\,-\alpha\}+\frac{1}{1+\beta}\max\{[W^{cT}\kappa(X_c,y)]_k-[W^{cT}\kappa(X_c,X_c)W^c\bar{s}^{\,k}]_k,\,\alpha\}\qquad(9)$$
where $\bar{s}^{\,k}$ denotes $s$ with its $k$-th entry set to zero, and $s=[s^1,\dots,s^c,\dots,s^C]$;
(5) Compute the fitting error of $\phi(y)$ against the subspace constructed by each class of samples, denoted $r(c)$, whose expression is:
$$r(c)=\|\phi(y)-B_cs^c\|_2^2=\|\phi(y)-\phi(X_c)W^cs^c\|_2^2\qquad(10)$$
(6) comparing the error of fitting of nuclear space φ (y) and every class sample, image to be identified then belongs to that of error of fitting minimum Classification.
CN201610620582.2A 2016-07-29 2016-07-29 Design method of an elastic-net-constrained self-explanatory sparse representation classifier Pending CN106250929A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610620582.2A CN106250929A (en) 2016-07-29 2016-07-29 Design method of an elastic-net-constrained self-explanatory sparse representation classifier

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610620582.2A CN106250929A (en) 2016-07-29 2016-07-29 Design method of an elastic-net-constrained self-explanatory sparse representation classifier

Publications (1)

Publication Number Publication Date
CN106250929A true CN106250929A (en) 2016-12-21

Family

ID=57606980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610620582.2A Pending CN106250929A (en) Design method of an elastic-net-constrained self-explanatory sparse representation classifier

Country Status (1)

Country Link
CN (1) CN106250929A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392251A (en) * 2014-11-28 2015-03-04 西安电子科技大学 Hyperspectral image classification method based on semi-supervised dictionary learning
CN105608433A (en) * 2015-12-23 2016-05-25 北京化工大学 Nuclear coordinated expression-based hyperspectral image classification method
CN105787430A (en) * 2016-01-12 2016-07-20 南通航运职业技术学院 Method for identifying second level human face with weighted collaborative representation and linear representation classification combined
CN105740908A (en) * 2016-01-31 2016-07-06 中国石油大学(华东) Classifier design method based on kernel space self-explanatory sparse representation
CN105760821A (en) * 2016-01-31 2016-07-13 中国石油大学(华东) Classification and aggregation sparse representation face identification method based on nuclear space

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
B. D. Liu et al.: "Class specific subspace learning for collaborative representation", 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875928A (en) * 2017-05-15 2018-11-23 广东石油化工学院 Multi-output regression network and learning method
CN108875928B (en) * 2017-05-15 2021-02-26 广东石油化工学院 Multi-output regression network and learning method
CN111695464A (en) * 2020-06-01 2020-09-22 温州大学 Modeling method for linear coring feature space grouping based on fusion kernel

Similar Documents

Publication Publication Date Title
US11170502B2 Method based on deep neural network to extract appearance and geometry features for pulmonary textures classification
CN105740908B Classifier design method based on kernel space self-explanatory sparse representation
Xu et al. Stacked Sparse Autoencoder (SSAE) based framework for nuclei patch classification on breast cancer histopathology
CN103854645B Speaker-independent speech emotion recognition method based on speaker penalization
CN106897685A Face recognition method and system based on dictionary learning and sparse feature representation with kernel non-negative matrix factorization
CN104268593A Multiple-sparse-representation face recognition method for the small-sample-size problem
CN105138993A Method and device for building a face recognition model
CN109992779A Sentiment analysis method, apparatus, device and storage medium based on CNN
CN108090830B Credit risk rating method and device based on facial portrait
CN105138973A Face authentication method and device
CN105868796A Design method for a kernel-space-based linear discriminant sparse representation classifier
CN102930301A Image classification method based on feature-weight learning and kernel sparse representation
CN106295694A Face recognition method based on iteratively reweighted constraint-set sparse representation classification
CN106529395B Signature image recognition method based on deep belief networks and k-means clustering
CN111401156A Image recognition method based on a Gabor convolutional neural network
CN105550649A Very-low-resolution face recognition method and system based on unified coupled local constraint representation
CN106157249A Single-image super-resolution reconstruction algorithm based on optical flow and sparse neighborhood embedding
CN108154924A Alzheimer's disease feature classification method and system based on support vector machines
CN110349170A Brain tumor segmentation algorithm cascading a fully connected CRF with an FCN and k-means clustering
CN104036242B Object recognition method based on convolutional restricted Boltzmann machines with the centering trick
CN108520201A Robust face recognition method based on weighted mixed-norm regression
CN106021402A Multi-modal multi-class boosting framework construction method and device for cross-modal retrieval
CN106250929A Design method of an elastic-net-constrained self-explanatory sparse representation classifier
CN115658886A Intelligent liver cancer staging method, system and medium based on semantic text
CN104933410A Joint spectral-domain and spatial-domain classification method for hyperspectral images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination