CN107066964B - Rapid collaborative representation face classification method - Google Patents
Rapid collaborative representation face classification method
- Publication number
- CN107066964B CN107066964B CN201710233148.3A CN201710233148A CN107066964B CN 107066964 B CN107066964 B CN 107066964B CN 201710233148 A CN201710233148 A CN 201710233148A CN 107066964 B CN107066964 B CN 107066964B
- Authority
- CN
- China
- Prior art keywords
- sample
- class
- coefficient
- lda
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a rapid collaborative representation face classification method, which comprises the following steps: acquiring a plurality of face images of all persons to be tested to form a training sample set and a test sample set; projecting the training sample set onto a PCA subspace to obtain the eigenface set of the sample set; linearly representing the test sample jointly by the eigenfaces and an intra-class variation dictionary; iteratively computing the initial dictionary coefficients in the dictionary space under the l2 norm, and then introducing the l1 norm to accurately optimize the coefficients; projecting the PCA coefficients of the reconstructed test sample and the dimensionality-reduced samples onto an LDA subspace to obtain the LDA coefficients of the reconstructed test sample and of the training samples, respectively; and selecting the class with the smallest reconstruction error with respect to the test sample as the final candidate class of the face test sample. The method not only effectively eliminates the information redundancy between training samples and test samples and improves face recognition accuracy, but also greatly reduces the time overhead of traditional representation classification optimization methods, and has good generality and robustness.
Description
Technical field
The invention discloses a rapid collaborative representation face classification method, and relates to representation learning, dictionary optimization and data dimensionality reduction techniques, in particular the construction of an intra-class variation dictionary, PCA dimensionality reduction, two-stage sparse collaborative optimization and LDA discriminant reconstruction-error assessment. The invention belongs to the technical field of image processing and pattern recognition.
Background art
Face recognition (FR) is one of the most popular research fields in pattern recognition, computer vision and biometric identification. However, it is often affected by factors such as pose, expression, illumination and occlusion. In addition, not all of the acquired high-dimensional data is useful for classification. To obtain efficient face recognition performance, researchers have in recent years proposed various classical dimensionality reduction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), Laplacian eigenmaps, locality preserving projections (LPP) and locally linear embedding (LLE), which have achieved great success in dimensionality-reduction statistical learning models. Among these classical feature transformation methods, PCA is an unsupervised linear dimensionality reduction method whose goal is to map high-dimensional data into a low-dimensional space through a linear projection such that the variance of the projected data is maximized. LDA is a supervised linear dimensionality reduction algorithm that reduces the dimensionality of the data by finding projection vectors that separate data of different classes as far as possible while clustering data of the same class as tightly as possible. Each of these transformation methods therefore preserves certain important characteristics of the data when projecting it into a new subspace.
At the same time, the sparse representation classification (SRC) method that has emerged in recent years has attracted the attention of researchers worldwide because of its concise theoretical construction and its many successful applications. The purpose of SRC is to select a small number of training samples from a dictionary composed of many different objects in order to reconstruct and represent a new object. To achieve this, SRC imposes an l1-norm constraint in the objective function to obtain a sparse vector. In practical applications, however, each class of training samples often lacks sufficient data, sometimes having only a single sample, which causes the SRC algorithm to perform poorly; this is the well-known small-sample problem. For this reason, a number of dictionary learning methods have been proposed to improve the performance of SRC. For example, Deng proposed an extended SRC method (ESRC) that solves the face representation optimization problem under a single training sample by constructing an intra-class variation dictionary. Yang and Zhu introduced feature similarity and discriminability into collaborative representation classification (CRC) to obtain a more general model. Song designed a half-face dictionary ensemble algorithm based on the face representation classification model; when quantizing the ensemble-learning atoms, this method successfully constructs a two-column (two-row) half-face training matrix, and such conjectured faces have been found to be beneficial to face recognition.
At present, research on SRC mainly concentrates on improving sparse representation ability and classification performance, while little research addresses how to solve the dimensionality reduction optimization problem during sparse decomposition so as to improve the time complexity of the algorithm. Traditional SRC methods perform image classification with an optimization model built from high-dimensional data under a single norm, ignoring the information redundancy between test samples and training samples, which leads to uncertain classification decisions.
Object of the invention
In the case of insufficient training samples, traditional representation learning optimization methods ignore the information redundancy between samples and lack discriminability when solving the reconstruction problem for high-dimensional raw data. At the same time, in order to avoid the very large time overhead caused by solving the minimization problem with the l1 norm, a rapid collaborative representation face classification method is proposed: after establishing a data dimensionality reduction model, the method obtains a more robust sparse coefficient vector by designing a two-stage optimization strategy based on mixed norms, thereby accelerating the optimization and improving face recognition accuracy.
Summary of the invention
Technical solution: a rapid collaborative representation face classification method includes the following steps:
(a) acquiring the face images of all persons to be detected, where several images of each person are captured under different illumination, expression and occlusion conditions, all face images of one person representing one class, so that the face images of all persons are combined into a sample data set;
(b) from the existing sample data set, randomly selecting 15~25% of the sample classes to construct an intra-class variation dictionary D_I, selecting part of the samples in the remaining sample classes as the training sample set A, and at the same time establishing a test sample data set for each class;
(c) projecting the training sample set A onto the PCA subspace to obtain the eigenfaces U of the sample set (see the sketch after these steps);
(d) linearly describing a test sample y with D_I and U, obtaining a sparse representation model under dimensionality-reduction constrained optimization, and constructing a two-stage optimization method based on mixed norms to iteratively compute the PCA coefficient z of the reconstructed test sample; the two-stage optimization method iteratively computes the initial dictionary coefficients in the dictionary space under the l2 norm, and then introduces the l1 norm a second time to accurately optimize the coefficients;
(e) projecting z and the training samples in the PCA subspace onto the LDA subspace to obtain the LDA coefficient of the reconstructed test sample and the LDA coefficients of the training samples, respectively;
(f) computing the LDA coefficient center, center_k, of each class k of training samples;
(g) computing the reconstruction error between the LDA coefficient of the reconstructed test sample and each center_k, and selecting the class with the smallest reconstruction error D_k as the final candidate class of the face test sample.
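Step (c) above can be illustrated with a short code sketch. The sketch below is a minimal, non-authoritative illustration in Python/NumPy, assuming the training samples are stacked as columns of a matrix A; the function name pca_eigenfaces and its parameters are assumptions of the sketch, not part of the patent.

```python
import numpy as np

def pca_eigenfaces(A, n_components):
    """Compute an eigenface basis U for the training set (step (c)).

    A            : d x n matrix, one flattened face image per column.
    n_components : number of eigenfaces (principal directions) to keep.
    Returns the d x n_components basis U and the mean face used for centering.
    """
    mean_face = A.mean(axis=1, keepdims=True)
    # Economy-size SVD of the centered data: the left singular vectors are the eigenfaces.
    U, _, _ = np.linalg.svd(A - mean_face, full_matrices=False)
    return U[:, :n_components], mean_face
```

An economy-size SVD of the centered data yields the same principal directions as an eigendecomposition of the covariance matrix, which keeps the sketch short.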
Preferably, the step (b) includes the following steps:
(b1) assuming that P classes containing m samples in total are selected to construct the intra-class variation dictionary, denote by D_i the variation set of the i-th class, where each column of D_i represents one variation; the P classes then jointly form the variation dictionary D_I = [D_1, …, D_P];
(b2) to construct the intra-class variation dictionary, the natural image of each class is subtracted from the other images of the same class, i.e. each column of D_i is one of the remaining images of the i-th class minus the natural image c_i of that class, where Ã_i denotes the sample set of the i-th class after removing the natural image and c_i represents the natural image of the i-th class sample set.
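As a concrete illustration of (b1)-(b2), the following minimal sketch builds D_I, assuming each selected class is supplied as a matrix of flattened images (one image per column) and that the position of its natural (neutral) image is known; the helper name build_intraclass_dictionary and the natural_idx parameter are assumptions of the sketch.

```python
import numpy as np

def build_intraclass_dictionary(class_samples, natural_idx=0):
    """Build the intra-class variation dictionary D_I = [D_1, ..., D_P].

    class_samples : list of d x m_i matrices, one matrix per selected class.
    natural_idx   : column index of the 'natural' (neutral) image in each class.
    Each block D_i is formed by subtracting the class's natural image from the
    remaining images of that class, so its columns capture intra-class variation.
    """
    blocks = []
    for A_i in class_samples:
        natural = A_i[:, natural_idx:natural_idx + 1]   # the natural image of this class
        others = np.delete(A_i, natural_idx, axis=1)    # the remaining images
        blocks.append(others - natural)                 # D_i
    return np.hstack(blocks)                            # D_I
```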
Preferably, the step (d) includes the following steps:
(d1) for a test sample y ∈ R^d, linearly describing y by the eigenfaces of the training sample set and the intra-class variation dictionary as y = Uz + D_I β + t, where t ∈ R^d is the error term that cannot be represented by the variation dictionary and β is the variation dictionary coefficient vector;
(d2) treating the equation y = Uz + D_I β + t as a typical least-squares problem, so that the variation dictionary coefficient is obtained as β = (D_I^T D_I + λI)^{-1} D_I^T (y − Uz), where λ is a positive perturbation term and I is the identity matrix;
(d3) according to the linear description of the test sample, solving for the PCA coefficient of the reconstructed test sample as z = U^T (y − D_I β);
(d4) repeating steps (d2) and (d3), and terminating the iterative computation when the number of least-squares iterations reaches a prescribed threshold;
(d5) constructing an l1-norm optimization model again and, by introducing the l1 constraint a second time, obtaining a more accurate solution β̂ = argmin_β ||β||_1 subject to ||y − Uz − D_I β||_2 ≤ ε, where ε is the reconstruction error bound and ε > 0; the PCA coefficient of the reconstructed test sample is accordingly updated to z = U^T (y − D_I β̂).
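A minimal sketch of the two-stage optimization in (d1)-(d5), under the same assumptions as the sketches above: the first stage uses the closed-form ridge updates of (d2)-(d3), and the secondary l1 refinement is illustrated with a plain ISTA loop in a penalized (rather than constrained) formulation, since the patent does not mandate a particular l1 solver; the names ista_l1, two_stage_coefficients, lam, alpha and n_iters are illustrative.

```python
import numpy as np

def ista_l1(D, r, alpha=0.01, n_steps=200):
    """Plain ISTA for min_beta 0.5*||r - D@beta||_2^2 + alpha*||beta||_1."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the smooth part
    beta = np.zeros(D.shape[1])
    for _ in range(n_steps):
        grad = D.T @ (D @ beta - r)
        step = beta - grad / L
        beta = np.sign(step) * np.maximum(np.abs(step) - alpha / L, 0.0)   # soft threshold
    return beta

def two_stage_coefficients(y, U, D_I, lam=1e-3, n_iters=5):
    """Stage 1 (d2)-(d4): alternate closed-form ridge updates of beta and z.
       Stage 2 (d5): refine beta under an l1 penalty, then update z once more."""
    G = np.linalg.inv(D_I.T @ D_I + lam * np.eye(D_I.shape[1])) @ D_I.T
    z = np.zeros(U.shape[1])
    for _ in range(n_iters):
        beta = G @ (y - U @ z)                   # beta = (D_I^T D_I + lam*I)^-1 D_I^T (y - U z)
        z = U.T @ (y - D_I @ beta)               # z    = U^T (y - D_I beta)
    beta = ista_l1(D_I, y - U @ z)               # secondary l1 refinement of beta
    z = U.T @ (y - D_I @ beta)                   # updated PCA coefficient of the reconstructed sample
    return z, beta
```

In this sketch the ridge solution of (d2) is precomputed once as G, so each first-stage iteration costs only a few matrix-vector products, which is where the speed advantage over a purely l1-based solver comes from.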
Preferably, in the step (e), z and the training samples in the PCA subspace are projected onto the LDA subspace, obtaining the LDA coefficient W^T z of the reconstructed test sample and the LDA coefficients of the training samples, respectively, where W refers to the LDA basis obtained from the training samples in the PCA space.
Preferably, in the step (f), assuming that the training sample set A contains K classes of samples, the LDA coefficient center of each class of training samples, center_k, is computed as the mean of the LDA coefficients of the samples of that class, where n_k is the number of samples in the k-th class.
Preferably, in the step (g), a reconstruction error D_k is constructed to measure the distance between the LDA coefficient center center_k of the k-th class of training samples and the LDA coefficient of the reconstructed face test sample, and the test sample is assigned to the class with the smallest D_k.
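A minimal sketch of steps (e)-(g) — LDA projection, per-class LDA coefficient centers and nearest-center assignment. scikit-learn's LinearDiscriminantAnalysis is used here purely for brevity and is an assumption of the sketch, as is the function name classify_by_center.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify_by_center(z, A_pca, labels, n_components=None):
    """Assign the reconstructed test coefficient z to the class whose LDA
    coefficient center is nearest, i.e. the class with the smallest D_k.

    z      : PCA coefficient of the reconstructed test sample.
    A_pca  : n_eigenfaces x n_train matrix of training samples in PCA space.
    labels : class label of each training sample.
    """
    labels = np.asarray(labels)
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    train_lda = lda.fit_transform(A_pca.T, labels)       # training samples in the LDA subspace
    z_lda = lda.transform(z.reshape(1, -1)).ravel()      # LDA coefficient of the reconstructed sample
    classes = np.unique(labels)
    centers = np.array([train_lda[labels == k].mean(axis=0) for k in classes])
    D = np.linalg.norm(centers - z_lda, axis=1)          # reconstruction error D_k for each class
    return classes[np.argmin(D)]
```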
A rapid collaborative representation face classification method comprises: acquiring several face images of all persons to be detected and combining them into a sample data set; from the existing sample data set, randomly selecting 15~25% of the sample classes to construct an intra-class variation dictionary D_I, selecting part of the samples in the remaining sample classes as the training sample set A, and establishing a test sample data set for each class; projecting the training sample set A onto the PCA subspace to obtain the eigenfaces U of the sample set; linearly describing the test sample y with D_I and U, obtaining a sparse representation model under dimensionality-reduction constrained optimization, and constructing a two-stage optimization method based on mixed norms to compute the PCA coefficient z of the reconstructed test sample; projecting z and the training samples in the PCA subspace onto the LDA subspace to obtain the LDA coefficient of the reconstructed test sample and the LDA coefficients of the training samples, respectively; computing the LDA coefficient center, center_k, of each class k of training samples; computing the reconstruction error between the LDA coefficient of the reconstructed test sample and each center_k, and selecting the class with the smallest reconstruction error D_k as the final candidate class of the face test sample.
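For orientation, a hedged end-to-end run of the sketches above on synthetic stand-in data (the helper functions come from the earlier sketches; the dimensions and random data are placeholders, not experimental settings of the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes, per_class = 1024, 10, 8                       # synthetic stand-in for face images

# Classes set aside for the intra-class variation dictionary (step (b)).
class_samples = [rng.normal(size=(d, per_class)) for _ in range(3)]
D_I = build_intraclass_dictionary(class_samples)

# Remaining classes form the training set A with one label per column.
A = rng.normal(size=(d, n_classes * per_class))
labels = np.repeat(np.arange(n_classes), per_class)
y = A[:, 0] + 0.1 * rng.normal(size=d)                      # a noisy copy of one training face

U, mean_face = pca_eigenfaces(A, n_components=40)           # step (c)
A_pca = U.T @ (A - mean_face)                               # training samples in the PCA subspace
z, _ = two_stage_coefficients(y - mean_face.ravel(), U, D_I)  # step (d)
print(classify_by_center(z, A_pca, labels))                 # steps (e)-(g)
```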
Beneficial effects: compared with the prior art, the present invention establishes a low-dimensional training model to linearly represent the test sample, iteratively computes the initial dictionary coefficients in the dictionary space under the l2 norm, and then introduces the l1 norm a second time to accurately optimize the coefficients, which effectively avoids the huge time overhead caused by representation learning methods that solely solve an l1 minimization problem, while fully guaranteeing the recognition performance of the algorithm. The technique finally completes the recognition task in the LDA discriminant space, which not only improves the robustness of the sparse representation of the test sample, but also further enhances the discriminative ability of the SRC algorithm.
Brief description of the drawings
Fig. 1 is the flowchart of the method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of part of the face data of the training samples of the present invention;
Fig. 3 is a schematic diagram of the test face samples of the present invention;
Fig. 4 is a schematic diagram of some of the variations in the intra-class variation dictionary of the present invention;
Fig. 5 is a schematic diagram of some of the eigenfaces of the training sample set of the present invention;
Fig. 6 is a line chart of experimental results under two different computation orders when solving the intra-class variation dictionary coefficients and the PCA coefficient of the reconstructed test sample;
Fig. 7 shows the recognition rate after iteratively computing with the l2 norm and then introducing the l1 norm a second time to optimize the coefficients;
Fig. 8 compares the recognition rate of the present invention with that of the method using only l1-norm optimization under the original dictionary;
Fig. 9 compares the computation speed of the present invention with the l1-norm computation speed under the original dictionary.
Detailed description of the embodiments
The present invention is further elucidated below in combination with specific embodiments. It should be understood that these embodiments are merely intended to illustrate the present invention and not to limit its scope; after reading the present invention, modifications of various equivalent forms made by those skilled in the art all fall within the scope defined by the appended claims of this application.
As shown in Fig. 1, a rapid collaborative representation face classification method includes the following steps:
(a) acquiring the face images of all persons to be detected, where several images of each person are captured under different illumination, expression and occlusion conditions, all face images of one person representing one class, so that the face images of all persons are combined into a sample data set;
(b) from the existing sample data set, randomly selecting 15~25% of the sample classes to construct an intra-class variation dictionary D_I, as shown in Fig. 4; selecting part of the samples in the remaining sample classes as the training sample set A, as shown in Fig. 2; and at the same time establishing a test sample data set for each class, as shown in Fig. 3, where the test sample data set can be constructed from the acquired face images of the persons to be detected;
The step (b) includes the following steps:
(b1) assuming that P classes containing m samples in total are selected to construct the intra-class variation dictionary, denote by D_i the variation set of the i-th class, where each column of D_i represents one variation; the P classes then jointly form the variation dictionary D_I = [D_1, …, D_P];
(b2) to construct the intra-class variation dictionary, the natural image of each class is subtracted from the other images of the same class, i.e. each column of D_i is one of the remaining images of the i-th class minus the natural image c_i of that class, where Ã_i denotes the sample set of the i-th class after removing the natural image and c_i represents the natural image of the i-th class sample set.
(c) projecting the remaining training sample set A onto the PCA subspace to obtain the eigenfaces U of the sample set; Fig. 5 shows a schematic diagram of some of the eigenfaces of the training sample set;
(d) linearly describing a test sample y with D_I and U, obtaining a sparse representation model under dimensionality-reduction constrained optimization, and constructing a two-stage optimization method based on mixed norms to iteratively compute the PCA coefficient z of the reconstructed test sample; the two-stage optimization method iteratively computes the initial dictionary coefficients in the dictionary space under the l2 norm, and then introduces the l1 norm a second time to accurately optimize the coefficients.
The step (d) includes the following steps:
(d1) for a test sample y ∈ R^d, linearly describing y by the eigenfaces of the training sample set and the intra-class variation dictionary as y = Uz + D_I β + t, where t ∈ R^d is the error term that cannot be represented by the variation dictionary and β is the variation dictionary coefficient vector;
(d2) treating the equation y = Uz + D_I β + t as a typical least-squares problem, so that the variation dictionary coefficient is obtained as β = (D_I^T D_I + λI)^{-1} D_I^T (y − Uz), where λ is a positive perturbation term and I is the identity matrix;
(d3) according to the linear description of the test sample, solving for the PCA coefficient of the reconstructed test sample as z = U^T (y − D_I β);
(d4) repeating steps (d2) and (d3), and terminating the iterative computation when the number of least-squares iterations reaches a prescribed threshold;
(d5) constructing an l1-norm optimization model again and, by introducing the l1 constraint a second time, obtaining a more accurate solution β̂ = argmin_β ||β||_1 subject to ||y − Uz − D_I β||_2 ≤ ε, where ε is the reconstruction error bound and ε > 0; the PCA coefficient of the reconstructed test sample is accordingly updated to z = U^T (y − D_I β̂).
(e) projecting z and the training samples in the PCA subspace onto the LDA subspace to obtain the LDA coefficient of the reconstructed test sample and the LDA coefficients of the training samples, respectively, where W refers to the LDA basis obtained from the training samples in the PCA space.
(f) assuming that the training sample set A contains K classes of samples, computing the LDA coefficient center of each class of training samples as the mean of the LDA coefficients of that class, where n_k is the number of samples in the k-th class.
(g) constructing a reconstruction error D_k to measure the distance between the LDA coefficient center center_k of the k-th class of training samples and the LDA coefficient of the reconstructed face test sample, and assigning the test sample to the class with the smallest D_k.
In the present invention, regarding research on the small-sample problem, most existing algorithms improve performance mainly by extending the original high-dimensional sample data set. Such methods therefore often ignore the redundancy between samples, which not only increases the time overhead enormously but can sometimes also reduce recognition accuracy. The present invention designs a rapid collaborative representation face classification method that represents the difference between test samples and training samples by constructing an intra-class variation dictionary and builds PCA and LDA dimensionality-reduction constraints into the sparse representation, thereby effectively reducing the interference introduced by sample extension and making the sparse representation optimization process more robust. Under this constrained optimization, the test sample can be rapidly and linearly represented and classified. In addition, traditional representation learning methods usually compute the representation coefficients with the l1 norm, but related research shows that the l1 norm is computationally expensive and has high time complexity. In fact, the l2 norm has been shown to have an effect similar to the l1 norm while substantially reducing the time overhead. Therefore, the present invention first establishes a fast dimensionality-reduced sparse optimization model and, in this model, linearly represents the test sample with the eigenfaces of the training samples and the intra-class variation dictionary. Considering that the intra-class variation dictionary contains various types of intra-class differences, its coefficients are not necessarily sparse, so we solve for them with the least-squares method and obtain the eigenface coefficients indirectly. However, another key problem also deserves attention, namely that the information redundancy between the test sample and the training samples can still lead to uncertain recognition results. To overcome this difficulty, on the basis of the initial dictionary coefficients obtained above, we introduce the l1 constraint a second time, thereby obtaining a more accurate solution for the intra-class variation dictionary coefficients and the PCA coefficient of the reconstructed test sample. The above method ensures that interference information in the samples, including illumination, expression and occlusion, is effectively rejected during the optimization process. The beneficial effects of this optimization process are analyzed in detail below:
First, consider the order in which the intra-class variation dictionary coefficients and the eigenface coefficients are computed. Fig. 6 shows a line chart of the experimental results for the two coefficients under different computation orders; the experiment was carried out on the AR data set. It can be seen that when the number of least-squares iterations is 1, computing the eigenface coefficients first gives a better recognition effect, but when the number of iterations is greater than 1, computing the intra-class variation dictionary coefficients first yields a higher recognition rate. In addition, as the number of iterations keeps increasing, the recognition curve gradually flattens, which means that accurate recognition does not require a large number of iterations: once the iteration count reaches a certain threshold, the recognition rate converges. Second, consider the effectiveness of introducing the l1 norm a second time after the present invention iteratively solves the intra-class variation dictionary coefficients with the least-squares method. Fig. 7 shows the recognition effect of the secondary introduction of the l1 norm after the iterative l2-norm computation. It can be clearly seen that the recognition rate with the secondary l1 norm is not only better than that of the method that directly solves the coefficients iteratively by least squares, but its curve is also more stable. In other words, the dimensionality-reduction constrained optimization model constructed by the present invention achieves a more accurate recognition effect, so reintroducing the l1 norm after iteratively solving the intra-class variation dictionary coefficients with the l2 norm is effective. Finally, the dictionary optimization process constructed by the present invention and its overall recognition effect are summarized. Fig. 8 and Fig. 9 respectively show the recognition rate and recognition speed of the present invention compared with the method that only uses l1-norm optimization under the original dictionary. It can be seen that the recognition rate of the present invention is slightly higher than that of the latter, while the recognition speed is greatly improved. The rapid collaborative representation face classification method disclosed by the present invention therefore indeed effectively improves the recognition performance of face collaborative representation.
In short, the present invention trains with a low-dimensional constrained model and linearly represents the test sample, and by designing a novel minimization objective function it can simultaneously obtain a more robust sparse coefficient vector and the PCA coefficient of the reconstructed sample, completing the recognition task in the LDA discriminant subspace. The present invention iteratively computes the l2 norm in the dictionary space and then introduces the l1 norm a second time to accurately optimize the coefficients, which effectively solves the huge time overhead caused by representation learning methods that rely solely on the l1 norm to compute the minimization problem, while fully guaranteeing the recognition performance of the algorithm. Compared with existing similar techniques, the model is more robust.
The above is only a preferred embodiment of the present invention. It should be pointed out that, for a person of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (6)
1. A rapid collaborative representation face classification method, characterized by comprising the following steps:
(a) acquiring the face images of all persons to be detected, wherein several images of each person are captured under different illumination, expression and occlusion conditions, all face images of one person represent one class, and the face images of all persons are combined into a sample data set;
(b) from the existing sample data set, randomly selecting 15~25% of the sample classes to construct an intra-class variation dictionary D_I, selecting part of the samples in the remaining sample classes as a training sample set A, and at the same time establishing a test sample data set for each class;
(c) projecting the training sample set A onto a PCA subspace to obtain the eigenfaces U of the sample set;
(d) linearly describing a test sample y with D_I and U, obtaining a sparse representation model under dimensionality-reduction constrained optimization, and constructing a two-stage optimization method based on mixed norms to iteratively compute the PCA coefficient z of the reconstructed test sample, wherein the two-stage optimization method iteratively computes initial dictionary coefficients in the dictionary space under the l2 norm and then introduces the l1 norm a second time to accurately optimize the coefficients;
(e) projecting z and the training samples in the PCA subspace onto an LDA subspace to obtain the LDA coefficient of the reconstructed test sample and the LDA coefficients of the training samples, respectively;
(f) computing the LDA coefficient center, center_k, of each class k of training samples;
(g) computing the reconstruction error between the LDA coefficient of the reconstructed test sample and each center_k, and selecting the class with the smallest reconstruction error D_k as the final candidate class of the face test sample.
2. The rapid collaborative representation face classification method according to claim 1, characterized in that the step (b) comprises the following steps:
(b1) assuming that P classes containing m samples in total are selected to construct the intra-class variation dictionary, denoting by D_i the variation set of the i-th class, wherein each column of D_i represents one variation, the P classes jointly form the intra-class variation dictionary D_I = [D_1, …, D_P];
(b2) to construct the intra-class variation dictionary, the natural image of each class is subtracted from the other images of the same class, that is, each column of D_i is one of the remaining images of the i-th class minus the natural image c_i of the i-th class sample set, wherein Ã_i denotes the sample set of the i-th class after removing the natural image.
3. The rapid collaborative representation face classification method according to claim 1, characterized in that the step (d) comprises the following steps:
(d1) for a test sample y ∈ R^d, linearly describing y by the eigenfaces of the training sample set and the intra-class variation dictionary as y = Uz + D_I β + t, wherein t ∈ R^d is an error term that cannot be represented by the variation dictionary and β is the variation dictionary coefficient vector;
(d2) treating the equation y = Uz + D_I β + t as a typical least-squares problem to obtain the variation dictionary coefficient β = (D_I^T D_I + λI)^{-1} D_I^T (y − Uz), wherein λ is a positive perturbation term and I is the identity matrix;
(d3) according to the linear description of the test sample, solving for the PCA coefficient of the reconstructed test sample as z = U^T (y − D_I β);
(d4) repeating steps (d2) and (d3), and terminating the iterative computation when the number of least-squares iterations reaches a prescribed threshold;
(d5) constructing an l1-norm optimization model again and, by introducing the l1 constraint a second time, obtaining a more accurate solution β̂ = argmin_β ||β||_1 subject to ||y − Uz − D_I β||_2 ≤ ε, wherein ε is the reconstruction error bound and ε > 0, whereby the PCA coefficient of the reconstructed test sample is updated to z = U^T (y − D_I β̂).
4. The rapid collaborative representation face classification method according to claim 1, characterized in that, in the step (e), z and the training samples in the PCA subspace are projected onto the LDA subspace to obtain the LDA coefficient W^T z of the reconstructed test sample and the LDA coefficients of the training samples, respectively, wherein W refers to the LDA basis obtained from the training samples in the PCA space.
5. The rapid collaborative representation face classification method according to claim 1, characterized in that, in the step (f), assuming that the training sample set A contains K classes of samples, the LDA coefficient center of the k-th class, center_k, is computed as the mean of the LDA coefficients of the samples of the k-th class, wherein n_k is the number of samples in the k-th class.
6. The rapid collaborative representation face classification method according to claim 1, characterized in that, in the step (g), a reconstruction error D_k is constructed to measure the distance between the LDA coefficient center center_k of the k-th class of training samples and the LDA coefficient of the reconstructed face test sample, and the test sample is assigned to the class with the smallest D_k.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710233148.3A CN107066964B (en) | 2017-04-11 | 2017-04-11 | Rapid collaborative representation face classification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710233148.3A CN107066964B (en) | 2017-04-11 | 2017-04-11 | Rapid collaborative representation face classification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107066964A CN107066964A (en) | 2017-08-18 |
CN107066964B true CN107066964B (en) | 2019-07-02 |
Family
ID=59601916
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710233148.3A Expired - Fee Related CN107066964B (en) | 2017-04-11 | 2017-04-11 | Rapid collaborative representation face classification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107066964B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108664917A (en) * | 2018-05-08 | 2018-10-16 | 佛山市顺德区中山大学研究院 | Face identification method and system based on auxiliary change dictionary and maximum marginal Linear Mapping |
CN109359505A (en) * | 2018-08-23 | 2019-02-19 | 长安大学 | A kind of facial expression feature extracts, identification model building and recognition methods |
CN110070136B (en) * | 2019-04-26 | 2022-09-09 | 安徽工程大学 | Image representation classification method and electronic equipment thereof |
CN111931595B (en) * | 2020-07-17 | 2022-05-24 | 信阳师范学院 | Face image classification method based on generalized representation |
CN113298143B (en) * | 2021-05-24 | 2023-11-10 | 浙江科技学院 | Foundation cloud robust classification method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104794498A (en) * | 2015-05-07 | 2015-07-22 | 西安电子科技大学 | Image classification method based on combination of SRC and MFA |
CN105844261A (en) * | 2016-04-21 | 2016-08-10 | 浙江科技学院 | 3D palmprint sparse representation recognition method based on optimization feature projection matrix |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104794498A (en) * | 2015-05-07 | 2015-07-22 | 西安电子科技大学 | Image classification method based on combination of SRC and MFA |
CN105844261A (en) * | 2016-04-21 | 2016-08-10 | 浙江科技学院 | 3D palmprint sparse representation recognition method based on optimization feature projection matrix |
Non-Patent Citations (2)
Title |
---|
Extended SRC: Undersampled Face Recognition via Intraclass Variant Dictionary; Weihong Deng, et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2012-09-30; Vol. 34, No. 9; pp. 1864-1870 *
Face image recognition based on an improved PCA and LDA fusion algorithm; 伊力哈木·亚尔买买提; Computer Simulation; 2013-01-31; Vol. 30, No. 1; pp. 415-419 *
Also Published As
Publication number | Publication date |
---|---|
CN107066964A (en) | 2017-08-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20220524 Address after: Room i-19-04, dagongfang shared International Innovation Center, building 9, Xuelang Town, No. 99, Qingshu Road, Wuxi Economic Development Zone, Jiangsu 214131 Patentee after: Tufei (Wuxi) Intelligent Technology Co.,Ltd. Address before: 212003, No. 2, Mengxi Road, Zhenjiang, Jiangsu Patentee before: Song Jiaying |
|
TR01 | Transfer of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20190702 |
|
CF01 | Termination of patent right due to non-payment of annual fee |