CN104063715B - A kind of face classification method based on the nearest feature line - Google Patents
- Publication number
- CN104063715B (application CN201410307765.XA)
- Authority
- CN
- China
- Prior art keywords
- equal
- training
- characteristic
- sample
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a face classification method based on nearest feature lines. Taking nearest-neighbor feature theory as its foundation, it defines a new weighting index, proposes a decision criterion based on that index, and simplifies the improved feature line method, thereby constructing a face classifier that is suited to various changes of illumination and pose and that offers lower computational complexity, shorter recognition time, and better robustness than other classifiers. The classifier first extracts the features of the training-library images by principal component analysis, builds the training-library matrix, extracts the features of each sample image, and builds the test-sample vector. It then calculates weight coefficients, formulates decision rules according to the weight coefficients, and constructs the simplified nearest feature line method. Test results show that, under identical hardware environments and a variety of conditions, the classifier has lower computational complexity and better robustness than other classifiers.
Description
Technical Field
The invention relates to a face classification method based on nearest-neighbor feature lines, in particular to a method for automatically classifying and discriminating faces using computer technology, digital image processing, pattern recognition, and related techniques. It belongs to the technology of face feature extraction and recognition within the field of biometric recognition.
Background
1. Face recognition technology
Face recognition has become an important research direction in biometric identification; its key technologies are feature-vector extraction and classification. Researchers have proposed a large number of face recognition methods. For feature-vector extraction, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are popular; PCA is an unsupervised algorithm that obtains the principal components by solving for the eigenvalues of the covariance matrix of multivariate data. For classification, the K-Nearest Neighbor (KNN) method, the nearest-neighbor subspace method, the Support Vector Machine (SVM), and the Sparse Representation-based Classifier (SRC) are popular.
2. Principal component analysis
Principal component analysis, also known as the KL transform, computes a generator matrix Sigma for the transform. Sigma may be taken as the total scatter matrix S_t of the training samples, the between-class scatter matrix S_b, or similar; either scatter matrix is generated from the training set.
The total scatter matrix can be expressed as
S_t = sum_{i=1}^{n} (x_i - mu)(x_i - mu)^T,
where mu is the mean of all n training samples. If the total scatter matrix S_t is taken as the generator matrix and we write X = [x_1 - mu, x_2 - mu, ..., x_n - mu], then Sigma can be written as:
Sigma = XX^T in R^(m x m)
If the between-class scatter matrix S_b is taken as the generator matrix of the KL transform, it is:
S_b = sum_{i=1}^{c} P(w_i)(mu_i - mu)(mu_i - mu)^T,
where c is the number of pattern classes in the training sample set and mu_i is the mean vector of the samples of class i. Writing X = [sqrt(P(w_1))(mu_1 - mu), ..., sqrt(P(w_c))(mu_c - mu)], the generator matrix is again:
Sigma = XX^T in R^(m x m)
Finally, the eigenvalues and eigenvectors of the generator matrix are calculated to construct a subspace; the training images and test images are projected into this eigenspace, each image corresponding to a point in the subspace, where classification can proceed by the theory of pattern recognition.
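The eigen-subspace construction described above can be sketched as follows, using the total scatter matrix as the generator matrix; the function names are illustrative, not taken from the patent.

```python
import numpy as np

def pca_subspace(samples, num_components):
    """Build a PCA feature subspace from training samples.

    samples: (m, n) array, one m-dimensional sample per column.
    Returns (basis, mean): basis is (m, num_components) with orthonormal
    columns (eigenvectors of the generator matrix Sigma = XX^T), mean is (m,).
    """
    mean = samples.mean(axis=1)
    X = samples - mean[:, None]          # centered data, columns x_i - mu
    sigma = X @ X.T                      # generator matrix Sigma = XX^T (m x m)
    eigvals, eigvecs = np.linalg.eigh(sigma)
    order = np.argsort(eigvals)[::-1]    # largest eigenvalues first
    return eigvecs[:, order[:num_components]], mean

def project(basis, mean, sample):
    """Coordinates of a sample in the feature subspace."""
    return basis.T @ (sample - mean)
```

Both training and test images are passed through `project` so that classification happens entirely in the low-dimensional subspace.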
3. Nearest neighbor characteristic line classifier
Suppose there are L pattern classes, and class k has N_k samples x_1^k, x_2^k, ..., x_{N_k}^k. For a test sample y and any two samples x_i^k, x_j^k of class k, the feature line L(x_i^k, x_j^k) is the straight line passing through x_i^k and x_j^k, and the distance from y to it is defined as d(y, L(x_i^k, x_j^k)) = ||y - p||, where p is the foot of the perpendicular from y onto the line: p = x_i^k + t(x_j^k - x_i^k), with t = (y - x_i^k) . (x_j^k - x_i^k) / ||x_j^k - x_i^k||^2. By calculating the distance from y to every feature line, the classification result is obtained: if the minimum distance is attained by a feature line of class c, then y belongs to class c.
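The distance definition above can be made concrete in a few lines. This is a sketch of the classical nearest-feature-line rule with illustrative names; the brute-force loop over all sample pairs is exactly the cost that the patent's simplified method later reduces.

```python
import numpy as np

def feature_line_distance(y, xi, xj):
    """Distance from test point y to the feature line through xi and xj.

    The projection (foot of perpendicular) of y onto the line is
    p = xi + t * (xj - xi) with t = <y - xi, xj - xi> / ||xj - xi||^2.
    """
    d = xj - xi
    t = np.dot(y - xi, d) / np.dot(d, d)
    p = xi + t * d                       # projection point on the line
    return np.linalg.norm(y - p)

def nfl_classify(y, classes):
    """classes: dict mapping class label -> list of sample vectors.
    Returns the label whose feature line lies nearest to y."""
    best_label, best_dist = None, np.inf
    for label, samples in classes.items():
        for a in range(len(samples)):
            for b in range(a + 1, len(samples)):
                dist = feature_line_distance(y, samples[a], samples[b])
                if dist < best_dist:
                    best_label, best_dist = label, dist
    return best_label
```

With N_k samples per class, each class contributes N_k(N_k - 1)/2 feature lines, which is why pruning the candidate classes first pays off.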
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects in the prior art, the invention provides a face recognition method based on nearest neighbor characteristic lines, which has higher recognition rate and better robustness under the condition of maintaining the same computational complexity as the existing classifier.
The technical scheme is as follows: in order to achieve the purpose, the invention adopts the technical scheme that:
a face distinguishing method based on nearest neighbor characteristic lines comprises the following steps:
(1) establishing a training library: extracting the feature values of the samples by the PCA method, using the extracted feature values as training data to obtain the basis vectors of a feature subspace, and projecting each sample onto the feature subspace according to the basis vectors to obtain its coordinates in the subspace; establishing the training library matrix A = [A_1, A_2, ..., A_k] in R^(m x n), where m is the dimension of each sample after PCA sampling, n is the total number of samples in the training library, k is the total number of sample classes in the training library, and A_i is the set of training pictures of class i;
(2) projecting the picture to be classified onto the feature subspace to obtain its coordinates x in the feature subspace;
(3) calculating the weight coefficients w_j and performing a preliminary judgment, comprising the following steps:
(31) defining the error function e(w) = ||x - sum_{j=1}^{n} w_j a_j||^2, subject to sum_{j=1}^{n} w_j = 1,
wherein a_j is the j-th column of A, 1 <= j <= n, and j is a natural number;
(32) with the local covariance matrix C_{lm} = (x - a_l)^T (x - a_m), obtaining the weight coefficients by calculation as w_j = sum_l C^{-1}_{jl} / sum_{l,m} C^{-1}_{lm},
wherein 1 <= j <= n and j is a natural number; 1 <= l <= n, 1 <= m <= n, and l and m are natural numbers;
(33) from the w_j, calculating delta_i(x), the vector of weight coefficients corresponding to the i-th sample class in the training library, i = 1 to k;
(34) calculating the weight-vector discrimination index W(x) from the concentration of the ||delta_i(x)|| over the k classes, such that W(x) = 1 when the weight mass falls within a single class and W(x) = 0 when it is spread evenly over all classes;
(35) designing a threshold tau in (0, 1) of the weight-vector discrimination index;
(36) comparing the weight discrimination index W(x) with the threshold tau: if W(x) > tau, directly outputting the class whose delta_i(x) has the largest modulus as the classification result;
(4) if W(x) <= tau, proceeding as follows:
(41) correcting the training library: sorting the ||delta_i(x)|| and re-establishing the training library matrix A' = [A_max1, A_max2, ..., A_maxX] from the X sample classes with the largest modulus;
(42) calculating the simplified feature lines: for any two images a_i, a_j of the same class in the corrected training library, obtaining the feature line L(a_i, a_j) in the feature space as the straight line passing through them;
(43) calculating the distance from the coordinates x of the picture to be classified to each feature line: d(x, L(a_i, a_j)) = ||x - p||, where p = a_i + t(a_j - a_i) and t = (x - a_i) . (a_j - a_i) / ||a_j - a_i||^2;
(44) classifying according to the distances d: assigning x to the class c whose feature line is nearest,
wherein N_k is the number of samples in each class, 1 <= k <= n, and k_c is the number of class-c samples.
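Steps (31) and (32) compute least-squares reconstruction weights from a local covariance matrix. The closed form below follows the locally-linear-embedding solution, which matches the quantities named in those steps; treat it as a sketch of one plausible reading rather than the patent's exact formula. The regularisation term is an added assumption for numerical stability when n exceeds the feature dimension.

```python
import numpy as np

def reconstruction_weights(x, A, reg=1e-3):
    """Weights w (summing to 1) minimising ||x - A @ w||^2, computed from
    the local covariance matrix C_lm = <x - a_l, x - a_m>:
        w_j = sum_l Cinv[j, l] / sum_{l,m} Cinv[l, m]
    A: (m, n) training matrix, one sample per column.
    reg: regularisation strength added to C (assumed, not in the patent).
    """
    diffs = x[:, None] - A               # column j holds x - a_j
    C = diffs.T @ diffs                  # local covariance matrix (n x n)
    C = C + reg * np.trace(C) * np.eye(C.shape[0])  # stabilise inversion
    Cinv = np.linalg.inv(C)
    return Cinv.sum(axis=1) / Cinv.sum()
```

The sum-to-one constraint is satisfied by construction, so the per-class weight vectors delta_i(x) of step (33) can be read off by masking w with each class's column indices.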
Advantageous effects: the face classification method based on nearest feature lines defines a new weight index W(x) on the basis of nearest feature line theory and, from the discrimination criterion on W(x) and a threshold calculation method, constructs a face classifier suited to various changes of illumination and pose. Compared with other classifiers, it has similar computational complexity, a higher recognition rate, and better robustness. Under illumination changes and multi-pose faces, the recognition success rate of the method exceeds 98 percent. The method achieves a high recognition rate for various kinds of feature data (such as PCA, LDA, and random sampling) and maintains it even when the sample feature dimension is small; this property lowers the sampling requirement and reduces the data storage space, thereby reducing the cost of face recognition and suiting the method to resource-limited software and hardware environments (battery power, small storage capacity, and the like). With noise interference or occlusion below 50 percent, the method still attains a good recognition rate, with a better recognition success rate and robustness than classical algorithms such as NFL and KNN; it adapts well and remains effective for face recognition in severe environments, while greatly reducing computational complexity.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 shows the classification effect when W(x) = 0.47;
fig. 3 shows the classification effect when W(x) = 0.98;
FIG. 4 is an illustration of the impact of weight discrimination index on recognition success rate;
FIG. 5 is a comparison of the present algorithm with the NFL algorithm in the case of superimposed salt and pepper noise;
FIG. 6 is a comparison of the present algorithm with the NNL algorithm in the case of superimposed salt and pepper noise;
FIG. 7 is a comparison of the present algorithm with the KNN algorithm in the case of superimposed salt and pepper noise;
FIG. 8 is a comparison of the present algorithm with the NFL algorithm in the case of overlay block occlusion;
FIG. 9 is a comparison of the present algorithm with the NNL algorithm for the case of overlay patch occlusion;
fig. 10 is a comparison of the present algorithm with the KNN algorithm in the case of overlay patch occlusion.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
As shown in fig. 1, a face classification method based on nearest feature lines includes the following steps:
(1) Establish the training library: extract the feature values of the samples by the PCA method, use the extracted feature values as training data to obtain the basis vectors of a feature subspace, and project each sample onto the feature subspace according to the basis vectors to obtain its coordinates in the subspace. Establish the training library matrix A = [A_1, A_2, ..., A_k] in R^(m x n), where m is the dimension of each sample after PCA sampling, n is the total number of samples in the training library, k is the total number of sample classes in the training library, and A_i is the set of training pictures of class i.
(2) Project the picture to be classified onto the feature subspace to obtain its coordinates x in the feature subspace.
(3) Calculate the weight coefficients w_j and make a preliminary judgment, as follows:
(31) Define the error function e(w) = ||x - sum_{j=1}^{n} w_j a_j||^2, subject to sum_{j=1}^{n} w_j = 1, where a_j is the j-th column of A, 1 <= j <= n, and j is a natural number.
(32) With the local covariance matrix C_{lm} = (x - a_l)^T (x - a_m), the weight coefficients are obtained by calculation as w_j = sum_l C^{-1}_{jl} / sum_{l,m} C^{-1}_{lm}, where 1 <= j <= n, 1 <= l <= n, 1 <= m <= n, and j, l, and m are natural numbers.
(33) Calculate the weight coefficient vector delta_i(x) corresponding to each class, that is, the coefficients of w_j belonging to the i-th sample class in the training library, i = 1, ..., k.
(34) Calculate the weight discrimination index W(x) from the concentration of the ||delta_i(x)|| over the k classes.
(35) Make a judgment according to the magnitude of the calculated weight discrimination index W(x):
if W(x) = 1, then max_i ||delta_i(x)||_2 / ||x||_2 = 1, and the weight coefficients are essentially distributed within a single class;
if W(x) = 0, then the ||delta_i(x)|| are all equal, and the weight coefficients are distributed almost evenly over every class;
therefore a threshold tau in (0, 1) of the weight discrimination index can be designed to represent the distribution of the weight coefficients.
(36) Compare the weight discrimination index W(x) with the threshold tau:
if W(x) > tau, the weight coefficients are concentrated and the classification is reliable; the sample class with the smallest residual can be output directly as the classification result;
if W(x) <= tau, the weight coefficients are poorly concentrated and the classification is unreliable; the range of the training library must be reduced and classification performed again.
(4) If W(x) <= tau, proceed as follows:
(41) Correct the training library: sort the ||delta_i(x)|| and keep the X sample classes with the largest modulus, re-establishing the training library matrix A' = [A_max1, A_max2, ..., A_maxX].
(42) Calculate the simplified feature lines: for any two images a_i, a_j of the same class in the corrected training library, the feature line L(a_i, a_j) in the feature space is the straight line passing through them.
(43) Calculate the distance from the coordinates x of the picture to be classified to each feature line: d(x, L(a_i, a_j)) = ||x - p||, where p = a_i + t(a_j - a_i) and t = (x - a_i) . (a_j - a_i) / ||a_j - a_i||^2.
(44) Classify according to the distances d: assign x to the class c whose feature line is nearest, where N_k is the number of samples in each class, 1 <= k <= n, and k_c is the number of class-c samples.
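The behaviour described in step (35), 1 for weights concentrated in one class, 0 for weights spread evenly, matches the sparsity concentration index used in sparse-representation classification. The sketch below uses that formula as a plausible reading; the patent's exact expression for W(x) is not reproduced here.

```python
import numpy as np

def discrimination_index(w, class_labels):
    """Weight-vector discrimination index in the spirit of steps (33)-(34).

    w: weight coefficients, one per training sample.
    class_labels: integer class label of each training sample.
    Returns a value in [0, 1]: 1 when all weight mass falls in a single
    class, 0 when it is spread evenly over the k classes. (Assumed form:
    the sparsity concentration index, not the patent's exact formula.)
    """
    labels = np.unique(class_labels)
    k = len(labels)
    # per-class weight mass ||delta_i(x)||, here taken as an L1 norm
    mass = np.array([np.abs(w[class_labels == c]).sum() for c in labels])
    return (k * mass.max() / mass.sum() - 1.0) / (k - 1.0)
```

Comparing this index against the threshold tau decides whether to accept the direct classification or to fall back to the simplified feature-line search of step (4).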
The following describes some of the details of the present invention in its implementation.
1. The selected test databases are the ORL, UMIST, and Extended YaleB face databases; all three contain face images that vary mainly in orientation and angle.
2. The PCA method is used to extract features; experiments show that PCA achieves a higher success rate than random sampling.
1) Read in the face library and normalize it; select a certain number of images of each person in the library to form the training set, with the remaining images forming the test set. Assuming each normalized image is N x M, its pixels are concatenated column by column into an N*M-dimensional vector, which can be regarded as a point in N*M-dimensional space; through the KL transform, this image can be described by a low-dimensional subspace.
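Concatenating pixels column by column, as just described, is a one-liner with NumPy's Fortran-order reshape; the helper name is illustrative.

```python
import numpy as np

def image_to_vector(img):
    """Flatten an N x M image column by column into an N*M-dimensional vector."""
    return np.asarray(img, dtype=float).reshape(-1, order="F")
```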
2) Let the N face images in the face image library be represented as vectors X_1, X_2, ..., X_N. Calculate the average face Psi = (1/N) sum_{i=1}^{N} X_i, and from it obtain the mean difference of each image, X_i' = X_i - Psi.
3) Compute the covariance matrix C = (1/N) sum_{i=1}^{N} X_i' (X_i')^T and calculate its eigenvalues lambda_k and corresponding eigenvectors mu_k. Direct calculation is expensive; to reduce the amount of computation, form the mean differences into a matrix X' = [X_1', X_2', ..., X_N'], so that the covariance matrix can be written C = (1/N) X'(X')^T. By linear algebra, the problem of finding the eigenvalues lambda_j and eigenvectors Phi_j of X'(X')^T is converted into calculating the eigenvalues lambda_j and eigenvectors Phi_j' of the much smaller matrix (X')^T X'; once Phi_j' is obtained, Phi_j can be recovered as Phi_j = X' Phi_j' / sqrt(lambda_j). The eigenvalues lambda_k of the matrix C then follow by the SVD theorem.
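The small-matrix trick in step 3) can be sketched directly: for m-dimensional images with N much smaller than m, the eigenvectors of the m x m matrix X'(X')^T are lifted from those of the N x N matrix (X')^T X'. The function name is illustrative.

```python
import numpy as np

def eigenfaces(Xp):
    """Eigenvectors of Xp @ Xp.T via the small (X')^T X' matrix.

    Xp: (m, N) matrix of mean-subtracted images, one per column.
    If ((X')^T X') phi' = lam * phi', then Phi = X' phi' / sqrt(lam)
    is a unit eigenvector of X'(X')^T with the same eigenvalue.
    """
    small = Xp.T @ Xp                       # (N, N) instead of (m, m)
    lam, phi = np.linalg.eigh(small)
    keep = lam > 1e-10                      # discard numerically zero modes
    lam, phi = lam[keep], phi[:, keep]
    Phi = (Xp @ phi) / np.sqrt(lam)         # lift to m-dimensional space
    return lam[::-1], Phi[:, ::-1]          # largest eigenvalues first
```

This reduces the eigenproblem from m x m (e.g. 10304 x 10304 for 92 x 112 images) to N x N, where N is only the number of training images.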
4) Project the training images into the feature subspace: project the mean differences of all N face images in the library into this space to obtain their projection vectors Y_1, Y_2, ..., Y_N:
(Y_i)^T = [y_1i, y_2i, ..., y_Mi], i = 1, 2, ..., N
y_ji = (u_j)^T X_i', j = 1, 2, ..., M
where the u_j are the set of feature vectors and X_i' is the mean difference of the i-th training image.
Form the training matrix A = [Y_1, Y_2, ..., Y_N], with the image vectors arranged in class order.
3. First classification: project the given test picture into the feature subspace to obtain its feature coordinate vector x.
1) Calculate the weight coefficients and make a preliminary judgment. Define the error function e(w) = ||x - sum_{j=1}^{n} w_j a_j||^2, subject to sum_{j=1}^{n} w_j = 1, where a_j is the j-th column of A, there are n training library samples, 1 <= j <= n, and j is a natural number.
2) With the local covariance matrix C_{lm} = (x - a_l)^T (x - a_m), the weight coefficients are obtained by calculation as w_j = sum_l C^{-1}_{jl} / sum_{l,m} C^{-1}_{lm}, where 1 <= j <= n, 1 <= l <= n, 1 <= m <= n, and j, l, and m are natural numbers.
3) Calculate the weight coefficient vector delta_i(x) corresponding to each class, that is, the coefficients of w belonging to each sample class in the training library, and calculate the weight discrimination index W(x).
4. Design a threshold tau in (0, 1) of the weight discrimination index to represent the distribution of the weight coefficients.
5. Decide as follows, where k is the total number of sample classes in the training library:
if W(x) > tau, the weight coefficient distribution is concentrated; directly output the class whose delta_i(x) has the largest modulus as the classification result;
if W(x) <= tau, the weight coefficients are poorly concentrated; the range of the training library must be reduced and the picture classified again, as follows.
6. Appropriate selection of tau can effectively improve the recognition success rate, as shown in fig. 4:
1) Sort the ||delta_i(x)|| and rebuild a new, smaller training library matrix A' = [A_max1, A_max2, A_max3] from the X classes with the largest modulus; in most cases the correct class is contained in the corrected, smaller training library, which narrows the recognition range.
2) Calculate the simplified feature lines: for any two images a_i, a_j of the same class in the corrected training library, the feature line L(a_i, a_j) in the feature space is the straight line passing through them.
3) Calculate the distance from the coordinates x of the picture to be classified to each feature line: d(x, L(a_i, a_j)) = ||x - p||, where p = a_i + t(a_j - a_i) and t = (x - a_i) . (a_j - a_i) / ||a_j - a_i||^2.
4) Classify according to the distances d: assign x to the class c whose feature line is nearest, where N_k is the number of samples in each class, 1 <= k <= n, and k_c is the number of class-c samples.
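The library-correction step above, keeping only the X classes with the largest weight mass before the feature-line search, can be sketched as follows. The helper names and the choice of an L1 mass are illustrative assumptions.

```python
import numpy as np

def corrected_library(A, class_labels, w, X=3):
    """Keep the X classes with the largest weight mass ||delta_i(x)||.

    A: (m, n) training matrix; class_labels: (n,) integer labels;
    w: (n,) weight coefficients from the first classification.
    Returns the reduced matrix A' and the matching labels, over which
    the nearest-feature-line search is then run.
    """
    labels = np.unique(class_labels)
    mass = np.array([np.abs(w[class_labels == c]).sum() for c in labels])
    keep = labels[np.argsort(mass)[::-1][:X]]   # X classes, largest mass first
    mask = np.isin(class_labels, keep)
    return A[:, mask], class_labels[mask]
```

Restricting the pairwise feature-line enumeration to these X classes is what keeps the simplified method's runtime close to KNN rather than full NFL.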
The experimental results of the present invention are explained in detail below:
1. the database adopted by the experiment of the invention is an international ORL, UMIST and Extended YaleB face database. The ORL library contains a total of 40 volunteers, each containing 10 pictures with 92 × 112 pixels, for a total of 400 pictures. We selected 5 images of each person as the training library and 5 more as the test images. For the UMIST library, a total of 20 volunteers each selected 18 images for use, with pixels 92 × 112, of which 3 were training images and the rest were test images. For the Extended YaleB library, a total of 38 volunteers were included, and 58 images were selected for each person, with 168 × 192 pixels.
2. Experiment one: fig. 4 shows the effect of different values of W(x) on test success rate and time; the experiment is performed on the UMIST library. The abscissa is the value of W(x), from 0 to 1; the left ordinate is the recognition success rate and the right ordinate is the time. The experiments show that once W(x) exceeds 0.5 the algorithm achieves a higher success rate than the NFL algorithm, and that the test time increases as W(x) increases.
3. Experiment two: figs. 5-10 show experiments in which the test images are corrupted. The experiments are carried out on the ORL face library, with randomface, eigenface, and fisherface selected as the feature-extraction modes. Figs. 5-7 show experiments with superimposed salt-and-pepper noise, and figs. 8-10 show experiments with superimposed random block occlusion. The abscissa is the percentage of the image corrupted, and the ordinate is the recognition success rate. The experiments show that the algorithm recognizes better than the NFL, KNN, and NNL algorithms under superimposed noise; in particular, when eigenface is selected for feature extraction, the algorithm improves the recognition success rate by nearly 20 percent at a 50 percent noise ratio.
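The salt-and-pepper corruption used in experiment two can be simulated with a small helper; this is an illustrative sketch of the standard noise model, not code from the patent.

```python
import numpy as np

def add_salt_and_pepper(img, ratio, rng):
    """Corrupt a fraction `ratio` of pixels with salt (255) or pepper (0).

    img: 2-D uint8 image; returns a corrupted copy, leaving img intact.
    """
    out = img.copy()
    n = out.size
    idx = rng.choice(n, size=int(ratio * n), replace=False)  # pixels to hit
    flat = out.reshape(-1)               # view into `out`
    salt = rng.random(idx.size) < 0.5    # half salt, half pepper on average
    flat[idx[salt]] = 255
    flat[idx[~salt]] = 0
    return out
```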
4. Experiment three: Table 1 shows the algorithm-complexity tests, performed on the Extended YaleB library with three feature-extraction methods: randomface, eigenface, and fisherface. The algorithm has complexity similar to classical classification algorithms such as KNN and NNL, but its recognition success rate and robustness are better; compared with other algorithms such as NFL, NFP, and SVM, its test time is markedly lower.
TABLE 1. Computation time test (s)

| | Randomfaces | Eigenfaces | Fisherfaces |
|---|---|---|---|
| SFL | 110.88 | 143.42 | 188.85 |
| KNN | 106.24 | 112.81 | 135.06 |
| NNL | 107.95 | 133.76 | 175.51 |
| NFL | 598.23 | 639.03 | 789.35 |
| NFP | 7890.06 | 9180.31 | 9862.89 |
| SVM | 1824.08 | 483.81 | 565.25 |
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.
Claims (1)
1. A face classification method based on nearest feature lines, characterized in that the method comprises the following steps:
(1) establishing a training library: extracting the feature values of the samples by the PCA method, using the extracted feature values as training data to obtain the basis vectors of a feature subspace, and projecting each sample onto the feature subspace according to the basis vectors to obtain its coordinates in the subspace; establishing the training library matrix A = [A_1, A_2, ..., A_k] in R^(m x n), where m is the dimension of each sample after PCA sampling, n is the total number of samples in the training library, k is the total number of sample classes in the training library, and A_i is the set of i-th class training pictures;
(2) projecting the picture to be classified onto the feature subspace to obtain its coordinates x in the feature subspace;
(3) calculating the weight coefficients w_j and performing a preliminary judgment, comprising the following steps:
(31) defining the error function e(w) = ||x - sum_{j=1}^{n} w_j a_j||^2, subject to sum_{j=1}^{n} w_j = 1,
wherein a_j is the j-th column of A, 1 <= j <= n, and j is a natural number;
(32) with the local covariance matrix C_{lm} = (x - a_l)^T (x - a_m), obtaining the weight coefficients by calculation as w_z = sum_l C^{-1}_{zl} / sum_{l,m} C^{-1}_{lm},
wherein 1 <= z <= n and z is a natural number; 1 <= l <= n, 1 <= m <= n, and l and m are natural numbers;
(33) from the w_j, calculating delta_i(x), the vector of weight coefficients corresponding to the i-th sample class in the training library, i = 1 to k;
(34) calculating the weight-vector discrimination index W(x) from the concentration of the ||delta_i(x)|| over the k classes;
(35) designing a threshold tau in (0, 1) of the weight-vector discrimination index;
(36) comparing the weight discrimination index W(x) with the threshold tau: if W(x) > tau, directly outputting the class whose delta_i(x) has the largest modulus as the classification result;
(4) if W(x) <= tau, proceeding as follows:
(41) correcting the training library: sorting the ||delta_i(x)|| and re-establishing the training library matrix A' = [A_max1, A_max2, ..., A_maxX] from the X sample classes with the largest modulus;
(42) calculating the simplified feature lines: for any two images a_i, a_j of the same class in the corrected training library, obtaining the feature line L(a_i, a_j) in the feature space as the straight line passing through them;
(43) calculating the distance from the coordinates x of the picture to be classified to each feature line: d(x, L(a_i, a_j)) = ||x - p||, where p = a_i + t(a_j - a_i) and t = (x - a_i) . (a_j - a_i) / ||a_j - a_i||^2;
(44) classifying according to the distances d: assigning x to the class c whose feature line is nearest,
wherein k_c is the number of class-c samples.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410307765.XA CN104063715B (en) | 2014-06-30 | 2014-06-30 | A kind of face classification method based on the nearest feature line |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410307765.XA CN104063715B (en) | 2014-06-30 | 2014-06-30 | A kind of face classification method based on the nearest feature line |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104063715A CN104063715A (en) | 2014-09-24 |
CN104063715B true CN104063715B (en) | 2017-05-31 |
Family
ID=51551417
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410307765.XA Expired - Fee Related CN104063715B (en) | 2014-06-30 | 2014-06-30 | A kind of face classification method based on the nearest feature line |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104063715B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109800723A (en) * | 2019-01-25 | 2019-05-24 | 山东超越数控电子股份有限公司 | A kind of recognition of face and the computer booting system and method for staying card is logged in violation of rules and regulations |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102163279A (en) * | 2011-04-08 | 2011-08-24 | 南京邮电大学 | Color human face identification method based on nearest feature classifier |
CN102402784A (en) * | 2011-12-16 | 2012-04-04 | 武汉大学 | Human face image super-resolution method based on nearest feature line manifold learning |
CN103345621A (en) * | 2013-07-09 | 2013-10-09 | 东南大学 | Face classification method based on sparse concentration index |
- 2014-06-30: CN CN201410307765.XA patent/CN104063715B/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102163279A (en) * | 2011-04-08 | 2011-08-24 | 南京邮电大学 | Color human face identification method based on nearest feature classifier |
CN102402784A (en) * | 2011-12-16 | 2012-04-04 | 武汉大学 | Human face image super-resolution method based on nearest feature line manifold learning |
CN103345621A (en) * | 2013-07-09 | 2013-10-09 | 东南大学 | Face classification method based on sparse concentration index |
Non-Patent Citations (2)
Title |
---|
"Face Recognition Using the Nearest Feature Line Method";Stan Z.Li 等;《IEEE TRANSACTIONS ON NEURAL NETWORKS》;19990331;第10卷(第2期);439-443 * |
"A supervised locally linear embedding algorithm for face recognition and its improvement"; Shen Jie et al.; Computer Applications and Software; 2013-04-15; vol. 30, no. 4; 77-80 *
Also Published As
Publication number | Publication date |
---|---|
CN104063715A (en) | 2014-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105956582B (en) | A kind of face identification system based on three-dimensional data | |
Harada et al. | Discriminative spatial pyramid | |
Zuo et al. | Bidirectional PCA with assembled matrix distance metric for image recognition | |
CN104123560B (en) | Fuzzy facial image verification method based on phase code feature and more metric learnings | |
CN107862267A (en) | Face recognition features' extraction algorithm based on full symmetric local weber description | |
CN101872424A (en) | Facial expression recognizing method based on Gabor transform optimal channel blur fusion | |
CN106096517A (en) | A kind of face identification method based on low-rank matrix Yu eigenface | |
CN106980848A (en) | Facial expression recognizing method based on warp wavelet and sparse study | |
CN109241813B (en) | Non-constrained face image dimension reduction method based on discrimination sparse preservation embedding | |
Beksi et al. | Object classification using dictionary learning and rgb-d covariance descriptors | |
CN102915435A (en) | Multi-pose face recognition method based on face energy diagram | |
Lee et al. | Face image retrieval using sparse representation classifier with gabor-lbp histogram | |
CN108898153B (en) | Feature selection method based on L21 paradigm distance measurement | |
Tao et al. | Illumination-insensitive image representation via synergistic weighted center-surround receptive field model and weber law | |
CN110287973B (en) | Image feature extraction method based on low-rank robust linear discriminant analysis | |
CN109919056B (en) | Face recognition method based on discriminant principal component analysis | |
CN104063715B (en) | A kind of face classification method based on the nearest feature line | |
Bao et al. | A supervised neighborhood preserving embedding for face recognition | |
Zhao et al. | 3D object recognition and pose estimation using kernel PCA | |
Goncharova et al. | Greedy algorithms of feature selection for multiclass image classification | |
Al-Wajih et al. | A new application for gabor filters in face-based gender classification. | |
CN105550677B (en) | A kind of 3D palmprint authentications method | |
CN110781802B (en) | Face image recognition method based on information theory manifold | |
CN107239749A (en) | A kind of face spatial pattern recognition method | |
Gao et al. | Face Recognition Algorithm Based on Optimal Weighted Multi-Directional Log-Gabor Features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20170531 |