CN105787443A - Face identification method based on embedded platform - Google Patents


Info

Publication number
CN105787443A
CN105787443A CN201610094964.6A CN201610094964A CN105787443A CN 105787443 A CN105787443 A CN 105787443A CN 201610094964 A CN201610094964 A CN 201610094964A CN 105787443 A CN105787443 A CN 105787443A
Authority
CN
China
Prior art keywords
pca
feature
identification
lda
lbp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610094964.6A
Other languages
Chinese (zh)
Inventor
杨新武
马壮
袁顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201610094964.6A priority Critical patent/CN105787443A/en
Publication of CN105787443A publication Critical patent/CN105787443A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of pattern recognition and specifically relates to a face identification method based on an embedded platform. The method comprises the following steps: step one, image preprocessing; step two, completing PCA feature extraction, LDA feature extraction and LBP feature extraction in parallel; step three, performing ensemble classification and identification, completing PCA feature identification, LDA feature identification and LBP feature identification in parallel; and step four, combining the vote outputs, where the identification results are m(pca), m(lda) and m(lbp): if all three are identical, the result is output directly; if two results agree and one differs, the result of the two is output; and if all three differ, identification is refused. With this method the global and local features of the face image are combined, which greatly reduces the misidentification rate of the face identification method. At the same time, since embedded platforms have become multi-core, parallelizing the algorithm speeds up its execution and balances the load across CPU cores.

Description

A face identification method based on an embedded platform
Technical field
The invention belongs to the field of pattern recognition and specifically relates to a face identification method based on an embedded platform. It uses computer technology, digital image processing and pattern recognition to analyse and identify faces automatically, and concerns the extraction and recognition of facial features in the field of biometric identification.
Background art
Biometric identification refers to authentication techniques that use physiological or behavioural characteristics inherent to a person to verify that person's identity. Compared with traditional identity verification, biometric identification fundamentally prevents forgery and theft, offers higher reliability and security, and is increasingly used for authentication in security systems.
Compared with other biometric techniques, face recognition places no demanding requirements on the image capture device and is more readily accepted by users. The goal of face recognition research is to find a fast and effective classification method that can quickly determine whether an input image contains a face and, if so, classify it quickly. Driven by the urgent needs of society, more and more researchers have devoted themselves to face recognition, aiming at a fast, effective face identification method that is applicable in practice.
Among the face identification systems in practical use today, PC-based systems account for the majority. However, with the development of electronic technology and changing social demands, hardware processing platforms are evolving towards miniaturization, low power consumption and portability, while PC platforms suffer from large size, high power consumption and poor portability, which limits the wide application and popularization of face recognition. As technology advances, embedded platforms are becoming faster, smaller, cheaper and less power-hungry, providing sufficient hardware support for developing portable face identification systems. It has therefore become feasible to develop embedded face identification systems with broader applicability.
Many face recognition algorithms achieve good results on PC platforms, but given the performance gap between embedded and PC platforms, many of them cannot be used on embedded hardware. The algorithms currently used on embedded platforms are mostly simple feature extraction and recognition algorithms with relatively low recognition rates and, more importantly, high misidentification rates. In embedded application scenarios the misidentification rate is usually an important criterion. In an access-control scenario, for example, a rejected person can still enter by swiping a card, but a misidentification may open the door to someone without authorization and cause damage. The misidentification rate of the algorithm should therefore be reduced while the recognition rate is improved as much as possible.
The most commonly used feature extraction methods in face recognition are PCA, LDA and LBP. PCA is a basic method for describing multidimensional data; it was applied to face recognition some twenty years ago and developed into the eigenface family of recognition algorithms. Its main purpose is to find a set of the most representative principal components, reconstruct the library samples from their linear combination, and minimize the mean square error between the reconstructed samples and the originals, thereby realizing face recognition. LDA is a supervised linear discriminant method that exploits the class labels of the different categories; it seeks features such that samples of the same class lie as close together as possible while samples of different classes are separated as far as possible, with the within-class and between-class spread measured by scatter matrices. LBP is a statistical texture descriptor computed on grayscale values. It has low computational complexity, is invariant to translation and rotation, and is robust to illumination changes. Since texture is an essential attribute of an image, the method describes image texture well.
The PCA and LDA feature extraction methods focus more on the global features of the image, whereas LBP focuses more on its local features. On the embedded platform we use the three feature extraction methods PCA, LDA and LBP, carry out feature extraction and face recognition independently with each, and combine their results by an ensemble vote to complete the recognition process. In this way the global and local features of the face image are combined, which greatly reduces the misidentification rate of the face identification method. Moreover, since embedded platforms are moving to multi-core processors, the algorithm can be parallelized, which speeds up its execution and balances the load across CPU cores.
Summary of the invention
On an embedded platform, feature extraction and classification methods that give better results but have higher complexity are not feasible, while using only simple feature extraction and classification methods is likely to lower the recognition rate and raise the misidentification rate. Based on the idea of ensemble learning, we compute the classification results obtained after feature extraction with each of the three methods PCA, LDA and LBP, and derive the final classification result by a combined vote. The face identification algorithm proceeds as follows:
Step 1: image preprocessing
(1) Normalize the color and size of the original face image.
(2) Apply histogram equalization to the face image.
Step 2: feature extraction
Create three threads and complete (1), (2) and (3) in parallel.
(1) Use the PCA method to compute the eigenvector matrix and obtain the global feature matrix W_pca of the training set; use W_pca to reduce the dimensionality of each sample.
(2) Use the PCA method to reduce the dimensionality of the samples, then apply the LDA method to the reduced samples.
(3) Use the LBP method to compute the LBP features of the image:
1. Divide the face image into 7×7 blocks, obtaining a number of local regions.
2. Apply the LBP transform to each small image region and extract its histogram, obtaining the histogram feature vector of each face image.
Step 3: ensemble classification and identification
Create three threads and complete (1), (2) and (3) in parallel.
(1) Classify the PCA features of the sample with the nearest-neighbour (NN) method and output the class label to m_pca; if the sample is rejected, set m_pca = 0.
(2) Classify the LDA features of the sample with the NN method and output the class label to m_lda; if the sample is rejected, set m_lda = 0.
(3) Classify the LBP features of the sample with the NN method and output the class label to m_lbp; if the sample is rejected, set m_lbp = 0.
(4) Combine the results by voting.
If m_pca, m_lda and m_lbp are all identical, output that result directly. If two results agree and one differs, output the result of the two. If all three differ, output a rejection.
The beneficial effects of the invention are: 1. By integrating several simple feature extraction methods, the invention considers both the global and the local feature information of the image, improves the recognition accuracy of the embedded face identification system and reduces the misidentification rate. 2. Using several simple feature extraction algorithms allows the algorithm to be parallelized; the computation can be spread over a multi-core CPU, balancing the CPU load and improving the running speed.
Brief description of the drawings
Fig. 1 is the flow chart of the algorithm of the invention.
Detailed description of the invention
The detailed issues involved in the technical solution of the invention are explained below:
Step 1: the preprocessing procedure is as follows.
First, the face image obtained after face detection is normalized to a 256-level grayscale image with height × width = 77 × 64 pixels. This normalized size keeps both the image detail and the recognition speed at a good level.
Second, histogram equalization is applied to the image. If the gray levels of an image are distributed fairly evenly, the image tends to have a good visual effect.
Histogram equalization works as follows. Let r and s denote the gray values of the original image and of the transformed image at a point (x, y), let the total number of gray levels be L, and let s = T(r), where T(r) is the transform function. For image enhancement the transform function must satisfy two conditions:
(1) T(r) is a single-valued, monotonically increasing function on 0 ≤ r ≤ L−1.
(2) 0 ≤ T(r) ≤ L−1 for 0 ≤ r ≤ L−1.
The first condition ensures that the gray levels keep their order from low to high after the transform; the second ensures that the dynamic range of the gray values is consistent before and after the transform.
The inverse transform from s back to r is r = T⁻¹(s), 0 ≤ s ≤ L−1, where T⁻¹(s) also satisfies the two conditions above.
The gray level of an image can be regarded as a random variable on the interval [0, L−1], and it can be shown that the cumulative distribution function of the original image is a transform function that satisfies both conditions. Let Q be the total number of pixels in the image, n_k the number of pixels with the k-th gray level, and r_k the k-th gray level; then the probability of gray level r_k occurring in the image is
P(r_k) = n_k / Q
and the equalizing transform function is
s_k = T(r_k) = Σ_{j=0}^{k} P(r_j) = Σ_{j=0}^{k} n_j / Q
Transforming the image with this formula yields the equalized image, which has higher contrast and a better visual effect than the original.
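As an illustration of this preprocessing step, the following Python sketch (not part of the patent; OpenCV and the function name preprocess_face are assumptions) converts an input image to 256-level grayscale, normalizes its size to 64 × 77 pixels and applies histogram equalization:

```python
import cv2
import numpy as np

def preprocess_face(bgr_image: np.ndarray) -> np.ndarray:
    """Grayscale conversion, size normalization and histogram equalization."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)   # 256-level grayscale image
    gray = cv2.resize(gray, (64, 77))                    # dsize is (width, height)
    return cv2.equalizeHist(gray)                        # equalized image
```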
Step 2: feature extraction
Create three threads and complete (1), (2) and (3) in parallel.
(1) Compute the PCA features.
To extract features from face images with the PCA algorithm, the PCA global features must be computed first.
Principal component analysis is a statistical analysis method based on the Karhunen-Loeve transform. Its basic idea is to find the projection that best represents the original data in the minimum-mean-square-error sense, characterizing the original high-dimensional samples with a small amount of information while retaining the principal characteristics of the original data. First the standard eigenvectors corresponding to the eigenvalues of the covariance matrix of the training samples are computed; then the eigenvectors corresponding to the d largest eigenvalues are selected to form the dimensionality-reduction matrix W_pca. The original samples are projected with this matrix to obtain the feature representation of the face image.
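A minimal NumPy sketch of this eigen-decomposition, given only as an illustration (the function names pca_fit and pca_project, and the choice of d, are assumptions, not the patent's implementation):

```python
import numpy as np

def pca_fit(X: np.ndarray, d: int):
    """X: (n_samples, n_pixels) flattened training faces; d: number of components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigen-decomposition of the sample covariance matrix. For large images the
    # equivalent "snapshot" trick on Xc @ Xc.T is normally used to save memory.
    cov = (Xc.T @ Xc) / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    top = np.argsort(eigvals)[::-1][:d]             # indices of the d largest eigenvalues
    W_pca = eigvecs[:, top]                         # (n_pixels, d) projection matrix
    return mean, W_pca

def pca_project(x: np.ndarray, mean: np.ndarray, W_pca: np.ndarray) -> np.ndarray:
    return (x - mean) @ W_pca                       # reduced-dimension PCA feature
```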
(2) Compute the LDA features.
The purpose of LDA is to extract from the high-dimensional feature space the low-dimensional features with the most discriminative power: features that bring all samples of the same class as close together as possible while separating samples of different classes as far as possible, i.e. features that maximize the ratio of between-class scatter to within-class scatter. When LDA is applied to face recognition the within-class scatter matrix is almost always singular and very large, so PCA is first used to reduce the dimensionality of the images and the LDA method is then applied to obtain the LDA features of the image.
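A sketch of this PCA-then-LDA step, using scikit-learn purely as an illustrative implementation (the patent names no library, and n_pca = 60 is an arbitrary assumption):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_pca_lda(X_train: np.ndarray, y_train: np.ndarray, n_pca: int = 60):
    """PCA first (to avoid a singular within-class scatter matrix), then LDA."""
    pca = PCA(n_components=n_pca).fit(X_train)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train), y_train)
    return pca, lda

def lda_feature(x: np.ndarray, pca, lda) -> np.ndarray:
    """LDA feature of a single flattened face image."""
    return lda.transform(pca.transform(x.reshape(1, -1)))[0]
```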
(3) Compute the LBP features.
The original face image is divided into 7 × 7 = 49 block regions, the LBP codes of each region are accumulated into a histogram, and each histogram is treated as a feature vector. Finally the block histograms are concatenated in order to obtain the final histogram feature vector.
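The following sketch illustrates the 7 × 7 block LBP histogram using scikit-image; the "uniform" LBP variant with P = 8, R = 1 is an assumption, since the description does not fix the neighbourhood or pattern type:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(gray: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    """Concatenated histograms of LBP codes over a 7x7 grid of regions."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                      # uniform patterns plus one "other" bin
    hists = []
    for band in np.array_split(codes, 7, axis=0):       # 7 horizontal bands
        for block in np.array_split(band, 7, axis=1):   # 7 blocks per band -> 49 regions
            h, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hists.append(h)
    return np.concatenate(hists).astype(float)          # final histogram feature vector
```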
Step 3: ensemble classification and identification
Create three threads and complete (1), (2) and (3) in parallel.
Assume there are c classes w_1, w_2, ..., w_c, where class w_i has N_i labeled samples, i = 1, 2, ..., c, and the total number of samples is N.
(1) Classify the PCA features of the sample with the NN method.
The intuitive interpretation of the nearest-neighbour (NN) method is that, for an unknown sample a, the Euclidean distances between a and the other samples are compared and a is assigned to the class of the sample nearest to it.
Let x be the PCA feature of the test image and x_k^(i) the PCA features of the images in the gallery, where the superscript i denotes class w_i and k indexes the N_i samples of class w_i. According to the nearest-neighbour rule the class of a sample equals the class of the closest sample, so the discriminant function of class w_i can be defined as g_i(x) = min_k ||x − x_k^(i)||. The decision rule is: if g_j(x) = min_i g_i(x), with i, j = 1, 2, ..., c, then the PCA decision a(pca) assigns sample a to class w_j, a(pca) ∈ w_j, and the PCA decision label is m_pca = j. If g_j(x) exceeds a threshold, the current face is judged not to belong to the face database, identification is refused and m_pca = 0. Repeated experiments show that with a threshold of 2200 the algorithm achieves a good recognition rate and a low misidentification rate.
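A sketch of this decision rule with rejection, applicable to both the PCA branch (threshold 2200 in the description) and the LDA branch (threshold 1200); the function name and array layout are assumptions, while the Euclidean distance and threshold behaviour follow the description:

```python
import numpy as np

def nn_classify(x: np.ndarray, gallery: np.ndarray, labels: np.ndarray,
                threshold: float) -> int:
    """gallery: (N, d) feature matrix; labels: (N,) class ids. Returns class id or 0."""
    dists = np.linalg.norm(gallery - x, axis=1)     # Euclidean distance to every gallery sample
    j = int(np.argmin(dists))                       # index of the nearest sample
    if dists[j] > threshold:
        return 0                                    # reject: face not in the gallery
    return int(labels[j])                           # label of the nearest sample
```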
(2) Classify the LDA features of the sample with the NN method.
Let y be the LDA feature of the test image and y_k^(i) the LDA features of the images in the gallery. According to the nearest-neighbour rule the discriminant function of class w_i is p_i(y) = min_k ||y − y_k^(i)||. The decision rule is: if p_j(y) = min_i p_i(y), with i, j = 1, 2, ..., c, then the LDA decision a(lda) assigns sample a to class w_j, a(lda) ∈ w_j, and the LDA decision label is m_lda = j. If p_j(y) exceeds a threshold, the current face is judged not to belong to the face database, identification is refused and m_lda = 0. Repeated experiments show that with a threshold of 1200 the algorithm achieves a good recognition rate and a low misidentification rate.
(3) Classify the LBP features of the sample with the NN method.
In LBP-based face recognition, classification is usually done with a nearest-neighbour classifier based on histogram similarity, and the similarity is usually measured with the chi-square statistic.
Let z be the LBP feature of the test image and z_k^(i) the LBP features of the images in the gallery. According to the nearest-neighbour rule the discriminant function of class w_i is q_i(z) = min_k χ²(z, z_k^(i)). The decision rule is: if q_j(z) = min_i q_i(z), with i, j = 1, 2, ..., c, then the LBP decision a(lbp) assigns sample a to class w_j, a(lbp) ∈ w_j, and the LBP decision label is m_lbp = j. If q_j(z) exceeds a threshold, the current face is judged not to belong to the face database, identification is refused and m_lbp = 0. Repeated experiments show that with a threshold of 70 the algorithm achieves a good recognition rate and a low misidentification rate.
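For the LBP branch the same nearest-neighbour rule with the chi-square statistic can be sketched as below; the small epsilon guarding against empty histogram bins is a common convention and an assumption, not something stated in the patent:

```python
import numpy as np

def chi_square(h1: np.ndarray, h2: np.ndarray, eps: float = 1e-10) -> float:
    """Chi-square distance between two histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def nn_classify_lbp(z: np.ndarray, gallery_hists: np.ndarray,
                    labels: np.ndarray, threshold: float = 70.0) -> int:
    """Nearest-neighbour matching of LBP histograms with rejection. Returns class id or 0."""
    dists = np.array([chi_square(z, h) for h in gallery_hists])
    j = int(np.argmin(dists))                       # most similar gallery histogram
    return 0 if dists[j] > threshold else int(labels[j])
```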
(4) Combine the votes and output the result.
Compare m_pca, m_lda and m_lbp: if all three are identical, output the result directly; if two results agree and one differs, output the result of the two; if all three differ, output a rejection.
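A direct transcription of this voting rule, returning 0 for rejection to match the m = 0 convention used above (the function name is an assumption):

```python
def vote(m_pca: int, m_lda: int, m_lbp: int) -> int:
    """Combine the three branch labels; 0 means rejection."""
    if m_pca == m_lda == m_lbp:
        return m_pca                                # all three agree
    if m_pca == m_lda or m_pca == m_lbp:
        return m_pca                                # 2-1 majority including the PCA branch
    if m_lda == m_lbp:
        return m_lda                                # 2-1 majority of the LDA and LBP branches
    return 0                                        # all three differ: refuse to identify
```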
The PCA and LDA methods classify on the global features of the face, while the LBP method classifies on its local features; the method therefore does not rely on a single feature extraction technique but considers both the global and the local features of the face image, eliminating misidentifications caused by interference in either the local or the global features. Combining several simple feature extraction methods also makes the algorithm easy to parallelize, which further reduces the running time of the algorithm on the embedded platform.
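A sketch of the parallel arrangement, reusing the vote helper above; ThreadPoolExecutor is only illustrative (on the embedded target the three branches would typically be native threads, and in CPython a real speed-up depends on the underlying NumPy/OpenCV code releasing the GIL), and classify_pca, classify_lda and classify_lbp are assumed wrappers around the feature-extraction and NN steps sketched earlier:

```python
from concurrent.futures import ThreadPoolExecutor

def recognize(image, classify_pca, classify_lda, classify_lbp):
    """Run the three branches concurrently and fuse their labels with vote()."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        f_pca = pool.submit(classify_pca, image)    # PCA feature + NN decision
        f_lda = pool.submit(classify_lda, image)    # LDA feature + NN decision
        f_lbp = pool.submit(classify_lbp, image)    # LBP histogram + chi-square NN decision
        return vote(f_pca.result(), f_lda.result(), f_lbp.result())
```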
The experimental results of the invention are described in detail below:
The experiments were carried out on the Yale, CMU and AR face databases. The Yale database contains 15 subjects with 11 images each, 165 images in total; in the experiments 7 images of each class were drawn at random as the training set and the remaining 4 as the test set. The CMU database contains 13 classes of face images; 5 images of each class were selected as the training set and the remaining 69 images as the test set. The AR database contains 100 classes, 50 male and 50 female; 7 unoccluded images of each class were selected as the training set and the remaining 7 unoccluded images as the test set. The training face images were normalized to 64 × 77 (width × height) pixels.
Table 1 gives the experimental results of the PCA, LDA and LBP methods and the PLL hybrid method on the Yale face database; 5 experiments were carried out and the averages are reported in Table 1.
Table 2 gives the experimental results of the PCA, LDA and LBP methods and the PLL hybrid method on the CMU face database; 5 experiments were carried out and the averages are reported in Table 2.
Table 3 gives the experimental results of the PCA, LDA and LBP methods and the PLL hybrid method on the AR face database; 5 experiments were carried out and the averages are reported in Table 3.
Table 1
Table 2
Table 3
The experimental results show that the misidentification rate of the PLL hybrid method remains 0% on all three face databases, and that its recognition accuracy is second only to the LDA algorithm. Although the accuracy of the PLL method is slightly below that of LDA, its misidentification rate is lower, so it can well satisfy the safety requirements of a face identification system.
We can conclude that, compared with methods using a single feature extraction technique, the PLL hybrid method achieves a good recognition effect. In particular, on the misidentification rate, which matters most in embedded systems, no misidentification occurred on the Yale, CMU or AR databases. Taken together, the method can be applied well in real embedded face identification environments.

Claims (1)

1. A face identification method based on an embedded platform, characterized in that the steps are as follows:
Step 1: image preprocessing
(1) normalize the color and size of the original face image;
(2) apply histogram equalization to the face image;
Step 2: feature extraction
complete PCA feature extraction, LDA feature extraction and LBP feature extraction in parallel;
(1) use the PCA method to compute the eigenvector matrix and obtain the global feature matrix W_pca of the training set; use W_pca to reduce the dimensionality of each sample;
(2) use the PCA method to reduce the dimensionality of the samples, then apply the LDA method to the reduced samples;
(3) use the LBP method to compute the LBP features of the image;
1. divide the face image into 7×7 blocks, obtaining a number of local regions;
2. apply the LBP transform to each small image region and extract its histogram, obtaining the histogram feature vector of each face image;
Step 3: ensemble classification and identification
complete PCA feature identification, LDA feature identification and LBP feature identification in parallel;
(1) classify the PCA features of the sample with the NN method and output the class label to m(pca); if the sample is rejected, set m(pca) = 0;
(2) classify the LDA features of the sample with the NN method and output the class label to m(lda); if the sample is rejected, set m(lda) = 0;
(3) classify the LBP features of the sample with the NN method, using the chi-square statistic as the histogram similarity measure, and output the class label to m(lbp); if the sample is rejected, set m(lbp) = 0;
(4) combine the voting results and output: if m(pca), m(lda) and m(lbp) are identical, output the result directly; if two results agree and one differs, output the result of the two; if all three differ, output a rejection.
CN201610094964.6A 2016-02-20 2016-02-20 Face identification method based on embedded platform Pending CN105787443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610094964.6A CN105787443A (en) 2016-02-20 2016-02-20 Face identification method based on embedded platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610094964.6A CN105787443A (en) 2016-02-20 2016-02-20 Face identification method based on embedded platform

Publications (1)

Publication Number Publication Date
CN105787443A true CN105787443A (en) 2016-07-20

Family

ID=56403417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610094964.6A Pending CN105787443A (en) 2016-02-20 2016-02-20 Face identification method based on embedded platform

Country Status (1)

Country Link
CN (1) CN105787443A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409220A (en) * 2021-06-28 2021-09-17 展讯通信(天津)有限公司 Face image processing method, device, medium and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303730A (en) * 2008-07-04 2008-11-12 西安电子科技大学 Integrated system for recognizing human face based on categorizer and method thereof
KR100950776B1 (en) * 2009-10-16 2010-04-02 주식회사 쓰리디누리 Method of face recognition
KR101314293B1 (en) * 2012-08-27 2013-10-02 재단법인대구경북과학기술원 Face recognition system robust to illumination change
CN103903004A (en) * 2012-12-28 2014-07-02 汉王科技股份有限公司 Method and device for fusing multiple feature weights for face recognition
CN104008375A (en) * 2014-06-04 2014-08-27 北京工业大学 Integrated human face recognition mehtod based on feature fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303730A (en) * 2008-07-04 2008-11-12 西安电子科技大学 Integrated system for recognizing human face based on categorizer and method thereof
KR100950776B1 (en) * 2009-10-16 2010-04-02 주식회사 쓰리디누리 Method of face recognition
KR101314293B1 (en) * 2012-08-27 2013-10-02 재단법인대구경북과학기술원 Face recognition system robust to illumination change
CN103903004A (en) * 2012-12-28 2014-07-02 汉王科技股份有限公司 Method and device for fusing multiple feature weights for face recognition
CN104008375A (en) * 2014-06-04 2014-08-27 北京工业大学 Integrated human face recognition mehtod based on feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吕兴会: "Research on facial expression recognition based on a multi-feature ensemble classifier", China Masters' Theses Full-text Database, Information Science and Technology Series *
赖小萍: "Research on face recognition based on the fusion of multiple features and multiple classifiers", China Dissertations Full-text Database *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409220A (en) * 2021-06-28 2021-09-17 展讯通信(天津)有限公司 Face image processing method, device, medium and equipment

Similar Documents

Publication Publication Date Title
Ullah et al. Gender recognition from face images with local wld descriptor
CN101739555B (en) Method and system for detecting false face, and method and system for training false face model
CN1908960A (en) Feature classification based multiple classifiers combined people face recognition method
Berbar Three robust features extraction approaches for facial gender classification
CN102156887A (en) Human face recognition method based on local feature learning
Dong et al. Finger vein recognition based on multi-orientation weighted symmetric local graph structure
CN101593269B (en) Face recognition device and method thereof
WO2022178978A1 (en) Data dimensionality reduction method based on maximum ratio and linear discriminant analysis
CN107220627B (en) Multi-pose face recognition method based on collaborative fuzzy mean discrimination analysis
CN105023006A (en) Face recognition method based on enhanced nonparametric margin maximization criteria
Zhou et al. Improved-LDA based face recognition using both facial global and local information
Sisodia et al. ISVM for face recognition
CN103246877A (en) Image contour based novel human face recognition method
Chen et al. Generalized Haar-like features for fast face detection
CN103745242A (en) Cross-equipment biometric feature recognition method
Lang et al. Study of face detection algorithm for real-time face detection system
Fengxiang Face Recognition Based on Wavelet Transform and Regional Directional Weighted Local Binary Pattern.
CN103942572A (en) Method and device for extracting facial expression features based on bidirectional compressed data space dimension reduction
CN109740429A (en) Smiling face's recognition methods based on corners of the mouth coordinate mean variation
CN105787443A (en) Face identification method based on embedded platform
Hassan et al. Facial image detection based on the Viola-Jones algorithm for gender recognition
Zhang et al. Multimodal 2D and 3D facial ethnicity classification
Masood et al. Spatial analysis for colon biopsy classification from hyperspectral imagery
CN102819731A (en) Face identification based on Gabor characteristics and Fisherface
CN112241680A (en) Multi-mode identity authentication method based on vein similar image knowledge migration network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160720

RJ01 Rejection of invention patent application after publication