CN107506694B - Robust face recognition method based on local median representation - Google Patents

Robust face recognition method based on local median representation

Info

Publication number
CN107506694B
CN107506694B CN201710625631.6A
Authority
CN
China
Prior art keywords
sample
test
training sample
identified
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710625631.6A
Other languages
Chinese (zh)
Other versions
CN107506694A (en)
Inventor
黄璞
杨庚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN201710625631.6A priority Critical patent/CN107506694B/en
Publication of CN107506694A publication Critical patent/CN107506694A/en
Application granted granted Critical
Publication of CN107506694B publication Critical patent/CN107506694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a robust face recognition method based on local median representation, comprising the following steps: (1) acquire a face image training sample set; (2) obtain the neighbor samples of the sample to be identified within each class of training samples; (3) compute a median vector from the neighbor samples of the sample to be identified in each class; (4) compute the reconstruction representation coefficients of the sample to be identified from the median vectors; (5) determine the class label of the sample to be identified from the magnitudes of the reconstruction representation coefficients. By exploiting the neighborhood information of the sample, the resulting median vectors can effectively handle various changes in face images, such as changes in facial expression, pose, illumination and occlusion, thereby meeting the high-accuracy requirements of face recognition in practical applications.

Description

Robust face recognition method based on local median representation
Technical Field
The invention relates to an image recognition method, in particular to a robust face recognition method based on local median representation, and belongs to the technical field of image recognition.
Background
Face recognition is an important method of identity authentication, with broad application prospects in file management systems, security verification systems, credit card verification, criminal identification in public security systems, bank and customs surveillance, human-computer interaction, and other fields. In general, face recognition can be divided into three steps: first, detect and segment faces from a complex scene; second, extract facial features from the detected face image; third, match and recognize the face with a suitable algorithm based on the extracted features.
Existing face feature extraction and recognition methods include the following:
(1) Eigenfaces, i.e., the face recognition method based on Principal Component Analysis (PCA), described by M. Turk and A. Pentland in "Eigenfaces for recognition", Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991, which seeks projection directions that maximize the total scatter of the projected face samples.
(2) Fisherfaces, a face recognition method based on Linear Discriminant Analysis (LDA), described by P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman in "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997, which uses the class information of the samples to characterize the discriminative structure they contain.
(3) Laplacianfaces, a face recognition method based on Locality Preserving Projections (LPP), described by X. He, S. Yan, Y. Hu et al. in "Face recognition using Laplacianfaces", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 328-340, 2005.
(4) The Sparse Representation-based Classifier (SRC), described by J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry and Y. Ma in "Robust face recognition via sparse representation", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210-227, 2009, is a classification method that first represents the sample to be identified as a linear combination of the other samples, solves for the linear reconstruction coefficients via an L1-norm optimization problem, and finally determines the class label of the sample to be identified from its reconstruction residuals.
(5) The Linear Regression Classifier (LRC), described by I. Naseem, R. Togneri and M. Bennamoun in "Linear regression for face recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 11, pp. 2106-2112, 2010, is, like SRC, a classification method. It assumes that samples of the same class lie in the same linear subspace, represents the sample to be identified as a linear combination of the samples of a given class, solves for the reconstruction coefficients by least squares, and finally determines the class label of the sample to be identified from the magnitude of its reconstruction residuals.
Among the above face recognition methods, PCA, LDA and LPP are feature extraction methods. PCA does not consider the discriminative structure of the samples and is therefore not robust; LDA exploits class labels but ignores the local neighborhood structure of the samples, so it cannot handle the face recognition problem robustly; and LPP, although it considers the local structure of the samples, is an unsupervised method that ignores their class structure. SRC and LRC are both classification methods based on representation learning; they can effectively handle occlusion, illumination changes and similar problems and are highly robust. However, SRC must solve an L1-norm optimization problem, which is time-consuming, while LRC does not fully exploit prior information about the samples, such as neighborhood information, and cannot represent facial features robustly when the number of samples is small.
Disclosure of Invention
The technical problem to be solved by the invention is to design a robust face recognition method based on local median representation that can effectively handle the many variations present in face images and meet the high-accuracy requirements of face recognition in practical applications.
The invention adopts the following technical scheme for solving the technical problems:
the invention provides a robust face recognition method based on local median representation, which comprises the following steps:
Step 1: acquire a face image training sample set containing C different classes, normalize each training sample in the set and the sample to be identified, and preprocess them with Principal Component Analysis (PCA) to reduce the data dimension.
Step 2: obtain the K nearest neighbors of the sample to be identified within each class of training samples.
Step 3: compute a median vector from the K nearest neighbors of the sample to be identified in each class of training samples.
Step 4: linearly reconstruct the sample to be identified from its C median vectors obtained in step 3, and compute the reconstruction coefficients by the least squares method.
Step 5: determine the class label of the sample to be identified by the maximum similarity principle.
Further, the robust face recognition method based on local median representation of the invention specifically includes the following steps:

Assume each image has size w×h and the training samples come from C image classes. Vectorizing each face image matrix, the i-th face image is x_i ∈ R^D, where D = w×h.

Denote the training sample set by X = [x_1, x_2, ..., x_n] and the sample to be identified by x_test, where n is the number of face image training samples.

Each training sample x_i is normalized to unit norm:

x_i = x_i / ||x_i||_2, i = 1, 2, ..., n.

The test sample x_test is likewise normalized:

x_test = x_test / ||x_test||_2.

The normalized samples are then preprocessed with PCA to reduce the data dimension.
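As a concrete illustration, the unit-norm normalization above can be sketched in NumPy as follows; the data and dimensions are illustrative placeholders, not values from the patent:

```python
import numpy as np

# Minimal sketch of the modulo-1 (unit L2 norm) normalisation in step 1.
# The data and dimensions here are illustrative, not from the patent.
rng = np.random.default_rng(0)
D, n = 32, 10
X = rng.random((D, n)) + 0.1       # training samples as columns, strictly positive
x_test = rng.random(D) + 0.1

X = X / np.linalg.norm(X, axis=0, keepdims=True)   # x_i <- x_i / ||x_i||_2
x_test = x_test / np.linalg.norm(x_test)

print(bool(np.allclose(np.linalg.norm(X, axis=0), 1.0)))   # True
```

After this step every column of X and the test vector lie on the unit sphere, so later distance comparisons are not dominated by overall image brightness.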
Furthermore, in the robust face recognition method based on local median representation of the invention, X = [x_1, x_2, ..., x_n] and x_test denote the training sample set and the sample to be identified after PCA preprocessing. The PCA preprocessing of the normalized samples in step 1 to reduce the data dimension proceeds as follows:

(1) Let m be the mean of all training samples and Z = [x_1 - m, x_2 - m, ..., x_n - m]. Compute the eigenvectors of the scatter matrix ZZ^T corresponding to its first d non-zero eigenvalues. Let λ_1 > λ_2 > ... > λ_d be the d largest non-zero eigenvalues of ZZ^T and v_1, v_2, ..., v_d the corresponding eigenvectors.

(2) The PCA projection vectors are a_i = v_i, i = 1, 2, ..., d.

(3) Let A_PCA = [a_1, a_2, ..., a_d]; the data after PCA preprocessing are

x_i = A_PCA^T x_i, i = 1, 2, ..., n,
x_test = A_PCA^T x_test,

where n is the number of face image training samples.
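The PCA preprocessing described above can be sketched as follows, assuming the standard eigendecomposition of the scatter matrix ZZ^T; the dimensions and data are illustrative, not from the patent:

```python
import numpy as np

# Sketch of the PCA preprocessing in step 1: centre the samples, take the
# top-d eigenvectors of the scatter matrix Z Z^T, and project.
rng = np.random.default_rng(1)
D, n, d = 32, 10, 5
X = rng.random((D, n))                     # training samples as columns
x_test = rng.random(D)

m = X.mean(axis=1, keepdims=True)          # mean of all training samples
Z = X - m                                  # Z = [x_1 - m, ..., x_n - m]
S = Z @ Z.T                                # scatter matrix Z Z^T
vals, vecs = np.linalg.eigh(S)             # eigh returns ascending eigenvalues
order = np.argsort(vals)[::-1][:d]         # indices of the d largest eigenvalues
A_pca = vecs[:, order]                     # A_PCA = [a_1, ..., a_d]

X_low = A_pca.T @ X                        # x_i <- A_PCA^T x_i
x_test_low = A_pca.T @ x_test
print(X_low.shape, x_test_low.shape)       # (5, 10) (5,)
```

With n = 10 centred samples the scatter matrix has rank at most n - 1 = 9, so d must not exceed that bound.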
Further, in the robust face recognition method based on local median representation of the present invention, obtaining the K nearest neighbors of the sample to be identified within each class of training samples in step 2 is specifically:

Let X_c = [x_{c,1}, x_{c,2}, ..., x_{c,n_c}] denote the class-c training samples, where n_c is the number of samples in class c. Compute the Euclidean distance between x_test and each training sample in X_c:

d(x_test, x_{c,i}) = ||x_test - x_{c,i}||_2, i = 1, 2, ..., n_c.

Sort the obtained distances in increasing order to obtain the set of the K class-c training samples closest to x_test:

X_test_c_K = [x_{c_1}, x_{c_2}, ..., x_{c_K}],

where c = 1, 2, ..., C.
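A minimal sketch of this per-class nearest-neighbor search, with illustrative data:

```python
import numpy as np

# Sketch of step 2: the K nearest neighbours of x_test inside one
# class-c training block X_c, by Euclidean distance.
rng = np.random.default_rng(2)
d, n_c, K = 5, 8, 3
X_c = rng.random((d, n_c))                 # class-c samples as columns
x_test = rng.random(d)

dists = np.linalg.norm(X_c - x_test[:, None], axis=0)   # ||x_test - x_{c,i}||_2
nearest = np.argsort(dists)[:K]            # indices of the K closest columns
X_test_c_K = X_c[:, nearest]               # [x_{c_1}, ..., x_{c_K}]
print(X_test_c_K.shape)                    # (5, 3)
```

The same loop is repeated for every class c = 1, ..., C, producing one neighbor set per class.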
Further, in the robust face recognition method based on local median representation of the present invention, computing the median in step 3 from the K nearest neighbors of the sample to be identified in each class of training samples is specifically:

The set of the K class-c training samples closest to x_test is X_test_c_K = [x_{c_1}, x_{c_2}, ..., x_{c_K}], whose element x_{c_i,j} is the j-th feature of the i-th neighbor. Let M_c = [M_{c,1}, M_{c,2}, ..., M_{c,D}]^T ∈ R^D denote the median vector of X_test_c_K; the j-th element of M_c is median(x_{c_1,j}, x_{c_2,j}, ..., x_{c_K,j}).

Arrange x_{c_1,j}, x_{c_2,j}, ..., x_{c_K,j} in increasing order and assume, without loss of generality, x_{c_1,j} < x_{c_2,j} < ... < x_{c_K,j}. Then median(x_{c_1,j}, x_{c_2,j}, ..., x_{c_K,j}) is computed as

median = x_{c_{(K+1)/2},j} if K is odd, and
median = (x_{c_{K/2},j} + x_{c_{K/2+1},j}) / 2 if K is even.
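The element-wise median underlying M_c can be sketched with NumPy's median, which implements exactly the odd/even rule above; the small matrix is illustrative:

```python
import numpy as np

# Sketch of step 3: the element-wise median of the K neighbour columns,
# giving the class-c median vector M_c. Data is illustrative.
X_test_c_K = np.array([[1.0, 5.0, 3.0],
                       [2.0, 2.0, 8.0]])   # d=2 features, K=3 neighbours
M_c = np.median(X_test_c_K, axis=1)        # per-feature median over neighbours
print(M_c)                                 # [3. 2.]
```

Because the median discards extreme values, a corrupted or occluded neighbor has far less influence on M_c than it would on a per-feature mean.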
further, the robust face recognition method based on local median representation of the present invention includes the following specific steps in step 4:
assuming that C median values of the samples to be identified obtained in step 3 are M ═ M1,M2,...,MC]Then linearly expressing the sample to be identified as:
xtest=Mw
wherein w ═ w1,w2,...,wC]T∈RCTo linearly reconstruct the coefficient vector, it can be solved using the least squares method:
w=(MTM)-1MTxtest
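The least-squares solution for w can be sketched as follows; np.linalg.lstsq is used for numerical robustness and coincides with the normal-equation formula above when M has full column rank. Data is illustrative:

```python
import numpy as np

# Sketch of step 4: stack the C class medians as columns of M and solve
# x_test ≈ M w by least squares.
rng = np.random.default_rng(3)
d, C = 6, 3
M = rng.random((d, C))                     # M = [M_1, ..., M_C]
x_test = rng.random(d)

w, *_ = np.linalg.lstsq(M, x_test, rcond=None)      # numerically robust solve
w_normal = np.linalg.solve(M.T @ M, M.T @ x_test)   # w = (M^T M)^{-1} M^T x_test
print(bool(np.allclose(w, w_normal)))      # True
```

In practice lstsq is preferred over forming (M^T M)^{-1} explicitly, since the normal equations square the condition number of M.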
further, in the robust face recognition method based on local median representation of the present invention, in step 5, the sample class mark to be recognized is discriminated according to the following criteria: if it is not
Figure GDA0002744597390000043
Where C is 1, 2.. and C, the sample to be identified belongs to the s-th class.
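A one-line sketch of this decision rule, with illustrative coefficients:

```python
import numpy as np

# Sketch of step 5: the predicted class is the one whose median receives
# the largest reconstruction coefficient. Coefficients are illustrative.
w = np.array([0.15, 0.62, 0.23])           # one coefficient per class
s = int(np.argmax(w)) + 1                  # classes numbered 1..C as in the patent
print(s)                                   # 2
```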
Compared with the prior art, the invention adopting the technical scheme has the following technical effects:
the method comprises the steps of calculating a median value according to the neighbors of a sample to be recognized in each class of training samples, linearly representing the sample to be recognized by the median value, calculating a linear reconstruction coefficient according to a least square method, and finally judging the class mark of the sample to be recognized according to a maximum similarity principle. The method utilizes neighborhood information of the sample, and the obtained median value can effectively process various changes in the face image, such as changes in facial expression, posture, illumination, shielding and the like, thereby meeting the high-precision requirement on face identification in practical application.
Drawings
FIG. 1 is a flowchart of the overall method of the present invention.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the accompanying drawings:
it will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention provides a robust face recognition method based on local median representation, and a specific flow is shown in figure 1.
(I) Acquire the training sample set.
Assume each image has size w×h and the training samples come from C image classes. Vectorizing each face image matrix, the i-th face image is x_i ∈ R^D, where D = w×h. The training sample set can be represented as X = [x_1, x_2, ..., x_n] and the sample to be identified as x_test, where n is the number of face image training samples.

Each training sample x_i is normalized to unit norm:

x_i = x_i / ||x_i||_2, i = 1, 2, ..., n.

The test sample x_test is likewise normalized:

x_test = x_test / ||x_test||_2.
the normalized samples are preprocessed by PCA to reduce the data dimensionality, again using X ═ X for convenience1,x2,…,xn]And xtestRepresenting the training sample set after PCA pretreatment and the sample to be identified, the calculation steps are as follows:
(1) let Z be [ x ]1-m,x2-m,…,xn-m]Calculating
Figure GDA0002744597390000051
And the feature vectors corresponding to the first d non-zero feature values. Let lambda12…>λdIs composed of
Figure GDA0002744597390000052
First d non-zero maximum eigenvalues, v1,v2,…,vdM represents the mean of the ensemble of training samples for the corresponding feature vector.
(2) The PCA projection vector can be expressed as:
Figure GDA0002744597390000061
(3) let APCA=[a1,a2,…,ad]Then the data after PCA pretreatment can be obtained as:
xi=APCA Txi,(i=1,2,...,n)
xtest=APCA Txtest
(II) Acquire the K nearest neighbors of the sample to be identified within each class of training samples.

Let X_c = [x_{c,1}, x_{c,2}, ..., x_{c,n_c}] denote the class-c training samples, where n_c is the number of samples in class c. Compute the Euclidean distance between x_test and each training sample in X_c:

d(x_test, x_{c,i}) = ||x_test - x_{c,i}||_2, i = 1, 2, ..., n_c.

Sort the obtained distances in increasing order to obtain the set of the K class-c training samples closest to x_test:

X_test_c_K = [x_{c_1}, x_{c_2}, ..., x_{c_K}], c = 1, 2, ..., C.
(III) Compute a median vector from the K nearest neighbors of the sample to be identified in each class of training samples.

The set of the K class-c training samples closest to x_test can be represented as X_test_c_K = [x_{c_1}, x_{c_2}, ..., x_{c_K}], whose element x_{c_i,j} is the j-th feature of the i-th neighbor. Let M_c = [M_{c,1}, M_{c,2}, ..., M_{c,D}]^T ∈ R^D denote the median vector of X_test_c_K; the j-th element of M_c can be represented as median(x_{c_1,j}, x_{c_2,j}, ..., x_{c_K,j}).

To compute median(x_{c_1,j}, x_{c_2,j}, ..., x_{c_K,j}), first arrange x_{c_1,j}, x_{c_2,j}, ..., x_{c_K,j} in increasing order; for convenience assume x_{c_1,j} < x_{c_2,j} < ... < x_{c_K,j}. Then

median = x_{c_{(K+1)/2},j} if K is odd, and
median = (x_{c_{K/2},j} + x_{c_{K/2+1},j}) / 2 if K is even.
(IV) Linearly reconstruct the sample to be identified from its C median vectors obtained in step (III), and compute the reconstruction coefficients by the least squares method.

Collecting the C median vectors of the sample to be identified as M = [M_1, M_2, ..., M_C], the sample to be identified can be linearly expressed as

x_test = Mw,

where w = [w_1, w_2, ..., w_C]^T ∈ R^C is the linear reconstruction coefficient vector, which can be solved by the least squares method:

w = (M^T M)^{-1} M^T x_test.
(V) Determine the class label of the sample to be identified by the maximum similarity principle.

The class label of the sample to be identified can be determined by the following criterion: if

w_s = max_{c=1,...,C} w_c,

then the sample to be identified belongs to the s-th class.
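Putting steps (II) through (V) together, a compact end-to-end sketch; the helper name lmr_classify, the class patterns and the value of K are illustrative assumptions, not from the patent:

```python
import numpy as np

# End-to-end sketch of steps (II)-(V), applied after normalisation/PCA:
# classify a test sample by local median representation.
def lmr_classify(X, y, x_test, K):
    """X: D x n training matrix (columns are samples), y: length-n labels."""
    classes = np.unique(y)
    medians = []
    for c in classes:                                   # per-class processing
        X_c = X[:, y == c]
        dists = np.linalg.norm(X_c - x_test[:, None], axis=0)
        nearest = np.argsort(dists)[:K]                 # K nearest neighbours
        medians.append(np.median(X_c[:, nearest], axis=1))  # median vector M_c
    M = np.stack(medians, axis=1)                       # M = [M_1, ..., M_C]
    w, *_ = np.linalg.lstsq(M, x_test, rcond=None)      # least-squares coefficients
    return classes[int(np.argmax(w))]                   # maximum-coefficient class

rng = np.random.default_rng(4)
base0 = np.array([1.0, 0.0, 1.0, 0.0])                  # class 0 pattern
base1 = np.array([0.0, 1.0, 0.0, 1.0])                  # class 1 pattern
X = np.hstack([base0[:, None] + 0.05 * rng.standard_normal((4, 5)),
               base1[:, None] + 0.05 * rng.standard_normal((4, 5))])
y = np.array([0] * 5 + [1] * 5)
x_test = base1 + 0.05 * rng.standard_normal(4)          # drawn near class 1
print(lmr_classify(X, y, x_test, K=3))                  # 1
```

Because the test sample lies near the class-1 pattern, the class-1 median receives the dominant reconstruction coefficient and the sketch returns label 1.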
The foregoing is only a partial embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (2)

1. A robust face recognition method based on local median representation, comprising the following steps:

Step 1: acquire a face image training sample set containing C different classes, normalize each training sample in the set and the sample to be identified, and preprocess them with Principal Component Analysis (PCA) to reduce the data dimension; specifically:

assume each image has size w×h and the training samples come from C image classes; vectorizing each face image matrix, the i-th face image is x_i ∈ R^D, where D = w×h;

denote the training sample set by X = [x_1, x_2, ..., x_n] and the sample to be identified by x_test, where n is the number of face image training samples;

normalize each training sample x_i to unit norm:

x_i = x_i / ||x_i||_2, i = 1, 2, ..., n;

likewise normalize the test sample x_test:

x_test = x_test / ||x_test||_2;

preprocess the normalized samples with PCA to reduce the data dimension;
Step 2: obtain the K nearest neighbors of the sample to be identified within each class of training samples; specifically:

let X_c = [x_{c,1}, x_{c,2}, ..., x_{c,n_c}] denote the class-c training samples, where n_c is the number of samples in class c; compute the Euclidean distance between x_test and each training sample in X_c:

d(x_test, x_{c,i}) = ||x_test - x_{c,i}||_2, i = 1, 2, ..., n_c;

sort the obtained distances in increasing order to obtain the set of the K class-c training samples closest to x_test:

X_test_c_K = [x_{c_1}, x_{c_2}, ..., x_{c_K}],

where c = 1, 2, ..., C;
Step 3: compute a median vector from the K nearest neighbors of the sample to be identified in each class of training samples; specifically:

the set of the K class-c training samples closest to x_test is X_test_c_K = [x_{c_1}, x_{c_2}, ..., x_{c_K}], whose element x_{c_i,j} is the j-th feature of the i-th neighbor; let M_c = [M_{c,1}, M_{c,2}, ..., M_{c,D}]^T ∈ R^D denote the median vector of X_test_c_K; the j-th element of M_c is median(x_{c_1,j}, x_{c_2,j}, ..., x_{c_K,j});

arrange x_{c_1,j}, x_{c_2,j}, ..., x_{c_K,j} in increasing order and assume x_{c_1,j} < x_{c_2,j} < ... < x_{c_K,j}; then median(x_{c_1,j}, x_{c_2,j}, ..., x_{c_K,j}) is computed as

median = x_{c_{(K+1)/2},j} if K is odd, and
median = (x_{c_{K/2},j} + x_{c_{K/2+1},j}) / 2 if K is even;
Step 4: linearly reconstruct the sample to be identified from its C median vectors obtained in step 3, and compute the reconstruction coefficients by the least squares method; specifically:

collecting the C median vectors of the sample to be identified as M = [M_1, M_2, ..., M_C], linearly express the sample to be identified as

x_test = Mw,

where w = [w_1, w_2, ..., w_C]^T ∈ R^C is the linear reconstruction coefficient vector, solved by the least squares method:

w = (M^T M)^{-1} M^T x_test;
Step 5: determine the class label of the sample to be identified by the maximum similarity principle; specifically, by the following criterion: if

w_s = max_{c=1,...,C} w_c,

where c = 1, 2, ..., C, then the sample to be identified belongs to the s-th class.
2. The method of claim 1, wherein X = [x_1, x_2, ..., x_n] and x_test denote the training sample set and the sample to be identified after PCA preprocessing, and preprocessing the normalized samples with PCA in step 1 to reduce the data dimension proceeds as follows:

(1) let m be the mean of all training samples and Z = [x_1 - m, x_2 - m, ..., x_n - m]; compute the eigenvectors of the scatter matrix ZZ^T corresponding to its first d non-zero eigenvalues; let λ_1 > λ_2 > ... > λ_d be the d largest non-zero eigenvalues of ZZ^T and v_1, v_2, ..., v_d the corresponding eigenvectors;

(2) the PCA projection vectors are a_i = v_i, i = 1, 2, ..., d;

(3) let A_PCA = [a_1, a_2, ..., a_d]; the data after PCA preprocessing are

x_i = A_PCA^T x_i, i = 1, 2, ..., n,
x_test = A_PCA^T x_test,

where n is the number of face image training samples.
CN201710625631.6A 2017-07-27 2017-07-27 Robust face recognition method based on local median representation Active CN107506694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710625631.6A CN107506694B (en) 2017-07-27 2017-07-27 Robust face recognition method based on local median representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710625631.6A CN107506694B (en) 2017-07-27 2017-07-27 Robust face recognition method based on local median representation

Publications (2)

Publication Number Publication Date
CN107506694A CN107506694A (en) 2017-12-22
CN107506694B true CN107506694B (en) 2021-02-09

Family

ID=60688786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710625631.6A Active CN107506694B (en) 2017-07-27 2017-07-27 Robust face recognition method based on local median representation

Country Status (1)

Country Link
CN (1) CN107506694B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008575B (en) * 2019-11-25 2022-08-23 南京邮电大学 Robust face recognition method based on multi-scale context information fusion
CN113688697A (en) * 2021-08-06 2021-11-23 南京审计大学 Palm print identification method based on local similarity keeping feature representation

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN101826161B (en) * 2010-04-09 2013-03-20 中国科学院自动化研究所 Method for identifying target based on local neighbor sparse representation
CN104239858B (en) * 2014-09-05 2017-06-09 华为技术有限公司 A kind of method and apparatus of face characteristic checking

Non-Patent Citations (3)

Title
Research on feature extraction and face recognition based on manifold learning; Huang Pu; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2015-06-15 (no. 06); pp. 53-55 *
K-nearest-neighbor methods for pattern classification; Gou Jianping; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2013-05-15 (no. 05); pp. 24-26 *
Research on image representation and classification for face recognition; Zhang Jian; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2017-06-15 (no. 06); pp. 68-69, 76 *

Also Published As

Publication number Publication date
CN107506694A (en) 2017-12-22

Similar Documents

Publication Publication Date Title
Hassaballah et al. Ear recognition using local binary patterns: A comparative experimental study
Wang et al. Face feature extraction: a complete review
Han A hand-based personal authentication using a coarse-to-fine strategy
CN107220627B (en) Multi-pose face recognition method based on collaborative fuzzy mean discrimination analysis
Rakshit et al. Face identification using some novel local descriptors under the influence of facial complexities
Attia et al. Feature-level fusion of major and minor dorsal finger knuckle patterns for person authentication
CN111259780B (en) Single-sample face recognition method based on block linear reconstruction discriminant analysis
Ramya et al. Certain investigation on iris image recognition using hybrid approach of Fourier transform and Bernstein polynomials
Kamlaskar et al. Iris-Fingerprint multimodal biometric system based on optimal feature level fusion model
CN107506694B (en) Robust face recognition method based on local median representation
CN110956113A (en) Robust face recognition method based on secondary cooperation representation identification projection
Ripon et al. Convolutional neural network based eye recognition from distantly acquired face images for human identification
Rong et al. Channel group-wise drop network with global and fine-grained-aware representation learning for palm recognition
Krishnaprasad et al. A Conceptual Study on User Identification and Verification Process using Face Recognition Technique
Al-Waisy et al. A multi-biometric face recognition system based on multimodal deep learning representations
Hwang et al. Example image-based feature extraction for face recognition
Bhatt et al. Covariates of face recognition
Chen et al. Face Recognition Using Self-Organizing Maps
Dubovečak et al. Face detection and recognition using raspberry PI computer
Hast Age-invariant face recognition using face feature vectors and embedded prototype subspace classifiers
Darini et al. Personal authentication using palm-print features–a SURVEY
Arora et al. Age invariant face recogntion using stacked autoencoder deep neural network
Priyadharshini et al. Ai-Based Card-Less Atm Using Facial Recognition
Marcialis et al. Decision-level fusion of PCA and LDA-based face recognition algorithms
BalaYesu et al. Comparative study of face recognition techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant