AU2019101141A4 - Human face recognition based on Principal Component Analysis - Google Patents

Human face recognition based on Principal Component Analysis

Info

Publication number
AU2019101141A4
Authority
AU
Australia
Prior art keywords
face
face recognition
sep
training
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2019101141A
Inventor
Jincheng Bao
Yujie HUANG
Yizhen Li
Yuepeng Li
Yongyi Xiong
Haoran Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huang Yujie Miss
Li Yizhen Miss
Original Assignee
Huang Yujie Miss
Li Yizhen Miss
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huang Yujie Miss, Li Yizhen Miss filed Critical Huang Yujie Miss
Priority to AU2019101141A priority Critical patent/AU2019101141A4/en
Application granted granted Critical
Publication of AU2019101141A4 publication Critical patent/AU2019101141A4/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Abstract

This invention is a face recognition system in the field of computer vision. Since the information gained during the face recognition process is usually limited, and in order to avoid interference from redundant external environmental information during face recognition, our invention proposes a face recognition system for the limited-information condition, in which a face-region location method based on a novel color space YCgCr is proposed to realize the face detection. The recognition stage includes the following steps. Firstly, the invention uses the PCA algorithm for face recognition; the main idea is to express the 1-D pixel vector constructed from a 2-D facial image in terms of the compact principal components of the feature space. Secondly, the KNN algorithm used in the invention determines the face tag according to the different distances, and the mode of multiple tags is used to determine which training sample the test face belongs to.

Description

This invention is a face recognition system in the field of computer vision. Since the information gained during the face recognition process is usually limited, and in order to avoid interference from redundant external environmental information during face recognition, our invention proposes a face recognition system for the limited-information condition, in which a face-region location method based on a novel color space YCgCr is proposed to realize the face detection. The recognition stage includes the following steps. Firstly, the invention uses the PCA algorithm for face recognition; the main idea is to express the 1-D pixel vector constructed from a 2-D facial image in terms of the compact principal components of the feature space. Secondly, the KNN algorithm used in the invention determines the face tag according to the different distances, and the mode of multiple tags is used to determine which training sample the test face belongs to.
TITLE
Human face recognition based on Principal Component Analysis
FIELD OF THE INVENTION
This invention is in the field of digital image processing and performs recognition of different human faces using the method called Principal Component Analysis, implemented in Matlab.
BACKGROUND
Computer vision is the science of how to make machines see. It refers to the use of cameras and computers, instead of human eyes, to identify, track and measure objects, and further to do graphics processing, making the result easier for human eyes to observe or for transmission to an instrument for inspection. Pattern recognition is the process of processing and analyzing information in order to identify, describe, classify and interpret objects. The pattern recognition involved in the present research is mainly the identification and classification of specific patterns in pictures, photos, texts, symbols and other objects. The development and application of computer vision rely on the assistance of pattern recognition. Computer vision and pattern recognition are two important research fields in engineering science, whose ultimate goal is to give computers the same visual function as human beings and the ability to recognize various objects. With the development of computer vision and pattern recognition technology, fingerprint recognition systems, face recognition systems, automatic driving systems and other fields have developed greatly.
With the ceaseless progress of society and the urgent requirement of various fields for rapid and effective automatic identification, biometrics technology has developed rapidly in recent decades. Biometrics, as an inherent attribute of human beings, is the ideal basis for automatic identity authentication. Current biometric identification technologies mainly include fingerprint identification, retina identification, iris identification, gait identification, vein identification, face recognition, etc. Compared with other recognition methods, face recognition has been widely studied and applied because of its direct, friendly and convenient nature; therefore, users have no psychological barriers and can accept it easily. In addition, we are able to further analyze the results of face recognition and obtain rich additional information about people's gender, expression, age and so on, which expands the application prospects of face recognition.
In this invention, we use Matlab, a powerful tool for numerical analysis and image processing, to implement the face recognition system. The recognition system is based on Principal Component Analysis. After image preprocessing, we reshape the matrices of the images into lines and get the average face by calculating the average of the lines. Then the differences between the average face and the original faces are used to build the covariance matrix. The most contributive eigenvalues and eigenvectors of the matrix are chosen to build the eigenface space. Afterwards the differences are projected onto the eigenface space. In the recognition process, the distances between original images and projected images are calculated, and the k-Nearest Neighbor classification algorithm is used to classify the input and recognize whether the input is among the training set.
SUMMARY
In order to meet the requirements of security checks, advanced human-machine interaction and many other scenarios, this invention proposes an image recognition method for human faces based on Principal Component Analysis (PCA). Using PCA, the redundancy of the input is removed and facial features are extracted for face recognition. In the process, the information in the high dimension is preserved as much as possible in the low dimension. Consequently, this invention increases the speed of processing input data remarkably.
PCA is one of the most effective mathematical methods to reduce the dimensionality of a data space. It is widely used in image processing and data compression. In the algorithm, a linear transformation of the original data reduces the dimension of the features with tiny loss of information by maximizing the variance of the projected vectors. PCA is an unsupervised dimension-reduction algorithm, which doesn't require human intervention, and its computational cost is rather low compared to other methods.
The framework of our program of recognizing faces includes: the image preprocessing part, the training part, the test part and the GUI part for testers to identify themselves.
In order to build the human face database, we collect photos of ourselves, combined with the ORL database of human faces [1]. With regard to our own photos, the program detects the face part of the whole image by a face detection method based on color-face features and cuts out the face part. Then, in the preprocessing part, the images are converted into grayscale and adjusted by histogram equalization.
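As an illustration, this preprocessing step might look like the following Matlab sketch; the file names are hypothetical, and only the grayscale conversion and histogram equalization are taken from the description above:

    img = imread('face_001.jpg');      % hypothetical input photo
    if size(img, 3) == 3
        img = rgb2gray(img);           % convert to grayscale
    end
    img = histeq(img);                 % adjust by histogram equalization
    imwrite(img, 'face_001.pgm');      % store the preprocessed image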
In the training part, the algorithm of our invention is based on PCA. Images in the training set are reshaped into column vectors, and the average of the vectors is calculated as the average face. Then we calculate the covariance matrix, the product of the matrix of differences between the column vectors and the average face and its transpose. The eigenvalues of the covariance matrix indicate the contribution of each direction to expressing the information, and the eigenvectors corresponding to the biggest eigenvalues are chosen as the base vectors so as to reduce the dimension. The projections of the differences between the column vectors and the average face constitute the eigenface space. Face images will be projected into the eigenface space.
In the test part, the k-Nearest Neighbor (kNN) algorithm is used to classify the test data. After data processing similar to that of the training process, we evaluate the three smallest distances between the test data and the existing training data projected in the eigenface space and get three labels by kNN. The mode of the three labels decides the classification.
Finally, the GUI provides users with buttons to use the recognition system. In the interface, users are able to take photos and get the recognition result that matches them in the training set. The program takes five photos consecutively and the judgement is made five times individually to get a more precise recognition result; the mode of the five recognition results is chosen as the ultimate output. On top of that, new photos can be taken in this interface and added to the training set, so that the diversity of the database is improved and consequently the invention can enhance its robustness and accuracy automatically.
DESCRIPTION OF DRAWING
Figure 1 shows the principle of PCA algorithm.
Figure 2 shows the data flow of how to intercept face from photo.
Figure 3 shows the data flow to calculate the projection vectors.
Figure 4 shows the average face image.
Figure 5 shows the results of mean subtraction.
Figure 6 shows the original image from the webcam.
Figure 7 shows the workspace output.
Figure 8 shows the figure output.
DESCRIPTION OF PREFERRED EMBODIMENT
1.1 Conception of PCA Algorithm
The main idea of PCA is to use an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables. [2] These uncorrelated variables are called principal components. The work of PCA is to find a set of orthogonal coordinate axes in the original space sequentially. The choice of the new coordinate axes is closely related to the data itself. [3] The direction with the largest variance in the original data is selected as the first new coordinate axis; the direction with the largest variance in the plane orthogonal to the first axis is selected as the second new coordinate axis; the direction with the largest variance in the plane orthogonal to the first and second axes is selected as the third new coordinate axis. By analogy, we can obtain n such coordinate axes. Through these coordinate axes we find that most of the variance is contained in the first k axes (k a positive integer), and the remaining axes contain almost zero variance. Therefore, we can ignore the remaining axes and retain only the first k axes, which contain most of the variance. In effect, this keeps only the feature dimensions that contain most of the variance while ignoring the feature dimensions that contain almost none, so as to reduce the dimension of the data features.
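A small synthetic Matlab demonstration of this variance argument (toy data, not part of the invention): an elongated 2-D point cloud is generated, and diagonalizing its covariance shows that almost all the variance lies on the first new axis.

    rng(0);                                     % reproducible toy data
    X = randn(500, 2) * [2 0; 0 0.2];           % strongly anisotropic cloud
    R = [cosd(30) -sind(30); sind(30) cosd(30)];
    X = X * R;                                  % rotate the cloud
    [V, D] = eig(cov(X));                       % new orthogonal axes
    [lambda, idx] = sort(diag(D), 'descend');   % order axes by variance
    V = V(:, idx);
    disp(lambda(1) / sum(lambda));              % variance share of axis 1 (~0.99)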
1.2 PCA Applied in Face Recognition
The main idea of face recognition using the PCA algorithm is to express the 1-D pixel vector constructed from a 2-D facial image in terms of the compact principal components of the feature space. For a single facial image, we can obtain a set of coefficients by mapping the image into the feature space; these coefficients are the eigen factors of this face. If two facial images have almost the same eigen factors, the two images represent the same face. [4] The details of the procedures and the algorithm are demonstrated in the following paragraphs.
K-NEAREST NEIGHBORS (KNN)
2.1 Conception of KNN
The k-nearest neighbors algorithm is a classification algorithm used in pattern recognition, and it is one of the simplest machine learning algorithms. In KNN classification, for a given sample, the distance from it to all the training samples is computed, and the k closest training samples are set as the nearest neighbors of the target sample. According to the classes which the nearest neighbors belong to, the output is the most common class selected by plurality vote. The target sample is assigned to that class to complete the classification [5].
2.2 KNN applied in image classification
For our image classification, we use a distance value to calculate the similarity of each training sample to the target sample, and we set k to 3. Each pixel in an image has its own value, and we apply the formula L(x, y) = Σi |xi − yi|, where xi is a pixel value in the target image and yi is the corresponding pixel value in the training image. We calculate the absolute differences of the pixel values and select the 3 images with the least distance as the closest neighbors. The classes of these three images are denoted class1, class2 and class3 respectively. There are then several situations and solutions [6]:
(a) If class1 ≠ class2 ≠ class3, the target sample will be classified to class1;
(b) If class1 = class2 ≠ class3, the target sample will be classified to class1;
(c) If class1 ≠ class2 = class3, the target sample will be classified to class2;
(d) If class1 = class2 = class3, the target sample will be classified to class1.
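A Matlab sketch of this decision rule, assuming class1, class2 and class3 are the neighbor labels ordered from closest to farthest (the function name is ours, for illustration):

    function c = vote3(class1, class2, class3)
        % Rules (a)-(d): majority vote, ties broken toward the nearest neighbor
        if class1 == class2 || class1 == class3
            c = class1;              % cases (b) and (d)
        elseif class2 == class3
            c = class2;              % case (c)
        else
            c = class1;              % case (a): all different, take the nearest
        end
    end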
FACE DETECTION
In the process of transferring training data to the database, the original image includes not only the facial part but also redundant environment information, and the training process would handle all the information in an image. So we have to introduce a face detection function to reshape the original image, highlighting the facial part and ignoring irrelevant variables. Here we apply the face-region location method based on a novel color space YCgCr to realize the face detection.
(1) Transforming the RGB color space into the YCbCr space by the formula:

Y  = 0.257 * R + 0.504 * G + 0.098 * B + 16
Cr = 0.439 * R - 0.368 * G - 0.071 * B + 128      (1)
Cb = -0.148 * R - 0.291 * G + 0.439 * B + 128
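In Matlab this transform can be written directly; the built-in rgb2ycbcr implements the same BT.601 conversion, and the explicit form below mirrors formula (1) (input file name is hypothetical):

    rgb = double(imread('photo.jpg'));      % hypothetical RGB input, values 0..255
    R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);
    Y  =  0.257*R + 0.504*G + 0.098*B + 16;
    Cr =  0.439*R - 0.368*G - 0.071*B + 128;
    Cb = -0.148*R - 0.291*G + 0.439*B + 128;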
The following table explains the variables that appear in the formula; Cb and Cr are relatively independent.

Variable   Meaning
Y          brightness
R          red component
G          green component
B          blue component
Cr         the difference between R and Y
Cb         the difference between B and Y

(2) Extracting the feature region: the distribution of skin color points on each color component is counted. The threshold is set according to prior knowledge, and then the region of interest to be detected is segmented from the background by the threshold. Pixels within the threshold range are identified as skin color points, represented by 1; pixels outside the threshold range are considered non-skin color points, represented by 0.
The YCrCb skin color model is shown in the formula:

skin = 1, if (α < Cr < β) && (γ < Cb < η)      (2)
skin = 0, otherwise
In our experiment, we take the range of the skin color points as [140, 160].
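Continuing the previous sketch, the thresholding of formula (2) can be written as follows. We assume the stated [140, 160] range applies to Cr (a typical skin-tone range for that component); the Cb bounds are placeholders, since the text does not give them:

    crLow = 140; crHigh = 160;     % alpha, beta from the experiment above
    cbLow = 100; cbHigh = 130;     % gamma, eta: assumed placeholder values
    skin = (Cr > crLow) & (Cr < crHigh) & (Cb > cbLow) & (Cb < cbHigh);
    skin = bwareaopen(skin, 50);   % drop tiny speckles of false detections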
(3) Face interception: the extracted features are projected onto the original image to get the part of the face we need.
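The text does not spell out the interception mechanics; one plausible realization, continuing the sketch above, is to crop the bounding box of the largest skin-colored region:

    stats = regionprops(skin, 'BoundingBox', 'Area');  % connected skin regions
    [~, k] = max([stats.Area]);                        % pick the largest one
    faceImg = imcrop(uint8(rgb), stats(k).BoundingBox); % cut out the face part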
PROCEDURES OF FACE RECOGNITION BASED ON PCA ALGORITHM
4.1 Training Process
4.1.1 Data Acquisition
The Image Acquisition Toolbox has to be installed to realize the function of taking pictures and videos in Matlab. After installing the package, we set the camera to record a 10-frame video at a time and store each frame as a picture in the corresponding folder, so we have 10 pictures as training data for one specific target. The pictures are processed by the face detection function and finally transformed into grayscale images for smaller data size and faster processing speed. Meanwhile, we collected other image data from the internet using the database ORL-92x112, which includes 40 persons' faces, with 10 facial images per person. All the images in this database are grayscale, the same form as the image data taken by ourselves. By combining the online database images and the images taken by ourselves, we can enlarge our data set to meet higher requirements for accuracy.
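A capture loop of this kind might look as follows; the adaptor name 'winvideo', the device ID and the detectAndCrop wrapper (standing in for the face detection of Section 3) are assumptions, not part of the toolbox itself:

    vid = videoinput('winvideo', 1);   % adaptor/device IDs are system-dependent
    for k = 1:10                       % ten frames per recording, as above
        frame = getsnapshot(vid);      % grab one frame
        face  = detectAndCrop(frame);  % hypothetical face-detection wrapper
        imwrite(rgb2gray(face), sprintf('subject01_%02d.pgm', k));
    end
    delete(vid);                       % release the camera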
4.1.2 Image Processing
(1) There is no specific image file type required by our algorithm; we finally chose to transform all the data into the '.pgm' format.
(2) Transforming the shape of the data: the images taken by the camera or obtained from the online database come in various sizes. To optimize the accuracy and speed of later data processing, we resize all the images to 112*92.
(3) Dimension reduction: in a computer, every image is stored as a data matrix (2-dimensional array). We reconstruct the 2-dimensional array into a 1-dimensional array, so the new array has the form 1*10304 (10304 = 112*92). The specific method is to move all the non-first-row numbers into the first row successively. The following formulas show an example of array dimension reduction: formula (3) is the original matrix, and formula (4) is the reshaped vector.
[ 1 2 3 ]
[ 4 5 6 ]      (3)
[ 7 8 9 ]

( 1 4 7 2 5 8 3 6 9 )      (4)

(4) Computing the average face: now all the images are reconstructed into one-dimensional arrays, each of which can be treated as a vector. Assuming we have m images in the training data set, there exist m vectors d1, d2, ..., dm.
We construct a new m*n matrix including all the vectors, where n is the total number of pixels in one single image (in this program we reshape the images and set n = 10304) and m is the number of training images. This matrix can be treated as an all-sample matrix. The function mean() enables us to calculate the average of all the vectors and obtain a new average vector (y1, y2, ..., yn).
The average image can be displayed by reshaping the average vector into the original image size 112*92. [7]
(5) Mean subtraction
The mean subtraction is realized by subtracting the mean image vector from every image vector (d1, d2, ..., dm). [6] The following figures show a couple of images and the same images after mean subtraction.
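Steps (3)-(5) can be sketched in Matlab as follows; 'files' is assumed to be a directory listing of the training images:

    n = 112 * 92;                          % pixels per image (10304)
    m = numel(files);                      % number of training images
    faces = zeros(m, n);
    for i = 1:m
        img = double(imread(files(i).name));
        faces(i, :) = reshape(img, 1, n);  % column-major flattening, as in (3)-(4)
    end
    avgFace = mean(faces, 1);              % average face, a 1*n vector
    A = faces - avgFace;                   % mean subtraction: difference matrix
    imshow(uint8(reshape(avgFace, 112, 92)));  % display the average image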
(6) Covariance matrix construction
After mean subtraction, we obtain m new vectors d(1-avg), d(2-avg), ..., d(m-avg). These m vectors form a new m*n difference matrix A. Computing the following equation gives the covariance matrix B:

B = (1/m) * Σ_{i=1}^{m} d(i-avg) * d(i-avg)^T = (1/m) * A * A^T      (5)

(7) Computing eigenvalues and eigenvectors
There are several methods to compute the eigenvalues and eigenvectors of a matrix. In Matlab, the function eig() can be applied to calculate the eigenvalues λi and eigenvectors vi.
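Continuing the sketch, steps (6)-(7) use the m*m form of B, which stays small when m is much less than n:

    B = (A * A') / m;                          % covariance matrix of equation (5)
    [V, D] = eig(B);                           % eigenvectors and eigenvalues
    [lambda, idx] = sort(diag(D), 'descend');  % largest eigenvalues first
    V = V(:, idx);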
(8) Selecting the best eigenvectors
The contribution rate is defined as the proportion of the sum of the selected eigenvalues to the sum of all eigenvalues. Here we set the contribution rate to 99% to select the first several eigenvalues. We retain the corresponding eigenvectors as the best eigenvectors and ignore the others.
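With the sorted eigenvalues from the previous sketch, step (8) reduces to a cumulative sum:

    contrib = cumsum(lambda) / sum(lambda);  % running contribution rate
    p = find(contrib >= 0.99, 1);            % smallest p reaching 99%
    V = V(:, 1:p);                           % keep the p best eigenvectors
    lambda = lambda(1:p);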
(9) Normalization
Assuming we obtain p best eigenvectors, the following formula is applied to normalize each of them:

μi = (1/√λi) * A * vi,   i = 1, 2, ..., p      (6)

where the μi are the normalized vectors.
(10) Eigenface
The normalized eigenvectors can be arranged as an n*p matrix C, which represents the eigenface space.
(11) Projection: multiply d1, d2, ..., dm by the matrix C to obtain the projection vectors pi of the di on the eigenface space. The number of projection vectors depends on the number of training images.
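Steps (9)-(11) in the same sketch; because A is stored here with one image per row, A' plays the role of A in equation (6):

    C = zeros(n, p);                   % eigenface matrix, one column per mu_i
    for i = 1:p
        C(:, i) = (A' * V(:, i)) / sqrt(lambda(i));  % equation (6)
    end
    P = A * C;                         % m*p projections of the training images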
4.1.3 Recognition
(1) Reshape the testing image as a 1*n vector.
(2) Multiply the testing image vector by the eigenface matrix C to obtain a projection vector p.
(3) Compute the Euclidean distances from p to each pi and find the pi with the shortest distance; the face that this pi represents has the greatest similarity to the testing face. We apply the 3-nearest-neighbors algorithm to realize the distance comparison. [8]
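A sketch of this test stage, reusing avgFace, C, P and the vote3 rule from the earlier sketches; 'labels' is an assumed m*1 vector giving the class of each training image:

    test = double(imread('test.pgm'));         % hypothetical test image
    t  = reshape(test, 1, n) - avgFace;        % 1*n difference vector
    pt = t * C;                                % projection onto eigenface space
    d  = sqrt(sum((P - pt).^2, 2));            % Euclidean distances to training set
    [~, order] = sort(d);                      % nearest first
    nn = labels(order(1:3));                   % labels of the 3 nearest neighbors
    result = vote3(nn(1), nn(2), nn(3));       % decision by the rules of Section 2.2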
IMPROVEMENT AND DISCUSSION
After many experiments, we finally chose a precise procedure. We first transform the RGB image into the YCbCr format, and then we extract the region of the original face according to the skin color range of the face. After those two steps, we transform the image into a gray image and standardize its size, so as to eliminate the influence of the background and the exposure of the image on the result. In each experiment, we take five images in real time from the webcam to ensure accuracy. First, we get the original image from the webcam; then we get the five images after image processing.
We can see that the image processing gives a fairly accurate result which can be put to use.
Then we use the processed images to do the recognition with the method of Section 4.1.3, which gives us five results in total. Finally, we take the result with the largest frequency as our final output.
We can see the output result matches the test object.

Claims (3)

1. A human face recognition method based on principal component analysis, which makes full use of the PCA algorithm to realize face recognition, wherein a lot of training sets are added on the basis of the original data set, and a KNN algorithm is used in the test set to make the results accurate and reliable.
2. The human face recognition method based on principal component analysis according to claim 1, wherein a PCA algorithm and a KNN algorithm are introduced in order to realize face recognition under the limited information condition; the KNN algorithm used in the invention determines the face tag according to the different distances; the mode of multiple tags is used to determine which training sample the test face belongs to; through a single test, our invention reaches higher recognition accuracy than other methods.
3. The human face recognition method based on principal component analysis according to claim 1, wherein during the training process we used 42 groups of training samples, each sample containing five pictures, and each picture having a different expression; the training set basically covers the most possible conditions, so we make sure that the results are close to optimal.
Figure 1
Figure 2
A total of 42 training sets, 5 in each group.
The training set pictures are all 112 x 92 pixels grayscale images.
Figure 3
Figure 4
Figure 5
Figure 6
Figure 7
AU2019101141A 2019-09-30 2019-09-30 Human face recognition based on Principal Component Analysis Ceased AU2019101141A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2019101141A AU2019101141A4 (en) 2019-09-30 2019-09-30 Human face recognition based on Principal Component Analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2019101141A AU2019101141A4 (en) 2019-09-30 2019-09-30 Human face recognition based on Principal Component Analysis

Publications (1)

Publication Number Publication Date
AU2019101141A4 true AU2019101141A4 (en) 2019-10-31

Family

ID=68342011

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2019101141A Ceased AU2019101141A4 (en) 2019-09-30 2019-09-30 Human face recognition based on Principal Component Analysis

Country Status (1)

Country Link
AU (1) AU2019101141A4 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070896A (en) * 2020-09-07 2020-12-11 哈尔滨工业大学(威海) Portrait automatic slimming method based on 3D modeling
CN112070896B (en) * 2020-09-07 2022-05-03 哈尔滨工业大学(威海) Portrait automatic slimming method based on 3D modeling
CN113295142A (en) * 2021-05-14 2021-08-24 上海大学 Terrain scanning analysis method and device based on FARO scanner and point cloud
CN113295142B (en) * 2021-05-14 2023-02-21 上海大学 Terrain scanning analysis method and device based on FARO scanner and point cloud
CN113743236A (en) * 2021-08-11 2021-12-03 交控科技股份有限公司 Passenger portrait analysis method, device, electronic equipment and computer readable storage medium
CN115035462A (en) * 2022-08-09 2022-09-09 阿里巴巴(中国)有限公司 Video identification method, device, equipment and storage medium
CN115035462B (en) * 2022-08-09 2023-01-24 阿里巴巴(中国)有限公司 Video identification method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
AU2019101141A4 (en) Human face recognition based on Principal Component Analysis
Jayaraman et al. Recent development in face recognition
Tome et al. Facial soft biometric features for forensic face recognition
Satonkar Suhas et al. Face recognition using principal component analysis and linear discriminant analysis on holistic approach in facial images database
KR20080033486A (en) Automatic biometric identification based on face recognition and support vector machines
Hartanto et al. Face recognition for attendance system detection
Shrivastava et al. Conceptual model for proficient automated attendance system based on face recognition and gender classification using Haar-Cascade, LBPH algorithm along with LDA model
Yadav et al. A novel approach for face detection using hybrid skin color model
Kim et al. A new biased discriminant analysis using composite vectors for eye detection
Sudhakar et al. Facial identification of twins based on fusion score method
Mohamed et al. Automated face recogntion system: Multi-input databases
George et al. Face recognition on surgically altered faces using principal component analysis
KR20160042646A (en) Method of Recognizing Faces
Ganakwar et al. Face detection using boosted cascade of simple feature
Mousa Pasandi Face, Age and Gender Recognition Using Local Descriptors
Mavadati et al. Fusion of visible and synthesised near infrared information for face authentication
Hannan et al. Analysis of Detection and Recognition of Human Face Using Support Vector Machine
Paul et al. Face detection using skin color recursive clustering and recognition using multilinear PCA
Hussein et al. Face Recognition Using The Basic Components Analysis Algorithm
Alrikabi et al. Deep Learning-Based Face Detection and Recognition System
Deepa et al. Challenging aspects for facial feature extraction and age estimation
Emadi et al. Human face detection in color images using fusion of Ada Boost and LBP feature
Rujirakul et al. Parallel optimized pearson correlation condition (PO-PCC) for robust cosmetic makeup facial recognition.
Lee et al. Advanced face recognition and verification in mobile platforms
ERCAN et al. A Face Authentication System Using Landmark Detection

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry