KR101749268B1 - A robust face recognition method for pose variations based on pose estimation - Google Patents

A robust face recognition method for pose variations based on pose estimation

Info

Publication number
KR101749268B1
Authority
KR
South Korea
Prior art keywords
pose
face
recognition
data
vector
Prior art date
Application number
KR1020150155364A
Other languages
Korean (ko)
Other versions
KR20170053069A (en)
Inventor
오성권
김진율
Original Assignee
수원대학교산학협력단
위아코퍼레이션 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 수원대학교산학협력단, 위아코퍼레이션 주식회사 filed Critical 수원대학교산학협력단
Priority to KR1020150155364A priority Critical patent/KR101749268B1/en
Publication of KR20170053069A publication Critical patent/KR20170053069A/en
Application granted granted Critical
Publication of KR101749268B1 publication Critical patent/KR101749268B1/en

Links

Images

Classifications

    • G06K9/00288
    • G06F17/30244
    • G06K9/627
    • G06K9/66

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a face recognition method robust to pose variations based on pose estimation, comprising the steps of: (1) detecting face images of a plurality of poses from a preset learning video and storing them in a database; (2) performing dimension-reduction preprocessing using a principal component analysis (PCA) algorithm on each of the plurality of pose face images stored in the database; (3) inputting the data preprocessed in step (2) into a polynomial-based RBFNNs (Radial Basis Function Neural Networks) pattern classifier to learn the data of each pose; (4) acquiring optimized parameters for the data of each pose using a particle swarm optimization (PSO) algorithm; (5) after the learning process of steps (1) to (4) is completed, performing dimension-reduction preprocessing using a two-directional two-dimensional principal component analysis ((2D)²PCA) algorithm on a test face image detected from a preset test video; (6) estimating and detecting a similar pose for the test face image on which the dimension-reduction preprocessing of step (5) has been performed; and (7) determining the test face image as the recognition target by applying the detected similar pose to the optimized parameters obtained through the learning process of steps (1) to (4).
According to the face recognition method robust to pose variations based on pose estimation proposed in the present invention, face images of multiple poses are detected and stored in a database; after dimension-reduction preprocessing using the principal component analysis algorithm, the data of each pose are learned with a polynomial-based RBFNNs pattern classifier and optimized parameters are obtained using a particle swarm optimization (PSO) algorithm; the dimension of the test data is reduced using the two-directional two-dimensional principal component analysis algorithm; a similar pose is obtained for the dimension-reduced test face image through a classification and estimation process; and the test face image can be recognized by applying the obtained similar pose to the obtained optimized parameters. As a result, the face of a subject can be recognized even from a non-frontal pose, as well as under various changes in size and pose.
Further, according to the present invention, the recognition speed and the recognition rate are improved by reducing the data dimension with the two-directional two-dimensional principal component analysis algorithm for the test data, and the recognition performance can be improved by fast learning convergence with the optimized parameters.

Description

FIELD OF THE INVENTION [0001] The present invention relates to a face recognition method robust to pose variations based on pose estimation.

The present invention relates to a face recognition method, and more particularly, to a face recognition method that is robust to pose variations based on pose estimation.

With the development of network environments and information technology, security has become increasingly important in many fields. However, currently used authentication technologies require users to memorize a security code or carry a security key, and these can be exposed through loss or theft. To overcome these problems, biometrics such as fingerprint recognition and iris recognition are being studied in various fields.

Biometric technology uses physical characteristics of an individual such as the fingerprint, hand, face, iris, retina, and veins, as well as behavioral characteristics such as handwriting, voice, and gait, and is therefore free from problems such as the loss and theft that occur with key- or code-based security technologies. Unlike other biometric technologies, face recognition is performed in a non-contact manner in which the person to be recognized does not need to touch the recognition unit directly, which has the advantage of causing less inconvenience to the user.

Conventional face recognition generally assumes a controlled situation in which the person gazes at a fixed camera, and therefore has the disadvantage that a face image cannot be recognized unless the person looks straight at the camera. In addition, the conventional face recognition method performs recognition based on the intensity of the acquired two-dimensional image, so the greater the difference in illumination between the given learning data and the test data, the more difficult it is to obtain accurate recognition performance. Further, in the face recognition method according to the related art, the learning data are configured from frontal face images of the recognition target, and the test image is also acquired as a frontal face image with the target gazing at the camera; the target must therefore gaze at the camera, and when a face image of another pose is input, the face is difficult to recognize correctly. That is, in the conventional face recognition method, the recognition performance of existing two-dimensional face recognition is degraded by external factors such as illumination and pose change. As related prior art, Korean Patent Registration No. 10-0955255 (registered on Apr. 21, 2010) discloses a face recognition apparatus and method.

The present invention has been proposed to solve the above-mentioned problems of the previously proposed methods. An object of the present invention is to provide a face recognition method robust to pose variations based on pose estimation, in which face images of multiple poses are stored in a database, dimension-reduction preprocessing is performed using a principal component analysis algorithm, the data of each pose are learned with a polynomial-based RBFNNs pattern classifier, optimized parameters are obtained using a particle swarm optimization (PSO) algorithm, the dimension of test data is reduced using a two-directional two-dimensional principal component analysis algorithm, a similar pose is obtained for the dimension-reduced test face image through a classification and estimation process, and the obtained similar pose is applied to the obtained optimized parameters so that the test face image can be recognized; accordingly, the face of a subject can be recognized even from a non-frontal pose, as well as under changes in size and pose.

Another object of the present invention is to provide a face recognition method robust to pose variations based on pose estimation, in which the recognition speed and the recognition rate are improved by reducing the data dimension using the two-directional two-dimensional principal component analysis algorithm for the test data, and the recognition performance is improved by fast learning convergence with the optimized parameters.

According to an aspect of the present invention, there is provided a face recognition method robust to pose variations based on pose estimation, the method comprising the steps of:

(1) detecting face images of a plurality of poses from a preset learning video and storing them in a database;

(2) performing dimension-reduction preprocessing using a principal component analysis (PCA) algorithm on each of the plurality of pose face images stored in the database;

(3) inputting the data preprocessed in step (2) into a polynomial-based RBFNNs (Radial Basis Function Neural Networks) pattern classifier to learn the data of each pose;

(4) acquiring optimized parameters for the data of each pose using a particle swarm optimization (PSO) algorithm;

(5) after the learning process of steps (1) to (4) is completed, performing dimension-reduction preprocessing using a two-directional two-dimensional principal component analysis ((2D)²PCA) algorithm on a test face image detected from a preset test video;

(6) estimating and detecting a similar pose for the test face image on which the dimension-reduction preprocessing of step (5) has been performed; and

(7) determining the test face image as the recognition target by applying the detected similar pose to the optimized parameters obtained through the learning process of steps (1) to (4).

Preferably, the step (2) may include the steps of:

(2-1) constructing a face vector set of recognition candidates for the face images of the plurality of poses;

(2-2) normalizing the face images based on the mean and variance of the face vector set;

(2-3) calculating an average face vector from the face vector set;

(2-4) calculating difference vectors between the recognition-candidate face vectors and the average face vector;

(2-5) calculating a covariance matrix of the recognition-candidate face images using the calculated difference vectors; and

(2-6) selecting only the M′ eigenvectors having the largest eigenvalues among the M eigenvectors of the covariance matrix, and acquiring weights by projecting each recognition-candidate face onto the selected eigenvectors.

Preferably, in the step (3),

the preprocessed data of each pose may be classified by measuring the degree of membership based on the distance between the data and each cluster using an FCM (Fuzzy C-Means) clustering algorithm,

and the data classification in the step (3) may include the steps of:

(3-1) selecting the number of clusters and the fuzzification coefficient, and initializing a membership function;

(3-2) calculating a center vector for each cluster;

(3-3) calculating the distances between the center vectors and each datum, and computing a new membership function; and

(3-4) terminating the algorithm when the error between the membership function and the new membership function falls within the allowable range, and returning to step (3-2) otherwise.

Advantageously, the optimized parameters in step (4) may include

the polynomial type of the connection weights, the number of nodes, and the number of dimensions to be reduced,

the polynomial type of the connection weights may be

a linear inference type, a quadratic inference type, or a modified quadratic inference type,

and in the step (4),

the parameters of each rule may be computed independently using a Weighted Least Square Estimator (WLSE), a local learning method.

Preferably, in the step (6),

before classifying the face pose using Multi-Space PCA, a database may be constructed in which images are classified according to yaw angle (±90°, ±45°, 0°), PCA may be performed for each pose to construct a Multi-Space PCA space having pose-specific eigenface vectors, and the face image to be tested may be projected into each PCA space, the distances calculated, and the pose having the minimum distance classified to detect the similar pose.

According to the face recognition method robust to pose variations based on pose estimation proposed in the present invention, face images of multiple poses are detected and stored in a database; after dimension-reduction preprocessing using the principal component analysis algorithm, the data of each pose are learned with a polynomial-based RBFNNs pattern classifier and optimized parameters are obtained using a particle swarm optimization (PSO) algorithm; the dimension of the test data is reduced using the two-directional two-dimensional principal component analysis algorithm; a similar pose is obtained for the dimension-reduced test face image through a classification and estimation process; and the test face image can be recognized by applying the obtained similar pose to the obtained optimized parameters. As a result, the face of a subject can be recognized even from a non-frontal pose, as well as under various changes in size and pose.

Further, according to the present invention, the recognition speed and the recognition rate are improved by reducing the data dimension with the two-directional two-dimensional principal component analysis algorithm for the test data, and the recognition performance can be improved by fast learning convergence with the optimized parameters.

Brief Description of the Drawings
FIG. 1 is a flowchart illustrating a face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 2 illustrates the flow of the dimension-reduction step using the principal component analysis algorithm in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 3 illustrates the flow of FCM clustering in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 4 illustrates the configuration of the Honda/UCSD database used in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 5 illustrates the dimension reduction of the principal component analysis (PCA) method in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 6 illustrates the overall structure of the polynomial-based RBFNNs (Radial Basis Function Neural Networks) pattern classifier used in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 7 illustrates the configuration of the Multi-Space PCA used in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 8 illustrates the distance values between adjacent poses used in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 9 illustrates pose estimation using the Euclidean distance in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 10 illustrates the configuration of an experimental example of the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 11 illustrates model generation by the k-fold cross-validation method used in the experimental example of the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.
FIG. 12 illustrates pose estimation of the test data using (2D)²PCA in the experimental example of the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. In the following detailed description of the preferred embodiments, detailed descriptions of known functions and configurations incorporated herein will be omitted when they may obscure the subject matter of the present invention. Throughout the drawings, like reference numerals denote like elements.

In addition, throughout the specification, when a part is referred to as being 'connected' to another part, this includes not only being 'directly connected' but also being 'indirectly connected' via another element. Also, when a part is said to 'include' an element, this means that other elements may be further included rather than excluded, unless specifically stated otherwise.

FIG. 1 illustrates the flow of the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention. As shown in FIG. 1, the method includes: storing face images of a plurality of poses detected from a preset learning video in a database (S110); performing dimension-reduction preprocessing using a principal component analysis (PCA) algorithm (S120); inputting the preprocessed data into a polynomial-based RBFNNs pattern classifier to learn the data of each pose (S130); acquiring optimized parameters for the data of each pose using a particle swarm optimization (PSO) algorithm (S140); performing dimension-reduction preprocessing using a two-directional two-dimensional principal component analysis algorithm on a test face image detected from a preset test video (S150); estimating and detecting a similar pose for the test face image on which the dimension-reduction preprocessing has been performed (S160); and determining the test face image as the recognition target by applying the detected similar pose to the optimized parameters (S170).

In step S110, the face images of the plurality of poses detected from the preset learning video are stored in a database. Here, in step S110, face images of a plurality of poses, consisting of five poses per person, are extracted from the video images of the plurality of persons to be used for face recognition, and the face images can be stored in the Honda/UCSD DB. At this time, the face images are obtained as five face shapes according to the change of each pose angle (left 90°, left 45°, frontal, right 45°, right 90°), and the images extracted from the video have a size of 90 × 90. The Honda/UCSD DB (Honda UCSD Video Database) is a publicly recognized database provided for face tracking and recognition algorithms; the images used in the present invention are recorded for 15 seconds in a video-sequence environment of 15 frames per second and stored at a size of 640 × 480. To calculate the recognition performance of face recognition using the Honda/UCSD DB, a total of 500 images were constructed from 20 persons by arbitrarily extracting five images per pose from the images of each pose. FIG. 4 shows the configuration of the Honda/UCSD database.

In step S120, dimension-reduction preprocessing using a principal component analysis (PCA) algorithm is performed on each of the plurality of pose face images stored in the database. Principal component analysis is a representative algorithm that reduces a high-dimensional feature vector to a low-dimensional feature vector while minimizing the loss of information. When the face image size is N × N and the number of recognition-candidate face images is M, face recognition using the PCA algorithm constructs a face vector set of candidates by converting each candidate face image into a vector of size N² × 1, and the average image can be obtained from this candidate face vector set. Also, to obtain the covariance matrix, the difference vectors between the learning image vectors and the average face image vector are computed; the result is an N² × M matrix, and the covariance matrix has dimension N² × N². From this covariance matrix, M eigenvalues λ_i and their corresponding eigenvectors are calculated, and by rearranging the eigenvectors obtained from the eigenvalues into N × N form, eigenfaces resembling human faces can be obtained. Hereinafter, the dimension-reduction process using the principal component analysis algorithm of step S120 will be described in detail with reference to FIG. 2.

FIG. 2 illustrates the flow of the dimension-reduction step using the principal component analysis algorithm in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention. As shown in FIG. 2, step S120 includes: constructing a face vector set of recognition candidates for the face images of the plurality of poses (S121); normalizing the face images based on the mean and variance of the face vector set (S122); calculating an average face vector from the face vector set (S123); calculating difference vectors between the recognition-candidate face vectors and the average face vector (S124); calculating a covariance matrix of the recognition-candidate face images using the calculated difference vectors (S125); and selecting only the M′ eigenvectors having the largest eigenvalues among the M eigenvectors of the covariance matrix and acquiring weights by projecting each recognition-candidate face onto the eigenvectors (S126).

In step S121, the face vector set S of the face images Γ of M recognition candidates can be set as in the following equation (1).

Figure 112015107972411-pat00001

In step S122, the face images can be normalized based on the mean μ and the variance of the face vector set Γ, as shown in the following equation (2).

Figure 112015107972411-pat00002

In step S123, the average face vector Ψ can be calculated from the face vector set Γ, as shown in the following equation (3).

Figure 112015107972411-pat00003

In step S124, the difference vector Φ between each recognition-candidate face vector and the average face vector can be calculated as in the following equation (4).

Figure 112015107972411-pat00004

In step S125, the covariance matrix C of the recognition-candidate face images can be calculated using the calculated difference vectors Φ, as shown in the following equation (5).

Figure 112015107972411-pat00005

In step S126, the M′ eigenvectors having the largest eigenvalues among the M eigenvectors are selected as shown in the following equations (6) and (7), and the weight Ω^T is obtained by projecting each recognition-candidate face onto the eigenvectors.

Figure 112015107972411-pat00006

Figure 112015107972411-pat00007

FIG. 5 is a diagram illustrating the dimension reduction of the principal component analysis (PCA) method in the face recognition method robust to pose variations based on pose estimation according to the embodiment of the present invention. FIG. 5(a) shows a general data distribution, and FIG. 5(b) shows the dimension reduction of the data.
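To make the preprocessing of steps S121 to S126 concrete, the following is a minimal numpy sketch of eigenface-style dimension reduction. It is not the patent's implementation: the function and variable names are illustrative, per-image normalization is one reading of step S122, and the covariance is computed in the smaller M × M "snapshot" form rather than the N² × N² form described above (both yield the same leading eigenfaces).

```python
import numpy as np

def pca_preprocess(faces, m_prime):
    """Eigenface-style dimension reduction (sketch of steps S121-S126).

    faces   : (M, N*N) array, one flattened face image per row
    m_prime : number of eigenvectors (largest eigenvalues) to keep
    """
    # S122: normalize each face vector by its mean and standard deviation (assumed reading)
    mu = faces.mean(axis=1, keepdims=True)
    sigma = faces.std(axis=1, keepdims=True) + 1e-12
    faces = (faces - mu) / sigma

    # S123: average face vector over all recognition candidates
    mean_face = faces.mean(axis=0)

    # S124: difference vectors between each candidate and the average face
    diff = faces - mean_face                      # (M, N*N)

    # S125: covariance in the small "snapshot" form (M x M) instead of N^2 x N^2
    cov_small = diff @ diff.T / diff.shape[0]

    # S126: keep the M' eigenvectors with the largest eigenvalues, map them back
    # to image space (eigenfaces) and project the candidates onto them
    eigvals, eigvecs = np.linalg.eigh(cov_small)
    order = np.argsort(eigvals)[::-1][:m_prime]
    basis = diff.T @ eigvecs[:, order]            # (N*N, m_prime) eigenfaces
    basis /= np.linalg.norm(basis, axis=0, keepdims=True)
    weights = diff @ basis                        # (M, m_prime) projection weights
    return weights, basis, mean_face
```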

In step S130, the data preprocessed in step S120 may be input into a polynomial-based RBFNNs (Radial Basis Function Neural Networks) pattern classifier to learn the data of each pose. Hereinafter, the structure and optimization of the polynomial-based RBFNNs pattern classifier will be described in detail with reference to the accompanying drawings.

FIG. 6 is a diagram illustrating the overall structure of the polynomial-based RBFNNs (Radial Basis Function Neural Networks) pattern classifier used in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention. A basic neural network is an algorithm that imitates the human brain. Structurally, the RBFNNs pattern classifier consists of three parts: an input layer, a hidden layer, and an output layer. At each node there are as many activation functions as the number of input variables; the activation functions are radial basis functions, mainly of Gaussian form. The input data are preprocessed and reduced to low-dimensional data suitable for recognition, and then fed to the hidden layer. The outputs of the hidden-layer activation functions are multiplied by the connection weights between the hidden layer and the output layer to produce the final output of the output layer, where the connection weights are constants. The polynomial-based RBFNNs pattern classifier used in the present invention, like conventional RBFNNs, is structurally divided into the three modules of input layer, hidden layer, and output layer, but functionally it can be divided into a condition part, a conclusion part, and an inference part. Also, by using the membership (fitness) values of the FCM (Fuzzy C-Means) clustering algorithm instead of the Gaussian function as the activation function of the condition part, the characteristics of the input data can be better reflected. In addition, the connection weights of the conclusion part are extended from constant terms to polynomial forms such as linear, quadratic, and modified quadratic, as shown in the following equations (8) to (12).

[Type 1] Linear inference (Linear)

Figure 112015107972411-pat00008

[Type 2] Quadratic inference (Quadratic)

Figure 112015107972411-pat00009

Figure 112015107972411-pat00010

[Type 3] Modified quadratic inference (Modified Quadratic)

Figure 112015107972411-pat00011

Figure 112015107972411-pat00012

Here, x = [x1, x2, ..., xk], k denotes the number of input variables, R_j denotes the j-th fuzzy rule (j = 1, ..., c), c is the number of fuzzy rules, and f_j(x1, ..., xk) is the conclusion part of the j-th rule, that is, the local model for the j-th fuzzy rule. The output of the model can then be expressed by the following equation (13).

Figure 112015107972411-pat00013

Thus, by using connection weights in polynomial form, a linguistic analysis of the fuzzy rules such as the following expression (14) becomes possible.

Figure 112015107972411-pat00014
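As an illustration of how the final output of equation (13) combines FCM membership values with polynomial connection weights, the following sketch evaluates a Type 1 (linear) conclusion part. It assumes the memberships and coefficients are already available; the names and interface are illustrative, not from the patent.

```python
import numpy as np

def rbfnn_output(x, memberships, coeffs):
    """Output of a polynomial-based RBFNN in the spirit of equation (13).

    x           : (k,) input vector (already dimension-reduced)
    memberships : (c,) FCM membership values u_j(x), used as activations
    coeffs      : (c, k+1) coefficients of the Type-1 (linear) local models,
                  coeffs[j] = [a_j0, a_j1, ..., a_jk]
    """
    # f_j(x) = a_j0 + a_j1*x1 + ... + a_jk*xk  (linear local model)
    local = coeffs[:, 0] + coeffs[:, 1:] @ x     # (c,)
    # y = sum_j u_j(x) * f_j(x)
    return float(memberships @ local)
```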

In step S130, the preprocessed data of each pose can be classified by measuring the degree of membership based on the distance between the data and each cluster using the FCM (Fuzzy C-Means) clustering algorithm. The FCM clustering algorithm groups data with similar patterns, attributes, and types by measuring membership degrees based on the distance between each datum and each cluster center. The resulting membership values are used as the hidden-layer activation functions of the polynomial-based RBFNNs pattern classifier, and the clustering can be performed through the steps shown in FIG. 3.

FIG. 3 illustrates the flow of FCM clustering in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention. As shown in FIG. 3, the FCM clustering data classification process includes: selecting the number of clusters and the fuzzification coefficient and initializing the membership function (S131); calculating a center vector for each cluster (S132); calculating the distances between the center vectors and each datum and computing a new membership function (S133); and terminating the algorithm when the error between the membership function and the new membership function falls within the allowable range, and otherwise returning to step S132 (S134).

In step S131, the number of clusters and the fuzzification coefficient are selected, and the membership function U^(0) is initialized as shown in the following equation (15).

Figure 112015107972411-pat00015

In step S132, the center vector of each cluster is calculated as in the following equation (16).

Figure 112015107972411-pat00016

In step S133, the distance d between the cluster centers and the data is calculated as in the following equations (17) and (18), and a new membership function U^(1) is computed from these distances.

Figure 112015107972411-pat00017

Figure 112015107972411-pat00018

In step S134, the algorithm is terminated when the error falls within the permissible range ε as in the following equation (19); otherwise, the process returns to step S132.

Figure 112015107972411-pat00019
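A minimal numpy sketch of the FCM iteration of steps S131 to S134 follows. The random initialization, tolerance, and iteration cap are illustrative assumptions; the patent only fixes the overall procedure.

```python
import numpy as np

def fcm(data, c, m=2.0, eps=1e-5, max_iter=100):
    """Fuzzy C-Means clustering (sketch of steps S131-S134).

    data : (n, d) array, c : number of clusters, m : fuzzification coefficient
    returns the membership matrix U (c, n) and the cluster centers (c, d)
    """
    n = data.shape[0]
    # S131: initialize the membership function randomly, columns sum to 1
    U = np.random.rand(c, n)
    U /= U.sum(axis=0, keepdims=True)

    for _ in range(max_iter):
        # S132: center vector of each cluster (membership-weighted mean)
        w = U ** m
        centers = (w @ data) / w.sum(axis=1, keepdims=True)

        # S133: distances between centers and data, then the new memberships
        dist = np.linalg.norm(data[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=0, keepdims=True)

        # S134: stop when the change in membership is within the tolerance
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return U, centers
```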

Meanwhile, conventional fuzzy inference systems mainly use the Least Square Estimator (LSE), a global learning method that estimates the parameters of all rules simultaneously, for identification of the consequent parameters. This can cause overfitting when the number of fuzzy rules is large, and as the number of rules or inputs grows, the number of parameters to be estimated increases and the computation time becomes long. To compensate for this, the present invention uses a Weighted Least Square Estimator (WLSE) and applies a local learning method that estimates the parameters of each rule independently. Table 1 below compares the characteristics of the LSE and the WLSE.

Figure 112015107972411-pat00020

The LSE estimates the coefficients so that the sum of squared errors is minimized, whereas the WLSE differs in that each squared error is multiplied by a weight. The performance index of the WLSE can be expressed in matrix form as in the following equation (20).

Figure 112015107972411-pat00021

Here, a_j denotes the coefficient vector of the j-th polynomial to be estimated, Y denotes the output data, and U_j denotes the membership values of the input data in the j-th input space. X_j denotes the input data matrix for estimating the coefficients of the j-th local model; when the local model is linear, it can be defined as in the following equation (21).

Figure 112015107972411-pat00022

Here, m is the number of data, and the coefficient of the polynomial, which is a local model for the jth rule, can be obtained by the following equation (22).

Figure 112015107972411-pat00023
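The following sketch shows how the coefficients of one local model can be estimated independently with a weighted least-squares solve in the spirit of equation (22). It assumes a linear local model and uses the FCM memberships as the diagonal weight matrix; the function name and interface are illustrative, not the patent's code.

```python
import numpy as np

def wlse_coefficients(X, y, u_j):
    """Weighted least-squares estimate of one local model (sketch of equation (22)).

    X   : (m, p) regressor matrix of the j-th local model (e.g. columns [1, x1..xk])
    y   : (m,) output data
    u_j : (m,) FCM membership of each datum in the j-th input space (weights)
    """
    W = np.diag(u_j)                             # diagonal weight matrix U_j
    # a_j = (X^T U_j X)^-1 X^T U_j y  -- each rule is estimated independently
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```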

The polynomial-based RBFNNs pattern classifier designed as described above handles multidimensional input-output problems easily, has robust network characteristics, and shows excellent prediction performance.

In step S140, optimized parameters for the data of each pose can be obtained using particle swarm optimization (PSO). The optimized parameters in step S140 include the fuzzification coefficient, the polynomial type of the connection weights, the number of nodes, and the number of dimensions to be reduced; the polynomial type of the connection weights is one of a linear inference type, a quadratic inference type, and a modified quadratic inference type. In addition, in step S140, the parameters of each rule can be computed independently using the weighted least squares method, a local learning approach.

The particle swarm optimization (PSO) algorithm used for parameter optimization, first introduced by Kennedy and Eberhart, is an optimization algorithm based on the social behavior of biological groups such as bird flocks and fish schools. It is simple and efficient to compute, can generate good solutions within a short computation time, and shows more stable convergence characteristics than other stochastic methods. The particle swarm optimization algorithm proceeds as follows.

First, the initial swarm and the particle velocities are generated randomly; the initial position of each particle is selected as its pbest, and the best of these is selected as gbest. Next, the inertia weight is calculated through the following equations (23) and (24), and the velocity of the j-th particle is computed based on the inertia weight.

Figure 112015107972411-pat00024

Figure 112015107972411-pat00025

Next, based on the particle velocity, the position information of each particle can be updated as shown in the following equation (25).

Figure 112015107972411-pat00026

Next, the fitness of each particle is compared with that of its pbest, and pbest is updated if necessary; the best pbest is likewise compared with gbest, which is updated accordingly. The search continues until the termination condition is satisfied, and finally gbest holds the optimal position information. In the present invention, this optimization algorithm is used to optimize the fuzzification coefficient, the polynomial type of the connection weights, the number of nodes, and the number of reduced dimensions.
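A minimal sketch of the PSO loop described above follows. The swarm size, iteration count, inertia weight, and acceleration constants are illustrative defaults rather than values from the patent, and the parameters to be optimized are assumed to be encoded as a real-valued vector.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=50, w=0.7, c1=2.0, c2=2.0):
    """Minimal particle swarm optimization loop (sketch of equations (23)-(25)).

    fitness : function mapping a position vector to a scalar to be minimized
    dim     : number of parameters (e.g. fuzzification coefficient, polynomial
              type, number of nodes, reduced dimension, encoded as reals)
    """
    pos = np.random.rand(n_particles, dim)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()

    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        # velocity update with inertia weight w (equations (23)-(24) style)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # position update (equation (25) style)
        pos = pos + vel
        fit = np.array([fitness(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest
```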

In step S150, after the learning process of steps S110 to S140 is completed, dimension-reduction preprocessing using a two-directional two-dimensional principal component analysis ((2D)²PCA) algorithm is performed on the test face image detected from a preset test video. That is, in step S150, the (2D)²PCA algorithm combines (2D)PCA, which reduces the dimension in the row direction, with (2D)PCA, which reduces the dimension in the column direction, extracting only the principal components of the feature vectors of the image matrix and thereby reducing the dimension in both directions. When a new recognition candidate is input, it can be expressed using the two feature matrices thus generated as in the following equation (26).

Figure 112015107972411-pat00027

That is, with the feature matrices U and V of the row-direction (2D)PCA and the column-direction (2D)PCA described above, the dimension is reduced to m1 × d1 in the row direction and to m2 × d2 in the column direction, so that the size of the projected matrix Y is m1 × d2. By extracting only the largest principal components of the row and column feature vectors, (2D)²PCA achieves a recognition rate equivalent to that of 2DPCA while reducing the dimension to be computed and hence the computation time.
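The following numpy sketch illustrates the two-directional reduction: row-direction and column-direction (2D)PCA feature matrices are computed from image covariances, and an image is projected with both, as in equation (26). The component counts, the centering choice, and the variable names are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def two_directional_2dpca(images, d_row, d_col):
    """(2D)^2 PCA projection (sketch): reduce rows and columns of each image.

    images : (M, m, n) stack of face images
    d_row  : number of row-direction components, d_col : column-direction components
    """
    mean = images.mean(axis=0)
    centered = images - mean

    # row-direction (2D)PCA: image covariance of size (n, n)
    G_row = np.einsum('kij,kil->jl', centered, centered) / len(images)
    _, vec_r = np.linalg.eigh(G_row)
    V = vec_r[:, ::-1][:, :d_row]                # (n, d_row)

    # column-direction (2D)PCA: image covariance of size (m, m)
    G_col = np.einsum('kji,kli->jl', centered, centered) / len(images)
    _, vec_c = np.linalg.eigh(G_col)
    U = vec_c[:, ::-1][:, :d_col]                # (m, d_col)

    # equation (26) style projection: Y_k = U^T A_k V for every image A_k
    Y = np.einsum('mc,kmn,nr->kcr', U, centered, V)
    return Y, U, V
```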

In step S160, a similar pose can be estimated and detected for the test face image on which the dimension-reduction preprocessing has been performed in step S150. In step S160, before classifying the face pose using Multi-Space PCA, a database is constructed in which images are classified according to yaw angle (±90°, ±45°, 0°), and PCA is performed for each pose to construct a Multi-Space PCA space having pose-specific eigenface vectors. The face image to be tested is projected into each PCA space, the distances are calculated, and the pose having the minimum distance is classified to detect the similar pose.

FIG. 7 illustrates the configuration of the Multi-Space PCA used in the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention, FIG. 8 illustrates the distance values between adjacent poses, and FIG. 9 illustrates pose estimation using the Euclidean distance. As shown in FIG. 7, if the face image sequence obtained from the face tracker were used directly for face recognition, the differences among face images caused by pose changes would not be taken into account. Therefore, to improve face recognition performance, the faces extracted by the tracker should be classified by pose, and recognition should be performed within the classified pose. In the present invention, to classify the face pose, the same poses of different people are grouped according to yaw angle (±90°, ±45°, 0°) as shown in FIG. 7, and PCA is performed for each pose to construct a Multi-Space PCA space having pose-specific eigenface vectors.

Next, the face image to be tested is projected into each PCA space, the distances are calculated, and the image is classified into the pose with the minimum distance. If all components of the pose-specific eigenface vectors were used for classification, the pose could be misclassified by matching wrong components of the pose-specific eigenvectors. Therefore, to classify the pose, only several large principal components are extracted for each pose, the test image is projected onto the extracted principal components, and the pose whose PCA space yields the minimum Euclidean distance is taken as the estimated pose. The pose classification process is as follows: first, the PCA algorithm is executed for each pose to generate eigenface vectors; next, the large principal components representing the features of each pose are extracted; finally, the features of the recognition candidate are projected onto the principal components of each pose to find the closest pose.
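A minimal sketch of this Multi-Space PCA pose classification follows. The patent specifies projecting the test face into each pose's PCA space and picking the minimum distance; here that distance is realized as the reconstruction residual in each space, which is one common choice and should be read as an assumption rather than the patent's exact metric. All names are illustrative.

```python
import numpy as np

def estimate_pose(test_face, pose_spaces):
    """Multi-Space PCA pose estimation (sketch).

    test_face   : flattened test face vector
    pose_spaces : dict mapping a pose label (e.g. 'left90', 'front') to a tuple
                  (mean_face, basis) obtained by running PCA on that pose only;
                  basis columns are assumed orthonormal
    """
    best_pose, best_dist = None, np.inf
    for pose, (mean_face, basis) in pose_spaces.items():
        diff = test_face - mean_face
        coeff = basis.T @ diff                   # project into this pose's PCA space
        recon = basis @ coeff                    # reconstruct from the projection
        dist = np.linalg.norm(diff - recon)      # distance to the pose subspace
        if dist < best_dist:
            best_pose, best_dist = pose, dist
    return best_pose, best_dist
```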

When the pose is determined by the Multi-Space PCA method, an extracted image having a pose similar to the database can be classified into one of the five poses. However, if face recognition is then performed in the classified pose-specific database, the face recognition rate may decrease because of the difference in pose angle between the learned images and the classified image. To solve this problem, an extracted image is used for face recognition only when it is sufficiently similar to the pose-specific database, thereby improving the performance of face recognition in video. That is, when a subject holds a certain pose, the distance differences to the other poses form a quadratic-curve shape, and the distances from the estimated pose to its two adjacent poses are nearly equal. For example, assuming the current pose is frontal as shown in FIG. 8(a), there is almost no difference between the distances to the left (Left 45) and right (Right 45) poses. Therefore, the distances to the Left 45 and Right 45 poses are calculated, and when their difference is small, the current image is judged to be the image most similar to the database.

Thus, when the difference between the calculated distance values of Left 45 (Point 2) and Right 45 (Point 3) is denoted by λ and this difference is within a certain range (close to 0), the image represents a pose similar to the images in the database, and the face image can be used for face recognition. On the other hand, as shown in FIG. 8(b), the distance difference between the two points increases while the yaw angle changes and the face moves from one pose to the next pose. That is, as shown in FIG. 9, the estimated pose can be confirmed through the Euclidean distance: when the frontal pose is labeled '3' and the right and left 45° poses are labeled '4' and '2', the pose is estimated correctly for arbitrary right 45° and arbitrary frontal pictures, whereas for some arbitrary left 45° pictures the pose is mistaken for the frontal pose rather than the left pose.
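The adjacent-pose reliability check of FIG. 8 can be sketched as below; the tolerance is an illustrative parameter, since the patent only states that the difference λ should be close to zero.

```python
def is_reliable_pose(dist_left45, dist_right45, tol):
    """Adjacent-pose check (sketch of the lambda test from FIG. 8).

    For a frontal estimate, the distances to the Left 45 and Right 45 spaces
    should be nearly equal; only then is the frame used for face recognition.
    """
    lam = abs(dist_left45 - dist_right45)   # the difference called lambda
    return lam <= tol
```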

In step S170, the detected similar pose is applied to the optimized parameters obtained through the learning process of steps S110 to S140, and the test face image is determined as the recognition target.

As described above, in the face recognition method robust to pose variations based on pose estimation according to the embodiment of the present invention, face images of a plurality of poses are stored in a database; after dimension-reduction preprocessing using the principal component analysis algorithm, the data of each pose are learned with the polynomial-based RBFNNs pattern classifier; optimized parameters are obtained using the particle swarm optimization (PSO) algorithm; dimension reduction is performed on the test data using the two-directional two-dimensional principal component analysis algorithm; a similar pose is obtained for the test face image through a classification and estimation process; and the obtained similar pose is applied to the obtained optimized parameters to recognize the test face image. Accordingly, the face of a subject can be recognized even from a non-frontal pose, as well as under changes in size and pose. Further, by reducing the data dimension with the two-directional two-dimensional principal component analysis algorithm for the test data, the recognition speed and recognition rate can be improved, and the recognition performance can be improved by fast learning convergence with the optimized parameters.

Experimental Example

FIG. 10 illustrates the configuration of an experimental example of the face recognition method robust to pose variations based on pose estimation according to an embodiment of the present invention, and FIG. 12 illustrates pose estimation of the test data using (2D)²PCA in the experimental example. In this experiment, the Honda/UCSD DB was used to build a database of two-dimensional images for designing a face recognition system robust to pose changes in two-dimensional face recognition. First, (2D)²PCA was used for pose classification, and then the recognition performance was compared among recognition using PCA, recognition using (2D)²PCA, and recognition using PCA- or (2D)²PCA-based RBFNNs pattern classifiers.

The learning and verification data were obtained from the Honda/UCSD database extracted from the videos; for each pose (left 90°, left 45°, frontal, right 45°, right 90°), five images per person were used, giving 500 data in total. The experimental data were divided into learning, verification, and test sets using 5-fold cross validation as shown in Table 2. The test data consisted of arbitrary frontal images (rotated left or right by approximately 7°), arbitrary left images (approximately 45°), and arbitrary right images (approximately 45°). The reason the test images are extracted at arbitrary angles is that, although finer pose classification could yield higher recognition performance, it is practically impossible to construct learning data covering every pose. Therefore, learning data using pose images in the five directions are used, and when test data are input, recognition is performed with the most similar pose model to confirm the recognition performance.

Figure 112015107972411-pat00028

Also, as shown in FIG. 11, using k-fold cross validation, the data of each pose are used as verification data exactly once, and the recognition performance is evaluated as the average of the performances obtained by the k models.
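The 5-fold data split can be sketched as follows; the shuffling seed and the generator interface are illustrative assumptions, the experiment only requires that each pose's data serve as verification data exactly once and that the results be averaged.

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """k-fold cross-validation split used for the experiments (sketch).

    Each fold serves as the verification set exactly once; the reported
    performance is the average over the k resulting models.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```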

Pose estimation in this experiment used (2D)²PCA. As shown in FIG. 12, three pictures were taken for each of the 20 persons, and a total of 60 images were pose-estimated in the order of arbitrary frontal, arbitrary left, and arbitrary right, as in the image order on the right. The portions marked in red indicate pose estimations that differ from the original image angle.

The processing times for recognition with Case (1) and Case (2) on the training data classified by pose through (2D)²PCA are shown in Tables 3 and 4 below. Comparing the face recognition systems of Case (1) and Case (2), the system using (2D)²PCA shows a faster recognition processing speed. Also, in Case (2) of Table 3, recognition using the RBFNNs pattern classifier requires more processing time than Case (1) of Table 3 because learning and verification are performed. That is, Tables 3 and 4 show the processing-time results of the face recognition system.

Figure 112015107972411-pat00029

Figure 112015107972411-pat00030

Table 5 below shows the recognition-performance results of the face recognition system. After pose classification of the Honda/UCSD DB, the recognition performance was compared for the PCA and (2D)²PCA algorithms of Case (1) and Case (2) and for the PCA- or (2D)²PCA-based RBFNNs. In both Case (1) and Case (2), when used for face recognition, (2D)²PCA produces slightly better performance than PCA. Also, Case (2), which performs learning and verification using the RBFNNs pattern classifier, shows better performance than Case (1).

Figure 112015107972411-pat00031

In this experiment, after the Honda/UCSD data were pose-classified through (2D)²PCA, the face recognition performance of PCA and (2D)²PCA was compared with that of PCA- and (2D)²PCA-based polynomial RBFNNs pattern classifiers. The pose estimation results of the test data using (2D)²PCA are shown in FIG. 12, and the computing time and recognition performance of the face recognition system are shown in Tables 3 through 5. When the RBFNNs pattern classifier is used, the recognition performance is higher than Case (1) owing to the optimized parameters acquired through learning, although the computing time is relatively long because of learning and verification. In addition, comparing Case (1) and Case (2), (2D)²PCA has a faster processing speed than PCA because the amount of data to be computed is reduced by reducing the dimension in both the row and column directions, and (2D)²PCA also yields higher recognition performance than PCA in both Case (1) and Case (2). As mentioned above, Case (2) was confirmed to produce higher recognition performance than Case (1) because its recognition performance is computed through learning and verification.

The present invention may be embodied in many other specific forms without departing from the spirit or essential characteristics of the invention.

S110: a step of storing a face image of a plurality of poses detected from a preset learning moving picture into a database
S120: Performing preprocessing of dimension reduction using principal component analysis (PCA) algorithm
S121: constructing a face vector set of recognition candidates for a face image of a plurality of poses
S122: normalizing the face image based on the average and variance of the vector sets of the face images
S123: Calculating an average face vector from the face vector set
S124: Calculating difference vectors between the recognition-candidate face vectors and the average face vector
S125: Calculating a covariance matrix of the recognition-candidate face images using the calculated difference vectors
S126: Selecting only the M′ eigenvectors having the largest eigenvalues among the M eigenvectors of the covariance matrix, and obtaining weights by projecting each recognition-candidate face onto the eigenvectors
S130: Inputting the preprocessed data into the polynomial-based RBFNNs pattern classifier and learning the data of each pose
S131: selecting the number of clusters and the fuzzification coefficient, and initializing the belonging function
S132: calculating a center vector for each cluster
S133: Calculating the distances between the center vectors and each datum and computing a new membership function
S134: Terminating the algorithm when the error between the membership function and the new membership function falls within the allowable range, and returning to S132 otherwise
S140: Acquiring optimized parameters for the data of each pose using the particle swarm optimization (PSO) algorithm
S150: a step of performing preprocessing of dimension reduction using a two-way two-dimensional principal component analysis algorithm for a test face image detected from a preset test moving image
S160: Estimating and detecting a similar pose with respect to the test face image on which the preprocessing of dimension reduction has been performed
S170: Determining the test face image as the recognition target by applying the detected similar pose to the optimized parameters

Claims (5)

1. A face recognition method robust to pose variations based on pose estimation, comprising the steps of:
(1) detecting face images of a plurality of poses from a preset learning video and storing them in a database;
(2) performing dimension-reduction preprocessing using a principal component analysis (PCA) algorithm on each of the plurality of pose face images stored in the database;
(3) inputting the data preprocessed in step (2) into a polynomial-based RBFNNs (Radial Basis Function Neural Networks) pattern classifier to learn the data of each pose;
(4) acquiring optimized parameters for the data of each pose using a particle swarm optimization (PSO) algorithm;
(5) after the learning process of steps (1) to (4) is completed, performing dimension-reduction preprocessing using a two-directional two-dimensional principal component analysis ((2D)²PCA) algorithm on a test face image detected from a preset test video;
(6) estimating and detecting a similar pose for the test face image on which the dimension-reduction preprocessing of step (5) has been performed; and
(7) determining the test face image as the recognition target by applying the detected similar pose to the optimized parameters obtained through the learning process of steps (1) to (4),
wherein in the step (1),
the face images of the plurality of poses consist of five poses per person in the video images of the plurality of persons to be used for face recognition and are constructed by arbitrarily extracting five images per pose from the images of each pose, the face images are obtained as five face shapes according to the change of each pose angle (left 90°, left 45°, frontal, right 45°, right 90°), and the size of the images extracted from the video is 90 × 90,
wherein in the step (5),
the dimension is reduced using the two-directional two-dimensional principal component analysis algorithm, which combines (2D)PCA reducing the dimension in the row direction and (2D)PCA reducing the dimension in the column direction and extracts only the largest principal components from the feature vectors of the image matrix, and
wherein in the step (6),
before classifying the face pose using Multi-Space PCA, a database is constructed in which images are classified according to yaw angle (±90°, ±45°, 0°), PCA is performed for each pose to construct a Multi-Space PCA space having pose-specific eigenface vectors, the face image to be tested is projected into each PCA space, the distances are calculated, and the pose having the minimum distance is classified to detect the similar pose.
2. The method according to claim 1, wherein the step (2) comprises the steps of:
(2-1) constructing a face vector set of recognition candidates for the face images of the plurality of poses;
(2-2) normalizing the face images based on the mean and variance of the face vector set;
(2-3) calculating an average face vector from the face vector set;
(2-4) calculating difference vectors between the recognition-candidate face vectors and the average face vector;
(2-5) calculating a covariance matrix of the recognition-candidate face images using the calculated difference vectors; and
(2-6) selecting only the M′ eigenvectors having the largest eigenvalues among the M eigenvectors of the covariance matrix, and obtaining weights by projecting each recognition-candidate face onto the eigenvectors.
2. The method according to claim 1, wherein in the step (3)
The preprocessed data for each pose is classified by measuring the degree of belonging based on the distance between the data and each cluster using a FCM (Fuzzy C-Means) clustering algorithm,
The data classification in the step (3)
(3-1) selecting a number of clusters and a fuzzy coefficient, and initializing a belonging function;
(3-2) calculating a center vector for each cluster;
(3-3) calculating the distances between the center vectors and each datum, and computing a new membership function; and
(3-4) terminating the algorithm when the error between the membership function and the new membership function falls within the allowable range, and repeating from step (3-2) otherwise.
2. The method of claim 1, wherein the optimized parameter in step (4)
A polynomial type of the connection weight, a number of nodes, and a number of dimensions to be reduced,
The polynomial type of the connection weights,
A linear inference type, a quadratic inference type, and a modified quadratic inference type,
In the step (4)
A method of face recognition robust to changes in pose based on pose estimation, characterized in that parameters for each rule are independently calculated by using a Weighted Least Square Estimator of the local learning method.
delete
KR1020150155364A 2015-11-05 2015-11-05 A robust face recognition method for pose variations based on pose estimation KR101749268B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150155364A KR101749268B1 (en) 2015-11-05 2015-11-05 A robust face recognition method for pose variations based on pose estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150155364A KR101749268B1 (en) 2015-11-05 2015-11-05 A robust face recognition method for pose variations based on pose estimation

Publications (2)

Publication Number Publication Date
KR20170053069A KR20170053069A (en) 2017-05-15
KR101749268B1 true KR101749268B1 (en) 2017-06-20

Family

ID=58739574

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150155364A KR101749268B1 (en) 2015-11-05 2015-11-05 A robust face recognition method for pose variations based on pose estimation

Country Status (1)

Country Link
KR (1) KR101749268B1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633236B (en) * 2017-09-28 2019-01-22 北京达佳互联信息技术有限公司 Picture material understanding method, device and server
CN108228823A (en) * 2017-12-29 2018-06-29 中国电子科技集团公司信息科学研究院 A kind of binary-coding method and system of high dimensional image dimensionality reduction
KR102016082B1 (en) 2018-02-01 2019-08-29 고려대학교 산학협력단 Method and apparatus for pose-invariant face recognition based on deep learning
KR20200086168A (en) 2019-01-08 2020-07-16 연세대학교 산학협력단 System and Method for Supporting Pragmatic or Practical Clinical Trial
KR102194282B1 (en) * 2019-05-17 2020-12-23 네이버 주식회사 Method for generating pose similarity measurement model and apparatus for the same
KR102364040B1 (en) 2020-05-07 2022-02-17 대한민국 Method and apparatus for determining footprint identity using dimensional reduction algorithm
CN112949576B (en) * 2021-03-29 2024-04-23 北京京东方技术开发有限公司 Attitude estimation method, apparatus, device and storage medium
CN116645732B (en) * 2023-07-19 2023-10-10 厦门工学院 Site dangerous activity early warning method and system based on computer vision

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101438011B1 (en) 2013-11-08 2014-09-04 수원대학교산학협력단 Three-dimensional face recognition system using 3d scanner

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101438011B1 (en) 2013-11-08 2014-09-04 수원대학교산학협력단 Three-dimensional face recognition system using 3d scanner

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Proceedings of KIIS Fall Conference 2013, Vol. 23, No. 2, pp. 107-108, 2013. (published 2013)
The Transactions of the Korean Institute of Electrical Engineers, Vol. 64, No. 5, pp. 766-778, May 2015. (published May 2015)

Also Published As

Publication number Publication date
KR20170053069A (en) 2017-05-15

Similar Documents

Publication Publication Date Title
KR101749268B1 (en) A robust face recognition method for pose variations based on pose estimation
Kasar et al. Face recognition using neural network: a review
Gutta et al. Mixture of experts for classification of gender, ethnic origin, and pose of human faces
Bergasa et al. Unsupervised and adaptive Gaussian skin-color model
Lu et al. A method of face recognition based on fuzzy c-means clustering and associated sub-NNs
Yoo et al. Optimized face recognition algorithm using radial basis function neural networks and its practical applications
US8320643B2 (en) Face authentication device
KR101589149B1 (en) Face recognition and face tracking method using radial basis function neural networks pattern classifier and object tracking algorithm and system for executing the same
Liang et al. Pose-invariant facial expression recognition
Yang et al. Privileged information-based conditional regression forest for facial feature detection
Pandey et al. Image processing using principle component analysis
KR101687217B1 (en) Robust face recognition pattern classifying method using interval type-2 rbf neural networks based on cencus transform method and system for executing the same
Ahmadi et al. Iris tissue recognition based on GLDM feature extraction and hybrid MLPNN-ICA classifier
CN105654035B (en) Three-dimensional face identification method and the data processing equipment for applying it
KR100445800B1 (en) Face-image recognition method of similarity measure using correlation
Tong et al. Cross-view gait recognition based on a restrictive triplet network
Ohmaid et al. Iris segmentation using a new unsupervised neural approach
Juang et al. Human posture classification using interpretable 3-D fuzzy body voxel features and hierarchical fuzzy classifiers
Bashier et al. Face detection based on graph structure and neural networks
Gül Holistic face recognition by dimension reduction
Reddy et al. A novel face recognition system by the combination of multiple feature descriptors.
Espinosa-Duro et al. Face identification by means of a neural net classifier
Deng et al. View-invariant gait recognition based on deterministic learning and knowledge fusion
Sivapalan Human identification from video using advanced gait recognition techniques
Pankaj et al. Face recognition using fuzzy neural network classifier

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right