KR101705584B1 - System of Facial Feature Point Descriptor for Face Alignment and Method thereof - Google Patents


Info

Publication number
KR101705584B1
Authority
KR
South Korea
Prior art keywords
random forest
vector
feature point
feature
feature points
Prior art date
Application number
KR1020150094719A
Other languages
Korean (ko)
Other versions
KR20170005273A (en)
Inventor
노명철
Original Assignee
주식회사 에스원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 에스원 filed Critical 주식회사 에스원
Priority to KR1020150094719A priority Critical patent/KR101705584B1/en
Publication of KR20170005273A publication Critical patent/KR20170005273A/en
Application granted granted Critical
Publication of KR101705584B1 publication Critical patent/KR101705584B1/en

Classifications

    • G06K9/00268
    • G06K9/00281
    • G06K9/00288

Landscapes

  • Image Analysis (AREA)

Abstract

According to an aspect of the present invention, there is provided a description vector generating method for extracting the features of each feature point for face alignment, the method comprising: receiving a face image captured by a camera; extracting a patch image vector for each feature point of the input image; computing an output value by passing each extracted patch image vector through a random forest built by training facial feature point descriptor (FFPD) learning; calculating the probability of each feature point from the output values of the random forest; and generating a description vector from those probability values.

Description

Technical Field [0001] The present invention relates to a descriptor generation system for extracting the features of feature points for face alignment, and to a method of generating a descriptor for each feature point using the facial feature point descriptor.

The present invention relates to a feature point descriptor for extracting the features of feature points for face alignment. In general, face alignment finds the positions and shapes of the eyes, nose, mouth, and jaw by locating the feature points of the face. In face alignment methods such as the CLM (Constrained Local Model) and the SDM (Supervised Descent Method), the features of the feature points are extracted either with a general-purpose descriptor or with a model learned for each feature point: SIFT (Scale-Invariant Feature Transform) or HOG (Histogram of Oriented Gradients) is used as the general-purpose descriptor, and a Linear-SVM is used as the learned model.
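As an illustration of the conventional approach just described, the sketch below computes a HOG descriptor on a patch cropped around a single feature point using scikit-image; the patch size, HOG parameters, and feature-point coordinates are assumptions for the example, not values taken from the prior art documents.

    import numpy as np
    from skimage.feature import hog

    def hog_descriptor_at_point(gray_image, x, y, patch_size=32):
        """Crop a patch_size x patch_size patch centered on (x, y) and return its HOG vector."""
        half = patch_size // 2
        patch = gray_image[y - half:y + half, x - half:x + half]
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    # One descriptor per feature point: each call only looks at its own local patch.
    image = np.random.rand(128, 128)              # stand-in for a grayscale face image
    descriptor = hog_descriptor_at_point(image, 64, 64)

Each feature point requires its own patch crop and its own descriptor computation, which is the per-point cost discussed below.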

The prior art related to the present invention is disclosed in Korean Patent No. 10-1096049 (published on December 19, 2011). FIG. 1 is a block diagram of a conventional automatic face alignment system for robust face recognition. Referring to FIG. 1, the conventional automatic face alignment system for robust face recognition includes an image storage unit 10 for storing an aligned reference image, an input face image, and a transformed image; a feature point extracting unit 20 for extracting feature points of the aligned reference image and the input face image; a feature point distribution computing unit 30 for computing feature point distributions using the feature points of the aligned reference image and the input face image; an entropy processing unit 40 for calculating, comparing, and storing entropy values using the feature point distribution of the aligned reference image and that of the input face image; an image transformation unit 50 for rotating, translating, enlarging, or reducing the input face image; and a face detection unit 60 for detecting the finally automatically aligned face. The system requires an aligned reference image facing the front and an arbitrarily input face image to be automatically aligned; each image can be input to the system 100 using a digital camera, a camcorder, or the like. The input image is stored in the image storage unit 10 and processed by the system.

In addition, the conventional feature extraction method for face alignment uses descriptors such as SIFT and HOG. This approach is easy to use, but because it is not learning-based, its expressive power for faces is limited and its computation time is long. Another conventional feature extraction method for face alignment is a learning-based probability/score extraction method, such as a Linear-SVM learned for each feature point. Because it is learning-based, it can show good performance for faces, but it also takes a long time to compute. FIG. 2 is a block diagram illustrating the conventional probability/score method for extracting the features of feature points for face alignment. In the conventional probability/score method of FIG. 2, when N feature points are used, as in (a), (b), and (c), the overlapping area can be reduced by minimizing/optimizing the sliding-window regions. However, overlapping regions are still produced, and because each model is learned using only a local region, its expressive power may be limited.

As described above, the conventional face alignment methods extract the features of feature points using either a general-purpose descriptor such as SIFT or a learning-based descriptor such as a Linear-SVM, so either the computation takes a long time or the accuracy is low. Accordingly, an object of the present invention is to provide a learning-based descriptor that extracts the features of all feature points at once, computes quickly, and extracts features accurately.

A description vector generating method for extracting the features of feature points for face alignment according to the present invention includes the steps of: receiving a face image photographed by a camera; extracting a patch image vector for each feature point of the input image; performing training facial feature point descriptor (FFPD) learning on each extracted patch image vector to generate a random forest, finding the pair of two points used at each branch node of the random forest, and selecting and storing a probability value at each leaf node; and generating a description vector using the values (output values) of the leaf nodes reached by traversing the random forest. In addition, a description vector generation system for extracting the features of feature points for face alignment includes: a camera that captures a face and transmits face image information; a feature point extracting unit that receives and stores the face image information from the camera and extracts a patch image vector for each feature point of the face; a feature point learning unit that performs FFPD learning on each extracted patch image vector to generate a random forest, finds the pair of two points used at each branch node of the random forest, and calculates and stores a probability value at each leaf node; and a descriptor generating unit that receives the probability values at the leaf nodes reached by traversing the random forest and generates a descriptor in the form of a description vector.

As described above, the descriptor generation system for extracting the features of feature points for face alignment according to the present invention, and the method of generating a descriptor for each feature point, extract the features of all feature points at once and use only simple binary comparisons, so the computation is fast. Another effect of the present invention is that the descriptor forms a description vector as a set of description values for every facial feature point, so the features of the face can be expressed richly and accurately. A further effect of the present invention is that accurate face alignment and eye detection become possible, which improves the performance of face recognition, facial expression recognition, and 3D face reconstruction.

FIG. 1 is a block diagram of a conventional automatic face alignment system for robust face recognition;
FIG. 2 is a block diagram illustrating a conventional probability/score method for extracting the features of feature points for face alignment;
FIG. 3 is a configuration diagram of a description vector generating system for extracting the features of feature points for face alignment according to the present invention;
FIG. 4 is a control flowchart of a description vector generation method for each feature point for face alignment according to the present invention;
FIG. 5 is a flowchart of the FFPD learning method applied to the present invention;
FIG. 6 is a flowchart of the tree generation method applied to the present invention;
FIG. 7 is a control flowchart of the branch function determination used in generating a tree applied to the present invention;
FIG. 8 shows an example of a random forest having K trees that takes a patch image and a feature point serial number as input, as applied to the present invention;
FIG. 9 shows an example of a description vector obtained by the present invention; and
FIG. 10 is an example showing the characteristics of the descriptor using the description vector obtained in the present invention.

A description vector generating system and a description vector generating method for extracting the features of feature points for face alignment according to an embodiment of the present invention will be described with reference to FIGS. 3 to 10.

FIG. 3 is a configuration diagram of a description vector generating system for extracting the features of feature points for face alignment according to the present invention. Referring to FIG. 3, the description vector generation system according to an embodiment of the present invention includes: a camera 110 for capturing a face and transmitting face image information; a feature point extraction unit 120 for receiving the face image information and extracting a patch image vector for each feature point of the face; a feature point learning unit 130 for performing FFPD learning on each extracted patch image vector to generate a random forest, finding the pair of two points used at each branch node of the random forest, and calculating and storing a probability value at each leaf node; and a descriptor generating unit 140 for receiving the probability values at the leaf nodes reached by traversing the random forest and generating a descriptor in the form of a description vector.

FIG. 4 is a control flowchart of the description vector generation method for each feature point for face alignment of the present invention. Referring to FIG. 4, the description vector generation method according to an embodiment of the present invention includes: a step S11 of receiving a face image photographed by a camera; a step S12 of extracting a patch image vector for each feature point of the input image; a step S13 of performing training facial feature point descriptor (FFPD) learning on each extracted patch image vector to generate a random forest, finding the pair of two points used at each branch node of the random forest, and selecting and storing a probability value at each leaf node; and a step S14 of generating a description vector using the leaf-node values (output values) reached by traversing the random forest. Obtaining the leaf-node values (output values) in step S13 is performed by selecting an arbitrary pair of two points (a, b) of the patch image at each internal node of each of the K trees of the random forest (K is an arbitrary constant), determining whether a - b is positive or negative to decide the branching direction (right or left) of the patch image vector, repeating this procedure, and selecting the pair having the lowest entropy; the output value is then obtained by passing the patch image through the random forest. The entropy E can be defined by the following Equation (1).

[Equation (1) image: Figure 112015064465869-pat00001]

In Equation (1), c denotes the right (R) or left (L) branch, and P(n, c) denotes the probability that the n-th feature point appears on the right (or left) side, computed as the number of n-th feature points divided by the total number of data classified to the right (or left).
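The equation itself appears only as an image above; a plausible reconstruction from these definitions (an assumption, since the granted patent's exact form is not reproduced in this text) is

    E = -\sum_{c \in \{R, L\}} \sum_{n=1}^{N} P(n, c) \log P(n, c)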

In addition, the output value obtained through the random forest can be defined by the following Equation (2).

[Equation (2) image: Figure 112015064465869-pat00002]

In Equation (2), R is the random forest model, R_k is the k-th tree, f_n is the feature point serial number, I_x is the input patch image, K is the number of trees in the random forest, and n (≤ N) is the feature point index, where N is the total number of feature points.
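Equation (2) is likewise only an image; given these definitions, a plausible form (an assumption, not the patent's verbatim formula) is the average of the K tree outputs:

    R(f_n, I_x) = \frac{1}{K} \sum_{k=1}^{K} R_k(f_n, I_x)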

In addition, in step S14, the description vector can be defined as the following equation (3).

[Equation (3) image: Figure 112015064465869-pat00003]

Equation (3) indicates that the leaf-node value of each tree represents a probability distribution over the N feature points, so the probability values of all feature points can be obtained at once without repeating the operation for each feature point.
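Because Equation (3) is also only an image, a plausible reconstruction consistent with the K x N dimensionality described below in connection with FIG. 9 (again an assumption) is the concatenation of the K leaf-node distributions:

    D(I_x) = \big( R_1(f_1, I_x), \ldots, R_1(f_N, I_x), \; \ldots, \; R_K(f_1, I_x), \ldots, R_K(f_N, I_x) \big) \in \mathbb{R}^{K \cdot N}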

FIG. 5 is a flowchart of the FFPD learning method applied to the present invention. Referring to FIG. 5, the FFPD learning method according to the present invention includes: a step S31 of extracting a patch image vector for each feature point; a step S32 of generating a random forest composed of K trees based on the patch image vectors; and a step S33 of finding the pair of two points used at each branch node of the random forest and calculating the probability of each feature point for the data arriving at each leaf node.

FIG. 6 is a flowchart of the tree generation method applied to the present invention. Referring to FIG. 6, the tree generation method includes: a step S41 of selecting an arbitrary pair of two points (a, b) of the patch image to determine the branch function; a step S42 of determining the branching direction (right node R or left node L) of the vector set by evaluating X(a) > X(b) for every patch image vector X; and a step of repeating steps S41 and S42 for the vector set of each branching direction. If no vector corresponds to R or L, the repetition is stopped.

FIG. 7 is a control flowchart of the branch function determination used in generating a tree applied to the present invention. Referring to FIG. 7, the branch function determination includes: a step S51 of selecting an arbitrary pair of two points (a, b) of the patch image; a step S52 of determining the branching direction (right node R or left node L) of the vector set by evaluating X(a) > X(b) for every patch image vector X; a step S53 of calculating the entropy of the vector sets of the two branching directions; and a step S54 of repeating steps S51 to S53 and selecting the pair (a', b') having the lowest entropy as the coefficients of the branch function X(a) > X(b). The entropy E is defined by Equation (1) in the same way as above.
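A minimal sketch of the tree-growing procedure of steps S41 to S54 is shown below; it is not the patent's implementation, and the function names, the number of candidate pairs, the maximum depth, and the toy data are assumptions. Each internal node stores a point pair (a, b) chosen to minimize the entropy of the split X(a) > X(b), and each leaf stores the probability of each feature point serial number among the training vectors that reached it.

    import numpy as np

    def split_entropy(labels):
        """Entropy of the feature-point serial numbers that reach one side of a split."""
        if labels.size == 0:
            return 0.0
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())

    def choose_branch_function(patches, labels, n_candidates=200, rng=None):
        """Try random point pairs (a, b); keep the pair whose split has the lowest total entropy."""
        rng = np.random.default_rng() if rng is None else rng
        best_pair, best_e = None, np.inf
        for _ in range(n_candidates):
            a, b = rng.choice(patches.shape[1], size=2, replace=False)
            right = patches[:, a] > patches[:, b]      # X(a) > X(b) goes to the right node
            e = split_entropy(labels[right]) + split_entropy(labels[~right])
            if e < best_e:
                best_e, best_pair = e, (int(a), int(b))
        return best_pair

    def grow_tree(patches, labels, n_points, depth=0, max_depth=10):
        """Recursively grow one tree; each leaf stores a probability over the N feature points."""
        if depth == max_depth or np.unique(labels).size == 1:
            return {"leaf": np.bincount(labels, minlength=n_points) / labels.size}
        a, b = choose_branch_function(patches, labels)
        right = patches[:, a] > patches[:, b]
        if right.all() or not right.any():             # one side empty: stop splitting
            return {"leaf": np.bincount(labels, minlength=n_points) / labels.size}
        return {"pair": (a, b),
                "right": grow_tree(patches[right], labels[right], n_points, depth + 1, max_depth),
                "left":  grow_tree(patches[~right], labels[~right], n_points, depth + 1, max_depth)}

    # Toy example: 200 patch vectors of 32*32 = 1024 pixels, labels in [0, N) for N = 5 feature points.
    rng = np.random.default_rng(0)
    patches = rng.random((200, 1024))
    labels = rng.integers(0, 5, size=200)
    forest = [grow_tree(patches, labels, n_points=5) for _ in range(3)]   # K = 3 trees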

FIG. 8 is a diagram illustrating an example of a random forest having K trees that takes a patch image and a feature point serial number as input, as applied to the present invention. Referring to FIG. 8, for each training image, pairs consisting of the serial number (ID) of each feature point and the patch image centered on that feature point are used as the input vectors of each tree of the random forest, and the probability of each feature point serial number is calculated at each leaf node of each tree.

FIG. 9 shows an example of a description vector obtained by the present invention. Referring to FIG. 9, a description vector is obtained by passing the input patch image through the random forest. The description vector is the result of traversing the trees and is a K x N dimensional vector (K is the number of trees and N is the number of feature points); it describes a patch image, and each dimension is the probability value of the corresponding feature point serial number.
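A matching sketch of descriptor generation, assuming the tree dictionaries produced by the grow_tree() sketch above (again an illustration, not the patent's implementation): each patch image vector is routed down all K trees by the stored binary point-pair tests, and the K leaf-node distributions are concatenated into one K x N dimensional description vector.

    import numpy as np

    def traverse(tree, patch):
        """Follow the binary point-pair tests down one tree and return the leaf probability vector."""
        node = tree
        while "leaf" not in node:
            a, b = node["pair"]
            node = node["right"] if patch[a] > patch[b] else node["left"]
        return node["leaf"]

    def description_vector(forest, patch):
        """Concatenate the K leaf-node distributions into one K*N-dimensional description vector."""
        return np.concatenate([traverse(tree, patch) for tree in forest])

    # Example usage with the toy forest from the previous sketch:
    # d = description_vector(forest, patches[0])     # shape: (K * N,) = (3 * 5,)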

FIG. 10 is an example showing the characteristics of the descriptor using the description vector obtained in the present invention. Referring to FIG. 10, using the description vector according to the present invention, probability values for multiple feature points can be obtained with a single sliding-window search, which enables a fast and rich description.

110: camera, 120: feature point extracting unit,
130: feature point learning unit, 140: descriptor generating unit,

Claims (15)

A description vector generation system for extracting features of feature points for face alignment of a received camera image, the system comprising:
A camera 110 for photographing a face and transmitting facial image information;
A feature point extraction unit 120 for receiving and storing face image information from a camera and extracting a patch image vector for each feature point of the face from the received face image information;
a feature point learning unit 130 for performing FFPD learning on each extracted patch image vector to generate a random forest, finding the pair of two points used at each branch node of the random forest, and calculating and storing a probability value at each leaf node; and
a descriptor generating unit 140 for receiving the probability value (output value) at each leaf node reached by traversing the random forest and generating a descriptor in the form of a description vector.
The system according to claim 1,
wherein the description vector generation system selects an arbitrary pair of two points (a, b) of the patch image at each internal node of each of the K trees of the random forest (K is an arbitrary constant), determines whether a - b is positive or negative to decide the branching direction (right or left) of the patch image vector, and generates the output value obtained by passing through the random forest by selecting the pair having the lowest entropy.
The system according to claim 1,
wherein the output value is defined by the following Equation (2):
[Equation (2) image: Figure 112015064465869-pat00004]
in the description vector generation system for extracting features of feature points for face alignment, where R is the random forest model, R_k is the k-th tree, f_n is the feature point serial number, I_x is the input patch image, K is the number of trees in the random forest, and n (≤ N) is the feature point index.
The system according to claim 1,
wherein the description vector is defined by the following Equation (3):
[Equation (3) image: Figure 112015064465869-pat00005]
in the description vector generation system for extracting features of feature points for face alignment.
The system according to claim 2,
wherein the entropy is defined by the following Equation (1):
[Equation (1) image: Figure 112015064465869-pat00006]
in the description vector generation system for extracting features of feature points for face alignment, where c denotes the right (R) or left (L) branch and P(n, c) denotes the probability that the n-th feature point appears on the right (or left) side.
The system according to claim 4,
wherein, in Equation (3), the leaf-node value of each tree in the random forest represents a probability distribution over the N feature points, in the description vector generation system for extracting features of feature points for face alignment.
A description vector generation method for extracting features of feature points for face alignment of a received camera image, the method comprising:
A step (S11) of receiving a face image photographed from a camera;
Extracting a patch image vector for each feature point with respect to the input image (S12);
a step S13 of performing training facial feature point descriptor (FFPD) learning on each patch image vector extracted from the face image to generate a random forest, finding the pair of two points used at each branch node of the random forest, and selecting and storing a probability value at each leaf node; and
a step S14 of generating a description vector using the leaf-node values (output values) reached by traversing the random forest.
The method according to claim 7,
wherein obtaining the leaf-node values (output values) reached by traversing the random forest, in the step of generating the random forest through training facial feature point descriptor (FFPD) learning, finding the pair of two points used at each branch node, and selecting and storing a probability value, comprises:
selecting an arbitrary pair of two points (a, b) of the patch image at each internal node of each of the K trees of the random forest (K is an arbitrary constant), and determining whether a - b is positive or negative to decide the branching direction (right or left) of the patch image vector; and
generating the output value obtained by passing through the random forest by repeating the above and selecting the pair having the lowest entropy, in the description vector generation method for extracting features of feature points for face alignment.
The method according to claim 7,
wherein the output value is defined by the following Equation (2):
[Equation (2) image: Figure 112015064465869-pat00007]
in the description vector generation method for extracting features of feature points for face alignment, where R is the random forest model, R_k is the k-th tree, f_n is the feature point serial number, I_x is the input patch image, K is the number of trees in the random forest, and n (≤ N) is the feature point index.
The method according to claim 7,
wherein the description vector is defined by the following Equation (3):
[Equation (3) image: Figure 112015064465869-pat00008]
in the description vector generation method for extracting features of feature points for face alignment.
The method according to claim 7,
wherein the FFPD learning comprises:
Extracting a patch image vector for each feature point (S31);
A step (S32) of generating a random forest consisting of K trees based on the patch image vector;
and finding the pair of two points used at each branch node of the random forest and calculating (S33) the probability of each feature point for the data arriving at each leaf node, in the description vector generation method for extracting features of feature points for face alignment.
The method according to claim 7,
wherein the method of generating a tree of the random forest comprises:
Selecting (S41) an arbitrary pair of two points (a, b) of the patch image to determine a branch function;
determining (S42) the branching direction (right node R or left node L) of the vector set by evaluating X(a) > X(b) for every patch image vector X; and
repeating steps S41 and S42 for the vector set of each branching direction, in the description vector generation method for extracting features of feature points for face alignment.
The method according to claim 8,
wherein the entropy is defined by the following Equation (1):
[Equation (1) image: Figure 112015064465869-pat00009]
in the description vector generation method for extracting features of feature points for face alignment, where c denotes the right (R) or left (L) branch and P(n, c) denotes the probability that the n-th feature point appears on the right (or left) side.
The method according to claim 12,
wherein, in the method of generating a tree of the random forest, the repetition is stopped if no vector corresponds to R or L, in the description vector generation method for extracting features of feature points for face alignment.
The method according to claim 12 or 14,
wherein the determination of the branch function used for generating the tree comprises:
Selecting (S51) an arbitrary pair of two points (a, b) of the patch image to determine a branch function;
determining (S52) the branching direction (right node R or left node L) of the vector set by evaluating X(a) > X(b) for every patch image vector X;
calculating (S53) the entropy of the vector sets of the two branching directions; and
repeating (S54) steps S51 to S53 and selecting the pair (a', b') having the lowest entropy as the coefficients of the branch function X(a) > X(b), in the description vector generation method for extracting features of feature points for face alignment.


KR1020150094719A 2015-07-02 2015-07-02 System of Facial Feature Point Descriptor for Face Alignment and Method thereof KR101705584B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150094719A KR101705584B1 (en) 2015-07-02 2015-07-02 System of Facial Feature Point Descriptor for Face Alignment and Method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150094719A KR101705584B1 (en) 2015-07-02 2015-07-02 System of Facial Feature Point Descriptor for Face Alignment and Method thereof

Publications (2)

Publication Number Publication Date
KR20170005273A KR20170005273A (en) 2017-01-12
KR101705584B1 (en) 2017-02-13

Family

ID=57811540

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150094719A KR101705584B1 (en) 2015-07-02 2015-07-02 System of Facial Feature Point Descriptor for Face Alignment and Method thereof

Country Status (1)

Country Link
KR (1) KR101705584B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363187A (en) * 2019-08-29 2019-10-22 上海云从汇临人工智能科技有限公司 A kind of face identification method, device, machine readable media and equipment

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108711B (en) * 2017-12-29 2019-12-17 深圳云天励飞技术有限公司 Face control method, electronic device and storage medium
CN109508089B (en) * 2018-10-30 2022-06-14 上海大学 Sight line control system and method based on hierarchical random forest
KR102324589B1 (en) * 2020-04-07 2021-11-09 주식회사 에스원 Authentication server and method for supporting service based on face authentication

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101096049B1 (en) 2010-06-24 2011-12-19 한국과학기술원 Automatic face alignment system for robust face recognition and method therefor
KR101506812B1 (en) 2013-10-10 2015-04-08 재단법인대구경북과학기술원 Head pose estimation method using random forests and binary pattern run length matrix
KR101515308B1 (en) 2013-12-31 2015-04-27 재단법인대구경북과학기술원 Apparatus for face pose estimation and method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101096049B1 (en) 2010-06-24 2011-12-19 한국과학기술원 Automatic face alignment system for robust face recognition and method therefor
KR101506812B1 (en) 2013-10-10 2015-04-08 재단법인대구경북과학기술원 Head pose estimation method using random forests and binary pattern run length matrix
KR101515308B1 (en) 2013-12-31 2015-04-27 재단법인대구경북과학기술원 Apparatus for face pose estimation and method thereof

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363187A (en) * 2019-08-29 2019-10-22 上海云从汇临人工智能科技有限公司 A kind of face identification method, device, machine readable media and equipment

Also Published As

Publication number Publication date
KR20170005273A (en) 2017-01-12

Similar Documents

Publication Publication Date Title
CN108491794B (en) Face recognition method and device
JP7248807B2 (en) Automatic recognition and classification of hostile attacks
JP6330385B2 (en) Image processing apparatus, image processing method, and program
EP3772036A1 (en) Detection of near-duplicate image
US8538164B2 (en) Image patch descriptors
CN109871821B (en) Pedestrian re-identification method, device, equipment and storage medium of self-adaptive network
KR101705584B1 (en) System of Facial Feature Point Descriptor for Face Alignment and Method thereof
CN105069457B (en) Image recognition method and device
CN112200057A (en) Face living body detection method and device, electronic equipment and storage medium
CN111191535B (en) Pedestrian detection model construction method based on deep learning and pedestrian detection method
US8989505B2 (en) Distance metric for image comparison
CN112200056A (en) Face living body detection method and device, electronic equipment and storage medium
CN109815823B (en) Data processing method and related product
CN111539456B (en) Target identification method and device
CN115115981A (en) Data processing method, device, equipment, storage medium and computer program product
JP7336653B2 (en) Indoor positioning method using deep learning
KR102449031B1 (en) Method for indoor localization using deep learning
KR102270009B1 (en) Method for detecting moving object and estimating distance thereof based on artificial intelligence algorithm of multi channel images
JP2014225168A (en) Program, device, and method for calculating similarity between images represented by feature point set
JP2017054450A (en) Recognition unit, recognition method and recognition program
KR20230157841A (en) Method for image-based knowledge graph augmentation and embedding and system thereof
CN112989869B (en) Optimization method, device, equipment and storage medium of face quality detection model
CN115359487A (en) Rapid railcar number identification method, equipment and storage medium
JP2015184743A (en) Image processor and object recognition method
KR102101481B1 (en) Apparatus for lenrning portable security image based on artificial intelligence and method for the same

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant