WO2015165227A1 - Human face recognition method - Google Patents

Human face recognition method

Info

Publication number
WO2015165227A1
WO2015165227A1 (PCT/CN2014/089652)
Authority
WO
WIPO (PCT)
Prior art keywords
face
face recognition
model
feature
recognition method
Prior art date
Application number
PCT/CN2014/089652
Other languages
French (fr)
Chinese (zh)
Inventor
李俊
Original Assignee
珠海易胜电子技术有限公司
Priority date
Filing date
Publication date
Application filed by 珠海易胜电子技术有限公司 filed Critical 珠海易胜电子技术有限公司
Publication of WO2015165227A1 publication Critical patent/WO2015165227A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition

Definitions

  • The invention relates to a face recognition method.
  • Face recognition technology has developed rapidly in recent years, yet it still cannot cope satisfactorily with real living environments such as outdoor scenes and is used mainly indoors. The main difficulties remain illumination change, pose change, age change, and occlusion; each affects the recognition algorithm used by a face recognition system to a different degree. The classification of face recognition methods, their advantages and disadvantages, and the difficulties that affect them are as follows:
  • Appearance-based face recognition methods generate a face template from the pixel values of the face image; geometric-feature-based methods do not rely on pixel values but generate a face template from the geometric positional relationships among the facial feature points (eyes, nose, mouth, ears, ...).
  • Compared with geometric-feature-based methods, appearance-based methods can extract very rich facial features from every pixel of the image, so they deliver higher recognition performance than geometric methods that rely on only a few feature points, and most successful face recognition methods today are appearance-based.
  • However, appearance-based methods cannot cope well with the illumination changes that affect pixel values.
  • Geometric-feature-based methods rely on geometric positional relationships and are insensitive to illumination change, so they can compensate for this shortcoming of appearance-based methods.
  • Depending on whether the whole face image or local regions are examined when generating the face template, face recognition methods divide into global face recognition methods and local face recognition methods.
  • Global methods, which examine the whole face image, have the advantage of expressing both local and global facial features, but they cannot cope with pose change.
  • Local methods are more robust to pose change than global methods and have the advantage of reflecting the local characteristics of the face well.
  • Elastic Bunch Graph Matching (EBGM), a feature-point-based local method, is one of the most successful face recognition methods, but local methods have the disadvantage of not reflecting the global characteristics of the face. To overcome this, methods combining global and local recognition have emerged and brought some performance improvement; however, both components rest on appearance-based recognition and therefore cannot overcome its drawbacks.
  • The technical problem to be solved by the present invention is to provide a face recognition method that finds facial feature points effectively, is independent of illumination change, and is stable under pose change.
  • The invention provides a face recognition method comprising the following steps:
  • S1: generating a face elastic bunch graph;
  • S2: generating an appearance-based face recognition model, and calculating the cosine similarity between the appearance-based face recognition model and the existing face model vectors in the database;
  • S3: generating a geometric-feature-based face recognition model, and calculating the cosine similarity between the obtained geometric-feature-based face recognition model and the existing face model vectors in the database;
  • S4: fusing the similarity levels of step S2 and step S3 using logistic regression;
  • S5: determining the face recognition result from the result of step S4.
  • When the face elastic bunch graph is generated, facial feature points are extracted by pattern detection based on Haar features within the detected face region.
  • To generate the face elastic bunch graph, four points are first extracted in the detected face region: the centers of the left and right eyeballs, the center of the mouth, and the chin point, forming the initial partial face model. In a template graph with 30 feature points, the relationship between each feature point and the four points of the initial partial face model is analyzed to generate a two-dimensional affine transformation; applying this transformation to the 30 feature points of the template graph yields the 30 feature points and their corresponding feature values, giving the initial global face model. For all 30 feature points of the initial global face model, the correct convergence points are sought, and the face elastic bunch graph is generated with these as its feature points.
  • To generate the appearance-based face recognition model, a Gabor jet is extracted at each of the 30 feature points of the face elastic bunch graph, and the vector obtained by concatenating them serves as the initial appearance-based face model; the magnitudes of the complex Gabor jet coefficients are taken to form a vector with the 40 magnitudes as elements. PCA and LDA are applied to the initial face model to obtain the appearance-based face recognition model.
  • To generate the geometric-feature-based face recognition model, the distances between the extracted facial feature points are computed, and PCA and LDA are applied to feature vectors whose elements are the ratios between the horizontal-axis and vertical-axis direction components, yielding the geometric-feature-based face recognition model.
  • The invention adopts a face recognition method that fuses the appearance-based method and the geometric-feature-based method at the similarity level; it can be applied satisfactorily in real living environments, proposes a more effective way of finding facial feature points, and further proposes a geometric-feature-based face recognition method that is independent of illumination change and stable under pose change.
  • FIG. 1 is a schematic flow chart showing a face recognition method according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method for recognizing a face, including the following steps:
  • In the present invention, four points are first extracted in the detected face region: the centers of the left and right eyeballs, the center of the mouth, and the chin point, forming the initial partial face model. In a template graph with 30 feature points, the relationship between each feature point and the four points of the initial partial face model is analyzed to generate a two-dimensional affine transformation. Applying this transformation to the 30 feature points of the template graph yields the 30 feature points and their corresponding feature values, giving the initial global face model.
  • The first term accounts for scaling and rotation; the second term is the translation term.
  • The translation term is simply computed as the difference between the centroids of the two partial models.
  • The four entries of the rotation-and-scaling matrix can be obtained from the relationships between corresponding points of the two partial face models.
  • Each partial face model consists of 4 points; the rotation-and-scaling matrix is obtained by linear regression, minimizing the error so that the 4 corresponding points are brought as close together as possible.
  • The obtained affine transformation is applied to the template graph to generate the initial global face model.
  • G_I = T(G_T), where G_I is the face elastic bunch graph and G_T is the template graph.
  • After the initial global face model is obtained, a confidence value is computed for the local region centered at each feature point; confidence values are also computed for neighboring points, and each feature point is updated to the point with the higher confidence. This repeats until no feature point can be updated further.
  • The correct convergence points are thus found for all 30 feature points, and the face elastic bunch graph is generated with these as its feature points.
  • In the present invention, Haar features are used instead of Gabor features, and facial feature points are extracted by pattern detection based on Haar features in the detected face region. A Haar feature, instead of examining the pixel value at each point, examines sums of pixel values over regions; that is, it detects differences or sums of the pixel values of various patterns in the candidate region. To improve object-detection performance such Haar features must be rich, which is achieved by training a cascade classifier.
  • Detectors using Haar features are faster and more accurate than other detectors and are widely used for object detection.
  • The Viola-Jones face detector based on Haar features is the most successful such detector. In the present invention, for each feature point, patches centered on that point are extracted from a large face database, and a Viola-Jones detector is trained on them.
  • A Gabor jet is extracted at each of the 30 feature points of the face elastic bunch graph, and the vector obtained by concatenating them serves as the initial appearance-based face model.
  • A Gabor jet is obtained by convolving Gabor filters with the image at the pixel of interest.
  • The wave vector determines the type of the Gabor filter; a total of 40 Gabor filters are formed from 5 frequencies and 8 orientations.
  • A Gabor jet is thus a set of 40 complex coefficients.
  • The magnitudes of the complex Gabor jet coefficients are taken, forming a vector with the 40 magnitudes as elements.
  • The appearance-based face recognition model is obtained by applying PCA and LDA to the initial face model.
  • The cosine similarity between the appearance-based face recognition model obtained above and the existing face model vectors in the database is calculated.
  • To be stable under pose change, the horizontal and vertical components of each distance are examined independently.
  • A human head generally rotates only about the vertical and horizontal axes.
  • The face template generation stage is as follows:
  • n: the number of feature points
  • For each direction axis, all possible pairs (combinations) of distances are formed, and each pair is assigned the ratio of the distances it contains.
  • The raw template vector obtained in this way contains many unnecessary features, and its recognition power is not high.
  • PCA is applied (separately per direction axis) to remove the unnecessary components and reduce the vector dimension.
  • LDA is then applied to the reduced vector, generating a geometric-feature-based face recognition model with high recognition power.
  • The cosine similarity between the geometric-feature-based face recognition model obtained above and the existing face model vectors in the database is calculated in the same way as for the appearance-based face recognition method.
  • The similarity levels of the appearance-based face recognition method and the geometric-feature-based face recognition method are fused using logistic regression.
  • The invention adopts a face recognition method that fuses the appearance-based method and the geometric-feature-based method at the similarity level; it can be applied satisfactorily in real living environments, proposes a more effective way of finding facial feature points, and further proposes a geometric-feature-based face recognition method that is independent of illumination change and stable under pose change.
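The logistic-regression fusion of step S4 can be sketched as follows. This is an illustrative sketch only: the weights below are hypothetical placeholders, since the patent does not disclose its learned coefficients; in practice they would be trained on labelled same-person/different-person pairs.

```python
import math

def fuse_similarities(s_appearance, s_geometric, w0=-2.0, w1=3.0, w2=2.0):
    # Logistic-regression mixing of the two cosine-similarity levels.
    # w0..w2 are hypothetical placeholder weights, not the patent's values.
    z = w0 + w1 * s_appearance + w2 * s_geometric
    return 1.0 / (1.0 + math.exp(-z))

# A pair scoring high under both models is pushed toward 1,
# a pair scoring low under both toward 0.
high = fuse_similarities(0.9, 0.8)
low = fuse_similarities(0.1, 0.2)
```

The decision of step S5 would then compare the fused score against a threshold.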

Abstract

The present invention provides a human face recognition method, comprising the following steps: S1: generating a human face elastic bunch graph; S2: generating a human face recognition model based on appearance, and obtaining, by means of calculation, the cosine similarity between a vector of the human face recognition model based on the appearance and a vector of an existing human face model in a database; S3: generating a human face recognition model based on geometrical characteristics, and obtaining, by means of calculation, the cosine similarity between a vector of the human face recognition model based on the geometrical characteristics and the vector of the existing human face model in the database; S4: using logistic regression mixing according to the similarity level in step S2 and the similarity level in step S3; and S5: judging a human face recognition result according to a result of step S4. The present invention adopts a human face recognition method that mixes on similarity levels a human face recognition method based on appearance and a human face recognition method based on geometrical characteristics, and can be well applied to the actual living environment.

Description

Technical field:
The invention relates to a face recognition method.
Face recognition technology has developed rapidly in recent years, yet it still cannot cope satisfactorily with real living environments such as outdoor scenes and is used mainly indoors. The main difficulties remain illumination change, pose change, age change, and occlusion; each affects the recognition algorithm used by a face recognition system to a different degree. The classification of face recognition methods, their advantages and disadvantages, and the difficulties that affect them are as follows:
Appearance-based face recognition methods generate a face template from the pixel values of the face image; geometric-feature-based methods do not rely on pixel values but generate a face template from the geometric positional relationships among the facial feature points (eyes, nose, mouth, ears, ...). Compared with geometric-feature-based methods, appearance-based methods can extract very rich facial features from every pixel of the image, so they have delivered higher recognition performance than geometric methods that rely on only a few feature points, and most successful face recognition methods today are appearance-based.
However, appearance-based methods still cannot cope well with the illumination changes that affect pixel values, whereas geometric-feature-based methods rely on geometric positional relationships and are insensitive to illumination change, so they can compensate for this shortcoming of appearance-based methods.
Since geometric methods rely on facial feature points, the feature points must be extracted accurately in an earlier stage. Depending on whether the whole face image or local regions are examined when generating the face template, face recognition methods divide into global face recognition methods and local face recognition methods. Global methods, which examine the whole face image, have the advantage of expressing both local and global facial features but cannot cope with pose change; conversely, local methods are more robust to pose change than global methods and reflect the local characteristics of the face well.
In the past, Elastic Bunch Graph Matching (EBGM), a feature-point-based local method, was one of the most successful face recognition methods, but local methods have the disadvantage of not reflecting the global characteristics of the face. To overcome this, methods combining global and local recognition emerged and brought some performance improvement; however, both components rest on appearance-based recognition and therefore cannot overcome its drawbacks.
In real living environments, changes in illumination, pose, age, and occlusion make face recognition difficult, so face recognition technology cannot yet perform satisfactorily there, and this has motivated intensive research. Much progress has been made in this field in recent years, but the results still fall short of what is required.
Summary of the invention:
The technical problem to be solved by the present invention is to provide a face recognition method that finds facial feature points effectively, is independent of illumination change, and is stable under pose change.
The invention provides a face recognition method comprising the following steps:
S1: generating a face elastic bunch graph;
S2: generating an appearance-based face recognition model, and calculating the cosine similarity between the appearance-based face recognition model and the existing face model vectors in the database;
S3: generating a geometric-feature-based face recognition model, and calculating the cosine similarity between the obtained geometric-feature-based face recognition model and the existing face model vectors in the database;
S4: fusing the similarity levels of step S2 and step S3 using logistic regression;
S5: determining the face recognition result from the result of step S4.
Further, when generating the face elastic bunch graph, facial feature points are extracted by pattern detection based on Haar features within the detected face region.
Further, to generate the face elastic bunch graph, four points are first extracted in the detected face region: the centers of the left and right eyeballs, the center of the mouth, and the chin point, forming the initial partial face model. In a template graph with 30 feature points, the relationship between each feature point and the four points of the initial partial face model is analyzed to generate a two-dimensional affine transformation; applying this transformation to the 30 feature points of the template graph yields the 30 feature points and their corresponding feature values, giving the initial global face model. For all 30 feature points of the initial global face model, the correct convergence points are sought, and the face elastic bunch graph is generated with these as its feature points.
Further, to generate the appearance-based face recognition model, a Gabor jet is extracted at each of the 30 feature points of the face elastic bunch graph, and the vector obtained by concatenating them serves as the initial appearance-based face model; the magnitudes of the complex Gabor jet coefficients are taken to form a vector with the 40 magnitudes as elements. PCA and LDA are applied to the initial face model to obtain the appearance-based face recognition model.
Further, to generate the geometric-feature-based face recognition model, the distances between the extracted facial feature points are computed, and PCA and LDA are applied to feature vectors whose elements are the ratios between the horizontal-axis and vertical-axis direction components, yielding the geometric-feature-based face recognition model.
The invention adopts a face recognition method that fuses the appearance-based method and the geometric-feature-based method at the similarity level; it can be applied satisfactorily in real living environments, proposes a more effective way of finding facial feature points, and further proposes a geometric-feature-based face recognition method that is independent of illumination change and stable under pose change.
BRIEF DESCRIPTION OF THE DRAWINGS:
The drawings described herein provide a further understanding of the invention and constitute a part of this application; the illustrative embodiments of the invention and their description explain the invention and do not unduly limit it. In the drawings:
FIG. 1 schematically shows a flow chart of the face recognition method given by an embodiment of the present invention.
Detailed description:
The invention will be described in detail below with reference to the drawings and in conjunction with embodiments.
An embodiment of the present invention provides a face recognition method comprising the following steps:
I. Generating the face elastic bunch graph
In the present invention, four points are first extracted in the detected face region: the centers of the left and right eyeballs, the center of the mouth, and the chin point, forming the initial partial face model. In a template graph with 30 feature points, the relationship between each feature point and the four points of the initial partial face model is analyzed to generate a two-dimensional affine transformation; applying this transformation to the 30 feature points of the template graph yields the 30 feature points and their corresponding feature values, giving the initial global face model.
The transformation formula is as follows:
Without loss of generality, let the required transformation be (Figure WO-DOC-FIGURE-1):
T(x) = A x + b
where A is a 2x2 matrix and b a translation vector.
The first term accounts for scaling and rotation; the second term is the translation term.
The translation term is simply computed as the difference between the centroids of the two partial models.
That is (Figure WO-DOC-FIGURE-2), the translation is the difference between:
the centroid of the initial partial face model formed by the 4 initially obtained points (Figure WO-DOC-FIGURE-3), and
the centroid of the partial face model formed by the corresponding 4 points of the prepared face model (Figure WO-DOC-FIGURE-4).
The four entries of the rotation-and-scaling matrix can be obtained from the relationships between corresponding points of the two partial face models.
Each partial face model consists of 4 points; the rotation-and-scaling matrix is obtained by linear regression, minimizing the error so that the 4 corresponding points are brought as close together as possible.
The obtained affine transformation is applied to the template graph to generate the initial global face model.
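The least-squares fit of the affine transformation from the four point correspondences described above can be sketched as follows. This is an illustrative reconstruction assuming a standard homogeneous least-squares solve; the function names are ours, not the patent's.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding points (here N = 4: the two
    eyeball centers, the mouth center, and the chin point).
    Solves dst ~ src @ A.T + b.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Homogeneous design matrix: one row [x, y, 1] per point.
    X = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A = params[:2].T        # 2x2 rotation-and-scaling part
    b = params[2]           # translation part
    return A, b

def apply_affine(A, b, pts):
    # Apply the fitted transform to a set of points.
    return np.asarray(pts, float) @ A.T + b
```

Applying `apply_affine` to the 30 template feature points would give the initial global face model.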
G_I = T(G_T);
G_I: face elastic bunch graph;
G_T: template graph;
After the initial global face model is obtained, a confidence value is computed for the local region centered at each feature point; confidence values are also computed for neighboring points, and the feature point is updated to the point with the higher confidence.
If no point has a higher confidence, the update for that point terminates.
This continues until no feature point can be updated further.
For all 30 feature points of the initial global face model, the correct convergence points are sought, and the face elastic bunch graph is generated with these as its feature points.
In the present invention, Haar features are used instead of Gabor features, and facial feature points are extracted by pattern detection based on Haar features in the detected face region. A Haar feature, instead of examining the pixel value at each point, examines sums of pixel values over regions; that is, it detects differences or sums of the pixel values of various patterns in the candidate region. To improve object-detection performance such Haar features must be rich, which is achieved by training a cascade classifier.
Detectors using Haar features are faster and more accurate than other detectors and are widely used for object detection; in particular, the Viola-Jones face detector based on Haar features is the most successful such detector. In the present invention, for each feature point, patches centered on that point are extracted from a large face database, and a Viola-Jones detector is trained on them.
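The region-sum computation underlying Haar-like features can be sketched with an integral image (summed-area table). This is a generic illustration of the feature type, not the patent's trained detector.

```python
import numpy as np

def integral_image(img):
    # Summed-area table with a zero row and column prepended, so that
    # any rectangle sum needs only four lookups.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(np.asarray(img, np.int64), axis=0), axis=1)
    return ii

def region_sum(ii, r, c, h, w):
    # Sum of pixel values in the h x w window with top-left corner (r, c).
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_horizontal(ii, r, c, h, w):
    # Two-rectangle Haar-like feature: left half minus right half.
    half = w // 2
    return region_sum(ii, r, c, h, half) - region_sum(ii, r, c + half, h, half)
```

A cascade classifier would threshold many such features of varying position, size, and pattern.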
II. Generating the appearance-based face recognition model and matching
A Gabor jet is extracted at each of the 30 feature points of the face elastic bunch graph, and the vector obtained by concatenating them serves as the initial appearance-based face model.
A Gabor jet is obtained by convolving the Gabor filters with the image at the pixel of interest.
The convolution of a Gabor filter with the image is computed using the following formula:
The Gabor filter is as follows (Figure WO-DOC-FIGURE-5, Figure WO-DOC-FIGURE-6):
The wave vector (Figure PCTCN2014089652-appb-000007) determines the type of the Gabor filter; in the invention, 5 frequencies and 8 orientations form a total of 40 Gabor filters.
Figure WO-DOC-FIGURE-7
That is, the Gabor filter can be defined as a set of 40 complex coefficients (Figure WO-DOC-FIGURE-8).
In the present invention, the magnitudes of the complex Gabor jet coefficients are taken, forming a vector with the 40 magnitudes as its elements.
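The 40-magnitude jet can be sketched as follows. The kernel parameterization (frequency index nu = 0..4, orientation index mu = 0..7, sigma = 2*pi) follows the common elastic-bunch-graph-matching convention and is an assumption; the patent does not state its exact constants.

```python
import numpy as np

def gabor_kernel(nu, mu, size=21, sigma=2 * np.pi):
    # EBGM-style Gabor kernel; the constants here are the conventional
    # choices, not values taken from the patent.
    k = (2.0 ** (-(nu + 2) / 2.0)) * np.pi
    phi = mu * np.pi / 8.0
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    sq = x * x + y * y
    envelope = (k * k / sigma ** 2) * np.exp(-k * k * sq / (2 * sigma ** 2))
    # DC-free complex carrier.
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

def gabor_jet(img, r, c, size=21):
    # 40 responses (5 frequencies x 8 orientations) at pixel (r, c),
    # returned as magnitudes.
    half = size // 2
    patch = np.asarray(img, float)[r - half:r + half + 1, c - half:c + half + 1]
    return np.array([np.abs(np.sum(patch * gabor_kernel(nu, mu, size)))
                     for nu in range(5) for mu in range(8)])
```

Concatenating the jets of all 30 feature points gives the initial appearance-based model vector.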
PCA and LDA are applied to the initial face model to obtain the appearance-based face recognition model.
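The PCA stage can be sketched via an SVD of the mean-centered model vectors. This is a generic sketch, not the patent's exact pipeline; the LDA step, which requires per-identity class labels, is omitted here.

```python
import numpy as np

def pca_fit(X, n_components):
    # X: (n_samples, n_features) matrix of concatenated jet-magnitude vectors.
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]        # principal axes, one per row

def pca_transform(X, mean, components):
    # Project mean-centered data onto the retained principal axes.
    return (X - mean) @ components.T
```

LDA would then be fitted on the PCA-reduced vectors to maximize between-person separation.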
In the matching stage of two face models, the cosine similarity between the appearance-based face recognition model obtained above and the existing face model vectors in the database is computed.
The cosine similarity is (Figure WO-DOC-FIGURE-9):
similarity(M, D) = (M . D) / (||M|| ||D||)
where M is the obtained model vector and D an existing model vector from the database.
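The cosine similarity used for matching can be sketched directly from its definition:

```python
import math

def cosine_similarity(u, v):
    # similarity = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Identical vectors score 1, orthogonal vectors 0, regardless of vector magnitude.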
三、生成基于几何特征的人脸识别模型及匹配。Third, generate a face recognition model based on geometric features and matching.
以前作为有代表性的基于几何特征的人脸识别方法可以提起根据距离的人脸识别方法和比例的人脸识别方法,原来的基于比例的人脸识别方法使用了人脸特征点之间的距离比例,但这个方法还是对于立体旋转的图像的对应距离的比例会不同,拥有对姿势变化不稳定的缺点。In the past, as a representative face recognition method based on geometric features, a face recognition method based on distance and a face recognition method based on scale can be mentioned. The original scale-based face recognition method uses the distance between face feature points. Proportion, but this method is also different for the corresponding distance of the stereoscopically rotated image, and has the disadvantage of being unstable to the posture change.
为了克服这样的缺点,本发明中根据距离的水平方向成分和垂直方向成分独 立地进行了考察。In order to overcome such disadvantages, in the present invention, the horizontal direction component and the vertical direction component are independent according to the distance. The site was inspected.
For two line segments lying in the same plane, when the plane rotates, their components along the rotation direction are scaled by the same factor, so the ratio of those components is unchanged; the components orthogonal to the rotation direction keep both their lengths and their ratio.
A human head generally rotates only about the vertical and horizontal axes.
That is, line segments connecting feature points that lie in roughly the same plane of the face model are extracted, and the face template is built on the fact that, for the same person, the ratios of the horizontal-axis and vertical-axis components of corresponding segments in two face models remain constant.
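The invariance claim above can be checked numerically: when coplanar points undergo a yaw (vertical-axis) rotation, the vertical components of segments are unchanged and the ratios of their horizontal components are preserved. A minimal numpy sketch, with all point coordinates illustrative:

```python
import numpy as np

def yaw_project(points, theta):
    """Rotate 3-D points about the vertical axis by theta, then orthographically
    project onto the image plane (keep x', y')."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xr = x * np.cos(theta) + z * np.sin(theta)
    return np.stack([xr, y], axis=1)

# Four coplanar "feature points" on a frontal plane z = 0 (assumed)
pts = np.array([[0., 0., 0.], [2., 1., 0.], [5., 4., 0.], [1., 6., 0.]])

def hv_components(p2d, i, j):
    d = p2d[j] - p2d[i]
    return abs(d[0]), abs(d[1])            # horizontal, vertical components

front = yaw_project(pts, 0.0)
turned = yaw_project(pts, np.deg2rad(30))

h1, v1 = hv_components(front, 0, 1)
h2, v2 = hv_components(front, 2, 3)
h1r, v1r = hv_components(turned, 0, 1)
h2r, v2r = hv_components(turned, 2, 3)

# Horizontal components all shrink by cos(30°), so their pairwise ratio is unchanged.
print(round(h1 / h2, 6) == round(h1r / h2r, 6))   # True: horizontal ratio invariant
print(v1 == v1r and v2 == v2r)                     # True: vertical components unchanged
```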
The face template generation stage is as follows:
From the feature points obtained above, excluding points with strong depth curvature such as the nose and ear points, the points lying in roughly the same plane are selected. All possible pairs (combinations, order ignored) of these points are formed, and for each pair the distances along the vertical and horizontal axes are computed.
[Corrected under Rule 26, 04.12.2014]
Figure WO-DOC-FIGURE-10
[Corrected under Rule 26, 04.12.2014]
Figure WO-DOC-FIGURE-11
DH_i: the i-th horizontal distance
DV_i: the i-th vertical distance
(X_j, Y_j): coordinates of the j-th node
n: the number of feature points
Then, for each axis, all possible pairs (combinations, order ignored) of the distances are formed, and each pair is mapped to the ratio of the two distances it contains.
RH_i = DH_j / DH_k, j ≠ k, j, k = 1, ..., m, i = 1, ..., mC2
RV_i = DV_j / DV_k, j ≠ k, j, k = 1, ..., m, i = 1, ..., mC2
RH_i: the i-th horizontal ratio
RV_i: the i-th vertical ratio
m: the number of distances on one axis
[Corrected under Rule 26, 04.12.2014]
Figure WO-DOC-FIGURE-12
A vector of the ratios obtained for each axis is generated; concatenating the two vectors yields the original template.
V_o = (RH_1, RH_2, ..., RH_t, RV_1, RV_2, ..., RV_t), t = mC2
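The construction so far (per-axis pairwise distances, then per-axis pairwise ratios, concatenated into the raw template) can be sketched as follows. The feature-point coordinates and the zero-distance guard `eps` are illustrative assumptions, not part of the patent:

```python
import numpy as np
from itertools import combinations

def geometric_template(points):
    """Raw template V_o from 2-D feature points: pairwise horizontal/vertical
    distances, then pairwise ratios per axis, concatenated."""
    pts = np.asarray(points, float)
    # m distances between every unordered pair of points, per axis
    DH = [abs(a[0] - b[0]) for a, b in combinations(pts, 2)]
    DV = [abs(a[1] - b[1]) for a, b in combinations(pts, 2)]
    # t = mC2 ratios between every unordered pair of distances, per axis
    eps = 1e-9  # guard against zero distances (assumption)
    RH = [DH[j] / (DH[k] + eps) for j, k in combinations(range(len(DH)), 2)]
    RV = [DV[j] / (DV[k] + eps) for j, k in combinations(range(len(DV)), 2)]
    return np.array(RH + RV)

pts = [(0, 0), (3, 1), (1, 4), (6, 2)]  # 4 illustrative feature points
v = geometric_template(pts)
# n = 4 points -> m = 4C2 = 6 distances per axis -> t = 6C2 = 15 ratios per axis
print(v.shape)  # (30,)
```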
The original template vector obtained this way contains many unnecessary features, and its discriminative power is low.
Therefore PCA is applied here (separately for each axis) to remove the unnecessary components and reduce the vector dimensionality.
LDA is then applied to the reduced vector to generate a geometric-feature-based face recognition model with high discriminative power.
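As a sketch of the reduction step, PCA can be implemented with an SVD of the centered template matrix; the subsequent LDA stage would project the reduced vectors further using subject labels, which is omitted here. The data shapes below are toy assumptions:

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD of the centered data matrix; returns (mean, components)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    """Project samples onto the retained principal components."""
    return (X - mean) @ components.T

rng = np.random.default_rng(1)
templates = rng.random((20, 30))       # 20 raw templates of dimension 30 (toy data)
mean, comps = pca_fit(templates, 8)
reduced = pca_transform(templates, mean, comps)
print(reduced.shape)                    # (20, 8)
```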
In the matching stage of two face models, the cosine similarity between the geometric-feature-based face recognition model obtained above and the face model vectors already in the database is computed; the cosine similarity is calculated in the same way as in the appearance-based method.
Fourth, fusion.
The similarity scores of the appearance-based and geometric-feature-based face recognition methods are fused using logistic regression, with the following formula:
[Corrected under Rule 26, 04.12.2014]
Figure WO-DOC-FIGURE-13
X_1: distance from the appearance-based method; X_2: distance from the geometric-feature-based method,
Figure PCTCN2014089652-appb-000015
β_0, β_1, β_2: logistic regression coefficients.
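The logistic-regression fusion reduces to a sigmoid of a weighted sum of the two scores. The coefficient values below are illustrative only; in practice β_0, β_1, β_2 are learned from labeled match/non-match pairs:

```python
import numpy as np

def fuse(x1, x2, b0, b1, b2):
    """Logistic-regression fusion of the two method scores:
    p = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2)))."""
    z = b0 + b1 * x1 + b2 * x2
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative coefficients (assumed, not from the patent)
b0, b1, b2 = -4.0, 5.0, 3.0
print(fuse(0.9, 0.8, b0, b1, b2))   # near 1 -> likely the same person
print(fuse(0.1, 0.2, b0, b1, b2))   # near 0 -> likely different people
```

The fused score is then thresholded to produce the final recognition decision (step S5 of claim 1).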
The present invention adopts a face recognition method that fuses the appearance-based and the geometric-feature-based face recognition methods at the similarity-score level, which can be applied satisfactorily in real-life environments. It also proposes a more effective method of locating facial feature points, as well as a geometric-feature-based face recognition method that is insensitive to illumination changes and stable under pose changes.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (5)

  1. A face recognition method, characterized by comprising the following steps:
    S1: generating a face elastic bunch graph;
    S2: generating an appearance-based face recognition model and computing the cosine similarity between it and the face model vectors already in the database;
    S3: generating a geometric-feature-based face recognition model and computing the cosine similarity between it and the face model vectors already in the database;
    S4: fusing the similarity scores of step S2 and step S3 using logistic regression;
    S5: determining the face recognition result from the result of step S4.
  2. The face recognition method according to claim 1, characterized in that, in generating the face elastic bunch graph, pattern detection based on Haar features is performed in the detected face region to extract the facial feature points.
  3. The face recognition method according to claim 2, characterized in that, in generating the face elastic bunch graph, four points are first extracted in the detected face region, namely the centers of the left and right eyeballs, the center of the mouth, and the chin point, forming an initial partial face model; in a template graph with 30 feature points, the relation between each feature point and the four points of the initial partial face model is analyzed to generate a two-dimensional affine transformation; this transformation is applied to the 30 feature points of the template graph to obtain the feature values corresponding to the 30 feature points, yielding an initial global face model; for all 30 feature points of the initial global face model, the correct convergence points are sought, and the face elastic bunch graph is generated with these as feature points.
  4. The face recognition method according to claim 2, characterized in that, in generating the appearance-based face recognition model, Gabor jets are extracted at the 30 feature points of the face elastic bunch graph and the vector obtained by concatenating them is taken as the initial appearance-based face model; the magnitudes of the Gabor jet's complex coefficients are taken to form a vector of 40 magnitude elements; PCA and LDA are applied to this initial face model to obtain the appearance-based face recognition model.
  5. The face recognition method according to claim 4, characterized in that, in generating the geometric-feature-based face recognition model, the distances between the extracted facial feature points are computed, and PCA and LDA are applied to a feature vector whose elements are the ratios between the horizontal-axis and vertical-axis direction components, to obtain the geometric-feature-based face recognition model.
PCT/CN2014/089652 2014-04-28 2014-10-28 Human face recognition method WO2015165227A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410173445.X 2014-04-28
CN201410173445.XA CN103902992B (en) 2014-04-28 2014-04-28 Human face recognition method

Publications (1)

Publication Number Publication Date
WO2015165227A1 true WO2015165227A1 (en) 2015-11-05

Family

ID=50994304

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/089652 WO2015165227A1 (en) 2014-04-28 2014-10-28 Human face recognition method

Country Status (2)

Country Link
CN (1) CN103902992B (en)
WO (1) WO2015165227A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2610682C1 (en) * 2016-01-27 2017-02-14 Общество с ограниченной ответственностью "СТИЛСОФТ" Face recognition method

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN103902992B (en) * 2014-04-28 2017-04-19 珠海易胜电子技术有限公司 Human face recognition method
CN105160331A (en) * 2015-09-22 2015-12-16 镇江锐捷信息科技有限公司 Hidden Markov model based face geometrical feature identification method
CN105069448A (en) * 2015-09-29 2015-11-18 厦门中控生物识别信息技术有限公司 True and false face identification method and device
CN105631039B (en) * 2016-01-15 2019-02-15 北京邮电大学 A kind of picture browsing method
CN109214352A (en) * 2018-09-26 2019-01-15 珠海横琴现联盛科技发展有限公司 Dynamic human face retrieval method based on 2D camera 3 dimension imaging technology
CN111783699A (en) * 2020-07-06 2020-10-16 周书田 Video face recognition method based on efficient decomposition convolution and time pyramid network

Citations (3)

Publication number Priority date Publication date Assignee Title
CN102495999A (en) * 2011-11-14 2012-06-13 深圳市奔凯安全技术有限公司 Face recognition method
CN103440510A (en) * 2013-09-02 2013-12-11 大连理工大学 Method for positioning characteristic points in facial image
CN103902992A (en) * 2014-04-28 2014-07-02 珠海易胜电子技术有限公司 Human face recognition method

Non-Patent Citations (1)

Title
FEI, JUNLIN: "Research on Auto Face Recognition System Based on Improving Feature Points Location Algorithm", 31 December 2008 (2008-12-31), pages 39 - 63 *

Also Published As

Publication number Publication date
CN103902992A (en) 2014-07-02
CN103902992B (en) 2017-04-19

Similar Documents

Publication Publication Date Title
WO2015165227A1 (en) Human face recognition method
CN107145842B (en) Face recognition method combining LBP characteristic graph and convolutional neural network
CN106897675B (en) Face living body detection method combining binocular vision depth characteristic and apparent characteristic
WO2018107979A1 (en) Multi-pose human face feature point detection method based on cascade regression
CN105574518B (en) Method and device for detecting living human face
WO2016110005A1 (en) Gray level and depth information based multi-layer fusion multi-modal face recognition device and method
CN109101865A (en) A kind of recognition methods again of the pedestrian based on deep learning
WO2017219391A1 (en) Face recognition system based on three-dimensional data
WO2017133009A1 (en) Method for positioning human joint using depth image of convolutional neural network
US20150302240A1 (en) Method and device for locating feature points on human face and storage medium
KR20170000748A (en) Method and apparatus for face recognition
US9489561B2 (en) Method and system for estimating fingerprint pose
CN104392246B (en) It is a kind of based between class in class changes in faces dictionary single sample face recognition method
US20220180534A1 (en) Pedestrian tracking method, computing device, pedestrian tracking system and storage medium
CN111626246B (en) Face alignment method under mask shielding
Yang et al. Facial expression recognition based on dual-feature fusion and improved random forest classifier
CN109858433B (en) Method and device for identifying two-dimensional face picture based on three-dimensional face model
WO2018058419A1 (en) Two-dimensional image based human body joint point positioning model construction method, and positioning method
CN105760815A (en) Heterogeneous human face verification method based on portrait on second-generation identity card and video portrait
CN111524183A (en) Target row and column positioning method based on perspective projection transformation
Du High-precision portrait classification based on mtcnn and its application on similarity judgement
CN109993116B (en) Pedestrian re-identification method based on mutual learning of human bones
Li et al. Head pose classification based on line portrait
CN106980845B (en) Face key point positioning method based on structured modeling
Feng et al. Effective venue image retrieval using robust feature extraction and model constrained matching for mobile robot localization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14890918

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/04/2017)

122 Ep: pct application non-entry in european phase

Ref document number: 14890918

Country of ref document: EP

Kind code of ref document: A1