CN105354558A - Face image matching method - Google Patents

Face image matching method

Info

Publication number
CN105354558A
Authority
CN
China
Prior art keywords
image
matching
point
pixel
surf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510820897.7A
Other languages
Chinese (zh)
Other versions
CN105354558B (en)
Inventor
师硕
刘依
郭迎春
阎刚
于洋
翟艳东
杨志坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN201510820897.7A priority Critical patent/CN105354558B/en
Publication of CN105354558A publication Critical patent/CN105354558A/en
Application granted granted Critical
Publication of CN105354558B publication Critical patent/CN105354558B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The face image matching method of the present invention relates to image data processing. It is a face image matching method based on two-pass SURF and shape context: coarse SURF matching first yields scale-difference and orientation-difference information, this information then guides precise SURF matching, and the shape context algorithm removes mismatches from the matching result. The steps are: determine the face region; generate the reconstructed integral image; perform SURF feature matching twice; generate shape context descriptors and remove mismatches to complete face image matching. The method overcomes the defects of existing face image matching methods, namely few feature points, few matching points, and low accuracy.

Description

Face Image Matching Method

Technical Field

The technical solution of the invention relates to image data processing, specifically to a face image matching method.

Background Art

Face image matching is an important branch of the field of image matching. With the arrival of the age of automation and informatization, face image matching has found more and more applications in real life. Because face information is unique, difficult to forge, and easy to collect, it is widely used in access control systems, video surveillance, and identity verification.

Most existing face image matching algorithms are based on extracting local facial features and use these local features for matching. Principal component analysis (hereinafter the PCA method) is the most commonly used method for extracting local facial features. In 1991, Turk et al. at the University of California used the PCA method to propose the classic "eigenface" face image matching algorithm and achieved good results. However, the PCA method only considers the second-order statistics of the image data, fails to exploit the higher-order statistics in the data, and ignores the nonlinear correlation among pixels. In 2004, David Lowe proposed the Scale-Invariant Feature Transform algorithm (hereinafter the SIFT algorithm), which identifies potential scale- and rotation-invariant key points with a difference-of-Gaussian function, determines position and scale with a finely fitted model, and assigns one or more orientations to each key point using the local image gradient direction, achieving scale and rotation invariance; within the neighborhood of each key point, the local image gradients at a certain scale have a SIFT representation that tolerates considerable local deformation and illumination change. However, the SIFT algorithm is inefficient and slow. In 2006, Bay et al. in Switzerland proposed the Speeded-Up Robust Features algorithm (hereinafter the SURF algorithm), which improves on SIFT by detecting feature points with the maxima of the Hessian matrix determinant and by simplifying the convolution in the DoH (the determinant of the Hessian matrix) with box filtering on an integral image, greatly improving efficiency. However, the SURF algorithm still detects few feature points and yields few matching pairs.

Because they use the PCA method, the SIFT algorithm, or the SURF algorithm for feature extraction, existing face image matching methods suffer from few feature points, few matching points, and low accuracy; for face images with pose, expression, and illumination changes in particular, the number of feature points and the correct-match rate need to be improved. It is therefore of great significance to study face image matching methods that provide many feature points and high accuracy.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a face image matching method based on a two-pass SURF algorithm and shape context, abbreviated TSURF+SC (Twice SURF + Shape Context). Coarse SURF matching yields scale-difference and orientation-difference information, this information then guides precise SURF matching, and the shape context algorithm removes mismatches from the matching result. This overcomes the defects of existing face image matching methods: few feature points, few matching points, and low accuracy.

The technical solution adopted by the present invention to solve this technical problem is a face image matching method based on two-pass SURF and shape context, with the following specific steps:

Step 1: determine the face region:

Input two face images of the same person and the same size. The image with expression, pose, and illumination changes is taken as the image to be matched, and the standard frontal face image is taken as the template image. Both the image to be matched and the template image are scaled to 1/8 of the template image size. A face detection search box of 20×20 pixels then scans each of the two images from left to right and from top to bottom. Each scanned sub-image is judged by the frontal face detector provided with OpenCV; if it is a face image, it is marked as a face region. After each full scan of the two images, the face detection search box is enlarged by 10% and the scan is repeated, until the search box grows to half the size of the image. All marked face sub-images are then converted from RGB to the YCrCb color space, and skin color verification is performed on the Cr and Cb components of every pixel; the skin color condition used for verification is given in formula (1),

133 ≤ Cr ≤ 173 ∩ 77 ≤ Cb ≤ 127 (1),

where Cr and Cb respectively represent the hue and saturation of the image in the YCrCb color space,

A region of the scanned images in which more than 40% of the pixels satisfy formula (1) is determined to be a face region, i.e. the region of interest. The face regions determined in the image to be matched and in the template image are then enlarged 8 times, restoring them to the original size;
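
The following Python/OpenCV sketch illustrates this step under the stated thresholds. It is an illustration only, not the claimed implementation: the Haar cascade file, the resize factor, and the helper names (is_skin_region, detect_face_region) are assumptions standing in for the exact scanning loop described above.

```python
import cv2
import numpy as np

def is_skin_region(bgr_patch, min_ratio=0.4):
    """Formula (1): a patch counts as a face region if more than 40% of its
    pixels satisfy 133 <= Cr <= 173 and 77 <= Cb <= 127 in YCrCb space."""
    ycrcb = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    mask = (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
    return mask.mean() >= min_ratio

def detect_face_region(bgr_image):
    """Rough analogue of Step 1: downscale, run OpenCV's frontal face cascade
    with a growing search window, keep a candidate that passes the skin test."""
    small = cv2.resize(bgr_image, None, fx=0.125, fy=0.125)  # 1/8 size
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(
        cv2.cvtColor(small, cv2.COLOR_BGR2GRAY),
        scaleFactor=1.1,                               # enlarge the window by 10% per pass
        minSize=(20, 20),                              # 20x20-pixel initial search box
        maxSize=(small.shape[1] // 2, small.shape[0] // 2))
    for (x, y, w, h) in faces:
        if is_skin_region(small[y:y + h, x:x + w]):
            # scale the region back up by 8x to the original image size
            return tuple(8 * v for v in (x, y, w, h))
    return None
```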

Step 2: generate the reconstructed integral image:

The face region determined in Step 1 is converted back to RGB space and then converted to a grayscale image using formula (2),

X = 0.299R + 0.587G + 0.114B (2),

where R, G, and B are the red, green, and blue channels of RGB space, respectively, and X is the gray value in the grayscale image;

The saliency factor of each pixel in the grayscale image is then computed to obtain the saliency factor map, as shown in formula (3),

σ(Xc) = magn × arctan(V/Xc) (3),

where magn is a magnification coefficient, σ(Xc) is the saliency factor of a pixel in the face image, and V is the gray-level difference between pixel Xc of the face image and its eight neighbors Xi (i = 0, …, 7) centered on Xc, computed as in formula (4),

V = Σ_{i=0}^{7} (Xi - Xc) (4),

The sum of the pixel values of all points inside the rectangle formed by the upper-left origin of the saliency factor map and the given point is taken as the pixel value of that point in the integral map, generating the reconstructed integral image, as shown in formula (5),

IN(Xc) = Σ_{i=0}^{i≤x} Σ_{j=0}^{j≤y} σ(i, j) (5),

where IN(Xc) is the pixel value at Xc in the reconstructed integral image, the coordinates of Xc are (x, y), and the value of IN(Xc) equals the sum of all pixel values of the saliency factor map inside the rectangle defined by the points (0,0), (x,0), (0,y) and (x,y);
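
A minimal NumPy sketch of formulas (3), (4) and (5), assuming a floating-point grayscale input and magn = 10 (the value suggested later in the text); the small eps added to the denominator and the edge padding at the image border are assumptions made for numerical robustness and are not part of the formulas.

```python
import numpy as np

def reconstructed_integral(gray, magn=10.0, eps=1e-6):
    """Build the saliency factor map of formulas (3)-(4) and its integral
    image, formula (5). `gray` is a 2-D grayscale array."""
    g = gray.astype(np.float64)
    padded = np.pad(g, 1, mode="edge")   # replicate the border (assumption)
    # V: sum of differences between the 8 neighbours and the centre pixel
    v = np.zeros_like(g)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            v += padded[1 + dy:1 + dy + g.shape[0],
                        1 + dx:1 + dx + g.shape[1]] - g
    sigma = magn * np.arctan(v / (g + eps))                   # formula (3)
    integral = np.cumsum(np.cumsum(sigma, axis=0), axis=1)    # formula (5)
    return sigma, integral
```

The double cumulative sum is a standard way to realize the rectangle-sum definition of formula (5).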

Step 3: two-pass SURF feature matching:

The matching process first detects SURF features and generates descriptors, then performs one coarse match to obtain scale-difference and orientation-difference information, and finally uses this information to perform one precise match. The specific steps are as follows:

(1) Generate SURF descriptors:

Box filter templates of different sizes are applied to the reconstructed integral image obtained in Step 2 to obtain Hessian-matrix-determinant response images of the saliency factor map at different scales. 3D non-maximum suppression is then applied to these response images, and points with local response maxima are identified as feature points; the scale of a feature point is the scale of the corresponding response image. Let the box filter template size be L×L pixels, with initial size L = 9 pixels and corresponding response image scale s = 1.2; sizes L = 15, 21, and 27 pixels are then used in turn, and the corresponding response image scale s is computed by formula (6),

s = 1.2 × L / 9 (6),

After the position and scale s of each feature point are obtained, a Haar wavelet template of size 4s×4s pixels is applied to the saliency factor map within a circular region of radius 6s centered on the feature point (s is rounded to an integer here). A fan-shaped sliding window centered on the feature point with opening angle π/3 is then rotated around the feature point in steps of 0.2 radians; at each position, the accumulated value Σdx+Σdy of the horizontal and vertical Haar wavelet responses dx, dy within the sliding window is computed, and the direction with the largest accumulated response is taken as the dominant orientation of the feature point. After the dominant orientation is obtained, an image patch of 20s×20s pixels centered on the feature point and aligned with the dominant orientation is divided into 4×4 sub-blocks; each sub-block computes the responses with a Haar template of size 2s×2s pixels, and the sums and absolute-value sums of the responses in the horizontal direction x and the vertical direction y, Σdx, Σ|dx|, Σdy, Σ|dy|, are accumulated to form the feature vector, i.e. the SURF descriptor; each feature point thus yields a 4×4×4 = 64-dimensional SURF descriptor;
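
As a rough, hedged approximation of this stage, the sketch below runs the standard SURF detector and descriptor from OpenCV's contrib module on the saliency factor map instead of implementing the custom box-filter pipeline on the reconstructed integral image; it only mimics the idea of extracting 64-dimensional SURF descriptors from the saliency map. cv2.xfeatures2d.SURF_create requires an opencv-contrib-python build with the nonfree modules enabled, and the hessianThreshold value is an arbitrary assumption.

```python
import cv2
import numpy as np

def response_scale(L):
    """Scale of the response image for a box filter of size L, formula (6)."""
    return 1.2 * L / 9.0

def surf_on_saliency_map(sigma_map, hessian_threshold=400):
    """Approximate the descriptor stage with OpenCV's standard SURF applied
    to the saliency factor map (not the patented custom detector)."""
    img8 = cv2.normalize(sigma_map, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold,
                                       extended=False)   # 64-dim descriptors
    keypoints, descriptors = surf.detectAndCompute(img8, None)
    return keypoints, descriptors
```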

(2) Coarse matching with the SURF algorithm:

For each feature point in the image to be matched, the Euclidean distances between its SURF descriptor and the SURF descriptors of all feature points in the template image are computed, and the nearest-neighbor distance d1 and the second-nearest-neighbor distance d2 are recorded. When d1/d2 < th1 and d1 < th2, the point and its nearest neighbor are recorded as a pair of matching points and stored in the initial matching set, which completes the coarse SURF match; th1 and th2 are preset thresholds;
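
A minimal sketch of this coarse matching rule, assuming the descriptors are stored as NumPy arrays with one row per feature point; the absolute threshold th2 presupposes descriptors normalized as in standard SURF, and the function name coarse_match is illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def coarse_match(desc_query, desc_template, th1=0.6, th2=0.3):
    """Nearest/second-nearest ratio test with an absolute distance cap,
    as in step (2). th1/th2 are the tunable thresholds of the text."""
    if len(desc_template) < 2:
        return []
    dists = cdist(desc_query, desc_template)   # Euclidean by default
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        d1, d2 = row[order[0]], row[order[1]]
        if d1 / d2 < th1 and d1 < th2:
            matches.append((i, int(order[0])))
    return matches
```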

(3) Precise matching with the SURF algorithm:

The scale differences and angle differences between the matching point pairs of the initial matching set obtained in step (2) are computed and collected; here the scale is the feature point scale and the angle is the feature point's dominant orientation. The mean ds and standard deviation dc of the scale differences of all matching pairs are then computed, together with the mean dO of the angle differences, and the initial matching set is emptied. For each feature point in the image to be matched, the scale difference tds and angle difference tdO between it and every feature point in the template image are computed. The scale and angle differences between corresponding parts of the image to be matched and the template image should be consistent, so the scale and angle differences of a matching pair should fall within a certain range. If tds and tdO satisfy the condition of formula (7),

(ds - 1.5dc) < tds < (ds + 1.5dc) ∩ (dO - π/6) < tdO < (dO + π/6) (7),

the Euclidean distance between the two points is computed and the coarse-matching step (2) of the SURF algorithm is redone, and the resulting matching points are stored in the matching set; if formula (7) is not satisfied, the pair of matching points is skipped and its Euclidean distance is not computed;
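
A sketch of the precise-matching pass under formula (7). It assumes keypoints expose a scale as .size and an orientation as .angle in radians; OpenCV keypoints store the angle in degrees, so a conversion would be needed in that case. All names are illustrative.

```python
import numpy as np

def precise_match(kp_query, kp_template, desc_query, desc_template,
                  coarse_matches, th1=0.7, th2=0.4):
    """Second SURF pass: keep only candidate pairs whose scale/orientation
    differences stay close to the statistics of the coarse matches
    (formula (7)), then redo the ratio test on the survivors."""
    ds_list = [kp_query[i].size - kp_template[j].size for i, j in coarse_matches]
    do_list = [kp_query[i].angle - kp_template[j].angle for i, j in coarse_matches]
    ds, dc = np.mean(ds_list), np.std(ds_list)
    dO = np.mean(do_list)

    matches = []
    for i, qd in enumerate(desc_query):
        cands = []
        for j, td in enumerate(desc_template):
            tds = kp_query[i].size - kp_template[j].size
            tdO = kp_query[i].angle - kp_template[j].angle
            if (ds - 1.5 * dc < tds < ds + 1.5 * dc and
                    dO - np.pi / 6 < tdO < dO + np.pi / 6):
                cands.append((np.linalg.norm(qd - td), j))
        if len(cands) >= 2:
            cands.sort()
            (d1, j1), (d2, _) = cands[0], cands[1]
            if d1 / d2 < th1 and d1 < th2:
                matches.append((i, j1))
    return matches
```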

Step 4: remove mismatches:

The shape context algorithm is used to eliminate mismatches from the matching result obtained in Step 3, as follows:

(1) Generate shape context descriptors:

The feature points of the matching set obtained in Step 3 that belong to the image to be matched and to the template image are taken as the sampling points of the two images, respectively. For each sampling point, the distances from all other sampling points of the same image to it and the angles they form with it are computed and recorded. The distances are then normalized and divided into 6 bins, and the angle range [0, 2π] is divided into 12 bins; these 6 distance bins and 12 angle bins form 72 blocks. Counting the number of sampling points that fall in each block yields a 72-dimensional shape context descriptor;
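
A minimal sketch of the 72-dimensional shape context descriptor (6 distance bins × 12 angle bins). The uniform split of the normalized distance range into bins is one reasonable reading of "divided into 6 bins", since the text does not fix the bin edges.

```python
import numpy as np

def shape_context(points, n_dist_bins=6, n_angle_bins=12):
    """Shape context descriptors for a set of 2-D sampling points,
    as in step (1) of mismatch removal."""
    pts = np.asarray(points, dtype=np.float64)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]          # vectors between points
    dist = np.linalg.norm(diff, axis=2)
    ang = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)

    descriptors = np.zeros((n, n_dist_bins * n_angle_bins))
    for i in range(n):
        others = np.arange(n) != i
        d = dist[i, others] / dist[i, others].max()    # normalize distances
        a = ang[i, others]
        d_bin = np.minimum((d * n_dist_bins).astype(int), n_dist_bins - 1)
        a_bin = np.minimum((a / (2 * np.pi) * n_angle_bins).astype(int),
                           n_angle_bins - 1)
        np.add.at(descriptors[i], d_bin * n_angle_bins + a_bin, 1)
    return descriptors
```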

(2) Eliminate mismatches and complete face image matching:

The Euclidean distance dsc of the shape context descriptors between each pair of matching points in the matching set obtained in Step 3 is computed, and the mean w and standard deviation f of these Euclidean distances are obtained. Matching pairs whose dsc does not satisfy formula (8) are rejected as mismatches, and the remaining matching set is the final matching set,

dsc ≤ w + f (8),
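
A short sketch of the rejection rule of formula (8), assuming the shape context descriptors of the matched query and template points are given as NumPy arrays.

```python
import numpy as np

def reject_mismatches(desc_query, desc_template, matches):
    """Keep only matched pairs whose shape-context distance d_sc satisfies
    d_sc <= mean + std, formula (8). `matches` is a list of
    (query_index, template_index) pairs."""
    d_sc = np.array([np.linalg.norm(desc_query[i] - desc_template[j])
                     for i, j in matches])
    w, f = d_sc.mean(), d_sc.std()
    return [m for m, d in zip(matches, d_sc) if d <= w + f]
```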

This completes the face image matching.

In the above face image matching method, the magnification coefficient in formula (3) is magn = 10.

In the above face image matching method, the preset thresholds used in the coarse SURF matching are th1 = 0.6 and th2 = 0.3.

In the above face image matching method, th1 = 0.7 and th2 = 0.4 are used in the precise SURF matching when the coarse-matching step (2) of the SURF algorithm is redone.

The beneficial effects of the invention are as follows. Compared with the prior art, the outstanding substantive features and remarkable progress of the invention are:

(1) The method of the invention is a face image matching method based on the SURF algorithm and shape context. Coarse SURF matching yields scale-difference and orientation-difference information, this information then guides precise SURF matching, and the shape context algorithm removes mismatches from the matching result. This overcomes the defects of existing face image matching methods: few feature points, few matching points, and low accuracy.

(2) The method sets the face region as the region of interest; matching only within the face region saves considerable time and improves matching efficiency.

(3) When generating the integral image, the method applies a reconstruction transform to the image, making the parts that contain many facial features (eyes, eyebrows, nose, and mouth) more prominent and richer in feature points, thereby increasing the number of effective feature points.

(4) The method uses the scale-difference and orientation-difference information obtained from coarse matching to perform precise matching, which yields more matching points and improves the correct-match rate.

(5) The method uses the shape context algorithm to reject mismatches from the precise matching result, which further improves the accuracy.

The following embodiment further demonstrates the outstanding substantive features and remarkable progress of the invention.

Brief Description of the Drawings

The invention is further described below in conjunction with the drawings and the embodiment.

Fig. 1 is a schematic flow diagram of the method of the invention.

Fig. 2 shows the number-of-matching-pairs curves of the TSURF+SC method of the invention and existing face image matching methods under different thresholds.

Fig. 3 shows the correct-rate curves of the TSURF+SC method of the invention and existing face image matching methods under different thresholds.

Detailed Description

The embodiment shown in Fig. 1 shows that the flow of the method of the invention is: determine the face region → generate the reconstructed integral image → two-pass SURF feature matching → generate shape context descriptors, remove mismatches, and complete face image matching.

The embodiment shown in Fig. 2 shows the number-of-matching-pairs curves of the TSURF+SC method of the invention and existing face image matching methods for different thresholds th1 with th2 = 0.4. The TSURF+SC method obtains the largest number of correct matching pairs, indicating that it performs better than the existing face image matching methods.

The embodiment shown in Fig. 3 shows the correct-rate curves of the TSURF+SC method of the invention and existing face image matching methods for different thresholds th1 with th2 = 0.4. The TSURF+SC method achieves the highest correct-match rate, indicating that it performs better than the existing face image matching methods.

Embodiment

The face image matching method of this embodiment is a face image matching method based on two-pass SURF and shape context. The specific steps are as follows:

Step 1: determine the face region:

Input two face images of the same person and the same size. The image with expression, pose, and illumination changes is taken as the image to be matched, and the standard frontal face image is taken as the template image. Both the image to be matched and the template image are scaled to 1/8 of the template image size. A face detection search box of 20×20 pixels then scans each of the two images from left to right and from top to bottom. Each scanned sub-image is judged by the frontal face detector provided with OpenCV; if it is a face image, it is marked as a face region. After each full scan of the two images, the face detection search box is enlarged by 10% and the scan is repeated, until the search box grows to half the size of the image. All marked face sub-images are then converted from RGB to the YCrCb color space, and skin color verification is performed on the Cr and Cb components of every pixel; the skin color condition used for verification is given in formula (1),

133 ≤ Cr ≤ 173 ∩ 77 ≤ Cb ≤ 127 (1),

where Cr and Cb respectively represent the hue and saturation of the image in the YCrCb color space,

A region of the scanned images in which more than 40% of the pixels satisfy formula (1) is determined to be a face region, i.e. the region of interest. The face regions determined in the image to be matched and in the template image are then enlarged 8 times, restoring them to the original size;

Step 2: generate the reconstructed integral image:

The face region determined in Step 1 is converted back to RGB space and then converted to a grayscale image using formula (2),

X = 0.299R + 0.587G + 0.114B (2),

where R, G, and B are the red, green, and blue channels of RGB space, respectively, and X is the gray value in the grayscale image;

The saliency factor of each pixel in the grayscale image is then computed to obtain the saliency factor map, as shown in formula (3),

σ(Xc) = magn × arctan(V/Xc) (3),

where magn is a magnification coefficient, here magn = 10, σ(Xc) is the saliency factor of a pixel in the face image, and V is the gray-level difference between pixel Xc of the face image and its eight neighbors Xi (i = 0, …, 7) centered on Xc, computed as in formula (4),

V = Σ_{i=0}^{7} (Xi - Xc) (4),

The sum of the pixel values of all points inside the rectangle formed by the upper-left origin of the saliency factor map and the given point is taken as the pixel value of that point in the integral map, generating the reconstructed integral image, as shown in formula (5),

IN(Xc) = Σ_{i=0}^{i≤x} Σ_{j=0}^{j≤y} σ(i, j) (5),

where IN(Xc) is the pixel value at Xc in the reconstructed integral image, the coordinates of Xc are (x, y), and the value of IN(Xc) equals the sum of all pixel values of the saliency factor map inside the rectangle defined by the points (0,0), (x,0), (0,y) and (x,y);

Step 3: two-pass SURF feature matching:

The matching process first detects SURF features and generates descriptors, then performs one coarse match to obtain scale-difference and orientation-difference information, and finally uses this information to perform one precise match. The specific steps are as follows:

(1) Generate SURF descriptors:

Box filter templates of different sizes are applied to the reconstructed integral image obtained in Step 2 to obtain Hessian-matrix-determinant response images of the saliency factor map at different scales. 3D non-maximum suppression is then applied to these response images, and points with local response maxima are identified as feature points; the scale of a feature point is the scale of the corresponding response image. Let the box filter template size be L×L pixels, with initial size L = 9 pixels and corresponding response image scale s = 1.2; sizes L = 15, 21, and 27 pixels are then used in turn, and the corresponding response image scale s is computed by formula (6),

s = 1.2 × L / 9 (6),

After the position and scale s of each feature point are obtained, a Haar wavelet template of size 4s×4s pixels is applied to the saliency factor map within a circular region of radius 6s centered on the feature point (s is rounded to an integer here). A fan-shaped sliding window centered on the feature point with opening angle π/3 is then rotated around the feature point in steps of 0.2 radians; at each position, the accumulated value Σdx+Σdy of the horizontal and vertical Haar wavelet responses dx, dy within the sliding window is computed, and the direction with the largest accumulated response is taken as the dominant orientation of the feature point. After the dominant orientation is obtained, an image patch of 20s×20s pixels centered on the feature point and aligned with the dominant orientation is divided into 4×4 sub-blocks; each sub-block computes the responses with a Haar template of size 2s×2s pixels, and the sums and absolute-value sums of the responses in the horizontal direction x and the vertical direction y, Σdx, Σ|dx|, Σdy, Σ|dy|, are accumulated to form the feature vector, i.e. the SURF descriptor; each feature point thus yields a 4×4×4 = 64-dimensional SURF descriptor;

(2) Coarse matching with the SURF algorithm:

For each feature point in the image to be matched, the Euclidean distances between its SURF descriptor and the SURF descriptors of all feature points in the template image are computed, and the nearest-neighbor distance d1 and the second-nearest-neighbor distance d2 are recorded. When d1/d2 < th1 and d1 < th2, the point and its nearest neighbor are recorded as a pair of matching points and stored in the initial matching set, which completes the coarse SURF match; th1 and th2 are preset thresholds, here th1 = 0.6 and th2 = 0.3;

(3) Precise matching with the SURF algorithm:

The scale differences and angle differences between the matching point pairs of the initial matching set obtained in step (2) are computed and collected; here the scale is the feature point scale and the angle is the feature point's dominant orientation. The mean ds and standard deviation dc of the scale differences of all matching pairs are then computed, together with the mean dO of the angle differences, and the initial matching set is emptied. For each feature point in the image to be matched, the scale difference tds and angle difference tdO between it and every feature point in the template image are computed. The scale and angle differences between corresponding parts of the image to be matched and the template image should be consistent, so the scale and angle differences of a matching pair should fall within a certain range. If tds and tdO satisfy the condition of formula (7),

(ds - 1.5dc) < tds < (ds + 1.5dc) ∩ (dO - π/6) < tdO < (dO + π/6) (7),

the Euclidean distance between the two points is computed and the coarse-matching step (2) of the SURF algorithm is redone with th1 = 0.7 and th2 = 0.4, and the resulting matching points are stored in the matching set; if formula (7) is not satisfied, the pair of matching points is skipped and its Euclidean distance is not computed;

Step 4: remove mismatches:

The shape context algorithm is used to eliminate mismatches from the matching result obtained in Step 3, as follows:

(1) Generate shape context descriptors:

The feature points of the matching set obtained in Step 3 that belong to the image to be matched and to the template image are taken as the sampling points of the two images, respectively. For each sampling point, the distances from all other sampling points of the same image to it and the angles they form with it are computed and recorded. The distances are then normalized and divided into 6 bins, and the angle range [0, 2π] is divided into 12 bins; these 6 distance bins and 12 angle bins form 72 blocks. Counting the number of sampling points that fall in each block yields a 72-dimensional shape context descriptor;

(2) Eliminate mismatches and complete face image matching:

The Euclidean distance dsc of the shape context descriptors between each pair of matching points in the matching set obtained in Step 3 is computed, and the mean w and standard deviation f of these Euclidean distances are obtained. Matching pairs whose dsc does not satisfy formula (8) are rejected as mismatches, and the remaining matching set is the final matching set,

dsc ≤ w + f (8),

This completes the face image matching.

This embodiment was implemented on the VS2005 and OpenCV 2.0 platforms, and matching experiments were carried out on images with expression changes, pose changes, illumination changes, and simultaneous expression and pose changes from the IMM face database of the Technical University of Denmark and the Georgia Tech face database. The IMM database contains 40 subjects with 6 images per subject, image size 640×480; the Georgia Tech face database contains 50 subjects with 15 images per subject, image size 640×480. The processor used in the experiments was an Intel Core i3 with 4 GB of memory. To verify the advantages of the method of the invention in the number of feature points and in accuracy, this embodiment compares the classic SURF matching algorithm, the SURF algorithm combined with RANSAC, and an improved SURF algorithm combined with RANSAC against the TSURF+SC method, recording the number of matching points and the number of mismatches obtained by each group of experimental images at different thresholds th1 and th2. The experimental data show that the method of this embodiment exceeds the comparison algorithms by more than 80% on average in the number of matching points, and the ratio of mismatches to matching points is also lower than that of the comparison algorithms; however, the running time is about 80% higher than that of the original SURF algorithm, and the algorithm efficiency needs to be improved in future research. Table 1 lists the matching data of a group of IMM database face images with pose changes at the empirical thresholds th1 = 0.7, th2 = 0.4, and Table 2 lists the matching data of a group of Georgia Tech database face images with illumination changes at the empirical thresholds th1 = 0.7, th2 = 0.4.

Table 1 Matching results under pose changes

Table 2 Matching results under illumination changes

TSURF+SC in the tables denotes the method of the invention adopted in this embodiment.

The results show that the method of this embodiment is superior to the three comparison algorithms in both the number of matching pairs and the correct-match rate. Although the matching time is higher than that of the first two algorithms, the method provides more correct matches than they do; the algorithm efficiency needs to be improved further in future research.

Claims (4)

1. A face image matching method, characterized in that it is a face image matching method based on two-pass SURF and shape context, with the following specific steps:
Step 1: determine the face region:
Input two face images of the same person and the same size; take the image with expression, pose, and illumination changes as the image to be matched and the standard frontal face image as the template image; scale both the image to be matched and the template image to 1/8 of the template image size; scan each of the two images from left to right and from top to bottom with a face detection search box of 20×20 pixels; judge each scanned sub-image with the frontal face detector provided with OpenCV and, if it is a face image, mark it as a face region; after each full scan of the two images, enlarge the face detection search box by 10% and rescan, repeating until the search box grows to half the size of the image; then convert all marked face sub-images from RGB to the YCrCb color space and perform skin color verification on the Cr and Cb components of every pixel, the skin color condition used for verification being given in formula (1),
133 ≤ Cr ≤ 173 ∩ 77 ≤ Cb ≤ 127 (1),
where Cr and Cb respectively represent the hue and saturation of the image in the YCrCb color space,
determine a region of the scanned images in which more than 40% of the pixels satisfy formula (1) as a face region, i.e. the region of interest, and enlarge the face regions determined in the image to be matched and the template image 8 times, restoring them to the original size;
Step 2: generate the reconstructed integral image:
convert the face region determined in Step 1 back to RGB space, and then convert it to a grayscale image using formula (2),
X = 0.299R + 0.587G + 0.114B (2),
where R, G, and B are the red, green, and blue channels of RGB space, respectively, and X is the gray value in the grayscale image;
then compute the saliency factor of each pixel in the grayscale image to obtain the saliency factor map, as shown in formula (3),
σ(Xc) = magn × arctan(V/Xc) (3),
where magn is a magnification coefficient, σ(Xc) is the saliency factor of a pixel in the face image, and V is the gray-level difference between pixel Xc of the face image and its eight neighbors Xi (i = 0, …, 7) centered on Xc, computed as in formula (4),
V = Σ_{i=0}^{7} (Xi - Xc) (4),
take the sum of the pixel values of all points inside the rectangle formed by the upper-left origin of the saliency factor map and the given point as the pixel value of that point in the integral map, generating the reconstructed integral image, as shown in formula (5),
IN(Xc) = Σ_{i=0}^{i≤x} Σ_{j=0}^{j≤y} σ(i, j) (5),
where IN(Xc) is the pixel value at Xc in the reconstructed integral image, the coordinates of Xc are (x, y), and the value of IN(Xc) equals the sum of all pixel values of the saliency factor map inside the rectangle defined by the points (0,0), (x,0), (0,y) and (x,y);
Step 3: two-pass SURF feature matching:
the matching process first detects SURF features and generates descriptors, then performs one coarse match to obtain scale-difference and orientation-difference information, and finally uses this information to perform one precise match, with the following specific steps:
(1) generate SURF descriptors:
apply box filter templates of different sizes to the reconstructed integral image obtained in Step 2 to obtain Hessian-matrix-determinant response images of the saliency factor map at different scales; then apply 3D non-maximum suppression to these response images and identify points with local response maxima as feature points, the scale of a feature point being the scale of the corresponding response image; let the box filter template size be L×L pixels, with initial size L = 9 pixels and corresponding response image scale s = 1.2, and then use sizes L = 15, 21, and 27 pixels in turn, the corresponding response image scale s being computed by formula (6),
s = 1.2 × L / 9 (6),
after obtaining the position and scale s of each feature point, apply a Haar wavelet template of size 4s×4s pixels to the saliency factor map within a circular region of radius 6s centered on the feature point, s being rounded to an integer here; then rotate a fan-shaped sliding window centered on the feature point with opening angle π/3 around the feature point in steps of 0.2 radians; at each position, compute the accumulated value Σdx+Σdy of the horizontal and vertical Haar wavelet responses dx, dy within the sliding window, and take the direction with the largest accumulated response as the dominant orientation of the feature point; after obtaining the dominant orientation, divide an image patch of 20s×20s pixels centered on the feature point and aligned with the dominant orientation into 4×4 sub-blocks; in each sub-block, compute the responses with a Haar template of size 2s×2s pixels, and accumulate the sums and absolute-value sums of the responses in the horizontal direction x and the vertical direction y, Σdx, Σ|dx|, Σdy, Σ|dy|, to form the feature vector, i.e. the SURF descriptor, each feature point yielding a 4×4×4 = 64-dimensional SURF descriptor;
(2) coarse matching with the SURF algorithm:
for each feature point in the image to be matched, compute the Euclidean distances between its SURF descriptor and the SURF descriptors of all feature points in the template image, and record the nearest-neighbor distance d1 and the second-nearest-neighbor distance d2; when d1/d2 < th1 and d1 < th2, record the point and its nearest neighbor as a pair of matching points and store them in the initial matching set, thereby completing the coarse SURF match, th1 and th2 being preset thresholds;
(3) precise matching with the SURF algorithm:
compute and collect the scale differences and angle differences between the matching point pairs of the initial matching set obtained in step (2), the scale being the feature point scale and the angle being the feature point's dominant orientation; then compute the mean ds and standard deviation dc of the scale differences of all matching pairs and the mean dO of the angle differences, and empty the initial matching set; for each feature point in the image to be matched, compute the scale difference tds and angle difference tdO between it and every feature point in the template image; the scale and angle differences between corresponding parts of the image to be matched and the template image should be consistent, and the scale and angle differences of matching pairs should fall within a certain range; if tds and tdO satisfy the condition of formula (7),
(ds - 1.5dc) < tds < (ds + 1.5dc) ∩ (dO - π/6) < tdO < (dO + π/6) (7),
then compute the Euclidean distance between the two matching points, redo the coarse-matching step (2) of the SURF algorithm, and store the resulting matching points in the matching set; if formula (7) is not satisfied, skip the pair of matching points and do not compute their Euclidean distance;
Step 4: remove mismatches:
use the shape context algorithm to eliminate mismatches from the matching result obtained in Step 3, as follows:
(1) generate shape context descriptors:
take the feature points of the matching set obtained in Step 3 that belong to the image to be matched and to the template image as the sampling points of the two images, respectively; for each sampling point, compute and record the distances from all other sampling points of the same image to it and the angles they form with it; then normalize the distances and divide them into 6 bins, and divide the angle range [0, 2π] into 12 bins; these 6 distance bins and 12 angle bins form 72 blocks, and counting the number of sampling points that fall in each block yields a 72-dimensional shape context descriptor;
(2) eliminate mismatches and complete face image matching:
compute the Euclidean distance dsc of the shape context descriptors between each pair of matching points in the matching set obtained in Step 3, and obtain the mean w and standard deviation f of these Euclidean distances; then reject as mismatches the matching pairs whose dsc does not satisfy formula (8), the remaining matching set being the final matching set,
dsc ≤ w + f (8),
thus completing the face image matching.
2. The face image matching method according to claim 1, characterized in that the magnification coefficient in formula (3) is magn = 10.
3. The face image matching method according to claim 1, characterized in that the preset thresholds used in the coarse SURF matching are th1 = 0.6 and th2 = 0.3.
4. The face image matching method according to claim 1, characterized in that, in the precise SURF matching, th1 = 0.7 and th2 = 0.4 are used when the coarse-matching step (2) of the SURF algorithm is redone.
CN201510820897.7A 2015-11-23 2015-11-23 Face image matching method Expired - Fee Related CN105354558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510820897.7A CN105354558B (en) Face image matching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510820897.7A CN105354558B (en) Face image matching method

Publications (2)

Publication Number Publication Date
CN105354558A true CN105354558A (en) 2016-02-24
CN105354558B CN105354558B (en) 2018-09-28

Family

ID=55330525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510820897.7A Expired - Fee Related CN105354558B (en) Face image matching method

Country Status (1)

Country Link
CN (1) CN105354558B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070076922A1 (en) * 2005-09-30 2007-04-05 Sony United Kingdom Limited Object detection
CN104809731A (en) * 2015-05-05 2015-07-29 北京工业大学 Gradient binaryzation based rotation-invariant and scale-invariant scene matching method
CN104851095A (en) * 2015-05-14 2015-08-19 江南大学 Workpiece image sparse stereo matching method based on improved-type shape context

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG GUI et al.: "Point-pattern matching method using SURF and Shape Context", OPTIK *
ZHUANG XUANYI: "Behavior recognition based on shape context and SURF interest points", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023187A (en) * 2016-05-17 2016-10-12 西北工业大学 Image registration method based on SIFT feature and angle relative distance
CN106971164B (en) * 2017-03-28 2020-02-04 北京小米移动软件有限公司 Face shape matching method and device
CN106971164A (en) * 2017-03-28 2017-07-21 北京小米移动软件有限公司 Shape of face matching process and device
CN107301718A (en) * 2017-06-20 2017-10-27 深圳怡化电脑股份有限公司 A kind of image matching method and device
CN107301718B (en) * 2017-06-20 2019-07-26 深圳怡化电脑股份有限公司 A kind of image matching method and device
CN108171846A (en) * 2017-12-30 2018-06-15 南京陶特思软件科技有限公司 There is the access control system of fast verification
CN109084898A (en) * 2018-07-02 2018-12-25 北京印刷学院 A method of it establishing observer and bores Cellular spectroscopic receptance function
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium
CN109165657A (en) * 2018-08-20 2019-01-08 贵州宜行智通科技有限公司 A kind of image feature detection method and device based on improvement SIFT
CN110210341A (en) * 2019-05-20 2019-09-06 深圳供电局有限公司 Identity card authentication method based on face recognition, system thereof and readable storage medium
CN110210341B (en) * 2019-05-20 2022-12-06 深圳供电局有限公司 ID card authentication method and system based on face recognition, and readable storage medium
CN110941989A (en) * 2019-10-18 2020-03-31 北京达佳互联信息技术有限公司 Image verification method, image verification device, video verification method, video verification device, equipment and storage medium
US11625819B2 (en) 2019-10-18 2023-04-11 Beijing Dajia Internet Information Technology Co., Ltd. Method and device for verifying image and video
CN110852319A (en) * 2019-11-08 2020-02-28 深圳市深视创新科技有限公司 Rapid universal roi matching method
CN111598176A (en) * 2020-05-19 2020-08-28 北京明略软件系统有限公司 Image matching processing method and device
CN111598176B (en) * 2020-05-19 2023-11-17 北京明略软件系统有限公司 Image matching processing method and device

Also Published As

Publication number Publication date
CN105354558B (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN105354558B (en) Face image matching method
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN103116763B (en) A kind of living body faces detection method based on hsv color Spatial Statistical Character
TW569148B (en) Method for locating facial features in an image
CN105893946B (en) A detection method for frontal face images
CN109543606A (en) A kind of face identification method that attention mechanism is added
CN109740572B (en) Human face living body detection method based on local color texture features
CN111126240B (en) Three-channel feature fusion face recognition method
Rouhi et al. A review on feature extraction techniques in face recognition
CN104794449B (en) Gait energy diagram based on human body HOG features obtains and personal identification method
CN112541434B (en) Face recognition method based on central point tracking model
CN104636749A (en) Target object detection method and device
CN107742094A (en) Improve the image processing method of testimony of a witness comparison result
CN105787427A (en) Lip area positioning method
CN111709305B (en) A Face Age Recognition Method Based on Partial Image Blocks
CN109522865A (en) A kind of characteristic weighing fusion face identification method based on deep neural network
Devadethan et al. Face detection and facial feature extraction based on a fusion of knowledge based method and morphological image processing
CN109165551B (en) An expression recognition method based on adaptive weighted fusion of saliency structure tensor and LBP features
Chen et al. Face deduplication in video surveillance
Al-Shehri A simple and novel method for skin detection and face locating and tracking
Simha et al. Recognition of numbers and position using image processing techniques for solving sudoku puzzles
Chowdhury et al. Fuzzy rule based approach for face and facial feature extraction in biometric authentication
CN109766860A (en) Face Detection Method Based on Improved Adaboost Algorithm
CN115439930A (en) Multi-feature fusion gait recognition method based on space-time dimension screening
CN103886317B (en) Local feature characterization method based on face expression image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180928