CN104766063B - A living body face recognition method - Google Patents
A living body face recognition method
- Publication number
- CN104766063B (application CN201510161965.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- face
- block
- image block
- gradient value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a living body face recognition method comprising a training stage and a recognition stage. In the training stage, multiple live face images and photo face images are acquired; the feature vector of the grayscale image of each live face image is extracted as a positive sample, and the feature vector of the grayscale image of each photo face image is extracted as a negative sample; all positive and negative samples are then input into an SVM classifier for training to obtain a trained SVM classifier model. In the recognition stage, one frame of a face image is acquired and first identified with face recognition technology; when the recognition result is a legitimate user, a feature vector is extracted with the liveness detection technique and input into the trained SVM classifier model for liveness detection. The advantage is that face recognition technology is used to decide whether the face belongs to a legitimate user, and only then is liveness detection used to decide whether the face originates from a live face or from a counterfeit photo face, which effectively eliminates the security risk posed by photo faces.
Description
Technical field
The present invention relates to face recognition technology, and in particular to a living body face recognition method.
Background art
Face recognition is a biometric identification technology that, thanks to its convenience, speed and accuracy, has developed rapidly in recent years. The input to a face recognition system is generally a face image whose identity is to be verified, together with a number of face images of known identity from a face database; its output is a series of face similarity scores that indicate the identity of the recognized face. Face recognition technology is now widely used in criminal investigation, banking systems, customs inspection, civil affairs departments, attendance systems and other fields. However, as its range of applications keeps expanding, security problems have emerged: criminals use forged face photos to deceive face recognition systems, causing major economic losses to legitimate users. Judging the authenticity of the source of a face image, known as liveness detection, has therefore become particularly important.
After Google released Android 4.0, it offered users the ability to unlock their phones through face recognition, but there have since been repeated reports of personal photos being used instead of a real person to unlock a phone, so Google has remained cautious and conservative in its use of face recognition technology. For face recognition systems to mature, the security risk of photo faces substituting for real live faces must be resolved, so it is necessary to develop a technique for recognizing live faces.
Summary of the invention
The technical problem to be solved by the present invention is to provide a living body face recognition method that can determine whether a face belongs to a legitimate user and can also determine whether the face originates from a live face, thereby effectively eliminating the security risk posed by photo faces.
The technical solution adopted by the present invention to solve the above technical problem is a living body face recognition method characterized by comprising the following steps:
① Acquire M live face images, each containing a different face object and each of size 256×256, then acquire a photo face image of each live face image, each photo face image also being of size 256×256. Convert the M live face images and the M photo face images into grayscale images, and form a training image set from the 2M grayscale images. Compute the feature vector of every grayscale image in the training image set. Take the feature vector of the grayscale image of each live face image as a positive sample, labelled +1, and the feature vector of the grayscale image of each photo face image as a negative sample, labelled -1. Finally, input all positive samples and all negative samples into an SVM classifier for training to obtain the trained SVM classifier model.
② When live face recognition is required, acquire one frame of a face image containing the face object to be recognized, crop from that image the smallest rectangular region containing the face object, normalize the size of the rectangular region to obtain a 256×256 face region image to be recognized, and convert the face region image into a grayscale image.
③ Recognize the grayscale image of the face region image using face recognition technology. If the recognition result is a legitimate user, go to step ④; if the recognition result is an illegal user, face verification is refused and fails.
④ Using the liveness detection technique, first compute the feature vector of the grayscale image of the face region image and then input it into the trained SVM classifier model. If the model outputs +1, the face region image originates from a live face and face verification succeeds; if the model outputs -1, the face region image originates from a photo face, face verification is refused and fails. (A code sketch of this training and recognition flow is given below.)
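The following is a minimal Python sketch of the training and recognition flow of steps ① to ④, using scikit-learn's SVC as the SVM classifier. The callables extract_feature (the feature extraction of steps a to g below) and recognize_identity, as well as the linear kernel, are assumptions made for illustration and are not specified in the patent text.

```python
import numpy as np
from sklearn.svm import SVC

def train_liveness_model(live_grays, photo_grays, extract_feature):
    """Step 1: live-face feature vectors are labelled +1, photo-face ones -1."""
    X = [extract_feature(g) for g in live_grays] + \
        [extract_feature(g) for g in photo_grays]
    y = [+1] * len(live_grays) + [-1] * len(photo_grays)
    model = SVC(kernel='linear')          # kernel choice is an assumption
    model.fit(np.array(X), np.array(y))
    return model

def verify(face_gray, model, extract_feature, recognize_identity):
    """Steps 2-4: identity check first, then liveness check on the same frame."""
    if not recognize_identity(face_gray):                 # step 3: illegal user
        return False                                      # face verification fails
    feat = np.asarray(extract_feature(face_gray)).reshape(1, -1)
    label = model.predict(feat)[0]                        # step 4: liveness check
    return label == 1                                     # +1 live face, -1 photo face
```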
The feature vector of each grayscale image in the training image set in step ① is obtained in the same way as the feature vector of the grayscale image of the face region image in step ④. Treating each grayscale image in the training image set and the grayscale image of the face region image alike as an image to be processed, the feature vector of the image to be processed is obtained as follows (an illustrative code sketch follows step g):
a. Divide the image to be processed into 16 non-overlapping image blocks of size 64×64.
b. Define the i-th image block currently to be processed as the current image block, where 1≤i≤16 and the initial value of i is 1.
c. Slide a 3×3 window over the current image block pixel by pixel, partitioning the current image block into (64-2)×(64-2) overlapping sub-blocks of size 3×3.
d. Convolve the Sobel operators of eight different directions with every sub-block of the current image block to obtain the gradient value of every sub-block in the eight directions; denote the gradient value of the j-th sub-block of the current image block in the k-th direction as G_j^k. The eight Sobel operators correspond to the directions 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°, with 1≤j≤(64-2)×(64-2) and 1≤k≤8.
e. In the order of the sub-blocks within the current image block, arrange the gradient values of all sub-blocks in each direction to form the gradient value vector of the current image block in that direction, of dimension (64-2)×(64-2). The gradient value vector of the current (i-th) image block in the k-th direction is denoted TV_k^i = [G_1^k, G_2^k, …, G_{(64-2)×(64-2)-1}^k, G_{(64-2)×(64-2)}^k], where the symbol "[]" denotes a vector and G_j^k is the gradient value of the j-th sub-block of the current image block in the k-th direction.
f. Let i = i+1, take the next image block of the image to be processed as the current image block, and return to step c until all image blocks of the image to be processed have been processed, yielding for each image block the gradient value vectors, of dimension (64-2)×(64-2), in the eight directions; the "=" in i = i+1 denotes assignment.
g. In the order of the image blocks within the image to be processed, arrange the gradient value vectors of all image blocks in the eight directions to form the feature vector of the image to be processed, denoted T, with T = [TV_1^1, TV_2^1, …, TV_8^1, TV_1^2, TV_2^2, …, TV_8^2, …, TV_1^16, TV_2^16, …, TV_8^16], where the symbol "[]" denotes a vector and TV_k^i is the gradient value vector, of dimension (64-2)×(64-2), of the i-th image block in the k-th direction (1≤i≤16, 1≤k≤8).
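As a concrete illustration of steps a to g, the sketch below computes the feature vector of a 256×256 grayscale image with NumPy/SciPy. Convolving a 64×64 block with a 3×3 kernel in 'valid' mode yields exactly one value per 3×3 sliding-window position, i.e. (64-2)×(64-2) gradient values per direction. The kernels argument stands for the eight Sobel operators of Table 1; this is a sketch under that assumption, not the patent's reference implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def block_gradient_vectors(block, kernels):
    """Steps c-e: gradient value vectors of one 64x64 block in the 8 directions.

    'valid' convolution of the block with each 3x3 kernel gives one gradient
    value per overlapping 3x3 sub-block, i.e. a (64-2)x(64-2) map per direction,
    flattened here into the vector TV_k^i.
    """
    return [convolve2d(block, k, mode='valid').ravel() for k in kernels]

def image_feature_vector(gray256, kernels):
    """Steps a-g: feature vector T of a 256x256 grayscale image, built by
    concatenating the 8 directional vectors of the 16 non-overlapping blocks."""
    parts = []
    for r in range(0, 256, 64):
        for c in range(0, 256, 64):
            block = gray256[r:r + 64, c:c + 64].astype(float)
            parts.extend(block_gradient_vectors(block, kernels))
    return np.concatenate(parts)   # length 16 * 8 * 62 * 62 = 492032
```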
Compared with the prior art, the present invention has the following advantages:
1) The method of the present invention adds liveness detection to existing face recognition technology. In the training stage, multiple live face images and multiple photo face images are used to train an SVM classifier model. During identity authentication, face recognition technology is used to decide whether the face belongs to a legitimate user, after which liveness detection is applied: the feature vector of the face region image is computed and input into the trained SVM classifier model to decide whether the face originates from a live face or from a counterfeit photo face. This effectively eliminates the security risk posed by photo faces and provides a double safeguard for private information.
2) The method of the present invention is suitable for platforms with limited processing power, such as the Android system.
3) The method of the present invention needs only one camera and can judge whether a face is live from a single extracted frame, which greatly reduces the consumption of system resources; it requires neither additional imaging equipment nor active cooperation from the user, so it is very natural to use.
4) In the experimental verification stage, the method of the present invention was applied to images from the NUAA (Nanjing University of Aeronautics and Astronautics) live face database and distinguished the two classes of face images with an accuracy of 98.718%.
Brief description of the drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention;
Fig. 2a shows the distribution of the gradient value vectors, in eight different directions, of all image blocks in the grayscale image of a live face image;
Fig. 2b shows the distribution of the gradient value vectors, in eight different directions, of all image blocks in the grayscale image of the photo face image corresponding to the live face image of Fig. 2a;
Fig. 3 is the operating-characteristic (ROC) curve of the liveness detection technique in the method of the present invention.
Detailed description of embodiments
The present invention is described in further detail below with reference to the drawings and embodiments.
The living body face recognition method proposed by the present invention is suitable for the Android system. Its overall implementation block diagram is shown in Fig. 1, and it comprises the following steps:
① Acquire, with a mobile phone camera, M live face images each containing a different face object and each of size 256×256, then acquire with the camera a photo face image of each live face image, each photo face image also being of size 256×256. Convert the M live face images and the M photo face images into grayscale images and form a training image set from the 2M grayscale images. Compute the feature vector of every grayscale image in the training image set. Take the feature vector of the grayscale image of each live face image as a positive sample, labelled +1, and the feature vector of the grayscale image of each photo face image as a negative sample, labelled -1. Finally, input all positive and negative samples into an SVM classifier for training to obtain the trained SVM classifier model. Here M≥50; the more positive and negative samples used in the training stage, the more accurate the classification of the trained SVM model, so M may be taken suitably large in practice, for example M=261.
Here, the face objects in the M live face images are all different, i.e. M different faces are photographed to obtain M distinct live face images, while a photo face image is obtained by photographing a printed photo of the face once more.
② When live face recognition is required, acquire one frame of a face image containing the face object to be recognized with the mobile phone camera, crop from that image the smallest rectangular region containing the face object, normalize the size of the rectangular region to obtain a 256×256 face region image to be recognized, and convert it into a grayscale image (an illustrative preprocessing sketch is given after step ④ below).
③ Recognize the grayscale image of the face region image with existing face recognition technology. If the recognition result is a legitimate user, go to step ④; if the recognition result is an illegal user, face verification is refused and fails. When the recognition result is an illegal user, the face region image may originate either from a live face or from a photo face; in other words, no one other than the registered user can pass face verification.
④ Using the liveness detection technique, first compute the feature vector of the grayscale image of the face region image and then input it into the trained SVM classifier model. If the model outputs +1, the face region image originates from a live face and face verification succeeds; if the model outputs -1, the face region image originates from a photo face, face verification is refused and fails.
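For reference, the sketch below illustrates the preprocessing of step ② with OpenCV: one frame is captured, the smallest rectangle containing the detected face is cropped, resized to 256×256 and converted to grayscale. The Haar-cascade face detector is an assumption made for this illustration; the patent only specifies that the face rectangle is cropped and resized.

```python
import cv2

def capture_face_region(camera_index=0):
    """Step 2 (illustrative): one frame -> 256x256 grayscale face region, or None."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()                      # a single frame is sufficient
    cap.release()
    if not ok:
        return None
    gray_full = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = detector.detectMultiScale(gray_full, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                       # smallest rectangle containing the face
    return cv2.resize(gray_full[y:y + h, x:x + w], (256, 256))
```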
In this embodiment, the feature vector of each grayscale image in the training image set in step ① is obtained in the same way as the feature vector of the grayscale image of the face region image in step ④; each grayscale image in the training image set and the grayscale image of the face region image are treated alike as an image to be processed. The major difference between a live face and a counterfeit photo face is that the specular reflection of the latter is far greater than that of the former: the intensity of the diffuse light produced by a real face surface is always proportional to the intensity of the light incident on the face surface from each direction and to the cosine of the angle of incidence, whereas a counterfeit photo face does not follow the relief of a real face, is very smooth and produces strong specular reflection, so a photo face object is a linear combination of diffuse and specular components in which the specular and diffuse amounts share weights. The method of the present invention therefore divides the image to be processed into 16 non-overlapping image blocks, divides each image block into (64-2)×(64-2) overlapping sub-blocks of size 3×3, and uses the result of convolving every sub-block with the Sobel operators of eight different directions to obtain the feature vector of the image to be processed. The feature vector of the image to be processed is obtained as follows:
a. Divide the image to be processed into 16 non-overlapping image blocks of size 64×64.
b. Define the i-th image block currently to be processed as the current image block, where 1≤i≤16 and the initial value of i is 1.
c. Slide a 3×3 window over the current image block pixel by pixel, partitioning the current image block into (64-2)×(64-2) overlapping sub-blocks of size 3×3.
d. Convolve the Sobel operators of eight different directions with every sub-block of the current image block to obtain the gradient value of every sub-block in the eight directions; the gradient value of the j-th sub-block of the current image block in the k-th direction, denoted G_j^k, is obtained by convolving the Sobel operator of the k-th direction with the j-th sub-block of the current image block. As listed in Table 1, the eight Sobel operators correspond to the directions 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°, with 1≤j≤(64-2)×(64-2) and 1≤k≤8.
Table 1: Sobel operators in eight different directions
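The numerical entries of Table 1 are given as an image in the original document and are not reproduced above. The kernels below assume the common construction of eight-direction Sobel operators, namely the 0° kernel rotated in 45° steps, with the 180°–315° kernels being the negatives of the 0°–135° ones; they are an illustration and not necessarily the exact values of Table 1.

```python
import numpy as np

# Assumed 8-direction Sobel kernels: the 0-degree operator rotated in
# 45-degree steps; the last four are the negatives of the first four.
SOBEL_0 = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_45 = np.array([[-2, -1, 0],
                     [-1,  0, 1],
                     [ 0,  1, 2]], dtype=float)
SOBEL_90 = np.array([[-1, -2, -1],
                     [ 0,  0,  0],
                     [ 1,  2,  1]], dtype=float)
SOBEL_135 = np.array([[ 0, -1, -2],
                      [ 1,  0, -1],
                      [ 2,  1,  0]], dtype=float)

SOBEL_8 = [SOBEL_0, SOBEL_45, SOBEL_90, SOBEL_135,
           -SOBEL_0, -SOBEL_45, -SOBEL_90, -SOBEL_135]   # 0 to 315 degrees
```

These kernels can then be passed as the kernels argument of the image_feature_vector sketch given after step g above.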
e. In the order of the sub-blocks within the current image block, arrange the gradient values of all sub-blocks in each direction to form the gradient value vector of the current image block in that direction, of dimension (64-2)×(64-2). The gradient value vector of the current (i-th) image block in the k-th direction is denoted TV_k^i = [G_1^k, G_2^k, …, G_{(64-2)×(64-2)-1}^k, G_{(64-2)×(64-2)}^k], where the symbol "[]" denotes a vector and G_j^k is the gradient value of the j-th sub-block of the current image block in the k-th direction.
f. Let i = i+1, take the next image block of the image to be processed as the current image block, and return to step c until all image blocks of the image to be processed have been processed, yielding for each image block the gradient value vectors, of dimension (64-2)×(64-2), in the eight directions; the "=" in i = i+1 denotes assignment.
g. In the order of the image blocks within the image to be processed, arrange the gradient value vectors of all image blocks in the eight directions to form the feature vector of the image to be processed, denoted T, with T = [TV_1^1, TV_2^1, …, TV_8^1, TV_1^2, TV_2^2, …, TV_8^2, …, TV_1^16, TV_2^16, …, TV_8^16], where the symbol "[]" denotes a vector and TV_k^i is the gradient value vector, of dimension (64-2)×(64-2), of the i-th image block in the k-th direction (1≤i≤16, 1≤k≤8).
To further illustrate the feasibility and effectiveness of the method of the present invention, the method was verified experimentally.
The sample library used on the Matlab experimental platform is the public NUAA liveness detection face database, from which 522 images were randomly drawn: 261 live face images and the photo face image of each live face image (i.e. 261 photo face images in total), each of size 256×256. Each image is an RGB three-channel color image and is converted to a grayscale image for convenient processing. Each image is divided into 16 non-overlapping image blocks of size 64×64; a 3×3 window is then slid pixel by pixel over each image block, partitioning it into (64-2)×(64-2) overlapping 3×3 sub-blocks; the Sobel operators of the eight directions are convolved with every sub-block of every image block to obtain the gradient value of every sub-block in the eight directions, and hence the gradient value vector of every image block in every direction. Fig. 2a shows the distribution of the gradient value vectors, in the eight directions, of all image blocks in the grayscale image of one live face image; it can be seen that the percentages accounted for by the gradient value vectors differ from direction to direction. Fig. 2b shows the corresponding distribution for the grayscale image of the photo face image of the live face image of Fig. 2a; again, the percentages differ from direction to direction. Comparing Fig. 2a and Fig. 2b, the distribution in Fig. 2b shows markedly larger differences between levels.
From the gradient value vectors, in the eight directions, of all image blocks of the grayscale images of all live face images obtained above, the feature vectors of the grayscale images of all live face images are obtained, and a randomly selected subset of them is taken as the positive samples; from the gradient value vectors, in the eight directions, of all image blocks of the grayscale images of the corresponding photo face images, the feature vectors of the grayscale images of all photo face images are obtained, and the feature vectors corresponding to the selected positive samples are taken as the negative samples. All positive samples and all negative samples are input into the SVM classifier for training to obtain the trained SVM classifier model. The feature vectors of the grayscale images of the remaining live face images and of the remaining photo face images are then input into the trained SVM classifier model as test feature vectors. Fig. 3 shows the operating-characteristic (ROC) curve of the liveness detection technique in the method of the present invention, with the false positive rate on the horizontal axis and the true positive rate on the vertical axis. From the area under the ROC curve in Fig. 3, the detection rate of the liveness detection technique in the method of the present invention reaches 98.718%; the whole test took 167 seconds, i.e. less than one second per face on average. This fully shows that the method of the present invention has low complexity and provides a sound basis for application on the Android platform, which greatly improves the feasibility of using the liveness detection technique to complement face recognition technology and more strongly guarantees the security of face recognition systems.
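As an illustration of this evaluation, the sketch below scores held-out live and photo feature vectors with the trained SVM and computes the ROC curve of Fig. 3 and its area, assuming scikit-learn's roc_curve and auc; the split into training and test feature vectors is taken as given.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def evaluate_liveness(model, live_test_feats, photo_test_feats):
    """Score held-out live (+1) and photo (-1) feature vectors with the trained
    SVM and summarize the result as an ROC curve and its area."""
    X = np.array(list(live_test_feats) + list(photo_test_feats))
    y = np.array([+1] * len(live_test_feats) + [-1] * len(photo_test_feats))
    scores = model.decision_function(X)        # signed distance to the SVM margin
    fpr, tpr, _ = roc_curve(y, scores, pos_label=1)
    return fpr, tpr, auc(fpr, tpr)
```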
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510161965.3A CN104766063B (en) | 2015-04-08 | 2015-04-08 | A living body face recognition method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104766063A CN104766063A (en) | 2015-07-08 |
CN104766063B true CN104766063B (en) | 2018-01-05 |
Family
ID=53647877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510161965.3A (CN104766063B, active) | A living body face recognition method | 2015-04-08 | 2015-04-08 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104766063B (en) |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107111750B (en) * | 2015-10-30 | 2020-06-05 | 微软技术许可有限责任公司 | Detection of deceptive faces |
CN105389553A (en) * | 2015-11-06 | 2016-03-09 | 北京汉王智远科技有限公司 | Living body detection method and apparatus |
CN105389554B (en) * | 2015-11-06 | 2019-05-17 | 北京汉王智远科技有限公司 | Living body distinguishing method and device based on face recognition |
CN108875473A (en) * | 2017-06-29 | 2018-11-23 | 北京旷视科技有限公司 | Living body verification method, device and system and storage medium |
CN107122709B (en) * | 2017-03-17 | 2020-12-04 | 上海云从企业发展有限公司 | Living body detection method and device |
CN108573203B (en) * | 2017-03-17 | 2021-01-26 | 北京旷视科技有限公司 | Identity authentication method and device and storage medium |
CN107122744B (en) * | 2017-04-28 | 2020-11-10 | 武汉神目信息技术有限公司 | Living body detection system and method based on face recognition |
CN107392135A (en) * | 2017-07-14 | 2017-11-24 | 广东欧珀移动通信有限公司 | Liveness detection method and related products |
CN107368817B (en) * | 2017-07-26 | 2020-02-21 | 湖南云迪生物识别科技有限公司 | Face recognition method and device |
CN107609494A (en) * | 2017-08-31 | 2018-01-19 | 北京飞搜科技有限公司 | A kind of human face in-vivo detection method and system based on silent formula |
CN107679457A (en) * | 2017-09-06 | 2018-02-09 | 阿里巴巴集团控股有限公司 | User identity method of calibration and device |
CN107680185A (en) * | 2017-09-22 | 2018-02-09 | 芜湖星途机器人科技有限公司 | The method for using robot register in meeting-place |
CN107818313B (en) | 2017-11-20 | 2019-05-14 | 腾讯科技(深圳)有限公司 | Vivo identification method, device and storage medium |
CN107958236B (en) * | 2017-12-28 | 2021-03-19 | 深圳市金立通信设备有限公司 | Face recognition sample image generation method and terminal |
CN108389053B (en) * | 2018-03-19 | 2021-10-29 | 广州逗号智能零售有限公司 | Payment method, payment device, electronic equipment and readable storage medium |
CN108494778A (en) * | 2018-03-27 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | Identity identifying method and device |
CN111222380B (en) * | 2018-11-27 | 2023-11-03 | 杭州海康威视数字技术股份有限公司 | Living body detection method and device and recognition model training method thereof |
CN109409344A (en) * | 2018-12-23 | 2019-03-01 | 广东腾晟信息科技有限公司 | Human face data accurately compares and judgment method |
CN111444744A (en) * | 2018-12-29 | 2020-07-24 | 北京市商汤科技开发有限公司 | Living body detection method, living body detection device, and storage medium |
CN111488764B (en) * | 2019-01-26 | 2024-04-30 | 天津大学青岛海洋技术研究院 | Face recognition method for ToF image sensor |
WO2020159437A1 (en) * | 2019-01-29 | 2020-08-06 | Agency For Science, Technology And Research | Method and system for face liveness detection |
CN110334238B (en) * | 2019-03-27 | 2020-01-31 | 特斯联(北京)科技有限公司 | missing population tracing method and system based on face recognition |
CN110321872B (en) * | 2019-07-11 | 2021-03-16 | 京东方科技集团股份有限公司 | Facial expression recognition method and device, computer equipment and readable storage medium |
CN113239761B (en) * | 2021-04-29 | 2023-11-14 | 广州杰赛科技股份有限公司 | Face recognition method, device and storage medium |
CN113221767B (en) * | 2021-05-18 | 2023-08-04 | 北京百度网讯科技有限公司 | Method for training living body face recognition model and recognizing living body face and related device |
CN114067394A (en) * | 2021-11-09 | 2022-02-18 | 北京百度网讯科技有限公司 | Face living body detection method and device, electronic equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8351662B2 (en) * | 2010-09-16 | 2013-01-08 | Seiko Epson Corporation | System and method for face verification using video sequence |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103020599A (en) * | 2012-12-12 | 2013-04-03 | 山东神思电子技术股份有限公司 | Identity authentication method based on face |
CN103116763A (en) * | 2013-01-30 | 2013-05-22 | 宁波大学 | Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics |
CN103440479A (en) * | 2013-08-29 | 2013-12-11 | 湖北微模式科技发展有限公司 | Method and system for detecting living body human face |
CN103605958A (en) * | 2013-11-12 | 2014-02-26 | 北京工业大学 | Living body human face detection method based on gray scale symbiosis matrixes and wavelet analysis |
Also Published As
Publication number | Publication date |
---|---|
CN104766063A (en) | 2015-07-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |