WO2016145940A1 - Face authentication method and device - Google Patents

Face authentication method and device

Info

Publication number
WO2016145940A1
WO2016145940A1 · PCT/CN2016/070828
Authority
WO
WIPO (PCT)
Prior art keywords
local
face
vector
face image
image sample
Prior art date
Application number
PCT/CN2016/070828
Other languages
French (fr)
Chinese (zh)
Inventor
郇淑雯
朱和贵
张祥德
Original Assignee
北京天诚盛业科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京天诚盛业科技有限公司
Publication of WO2016145940A1 publication Critical patent/WO2016145940A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Definitions

  • the present invention relates to the field of image processing and pattern recognition, and in particular to a face authentication method and apparatus.
  • as people pay more attention to the security and confidentiality of information, identity authentication technology has become more and more important.
  • traditional identity authentication means, such as ID cards and passwords, are cumbersome to use, and their biggest disadvantage is that they cannot distinguish owners from impostors.
  • biometric features are inherent to human beings, unique, and difficult to replicate; therefore, the study of biometric recognition is of great significance.
  • the advantages of face recognition include: no need for user cooperation; non-contact operation; simple device implementation; and traceability. Because of these advantages, face recognition has a wide range of application scenarios and important application value in social life, e-commerce, financial institutions, household registration management, and the like.
  • face authentication is a form of recognition: by effectively characterizing the face, the features of two face images are obtained, and a classification algorithm is used to determine whether the two photos show the same person.
  • the face authentication process is mainly divided into three parts: face detection, feature extraction, and authentication. Since the face is a three-dimensional deformable model and face authentication is based on photos produced by the camera imaging model, the authentication result is easily affected by external factors such as illumination, posture, expression, and occlusion.
  • face authentication technology involves many interdisciplinary subjects, such as pattern recognition, statistical learning, machine vision, applied mathematics, and information science, and its wide application prospects have received more and more attention.
  • the more mature face authentication methods in the prior art include: face authentication methods based on geometric features, face authentication methods based on holistic features, and methods based on local features. All three methods are easily affected by factors such as illumination, posture, and expression, and their authentication accuracy is low.
  • the technical problem to be solved by the present invention is to provide a face authentication method and apparatus with strong anti-interference capability and high accuracy.
  • to solve the above technical problem, the present invention provides the following technical solutions:
  • a face authentication method includes: acquiring a pair of face image samples; blocking the acquired pair; calculating a local gray feature vector and a local histogram of oriented gradients (HOG) vector for each obtained block; and determining, according to the calculated local gray feature vector and local HOG vector, whether the pair of face image samples belongs to the same person.
  • a face authentication device includes:
  • an acquisition module, configured to acquire a pair of face image samples;
  • a blocking module, configured to block the acquired pair of face image samples;
  • a calculation module, configured to calculate a local gray feature vector and a local histogram of oriented gradients (HOG) vector for each obtained block;
  • a judging module, configured to determine, according to the calculated local gray feature vector and local HOG vector, whether the pair of face image samples belongs to the same person.
  • compared with the prior art, the face authentication method of the present invention first acquires a pair of face image samples; then the acquired pair is blocked, and the blocking process collects local information of the face image and reduces the computational difficulty of image data processing; next, a local gray feature vector and a local HOG vector are calculated for each obtained block. Since the gray feature of an image is a main parameter of image feature recognition, since the local features of a face are stable, and since HOG operates on local cells of the image and maintains good invariance to geometric and photometric deformations, the method effectively avoids the interference of factors such as illumination, expression, and age, thereby improving the accuracy of the face authentication result. Finally, whether the pair of face image samples belongs to the same person is determined from the calculated local gray feature vector and local HOG vector.
  • the face authentication method of the invention effectively avoids interference from factors such as illumination, expression, and age, and also significantly improves the accuracy of face authentication.
  • FIG. 1 is a schematic flowchart 1 of a face authentication method according to the present invention.
  • FIG. 2 is a schematic flow chart of image grayscale difference feature extraction according to the present invention.
  • FIG. 3 is a schematic flow chart of face authentication performed by the softmax regression model of the present invention.
  • FIG. 4 is a schematic flowchart 2 of a face authentication method according to the present invention.
  • FIG. 5 is a schematic structural diagram 1 of a face authentication device according to the present invention.
  • FIG. 6 is a second schematic structural diagram of a face authentication device according to the present invention.
  • the present invention provides a face authentication method, as shown in FIG. 1, comprising:
  • Step S101 acquiring a pair of face image samples
  • face image sample pair: in the face authentication process, a face image detected in real time is usually compared 1:1 against a face image in an existing database; this process is called authentication, and the two face images processed in the comparison (i.e., one real-time image and one library image) are called a face sample pair.
  • depending on the usage environment, the image sample pair may consist of a face image acquired in real time and a face image stored in a database, of two face images stored in a database, or even of two face images acquired in real time.
  • Step S102 Blocking the acquired face image sample pairs
  • the block processing method of the face image may adopt various methods known to those skilled in the art, such as a discrete non-coinciding blocking method, a window sliding traversal blocking method, and the like.
  • Step S103 Calculating a local gray level feature vector and a local direction gradient histogram HOG vector for the obtained block;
  • the calculation of the local gray feature vector and the direction gradient histogram HOG vector can effectively avoid the influence of illumination, expression, age and other factors on the result, and improve the accuracy of face authentication.
  • Step S104 Calculate, according to the calculated local gray feature vector and the local HOG vector, whether the pair of face image samples belong to the same person.
  • the method first acquires a pair of face image samples; then the acquired pair is blocked, and the blocking process collects local information of the face image, thereby reducing the computational difficulty of image data processing. Next, a local gray feature vector and a local histogram of oriented gradients (HOG) vector are calculated for each obtained block. Since the gray feature of an image is a main parameter of image feature recognition, since the local features of a face are stable, and since HOG operates on local cells of the image and maintains invariance to geometric and photometric deformations, the method effectively avoids the interference of illumination, expression, and age and improves the accuracy of the face authentication result. Finally, whether the pair of face image samples belongs to the same person is determined from the calculated local gray feature vector and local HOG vector.
  • the face authentication method of the invention effectively avoids interference of factors such as illumination, expression and age, and also significantly improves the accuracy of face authentication.
  • step S102 may further be: performing multi-scale, overlapping partitioning on the acquired pairs of face image samples.
  • the multi-scale blocking strategy is used to extract local information at different scales, thereby supplementing the information loss caused by the single-scale features.
  • the method also uses an overlapping blocking strategy to segment the face image. This step further avoids the influence of lighting, expression and other factors on the authentication result, and also effectively enhances the accuracy of the system for face authentication.
  • the method can adopt different multi-scale and overlapping block strategies for different images.
  • for example, the step size can be set to 5 at the 10*10 scale, to 10 at the 20*20 scale, and to 15 at the 30*30 scale.
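The multi-scale, overlapping blocking strategy above can be sketched as a sliding window, using the scales and step sizes mentioned in the text (10*10 with step 5, 20*20 with step 10, 30*30 with step 15). This is a minimal illustrative interpretation; the function names are not from the patent, and the image is simply a 2-D list of gray values.

```python
# Minimal sketch of multi-scale, overlapping blocking (assumed interpretation
# of the strategy described in the text; names are illustrative).

def blocks(image, size, step):
    """Slide a size x size window over a 2-D gray image with the given step."""
    h, w = len(image), len(image[0])
    return [[row[x:x + size] for row in image[y:y + size]]
            for y in range(0, h - size + 1, step)
            for x in range(0, w - size + 1, step)]

def multi_scale_blocks(image, scales=((10, 5), (20, 10), (30, 15))):
    """Blocks at every (size, step) scale; overlapping because step < size."""
    return {size: blocks(image, size, step) for size, step in scales}
```

On a 30*30 image this yields 25 overlapping 10*10 blocks, four 20*20 blocks, and one 30*30 block.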
  • step S103 may further be: calculating the local gray vector cosine distance as the gray feature vector for the obtained block, and calculating the local HOG vector cosine distance as the HOG vector.
  • an integrated vector is formed by combining the calculated local gray vector cosine distance and the local HOG vector cosine distance.
  • the calculation process of the local gray vector cosine distance and the local HOG vector cosine distance adopted for the face images of different scales is the same.
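The cosine-distance similarity measure used for corresponding blocks (for gray vectors here, and later for HOG vectors) can be sketched as follows. The helper names are illustrative assumptions, not from the patent.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors; 0.0 if either is zero."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def block_gray_similarity(block1, block2):
    """Flatten two corresponding image blocks into gray vectors and compare."""
    u = [v for row in block1 for v in row]
    w = [v for row in block2 for v in row]
    return cosine_similarity(u, w)
```

Identical blocks give a similarity of 1.0, so the value measures how closely the gray distributions of the two corresponding blocks agree.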
  • the same form of blocking strategy is performed on the same scale to obtain the same number of small images.
  • the image sub-matrices a', b' at the same position (the lower-right dashed frame at the 4*4 scale) are extracted from image 1 and image 2, respectively:
  • Image block in the upper left frame of image 1 is
  • the HOG in the present invention is a feature defined for statistical information of gradient direction and intensity in a rectangular region of an image, and can well represent the edge or gradient structure of the target in the local region, thereby characterizing the shape of the target.
  • the HOG specific calculation process can be as follows:
  • A(x, y) represents the gray value of the image at the pixel point (x, y).
  • G_x(x, y) and G_y(x, y) represent the horizontal and vertical gradients of the image at the point (x, y).
  • the gradient magnitude at the point (x, y) is defined as G(x, y) = sqrt(G_x(x, y)^2 + G_y(x, y)^2).
  • the gradient direction at the point (x, y) is defined as alpha(x, y) = arctan(G_y(x, y) / G_x(x, y)).
  • the gradient direction range [0, π] is divided into nine bins, and the gradient magnitude of each pixel is accumulated into the bin corresponding to its gradient direction.
  • B_k represents the value of the k-th bin in the region of interest, and
  • B is the gradient histogram vector in the region of interest, which can indicate the gradient structure in the region of interest.
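The HOG steps above (horizontal and vertical gradients, magnitude, direction folded into [0, π), nine bins) can be illustrated with the following sketch. Central differences and the skipping of border pixels are simplifying assumptions, not details taken from the patent.

```python
import math

def hog_vector(block, bins=9):
    """Nine-bin gradient histogram over [0, pi), as sketched in the text.
    Central differences inside the block; border pixels are skipped for brevity."""
    h, w = len(block), len(block[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = block[y][x + 1] - block[y][x - 1]   # horizontal gradient G_x
            gy = block[y + 1][x] - block[y - 1][x]   # vertical gradient G_y
            mag = math.hypot(gx, gy)                 # gradient magnitude
            ang = math.atan2(gy, gx) % math.pi       # direction folded into [0, pi)
            k = min(int(ang / (math.pi / bins)), bins - 1)
            hist[k] += mag                           # vote magnitude into its bin
    return hist
```

For a block with a pure horizontal gray ramp, all the gradient energy lands in the first bin, which matches the intuition that B captures the dominant local edge direction.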
  • the partitioning method extracts the difference features of the local gradient histogram, that is, respectively calculates the gradient histogram vectors of the corresponding blocks of the pair of face samples to be authenticated, and then calculates the cosine distance of the two as the similarity measure of the image gradient distribution.
  • a method combining local gray features and local HOG features is proposed to compare the pair of face image samples; it not only mitigates, to some extent, the influence of expression, age, illumination, and other factors on the authentication result, but also allows the face authentication process to be implemented accurately.
  • moreover, the method is superior to the prior art in the time and space complexity of the algorithm.
  • step S104 may further be: processing the calculated local gray feature vector and local HOG vector with an artificial neural network classification model based on softmax regression, and determining whether the pair of face image samples belongs to the same person.
  • alternatively, the calculated local gray feature vector and local HOG vector may be processed by an SVM algorithm, a naive Bayes algorithm, a Gaussian-process-based classification algorithm, an AdaBoost algorithm, or a k-nearest neighbor (KNN) algorithm to determine whether the pair of face image samples belongs to the same person.
  • the artificial neural network algorithm has good training and learning ability: the extracted face feature vector is used as the input of the artificial neural network, and the network parameters are then trained to obtain a face recognition classifier, thereby completing the authentication of faces.
  • the softmax model in the case of two classifications is used.
  • the specific algorithm of the softmax regression model can be referred to as follows:
  • softmax regression is a method for multi-classification problems. Without loss of generality, we let the label (that is, the expected output) y take values in {1, ..., K} (K ≥ 2).
  • Figure 3 is a neural network representation of the softmax regression model, where 1 is the gray difference vector at the first scale, 2 is the gray difference vector at the second scale, and 3 is the HOG difference vector at the last scale.
  • 4 is the probability that the pair shows the same person, and 5 is the probability that it does not. When 4 is greater than 5, the obtained face image sample pair is judged to belong to the same person; when 4 is less than 5, it is judged not to belong to the same person.
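The two-class softmax decision described above can be sketched as follows. The weight matrix W and bias b are assumed to come from training on labeled face sample pairs; the values used below are purely illustrative.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def same_person(features, W, b):
    """Two-class softmax decision: index 0 = same person, index 1 = different.
    W is a 2 x len(features) weight matrix and b a length-2 bias, both assumed
    to be obtained from training (not specified here)."""
    z = [sum(w * f for w, f in zip(row, features)) + bi
         for row, bi in zip(W, b)]
    p_same, p_diff = softmax(z)
    return p_same > p_diff, p_same
```

The returned probability pair corresponds to outputs 4 and 5 in Fig. 3: the pair is accepted as the same person exactly when the first probability exceeds the second.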
  • before step S102, the method may further include step S105:
  • aligning the face image sample pairs, and normalizing and stretching the aligned face image sample pairs.
  • in this step, the face image sample pairs are aligned first. The face alignment may adopt various methods known to those skilled in the art, such as a human-eye positioning algorithm, an Active Shape Model (ASM) algorithm, or an Active Appearance Model (AAM) algorithm.
  • the aligned face image sample pairs are then normalized and stretched. The normalization and stretching may be performed by various methods known to those skilled in the art, such as image mean-variance normalization with gray-scale hyperbolic tangent stretching, or maximum-minimum normalization with a gray-scale stretching method based on a piecewise linear transformation function.
  • step S105 is further:
  • the aligning of the face image sample pairs further comprises: aligning the face image sample pairs by using a face alignment algorithm based on human-eye positioning;
  • the normalizing and stretching of the aligned face image sample pairs further comprises: applying mean-variance normalization and gray-scale hyperbolic tangent stretching to the aligned face image sample pairs.
  • the core idea of the alignment algorithm based on human-eye localization is to locate the two eye positions in the face region with a positioning algorithm and to transform the images to a uniform size by translation, scaling, rotation, and so on, so that the eyes of the two compared images are fixed at the same positions. The specific steps are as follows:
  • step 1) the coordinates of the left eye are located as (x1, y1), and the coordinates of the right eye as (x2, y2);
  • step 2) each image is transformed so that the eyes fall on the fixed target positions; the two images so obtained constitute a new pair of face samples, and the subsequent operations of this patent are performed on the new pair.
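The eye-based alignment can be sketched as computing the rotation and scale that bring the detected eye line onto fixed target eye positions. The target coordinates below are illustrative assumptions, not values from the patent.

```python
import math

def eye_alignment_params(left, right, target_left=(30, 40), target_right=(70, 40)):
    """Rotation and scale mapping the detected eyes onto fixed target positions
    (target coordinates here are illustrative, not from the patent)."""
    (x1, y1), (x2, y2) = left, right
    angle = math.atan2(y2 - y1, x2 - x1)            # rotation that levels the eyes
    dist = math.hypot(x2 - x1, y2 - y1)             # detected inter-eye distance
    target_dist = target_right[0] - target_left[0]  # fixed inter-eye distance
    scale = target_dist / dist
    return angle, scale
```

Applying the rotation, scale, and a final translation to both images fixes the eyes at the same positions, as described in the steps above.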
  • the currently obtained face sample image is affected by illumination when it is acquired. This method reduces the influence of illumination to some extent by performing mean-variance normalization and gray-scale hyperbolic tangent stretching on the image.
  • the image mean-variance normalization formula is g(i, j) = (f(i, j) - mean(f)) / sqrt(var(f)), where
  • g(i, j) represents the gray value at (i, j) after normalization,
  • f(i, j) represents the gray value at (i, j) before normalization,
  • mean(f) represents the average gray value of the image before normalization, and
  • var(f) represents the variance of the gray values of the image before normalization.
  • gray-scale hyperbolic tangent stretching is performed after the image mean-variance normalization operation.
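The two pre-processing operations named above, mean-variance normalization followed by gray-scale hyperbolic tangent stretching, can be sketched as follows; the gain parameter of the stretch is an illustrative assumption.

```python
import math

def mean_var_normalize(image):
    """g(i, j) = (f(i, j) - mean(f)) / sqrt(var(f)) over a 2-D gray image."""
    vals = [v for row in image for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    sd = math.sqrt(var) or 1.0          # guard against a flat image
    return [[(v - mean) / sd for v in row] for row in image]

def tanh_stretch(image, gain=1.0):
    """Gray-scale hyperbolic tangent stretching into the range (-1, 1)."""
    return [[math.tanh(gain * v) for v in row] for row in image]
```

The normalization removes global brightness and contrast differences, and the tanh stretch compresses extreme values, which is the sense in which the method reduces the influence of illumination.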
  • the face alignment algorithm based on human-eye positioning, the mean-variance normalization algorithm, and the gray-scale hyperbolic tangent stretching algorithm adopted in the present invention are all basic algorithms in the field of mathematics, so the present invention can be implemented without complicated and lengthy mathematical derivation.
  • the present invention also provides a face authentication device, as shown in FIG. 5, comprising:
  • the obtaining module 11 is configured to acquire a pair of face image samples;
  • the blocking module 12 is configured to block the acquired face image sample pairs;
  • the calculating module 13 is configured to calculate a local gray level feature vector and a local direction gradient histogram HOG vector for the obtained block;
  • the determining module 14 is configured to calculate, according to the calculated local gray feature vector and the local HOG vector, whether the pair of face image samples belong to the same person.
  • the obtaining module 11 first acquires a pair of face image samples; the blocking module 12 then blocks the acquired pair, and the blocking process collects local information of the face image and reduces the computational difficulty of image data processing. The calculating module 13 then calculates a local gray feature vector and a local histogram of oriented gradients (HOG) vector for each obtained block. Since the gray feature of an image is a main parameter of image feature recognition, since the local features of a face are stable, and since HOG operates on local cells of the image and maintains good invariance to geometric and photometric deformations, the invention effectively avoids interference from factors such as illumination, expression, and age, which effectively improves the accuracy of the face authentication result. Finally, the determining module 14 determines whether the pair of face image samples belongs to the same person according to the calculated local gray feature vector and local HOG vector.
  • the face authentication device of the invention effectively avoids interference of factors such as illumination, expression and age, and also significantly improves the accuracy of face authentication.
  • the blocking module 12 can be further configured to perform multi-scale, overlapping partitioning on the acquired pairs of face image samples.
  • when external images are received, information at different scales is processed, and local images at different scales may contain different edges and other details. Therefore, the multi-scale blocking strategy is used to extract local information at different scales, compensating for the information loss caused by single-scale features. At the same time, in order to extract sufficient image features, the present invention also uses an overlapping blocking strategy to segment the face image.
  • the invention thus not only mitigates, to a certain extent, the influence of illumination, expression, and other factors on the authentication result, but also effectively enhances the accuracy of the system for face authentication.
  • the calculation module 13 is further configured to calculate the local gray vector cosine distance as the gray feature vector for the obtained block, and calculate the local HOG vector cosine distance as the HOG vector.
  • the invention proposes a method for comparing face image sample pairs by combining local gray features and local HOG features, which not only mitigates, to some extent, the influence of expression, age, illumination, and other factors on the authentication result, but also allows the face authentication process to be implemented accurately.
  • moreover, the present invention is superior to the prior art in the time and space complexity of the algorithm.
  • the judging module 14 can be further configured to process the calculated local gray feature vector and local HOG vector with an artificial neural network classification model based on softmax regression, and to determine whether the pair of face image samples belongs to the same person.
  • alternatively, the judging module 14 can process the calculated local gray feature vector and local HOG vector with an SVM algorithm, a naive Bayes algorithm, a Gaussian-process-based classification algorithm, an AdaBoost algorithm, or a k-nearest neighbor (KNN) algorithm to determine whether the pair of face image samples belongs to the same person.
  • the artificial neural network algorithm of the softmax regression model in the present invention is a mathematical model simulating the thinking mode of the human brain, with learning, memory, and error-correction capabilities; in the present invention it improves the accuracy of face authentication. The extracted face feature vector is taken as the input of the artificial neural network, and the network parameters are then trained to obtain a face recognition classifier, thereby completing the face authentication.
  • further, a pre-processing module 15 is connected between the obtaining module 11 and the blocking module 12, as shown in FIG. 6, and is configured to align the face image sample pairs and to normalize and stretch the aligned pairs.
  • the face image sample pairs are aligned first; at the same time, in order to strengthen the consistency of the face image sample pairs and thereby avoid the influence of factors such as illumination, direction, and noise, the aligned face image sample pairs are normalized and stretched.
  • the pre-processing module 15 is further configured to align the face image sample pairs by using a face alignment algorithm based on human-eye positioning, and to apply mean-variance normalization and gray-scale hyperbolic tangent stretching to the aligned face image sample pairs.
  • the face alignment algorithm based on human-eye positioning, the mean-variance normalization algorithm, and the gray-scale hyperbolic tangent stretching algorithm adopted in the present invention are all basic algorithms in the field of mathematics, so the present invention can be implemented without complicated and lengthy mathematical derivation.
  • the invention uses local gray-scale differences and local gradient-histogram differences to obtain a difference representation of the human face, and uses multi-scale features to describe the face difference more comprehensively.
  • the invention exploits the robustness of local texture information to illumination, posture, and expression, and the time and space complexity of the algorithm is also low.
  • on the four sub-libraries Fb, Fc, DupI, and DupII, authentication rates of 97.91%, 73.20%, 63.71%, and 49.57% were achieved, respectively (at a false match rate of 0.1%).
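Putting the pieces together, the multi-scale difference-feature extraction for one face sample pair can be sketched end to end. The scales, helper names, and the simplified HOG below are illustrative assumptions, and the trained classifier that would consume the resulting vector is omitted.

```python
import math

def blocks(img, size, step):
    """Overlapping size x size blocks of a 2-D gray image."""
    h, w = len(img), len(img[0])
    return [[row[x:x + size] for row in img[y:y + size]]
            for y in range(0, h - size + 1, step)
            for x in range(0, w - size + 1, step)]

def cosine(u, v):
    """Cosine similarity; 0.0 if either vector is zero."""
    d = sum(a * b for a, b in zip(u, v))
    n = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return d / n if n else 0.0

def gray_vec(b):
    """Flatten a block into its gray-value vector."""
    return [v for row in b for v in row]

def hog_vec(b, bins=9):
    """Simplified 9-bin gradient histogram (central differences, borders skipped)."""
    hist = [0.0] * bins
    for y in range(1, len(b) - 1):
        for x in range(1, len(b[0]) - 1):
            gx, gy = b[y][x + 1] - b[y][x - 1], b[y + 1][x] - b[y - 1][x]
            ang = math.atan2(gy, gx) % math.pi
            hist[min(int(ang / (math.pi / bins)), bins - 1)] += math.hypot(gx, gy)
    return hist

def difference_vector(img1, img2, scales=((10, 5), (20, 10))):
    """Concatenate gray- and HOG-cosine similarities of corresponding blocks
    at every scale; this vector would feed the trained classifier."""
    feats = []
    for size, step in scales:
        for b1, b2 in zip(blocks(img1, size, step), blocks(img2, size, step)):
            feats.append(cosine(gray_vec(b1), gray_vec(b2)))
            feats.append(cosine(hog_vec(b1), hog_vec(b2)))
    return feats
```

For an image compared against itself, every entry of the difference vector is 1.0; for two different faces the entries drop, and the classifier learns the boundary between the two cases.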

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The present invention belongs to the field of image processing and pattern recognition and discloses a face authentication method and device. The method comprises: acquiring a face image sample pair; dividing the acquired face image sample pair into blocks; calculating, for the obtained blocks, a local gray-scale feature vector and a local histogram of oriented gradients (HOG) vector; and determining whether the face image sample pair belongs to the same person according to the calculated local gray-scale feature vector and local HOG vector. The face authentication method and device of the present invention can effectively eliminate disturbing factors such as light, expression, and age, and improve face recognition accuracy remarkably.

Description

人脸认证方法和装置Face authentication method and device 技术领域Technical field
本发明涉及图像处理与模式识别领域,特别是指一种人脸认证方法和装置。The present invention relates to the field of image processing and pattern recognition, and in particular to a face authentication method and apparatus.
背景技术Background technique
随着人们对信息的安全性、隐蔽性的重视,身份认证技术变得越来越重要。传统的身份认证手段,如身份证、密码等,使用麻烦,最大的缺点在于它不能区分拥有者和冒充者。生物特征是人与生俱来的,并且有唯一性和不易复制性,因此,生物特征识别的研究具有重要意义。其中人脸识别的优点包括:不需要使用者配合;具有非接触型特点;实现设备简单;具有可跟踪性。由于人脸识别具有以上优点,因此该技术有着广泛的应用场景,在社会生活中,电子商务、金融机构、户籍管理等都具有重要的应用价值。As people pay more attention to the security and concealment of information, identity authentication technology has become more and more important. Traditional identity authentication methods, such as ID cards and passwords, are cumbersome to use. The biggest disadvantage is that it cannot distinguish between owners and imposters. Biometrics are inherent in human beings and are unique and difficult to replicate. Therefore, the study of biometrics is of great significance. Among them, the advantages of face recognition include: no need for user cooperation; non-contact type features; simple device implementation; and traceability. Because face recognition has the above advantages, this technology has a wide range of application scenarios. In social life, e-commerce, financial institutions, household registration management, etc. all have important application value.
人脸认证是识别的一种形式,通过有效的表征人脸,得到两幅人脸图片的特征,利用分类算法来判定这两张照片是否是同一个人。人脸认证的流程主要分为三部分:人脸检测,特征提取与认证。由于人脸是一个三维形变模型,而且人脸认证是以摄像机成像模型所成的照片为介质的,所以认证的结果容易受到光照、姿态、表情和遮挡等外界因素的影响。与此同时,由于人脸认证技术涉及到了模式识别,统计学习,机器视觉,应用数学与信息科学等众多交叉学科,再加上其广泛的应用前景,受到了越来越多的关注。Face authentication is a form of recognition. By effectively characterizing the face, the characteristics of the two face images are obtained, and the classification algorithm is used to determine whether the two photos are the same person. The face authentication process is mainly divided into three parts: face detection, feature extraction and authentication. Since the face is a three-dimensional deformation model, and the face authentication is based on the photo taken by the camera imaging model, the result of the authentication is easily affected by external factors such as illumination, posture, expression, and occlusion. At the same time, face recognition technology involves many interdisciplinary subjects such as pattern recognition, statistical learning, machine vision, applied mathematics and information science, and its wide application prospects have received more and more attention.
目前,现有技术中较为成熟的人脸认证技术方法包括:基于几何特征的人脸认证方法、基于整体特征的人脸认证方法和基于局部特征的方法。这三种方法都容易受光照、姿态、表情等因素的影响,认证准确性低。 At present, the more mature face authentication technology methods in the prior art include: a face authentication method based on geometric features, a face authentication method based on overall features, and a method based on local features. These three methods are easily affected by factors such as light, posture, and expression, and the authentication accuracy is low.
发明内容Summary of the invention
本发明要解决的技术问题是提供一种抗干扰性强、准确性高的人脸认证方法和装置。The technical problem to be solved by the present invention is to provide a face authentication method and apparatus with strong anti-interference and high accuracy.
为解决上述技术问题,本发明提供技术方案如下:In order to solve the above technical problem, the present invention provides the following technical solutions:
一种人脸认证方法,包括:A face authentication method includes:
获取人脸图像样本对;Obtaining a pair of face image samples;
对获取的人脸图像样本对进行分块;Segmenting the acquired face image sample pairs;
对得到的分块计算局部灰度特征向量和局部方向梯度直方图HOG向量;Calculating a local gray feature vector and a local direction gradient histogram HOG vector for the obtained block;
根据计算出的局部灰度特征向量和局部HOG向量,计算判断人脸图像样本对是否属于同一个人。According to the calculated local gray feature vector and the local HOG vector, it is calculated whether the face image sample pair belongs to the same person.
一种人脸认证装置,包括:A face authentication device includes:
获取模块:用于获取人脸图像样本对;Acquisition module: used to acquire a pair of face image samples;
分块模块:用于对获取的人脸图像样本对进行分块;Blocking module: used to block the acquired face image sample pairs;
计算模块:用于对得到的分块计算局部灰度特征向量和局部方向梯度直方图HOG向量;a calculation module: configured to calculate a local gray feature vector and a local direction gradient histogram HOG vector for the obtained block;
判断模块:用于根据计算出的局部灰度特征向量和局部HOG向量,计算判断人脸图像样本对是否属于同一个人。The judging module is configured to calculate, according to the calculated local gray feature vector and the local HOG vector, whether the pair of face image samples belong to the same person.
本发明具有以下有益效果:The invention has the following beneficial effects:
与现有技术相比,本发明的人脸认证方法首先获取人脸图像样本对;然后对获取的人脸图像样本对进行分块,采用图像分块处理能够实现对人脸图像的局部信息进行采集,降低了图像数据处理的运算难度;接下来对得到的分块计算局部灰度特征向量和局部方向梯度直方图HOG向量,由于图像的灰度特征是图像特征识别的主要参数,并且人脸的局部特征具有稳定性,同时HOG是在图像的局部方格单元上进行操作的,它对图像几何的和光学的形变都能保持很好的不变性,使该方法能够有效的避免了光照、表情和年龄等因素的干扰,进而有效地提高了人脸认证结果的准确性;最后根据计算出的局部灰度特征向量和局部HOG向量,计算判断人脸图 像样本对是否属于同一个人。本发明的人脸认证方法有效地避免光照、表情和年龄等因素的干扰,同时也显著地提高了人脸认证的准确性。Compared with the prior art, the face authentication method of the present invention first acquires a pair of face image samples; then, the acquired face image sample pairs are segmented, and the image segmentation process can be used to implement local information of the face image. Acquisition reduces the computational difficulty of image data processing; then calculates the local gray feature vector and the local direction gradient histogram HOG vector for the obtained block, since the gray feature of the image is the main parameter of image feature recognition, and the face The local features are stable, and the HOG operates on the local square cells of the image. It maintains good invariance to the geometric and optical deformation of the image, which makes the method effectively avoid illumination. Interference with factors such as expression and age, which effectively improves the accuracy of face authentication results; finally, the calculated face map is calculated based on the calculated local gray feature vector and local HOG vector. Like if the sample pair belongs to the same person. The face authentication method of the invention effectively avoids interference of factors such as illumination, expression and age, and also significantly improves the accuracy of face authentication.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a first schematic flowchart of the face authentication method of the present invention;
FIG. 2 is a schematic flowchart of gray-level difference feature extraction according to the present invention;
FIG. 3 is a schematic flowchart of face authentication with the softmax regression model of the present invention;
FIG. 4 is a second schematic flowchart of the face authentication method of the present invention;
FIG. 5 is a first schematic structural diagram of the face authentication device of the present invention;
FIG. 6 is a second schematic structural diagram of the face authentication device of the present invention.
DETAILED DESCRIPTION
To make the technical problems, technical solutions, and advantages of the present invention clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
一方面,本发明提供一种人脸认证方法,如图1所示,包括:In one aspect, the present invention provides a face authentication method, as shown in FIG. 1, comprising:
步骤S101:获取人脸图像样本对;Step S101: acquiring a pair of face image samples;
Face image sample pair: during face authentication, a face image detected in real time is typically compared 1:1 against a face image in an existing database; this process is called authentication. The two face images processed in one such comparison (one real-time image and one database image) are called a face image sample pair.
In this step, depending on the application environment, the sample pair may consist of a face image captured in real time and a face image stored in a database, of two face images stored in a database, or even of two face images both captured in real time.
步骤S102:对获取的人脸图像样本对进行分块;Step S102: Blocking the acquired face image sample pairs;
In this step, the face image may be partitioned by any method known to those skilled in the art, such as a discrete non-overlapping blocking method or a window-sliding traversal blocking method.
步骤S103:对得到的分块计算局部灰度特征向量和局部方向梯度直方图HOG向量; Step S103: Calculating a local gray level feature vector and a local direction gradient histogram HOG vector for the obtained block;
In this step, computing the local gray-level feature vector and the local HOG vector effectively reduces the influence of illumination, expression, age, and similar factors on the result, improving the accuracy of face authentication.
Step S104: determining, from the computed local gray-level feature vectors and local HOG vectors, whether the face image sample pair belongs to the same person.
In the present invention, the method first acquires a face image sample pair and then partitions both images into blocks; block-wise processing captures the local information of the face image and reduces the computational cost of image processing. A local gray-level feature vector and a local HOG vector are then computed for each block: gray-level features are the primary cue in image recognition, the local features of a face are stable, and HOG, which operates on local cells of the image, is largely invariant to geometric and photometric deformations, so the method effectively resists interference from illumination, expression, age, and similar factors, improving the accuracy of the authentication result. Finally, the method determines from the computed local gray-level feature vectors and local HOG vectors whether the sample pair belongs to the same person. The face authentication method of the invention thus effectively avoids interference from illumination, expression, and age while significantly improving the accuracy of face authentication.
作为本发明的一种改进,步骤S102可以进一步为:对获取的人脸图像样本对进行多尺度、有重叠分块。As an improvement of the present invention, step S102 may further be: performing multi-scale, overlapping partitioning on the acquired pairs of face image samples.
In this step, because local images at different scales contain different edge and other information, a multi-scale blocking strategy extracts local information at each scale, compensating for the information missed by any single-scale feature. To extract sufficient image features, the method also uses an overlapping blocking strategy when partitioning the face image. This further reduces the influence of illumination, expression, and similar factors on the authentication result and strengthens the accuracy of face authentication.
The following example partitions a 6*8 image into 4*4 blocks. To realize multi-scale, overlapping partitioning, a step size of 2 is used, which yields 2*3 = 6 sub-images of size 4*4. The overall image matrix is:
[Matrix image PCTCN2016070828-appb-000001: the overall 6*8 gray-level matrix]
下面是图像经过多尺度、有重叠分块后的6个小矩阵:The following are the 6 small matrices after the image has been multi-scaled and overlapped:
[Matrix images PCTCN2016070828-appb-000002 and -000003: the six 4*4 sub-matrices]
The method can adopt different multi-scale, overlapping blocking strategies for different images: for example, a step size of 5 at the 10*10 scale, a step size of 10 at the 20*20 scale, and a step size of 15 at the 30*30 scale.
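The overlapping block extraction described above can be sketched in Python as follows; the function name and the row-major traversal order are illustrative assumptions, since the patent does not specify an implementation:

```python
def extract_blocks(image, block_size, step):
    """Slide a block_size x block_size window over a 2-D gray-level
    matrix with the given step, collecting the overlapping sub-blocks
    in row-major order (an assumed traversal order)."""
    rows, cols = len(image), len(image[0])
    blocks = []
    for r in range(0, rows - block_size + 1, step):
        for c in range(0, cols - block_size + 1, step):
            blocks.append([row[c:c + block_size]
                           for row in image[r:r + block_size]])
    return blocks

# A 6*8 image at the 4*4 scale with step 2 yields 2*3 = 6 blocks,
# as in the example above.
img = [[r * 8 + c for c in range(8)] for r in range(6)]
blocks = extract_blocks(img, 4, 2)
print(len(blocks))  # prints 6
```

Changing `block_size` and `step` per scale (for example 10/5, 20/10, 30/15) realizes the multi-scale strategy described above.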
As an improvement of the present invention, step S103 may further be: for each obtained block, computing the local gray-vector cosine distance as the gray-level feature vector and the local HOG-vector cosine distance as the HOG vector.
In this step, as shown in FIG. 2, the computed local gray-vector cosine distances and local HOG-vector cosine distances are combined into an integrated vector. In the present invention, these distances are computed in the same way for face images at every scale. For the two face images, the same blocking strategy is applied at the same scale, yielding the same number of sub-images. For each pair of sub-images at the same position, the local gray-vector cosine distance and the local HOG-vector cosine distance are computed:
1. The computation of the local gray-vector cosine distance is illustrated below. Assume both images are 6*8 (each matrix element is the gray value at that position) and all values are known. Blocks are extracted from each image at the 4*4 scale with a step size of 2. Let the gray matrix of image 1 be A1 and that of image 2 be A2:
[Matrix image PCTCN2016070828-appb-000004: the gray matrices A1 and A2]
The 4*4 image sub-matrices a and b inside the upper-left dashed boxes, extracted from the same position of image 1 and image 2 respectively, are:
[Matrix image PCTCN2016070828-appb-000005: sub-matrices a and b]
The 4*4 image sub-matrices a′ and b′ inside the lower-right dashed boxes, extracted from the same position of image 1 and image 2 respectively, are:
[Matrix image PCTCN2016070828-appb-000006: sub-matrices a′ and b′]
1) Flatten the corresponding block images of image 1 and image 2 into vectors:
图像1的左上方虚线框内图像块:Image block in the upper left frame of image 1:
a=[11,45,47,97,58,58,87,11,56,47,45,13,25,35,1,74]a=[11,45,47,97,58,58,87,11,56,47,45,13,25,35,1,74]
图像2的左上方虚线框内图像块:Image block in the upper left dotted line of image 2:
b=[54,11,1,11,39,13,1,13,36,74,1,47,23,87,48,47]b=[54,11,1,11,39,13,1,13,36,74,1,47,23,87,48,47]
图像1的右下方虚线框内图像块:The image block inside the dotted line in the lower right of image 1:
a′=[45,13,87,47,1,74,15,54,1,87,33,39,1,49,55,36]a'=[45,13,87,47,1,74,15,54,1,87,33,39,1,49,55,36]
图像2的右下方虚线框内图像块:Image block in the dotted box at the bottom right of image 2:
b′=[1,47,18,62,78,47,97,7,55,87,11,83,47,18,62,26]b'=[1,47,18,62,78,47,97,7,55,87,11,83,47,18,62,26]
2) Compute the cosine distance between the corresponding block vectors:
grayCosdis(a,b) = 1 - cos(a,b) = 0.4809  (1)
grayCosdis(a′,b′) = 1 - cos(a′,b′) = 0.3561  (2)
where cos(a,b) = ⟨a,b⟩ / (‖a‖·‖b‖), i.e., the inner product of the two vectors divided by the product of their Euclidean norms.
3) Since the 6*8 image is divided into 2*3 = 6 image blocks, six cosine distances are obtained. These six values form a vector [graycosdis1, graycosdis2, graycosdis3, graycosdis4, graycosdis5, graycosdis6] that represents the gray-level difference feature of the face sample pair at this scale.
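The gray-vector cosine distance of equations (1)-(2) can be sketched as follows, using the flattened block vectors a and b listed above. The function merely illustrates the computation; the quoted values 0.4809 and 0.3561 come from the patent's own worked example and are not re-derived here:

```python
import math

def gray_cos_dis(u, v):
    """grayCosdis(u, v) = 1 - cos(u, v), where cos(u, v) is the inner
    product of the two vectors divided by the product of their norms."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return 1.0 - dot / (norm_u * norm_v)

# Flattened 4*4 blocks a and b from the example above.
a = [11, 45, 47, 97, 58, 58, 87, 11, 56, 47, 45, 13, 25, 35, 1, 74]
b = [54, 11, 1, 11, 39, 13, 1, 13, 36, 74, 1, 47, 23, 87, 48, 47]
d = gray_cos_dis(a, b)
```

Identical blocks give a distance of 0, and the distance always lies in [0, 2] (here [0, 1], since gray values are non-negative).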
2. The HOG in the present invention is a feature defined from the statistics of gradient direction and magnitude within a rectangular region of an image. It characterizes the edge or gradient structure of the target in a local region well, and thereby the shape of the target.
HOG具体计算过程可以如下:The HOG specific calculation process can be as follows:
A(x,y)表示图像在像素点(x,y)的灰度值。A(x, y) represents the gray value of the image at the pixel point (x, y).
1) Compute the first-order gradients of the image. Differentiation not only captures contours and some texture information, but also weakens the influence of illumination. The computation is as follows:
Gx(x,y)=A(x+1,y)-A(x-1,y)  (3)G x (x,y)=A(x+1,y)-A(x-1,y) (3)
Gy(x,y)=A(x,y+1)-A(x,y-1)  (4)G y (x,y)=A(x,y+1)-A(x,y-1) (4)
Gx(x,y) and Gy(x,y) are the magnitudes of the horizontal and vertical gradients of the image at the point (x,y), respectively.
The gradient magnitude at the point (x,y) is defined as:
G(x,y) = sqrt(Gx(x,y)^2 + Gy(x,y)^2)  (5)
在(x,y)点的梯度方向定义为:The gradient direction at the (x, y) point is defined as:
α(x,y) = arctan(Gy(x,y) / Gx(x,y))  (6)
The gradient directions over [0, π] are divided into 9 intervals (bins), and the amplitude contributed by each pixel is defined as:
valuek(x,y) = G(x,y), if α(x,y) lies in the k-th bin; valuek(x,y) = 0, otherwise  (7)
2) Accumulate the gradient amplitudes of all pixels in the region of interest; nine features can thus be extracted from each region. The accumulation is:
Bk = Σ valuek(x,y), 1 ≤ k ≤ 9  (8)
Bk is the k-th feature value of the region of interest, and B is the gradient histogram vector of the region, which indicates the gradient structure within it.
The extraction of local HOG features parallels that of the local gray-level difference features: even under changes of expression, age, illumination, and similar factors, the local gradient distribution of a face is essentially stable. The same blocking scheme is therefore used to extract local gradient-histogram difference features: the gradient histogram vectors of the corresponding blocks of the sample pair to be authenticated are computed, and their cosine distance serves as the similarity measure of the image gradient distributions.
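A minimal sketch of the per-block histogram of equations (3)-(8), under two stated assumptions the patent leaves open: border pixels are skipped (the central differences are undefined there), and each pixel's magnitude is assigned entirely to the single bin containing its direction, without interpolation:

```python
import math

def hog_histogram(block):
    """9-bin histogram of gradient orientations over [0, pi) for one
    gray-level block; each interior pixel adds its gradient magnitude
    G(x, y) to the bin containing its direction alpha(x, y)."""
    h, w = len(block), len(block[0])
    hist = [0.0] * 9
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = block[y][x + 1] - block[y][x - 1]   # eq. (3)
            gy = block[y + 1][x] - block[y - 1][x]   # eq. (4)
            mag = math.sqrt(gx * gx + gy * gy)       # eq. (5)
            ang = math.atan2(gy, gx) % math.pi       # direction folded into [0, pi)
            k = min(int(ang / (math.pi / 9)), 8)     # bin index, eq. (7)
            hist[k] += mag                           # accumulation, eq. (8)
    return hist
```

For a block whose gray values increase only from left to right, every interior gradient is horizontal, so all of the histogram mass falls into the first bin.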
This step combines local gray-level features with local HOG features to process the face image sample pair. This not only mitigates, to a certain extent, the influence of expression, age, illumination, and similar factors on the authentication result, but also realizes face authentication accurately; compared with the prior art, the method is superior in both the time and the space complexity of the algorithm.
As an improvement of the present invention, step S104 may further be: processing the computed local gray-level feature vectors and local HOG vectors with an artificial neural network classification model based on softmax regression to determine whether the face image sample pair belongs to the same person. In the present invention, the computed local gray-level feature vectors and local HOG vectors may instead be processed with an SVM, naive Bayes, a Gaussian-process-based classifier, AdaBoost, or the k-nearest-neighbor (KNN) algorithm to make the same determination.
In this step, the training and learning ability of the artificial neural network is exploited: the extracted face feature vector is fed to the network as input, and the network parameters are trained to obtain a face recognition classifier, completing the authentication. The preferred softmax regression neural network of the present invention uses the two-class form of the softmax model, which can be specified as follows:
Assume the training set consists of m labeled samples X = {(x(1), y(1)), …, (x(m), y(m))}, where each input sample x(i) ∈ R^(n+1) (by convention, the feature vector x has dimension n+1, with x0(i) = 1 as the intercept term). Softmax regression is a method for multi-class problems; without loss of generality, the label (i.e., the desired output) y takes values in {1, …, K} (K > 2).
In the softmax model, for each given input x we use a hypothesis function hθ(x) to estimate the conditional probability p(y = j | x) of every label class j. That is, the output of the hypothesis function for each sample is a K-dimensional vector whose components are the probability estimates of the individual classes given x. The hypothesis function is therefore defined as:
hθ(x) = [p(y=1|x;θ), …, p(y=K|x;θ)]^T
where
p(y=j|x;θ) = exp(θj^T x) / Σ(l=1..K) exp(θl^T x)
where θj ∈ R^(n+1) are the parameters of the model corresponding to class j, θj,0 is the bias term of class j, and 1/Σ(l=1..K) exp(θl^T x) is the normalization term. FIG. 3 is the neural-network representation of the softmax regression model, where 1 is the gray-level difference vector at the first scale, 2 the gray-level difference vector at the second scale, 3 the HOG difference vector at the last scale, 4 the probability that the pair belongs to the same person, and 5 the probability that it does not. When 4 is greater than 5, the acquired face image sample pair is judged to belong to the same person; when 4 is less than 5, it is judged not to belong to the same person.
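The two-class softmax hypothesis can be sketched as follows. The parameter values below are illustrative assumptions, not trained values; the max-subtraction is a standard numerical-stability device, not part of the patent's description:

```python
import math

def softmax_hypothesis(theta, x):
    """h_theta(x): the vector of conditional probabilities p(y=j|x).
    theta is a list of K parameter vectors (intercept included) and
    x a feature vector with x[0] = 1 as the intercept term."""
    scores = [sum(t * xi for t, xi in zip(th, x)) for th in theta]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)                         # normalization term
    return [e / z for e in exps]

# Two-class case as in FIG. 3: output 4 is p(same person), output 5 is
# p(different person); authenticate when the former is larger.
theta = [[0.5, 1.0, -1.0], [-0.5, -1.0, 1.0]]  # illustrative parameters
x = [1.0, 0.6, 0.1]                            # x[0] = 1 intercept term
p_same, p_diff = softmax_hypothesis(theta, x)
same_person = p_same > p_diff
```

The K outputs always sum to 1, so the two-class decision reduces to comparing the two components.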
In the present invention, as shown in FIG. 4, the method may include, before step S102, step S105:
对人脸图像样本对进行对齐处理;Aligning the face image sample pairs;
对对齐后的人脸图像样本对进行归一化和拉伸处理。The aligned face image sample pairs are normalized and stretched.
Because translation, rotation, and other pose changes occur while face images are captured, the sample pair is aligned in this step to remove the influence of pose and position on face authentication. Any face alignment method known to those skilled in the art may be used, such as algorithms based on eye localization, the Active Shape Model (ASM), or the Active Appearance Model (AAM). To strengthen the consistency of the face image sample pair and reduce the influence of illumination, orientation, and noise, the aligned images are then normalized and stretched; again, any method known in the art may be used, such as image mean-variance normalization with gray-level hyperbolic-tangent stretching, or max-min normalization with a piecewise-linear gray-level stretching function.
优选的,步骤S105进一步为: Preferably, step S105 is further:
The aligning of the face image sample pair is further: aligning the pair with an eye-localization-based face alignment algorithm;
The normalizing and stretching of the aligned face image sample pair is further: applying mean-variance normalization and gray-level hyperbolic-tangent stretching to the aligned pair.
In this step, the following may specifically be used:
1. Eye-localization-based alignment
The core idea of the eye-localization-based alignment algorithm is to locate the two eyes in the face region with a localization algorithm and, through translation, scaling, rotation, and similar operations, transform the images to a uniform size so that the eyes of the two compared images are fixed at the same positions. The specific steps are as follows:
1)利用人脸特征点定位算法对人脸图像进行眼睛定位,得到左眼坐标为(x1,y1),右眼坐标为(x2,y2);1) Using the face feature point localization algorithm to perform eye positioning on the face image, the coordinates of the left eye are (x1, y1), and the coordinates of the right eye are (x2, y2);
2) Transform the two face images of the sample pair with MATLAB's built-in function cp2tform;
3) The two images obtained in step 2) constitute a new face sample pair; all subsequent operations of this patent are performed on the new pair.
2. Image mean-variance normalization and gray-level hyperbolic-tangent stretching
The captured face sample images are affected by illumination. The method reduces this influence to a certain extent by applying mean-variance normalization and gray-level hyperbolic-tangent stretching to the images.
图像均值-方差归一化公式为:The image mean-variance normalization formula is:
g(i,j) = (f(i,j) - mean(f)) / sqrt(var(f))
mean(f) = (1/(M·N)) Σ(i=1..M) Σ(j=1..N) f(i,j)
var(f) = (1/(M·N)) Σ(i=1..M) Σ(j=1..N) (f(i,j) - mean(f))^2
(for an M×N image)
where g(i,j) is the gray value at (i,j) after normalization, f(i,j) the gray value at (i,j) before normalization, mean(f) the mean gray value of the image before normalization, and var(f) the variance of the gray values of the image before normalization.
Gray-level hyperbolic-tangent stretching is performed after the image mean-variance normalization.
[Formula images PCTCN2016070828-appb-000018 and -000019: the gray-level hyperbolic-tangent stretching formulas]
where g̃(i,j) is the gray value at (i,j) after stretching and g(i,j) the gray value at (i,j) before stretching.
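The preprocessing can be sketched as follows. Since the exact stretching formula appears only as an image in the source, a plain tanh of the normalized value is used here as a stated assumption; only the mean-variance normalization follows the formulas given above:

```python
import math

def preprocess(img):
    """Mean-variance normalize a gray-level matrix, then apply a
    hyperbolic-tangent stretch. The plain tanh used for the stretch is
    an assumption; the patent's exact formula is not reproduced here."""
    pix = [v for row in img for v in row]
    n = len(pix)
    mean_f = sum(pix) / n
    var_f = sum((v - mean_f) ** 2 for v in pix) / n
    std = math.sqrt(var_f) if var_f > 0 else 1.0
    g = [[(v - mean_f) / std for v in row] for row in img]   # normalization
    return [[math.tanh(v) for v in row] for row in g]        # stretching
```

After this step every pixel lies in (-1, 1), which suppresses global illumination differences between the two images of a sample pair.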
The eye-localization-based face alignment, mean-variance normalization, and gray-level hyperbolic-tangent stretching used in the present invention all rely on elementary algorithms from mathematics, so the invention can be realized without complex and lengthy mathematical computation.
另一方面,本发明还提供了一种人脸认证装置,如图5所示,包括:In another aspect, the present invention also provides a face authentication device, as shown in FIG. 5, comprising:
获取模块11:用于获取人脸图像样本对;The obtaining module 11 is configured to acquire a pair of face image samples;
分块模块12:用于对获取的人脸图像样本对进行分块;Blocking module 12: configured to block the acquired face image sample pairs;
计算模块13:用于对得到的分块计算局部灰度特征向量和局部方向梯度直方图HOG向量;The calculating module 13 is configured to calculate a local gray level feature vector and a local direction gradient histogram HOG vector for the obtained block;
判断模块14:用于根据计算出的局部灰度特征向量和局部HOG向量,计算判断人脸图像样本对是否属于同一个人。The determining module 14 is configured to calculate, according to the calculated local gray feature vector and the local HOG vector, whether the pair of face image samples belong to the same person.
In the face authentication device of the present invention, the acquisition module 11 first acquires a face image sample pair; the blocking module 12 then partitions the acquired pair into blocks, which captures the local information of the face images and reduces the computational cost of image processing; next, the calculation module 13 computes a local gray-level feature vector and a local HOG vector for each block. Because gray-level features are the primary cue in image recognition, the local features of a face are stable, and HOG, operating on local cells of the image, is largely invariant to geometric and photometric deformations, the invention effectively resists interference from illumination, expression, age, and similar factors, improving the accuracy of the authentication result. Finally, the judging module 14 determines from the computed local gray-level feature vectors and local HOG vectors whether the pair belongs to the same person. The face authentication device of the invention thus effectively avoids interference from illumination, expression, and age while significantly improving the accuracy of face authentication.
作为本发明的一种改进,分块模块12可以进一步用于对获取的人脸图像样本对进行多尺度、有重叠分块。As an improvement of the present invention, the blocking module 12 can be further configured to perform multi-scale, overlapping partitioning on the acquired pairs of face image samples.
In the present invention, when an external image is received, information at different scales is processed automatically. Local images at different scales contain different edge and other information, so a multi-scale blocking strategy extracts local information at each scale, compensating for the information missed by any single-scale feature. To extract sufficient image features, the invention also partitions the face images with an overlapping blocking strategy. This not only mitigates, to a certain extent, the influence of illumination, expression, and similar factors on the authentication result, but also effectively strengthens the accuracy of face authentication.
优选的,计算模块13,可以进一步用于对得到的分块,计算局部灰度向量余弦距离作为灰度特征向量,并计算局部HOG向量余弦距离作为HOG向量。Preferably, the calculation module 13 is further configured to calculate the local gray vector cosine distance as the gray feature vector for the obtained block, and calculate the local HOG vector cosine distance as the HOG vector.
The invention combines local gray-level features with local HOG features to process the face image sample pairs, which not only mitigates, to a certain extent, the influence of expression, age, illumination, and similar factors on the authentication result, but also realizes face authentication accurately; compared with the prior art, the invention is superior in both the time and the space complexity of the algorithm.
作为本发明的一种改进,判断模块14,可以进一步用于对计算出的局部灰度特征向量和局部HOG向量,采用softmax回归模型的人工神经网络分类模型进行处理,计算判断人脸图像样本对是否属于同一个人。As an improvement of the present invention, the judging module 14 can be further configured to process the calculated local gray feature vector and the local HOG vector, and adopt an artificial neural network classification model of the softmax regression model to calculate and determine the face image sample pair. Whether it belongs to the same person.
As an improvement of the present invention, the judging module 14 may further be configured to process the computed local gray-level feature vectors and local HOG vectors with an artificial neural network classification model based on softmax regression to determine whether the face image sample pair belongs to the same person. The judging module 14 of the present invention may instead process the computed local gray-level feature vectors and local HOG vectors with an SVM, naive Bayes, a Gaussian-process-based classifier, AdaBoost, or the k-nearest-neighbor (KNN) algorithm to make the same determination.
The artificial neural network of the softmax regression model in the present invention is a mathematical model that simulates the way the human brain thinks; it can learn, memorize, and correct errors. In the present invention it serves to improve the accuracy of face authentication: the extracted face feature vector is taken as the input of the network, and the network parameters are trained to obtain a face recognition classifier, completing the authentication of the face.
To remove the influence of pose, position, illumination, and similar factors on face authentication, a preprocessing module 15 is connected between the obtaining module 11 and the blocking module 12, as shown in FIG. 6, and is configured to align the face image sample pair and to normalize and stretch the aligned pair.
Because translation, rotation, and other pose changes occur while face images are captured, the sample pair is aligned here to remove the influence of pose and position on face authentication; the aligned pair is then normalized and stretched to strengthen its consistency and reduce the influence of illumination, orientation, and noise.
优选的,预处理模块15,进一步用于采用基于人眼定位的人脸对齐算法对人脸图像样本对进行对齐处理;对对齐后的人脸图像样本对采用均值-方差归一化和灰度双曲正切拉伸处理。Preferably, the pre-processing module 15 is further configured to perform alignment processing on the face image sample pairs by using a human face alignment based face alignment algorithm; and using mean-variance normalization and grayscale on the aligned face image sample pairs. Hyperbolic tangent stretching treatment.
The eye-localization-based face alignment, mean-variance normalization, and gray-level hyperbolic-tangent stretching used in the present invention all rely on elementary algorithms from mathematics, so the invention can be realized without complex and lengthy mathematical computation.
The present invention obtains a difference representation of the face from local grayscale differences and local gradient histogram differences, and uses multi-scale features to describe the face differences more comprehensively. The invention exploits the robustness of local texture information to illumination, pose, and expression, while keeping the time and space complexity of the algorithm low. On the FERET database, the four subsets Fb, Fc, DupI, and DupII achieved verification rates of 97.91%, 73.20%, 63.71%, and 49.57%, respectively, at a false accept rate of 0.1%.
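The multi-scale blocking and per-block cosine-distance difference representation described above can be sketched as follows. The block sizes and step sizes are hypothetical, and the sketch covers only the local grayscale part of the feature (the HOG half of the representation would be computed analogously from per-block gradient histograms):

```python
import numpy as np

def overlapping_blocks(img, size, step):
    """Yield square blocks of side `size`, sliding by `step`.

    step < size produces overlapping blocks, as claimed in the patent.
    """
    h, w = img.shape
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            yield img[r:r + size, c:c + size]

def cosine_distance(u, v, eps=1e-12):
    # 1 - cos(u, v); small for similar blocks, up to 2 for opposed ones.
    return 1.0 - float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

def difference_features(img_a, img_b, scales=((16, 8), (32, 16))):
    """Concatenate per-block grayscale cosine distances over several scales."""
    feats = []
    for size, step in scales:
        for ba, bb in zip(overlapping_blocks(img_a, size, step),
                          overlapping_blocks(img_b, size, step)):
            feats.append(cosine_distance(ba.ravel(), bb.ravel()))
    return np.array(feats)
```

The resulting vector of per-block distances is the face-difference representation that would then be fed to the softmax regression classifier to decide whether the pair belongs to the same person.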
The above is a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles described herein, and such improvements and refinements shall also be regarded as falling within the scope of protection of the present invention.

Claims (10)

  1. A face authentication method, comprising:
    acquiring a face image sample pair;
    dividing the acquired face image sample pair into blocks;
    calculating a local grayscale feature vector and a local histogram of oriented gradients (HOG) vector for the obtained blocks;
    determining, according to the calculated local grayscale feature vector and local HOG vector, whether the face image sample pair belongs to the same person.
  2. The face authentication method according to claim 1, wherein dividing the acquired face image sample pair into blocks further comprises:
    dividing the acquired face image sample pair into multi-scale, overlapping blocks.
  3. The face authentication method according to claim 1, wherein calculating the local grayscale feature vector and the local HOG vector for the obtained blocks further comprises:
    for the obtained blocks, calculating the local grayscale vector cosine distance as the grayscale feature vector, and calculating the local HOG vector cosine distance as the HOG vector.
  4. The face authentication method according to claim 1, wherein determining whether the face image sample pair belongs to the same person according to the calculated local grayscale feature vector and local HOG vector further comprises:
    processing the calculated local grayscale feature vector and local HOG vector with an artificial neural network classification model using softmax regression, to determine whether the face image sample pair belongs to the same person.
  5. The face authentication method according to claim 1, wherein before dividing the acquired face image sample pair into blocks, the method comprises:
    aligning the face image sample pair;
    normalizing and stretching the aligned face image sample pair.
  6. The face authentication method according to claim 5, wherein aligning the face image sample pair further comprises: aligning the face image sample pair using a face alignment algorithm based on eye localization;
    and wherein normalizing and stretching the aligned face image sample pair further comprises: applying mean-variance normalization and grayscale hyperbolic tangent stretching to the aligned face image sample pair.
  7. A face authentication device, comprising:
    an acquisition module, configured to acquire a face image sample pair;
    a blocking module, configured to divide the acquired face image sample pair into blocks;
    a calculation module, configured to calculate a local grayscale feature vector and a local histogram of oriented gradients (HOG) vector for the obtained blocks;
    a judgment module, configured to determine, according to the calculated local grayscale feature vector and local HOG vector, whether the face image sample pair belongs to the same person.
  8. The face authentication device according to claim 7, wherein the blocking module is further configured to divide the acquired face image sample pair into multi-scale, overlapping blocks.
  9. The face authentication device according to claim 7, wherein the calculation module is further configured to, for the obtained blocks, calculate the local grayscale vector cosine distance as the grayscale feature vector, and calculate the local HOG vector cosine distance as the HOG vector.
  10. The face authentication device according to claim 7, wherein the judgment module is further configured to process the calculated local grayscale feature vector and local HOG vector with an artificial neural network classification model using softmax regression, to determine whether the face image sample pair belongs to the same person.
PCT/CN2016/070828 2015-03-19 2016-01-13 Face authentication method and device WO2016145940A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510121748.1 2015-03-19
CN201510121748.1A CN105447441B (en) 2015-03-19 2015-03-19 Face authentication method and device

Publications (1)

Publication Number Publication Date
WO2016145940A1 true WO2016145940A1 (en) 2016-09-22

Family

ID=55557601

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/070828 WO2016145940A1 (en) 2015-03-19 2016-01-13 Face authentication method and device

Country Status (2)

Country Link
CN (1) CN105447441B (en)
WO (1) WO2016145940A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520215A (en) * 2018-03-28 2018-09-11 电子科技大学 Single sample face recognition method based on multiple dimensioned union feature encoder
CN110059551A (en) * 2019-03-12 2019-07-26 五邑大学 A kind of automatic checkout system of food based on image recognition
CN110472460A (en) * 2018-05-11 2019-11-19 北京京东尚科信息技术有限公司 Face image processing process and device
CN111860454A (en) * 2020-08-04 2020-10-30 北京深醒科技有限公司 Model switching algorithm based on face recognition
CN112069875A (en) * 2020-07-17 2020-12-11 北京百度网讯科技有限公司 Face image classification method and device, electronic equipment and storage medium
CN112446247A (en) * 2019-08-30 2021-03-05 北京大学 Low-illumination face detection method based on multi-feature fusion and low-illumination face detection network
CN112861100A (en) * 2021-02-08 2021-05-28 北京百度网讯科技有限公司 Identity authentication method, device, equipment and storage medium
CN113705462A (en) * 2021-08-30 2021-11-26 平安科技(深圳)有限公司 Face recognition method and device, electronic equipment and computer readable storage medium

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102547820B1 (en) * 2016-07-11 2023-06-27 삼성전자주식회사 Method and apparatus for verifying user using multiple biometric verifiers
CN110099601A (en) * 2016-10-14 2019-08-06 费森瑟有限公司 Detection respiration parameter simultaneously provides the system and method for biofeedback
CN107392182B (en) * 2017-08-17 2020-12-04 宁波甬慧智能科技有限公司 Face acquisition and recognition method and device based on deep learning
CN107657216A (en) * 2017-09-11 2018-02-02 安徽慧视金瞳科技有限公司 1 to the 1 face feature vector comparison method based on interference characteristic vector data collection
CN107818299A (en) * 2017-10-17 2018-03-20 内蒙古科技大学 Face recognition algorithms based on fusion HOG features and depth belief network
EP3698268A4 (en) 2017-11-22 2021-02-17 Zhejiang Dahua Technology Co., Ltd. Methods and systems for face recognition
CN107992807B (en) * 2017-11-22 2020-10-30 浙江大华技术股份有限公司 Face recognition method and device based on CNN model
CN108234770B (en) * 2018-01-03 2020-11-03 京东方科技集团股份有限公司 Auxiliary makeup system, auxiliary makeup method and auxiliary makeup device
CN109002832B (en) * 2018-06-11 2021-11-19 湖北大学 Image identification method based on hierarchical feature extraction
CN110866435B (en) * 2019-08-13 2023-09-12 广州三木智能科技有限公司 Far infrared pedestrian training method for self-similarity gradient orientation histogram
CN111539271B (en) * 2020-04-10 2023-05-02 哈尔滨新光光电科技股份有限公司 Face recognition method based on wearable equipment and wearable face detection equipment for frontier defense

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120087552A1 (en) * 2010-10-08 2012-04-12 Micro-Star Int'l Co., Ltd. Facial recognition method for eliminating the effect of noise blur and environmental variations
CN102663413A (en) * 2012-03-09 2012-09-12 中盾信安科技(江苏)有限公司 Multi-gesture and cross-age oriented face image authentication method
CN103810490A (en) * 2014-02-14 2014-05-21 海信集团有限公司 Method and device for confirming attribute of face image
CN103914686A (en) * 2014-03-11 2014-07-09 辰通智能设备(深圳)有限公司 Face comparison authentication method and system based on identification photo and collected photo
CN104077580A (en) * 2014-07-15 2014-10-01 中国科学院合肥物质科学研究院 Pest image automatic recognition method based on high-reliability network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6826300B2 (en) * 2001-05-31 2004-11-30 George Mason University Feature based classification
KR100438841B1 (en) * 2002-04-23 2004-07-05 삼성전자주식회사 Method for verifying users and updating the data base, and face verification system using thereof
CN102622588B (en) * 2012-03-08 2013-10-09 无锡中科奥森科技有限公司 Dual-certification face anti-counterfeit method and device
CN102646190B (en) * 2012-03-19 2018-05-08 深圳市腾讯计算机系统有限公司 A kind of authentication method based on biological characteristic, apparatus and system
CN103440478B (en) * 2013-08-27 2016-08-10 电子科技大学 A kind of method for detecting human face based on HOG feature

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120087552A1 (en) * 2010-10-08 2012-04-12 Micro-Star Int'l Co., Ltd. Facial recognition method for eliminating the effect of noise blur and environmental variations
CN102663413A (en) * 2012-03-09 2012-09-12 中盾信安科技(江苏)有限公司 Multi-gesture and cross-age oriented face image authentication method
CN103810490A (en) * 2014-02-14 2014-05-21 海信集团有限公司 Method and device for confirming attribute of face image
CN103914686A (en) * 2014-03-11 2014-07-09 辰通智能设备(深圳)有限公司 Face comparison authentication method and system based on identification photo and collected photo
CN104077580A (en) * 2014-07-15 2014-10-01 中国科学院合肥物质科学研究院 Pest image automatic recognition method based on high-reliability network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RUAN, XIAOHU ET AL.: "AN ASSESSMENT METHOD FOR FACE ALIGNMENT BASED ON FEATURE MATCHING", CAAI TRANSACTIONS ON INTELLIGENT SYSTEMS, vol. 10, no. 1, 28 February 2015 (2015-02-28) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520215A (en) * 2018-03-28 2018-09-11 电子科技大学 Single sample face recognition method based on multiple dimensioned union feature encoder
CN110472460A (en) * 2018-05-11 2019-11-19 北京京东尚科信息技术有限公司 Face image processing process and device
CN110059551A (en) * 2019-03-12 2019-07-26 五邑大学 A kind of automatic checkout system of food based on image recognition
CN112446247A (en) * 2019-08-30 2021-03-05 北京大学 Low-illumination face detection method based on multi-feature fusion and low-illumination face detection network
CN112446247B (en) * 2019-08-30 2022-11-15 北京大学 Low-illumination face detection method based on multi-feature fusion and low-illumination face detection network
CN112069875A (en) * 2020-07-17 2020-12-11 北京百度网讯科技有限公司 Face image classification method and device, electronic equipment and storage medium
CN112069875B (en) * 2020-07-17 2024-05-28 北京百度网讯科技有限公司 Classification method and device for face images, electronic equipment and storage medium
CN111860454B (en) * 2020-08-04 2024-02-09 北京深醒科技有限公司 Model switching algorithm based on face recognition
CN111860454A (en) * 2020-08-04 2020-10-30 北京深醒科技有限公司 Model switching algorithm based on face recognition
CN112861100A (en) * 2021-02-08 2021-05-28 北京百度网讯科技有限公司 Identity authentication method, device, equipment and storage medium
CN112861100B (en) * 2021-02-08 2023-09-05 北京百度网讯科技有限公司 Authentication method, device, equipment and storage medium
CN113705462B (en) * 2021-08-30 2023-07-14 平安科技(深圳)有限公司 Face recognition method, device, electronic equipment and computer readable storage medium
CN113705462A (en) * 2021-08-30 2021-11-26 平安科技(深圳)有限公司 Face recognition method and device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN105447441A (en) 2016-03-30
CN105447441B (en) 2019-03-29

Similar Documents

Publication Publication Date Title
WO2016145940A1 (en) Face authentication method and device
WO2021120752A1 (en) Region-based self-adaptive model training method and device, image detection method and device, and apparatus and medium
Gou et al. A joint cascaded framework for simultaneous eye detection and eye state estimation
CN104517104B (en) A kind of face identification method and system based under monitoring scene
Gunay et al. Automatic age classification with LBP
WO2016150240A1 (en) Identity authentication method and apparatus
Gou et al. Learning-by-synthesis for accurate eye detection
US20100111375A1 (en) Method for Determining Atributes of Faces in Images
Liu et al. Local histogram specification for face recognition under varying lighting conditions
Zhao et al. Cascaded shape space pruning for robust facial landmark detection
Huang et al. A component-based framework for generalized face alignment
CN105760815A (en) Heterogeneous human face verification method based on portrait on second-generation identity card and video portrait
Yu et al. Improvement of face recognition algorithm based on neural network
Rukhiran et al. Effecting of environmental conditions to accuracy rates of face recognition based on IoT solution
Kwaśniewska et al. Face detection in image sequences using a portable thermal camera
CN110969101A (en) Face detection and tracking method based on HOG and feature descriptor
Kwaśniewska et al. Real-time facial feature tracking in poor quality thermal imagery
Hemasree et al. Facial Skin Texture and Distributed Dynamic Kernel Support Vector Machine (DDKSVM) Classifier for Age Estimation in Facial Wrinkles.
Aliradi et al. A novel descriptor (LGBQ) based on Gabor filters
Thomas et al. Real Time Face Mask Detection and Recognition using Python
Li et al. 3D face recognition by constructing deformation invariant image
CN108171750A (en) The chest handling positioning identification system of view-based access control model
Martinez et al. Facial landmarking for in-the-wild images with local inference based on global appearance
Arya et al. An Efficient Face Detection and Recognition Method for Surveillance
Ayodele et al. Development of a modified local Binary Pattern-Gabor Wavelet transform aging invariant face recognition system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16764122

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16764122

Country of ref document: EP

Kind code of ref document: A1