WO2017181899A1 - Face liveness verification method and device

Face liveness verification method and device

Info

Publication number
WO2017181899A1
Authority
WO
WIPO (PCT)
Prior art keywords: face, feature point, error, homography matrix, calculation result
Application number
PCT/CN2017/080376
Other languages
English (en)
French (fr)
Inventor
鲍冠伯
李季檩
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司
Priority to EP17785384.3A (published as EP3447679A4)
Priority to JP2018538712A (published as JP2019509545A)
Priority to KR1020187021514A (published as KR102120241B1)
Publication of WO2017181899A1
Priority to US16/024,117 (published as US10796179B2)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 - User authentication
    • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/45 - Structures or tools for the administration of authentication
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/167 - Detection; Localisation; Normalisation using comparisons between temporally consecutive images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 - Spoof detection, e.g. liveness detection
    • G06V 40/45 - Detection of the body part being alive
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G06T 2207/30201 - Face

Definitions

  • This application belongs to the field of face recognition technology and relates in particular to a face liveness verification method and device.
  • Liveness detection refers to the technique of judging whether an input face is a planar fake or a real, live face.
  • Existing multi-camera liveness detection technology generally uses a binocular camera pair in a left-right horizontal configuration: after binocular calibration, stereo matching is performed on the left and right images to recover the depth information of the face region.
  • The regular depth information of the face region is used to distinguish a real face from a planar fake (such as a photo or video displayed on a mobile device like a phone or tablet, or a printed face on paper).
  • The embodiments of this application provide the following technical solutions:
  • A face liveness verification method, which includes the steps set out below.
  • A face liveness verification device, comprising:
  • an acquiring unit, configured to acquire face images captured by at least two cameras;
  • a feature point registration unit, configured to perform feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations;
  • a fitting unit, configured to fit a homography matrix between the feature point combinations using a preset algorithm;
  • a calculating unit, configured to calculate transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result;
  • a verification unit, configured to perform face liveness verification according to the error calculation result.
  • The embodiments of this application do not require camera calibration, eliminating the calibration error introduced during camera calibration in the prior art.
  • The method of the embodiments performs feature point matching directly on the captured face images, fits the homography matrix, obtains the error calculation result based on the homography matrix, and then performs face liveness verification.
  • Compared with existing liveness detection technology, this greatly reduces the computational cost of the liveness discrimination algorithm and improves the speed and real-time performance of liveness detection.
  • FIG. 1a is a schematic diagram of a scenario of a face liveness verification method according to an embodiment of this application;
  • FIG. 1b is a schematic flowchart of a face liveness verification method according to an embodiment of this application;
  • FIG. 2a is a schematic flowchart of a face liveness verification method according to an embodiment of this application;
  • FIG. 2b is a schematic diagram of feature point registration in a face liveness verification method according to an embodiment of this application;
  • FIG. 3a is a schematic structural diagram of a face liveness verification device according to an embodiment of this application;
  • FIG. 3b is another schematic structural diagram of a face liveness verification device according to an embodiment of this application;
  • FIG. 4 is a schematic structural diagram of a server according to an embodiment of this application.
  • The term "module" as used herein may be regarded as a software object executed on the computing system.
  • The different components, modules, engines, and services described herein may be regarded as objects implemented on the computing system.
  • The embodiments of this application provide a face liveness verification method and device.
  • FIG. 1a is a schematic diagram of a scenario of the face liveness verification method according to an embodiment of this application.
  • The scenario may include a face liveness verification device, which may be referred to simply as a face verification device.
  • The device may be implemented by a dedicated computing device or integrated into some computing device such as a server, for example as an application running on the server, a board inserted into a server expansion slot, or the like.
  • The device may be configured to acquire face images captured by at least two cameras, such as the face images captured by camera A and camera B; thereafter, it performs feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations, for example by performing facial-feature registration on the face images; then it fits a homography matrix between the feature point combinations using a preset algorithm; finally, it calculates the transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result and performs face liveness verification according to the error calculation result. For example, if the error calculation result satisfies a preset threshold, the result of the face liveness verification may be determined to be a pass, and so on.
  • Camera calibration is not required, eliminating the calibration error introduced during camera calibration in the prior art.
  • The method of the embodiments performs feature point matching directly on the captured face images, fits the homography matrix, and performs face liveness verification based on the error calculation result obtained from the homography matrix. Compared with existing liveness detection technology, this greatly reduces the computational cost of the liveness discrimination algorithm and improves the speed and real-time performance of liveness detection.
  • The scenario may further include multiple camera devices (i.e., cameras), which are mainly used to capture image data and to send a face image to the face verification device when one is captured, so as to determine whether the face liveness verification passes.
  • The description is given from the perspective of a face liveness verification device, which may be integrated in a network device such as a server.
  • A face liveness verification method includes: acquiring face images captured by at least two cameras; performing feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations;
  • fitting a homography matrix between the feature point combinations using a preset algorithm; and calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result, and performing face liveness verification according to the error calculation result.
  • FIG. 1b is a schematic flowchart of the face liveness verification method according to an embodiment of this application.
  • The method can include the following steps.
  • In step S101, face images captured by at least two cameras are acquired.
  • The step in which the face liveness verification device acquires the face images captured by the at least two cameras may specifically include:
  • Step 1: acquiring image data captured by at least two cameras;
  • Step 2: performing face detection on the image data to obtain a face detection result;
  • Step 3: acquiring the face image when the face detection result indicates that a face image is detected.
  • Face detection is a key part of an automatic face recognition system. It refers to searching any given image according to a certain strategy to determine whether it contains a face, and if so, returning the position, size, and pose of the face. This embodiment may use an existing face detection algorithm to analyze and recognize the acquired image data; the face detection process is not specifically limited here.
  • That is, multiple cameras (two or more) collect image data and send it to the face liveness verification device in real time; the face liveness verification device performs face detection on the received image data and, when a face image is detected, acquires the face image for liveness verification.
  • Alternatively, the camera may perform face detection directly on the collected image data and, when a face image is detected, acquire the face image and send it to the face liveness verification device for liveness verification.
  • In step S102, feature point registration is performed on the face images according to the preset facial feature points to obtain corresponding feature point combinations.
  • The face liveness verification device performs feature point registration on the face images according to the number and positions of the preset facial feature points, that is, performs facial-feature registration on the face images, so that the corresponding feature point combinations can be obtained.
  • If there are two cameras, the face liveness verification device may, after acquiring the face images captured by the two cameras, perform feature point registration on the two face images according to the preset facial feature points, thereby obtaining multiple feature point combinations, also called feature point pairs.
  • Feature point registration refers to the process of matching and superimposing multiple face images of the same region obtained by multiple cameras with different imaging means, including processing such as geometric correction, projective transformation, and unification of scale.
  • The registration process of the face liveness verification device may be as follows: first, feature extraction is performed on the two face images to obtain feature points; matching feature point pairs are found by similarity measurement; image space coordinate transformation parameters are then obtained from the matched feature point pairs; finally, image registration is performed using the coordinate transformation parameters.
  • An existing feature extraction algorithm may be used to perform feature point registration on the images, which is not described in detail here.
  • In step S103, a homography matrix between the feature point combinations is fitted using a preset algorithm.
  • The face liveness verification device may fit the feature point combinations using a preset random sample consensus algorithm together with the least squares method to generate the homography matrix.
  • The basic assumption of the random sample consensus (RANSAC) algorithm in the embodiments of this application is that the sample contains correct data (inliers, data that can be described by the model) as well as abnormal data (outliers, data that deviates far from the normal range and cannot fit the mathematical model); that is, the data set contains noise. Such abnormal data may arise from erroneous measurements, erroneous assumptions, erroneous calculations, and the like.
  • The RANSAC algorithm also assumes that, given a set of correct data, there exists a method that can compute model parameters consistent with these data.
  • The least squares method finds the best functional match for the data by minimizing the sum of the squares of the errors.
  • With the least squares method, unknown data can be conveniently obtained such that the sum of the squares of the errors between the obtained data and the actual data is minimized.
  • That is, the face liveness verification device computes and fits the model consistent with the feature point combinations, namely the homography matrix, using the random sample consensus algorithm and the least squares method. Further, the step of fitting the feature point combinations to generate the homography matrix may specifically include:
  • Step a: marking the pairs of feature points in the feature point combinations in the form of homogeneous coordinates;
  • Step b: establishing a matrix model, and determining a correspondence relation between the pairs of feature points through the matrix model;
  • Step c: performing a homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form;
  • Step d: fitting the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
  • Homogeneous coordinates represent an originally n-dimensional vector by an (n+1)-dimensional vector. For example, if there are two cameras, suppose P1 and P2 are the homogeneous coordinates of a pair of corresponding feature points in the two face images; establishing the matrix model H of the homography matrix, the correspondence relation between P1 and P2 through H may be P2 = H*P1.
  • In step S104, the transformation errors of the feature point combinations are calculated based on the homography matrix to obtain an error calculation result, and face liveness verification is performed according to the error calculation result.
  • The step in which the face liveness verification device calculates the transformation errors of the feature point combinations based on the homography matrix to obtain the error calculation result may specifically include:
  • Step A: for each feature point combination, calculating the corresponding transformation error based on the homography matrix;
  • Step B: averaging the transformation errors corresponding to the feature point combinations to obtain an average error, and determining the average error as the error calculation result.
  • For each feature point combination, the face liveness verification device calculates the corresponding transformation error based on the homography matrix, where the transformation error is the error between the feature point combination before and after the homogeneous transformation by the homography matrix. If 50 feature points are preset, there are correspondingly 50 feature point combinations, and the transformation errors of the 50 combinations are averaged to obtain the average error.
  • Each feature point pair corresponds to one transformation error; if 50 feature points are preset, there are correspondingly 50 feature point pairs, and the transformation errors of the 50 pairs are averaged to obtain the average error.
  • If a feature point combination contains three feature points, then taking the middle camera as the reference, each combination may have two errors; in scenarios requiring higher accuracy, more errors may be set for comparison, for example one transformation error between each pair of feature points, giving three errors per combination. This is not specifically limited here.
  • Performing face liveness verification according to the error calculation result may specifically include:
  • Step 1: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determining that the result of the face liveness verification is a pass;
  • Step 2: if the average error indicated by the error calculation result is greater than the preset threshold, determining that the result of the face liveness verification is a fail.
  • After the average error is calculated, it is compared with the preset threshold. If the average error is less than or equal to the preset threshold, the result of the face liveness verification may be considered a pass, that is, the acquired face images show a real person, and face recognition is then performed; if the average error is greater than the preset threshold, the result may be considered a fail, that is, the acquired face images show a fake.
  • In the face liveness verification method provided by the embodiments of this application, when at least two cameras capture face images,
  • feature point registration is first performed on the face images according to preset facial feature points to obtain corresponding feature point combinations; then the homography matrix between the feature point combinations is fitted using a preset algorithm;
  • finally, the transformation errors of these combinations are calculated based on the homography matrix, and face liveness verification is performed according to the error calculation result. The embodiments of this application do not require camera calibration, which eliminates calibration error; feature point matching is performed directly on
  • the captured face images and the homography matrix is fitted, so the computational cost of the liveness discrimination algorithm can be reduced. Moreover, the cameras can be placed freely, improving the flexibility and convenience of liveness discrimination.
  • The face liveness verification method is mainly applied to an office access control system. Before face recognition is performed, liveness verification is first carried out with this method, and face recognition proceeds only when the face passes liveness verification. The method uses two or more freely placed cameras, analyzes the three-dimensional geometric relationship of the facial feature points in the multiple face images captured simultaneously by the cameras, and exploits the homography of feature point pairs from plane analytic geometry, so that no user cooperation is needed for liveness verification.
  • The face liveness verification method is described in detail below for the case of two cameras.
  • The two cameras are respectively mounted on the side of the door frame, generally placed one above the other to save space.
  • FIG. 2a is a schematic flowchart of the face liveness verification method according to an embodiment of this application.
  • The method can include the following steps.
  • In step S201, the face liveness verification device simultaneously captures the images from the two cameras.
  • Camera A and camera B are respectively mounted on the side of the door frame and placed freely: not only left-right horizontal or top-bottom vertical placement is supported, but a certain horizontal or vertical offset and angle between the cameras is also allowed, which is convenient and flexible in actual deployment. It can be understood that camera A and camera B are generally placed one above the other to save space.
  • In step S202, the face liveness verification device performs face detection on the images from the two cameras.
  • Camera A and camera B send the captured images to the face liveness verification device, which performs face detection on the received images, that is, detects whether a face is present in each image; face detection refers to searching any given image according to a certain strategy to determine whether it contains a face. This embodiment may use an existing face detection algorithm to analyze and recognize the acquired image data.
  • In step S203, when the face liveness verification device determines that a face is detected in the images of both cameras, feature point registration is performed to obtain feature point pairs.
  • When both cameras detect a face, the face liveness verification device performs feature point registration on the two face images, that is, facial-feature point registration.
  • In this embodiment, 90 feature points are preset in the face liveness verification device,
  • and the two sets of feature points correspond one to one.
  • As shown in FIG. 2b, the two face images are captured simultaneously by camera A and camera B arranged one above the other,
  • and the points marked on the faces are the registered facial feature points.
  • In step S204, the face liveness verification device fits the homography matrix of the feature point pairs using the random sample consensus (RANSAC) algorithm according to the feature point pairs.
  • Suppose P1 = (x1, y1, 1)^T and P2 = (x2, y2, 1)^T are the homogeneous coordinates of a pair of corresponding feature points in the two face images, and establish the matrix model of the homography matrix H, so that P2 = H*P1, with h33 = 1 for normalization.
  • The coordinates transformed by H*P1 are (h11*x1 + h12*y1 + h13, h21*x1 + h22*y1 + h23, h31*x1 + h32*y1 + 1)^T.
  • Converting H*P1 into homogeneous form gives x2 = (h11*x1 + h12*y1 + h13)/(h31*x1 + h32*y1 + 1) and y2 = (h21*x1 + h22*y1 + h23)/(h31*x1 + h32*y1 + 1).
  • The RANSAC algorithm and the least squares method may be used to fit the matrix model H to the 90 feature point pairs, obtaining the homography matrix corresponding to the matrix model H.
  • In step S205, the face liveness verification device calculates the errors of the feature point pairs before and after the transformation by the homography matrix.
  • In step S206, the face liveness verification device calculates the average error from the errors of the feature point pairs before and after the transformation by the homography matrix.
  • The transformation errors of the 90 feature point pairs are averaged to obtain the average error.
  • In step S207a, if the average error is less than or equal to the preset threshold, the face liveness verification device determines that the face liveness verification passes.
  • In step S207b, if the average error is greater than the preset threshold, the face liveness verification device determines that the face liveness verification fails.
  • The face liveness verification device compares the average error with the preset threshold h to judge whether the subject is a real person or a fake. If the average error is less than or equal to the preset threshold, it determines that the face liveness verification passes, that is, the images show a real person; face recognition is then performed, and the door opens when face recognition passes and remains closed when it fails. If the average error is greater than the preset threshold, it determines that the face liveness verification fails, that is, the images show a fake, and the door does not open.
  • The method described in this embodiment uses two or more freely placed cameras; through the three-dimensional geometric relationship of the facial feature points in the multiple face images captured simultaneously by the cameras, it exploits the homography of feature point pairs from plane analytic geometry.
  • Neither camera calibration nor user cooperation is required:
  • the images captured by the multiple cameras are directly matched by facial feature point pairs to perform face liveness verification.
  • The method can effectively distinguish a real face from a planar fake (such as a photo or video displayed on a mobile device like a phone or tablet, or a printed face on paper) without calibrating the cameras.
  • The cameras can be placed with a high degree of freedom: not only left-right horizontal or top-bottom vertical placement is supported, but a certain horizontal or vertical offset and angle between the cameras is also allowed, which makes the actual deployment of the binocular system more convenient and flexible.
  • The method can be applied to various access control systems, such as an office access control system, and also to other systems requiring liveness verification.
  • In the face liveness verification method provided by the embodiments of this application, when at least two cameras capture face images, feature point registration is first performed on the face images according to preset facial feature points to obtain corresponding feature point combinations; then the homography matrix between the feature point combinations is fitted using a preset algorithm; finally, the transformation errors of these combinations are calculated based on the homography matrix, and face liveness verification is performed according to the error calculation result.
  • The embodiments of this application do not require camera calibration, which eliminates calibration error; feature point matching is performed directly on the captured face images and the homography matrix is fitted, thereby reducing the computational cost of the liveness discrimination algorithm.
  • Moreover, the cameras can be placed freely, improving the flexibility and convenience of liveness discrimination.
  • An embodiment of this application further provides a device based on the above face liveness verification method.
  • The meanings of the terms are the same as in the above face liveness verification method; for specific implementation details, refer to the description in the method embodiments.
  • FIG. 3a is a schematic structural diagram of the face liveness verification device according to an embodiment of this application, which may include an acquiring unit 301, a feature point registration unit 302, a fitting unit 303, a calculating unit 304, and a verification unit 305.
  • The acquiring unit 301 is configured to acquire face images captured by at least two cameras; the feature point registration unit 302 is configured to perform feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations.
  • The fitting unit 303 is configured to fit a homography matrix between the feature point combinations using a preset algorithm; the calculating unit 304 is configured to calculate the transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result; the verification unit 305 is configured to perform face liveness verification according to the error calculation result.
  • FIG. 3b is another schematic structural diagram of the face liveness verification device according to an embodiment of this application.
  • The acquiring unit 301 may include:
  • an image acquisition subunit 3011, configured to acquire image data captured by at least two cameras;
  • a detection subunit 3012, configured to perform face detection on the image data to obtain a face detection result;
  • a face acquisition subunit 3013, configured to acquire the face image when the face detection result indicates that a face image is detected.
  • Face detection is a key part of an automatic face recognition system. It refers to searching any given image according to a certain strategy to determine whether it contains a face, and if so, returning the position, size, and pose of the face. This embodiment may use an existing face detection algorithm to analyze and recognize the acquired image data; the face detection process is not specifically limited here.
  • That is, multiple cameras (two or more) collect image data and send it to the face liveness verification device in real time; the face liveness verification device performs face detection on the received image data and, when a face image is detected, acquires the face image for liveness verification.
  • Alternatively, the camera may perform face detection directly on the collected image data and, when a face image is detected, acquire the face image and send it to the face liveness verification device for liveness verification.
  • The feature point registration unit 302 may be configured to perform feature point registration on the face images according to the preset facial feature points to obtain the corresponding feature point combinations.
  • The face liveness verification device performs feature point registration on the face images according to the number and positions of the preset facial feature points, that is, performs facial-feature registration on the face images, so that the corresponding feature point combinations can be obtained.
  • If there are two cameras, the face liveness verification device may, after acquiring the face images captured by the two cameras, perform feature point registration on the two face images according to the preset facial feature points, thereby obtaining multiple feature point combinations, also called feature point pairs.
  • Feature point registration refers to the process of matching and superimposing multiple face images of the same region obtained by multiple cameras with different imaging means, including processing such as geometric correction, projective transformation, and unification of scale.
  • The registration process of the face liveness verification device may be as follows: first, feature extraction is performed on the two face images to obtain feature points; matching feature point pairs are found by similarity measurement; image space coordinate transformation parameters are then obtained from the matched feature point pairs; finally, image registration is performed using the coordinate transformation parameters.
  • An existing feature extraction algorithm may be used to perform feature point registration on the images, which is not described in detail here.
  • The fitting unit 303 may be specifically configured to fit the feature point combinations using a preset random sample consensus algorithm together with the least squares method to generate the homography matrix.
  • The basic assumption of the random sample consensus (RANSAC) algorithm in the embodiments of this application is that the sample contains correct data as well as abnormal data; that is, the data set contains noise. Such abnormal data may arise from erroneous measurements, erroneous assumptions, erroneous calculations, and the like.
  • The RANSAC algorithm also assumes that, given a set of correct data, there exists a method that can compute model parameters consistent with these data.
  • The least squares method finds the best functional match for the data by minimizing the sum of the squares of the errors.
  • With the least squares method, unknown data can be conveniently obtained such that the sum of the squares of the errors between the obtained data and the actual data is minimized.
  • That is, the face liveness verification device computes and fits the model consistent with the feature point combinations, namely the homography matrix, using the random sample consensus algorithm and the least squares method. Further, the fitting unit 303 may specifically include:
  • a marking subunit 3031, configured to mark the pairs of feature points in the feature point combinations in the form of homogeneous coordinates;
  • a determining subunit 3032, configured to establish a matrix model and determine a correspondence relation between the pairs of feature points through the matrix model;
  • a transformation subunit 3033, configured to perform a homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form;
  • a fitting subunit 3034, configured to fit the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
  • Homogeneous coordinates represent an originally n-dimensional vector by an (n+1)-dimensional vector. For example, if there are two cameras, suppose P1 and P2 are the homogeneous coordinates of a pair of corresponding feature points in the two face images; establishing the matrix model H of the homography matrix, the correspondence relation between P1 and P2 through H may be P2 = H*P1.
  • The calculating unit 304 may be specifically configured to: for each feature point combination, calculate the corresponding transformation error based on the homography matrix; average the transformation errors corresponding to the feature point combinations to obtain an average error; and determine the average error as the error calculation result.
  • For each feature point combination, the face liveness verification device calculates the corresponding transformation error based on the homography matrix,
  • where the transformation error is the error between the feature point combination before and after the homogeneous transformation by the homography matrix.
  • If 50 feature points are preset, there are correspondingly 50 feature point combinations, and the transformation errors of the 50 combinations are averaged to obtain the average error.
  • Each feature point pair corresponds to one transformation error; if 50 feature points are preset, there are correspondingly 50 feature point pairs, and the transformation errors of the 50 pairs are averaged to obtain the average error.
  • If a feature point combination contains three feature points, then taking the middle camera as the reference, each combination may have two errors; in scenarios requiring higher accuracy, more errors may be set for comparison, for example one transformation error between each pair of feature points, giving three errors per combination. This is not specifically limited here.
  • The verification unit 305 may be specifically configured to: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determine that the result of the face liveness verification is a pass; if the average error indicated by the error calculation result is greater than the preset threshold, determine that the result of the face liveness verification is a fail.
  • After the average error is calculated, it is compared with the preset threshold. If the average error is less than or equal to the preset threshold, the result of the face liveness verification may be considered a pass, that is, the acquired face images show a real person, and face recognition is then performed; if the average error is greater than the preset threshold, the result may be considered a fail, that is, the acquired face images show a fake.
  • In specific implementations, the above units may be implemented as independent entities, or may be combined arbitrarily and implemented as one or several entities; for the specific implementation of the above units, reference may be made to the foregoing method embodiments.
  • The face liveness verification device may specifically be integrated in a network device such as a server.
  • In the face liveness verification device provided by the embodiments of this application, when at least two cameras capture face images, feature point registration is first performed on the face images according to preset facial feature points to obtain corresponding feature point combinations; then the homography matrix between the feature point combinations is fitted using a preset algorithm; finally, the transformation errors of these combinations are calculated based on the homography matrix, and face liveness verification is performed according to the error calculation result.
  • The embodiments of this application do not require camera calibration, which eliminates calibration error; feature point matching is performed directly on the captured face images and the homography matrix is fitted, thereby reducing the computational cost of the liveness discrimination algorithm.
  • Moreover, the cameras can be placed freely, improving the flexibility and convenience of liveness discrimination.
  • An embodiment of this application further provides a server in which the face liveness verification device of the embodiments of this application may be integrated.
  • FIG. 4 shows a schematic structural diagram of the server involved in the embodiments of this application.
  • The server may include a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a radio frequency (RF) circuit 403, a power supply 404, an input unit 405, a display unit 406, and other components.
  • The structure shown in FIG. 4 does not constitute a limitation on the server; the server in each embodiment may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components.
  • A server in some examples may not have the RF circuit 403 and/or the display unit 406.
  • The processor 401 is the control center of the server. It connects the various parts of the entire server using various interfaces and lines and, by running or executing software programs and/or modules stored in the memory 402 and invoking data stored in the memory 402, executes the various functions of the server and processes data, thereby monitoring the server as a whole.
  • The processor 401 can include one or more processing cores.
  • The processor 401 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 401.
  • The memory 402 can be used to store software programs and modules; the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402.
  • The memory 402 can mainly include a program storage area and a data storage area.
  • The program storage area can store the operating system, an application required by at least one function (such as a sound playback function or an image playback function), and the like.
  • The data storage area can store data created according to the use of the server, and the like.
  • The memory 402 can include high-speed random access memory and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 402 can also include a memory controller to provide the processor 401 with access to the memory 402.
  • The processor 401 in the server loads the executable files corresponding to the processes of one or more applications into the memory 402 according to the following instructions, and the processor 401 runs the applications stored
  • in the memory 402, thereby implementing various functions.
  • The processor 401 can run an application stored in the memory 402 to perform the following operations: acquiring face images captured by at least two cameras; performing feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations; fitting a homography matrix between the feature point combinations using a preset algorithm; and calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result, and performing face liveness verification according to the error calculation result.
  • The processor 401 may be further configured to: acquire image data captured by at least two cameras; perform face detection on the image data to obtain a face detection result; and acquire the face image when the face detection result indicates that a face image is detected.
  • The processor 401 may be further configured to fit the feature point combinations using a preset random sample consensus algorithm together with the least squares method to generate the homography matrix.
  • The processor 401 may be further configured to: mark the pairs of feature points in the feature point combinations in the form of homogeneous coordinates; establish a matrix model and determine the correspondence relation between the pairs of feature points through the matrix model; perform a homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form; and fit the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
  • The processor 401 may be further configured to: for each feature point combination, calculate the corresponding transformation error based on the homography matrix; average the transformation errors corresponding to the feature point combinations to obtain an average error; and determine the average error as the error calculation result.
  • The processor 401 may be further configured to: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determine that the result of the face liveness verification is a pass; if the average error indicated by the error calculation result is greater than the preset threshold, determine that the result of the face liveness verification is a fail.
  • In the server, when at least two cameras capture face images, feature point registration is first performed according to preset facial feature points to obtain corresponding feature point combinations; then the homography matrix between the feature point combinations is fitted using a preset algorithm; finally, the transformation errors of these combinations are calculated based on the homography matrix, and face liveness verification is performed according to the error calculation result.
  • The embodiments of this application do not require camera calibration, which eliminates calibration error; feature point matching is performed directly on the captured face images and the homography matrix is fitted, thereby reducing the computational cost of the liveness discrimination algorithm.
  • Moreover, the cameras can be placed freely, improving the flexibility and convenience of liveness discrimination.
  • The face liveness verification device provided by the embodiments of this application may be, for example, a computer, a tablet computer, or a mobile phone with a touch function. The face liveness verification device and the face liveness verification method in the above embodiments belong to the same concept, and any of the methods provided in the method embodiments may be run on the face liveness verification device.
  • A person of ordinary skill in the art will understand that all or part of the flow of the face liveness verification method of the embodiments of this application may be implemented by a computer program controlling the relevant hardware.
  • The computer program may be stored in a computer-readable storage medium, for example in a memory of the terminal, and executed by at least one processor in the terminal; its execution may include the flow of an embodiment of the face liveness verification method.
  • The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
  • The functional modules may be integrated into one processing chip, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • The above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • The integrated module, if implemented in the form of a software functional module and sold or used as a standalone product, may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.

Abstract

Disclosed are a face liveness verification method and device. The method includes: acquiring face images captured by at least two cameras; performing feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations; fitting a homography matrix between the feature point combinations using a preset algorithm; and calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result, and performing face liveness verification according to the error calculation result. The cameras do not need to be calibrated, which eliminates calibration error; feature point matching is performed directly on the captured face images and the homography matrix is fitted, so the computational cost of the liveness discrimination algorithm can be reduced. Moreover, the cameras can be placed freely, improving the flexibility and convenience of liveness discrimination.

Description

Face liveness verification method and device
This application claims priority to Chinese Patent Application No. 201610252267.9, entitled "Face liveness verification method and device", filed with the Chinese Patent Office on April 21, 2016, which is incorporated herein by reference in its entirety.
Technical Field
This application belongs to the field of face recognition technology and relates in particular to a face liveness verification method and device.
Background
With the development of biometric technology, face recognition technology has matured. However, in face recognition systems such as access control and login systems, a user can deceive the system by illegitimate means such as photographs. Liveness detection has therefore become a safeguard for the security of such systems. Liveness detection refers to the technique of judging whether an input face is a planar fake or a real, live face.
Existing multi-camera liveness detection technology generally uses a binocular camera pair in a left-right horizontal configuration: after binocular calibration, stereo matching is performed on the left and right images to recover the depth information of the face region, and the regular depth information of the face region is used to distinguish a real face from a planar fake (such as a photo or video displayed on a mobile device like a phone or tablet, or a printed face on paper).
Summary
To solve the above technical problem, embodiments of this application provide the following technical solutions:
A face liveness verification method, including:
acquiring face images captured by at least two cameras;
performing feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations;
fitting a homography matrix between the feature point combinations using a preset algorithm; and
calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result, and performing face liveness verification according to the error calculation result.
A face liveness verification device, including:
an acquiring unit, configured to acquire face images captured by at least two cameras;
a feature point registration unit, configured to perform feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations;
a fitting unit, configured to fit a homography matrix between the feature point combinations using a preset algorithm;
a calculating unit, configured to calculate transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result; and
a verification unit, configured to perform face liveness verification according to the error calculation result.
Compared with the prior art, the embodiments of this application do not require camera calibration, eliminating the calibration error introduced during camera calibration in the prior art. The method of the embodiments performs feature point matching directly on the captured face images, fits the homography matrix, obtains the error calculation result based on the homography matrix, and then performs face liveness verification. Compared with existing liveness detection technology, this greatly reduces the computational cost of the liveness discrimination algorithm and improves the speed and real-time performance of liveness detection.
Brief Description of the Drawings
The technical solutions and other beneficial effects of this application will be made apparent by the following detailed description of specific embodiments in conjunction with the accompanying drawings.
FIG. 1a is a schematic diagram of a scenario of a face liveness verification method according to an embodiment of this application;
FIG. 1b is a schematic flowchart of a face liveness verification method according to an embodiment of this application;
FIG. 2a is a schematic flowchart of a face liveness verification method according to an embodiment of this application;
FIG. 2b is a schematic diagram of feature point registration in a face liveness verification method according to an embodiment of this application;
FIG. 3a is a schematic structural diagram of a face liveness verification device according to an embodiment of this application;
FIG. 3b is another schematic structural diagram of a face liveness verification device according to an embodiment of this application;
FIG. 4 is a schematic structural diagram of a server according to an embodiment of this application.
Modes of Carrying Out This Application
Reference is made to the drawings, in which the same reference numerals represent the same components. The principles of this application are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of this application and should not be regarded as limiting other specific embodiments not detailed herein.
In the following description, specific embodiments of this application are described with reference to steps and symbols executed by one or more computers, unless otherwise stated. These steps and operations may be implemented by hardware together with software and may be executed by a computer or a processor.
The term "module" as used herein may be regarded as a software object executed on the computing system. The different components, modules, engines, and services described herein may be regarded as objects implemented on the computing system.
Embodiments of this application provide a face liveness verification method and device.
Referring to FIG. 1a, which is a schematic diagram of a scenario of the face liveness verification method according to an embodiment of this application, the scenario may include a face liveness verification device, which may be referred to simply as a face verification device. The device may be implemented by a dedicated computing device or may be integrated into some computing device such as a server, for example as an application running on the server or a board inserted into a server expansion slot. The device may be configured to acquire face images captured by at least two cameras, such as the face images captured by camera A and camera B; thereafter, it performs feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations, for example by performing facial-feature registration on the face images; then it fits a homography matrix between the feature point combinations using a preset algorithm; finally, it calculates the transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result and performs face liveness verification according to the error calculation result. For example, if the error calculation result satisfies a preset threshold, the result of the face liveness verification may be determined to be a pass, and so on.
In the embodiments of this application, camera calibration is not required, eliminating the calibration error introduced during camera calibration in the prior art. The method performs feature point matching directly on the captured face images, fits the homography matrix, obtains the error calculation result based on the homography matrix, and then performs face liveness verification. Compared with existing liveness detection technology, this greatly reduces the computational cost of the liveness discrimination algorithm and improves the speed and real-time performance of liveness detection.
In addition, the scenario may further include multiple camera devices (i.e., cameras), which are mainly used to capture image data and to send a face image to the face verification device when one is captured, so as to determine whether the face liveness verification passes.
In some embodiments, the description is given from the perspective of the face liveness verification device, which may be integrated in a network device such as a server.
A face liveness verification method includes: acquiring face images captured by at least two cameras; performing feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations; fitting a homography matrix between the feature point combinations using a preset algorithm; and calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result, and performing face liveness verification according to the error calculation result.
Referring to FIG. 1b, which is a schematic flowchart of the face liveness verification method according to an embodiment of this application, the method may include the following steps.
In step S101, face images captured by at least two cameras are acquired.
It can be understood that, in this embodiment, the step in which the face liveness verification device acquires the face images captured by at least two cameras may specifically include:
Step 1: acquiring image data captured by at least two cameras;
Step 2: performing face detection on the image data to obtain a face detection result;
Step 3: acquiring the face image when the face detection result indicates that a face image is detected.
Face detection is a key part of an automatic face recognition system. It refers to searching any given image according to a certain strategy to determine whether it contains a face, and if so, returning the position, size, and pose of the face. It is readily conceivable that this embodiment may use an existing face detection algorithm to analyze and recognize the acquired image data; the face detection process is not specifically limited here.
That is, multiple cameras (two or more) collect image data and send it to the face liveness verification device in real time; the face liveness verification device performs face detection on the received image data and, when it determines that a face image is detected, acquires the face image for liveness verification.
In some implementations, the camera may instead perform face detection directly on the collected image data and, when a face image is detected, acquire the face image and send it to the face liveness verification device for liveness verification.
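As a concrete illustration of the detection step above, the following is a minimal sketch assuming Python with OpenCV and its bundled Haar cascade; the patent does not prescribe any particular detector, so the detector choice and its parameters here are assumptions, not part of the claimed method:

    # Minimal face detection sketch for step S101 (assumed detector: OpenCV Haar cascade).
    import cv2

    def detect_face(frame):
        """Return the largest detected face as (x, y, w, h), or None if no face is found."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        return max(faces, key=lambda f: f[2] * f[3])  # keep the largest face by area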
In step S102, feature point registration is performed on the face images according to the preset facial feature points to obtain corresponding feature point combinations.
For example, the face liveness verification device performs feature point registration on the face images according to the number and positions of the preset facial feature points, that is, performs facial-feature registration on the face images, so that the corresponding feature point combinations can be obtained.
As another example, if there are two cameras, the face liveness verification device may, after acquiring the face images captured by the two cameras, perform feature point registration on the two face images according to the preset facial feature points, thereby obtaining multiple feature point combinations, also called feature point pairs.
In the embodiments of this application, feature point registration refers to the process of matching and superimposing multiple face images of the same region obtained by multiple cameras with different imaging means, including processing such as geometric correction, projective transformation, and unification of scale.
In some examples, if there are two cameras, the registration process of the face liveness verification device may be as follows: first, feature extraction is performed on the two face images to obtain feature points; matching feature point pairs are found by similarity measurement; image space coordinate transformation parameters are then obtained from the matched feature point pairs; finally, image registration is performed using the coordinate transformation parameters. An existing feature extraction algorithm may be used to perform feature point registration on the images, which is not described in detail here.
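To make the registration step concrete, here is a hedged sketch in which a pretrained facial landmark detector supplies the preset feature points; the patent's embodiment presets 90 facial feature points, while dlib's publicly distributed 68-point model is used below as an assumed stand-in, and the model file name comes from dlib, not from the patent:

    # Sketch of facial-feature registration for step S102 (assumed landmark model: dlib 68-point).
    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def landmarks(gray_image):
        """Return an (N, 2) array of facial feature points, or None if no face is found."""
        rects = detector(gray_image, 1)
        if not rects:
            return None
        shape = predictor(gray_image, rects[0])
        return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

Because the same landmark model is run on both cameras' images, the i-th point from camera A corresponds to the i-th point from camera B, which directly yields the one-to-one feature point pairs described above.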
In step S103, a homography matrix between the feature point combinations is fitted using a preset algorithm.
In some examples, the face liveness verification device may fit the feature point combinations using a preset random sample consensus algorithm together with the least squares method to generate the homography matrix.
The basic assumption of the random sample consensus (RANSAC) algorithm in the embodiments of this application is that the sample contains correct data (inliers, data that can be described by the model) as well as abnormal data (outliers, data that deviates far from the normal range and cannot fit the mathematical model); that is, the data set contains noise. Such abnormal data may arise from erroneous measurements, erroneous assumptions, erroneous calculations, and the like. The RANSAC algorithm also assumes that, given a set of correct data, there exists a method that can compute model parameters consistent with these data.
The least squares method finds the best functional match for the data by minimizing the sum of the squares of the errors. With the least squares method, unknown data can be conveniently obtained such that the sum of the squares of the errors between the obtained data and the actual data is minimized.
That is, the face liveness verification device computes and fits the model consistent with the feature point combinations, namely the homography matrix, using the random sample consensus algorithm and the least squares method. Further, the step of fitting the feature point combinations to generate the homography matrix may specifically include:
Step a: marking the pairs of feature points in the feature point combinations in the form of homogeneous coordinates;
Step b: establishing a matrix model, and determining a correspondence relation between the pairs of feature points through the matrix model;
Step c: performing a homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form;
Step d: fitting the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
Homogeneous coordinates represent an originally n-dimensional vector by an (n+1)-dimensional vector. For example, suppose there are two cameras, and P1 and P2 are the homogeneous coordinates of a pair of corresponding feature points in the two face images; establishing the matrix model H of the homography matrix, the correspondence relation between P1 and P2 through H may be P2 = H*P1.
Further, for example, the matrix model H of the homography matrix is a 3x3 matrix; for normalization, h33 = 1 in the matrix model H (i.e., the last element). H*P1 is then transformed into the correspondence relation in homogeneous form. Since one feature point pair yields two equations (before and after the homogeneous transformation), at least 4 feature point pairs are required to determine the 8 unknown parameters of the matrix model H. The RANSAC algorithm and the least squares method can thus be used to fit the matrix model H to the feature point pairs, obtaining the homography matrix corresponding to the matrix model H.
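The fitting step can be sketched with OpenCV's RANSAC-based homography estimator, which matches the RANSAC-plus-least-squares description above: RANSAC selects an inlier set, and the homography is then refined on the inliers by a least-squares fit. The reprojection threshold below is an illustrative assumption, not a value taken from the patent:

    # Sketch of step S103: fit the homography between corresponding point sets.
    import cv2
    import numpy as np

    def fit_homography(pts_a, pts_b, reproj_threshold=3.0):
        """pts_a, pts_b: (N, 2) arrays of corresponding feature points, N >= 4."""
        H, inlier_mask = cv2.findHomography(
            pts_a.astype(np.float32), pts_b.astype(np.float32),
            cv2.RANSAC, reproj_threshold)
        return H, inlier_mask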
In step S104, the transformation errors of the feature point combinations are calculated based on the homography matrix to obtain an error calculation result, and face liveness verification is performed according to the error calculation result.
It can be understood that the step in which the face liveness verification device calculates the transformation errors of the feature point combinations based on the homography matrix to obtain the error calculation result may specifically include:
Step A: for each feature point combination, calculating the corresponding transformation error based on the homography matrix;
Step B: averaging the transformation errors corresponding to the feature point combinations to obtain an average error, and determining the average error as the error calculation result.
For example, for each feature point combination, the face liveness verification device calculates the corresponding transformation error based on the homography matrix, where the transformation error is the error between the feature point combination before and after the homogeneous transformation by the homography matrix. If 50 feature points are preset, there are correspondingly 50 feature point combinations, and the transformation errors of the 50 combinations are averaged to obtain the average error.
Further, for example, if a feature point combination contains two feature points, each feature point pair corresponds to one transformation error; if 50 feature points are preset, there are correspondingly 50 feature point pairs, and the transformation errors of the 50 pairs are averaged to obtain the average error.
It can be understood that if a feature point combination contains three feature points, then taking the middle camera as the reference, each combination may have two errors; in scenarios requiring higher accuracy, more errors may be set for comparison, for example one transformation error between each pair of feature points, giving three errors per combination. This is not specifically limited here.
In some examples, performing face liveness verification according to the error calculation result may specifically include:
Step 1: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determining that the result of the face liveness verification is a pass;
Step 2: if the average error indicated by the error calculation result is greater than the preset threshold, determining that the result of the face liveness verification is a fail.
That is, after the average transformation error is calculated, it is compared with the preset threshold. If the average error is less than or equal to the preset threshold, the result of the face liveness verification may be considered a pass, that is, the acquired face images show a real person, and face recognition is then performed; if the average error is greater than the preset threshold, the result may be considered a fail, that is, the acquired face images show a fake.
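A minimal sketch of this verification step, assuming the fitted homography H from the helper above: camera A's points are transformed by H, the mean Euclidean distance to camera B's points is taken as the average error, and the decision is made against a threshold whose value here is purely illustrative (the patent leaves it as a preset). The decision rule mirrors the pass/fail comparison of Steps 1 and 2 above:

    # Sketch of step S104: average transformation error and threshold decision.
    import cv2
    import numpy as np

    def liveness_passed(H, pts_a, pts_b, threshold=5.0):
        projected = cv2.perspectiveTransform(
            pts_a.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
        errors = np.linalg.norm(projected - pts_b, axis=1)  # e = ||H*P1 - P2||_2 per pair
        return errors.mean() <= threshold  # pass when within the preset threshold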
As can be seen from the above, in the face liveness verification method provided by the embodiments of this application, when at least two cameras capture face images, feature point registration is first performed on the face images according to preset facial feature points to obtain corresponding feature point combinations; then the homography matrix between the feature point combinations is fitted using a preset algorithm; finally, the transformation errors of these feature point combinations are calculated based on the homography matrix, and face liveness verification is performed according to the error calculation result. The embodiments of this application do not require camera calibration, which eliminates calibration error; feature point matching is performed directly on the captured face images and the homography matrix is fitted, so the computational cost of the liveness discrimination algorithm can be reduced. Moreover, the cameras can be placed freely, improving the flexibility and convenience of liveness discrimination.
In some examples, the face liveness verification method is mainly applied to an office access control system. Before face recognition is performed, liveness verification is first carried out with this method, and face recognition proceeds only when the face passes liveness verification. The method uses two or more freely placed cameras, analyzes the three-dimensional geometric relationship of the facial feature points in the multiple face images captured simultaneously by the cameras, and exploits the homography of feature point pairs from plane analytic geometry, so that no user cooperation is needed for face liveness verification.
The face liveness verification method is described in detail below taking two cameras as an example, where the two cameras are respectively mounted on the side of the door frame, generally placed one above the other to save space.
Referring to FIG. 2a, which is a schematic flowchart of the face liveness verification method according to an embodiment of this application, the method may include the following steps.
In step S201, the face liveness verification device simultaneously captures the images from the two cameras.
For example, camera A and camera B are respectively mounted on the side of the door frame and placed freely: not only left-right horizontal or top-bottom vertical placement is supported, but a certain horizontal or vertical offset and angle between the cameras is also allowed, which is convenient and flexible in actual deployment. It can be understood that camera A and camera B are generally placed one above the other to save space.
In step S202, the face liveness verification device performs face detection on the images from the two cameras.
For example, camera A and camera B send the captured images to the face liveness verification device, which performs face detection on the received images, that is, detects whether a face is present in each image. Face detection refers to searching any given image according to a certain strategy to determine whether it contains a face. It is readily conceivable that this embodiment may use an existing face detection algorithm to analyze and recognize the acquired image data.
In step S203, when the face liveness verification device determines that a face is detected in the images of both cameras, feature point registration is performed to obtain feature point pairs.
For example, when both camera A and camera B detect a face, the face liveness verification device performs feature point registration on the two face images, that is, facial-feature point registration. In this embodiment, 90 feature points are preset in the face liveness verification device, and the two sets of feature points correspond one to one. As shown in FIG. 2b, the two face images are captured simultaneously by camera A and camera B arranged one above the other, and the points marked on the faces are the registered facial feature points.
In step S204, the face liveness verification device fits the homography matrix of the feature point pairs using the random sample consensus (RANSAC) algorithm according to the feature point pairs.
First, suppose P1 = (x1, y1, 1)^T and P2 = (x2, y2, 1)^T are the homogeneous coordinates of a pair of corresponding feature points in the two face images, and establish the matrix model H of the homography matrix as follows; through the matrix model H of the homography matrix, the correspondence between P1 and P2 is P2 = H*P1.
Here H is a 3x3 matrix:

    H = | h11  h12  h13 |
        | h21  h22  h23 |
        | h31  h32  h33 |

where, for normalization, h33 = 1 is set in the homography matrix H. The coordinates transformed by H*P1 are:

    H*P1 = (h11*x1 + h12*y1 + h13,  h21*x1 + h22*y1 + h23,  h31*x1 + h32*y1 + 1)^T

Performing the homogeneous transformation on H*P1, that is, dividing by the third component, gives:

    (x2, y2, 1)^T = H*P1 / (h31*x1 + h32*y1 + 1)

that is:

    x2 = (h11*x1 + h12*y1 + h13) / (h31*x1 + h32*y1 + 1)    (1)
    y2 = (h21*x1 + h22*y1 + h23) / (h31*x1 + h32*y1 + 1)    (2)

Since one feature point pair yields two equations (equations (1) and (2), relating the coordinates before and after the homogeneous transformation), at least 4 feature point pairs are required to determine the 8 unknown parameters of the matrix model H. In this embodiment, for example, the RANSAC algorithm and the least squares method may be used to fit the matrix model H to the 90 feature point pairs, obtaining the homography matrix corresponding to the matrix model H.
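To make the 8-unknown system concrete, the following sketch performs only the least-squares part of the fit: equations (1) and (2) are linearized into two rows per point pair and solved with a standard least-squares routine; RANSAC's inlier sampling around this solve is omitted for brevity:

    # Least-squares solution of the 8 unknowns of H (with h33 = 1) from N >= 4 point pairs.
    # Row for x2: h11*x1 + h12*y1 + h13 - x2*(h31*x1 + h32*y1) = x2, from equation (1);
    # the analogous row for y2 follows from equation (2).
    import numpy as np

    def solve_h(pts_a, pts_b):
        rows, rhs = [], []
        for (x1, y1), (x2, y2) in zip(pts_a, pts_b):
            rows.append([x1, y1, 1, 0, 0, 0, -x2 * x1, -x2 * y1]); rhs.append(x2)
            rows.append([0, 0, 0, x1, y1, 1, -y2 * x1, -y2 * y1]); rhs.append(y2)
        h, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return np.append(h, 1.0).reshape(3, 3)  # append h33 = 1 and reshape to 3x3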
In step S205, the face liveness verification device calculates the errors of the feature point pairs before and after the transformation by the homography matrix.
For example, the face liveness verification device calculates the errors of the 90 feature point pairs before and after the homogeneous transformation by the homography matrix: substituting the 90 feature point pairs into equations (1) and (2), the Euclidean-distance error e = ||H*P1 - P2||_2 before and after the homogeneous transformation can be calculated, where the Euclidean distance is a commonly used distance definition, namely the true distance between two points in m-dimensional space.
In step S206, the face liveness verification device calculates the average error from the errors of the feature point pairs before and after the transformation by the homography matrix.
For example, 90 feature points are preset in the face liveness verification device, so there are correspondingly 90 feature point pairs, and the transformation errors of these 90 pairs are averaged to obtain the average error.
In step S207a, if the average error is less than or equal to the preset threshold, the face liveness verification device determines that the face liveness verification passes.
In step S207b, if the average error is greater than the preset threshold, the face liveness verification device determines that the face liveness verification fails.
Finally, the face liveness verification device compares the average error with the preset threshold h to judge whether the subject is a real person or a fake. If the average error is less than or equal to the preset threshold, it determines that the face liveness verification passes, that is, the images show a real person; face recognition is then performed when the liveness verification passes, and the door opens when face recognition passes and remains closed when it fails. If the average error is greater than the preset threshold, it determines that the face liveness verification fails, that is, the images show a fake, and the door does not open.
The method described in this embodiment uses two or more freely placed cameras; through the three-dimensional geometric relationship of the facial feature points in the multiple face images captured simultaneously by these cameras, it exploits the homography of feature point pairs from plane analytic geometry, requiring neither camera calibration nor user cooperation: the images captured by the multiple cameras are directly matched by facial feature point pairs to perform face liveness verification. The method can effectively distinguish a real face from a planar fake (such as a photo or video displayed on a mobile device like a phone or tablet, or a printed face on paper) without calibrating the cameras. The cameras can be placed with a high degree of freedom: not only left-right horizontal or top-bottom vertical placement is supported, but a certain horizontal or vertical offset and angle between the cameras is also allowed, which makes the actual deployment of the binocular system more convenient and flexible.
It can be understood that the method can be applied to various access control systems, such as office access control systems, and also to other systems requiring liveness verification.
As can be seen from the above, in the face liveness verification method provided by the embodiments of this application, when at least two cameras capture face images, feature point registration is first performed on the face images according to preset facial feature points to obtain corresponding feature point combinations; then the homography matrix between the feature point combinations is fitted using a preset algorithm; finally, the transformation errors of these feature point combinations are calculated based on the homography matrix, and face liveness verification is performed according to the error calculation result. The embodiments of this application do not require camera calibration, which eliminates calibration error; feature point matching is performed directly on the captured face images and the homography matrix is fitted, so the computational cost of the liveness discrimination algorithm can be reduced. Moreover, the cameras can be placed freely, improving the flexibility and convenience of liveness discrimination.
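Tying the sketches together, an end-to-end check over one pair of simultaneously captured frames might look as follows; every helper used here (landmarks, fit_homography, liveness_passed) is one of the illustrative functions sketched earlier, not an API defined by the patent:

    # End-to-end sketch: two simultaneously captured frames in, pass/fail decision out.
    import cv2

    def verify_frames(frame_a, frame_b):
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        pts_a, pts_b = landmarks(gray_a), landmarks(gray_b)
        if pts_a is None or pts_b is None:
            return False  # a face must be detected in both views before verifying
        H, _ = fit_homography(pts_a, pts_b)
        if H is None:
            return False  # homography fit failed (degenerate point configuration)
        return liveness_passed(H, pts_a, pts_b)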
To facilitate better implementation of the face liveness verification method provided by the embodiments of this application, an embodiment of this application further provides a device based on the above face liveness verification method. The meanings of the terms are the same as in the above face liveness verification method, and for specific implementation details, reference may be made to the description in the method embodiments.
Referring to FIG. 3a, which is a schematic structural diagram of the face liveness verification device according to an embodiment of this application, the device may include an acquiring unit 301, a feature point registration unit 302, a fitting unit 303, a calculating unit 304, and a verification unit 305.
The acquiring unit 301 is configured to acquire face images captured by at least two cameras; the feature point registration unit 302 is configured to perform feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations.
The fitting unit 303 is configured to fit a homography matrix between the feature point combinations using a preset algorithm; the calculating unit 304 is configured to calculate the transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result; the verification unit 305 is configured to perform face liveness verification according to the error calculation result.
Referring also to FIG. 3b, which is another schematic structural diagram of the face liveness verification device according to an embodiment of this application, the acquiring unit 301 may include:
an image acquisition subunit 3011, configured to acquire image data captured by at least two cameras;
a detection subunit 3012, configured to perform face detection on the image data to obtain a face detection result;
a face acquisition subunit 3013, configured to acquire the face image when the face detection result indicates that a face image is detected.
Face detection is a key part of an automatic face recognition system. It refers to searching any given image according to a certain strategy to determine whether it contains a face, and if so, returning the position, size, and pose of the face. It is readily conceivable that this embodiment may use an existing face detection algorithm to analyze and recognize the acquired image data; the face detection process is not specifically limited here.
That is, multiple cameras (two or more) collect image data and send it to the face liveness verification device in real time; the face liveness verification device performs face detection on the received image data and, when it determines that a face image is detected, acquires the face image for liveness verification.
In some implementations, the camera may instead perform face detection directly on the collected image data and, when a face image is detected, acquire the face image and send it to the face liveness verification device for liveness verification.
Thereafter, the feature point registration unit 302 may be configured to perform feature point registration on the face images according to the preset facial feature points to obtain the corresponding feature point combinations.
For example, the face liveness verification device performs feature point registration on the face images according to the number and positions of the preset facial feature points, that is, performs facial-feature registration on the face images, so that the corresponding feature point combinations can be obtained.
As another example, if there are two cameras, the face liveness verification device may, after acquiring the face images captured by the two cameras, perform feature point registration on the two face images according to the preset facial feature points, thereby obtaining multiple feature point combinations, also called feature point pairs.
In the embodiments of this application, feature point registration refers to the process of matching and superimposing multiple face images of the same region obtained by multiple cameras with different imaging means, including processing such as geometric correction, projective transformation, and unification of scale.
Further, if there are two cameras, the registration process of the face liveness verification device may be as follows: first, feature extraction is performed on the two face images to obtain feature points; matching feature point pairs are found by similarity measurement; image space coordinate transformation parameters are then obtained from the matched feature point pairs; finally, image registration is performed using the coordinate transformation parameters. An existing feature extraction algorithm may be used to perform feature point registration on the images, which is not described in detail here.
Next, the fitting unit 303 may be specifically configured to fit the feature point combinations using a preset random sample consensus algorithm and the least squares method to generate the homography matrix.
In the embodiments of the present application, the basic assumption of the random sample consensus (RANSAC) algorithm is that the samples contain both correct data and abnormal data, i.e., the data set contains noise. The abnormal data may arise from erroneous measurements, erroneous assumptions, erroneous calculations, and the like. RANSAC also assumes that, given a set of correct data, there exists a method to compute model parameters that fit these data.
The least squares method finds the best functional match to the data by minimizing the sum of squared errors. It provides a simple way to obtain unknown quantities such that the sum of squared errors between the fitted values and the actual data is minimal.
That is, via the random sample consensus algorithm and the least squares method, the apparatus computes and fits the model conforming to the feature point combinations, i.e., the homography matrix. Further, the fitting unit 303 may specifically include:
a marking subunit 3031, configured to mark the pairs of feature points within a feature point combination in the form of homogeneous coordinates;
a determination subunit 3032, configured to establish a matrix model and determine, through the matrix model, the correspondence relation between the pairs of feature points;
a transformation subunit 3033, configured to perform homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form;
a fitting subunit 3034, configured to fit the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
A homogeneous coordinate represents an originally n-dimensional vector by an (n+1)-dimensional vector. For example, with two cameras, let P1 and P2 be the homogeneous coordinates of a pair of corresponding feature points in the two face images, and establish the matrix model H of the homography; the correspondence between P1 and P2 through H may then be P2 = H·P1.
Further, for example, the matrix model H of the homography is a 3×3 matrix with h33 = 1 (its last element) for normalization, and H·P1 is transformed into the correspondence relation in homogeneous form. Since each feature point pair yields two equations, at least 4 feature point pairs are needed to determine the 8 unknown parameters of H. The RANSAC algorithm and the least squares method can thus be applied to the feature point pairs to fit the matrix model H and obtain the corresponding homography matrix.
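For reference, a sketch of the least-squares setup implied here: multiplying equations (1) and (2) through by their common denominator makes each point pair contribute two rows that are linear in the eight unknowns,

$$\begin{pmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1 x_2 & -y_1 x_2 \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -x_1 y_2 & -y_1 y_2 \end{pmatrix} \begin{pmatrix} h_{11} \\ h_{12} \\ h_{13} \\ h_{21} \\ h_{22} \\ h_{23} \\ h_{31} \\ h_{32} \end{pmatrix} = \begin{pmatrix} x_2 \\ y_2 \end{pmatrix}$$

so that n ≥ 4 pairs stacked together give an overdetermined system $A\,h = b$, solvable as $h = \arg\min_h \lVert A h - b \rVert_2$ (e.g., via the normal equations or a QR factorization) inside each RANSAC iteration.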
It can be understood that the calculation unit 304 may be specifically configured to: for each feature point combination, compute the corresponding transformation error based on the homography matrix; average the transformation errors corresponding to the feature point combinations to obtain the average error; and determine the average error as the error calculation result.
For example, for each feature point combination the apparatus computes the corresponding transformation error based on the homography matrix, the transformation error being the error of the feature point combination before and after the homogeneous transformation by the homography matrix. If 50 feature points are preset, there are 50 corresponding feature point combinations, and the transformation errors of these 50 combinations are averaged to obtain the average error.
Further, for example, if a feature point combination contains two feature points, each feature point pair corresponds to one transformation error; with 50 preset feature points there are 50 corresponding feature point pairs, and the transformation errors of these 50 pairs are averaged to obtain the average error.
As another example, if a feature point combination contains three feature points, then with the middle camera taken as the reference, each combination may have two errors. In scenarios requiring higher precision, more errors may be compared: if a combination contains three feature points and there is one transformation error between every two of them, each combination may have three errors. No specific limitation is imposed here.
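A sketch of this multi-error variant for three or more cameras, reusing transform_error from the earlier sketch; the per-camera-pair homography bookkeeping shown here is illustrative, not from the original:

```python
from itertools import combinations

def combination_errors(points_by_camera, homographies):
    """All pairwise transform errors for one multi-camera feature combination.

    points_by_camera: one (x, y) point per camera, e.g. [pA, pB, pC].
    homographies: dict mapping a camera index pair (i, j) to its fitted H.
    """
    return [transform_error(homographies[(i, j)],
                            points_by_camera[i], points_by_camera[j])
            for i, j in combinations(range(len(points_by_camera)), 2)]
```

For three cameras this yields the three pairwise errors of the more precise variant; dropping the pair that skips the middle camera leaves the two errors of the reference-camera variant.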
Furthermore, the verification unit 305 may be specifically configured to: determine that the result of the living face verification is pass if the average error indicated by the error calculation result is less than or equal to a preset threshold; and determine that the result of the living face verification is fail if the average error indicated by the error calculation result is greater than the preset threshold.
That is, after the average of the transformation errors is computed, the average error is compared with the preset threshold. If the average error is less than or equal to the preset threshold, the living face verification may be considered passed, i.e., the acquired face images show a real person, and face recognition is then performed; if the average error is greater than the preset threshold, the verification may be considered failed, i.e., the acquired face images show a fake.
In specific implementation, each of the above units may be implemented as an independent entity, or combined arbitrarily and implemented as one or several entities. For the specific implementation of each unit, refer to the foregoing method embodiments; details are not repeated here.
The living face verification apparatus may specifically be integrated in a network device such as a server.
As can be seen from the above, with the living face verification apparatus provided by the embodiments of the present application, when face images are captured by at least two cameras, feature point registration is first performed on the face images according to preset face feature points to obtain corresponding feature point combinations; a homography matrix between the feature point combinations is then fitted using a preset algorithm; finally, the transformation errors of these feature point combinations are computed based on the homography matrix, and living face verification is performed according to the error calculation result. The embodiments of the present application require no camera calibration, thereby eliminating calibration error: feature points are matched directly on the captured face images and the homography matrix is fitted, which reduces the computational load of the liveness discrimination algorithm; moreover, the cameras can be placed freely, which improves the flexibility and convenience of liveness discrimination.
An embodiment of the present application further provides a server in which the living face verification apparatus of the embodiments of the present application may be integrated. Fig. 4 shows a schematic structural diagram of the server involved in the embodiments of the present application.
The server may include a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a radio frequency (RF) circuit 403, a power supply 404, an input unit 405, a display unit 406, and other components. Those skilled in the art will understand that the server structure shown in Fig. 4 does not limit the server: in the various embodiments the server may include more or fewer components than shown, combine certain components, or arrange the components differently. For example, the server in some examples may have no RF circuit 403 and/or display unit 406.
The processor 401 is the control center of the server and connects all parts of the server through various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 402 and invoking the data stored in the memory 402, it performs the various functions of the server and processes data, thereby monitoring the server as a whole. In some examples, the processor 401 may include one or more processing cores. In some examples, the processor 401 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules; the processor 401 performs various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area: the program storage area may store the operating system, applications required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created through use of the server. In addition, the memory 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may further include a memory controller to provide the processor 401 with access to the memory 402.
Specifically, in this embodiment, the processor 401 of the server loads the executable files corresponding to the processes of one or more applications into the memory 402 according to the following instructions, and runs the applications stored in the memory 402 to implement various functions. For example, the processor 401 may run an application stored in the memory 402 to perform the following operations:
acquiring face images captured by at least two cameras;
performing feature point registration on the face images according to preset face feature points to obtain corresponding feature point combinations;
fitting a homography matrix between the feature point combinations using a preset algorithm;
computing transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result, and performing living face verification according to the error calculation result (these four operations are strung together in the sketch below).
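A combined sketch of the four operations, reusing the illustrative helpers defined earlier (all assumptions are noted where those helpers are introduced):

```python
def verify_liveness(frame_1, frame_2, threshold=5.0):
    """End-to-end sketch: register, fit, measure error, decide (pass/fail)."""
    pairs = match_points(frame_1, frame_2)            # feature point combinations
    H = fit_homography([p for p, _ in pairs],
                       [q for _, q in pairs])         # RANSAC + least squares
    return is_live_face(H, pairs, threshold)          # error result -> verification
```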
In some examples, the processor 401 may be further configured to: acquire image data captured by at least two cameras; perform face detection on the image data to obtain a face detection result; and acquire the face image when the face detection result indicates that a face image is detected.
In some examples, the processor 401 may be further configured to fit the feature point combinations using a preset random sample consensus algorithm and the least squares method to generate the homography matrix.
In some examples, the processor 401 may be further configured to: mark the pairs of feature points within a feature point combination in the form of homogeneous coordinates; establish a matrix model and determine, through the matrix model, the correspondence relation between the pairs of feature points; perform homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form; and fit the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
In some examples, the processor 401 may be further configured to: for each feature point combination, compute the corresponding transformation error based on the homography matrix; average the transformation errors corresponding to the feature point combinations to obtain the average error; and determine the average error as the error calculation result.
In some examples, the processor 401 may be further configured to: determine that the result of the living face verification is pass if the average error indicated by the error calculation result is less than or equal to a preset threshold; and determine that the result of the living face verification is fail if the average error indicated by the error calculation result is greater than the preset threshold.
As can be seen from the above, in the server provided by the embodiments of the present application, when face images are captured by at least two cameras, feature point registration is first performed on the face images according to preset face feature points to obtain corresponding feature point combinations; a homography matrix between the feature point combinations is then fitted using a preset algorithm; finally, the transformation errors of these feature point combinations are computed based on the homography matrix, and living face verification is performed according to the error calculation result. The embodiments of the present application require no camera calibration, thereby eliminating calibration error: feature points are matched directly on the captured face images and the homography matrix is fitted, which reduces the computational load of the liveness discrimination algorithm; moreover, the cameras can be placed freely, which improves the flexibility and convenience of liveness discrimination.
In the above embodiments, the description of each embodiment has its own emphasis. For the parts not detailed in a given embodiment, refer to the detailed description of the living face verification method above; details are not repeated here.
The living face verification apparatus provided by the embodiments of the present application may be, for example, a computer, a tablet computer, or a mobile phone with a touch function. The apparatus and the living face verification method in the above embodiments belong to the same concept, and any of the methods provided in the method embodiments may run on the apparatus.
It should be noted that, for the living face verification method of the present application, a person of ordinary skill in the art will understand that all or part of the flow of implementing the method of the embodiments may be completed by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, for example in the memory of a terminal, and executed by at least one processor within the terminal; its execution may include the flow of the embodiments of the living face verification method. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
For the living face verification apparatus of the embodiments of the present application, its functional modules may be integrated in one processing chip, or each module may exist physically on its own, or two or more modules may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented as a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The living face verification method and apparatus provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, those skilled in the art will make changes to the specific implementations and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (18)

  1. A living face verification method, comprising:
    acquiring, by a computing device, face images captured by at least two cameras;
    performing, by the computing device, feature point registration on the face images according to preset face feature points to obtain corresponding feature point combinations;
    fitting, by the computing device, a homography matrix between the feature point combinations using a preset algorithm;
    computing, by the computing device, transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result, and performing living face verification according to the error calculation result.
  2. The living face verification method according to claim 1, wherein acquiring the face images captured by at least two cameras comprises:
    acquiring image data captured by at least two cameras;
    performing face detection on the image data to obtain a face detection result;
    acquiring the face image when the face detection result indicates that a face image is detected.
  3. The living face verification method according to claim 1, wherein fitting the homography matrix between the feature point combinations using a preset algorithm comprises:
    fitting the feature point combinations using a preset random sample consensus algorithm and the least squares method to generate the homography matrix.
  4. The living face verification method according to claim 3, wherein fitting the feature point combinations to generate the homography matrix comprises:
    marking pairs of feature points within a feature point combination in the form of homogeneous coordinates;
    establishing a matrix model, and determining, through the matrix model, a correspondence relation between the pairs of feature points;
    performing homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form;
    fitting the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
  5. The living face verification method according to claim 1, wherein computing the transformation errors of the feature point combinations based on the homography matrix to obtain the error calculation result comprises:
    for each feature point combination, computing the corresponding transformation error based on the homography matrix;
    performing average calculation on the transformation errors corresponding to the feature point combinations to obtain an average error, and determining the average error as the error calculation result.
  6. The living face verification method according to claim 5, wherein performing living face verification according to the error calculation result comprises:
    determining that a result of the living face verification is pass if the average error indicated by the error calculation result is less than or equal to a preset threshold;
    determining that the result of the living face verification is fail if the average error indicated by the error calculation result is greater than the preset threshold.
  7. A living face verification apparatus, comprising a processor and a memory, the memory storing computer-readable instructions that cause the processor to:
    acquire face images captured by at least two cameras;
    perform feature point registration on the face images according to preset face feature points to obtain corresponding feature point combinations;
    fit a homography matrix between the feature point combinations using a preset algorithm;
    compute transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result;
    perform living face verification according to the error calculation result.
  8. The living face verification apparatus according to claim 7, wherein the computer-readable instructions cause the processor to:
    acquire image data captured by at least two cameras;
    perform face detection on the image data to obtain a face detection result;
    acquire the face image when the face detection result indicates that a face image is detected.
  9. The living face verification apparatus according to claim 7, wherein the computer-readable instructions cause the processor to: fit the feature point combinations using a preset random sample consensus algorithm and the least squares method to generate the homography matrix.
  10. The living face verification apparatus according to claim 9, wherein the computer-readable instructions cause the processor to:
    mark pairs of feature points within a feature point combination in the form of homogeneous coordinates;
    establish a matrix model, and determine, through the matrix model, a correspondence relation between the pairs of feature points;
    perform homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form;
    fit the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
  11. The living face verification apparatus according to claim 7, wherein the computer-readable instructions cause the processor to: for each feature point combination, compute the corresponding transformation error based on the homography matrix; perform average calculation on the transformation errors corresponding to the feature point combinations to obtain an average error; and determine the average error as the error calculation result.
  12. The living face verification apparatus according to claim 11, wherein the computer-readable instructions cause the processor to: determine that a result of the living face verification is pass if the average error indicated by the error calculation result is less than or equal to a preset threshold; and determine that the result of the living face verification is fail if the average error indicated by the error calculation result is greater than the preset threshold.
  13. A machine-readable storage medium comprising computer-readable instructions that cause at least one processor to:
    acquire face images captured by at least two cameras;
    perform feature point registration on the face images according to preset face feature points to obtain corresponding feature point combinations;
    fit a homography matrix between the feature point combinations using a preset algorithm;
    compute transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result;
    perform living face verification according to the error calculation result.
  14. The storage medium according to claim 13, wherein the computer-readable instructions cause the at least one processor to:
    acquire image data captured by at least two cameras;
    perform face detection on the image data to obtain a face detection result;
    acquire the face image when the face detection result indicates that a face image is detected.
  15. The storage medium according to claim 13, wherein the computer-readable instructions cause the at least one processor to: fit the feature point combinations using a preset random sample consensus algorithm and the least squares method to generate the homography matrix.
  16. The storage medium according to claim 15, wherein the computer-readable instructions cause the at least one processor to:
    mark pairs of feature points within a feature point combination in the form of homogeneous coordinates;
    establish a matrix model, and determine, through the matrix model, a correspondence relation between the pairs of feature points;
    perform homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form;
    fit the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
  17. The storage medium according to claim 13, wherein the computer-readable instructions cause the at least one processor to:
    for each feature point combination, compute the corresponding transformation error based on the homography matrix; perform average calculation on the transformation errors corresponding to the feature point combinations to obtain an average error; and determine the average error as the error calculation result.
  18. The storage medium according to claim 17, wherein the computer-readable instructions cause the at least one processor to:
    determine that a result of the living face verification is pass if the average error indicated by the error calculation result is less than or equal to a preset threshold; and determine that the result of the living face verification is fail if the average error indicated by the error calculation result is greater than the preset threshold.
PCT/CN2017/080376 2016-04-21 2017-04-13 Living face verification method and apparatus WO2017181899A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP17785384.3A EP3447679A4 (en) 2016-04-21 2017-04-13 METHOD AND DEVICE FOR FACIAL VERIFICATION IN VIVO
JP2018538712A JP2019509545A (ja) 2016-04-21 2017-04-13 Living person face verification method and device
KR1020187021514A KR102120241B1 (ko) 2016-04-21 2017-04-13 Face liveness verification method and apparatus
US16/024,117 US10796179B2 (en) 2016-04-21 2018-06-29 Living face verification method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610252267.9 2016-04-21
CN201610252267.9A CN105868733A (zh) 2016-04-21 2016-04-21 Living face verification method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/024,117 Continuation-In-Part US10796179B2 (en) 2016-04-21 2018-06-29 Living face verification method and device

Publications (1)

Publication Number Publication Date
WO2017181899A1 true WO2017181899A1 (zh) 2017-10-26

Family

ID=56632933

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/080376 WO2017181899A1 (zh) 2016-04-21 2017-04-13 Living face verification method and apparatus

Country Status (6)

Country Link
US (1) US10796179B2 (zh)
EP (1) EP3447679A4 (zh)
JP (1) JP2019509545A (zh)
KR (1) KR102120241B1 (zh)
CN (1) CN105868733A (zh)
WO (1) WO2017181899A1 (zh)


Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868733A (zh) 2016-04-21 2016-08-17 腾讯科技(深圳)有限公司 Living face verification method and apparatus
CN107316029B (zh) * 2017-07-03 2018-11-23 腾讯科技(深圳)有限公司 Liveness verification method and device
CA3070624A1 (en) * 2017-07-28 2019-01-31 Nuro, Inc. Flexible compartment design on autonomous and semi-autonomous vehicle
US11341771B2 (en) * 2017-10-18 2022-05-24 Sony Semiconductor Solutions Corporation Object identification electronic device
KR102123835B1 (ko) * 2018-09-11 2020-06-17 한국산업기술대학교 산학협력단 Image registration system and method based on adaptive classification
CN111079470B (zh) * 2018-10-18 2023-08-22 杭州海康威视数字技术股份有限公司 Method and apparatus for face liveness detection
CN110059590B (zh) * 2019-03-29 2023-06-30 努比亚技术有限公司 Living face verification method and apparatus, mobile terminal, and readable storage medium
US10997396B2 (en) 2019-04-05 2021-05-04 Realnetworks, Inc. Face liveness detection systems and methods
CN111860055B (zh) * 2019-04-29 2023-10-24 北京眼神智能科技有限公司 Silent face liveness detection method and apparatus, readable storage medium, and device
CN110636276B (zh) * 2019-08-06 2021-12-28 RealMe重庆移动通信有限公司 Video shooting method and apparatus, storage medium, and electronic device
CN111079587B (zh) * 2019-12-03 2023-09-22 北京迈格威科技有限公司 Face recognition method and apparatus, computer device, and readable storage medium
CN111079684B (zh) * 2019-12-24 2023-04-07 陕西西图数联科技有限公司 Three-dimensional face detection method based on coarse-to-fine fitting
CN113128258B (zh) * 2019-12-30 2022-10-04 杭州海康威视数字技术股份有限公司 Liveness detection method and apparatus, electronic device, and storage medium
CN111209870A (zh) * 2020-01-09 2020-05-29 杭州涂鸦信息技术有限公司 Fast registration method for binocular liveness cameras, and system and apparatus thereof
CN111339958B (zh) * 2020-02-28 2023-08-29 南京鑫之派智能科技有限公司 Monocular-vision-based face liveness detection method and system
CN113536844B (zh) * 2020-04-16 2023-10-31 中移(成都)信息通信科技有限公司 Face comparison method, apparatus, device, and medium
CN111581411B (zh) * 2020-06-09 2023-08-22 腾讯科技(深圳)有限公司 Method, apparatus, device, and storage medium for constructing a high-precision face shape library
CN112287798A (zh) * 2020-10-23 2021-01-29 深圳市商汤科技有限公司 Temperature measurement method and apparatus, electronic device, and storage medium
CN112418021A (zh) * 2020-11-10 2021-02-26 上海臻面智能信息科技有限公司 Method and apparatus for implementing a security module with a single RGB-IR camera
CN113553990B (zh) * 2021-08-09 2022-04-15 深圳智必选科技有限公司 Method and apparatus for multi-face tracking and recognition, computer device, and storage medium


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8315441B2 (en) * 2007-06-29 2012-11-20 Nec Corporation Masquerade detection system, masquerade detection method and masquerade detection program
US8448859B2 (en) * 2008-09-05 2013-05-28 Datalogic ADC, Inc. System and method for preventing cashier and customer fraud at retail checkout
WO2010050206A1 (ja) * 2008-10-28 2010-05-06 日本電気株式会社 Spoofing detection system, spoofing detection method, and spoofing detection program
US8553982B2 (en) * 2009-12-23 2013-10-08 Intel Corporation Model-based play field registration
CN102834843B (zh) * 2010-04-02 2016-11-16 诺基亚技术有限公司 Method and apparatus for face detection
US8675926B2 (en) * 2010-06-08 2014-03-18 Microsoft Corporation Distinguishing live faces from flat surfaces
US9342886B2 (en) * 2011-04-29 2016-05-17 Qualcomm Incorporated Devices, methods, and apparatuses for homography evaluation involving a mobile device
KR101373397B1 (ko) * 2012-06-11 2014-03-13 인하대학교 산학협력단 CSP-based RANSAC sampling method for improving homography accuracy in augmented reality
CN103679118B (zh) * 2012-09-07 2017-06-16 汉王科技股份有限公司 Face liveness detection method and system
CN103971400B (zh) * 2013-02-06 2018-02-02 阿里巴巴集团控股有限公司 Method and system for three-dimensional interaction based on identification codes

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080192980A1 (en) * 2007-02-14 2008-08-14 Samsung Electronics Co., Ltd. Liveness detection method and apparatus of video image
CN101923641A (zh) * 2010-09-09 2010-12-22 北京交通大学 An improved face recognition method
CN102065313A (zh) * 2010-11-16 2011-05-18 上海大学 Uncalibrated multi-view image rectification method for a parallel camera array
CN104573682A (zh) * 2015-02-15 2015-04-29 四川川大智胜软件股份有限公司 Face anti-spoofing method based on face similarity
CN105023010A (zh) * 2015-08-17 2015-11-04 中国科学院半导体研究所 Face liveness detection method and system
CN105205458A (zh) * 2015-09-16 2015-12-30 北京邮电大学 Face liveness detection method, apparatus, and system
CN105868733A (zh) * 2016-04-21 2016-08-17 腾讯科技(深圳)有限公司 Living face verification method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3447679A4 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108156161A (zh) * 2017-12-27 2018-06-12 百度在线网络技术(北京)有限公司 Verification method and apparatus
CN111046703A (zh) * 2018-10-12 2020-04-21 杭州海康威视数字技术股份有限公司 Face anti-spoofing detection method and apparatus, and multi-view camera
CN111046703B (zh) * 2018-10-12 2023-04-18 杭州海康威视数字技术股份有限公司 Face anti-spoofing detection method and apparatus, and multi-view camera
CN111291586A (zh) * 2018-12-06 2020-06-16 北京市商汤科技开发有限公司 Liveness detection method and apparatus, electronic device, and computer-readable storage medium
CN111666786A (zh) * 2019-03-06 2020-09-15 杭州海康威视数字技术股份有限公司 Image processing method and apparatus, electronic device, and storage medium
CN111666786B (zh) * 2019-03-06 2024-05-03 杭州海康威视数字技术股份有限公司 Image processing method and apparatus, electronic device, and storage medium
CN110751017A (zh) * 2019-09-03 2020-02-04 上海交通大学 Online unsupervised crowd-group segmentation method based on reinforcement learning, and terminal
CN110751017B (zh) * 2019-09-03 2021-03-02 上海交通大学 Online unsupervised crowd-group segmentation method based on reinforcement learning, and terminal
CN111780673A (zh) * 2020-06-17 2020-10-16 杭州海康威视数字技术股份有限公司 Distance measurement method, apparatus, and device
CN112598716A (zh) * 2020-12-01 2021-04-02 浙江大华技术股份有限公司 Multi-modal image registration method, apparatus, and system
CN112598716B (zh) * 2020-12-01 2024-04-05 浙江大华技术股份有限公司 Multi-modal image registration method, apparatus, and system

Also Published As

Publication number Publication date
CN105868733A (zh) 2016-08-17
EP3447679A4 (en) 2019-04-24
EP3447679A1 (en) 2019-02-27
KR20180098367A (ko) 2018-09-03
US10796179B2 (en) 2020-10-06
US20180307928A1 (en) 2018-10-25
KR102120241B1 (ko) 2020-06-08
JP2019509545A (ja) 2019-04-04

Similar Documents

Publication Publication Date Title
WO2017181899A1 (zh) Living face verification method and apparatus
JP3954484B2 (ja) Image processing apparatus and program
US7554575B2 (en) Fast imaging system calibration
JP5445460B2 (ja) Spoofing detection system, spoofing detection method, and spoofing detection program
WO2019071664A1 (zh) Face recognition method and apparatus combining depth information, and storage medium
WO2018228218A1 (zh) Identity recognition method, computing device, and storage medium
WO2022095596A1 (zh) Image alignment method, image alignment apparatus, and terminal device
US10991124B2 (en) Determination apparatus and method for gaze angle
CN111028205B (zh) Eye pupil localization method and apparatus based on binocular distance measurement
TWI669664B (zh) Eye state detection system and method of operating an eye state detection system
JP6055435B2 (ja) Subject recognition apparatus, subject recognition method, and subject recognition program
WO2021008205A1 (zh) Image processing
JP6662382B2 (ja) Information processing apparatus and method, and program
TWI711007B (zh) Method for adjusting a region of interest and computing device therefor
TW202242716A (zh) Method, apparatus, device, and storage medium for target matching
CN106406507B (zh) Image processing method and electronic device
US11166005B2 (en) Three-dimensional information acquisition system using pitching practice, and method for calculating camera parameters
CN113409371B (zh) Image registration method and related apparatus and device
CN111814869B (zh) Simultaneous localization and mapping method and apparatus, electronic device, and storage medium
Skocaj et al. Evaluation of performance of smart mobile devices in machine vision tasks
CN117745832A (zh) Positioning method, multi-camera system, electronic device, and storage medium
CN114078206A (zh) Target verification method and device

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018538712

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20187021514

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1020187021514

Country of ref document: KR

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2017785384

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017785384

Country of ref document: EP

Effective date: 20181121

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17785384

Country of ref document: EP

Kind code of ref document: A1