WO2017181899A1 - A face liveness verification method and device - Google Patents
A face liveness verification method and device
- Publication number
- WO2017181899A1 (PCT application PCT/CN2017/080376)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- feature point
- error
- homography matrix
- calculation result
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/45—Structures or tools for the administration of authentication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/167—Detection; Localisation; Normalisation using comparisons between temporally consecutive images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- The present application belongs to the field of face recognition technology, and in particular relates to a face liveness verification method and device.
- Liveness detection refers to the technique of judging whether an input face is a planar prosthesis or a real, live face.
- Existing multi-camera liveness detection technology generally uses a binocular camera with the two lenses arranged horizontally side by side. After binocular calibration, stereo matching is performed on the left and right images and the depth information of the face region is recovered; a real face is then distinguished from a planar prosthesis (such as a photo or video played on a mobile device like a phone or tablet, or a printed face) according to whether the face region has plausible depth information.
- The embodiments of the present application provide the following technical solutions:
- a face liveness verification method, which includes: acquiring face images captured by at least two cameras; performing feature point registration on the face images according to preset face feature points to obtain corresponding feature point combinations; fitting a homography matrix between the feature point combinations using a preset algorithm; calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result; and performing face liveness verification according to the error calculation result; and
- a face liveness verification device, comprising:
- an acquiring unit, configured to acquire face images captured by at least two cameras;
- a feature point registration unit, configured to perform feature point registration on the face images according to preset face feature points to obtain corresponding feature point combinations;
- a fitting unit, configured to fit a homography matrix between the feature point combinations using a preset algorithm;
- a calculating unit, configured to calculate transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result; and
- a verification unit, configured to perform face liveness verification according to the error calculation result.
- The embodiments of the present application do not need to calibrate the cameras, which eliminates the calibration error introduced by the camera calibration process in the prior art.
- The method of the embodiments directly performs feature point matching on the captured face images, fits a homography matrix, obtains an error calculation result based on the homography matrix, and then carries out face liveness verification.
- Compared with existing liveness detection technology, the computational load of the liveness discrimination algorithm is greatly reduced, and the speed and real-time performance of liveness detection are improved.
- FIG. 1a is a schematic diagram of a scene of a face liveness verification method according to an embodiment of the present application;
- FIG. 1b is a schematic flowchart of a face liveness verification method according to an embodiment of the present application;
- FIG. 2a is a schematic flowchart of a face liveness verification method according to an embodiment of the present application;
- FIG. 2b is a schematic diagram of feature point registration in a face liveness verification method according to an embodiment of the present disclosure;
- FIG. 3a is a schematic structural diagram of a face liveness verification apparatus according to an embodiment of the present application;
- FIG. 3b is another schematic structural diagram of a face liveness verification apparatus according to an embodiment of the present application;
- FIG. 4 is a schematic structural diagram of a server according to an embodiment of the present application.
- The term "module" as used herein may be taken to mean a software object executed on a computing system.
- The different components, modules, engines, and services described herein may be regarded as objects implemented on that computing system.
- The embodiments of the present application provide a face liveness verification method and device.
- FIG. 1a is a schematic diagram of a scene of the face liveness verification method according to an embodiment of the present application.
- The scene may include a face liveness verification device, which may be referred to simply as a face verification device.
- The device may be implemented as a dedicated computing device, or integrated into some computing device such as a server, for example as an application running on the server or as a board inserted into a server expansion slot.
- The device may be configured to acquire face images captured by at least two cameras, for example face images captured by camera A and camera B. It then performs feature point registration on the face images according to preset face feature points to obtain corresponding feature point combinations, for example by registering the facial features in the face images. Next, it fits a homography matrix between the feature point combinations using a preset algorithm, calculates transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result, and performs face liveness verification according to the error calculation result. For example, if the error calculation result satisfies a preset threshold, it may be determined that the face liveness verification passes.
- Calibration of the cameras is not required, which eliminates the calibration error introduced by the camera calibration process in the prior art.
- The method of the embodiments directly performs feature point matching on the captured face images, fits a homography matrix, obtains an error calculation result based on the homography matrix, and then carries out face liveness verification. Compared with existing liveness detection technology, the computational load of the liveness discrimination algorithm is greatly reduced, and the speed and real-time performance of liveness detection are improved.
- The scene may further include multiple camera devices (i.e., cameras), which are mainly used to capture image data and to send captured face images to the face verification device, which then determines whether face liveness verification passes.
- The following description is made from the perspective of the face liveness verification device, which may be integrated in a network device such as a server.
- A face liveness verification method comprises: acquiring face images captured by at least two cameras; performing feature point registration on the face images according to preset face feature points to obtain corresponding feature point combinations; fitting a homography matrix between the feature point combinations using a preset algorithm; calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result; and performing face liveness verification according to the error calculation result.
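The steps just listed can be sketched end-to-end on synthetic matched points. This is an illustrative reconstruction, not code from the patent: a plain (unnormalized) DLT least-squares fit stands in for the unspecified "preset algorithm", the function names are assumptions, and the decision rule follows the document's stated form (average error at or below a preset threshold is reported as a pass).

```python
import numpy as np

def fit_homography(p1, p2):
    """Unnormalized DLT least-squares fit of a 3x3 matrix H such that
    p2 ~ H @ p1 up to scale.  p1, p2: (N, 2) matched points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(p1, p2):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)          # null-space vector, reshaped to H

def mean_transfer_error(H, p1, p2):
    """Average distance between H-transformed p1 and the observed p2."""
    q = (H @ np.hstack([p1, np.ones((len(p1), 1))]).T).T
    q = q[:, :2] / q[:, 2:3]             # renormalize the homogeneous result
    return float(np.linalg.norm(q - p2, axis=1).mean())

def verify_liveness(p1, p2, threshold):
    """Decision rule as stated in the document: an average error at or
    below the preset threshold is reported as a pass."""
    return mean_transfer_error(fit_homography(p1, p2), p1, p2) <= threshold
```

With correspondences that are exactly consistent with one homography the mean transfer error is near zero; deviations from a single homography raise it.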
- FIG. 1b is a schematic flowchart of a face liveness verification method according to an embodiment of the present application.
- The method can include the following steps.
- In step S101, face images captured by at least two cameras are acquired.
- The step in which the face liveness verification device acquires the face images captured by the at least two cameras may specifically include:
- Step 1: acquire image data captured by at least two cameras;
- Step 2: perform face detection on the image data to obtain a face detection result;
- Step 3: when the face detection result indicates that a face image is detected, acquire the face image.
- Face detection is a key part of an automatic face recognition system. It refers to searching any given image with a certain strategy to determine whether it contains a face and, if so, returning the position, size, and posture of the face. This embodiment can analyze and identify the acquired image data using an existing face detection algorithm; the face detection process is not specifically limited herein.
- Multiple cameras collect image data and send the image data to the face liveness verification device in real time, and the face liveness verification device performs face detection on the received image data.
- When a face image is detected, the face image is acquired for liveness verification.
- Alternatively, the collected image data may be detected directly by the camera; when a face image is detected, the camera acquires the face image and sends it to the face liveness verification device for liveness verification.
- In step S102, feature point registration is performed on the face images according to the preset face feature points, and corresponding feature point combinations are obtained.
- The face liveness verification device performs feature point registration on the face images according to the number and positions of the preset facial feature points, that is, the facial features in the face images are registered, so that corresponding feature point combinations can be obtained.
- After acquiring the face images captured by two cameras, the face liveness verification device can perform feature point registration on the two face images according to the preset face feature points, thereby obtaining multiple feature point combinations, also called feature point pairs.
- Feature point registration refers to the process of matching and superimposing multiple face images of the same subject obtained by different cameras, which may involve geometric correction, projective transformation, and processing to a uniform scale.
- The registration process of the face liveness verification device may be as follows: first, feature extraction is performed on the two face images to obtain feature points; matching feature point pairs are then found by a similarity measurement; image-space coordinate transformation parameters are obtained from the matched feature point pairs; finally, image registration is performed using the coordinate transformation parameters.
- An existing feature extraction algorithm may be used to perform feature point registration on the images, which is not described in detail herein.
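Since the patent leaves the feature extractor and the similarity measurement unspecified, the sketch below shows one common concrete choice: nearest-neighbour matching of feature descriptors under Euclidean distance with a ratio test. The function name and the ratio value are illustrative assumptions, not elements of the patent.

```python
import numpy as np

def match_by_similarity(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching of feature descriptors.

    desc_a: (M, D) descriptors from image A; desc_b: (N, D) from image B.
    Returns a list of (i, j) index pairs.  Euclidean distance plus a
    Lowe-style ratio test is one common similarity measurement.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]      # nearest and second nearest
        if dists[j] < ratio * dists[k]:   # keep unambiguous matches only
            matches.append((int(i), int(j)))
    return matches
```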
- In step S103, a homography matrix between the feature point combinations is fitted using a preset algorithm.
- The face liveness verification device may fit the feature point combinations using a preset random sample consensus algorithm together with the least squares method to generate the homography matrix.
- The basic assumption of the random sample consensus (Ransac) algorithm in the embodiments of the present application is that the sample contains correct data (inliers, data that can be described by the model) as well as abnormal data (outliers, data far from the normal range that cannot fit the mathematical model); that is, the data set contains noise. Such abnormal data may result from erroneous measurements, incorrect assumptions, incorrect calculations, and the like.
- The Ransac algorithm also assumes that, given a set of correct data, there is a way to calculate model parameters that fit those data.
- The least squares method finds the best function match for the data by minimizing the sum of the squared errors.
- Unknown parameters can be obtained easily with the least squares method, such that the sum of the squared errors between the fitted data and the actual data is minimized.
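As a minimal illustration of the least-squares idea described above (assuming NumPy's `lstsq` as the solver, which is not named in the patent), the snippet fits a line y = a·x + b by minimizing the sum of squared errors:

```python
import numpy as np

# Fit y = a*x + b so that sum((a*x_i + b - y_i)^2) is minimized.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                        # noise-free data for clarity
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
```

The same least-squares machinery applies when the unknowns are the entries of the homography matrix rather than a line's slope and intercept.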
- The face liveness verification device calculates and fits the model of the feature point combinations, that is, the homography matrix, through the random sample consensus algorithm and the least squares method. Further, the step of fitting the feature point combinations to generate the homography matrix may specifically include:
- Step a: mark the feature point pairs in the feature point combinations in homogeneous coordinate form;
- Step b: establish a matrix model, and determine a correspondence relation between the feature point pairs using the matrix model;
- Step c: perform a homogeneous transformation on the correspondence relation to generate a correspondence relation in homogeneous form;
- Step d: fit the feature point combinations based on the correspondence relation in homogeneous form to generate the homography matrix.
- A homogeneous coordinate represents an n-dimensional vector with an (n+1)-dimensional vector. For example, with two cameras, assume that P1 and P2 are respectively a pair of corresponding feature points in the two face images.
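The role of homogeneous coordinates in steps a–d can be shown concretely. The matrix values below are arbitrary example numbers, not from the patent; the point is that H·P1 is defined only up to scale, so it must be renormalized (divided by its last component) before it can be compared with P2:

```python
import numpy as np

H = np.array([[1.2, 0.1, 4.0],           # example values only; a real H
              [0.0, 0.9, -2.0],          # would come from the fitting step
              [1e-3, 0.0, 1.0]])

P1 = np.array([10.0, 20.0, 1.0])         # step a: homogeneous coordinates
Q = H @ P1                               # steps b/c: apply the matrix model
P1_mapped = Q / Q[2]                     # renormalize: (x'/w', y'/w', 1)

# H is defined only up to scale: any nonzero multiple maps P1 identically.
Q_scaled = (5.0 * H) @ P1
```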
- In step S104, the transformation errors of the feature point combinations are calculated based on the homography matrix, an error calculation result is obtained, and face liveness verification is performed according to the error calculation result.
- The step in which the face liveness verification device calculates the transformation errors of the feature point combinations based on the homography matrix to obtain the error calculation result may specifically include:
- Step A: for each feature point combination, calculate a corresponding transformation error based on the homography matrix;
- Step B: average the transformation errors corresponding to the feature point combinations to obtain an average error, and take the average error as the error calculation result.
- The face liveness verification device calculates, for each feature point combination, a corresponding transformation error based on the homography matrix, where the transformation error is the error between a feature point pair before and after the homogeneous transformation by the homography matrix. If 50 feature points are set in advance, there are 50 corresponding feature point combinations; the transformation errors of the 50 feature point combinations are then averaged to obtain the average error.
- In other words, each feature point pair corresponds to one transformation error: with 50 preset feature points there are 50 feature point pairs, and the transformation errors of the 50 feature point pairs are averaged to obtain the average error.
- When more than two cameras are used, for example with the middle camera serving as the reference for comparison, each feature point combination may have two errors; in scenarios requiring higher accuracy, multiple errors may be set for comparison. For example, if a feature point combination contains three feature points and there is a transformation error between each pair of them, each feature point combination may have three errors; this is not specifically limited herein.
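Under the multi-camera reading above, each feature-point combination can carry one transfer error per non-reference view, e.g. two errors per combination when three cameras share a reference. A hedged sketch (the function name and array layout are assumptions, not from the patent):

```python
import numpy as np

def combination_errors(ref_pts, other_views, homographies):
    """Per-combination transfer errors against a reference camera.

    ref_pts: (N, 2) points in the reference view; other_views: list of
    (N, 2) arrays, one per remaining camera; homographies: list of 3x3
    matrices H_k mapping the reference view to view k.  Returns an
    (N, len(other_views)) array: one error per view per combination,
    e.g. two errors per combination for three cameras.
    """
    ref_h = np.hstack([ref_pts, np.ones((len(ref_pts), 1))])
    errs = []
    for pts, H in zip(other_views, homographies):
        q = (H @ ref_h.T).T
        q = q[:, :2] / q[:, 2:3]          # renormalize homogeneous result
        errs.append(np.linalg.norm(q - pts, axis=1))
    return np.stack(errs, axis=1)
```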
- The step of performing face liveness verification according to the error calculation result may specifically include:
- Step 1: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determine that the face liveness verification passes;
- Step 2: if the average error indicated by the error calculation result is greater than the preset threshold, determine that the face liveness verification fails.
- That is, the average error is compared with the preset threshold. If the average error is less than or equal to the preset threshold, the face liveness verification may be considered to pass, that is, the acquired face image is of a real person, and face recognition is then performed; if the average error is greater than the preset threshold, the face liveness verification may be considered to fail, that is, the acquired face image is of a prosthesis.
- In the face liveness verification method provided by the embodiments of the present application, when face images are captured by at least two cameras, feature point registration is first performed on the face images according to the preset face feature points to obtain corresponding feature point combinations; a preset algorithm is then used to fit the homography matrix between the feature point combinations; finally, the transformation errors of the feature point combinations are calculated based on the homography matrix, and face liveness verification is performed according to the error calculation result. The embodiments of the present application do not need to calibrate the cameras, that is, the calibration error is eliminated; feature point matching is performed directly on the captured face images and the homography matrix is fitted, so the computational load of the liveness discrimination algorithm can be reduced. Moreover, the cameras can be placed freely, which improves the flexibility and convenience of liveness discrimination.
- The face liveness verification method described in this embodiment is mainly applied to an office access control system. Before face recognition is performed, this method is first used for liveness verification, and face recognition proceeds only when the face liveness verification passes. The method uses two or more freely placed cameras: by analyzing the three-dimensional geometric relationship of the facial features in the multiple face images captured simultaneously by the cameras, and exploiting the homography of feature point pairs from plane analytic geometry, liveness verification is achieved without requiring the user's cooperation.
- The face liveness verification method is described in detail below for the case of two cameras.
- The two cameras are respectively disposed at the side of the door frame and are generally placed one above the other to save space.
- FIG. 2a is a schematic flowchart of a face liveness verification method according to an embodiment of the present disclosure.
- the method can include the following steps.
- In step S201, the face liveness verification device simultaneously captures images from the two cameras.
- Camera A and camera B are respectively arranged at the side of the door frame and may be placed freely: not only horizontal side-by-side or vertical stacked placement is supported, but a certain horizontal or vertical offset and angle between the cameras is also allowed, which is convenient and flexible in actual deployment. It can be understood that camera A and camera B are generally placed one above the other to save space.
- In step S202, the face liveness verification device performs face detection on the images from the two cameras.
- Camera A and camera B send the captured images to the face liveness verification device, which performs face detection on the received images, that is, detects whether there is a human face in each image; face detection here means searching a given image with a certain strategy to find a face. This embodiment can analyze and identify the acquired image data using an existing face detection algorithm.
- In step S203, when the face liveness verification device detects a human face in the images from both cameras, feature point registration is performed to obtain feature point pairs.
- The face liveness verification device performs feature point registration on the two face images, that is, registration of the facial features.
- For example, 90 feature points are preset in the face liveness verification device.
- The two sets of feature points are in one-to-one correspondence.
- FIG. 2b shows the face images captured simultaneously by camera A and camera B; the points marked on the person's face are the registered feature points.
- In step S204, the face liveness verification device fits the homography matrix of the feature point pairs using the random sample consensus (Ransac) algorithm according to the feature point pairs.
- Let P1 = (x1, y1, 1)^T and P2 = (x2, y2, 1)^T be the homogeneous coordinates of a pair of corresponding feature points in the two face images, and establish the matrix model H of the homography, so that P2 corresponds to H*P1 up to scale.
- The coordinates transformed by H*P1 are (x', y', w')^T.
- Dividing by the last component transforms H*P1 into homogeneous (normalized) form, (x'/w', y'/w', 1)^T, which can be compared with P2 directly.
- The matrix model H can then be fitted to the 90 feature point pairs using the Ransac algorithm and the least squares method to obtain the homography matrix corresponding to the matrix model H.
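The patent names the Ransac algorithm and the least squares method but fixes no parameters, so the sketch below uses illustrative values (200 iterations, a 2-pixel inlier tolerance, and the assumed function names): fit H to random 4-point samples, keep the hypothesis with the most inliers, then refit to all inliers as the least-squares step.

```python
import numpy as np

def dlt_homography(p1, p2):
    """Direct linear transform: least-squares H with p2 ~ H @ p1, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(p1, p2):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)

def ransac_homography(p1, p2, iters=200, inlier_tol=2.0, seed=0):
    """Ransac-style fit: repeatedly fit H to 4 random pairs, keep the
    hypothesis with the most inliers, then refit to all inliers.
    iters and inlier_tol are illustrative values, not from the patent."""
    rng = np.random.default_rng(seed)
    n = len(p1)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=4, replace=False)
        H = dlt_homography(p1[idx], p2[idx])
        q = (H @ np.hstack([p1, np.ones((n, 1))]).T).T
        q = q[:, :2] / q[:, 2:3]
        inliers = np.linalg.norm(q - p2, axis=1) < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit on the consensus set
    return dlt_homography(p1[best_inliers], p2[best_inliers]), best_inliers
```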
- In step S205, the face liveness verification device calculates the error of each feature point pair before and after transformation by the homography matrix.
- In step S206, the face liveness verification device calculates an average error based on the errors of the feature point pairs before and after transformation by the homography matrix.
- That is, the transformation errors of the 90 feature point pairs are averaged to obtain the average error.
- In step S207a, if the average error is less than or equal to the preset threshold, the face liveness verification device determines that the face liveness verification passes.
- In step S207b, if the average error is greater than the preset threshold, the face liveness verification device determines that the face liveness verification fails.
- The face liveness verification device compares the average error with the preset threshold h to determine whether the subject is a real person or a prosthesis. If the average error is less than or equal to the preset threshold, it is determined that the face liveness verification passes, that is, the image shows a real person; face recognition is then performed, the door is opened when face recognition passes, and the door stays closed when face recognition fails. If the average error is greater than the preset threshold, it is determined that the face liveness verification fails, that is, the image shows a prosthesis, and the door does not open.
- The method described in this embodiment uses two or more freely placed cameras. Through the three-dimensional geometric relationship of the facial features in the multiple face images captured simultaneously by the cameras, and using the homography of feature point pairs from plane analytic geometry, it performs face liveness verification by matching face feature points directly in the pictures captured by the multiple cameras, without camera calibration and without requiring the user's cooperation.
- The method can effectively distinguish a real human face from a planar prosthesis (such as a photo or video played on a mobile device like a phone or tablet, or a printed face on paper) without calibrating the cameras.
- The multiple cameras have a high degree of freedom of placement: not only horizontal side-by-side or vertical stacked placement is supported, but a certain horizontal or vertical offset and angle between the cameras is also allowed, which is more convenient and flexible than the deployment of a conventional binocular system.
- The method can be applied to various types of access control systems, such as an office access control system, and to other systems that require liveness verification.
- In the face liveness verification method provided by the embodiments of the present application, when face images are captured by at least two cameras, feature point registration is first performed on the face images according to the preset face feature points to obtain corresponding feature point combinations; a preset algorithm is then used to fit the homography matrix between the feature point combinations; finally, the transformation errors of the feature point combinations are calculated based on the homography matrix, and face liveness verification is performed according to the error calculation result.
- The embodiments of the present application do not need to calibrate the cameras, that is, the calibration error is eliminated; feature point matching is performed directly on the captured face images and the homography matrix is fitted, thereby reducing the computational load of the liveness discrimination algorithm.
- Moreover, the cameras can be placed freely, which improves the flexibility and convenience of liveness discrimination.
- Based on the above face liveness verification method, an embodiment of the present application further provides a corresponding apparatus.
- The terms have the same meanings as in the face liveness verification method described above; for specific implementation details, refer to the description in the method embodiments.
- FIG. 3a is a schematic structural diagram of a face liveness verification device according to an embodiment of the present disclosure, which may include an acquiring unit 301, a feature point registration unit 302, a fitting unit 303, a calculating unit 304, and a verification unit 305.
- The acquiring unit 301 is configured to acquire face images captured by at least two cameras; the feature point registration unit 302 is configured to perform feature point registration on the face images according to preset face feature points to obtain corresponding feature point combinations.
- The fitting unit 303 is configured to fit a homography matrix between the feature point combinations using a preset algorithm; the calculating unit 304 is configured to calculate transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result; the verification unit 305 is configured to perform face liveness verification according to the error calculation result.
- FIG. 3b is another schematic structural diagram of a face liveness verification device according to an embodiment of the present disclosure.
- The acquiring unit 301 may include:
- an image acquisition subunit 3011, configured to acquire image data captured by at least two cameras;
- a detecting subunit 3012, configured to perform face detection on the image data to obtain a face detection result; and
- a face acquisition subunit 3013, configured to acquire the face image when the face detection result indicates that a face image is detected.
- Face detection is a key part of an automatic face recognition system. It refers to searching any given image with a certain strategy to determine whether it contains a face and, if so, returning the position, size, and posture of the face. This embodiment can analyze and identify the acquired image data using an existing face detection algorithm; the face detection process is not specifically limited herein.
- Multiple cameras collect image data and send the image data to the face liveness verification device in real time, and the face liveness verification device performs face detection on the received image data.
- When a face image is detected, the face image is acquired for liveness verification.
- Alternatively, the collected image data may be detected directly by the camera; when a face image is detected, the camera acquires the face image and sends it to the face liveness verification device for liveness verification.
- the feature point registration unit 302 can be used to follow the preset face feature points. Feature point registration is performed on the face image to obtain a corresponding feature point combination.
- the face biometric verification device performs feature point registration on the face image according to the number and position of the preset facial feature points, that is, the facial image is subjected to facial registration, so that the corresponding feature point combination can be obtained. .
- the face biometric verification device can perform feature point registration on the two face images according to the preset face feature points after acquiring the face images captured by the two cameras. Thereby a plurality of feature point combinations are obtained, which are also called feature point pairs.
- the registration of feature points refers to a process of matching and superimposing a plurality of face images obtained by different cameras in different regions by different imaging means, including geometric correction and projection transformation. Processing with a uniform scale.
- the registration process of the human face verification device may be as follows: first, feature extraction is performed on two face images to obtain feature points; matching feature point pairs are found by performing similarity measurement; The matched feature point pairs obtain the image space coordinate transformation parameters; finally, the image registration is performed by the coordinate transformation parameters.
- an existing feature extraction algorithm may be used to perform feature point registration on the images, which is not described in detail herein.
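The registration flow just described (extract feature points, then find matching pairs by similarity measurement) can be sketched as below. The function name `match_feature_points`, the descriptor arrays, and the nearest-neighbour Euclidean distance measure are illustrative assumptions; the patent does not prescribe a particular extractor or similarity measure.

```python
import numpy as np

def match_feature_points(desc_a, desc_b, max_dist=0.5):
    """Match feature descriptors from two face images by a simple
    similarity measure: nearest-neighbour Euclidean distance.
    Returns a list of (index_in_a, index_in_b) feature point pairs."""
    pairs = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        j = int(np.argmin(dists))                   # closest descriptor in image b
        if dists[j] <= max_dist:                    # accept only sufficiently similar pairs
            pairs.append((i, j))
    return pairs
```

The matched pairs would then feed the coordinate-transformation estimation described above.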
- the fitting unit 303 may be specifically configured to fit the feature point combinations using a preset random sample consensus algorithm and the least squares method, to generate a homography matrix.
- in this embodiment of the present application, the basic assumption of the random sample consensus (RANSAC) algorithm is that the sample contains both correct data and anomalous data; that is, the data set contains noise. The anomalous data may arise from erroneous measurements, incorrect assumptions, incorrect calculations, and the like.
- the RANSAC algorithm also assumes that, given a set of correct data, there is a way to calculate the model parameters that fit these data.
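As a minimal illustration of these two assumptions, the sketch below runs RANSAC on line fitting: each iteration fits a model to a random two-point sample, and the model agreeing with the most data points (the consensus set) wins, so anomalous data are excluded from the fit. The function name, iteration count, and inlier tolerance are illustrative, not the patent's parameters.

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=0.1, seed=0):
    """Minimal RANSAC sketch: repeatedly fit a line y = a*x + b to a
    random two-point sample and keep the model with the most inliers,
    so that anomalous data do not distort the final fit."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:                       # vertical sample, cannot fit y = a*x + b
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int(np.sum(residuals < inlier_tol))
        if inliers > best_inliers:         # keep the largest consensus set
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

For homography fitting, the same scheme would sample four point pairs per iteration instead of two.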
- the least squares method finds the best function match for data by minimizing the sum of the squared errors.
- with the least squares method, the unknown parameters can be obtained easily, while the sum of the squared errors between the estimated data and the actual data is minimized.
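For example, fitting a line y = a·x + b by minimizing the sum of squared errors reduces to a linear least squares problem (a generic sketch with made-up data, not the patent's specific formulation):

```python
import numpy as np

# Fit y = a*x + b by minimizing the sum of squared errors between
# the fitted values and the observed data.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.0, 7.0])          # noisy samples near y = 2x + 1
A = np.column_stack([x, np.ones_like(x)])   # design matrix [x | 1]
(a, b), residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
```

The same machinery, applied to the matched feature point pairs, yields the entries of the homography matrix.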
- the face liveness verification device calculates and fits the model of the feature point combinations, that is, the homography matrix, using the random sample consensus algorithm and the least squares method; further, the fitting unit 303 may specifically include:
- the marking subunit 3031 is configured to mark the pairwise feature point pairs in the feature point combinations in the form of homogeneous coordinates.
- the determining subunit 3032 is configured to establish a matrix model and determine, by the matrix model, the correspondence expression between the pairwise feature point pairs.
- the transformation subunit 3033 is configured to perform homogeneous transformation on the correspondence expression to generate a correspondence expression in homogeneous form.
- the fitting subunit 3034 is configured to fit the feature point combinations based on the correspondence expression in homogeneous form, to generate the homography matrix.
- a homogeneous coordinate represents an n-dimensional vector with an (n+1)-dimensional vector; for example, with two cameras, suppose P1 and P2 are a pair of corresponding feature points in the two face images.
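In homogeneous form, a 2-D point (x, y) is written as (x, y, 1), and a 3×3 homography maps points of one image to the other up to a scale factor. A minimal sketch of this notation follows; the helper names and the example translation-only homography are assumptions for illustration.

```python
import numpy as np

def to_homogeneous(p):
    """Represent an n-dimensional point as an (n+1)-dimensional vector
    by appending a 1 (homogeneous coordinates)."""
    return np.append(p, 1.0)

def apply_homography(H, p):
    """Map a 2-D point through a 3x3 homography H and de-homogenise
    by dividing out the scale factor."""
    q = H @ to_homogeneous(p)
    return q[:2] / q[2]
```

With H the identity, `apply_homography` returns the input point unchanged; a purely translational H = [[1,0,tx],[0,1,ty],[0,0,1]] shifts the point by (tx, ty).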
- the calculating unit 304 may be specifically configured to: for each feature point combination, calculate a corresponding transformation error based on the homography matrix; and average the transformation errors corresponding to the feature point combinations to obtain an average error, and determine the average error as the error calculation result.
- the face liveness verification device calculates, for each feature point combination, a corresponding transformation error based on the homography matrix, where the transformation error is obtained by comparing the feature point combination before and after the homogeneous transformation by the homography matrix.
- each feature point pair corresponds to one transformation error. For example, if 50 facial feature points are preset, there are 50 corresponding feature point pairs, and the transformation errors of the 50 feature point pairs are averaged to obtain the average error.
- when an intermediate camera is used for comparison, each feature point combination may have two errors; in scenarios requiring higher accuracy, multiple errors may be set for comparison. For example, if there are three feature points in a feature point combination, with a transformation error between each pair of feature points, each feature point combination may have three errors; this is not specifically limited herein.
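A hedged sketch of computing one transformation error per feature point pair and averaging over all pairs; the choice of Euclidean distance after mapping through the homography is an assumption, since the patent does not fix a particular error metric:

```python
import numpy as np

def average_transform_error(H, pts_a, pts_b):
    """For each feature point pair (pa, pb), map pa through the homography H,
    de-homogenise, and take the Euclidean distance to pb; return the mean
    of these per-pair transformation errors."""
    errors = []
    for pa, pb in zip(pts_a, pts_b):
        q = H @ np.append(pa, 1.0)               # homogeneous transform of pa
        errors.append(np.linalg.norm(q[:2] / q[2] - pb))
    return float(np.mean(errors))
```

The returned average error is what the verification step below compares against the preset threshold.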
- the verification unit 305 may be specifically configured to: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determine that the result of the face liveness verification is a pass; and if the average error indicated by the error calculation result is greater than the preset threshold, determine that the result of the face liveness verification is a fail.
- the average error is compared with the preset threshold. If the average error is less than or equal to the preset threshold, the face liveness verification may be considered to have passed, that is, the acquired face image is of a real person, and face recognition is then performed; if the average error is greater than the preset threshold, the face liveness verification may be considered to have failed, that is, the acquired face image is of a prosthesis.
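The verification step then reduces to a single comparison against the preset threshold, as described above; the function name is illustrative, and the threshold value is left to the caller since the patent does not specify it:

```python
def face_liveness_result(average_error, threshold):
    """Pass when the average transformation error does not exceed the
    preset threshold, per the verification rule described above."""
    return "pass" if average_error <= threshold else "fail"
```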
- each of the above units may be implemented as an independent entity, or the units may be combined arbitrarily into one or several entities; for the specific implementation of each unit, reference may be made to the foregoing method embodiments.
- the face liveness verification device may be specifically integrated in a network device such as a server.
- the face liveness verification device provided in this embodiment of the present application, when at least two cameras capture face images, first performs feature point registration on the face images according to the preset facial feature points to obtain corresponding feature point combinations; then fits the homography matrix between the feature point combinations using the preset algorithm; and finally calculates the transformation errors of the feature point combinations based on the homography matrix, so as to perform face liveness verification according to the error calculation result.
- this embodiment of the present application does not need to calibrate the cameras, which eliminates calibration error; it directly performs feature point matching on the captured face images and fits the homography matrix, thereby reducing the computation of the liveness discrimination algorithm;
- moreover, the cameras can be placed freely, which improves the flexibility and convenience of liveness discrimination.
- this embodiment of the present application further provides a server, in which the face liveness verification device of the embodiment of the present application may be integrated.
- FIG. 4 shows a schematic structural diagram of a server involved in an embodiment of the present application.
- the server may include components such as a processor 401 of one or more processing cores, a memory 402 of one or more computer-readable storage media, a radio frequency (RF) circuit 403, a power supply 404, an input unit 405, and a display unit 406.
- the structure shown in FIG. 4 does not constitute a limitation on the server; the server in each embodiment may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components.
- a server in some examples may not have RF circuitry 403 and/or display unit 406.
- the processor 401 is the control center of the server; it connects the various parts of the entire server using various interfaces and lines, executes the various functions of the server and processes data by running or executing the software programs and/or modules stored in the memory 402 and invoking the data stored in the memory 402, and thereby monitors the server as a whole.
- processor 401 can include one or more processing cores.
- the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 401.
- the memory 402 can be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running software programs and modules stored in the memory 402.
- the memory 402 can mainly include a storage program area and a storage data area.
- the storage program area can store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.).
- the storage data area can store data and the like created according to the use of the server.
- the memory 402 may include high-speed random access memory, and may further include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 402 may further include a memory controller to provide the processor 401 with access to the memory 402.
- the processor 401 in the server loads the executable files corresponding to the processes of one or more applications into the memory 402 according to the following instructions, and the processor 401 runs the applications stored in the memory 402, thereby implementing various functions:
- processor 401 can run an application stored in memory 402 to perform the following operations:
- the processor 401 may be further configured to: acquire image data captured by at least two cameras; perform face detection on the image data to obtain a face detection result; and when the face detection result indicates that a face image is detected, acquire the face image.
- the processor 401 may be further configured to fit the feature point combinations using a preset random sample consensus algorithm and the least squares method, to generate a homography matrix.
- the processor 401 may be further configured to: mark the pairwise feature point pairs in the feature point combinations in the form of homogeneous coordinates; establish a matrix model, and determine, by the matrix model, the correspondence expression between the pairwise feature point pairs; perform homogeneous transformation on the correspondence expression to generate a correspondence expression in homogeneous form; and fit the feature point combinations based on the correspondence expression in homogeneous form, to generate the homography matrix.
- the processor 401 may be further configured to: for each feature point combination, calculate a corresponding transformation error based on the homography matrix; and average the transformation errors corresponding to the feature point combinations to obtain an average error, and determine the average error as the error calculation result.
- the processor 401 may be further configured to: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determine that the result of the face liveness verification is a pass; and if the average error indicated by the error calculation result is greater than the preset threshold, determine that the result of the face liveness verification is a fail.
- with the server provided in this embodiment, when at least two cameras capture face images, feature point registration is first performed according to the preset facial feature points to obtain corresponding feature point combinations; then the homography matrix between the feature point combinations is fitted using the preset algorithm; finally, the transformation errors of the feature point combinations are calculated based on the homography matrix, so that face liveness verification is performed according to the error calculation result;
- this embodiment of the present application does not need to calibrate the cameras, which eliminates calibration error; it directly performs feature point matching on the captured face images and fits the homography matrix, thereby reducing the computation of the liveness discrimination algorithm;
- moreover, the cameras can be placed freely, thereby improving the flexibility and convenience of liveness discrimination.
- the face liveness verification device provided in this embodiment of the present application may be, for example, a computer, a tablet computer, or a mobile phone with a touch function; the face liveness verification device shares the same concept as the face liveness verification method in the foregoing embodiments, and any method provided in the face liveness verification method embodiments may be run on the face liveness verification device.
- a person of ordinary skill in the art can understand that all or part of the process of implementing the face liveness verification method of the embodiments of the present application may be completed under the control of a computer program.
- the computer program may be stored in a computer-readable storage medium, for example in a memory of the terminal, and executed by at least one processor in the terminal, and the execution may include the flow of an embodiment of the face liveness verification method as described.
- the storage medium may be a magnetic disk, an optical disk, a read only memory (ROM), a random access memory (RAM), or the like.
- each functional module may be integrated into one processing chip, or each module may exist physically separately, or two or more modules may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- if the integrated module is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
Abstract
Description
Claims (18)
- A face liveness verification method, comprising: acquiring, by a computing device, face images captured by at least two cameras; performing, by the computing device, feature point registration on the face images according to preset facial feature points, to obtain corresponding feature point combinations; fitting, by the computing device, a homography matrix between the feature point combinations using a preset algorithm; and calculating, by the computing device, transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result, and performing face liveness verification according to the error calculation result.
- The face liveness verification method according to claim 1, wherein the acquiring face images captured by at least two cameras comprises: acquiring image data captured by at least two cameras; performing face detection on the image data to obtain a face detection result; and when the face detection result indicates that a face image is detected, acquiring the face image.
- The face liveness verification method according to claim 1, wherein the fitting a homography matrix between the feature point combinations using a preset algorithm comprises: fitting the feature point combinations using a preset random sample consensus algorithm and the least squares method, to generate the homography matrix.
- The face liveness verification method according to claim 3, wherein the fitting the feature point combinations to generate the homography matrix comprises: marking pairwise feature point pairs in the feature point combinations in the form of homogeneous coordinates; establishing a matrix model, and determining, by the matrix model, a correspondence expression between the pairwise feature point pairs; performing homogeneous transformation on the correspondence expression to generate a correspondence expression in homogeneous form; and fitting the feature point combinations based on the correspondence expression in homogeneous form, to generate the homography matrix.
- The face liveness verification method according to claim 1, wherein the calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result comprises: for each feature point combination, calculating a corresponding transformation error based on the homography matrix; and averaging the transformation errors corresponding to the feature point combinations to obtain an average error, and determining the average error as the error calculation result.
- The face liveness verification method according to claim 5, wherein the performing face liveness verification according to the error calculation result comprises: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determining that the result of the face liveness verification is a pass; and if the average error indicated by the error calculation result is greater than the preset threshold, determining that the result of the face liveness verification is a fail.
- A face liveness verification apparatus, comprising a processor and a memory, the memory storing computer-readable instructions that cause the processor to perform the following operations: acquiring face images captured by at least two cameras; performing feature point registration on the face images according to preset facial feature points, to obtain corresponding feature point combinations; fitting a homography matrix between the feature point combinations using a preset algorithm; calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result; and performing face liveness verification according to the error calculation result.
- The face liveness verification apparatus according to claim 7, wherein the computer-readable instructions cause the processor to perform the following operations: acquiring image data captured by at least two cameras; performing face detection on the image data to obtain a face detection result; and when the face detection result indicates that a face image is detected, acquiring the face image.
- The face liveness verification apparatus according to claim 7, wherein the computer-readable instructions cause the processor to perform the following operation: fitting the feature point combinations using a preset random sample consensus algorithm and the least squares method, to generate the homography matrix.
- The face liveness verification apparatus according to claim 9, wherein the computer-readable instructions cause the processor to perform the following operations: marking pairwise feature point pairs in the feature point combinations in the form of homogeneous coordinates; establishing a matrix model, and determining, by the matrix model, a correspondence expression between the pairwise feature point pairs; performing homogeneous transformation on the correspondence expression to generate a correspondence expression in homogeneous form; and fitting the feature point combinations based on the correspondence expression in homogeneous form, to generate the homography matrix.
- The face liveness verification apparatus according to claim 7, wherein the computer-readable instructions cause the processor to perform the following operations: for each feature point combination, calculating a corresponding transformation error based on the homography matrix; and averaging the transformation errors corresponding to the feature point combinations to obtain an average error, and determining the average error as the error calculation result.
- The face liveness verification apparatus according to claim 11, wherein the computer-readable instructions cause the processor to perform the following operations: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determining that the result of the face liveness verification is a pass; and if the average error indicated by the error calculation result is greater than the preset threshold, determining that the result of the face liveness verification is a fail.
- A machine-readable storage medium, comprising computer-readable instructions that cause at least one processor to perform the following operations: acquiring face images captured by at least two cameras; performing feature point registration on the face images according to preset facial feature points, to obtain corresponding feature point combinations; fitting a homography matrix between the feature point combinations using a preset algorithm; calculating transformation errors of the feature point combinations based on the homography matrix to obtain an error calculation result; and performing face liveness verification according to the error calculation result.
- The storage medium according to claim 13, wherein the computer-readable instructions cause at least one processor to perform the following operations: acquiring image data captured by at least two cameras; performing face detection on the image data to obtain a face detection result; and when the face detection result indicates that a face image is detected, acquiring the face image.
- The storage medium according to claim 13, wherein the computer-readable instructions cause at least one processor to perform the following operation: fitting the feature point combinations using a preset random sample consensus algorithm and the least squares method, to generate the homography matrix.
- The storage medium according to claim 15, wherein the computer-readable instructions cause at least one processor to perform the following operations: marking pairwise feature point pairs in the feature point combinations in the form of homogeneous coordinates; establishing a matrix model, and determining, by the matrix model, a correspondence expression between the pairwise feature point pairs; performing homogeneous transformation on the correspondence expression to generate a correspondence expression in homogeneous form; and fitting the feature point combinations based on the correspondence expression in homogeneous form, to generate the homography matrix.
- The storage medium according to claim 13, wherein the computer-readable instructions cause at least one processor to perform the following operations: for each feature point combination, calculating a corresponding transformation error based on the homography matrix; and averaging the transformation errors corresponding to the feature point combinations to obtain an average error, and determining the average error as the error calculation result.
- The storage medium according to claim 17, wherein the computer-readable instructions cause at least one processor to perform the following operations: if the average error indicated by the error calculation result is less than or equal to a preset threshold, determining that the result of the face liveness verification is a pass; and if the average error indicated by the error calculation result is greater than the preset threshold, determining that the result of the face liveness verification is a fail.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP17785384.3A EP3447679A4 (en) | 2016-04-21 | 2017-04-13 | METHOD AND DEVICE FOR FACIAL VERIFICATION IN VIVO |
JP2018538712A JP2019509545A (ja) | 2016-04-21 | 2017-04-13 | 生きた人の顔検証方法およびデバイス |
KR1020187021514A KR102120241B1 (ko) | 2016-04-21 | 2017-04-13 | 얼굴 생체 검증 방법 및 장치 |
US16/024,117 US10796179B2 (en) | 2016-04-21 | 2018-06-29 | Living face verification method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610252267.9 | 2016-04-21 | ||
CN201610252267.9A CN105868733A (zh) | 2016-04-21 | 2016-04-21 | 一种人脸活体验证方法及装置 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/024,117 Continuation-In-Part US10796179B2 (en) | 2016-04-21 | 2018-06-29 | Living face verification method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017181899A1 true WO2017181899A1 (zh) | 2017-10-26 |
Family
ID=56632933
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/080376 WO2017181899A1 (zh) | 2016-04-21 | 2017-04-13 | 一种人脸活体验证方法及装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10796179B2 (zh) |
EP (1) | EP3447679A4 (zh) |
JP (1) | JP2019509545A (zh) |
KR (1) | KR102120241B1 (zh) |
CN (1) | CN105868733A (zh) |
WO (1) | WO2017181899A1 (zh) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108156161A (zh) * | 2017-12-27 | 2018-06-12 | 百度在线网络技术(北京)有限公司 | 验证方法和装置 |
CN110751017A (zh) * | 2019-09-03 | 2020-02-04 | 上海交通大学 | 一种基于强化学习的在线非监督的人群组分割方法及终端 |
CN111046703A (zh) * | 2018-10-12 | 2020-04-21 | 杭州海康威视数字技术股份有限公司 | 人脸防伪检测方法、装置及多目相机 |
CN111291586A (zh) * | 2018-12-06 | 2020-06-16 | 北京市商汤科技开发有限公司 | 活体检测方法、装置、电子设备及计算机可读存储介质 |
CN111666786A (zh) * | 2019-03-06 | 2020-09-15 | 杭州海康威视数字技术股份有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN111780673A (zh) * | 2020-06-17 | 2020-10-16 | 杭州海康威视数字技术股份有限公司 | 一种测距方法、装置及设备 |
CN112598716A (zh) * | 2020-12-01 | 2021-04-02 | 浙江大华技术股份有限公司 | 一种多模态图像的配准方法、装置及系统 |
CN111666786B (zh) * | 2019-03-06 | 2024-05-03 | 杭州海康威视数字技术股份有限公司 | 图像处理方法、装置、电子设备及存储介质 |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105868733A (zh) | 2016-04-21 | 2016-08-17 | 腾讯科技(深圳)有限公司 | 一种人脸活体验证方法及装置 |
CN107316029B (zh) * | 2017-07-03 | 2018-11-23 | 腾讯科技(深圳)有限公司 | 一种活体验证方法及设备 |
CA3070624A1 (en) * | 2017-07-28 | 2019-01-31 | Nuro, Inc. | Flexible compartment design on autonomous and semi-autonomous vehicle |
US11341771B2 (en) * | 2017-10-18 | 2022-05-24 | Sony Semiconductor Solutions Corporation | Object identification electronic device |
KR102123835B1 (ko) * | 2018-09-11 | 2020-06-17 | 한국산업기술대학교 산학협력단 | 적응적 분류에 기반한 영상정합 시스템 및 방법 |
CN111079470B (zh) * | 2018-10-18 | 2023-08-22 | 杭州海康威视数字技术股份有限公司 | 人脸活体检测的方法和装置 |
CN110059590B (zh) * | 2019-03-29 | 2023-06-30 | 努比亚技术有限公司 | 一种人脸活体验证方法、装置、移动终端及可读存储介质 |
US10997396B2 (en) | 2019-04-05 | 2021-05-04 | Realnetworks, Inc. | Face liveness detection systems and methods |
CN111860055B (zh) * | 2019-04-29 | 2023-10-24 | 北京眼神智能科技有限公司 | 人脸静默活体检测方法、装置、可读存储介质及设备 |
CN110636276B (zh) * | 2019-08-06 | 2021-12-28 | RealMe重庆移动通信有限公司 | 视频拍摄方法、装置、存储介质及电子设备 |
CN111079587B (zh) * | 2019-12-03 | 2023-09-22 | 北京迈格威科技有限公司 | 人脸的识别方法、装置、计算机设备及可读存储介质 |
CN111079684B (zh) * | 2019-12-24 | 2023-04-07 | 陕西西图数联科技有限公司 | 一种基于粗糙-精细拟合的三维人脸检测方法 |
CN113128258B (zh) * | 2019-12-30 | 2022-10-04 | 杭州海康威视数字技术股份有限公司 | 活体检测方法、装置、电子设备及存储介质 |
CN111209870A (zh) * | 2020-01-09 | 2020-05-29 | 杭州涂鸦信息技术有限公司 | 一种双目活体摄像头快速配准方法及其系统和装置 |
CN111339958B (zh) * | 2020-02-28 | 2023-08-29 | 南京鑫之派智能科技有限公司 | 一种基于单目视觉的人脸活体检测方法及系统 |
CN113536844B (zh) * | 2020-04-16 | 2023-10-31 | 中移(成都)信息通信科技有限公司 | 人脸对比方法、装置、设备及介质 |
CN111581411B (zh) * | 2020-06-09 | 2023-08-22 | 腾讯科技(深圳)有限公司 | 高精度人脸形状库的构建方法、装置、设备及存储介质 |
CN112287798A (zh) * | 2020-10-23 | 2021-01-29 | 深圳市商汤科技有限公司 | 测温方法及装置、电子设备及存储介质 |
CN112418021A (zh) * | 2020-11-10 | 2021-02-26 | 上海臻面智能信息科技有限公司 | 一种单rgb-ir摄像头的安全模组的实现方法及装置 |
CN113553990B (zh) * | 2021-08-09 | 2022-04-15 | 深圳智必选科技有限公司 | 多人脸跟踪识别的方法、装置、计算机设备及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080192980A1 (en) * | 2007-02-14 | 2008-08-14 | Samsung Electronics Co., Ltd. | Liveness detection method and apparatus of video image |
CN101923641A (zh) * | 2010-09-09 | 2010-12-22 | 北京交通大学 | 一种改进的人脸识别方法 |
CN102065313A (zh) * | 2010-11-16 | 2011-05-18 | 上海大学 | 平行式相机阵列的未标定多视点图像校正方法 |
CN104573682A (zh) * | 2015-02-15 | 2015-04-29 | 四川川大智胜软件股份有限公司 | 一种基于人脸相似度的人脸防伪方法 |
CN105023010A (zh) * | 2015-08-17 | 2015-11-04 | 中国科学院半导体研究所 | 一种人脸活体检测方法及系统 |
CN105205458A (zh) * | 2015-09-16 | 2015-12-30 | 北京邮电大学 | 人脸活体检测方法、装置及系统 |
CN105868733A (zh) * | 2016-04-21 | 2016-08-17 | 腾讯科技(深圳)有限公司 | 一种人脸活体验证方法及装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8315441B2 (en) * | 2007-06-29 | 2012-11-20 | Nec Corporation | Masquerade detection system, masquerade detection method and masquerade detection program |
US8448859B2 (en) * | 2008-09-05 | 2013-05-28 | Datalogic ADC, Inc. | System and method for preventing cashier and customer fraud at retail checkout |
WO2010050206A1 (ja) * | 2008-10-28 | 2010-05-06 | 日本電気株式会社 | なりすまし検知システム、なりすまし検知方法及びなりすまし検知プログラム |
US8553982B2 (en) * | 2009-12-23 | 2013-10-08 | Intel Corporation | Model-based play field registration |
CN102834843B (zh) * | 2010-04-02 | 2016-11-16 | 诺基亚技术有限公司 | 用于面部检测的方法和装置 |
US8675926B2 (en) * | 2010-06-08 | 2014-03-18 | Microsoft Corporation | Distinguishing live faces from flat surfaces |
US9342886B2 (en) * | 2011-04-29 | 2016-05-17 | Qualcomm Incorporated | Devices, methods, and apparatuses for homography evaluation involving a mobile device |
KR101373397B1 (ko) * | 2012-06-11 | 2014-03-13 | 인하대학교 산학협력단 | 증강현실의 호모그래피 정확도 향상을 위한 csp 기반의 ransac 샘플링 방법 |
CN103679118B (zh) * | 2012-09-07 | 2017-06-16 | 汉王科技股份有限公司 | 一种人脸活体检测方法及系统 |
CN103971400B (zh) * | 2013-02-06 | 2018-02-02 | 阿里巴巴集团控股有限公司 | 一种基于标识码的三维交互的方法和系统 |
-
2016
- 2016-04-21 CN CN201610252267.9A patent/CN105868733A/zh active Pending
-
2017
- 2017-04-13 JP JP2018538712A patent/JP2019509545A/ja active Pending
- 2017-04-13 EP EP17785384.3A patent/EP3447679A4/en active Pending
- 2017-04-13 KR KR1020187021514A patent/KR102120241B1/ko active IP Right Grant
- 2017-04-13 WO PCT/CN2017/080376 patent/WO2017181899A1/zh active Application Filing
-
2018
- 2018-06-29 US US16/024,117 patent/US10796179B2/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080192980A1 (en) * | 2007-02-14 | 2008-08-14 | Samsung Electronics Co., Ltd. | Liveness detection method and apparatus of video image |
CN101923641A (zh) * | 2010-09-09 | 2010-12-22 | 北京交通大学 | 一种改进的人脸识别方法 |
CN102065313A (zh) * | 2010-11-16 | 2011-05-18 | 上海大学 | 平行式相机阵列的未标定多视点图像校正方法 |
CN104573682A (zh) * | 2015-02-15 | 2015-04-29 | 四川川大智胜软件股份有限公司 | 一种基于人脸相似度的人脸防伪方法 |
CN105023010A (zh) * | 2015-08-17 | 2015-11-04 | 中国科学院半导体研究所 | 一种人脸活体检测方法及系统 |
CN105205458A (zh) * | 2015-09-16 | 2015-12-30 | 北京邮电大学 | 人脸活体检测方法、装置及系统 |
CN105868733A (zh) * | 2016-04-21 | 2016-08-17 | 腾讯科技(深圳)有限公司 | 一种人脸活体验证方法及装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3447679A4 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108156161A (zh) * | 2017-12-27 | 2018-06-12 | 百度在线网络技术(北京)有限公司 | 验证方法和装置 |
CN111046703A (zh) * | 2018-10-12 | 2020-04-21 | 杭州海康威视数字技术股份有限公司 | 人脸防伪检测方法、装置及多目相机 |
CN111046703B (zh) * | 2018-10-12 | 2023-04-18 | 杭州海康威视数字技术股份有限公司 | 人脸防伪检测方法、装置及多目相机 |
CN111291586A (zh) * | 2018-12-06 | 2020-06-16 | 北京市商汤科技开发有限公司 | 活体检测方法、装置、电子设备及计算机可读存储介质 |
CN111666786A (zh) * | 2019-03-06 | 2020-09-15 | 杭州海康威视数字技术股份有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN111666786B (zh) * | 2019-03-06 | 2024-05-03 | 杭州海康威视数字技术股份有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN110751017A (zh) * | 2019-09-03 | 2020-02-04 | 上海交通大学 | 一种基于强化学习的在线非监督的人群组分割方法及终端 |
CN110751017B (zh) * | 2019-09-03 | 2021-03-02 | 上海交通大学 | 一种基于强化学习的在线非监督的人群组分割方法及终端 |
CN111780673A (zh) * | 2020-06-17 | 2020-10-16 | 杭州海康威视数字技术股份有限公司 | 一种测距方法、装置及设备 |
CN112598716A (zh) * | 2020-12-01 | 2021-04-02 | 浙江大华技术股份有限公司 | 一种多模态图像的配准方法、装置及系统 |
CN112598716B (zh) * | 2020-12-01 | 2024-04-05 | 浙江大华技术股份有限公司 | 一种多模态图像的配准方法、装置及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN105868733A (zh) | 2016-08-17 |
EP3447679A4 (en) | 2019-04-24 |
EP3447679A1 (en) | 2019-02-27 |
KR20180098367A (ko) | 2018-09-03 |
US10796179B2 (en) | 2020-10-06 |
US20180307928A1 (en) | 2018-10-25 |
KR102120241B1 (ko) | 2020-06-08 |
JP2019509545A (ja) | 2019-04-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017181899A1 (zh) | 一种人脸活体验证方法及装置 | |
JP3954484B2 (ja) | 画像処理装置およびプログラム | |
US7554575B2 (en) | Fast imaging system calibration | |
JP5445460B2 (ja) | なりすまし検知システム、なりすまし検知方法及びなりすまし検知プログラム | |
WO2019071664A1 (zh) | 结合深度信息的人脸识别方法、装置及存储介质 | |
WO2018228218A1 (zh) | 身份识别方法、计算设备及存储介质 | |
WO2022095596A1 (zh) | 图像对齐方法、图像对齐装置及终端设备 | |
US10991124B2 (en) | Determination apparatus and method for gaze angle | |
CN111028205B (zh) | 一种基于双目测距的眼睛瞳孔定位方法及装置 | |
TWI669664B (zh) | 眼睛狀態檢測系統及眼睛狀態檢測系統的操作方法 | |
JP6055435B2 (ja) | 被写体認識装置、被写体認識方法及び被写体認識プログラム | |
WO2021008205A1 (zh) | 图像处理 | |
JP6662382B2 (ja) | 情報処理装置および方法、並びにプログラム | |
TWI711007B (zh) | 調整感興趣區域的方法與其運算裝置 | |
TW202242716A (zh) | 用於目標匹配的方法、裝置、設備及儲存媒體 | |
CN106406507B (zh) | 图像处理方法以及电子设备 | |
US11166005B2 (en) | Three-dimensional information acquisition system using pitching practice, and method for calculating camera parameters | |
CN113409371B (zh) | 图像配准方法及相关装置、设备 | |
CN111814869B (zh) | 一种同步定位与建图的方法、装置、电子设备及存储介质 | |
Skocaj et al. | Evaluation of performance of smart mobile devices in machine vision tasks | |
CN117745832A (zh) | 定位方法、多相机系统、电子设备及存储介质 | |
CN114078206A (zh) | 目标验证方法及设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2018538712 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20187021514 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020187021514 Country of ref document: KR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2017785384 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2017785384 Country of ref document: EP Effective date: 20181121 |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17785384 Country of ref document: EP Kind code of ref document: A1 |