WO2019080579A1 - 3D face identity authentication method and apparatus - Google Patents

3D face identity authentication method and apparatus

Info

Publication number: WO2019080579A1
Authority: WIPO (PCT)
Prior art keywords: face, image, two-dimensional image, target, depth
Application number: PCT/CN2018/098442
Other languages: French (fr), Chinese (zh)
Inventors: 肖振中, 黄源浩
Original Assignee: 深圳奥比中光科技有限公司
Application filed by 深圳奥比中光科技有限公司
Publication of WO2019080579A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 - User authentication
    • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Definitions

  • The invention belongs to the field of computer technology, and more particularly relates to a 3D face identity authentication method and apparatus.
  • The human body has many unique features, such as the face, fingerprints, irises, and ears, which are collectively referred to as biometrics. Biometric recognition is widely used in security, home, intelligent hardware, and many other fields; relatively mature biometrics such as fingerprint recognition and iris recognition are already common in mobile phones, computers, and other terminals. For features such as the face, although the related research is very advanced, recognition is still not widespread, mainly because existing recognition methods have limitations that lead to low recognition rates and poor recognition stability. These limitations mainly include sensitivity to ambient light intensity and illumination direction, the effect of facial expressions on the recognition rate, and vulnerability to spoofing by artificial features.
  • Existing recognition of features such as the face is mainly based on two-dimensional color images of the face. When the ambient light is weak, recognition is seriously affected. When the illumination direction differs, shadows appear on the face image, which also degrades recognition. When the reference face image was captured with a neutral expression but the current image is captured while smiling, the recognition performance also drops. In addition, if the recognized object is not a real face but a two-dimensional face picture, it can often still pass recognition.
  • To address these problems, biometric recognition based on near-infrared or thermal infrared images is commonly used. Near-infrared images are not disturbed by ambient light, which improves recognition stability, but they remain vulnerable to spoofing with artificial features. Thermal infrared images only image real faces and therefore can solve the spoofing problem, but their low resolution seriously degrades the recognition result.
  • To address the lack of a comprehensive face recognition solution in the prior art, the present invention provides a task execution method based on face recognition.
  • The present invention provides a 3D face identity authentication method and apparatus, including the steps of: acquiring a depth image and a two-dimensional image containing a target face; registering the depth image with a reference face 3D texture image to obtain pose information of the target face; projecting the reference face 3D texture image into a reference face two-dimensional image according to the pose information; and comparing the similarity of the target face two-dimensional image with the reference face two-dimensional image.
  • In one embodiment, the method further includes the step of: independently of steps (b)-(d), detecting the eye gaze of the target face using the depth image and/or the two-dimensional image, and continuing with step (b), (c), or (d) when the gaze direction coincides with a preset direction.
  • In one embodiment, the method further includes the step of: independently of steps (b)-(d), detecting whether the target face is a real face using the depth image and/or the two-dimensional image, and, if it is a real face, continuing with step (b), (c), or (d), or passing the authentication when the similarity exceeds a preset first threshold.
  • In one embodiment, the method further includes the step of: updating the corresponding texture information in the reference face 3D texture image with the target face two-dimensional image when the similarity exceeds a preset second threshold.
  • In one embodiment, the reference face 3D texture image is obtained by acquiring a depth image sequence and a two-dimensional image sequence containing the reference face and computing the reference face 3D texture image from them. The 3D texture image includes a 3D point cloud or a 3D mesh carrying texture information.
  • The projection refers to projecting the 3D texture image onto a 2D plane to form a two-dimensional face image.
  • The face pose in the reference face two-dimensional image is consistent with the face pose in the target face two-dimensional image.
  • In one embodiment, the two-dimensional image comprises an infrared image; in another embodiment, the two-dimensional image comprises a structured light image.
  • The present invention also provides a 3D face identity authentication apparatus, comprising: a depth camera for acquiring a depth image containing a target face; a plane camera for acquiring a two-dimensional image containing the target face; and a processor that receives the depth image and the two-dimensional image and performs the following operations: registering the depth image with a reference face 3D texture image to obtain pose information of the target face; projecting the reference face 3D texture image into a reference face two-dimensional image according to the pose information; and comparing the similarity of the target face two-dimensional image with the reference face two-dimensional image.
  • In one embodiment, the processor further detects the eye gaze of the target face using the depth image and/or the two-dimensional image and continues with the other operations when the gaze direction coincides with a preset direction.
  • In one embodiment, the processor further detects whether the target face is a real face using the depth image and/or the two-dimensional image and, if it is a real face, continues with the other operations, or passes the authentication when the similarity exceeds a preset first threshold.
  • In one embodiment, the processor further updates the corresponding texture information in the reference face 3D texture image with the target face two-dimensional image when the similarity exceeds a preset second threshold.
  • FIG. 1 is a schematic diagram of a 3D face identity authentication scenario according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a 3D face identity entry method in accordance with one embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a 3D face identity entry and authentication method in accordance with one embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a 3D face identity authentication method according to still another embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a 3D face identity entry and authentication method according to still another embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a 3D face identity entry and authentication method according to another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a 3D face identity authentication device in accordance with one implementation of the present invention.
  • In addition, a connection may serve a fixing function or provide electrical communication between circuits.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include one or more of the features either explicitly or implicitly.
  • the meaning of "a plurality" is two or more, unless specifically defined otherwise.
  • Face identity authentication technology can be used for security inspection and surveillance. With the popularity of smart terminals such as mobile phones and tablets, face identity is also applied to unlocking, payment, and even entertainment and games. Intelligent terminal devices such as mobile phones, tablets, computers, and televisions are mostly equipped with color cameras; after a color camera captures an image containing a face, the image is used for face detection and recognition, and the recognition result is then used to drive other applications. However, for mobile terminals such as phones and tablets, the application environment changes frequently, and environmental changes affect color-camera imaging; for example, the face cannot be imaged well in weak light. Moreover, a color camera cannot tell whether the recognized object is a real face.
  • The invention provides a 3D face identity authentication method and apparatus. A depth image and a two-dimensional image, both insensitive to ambient illumination, are used to implement face identity entry, detection, and recognition, and liveness detection based on the depth image and the two-dimensional image is combined with them to avoid misidentification of fake faces. The two-dimensional image here may be an infrared image, an ultraviolet image, or the like, and the corresponding capture camera may be a plane camera such as an infrared camera or an ultraviolet camera. In the following description, an infrared image is used as an example.
  • FIG. 1 is a schematic diagram of a 3D face identity authentication scenario according to an embodiment of the present invention.
  • The user 10 holds the face identity authentication device 11 (a mobile terminal such as a mobile phone or tablet). The device 11 contains a front-facing depth camera 111 and an infrared camera 112; when the device 11 faces the user, the depth camera 111 acquires a depth image containing the target face and the infrared camera 112 acquires an infrared image containing the target face.
  • Before performing face identity authentication, the device 11 must enter and save the reference face information for subsequent comparison. In the authentication phase, the device 11 collects a depth image and an infrared image of the current target face, extracts feature information of the current target face from them, and authentication succeeds when this feature information matches the entered reference face information; otherwise it fails.
  • The terms "reference face" and "target face" simply distinguish the two stages of face identity entry and authentication; in essence, face identity authentication verifies whether the target face is the same as the reference face.
  • FIG. 2 is a schematic diagram of a 3D face identity entry method in accordance with one embodiment of the present invention. The method includes the following steps:
  • In step 201, a depth image sequence and an infrared image sequence containing the reference face are collected by the depth camera 111 and the infrared camera 112. Sequences are needed because a single image cannot cover the whole face, so images covering all parts of the face must be collected; the depth images and infrared images may be captured simultaneously or in a time-shared manner.
  • During acquisition, one option is to keep the device 11 still while the face turns in different directions to collect a sequence covering all parts of the face; another is to keep the face still while the device 11 moves. Any other acquisition scheme may also be applied to the present invention.
  • When collecting the sequences, the face region in any image should preferably at least partially overlap the face region in at least one other image; the overlapping portions facilitate subsequent image fusion. For example, three images may be collected of the left side, the middle, and the right side of the face, where the middle image shares part of the face region with both the left and right images.
  • In general, the captured depth image or infrared image contains both the face and the background, so this step also includes face detection and segmentation. For the depth image, the face can be segmented according to the depth information; for the infrared image, a contour-based method or a machine-learning method such as the Adaboost algorithm or a neural-network-based detector may be used. Any suitable face detection method can be applied to the present invention.
  • In this embodiment, the depth image and the infrared image are registered (described in detail later), so face detection may be performed on only one of the images, and the face in the other image is obtained directly from the correspondence. For example, a trained neural network model performs face detection and segmentation on the infrared image to produce a new infrared image with part or all of the background removed, and a new depth image is then obtained from the correspondence between the depth image and the infrared image.
  • In one embodiment, a more efficient detection method that combines the two images is used: first, the depth value of a candidate pixel is read from the depth image; second, the size of the face region at that depth is estimated from the depth value and the lens parameters of the infrared camera; finally, an infrared image region of that size, centered on the pixel, is selected as the candidate for face determination. In traditional infrared face detection, the face-region size must be refined over several iterations to reach the best result, whereas this method determines the size directly from the depth information, which speeds up face detection.
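As an illustration of the depth-guided window sizing described in the last bullet, the sketch below assumes a pinhole camera model, depth values in millimeters, a focal length expressed in pixels, and a nominal face width of roughly 0.16 m; none of these constants or function names come from the patent itself.

```python
import numpy as np

def face_box_size_px(depth_mm: float, focal_px: float, face_width_m: float = 0.16) -> int:
    """Estimate the face-region width in pixels at a given depth (pinhole model)."""
    depth_m = depth_mm / 1000.0
    return int(round(focal_px * face_width_m / depth_m))

def candidate_window(ir_image: np.ndarray, depth_image: np.ndarray,
                     cx: int, cy: int, focal_px: float) -> np.ndarray:
    """Crop an IR window centered on (cx, cy), sized from the depth at that pixel."""
    d = float(depth_image[cy, cx])          # depth in millimeters at the candidate pixel
    if d <= 0:                              # invalid depth (holes are common in depth maps)
        return ir_image
    half = face_box_size_px(d, focal_px) // 2
    y0, y1 = max(cy - half, 0), min(cy + half, ir_image.shape[0])
    x0, x1 = max(cx - half, 0), min(cx + half, ir_image.shape[1])
    return ir_image[y0:y1, x0:x1]           # window handed to the face classifier
```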
  • In this step, the face depth image sequence obtained in the previous step is first fused into an overall face 3D point cloud model. In one embodiment, using the overlapping face regions in different depth images of the sequence, the ICP (Iterative Closest Point) algorithm fuses the depth image sequence into a single 3D image, i.e., a face 3D point cloud model; the KinectFusion method described in the paper "KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera" can be used in the present invention.
  • In some embodiments, considering that facial expressions may change while the image sequence is being captured, a dynamic fusion algorithm may be used to obtain the face 3D point cloud model, such as the DynamicFusion algorithm in the paper "DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real-time".
  • In some embodiments, because the 3D point cloud model is noisy and involves a large amount of data, it is further converted into a 3D mesh model; any suitable mesh generation algorithm can be applied to the present invention. In the following description, the 3D point cloud model or 3D mesh model is referred to uniformly as a 3D image.
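As a rough sketch of how a depth sequence could be fused into a single point cloud (a simplified stand-in for the KinectFusion/DynamicFusion pipelines named above), assuming the Open3D library is available and each depth frame has already been converted to a point cloud; the voxel size and correspondence distance are illustrative only.

```python
import numpy as np
import open3d as o3d  # assumed dependency; the patent only names ICP and KinectFusion

def fuse_depth_frames(point_clouds, voxel=0.002, max_dist=0.01):
    """Incrementally align each frame to the running model with point-to-point ICP,
    merge it in, and downsample to keep the fused face point cloud compact."""
    model = point_clouds[0]
    for frame in point_clouds[1:]:
        icp = o3d.pipelines.registration.registration_icp(
            frame, model, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        frame.transform(icp.transformation)               # bring the frame into model coordinates
        model = (model + frame).voxel_down_sample(voxel)  # merge and thin out duplicate points
    return model                                          # overall face 3D point cloud model
```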
  • Next, the texture information contained in the infrared image is assigned to the 3D image to obtain a 3D texture image. Because the depth image and the infrared image are registered, each pixel of the depth image corresponds not only to a depth value but also to a pixel value representing texture; after the 3D image is obtained, each point (node) in the 3D image is therefore assigned a pixel value representing the texture information, yielding a 3D texture image.
  • A two-dimensional face infrared image is then obtained by projecting the 3D texture image onto a two-dimensional plane. In one embodiment, the frontal orientation of the face is first determined from the 3D information in the 3D texture image, and the 3D texture image is then projected onto a 2D plane perpendicular to that orientation to obtain a complete frontal face infrared image. After the 3D texture image has been acquired, a complete face infrared image at any viewing angle can be obtained by projecting it onto a two-dimensional plane. Note that, to distinguish the originally acquired infrared images from projected or transformed ones, the latter are referred to uniformly as "face infrared images" in the present invention, as opposed to "infrared images".
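A minimal point-splatting sketch of this projection step, assuming a virtual pinhole camera with intrinsics K and a pose (R, t) chosen to face the frontal orientation of the face; a practical implementation would also interpolate between projected points to fill holes.

```python
import numpy as np

def project_textured_cloud(points, textures, K, R, t, height, width):
    """Render a textured 3D face point cloud onto a 2D image plane.

    points: (N, 3) 3D coordinates; textures: (N,) IR intensity per point;
    K: 3x3 intrinsics; R, t: pose of the virtual camera. A z-buffer keeps the
    nearest point per pixel so back-facing points do not overwrite the face."""
    cam = (R @ points.T + t.reshape(3, 1)).T            # transform into the camera frame
    z = cam[:, 2]
    valid = z > 1e-6
    uv = (K @ cam[valid].T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    image = np.zeros((height, width), dtype=np.float32)
    zbuf = np.full((height, width), np.inf, dtype=np.float32)
    for ui, vi, zi, ti in zip(u, v, z[valid], textures[valid]):
        if 0 <= vi < height and 0 <= ui < width and zi < zbuf[vi, ui]:
            zbuf[vi, ui] = zi
            image[vi, ui] = ti                           # nearest point wins the pixel
    return image
```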
  • Next, a feature extraction algorithm is used to extract the facial feature information. In one embodiment, the face infrared image is fed into a pre-trained neural network (such as a convolutional neural network, CNN) to output the feature information of the face. Finally, the extracted face feature information is saved in the device 11 as the identity authentication feature of the reference face, for comparison during subsequent target face identity authentication.
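The patent does not specify a particular network; purely as an illustration, a toy PyTorch embedding model for the entry phase might look like the following, where the architecture, input size, and file name are placeholders and real weights would have to be trained beforehand.

```python
import torch
import torch.nn as nn

class FaceEmbeddingNet(nn.Module):
    """Placeholder CNN mapping a 1-channel face IR image to a fixed-length feature vector."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)
        return nn.functional.normalize(self.fc(f), dim=1)    # unit-length embedding

# Entry phase: compute and store the reference embedding (weights assumed pre-trained).
net = FaceEmbeddingNet().eval()
with torch.no_grad():
    reference_feature = net(torch.rand(1, 1, 112, 112))       # stand-in for the frontal face IR image
torch.save(reference_feature, "reference_face_feature.pt")    # "saved to the device" in the patent's terms
```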
  • FIG. 3 is a schematic diagram of a 3D face identity entry and authentication method according to an embodiment of the present invention. In this embodiment, the authentication step includes: acquiring a depth image sequence and an infrared image sequence of the target face; computing a 3D texture image of the target face from the sequences; projecting the 3D texture image of the target face into a frontal face infrared image and extracting feature information of the target face from it; and, finally and unlike the face feature entry, comparing the feature information of the target face with the feature information of the reference face to determine whether they belong to the same face.
  • FIG. 4 and FIG. 5 are schematic diagrams of a 3D face identity entry and authentication method according to an embodiment of the present invention. The face identity entry method corresponding to the 3D face identity authentication method shown in FIG. 4 is the same as the entry embodiment described above; see FIG. 5 for details. This authentication method includes the following steps:
  • In this step, the depth camera 111 and the infrared camera 112 collect a depth image and an infrared image containing the target face. In one embodiment, only a single depth image and a single infrared image are collected, which also speeds up face identity authentication and gives the user a better experience. In other embodiments, several images may be acquired, but still far fewer than the sequence covering all of the face information used in the entry stage. The following description assumes a single depth image and a single infrared image.
  • After the depth image and the infrared image containing the face are acquired, and similarly to step 201, this step generally includes face detection and segmentation, finally producing a depth image and an infrared image with part or all of the background removed.
  • The eye gaze direction indicates where the current target person is looking, and gaze detection is increasingly used in many applications. In this embodiment, eye gaze detection is also performed; in other embodiments the gaze detection step may be omitted from the 3D face identity authentication. The gaze detection step may also be placed between other steps of the embodiment; that is, it is relatively independent of the other steps and can be executed, and its result used, according to the requirements of the application.
  • The face depth image, the face infrared image, or a combination of the two may be used to detect the eye gaze. In this embodiment, a combination of the depth image and the infrared image is preferably employed. First, the 3D information of the face (such as a 3D point cloud) is computed from the depth image, and information such as the face orientation and the 3D coordinates of key points is obtained from it. Second, eye details are identified in the infrared image, such as the pupil center, the glint (the fixed spot formed on the cornea by reflection of the infrared fill light into the infrared camera), the pupil, and the iris; combining the 3D information of the face with the correspondence between the infrared image and the depth image then yields the 3D coordinates of these eye detail features. Finally, the eye gaze direction is computed from the 3D coordinates of one or more of the eye detail features. Other gaze detection methods known in the art can also be applied to the present invention; for example, gaze detection may be performed using only an infrared image.
  • Eye gaze detection can further improve the user experience of face identity authentication. For example, in the embodiment shown in FIG. 1, if the face is captured by the depth camera 111 and the infrared camera 112 while the eyes are not looking at the device 11, the authentication performed is often not the subjective intention of the user, i.e., a mis-authentication. Therefore, in some applications, gaze detection is used as a separate step, and the other steps decide whether, and in which way, to proceed based on its result. The next step is performed when the detected gaze direction coincides with a preset direction, where the preset direction generally refers to the direction in which the eyes look at, or attend to, the current 3D face authentication application, such as a face authentication application for unlocking or payment displayed on the screen; in other embodiments the preset direction may refer to other directions, such as the direction pointing at the device 11.
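A toy geometric sketch of gaze gating, assuming the 3D coordinates of an eyeball center and pupil center have already been recovered as described above; the eyeball-center model, the example coordinates, and the 10-degree tolerance are assumptions rather than the patent's method.

```python
import numpy as np

def gaze_direction(eye_center_3d, pupil_center_3d):
    """Approximate gaze ray from the 3D eyeball center to the 3D pupil center."""
    v = np.asarray(pupil_center_3d, dtype=float) - np.asarray(eye_center_3d, dtype=float)
    return v / np.linalg.norm(v)

def gaze_matches_preset(gaze_vec, preset_vec, max_angle_deg=10.0):
    """Treat the gaze as 'coinciding' with the preset direction within an angular tolerance."""
    cosang = float(np.dot(gaze_vec, preset_vec) /
                   (np.linalg.norm(gaze_vec) * np.linalg.norm(preset_vec)))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) <= max_angle_deg

# Example: gate the rest of the pipeline on the gaze check (threshold is illustrative).
preset = np.array([0.0, 0.0, -1.0])      # e.g. looking straight at the screen/camera
if gaze_matches_preset(gaze_direction([0, 0, 30], [0, 0, 27]), preset):
    pass  # continue with registration, projection, and comparison
```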
  • In this step, the pose information of the current target face is first obtained from the depth image; second, the target face infrared image is aligned and corrected based on the pose information, the purpose of the correction being to obtain an infrared image of the current face in the same pose as the reference face, which minimizes the recognition error caused by pose differences; finally, face features are extracted from the corrected target face image and compared with the features of the reference face image for authentication. These steps are described in more detail below.
  • During the entry phase, a 3D image of the reference face (such as a 3D point cloud or 3D mesh) was saved, so in this step the target face depth image acquired in step 301 is aligned with the 3D image of the reference face. In one embodiment, the ICP algorithm is used for the alignment, and after the alignment operation the pose information of the current target face relative to the reference face is obtained. In other embodiments, a 3D image of a standard face may be used instead, the 3D image of the standard face serving as the reference 3D image for computing the pose information of the target face.
  • After the pose information of the current target face has been acquired, the target face infrared image is corrected based on the pose information to obtain a current target face infrared image in the same pose as the reference face infrared image obtained in step 203. Preferably, the reference face infrared image is a frontal face image, so the purpose of the correction is to obtain a frontal face infrared image of the current target face. Pose-based face image alignment algorithms of the prior art can be applied to the present invention, such as the method described in the paper "DeepFace: Closing the Gap to Human-Level Performance in Face Verification".
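A sketch of the pose-estimation part of this step, again assuming Open3D's ICP (as in the fusion sketch above) and that both the target depth image and the stored reference 3D image are available as point clouds; the Euler-angle readout assumes an R = Ry·Rx·Rz factorization and is only for inspection or thresholding.

```python
import numpy as np
import open3d as o3d  # assumed, as in the fusion sketch above

def estimate_target_pose(target_cloud, reference_model, max_dist=0.01):
    """Register the target face point cloud to the stored reference 3D model with ICP and
    return the 4x4 transformation describing the target pose relative to the reference."""
    icp = o3d.pipelines.registration.registration_icp(
        target_cloud, reference_model, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return np.asarray(icp.transformation)     # rotation + translation of the target face

def yaw_pitch_roll(T):
    """Euler angles (degrees) from the rotation block, assuming R = Ry(yaw)·Rx(pitch)·Rz(roll)."""
    R = T[:3, :3]
    yaw = np.degrees(np.arctan2(R[0, 2], R[2, 2]))
    pitch = np.degrees(np.arcsin(np.clip(-R[1, 2], -1.0, 1.0)))
    roll = np.degrees(np.arctan2(R[1, 0], R[1, 1]))
    return yaw, pitch, roll
```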
  • Next, a feature extraction algorithm is used to extract the facial feature information. In one embodiment, the target face infrared image is fed into the same neural network used in the entry phase to output feature information of the same form. The feature information of the current target face obtained in the previous step is then compared with the feature information of the reference face obtained in the entry phase to determine whether they belong to the same face. The comparison generally outputs a similarity; when the similarity exceeds a preset threshold, for example 80%, the two are considered the same face, otherwise they are different faces.
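The patent does not fix a similarity metric; a simple cosine-based sketch with the 80% figure mentioned above used as the preset threshold could read as follows.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face feature vectors, mapped into the range [0, 1]."""
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 0.5 * (cos + 1.0)

def authenticate(target_feat: np.ndarray, reference_feat: np.ndarray, threshold: float = 0.80) -> bool:
    """Accept the target face when the similarity exceeds the preset threshold."""
    return cosine_similarity(target_feat, reference_feat) > threshold
```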
  • FIG. 5 is a schematic diagram of the above 3D face identity entry and authentication method. Note that, besides the reference face feature information recorded and saved in the entry phase, the 3D image (3D point cloud/mesh) of the reference face must in fact also be entered and saved, so that it can be called in the authentication phase when computing the pose of the target face.
  • In the above embodiment, the accuracy of the 3D face identity authentication depends heavily on the accuracy of the alignment and correction of the face infrared image, because only a single infrared image, or a few infrared images, are acquired in the authentication phase. When the acquired infrared image has a strongly biased pose, for instance when the head is raised or turned far to the side, then even after alignment and correction into a target face infrared image in the required pose, the alignment and correction algorithm cannot recover the features lost due to the side view. Based on this, the present invention also provides a more accurate 3D face identity entry and authentication method.
  • FIG. 6 is a schematic diagram of a 3D face identity entry and authentication method according to still another embodiment of the present invention.
  • In the entry phase of this embodiment, the depth image sequence and the infrared image sequence of the reference face are first acquired; the 3D texture image, comprising the 3D point cloud/mesh together with the infrared texture information, is then computed; finally, the 3D texture image is entered and stored in the storage device.
  • In the authentication phase, the depth image and the infrared image of the target face are first acquired; this step usually requires face detection and image segmentation to obtain the face images. Gaze detection is then performed, and the process continues only when the detected gaze direction meets the preset condition. Next, the depth image is matched (aligned, registered) with the saved 3D point cloud/mesh of the reference face to obtain the pose information of the target face; the reference face 3D texture image is then projected according to the pose information to obtain a reference face infrared image in the same pose as the target face; the reference face infrared image and the target face infrared image are both fed into the neural network to extract their respective facial feature information; finally, the face features are compared and the comparison result is output.
  • Unlike the previous embodiment, in this embodiment a reference face infrared image closest to the target face pose is obtained by projection, so the method does not need to change the pose of the infrared image with an alignment and correction algorithm. Because the 3D texture image contains all the information of the reference face, the reference face infrared image obtained by projection has the highest possible similarity to the target face infrared image, which helps improve authentication accuracy. In terms of algorithm selection, a deep learning algorithm for similarity judgment may also be trained; its input is the two images and its output is the similarity, which can speed up authentication.
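Pulling these pieces together, a sketch of the FIG. 6 style flow might look as follows; `face3d_sketch` is a hypothetical module collecting the helper sketches from the earlier sections, and whether the ICP transform or its inverse is needed depends on the registration convention assumed there.

```python
import numpy as np
import torch
# Hypothetical module collecting the sketches above (pose, projection, embedding, similarity).
from face3d_sketch import (FaceEmbeddingNet, cosine_similarity,
                           estimate_target_pose, project_textured_cloud)

def authenticate_by_projection(target_cloud, target_ir, reference_model, reference_textures,
                               K, height, width, net: FaceEmbeddingNet, threshold=0.80):
    """Render the reference 3D texture image in the target's pose, then compare the rendered
    reference IR image with the captured target IR image (the FIG. 6 embodiment)."""
    T = estimate_target_pose(target_cloud, reference_model)  # maps target coords to reference coords
    T_inv = np.linalg.inv(T)                                 # place the reference model in the target's frame
    R, t = T_inv[:3, :3], T_inv[:3, 3]
    ref_ir = project_textured_cloud(np.asarray(reference_model.points),
                                    reference_textures, K, R, t, height, width)
    with torch.no_grad():
        f_ref = net(torch.from_numpy(ref_ir).float().view(1, 1, height, width))
        f_tgt = net(torch.from_numpy(np.asarray(target_ir, dtype=np.float32)).view(1, 1, height, width))
    sim = cosine_similarity(f_ref.numpy().ravel(), f_tgt.numpy().ravel())
    return sim > threshold                                   # pass/fail of the similarity comparison
```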
  • The face authentication methods described above can still be "spoofed": for example, if a 2D face image or a three-dimensional face model is presented as the target face, authentication with the above methods may succeed, which is unacceptable in applications based on face authentication such as unlocking and payment. Therefore, the 3D face identity authentication method provided by the present invention may further include a liveness detection step for determining whether the current target face is a real face; authentication passes only when the similarity between the target face and the reference face exceeds the preset threshold and the target face is a real face, and fails otherwise.
  • There are various liveness detection methods. In one embodiment, whether the target is a three-dimensional object may be judged from the acquired target face depth image, which defeats spoofing with 2D images. In another embodiment, skin properties implicit in the infrared image may be used to judge whether the surface is skin, which defeats spoofing with ordinary three-dimensional models. However, an effective liveness detection method that copes with all kinds of spoofing is still lacking; the present invention provides an algorithm to address this problem.
  • In one embodiment, the liveness detection method of the present invention is based on a deep learning algorithm: a neural network model is constructed and trained with a large amount of data, where the data include depth images and infrared images of real people, 2D photographs, realistic masks, and 3D models. Generally, the larger the amount of data, the more accurate the trained neural network model. The trained network can very accurately distinguish real faces from various kinds of fake faces, thereby achieving liveness detection. In one embodiment, the acquired target face depth image and infrared image are both input into the neural network, which outputs whether the target is a real face; in another embodiment, only the depth image or only the infrared image is input into the network to output the result.
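A toy two-stream liveness classifier along these lines might be sketched as follows; the architecture, input size, and 0.5 cut-off are placeholders, and real weights would come from training on the kind of real/fake data described above.

```python
import torch
import torch.nn as nn

class LivenessNet(nn.Module):
    """Toy classifier over a depth map and an IR image (real face vs. fake)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),   # channel 0: depth, channel 1: IR
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1))

    def forward(self, depth: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        x = torch.cat([depth, ir], dim=1)         # stack the two modalities as input channels
        return torch.sigmoid(self.backbone(x))    # probability that the target is a live face

# Usage sketch: gate the rest of the authentication pipeline on the liveness score.
model = LivenessNet().eval()                      # weights assumed trained on real/fake data
with torch.no_grad():
    p_live = model(torch.rand(1, 1, 112, 112), torch.rand(1, 1, 112, 112)).item()
is_real_face = p_live > 0.5                       # 0.5 cut-off is illustrative only
```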
  • In one embodiment, the liveness detection step may be performed immediately after the depth image and the infrared image are acquired, with the subsequent recognition steps executed only after liveness detection passes; thus, apart from the image acquisition step, the liveness detection step is relatively independent of the other steps, can be performed before any of them, and its result determines whether the next step proceeds. In some embodiments, the liveness detection step may be omitted. In other embodiments, the feature extraction and comparison steps follow the liveness detection step, that is, the similarity detection of the target face is performed only when liveness detection passes.
  • In some embodiments, the 3D face identity authentication algorithm may further include a data update step to cope with changes in the face. In the foregoing embodiments, authentication passes when the similarity between the target face and the reference face exceeds a certain threshold and liveness detection passes. Conceivably, if the entered reference face information never changes while the target face changes over time, the similarity will become lower and lower until mis-rejection occurs, that is, the current target face can no longer be recognized as the original reference face. To cope with this problem, after 3D face authentication passes, the current target face information is used as new reference face information whenever the similarity is higher than another threshold; the reference face information is thus continually updated, so that the face can still be authenticated accurately even after large changes. Note that the threshold that triggers the update should generally be higher than the threshold used in the face authentication decision step.
  • The meaning of "updating the reference face information" differs between embodiments. For the embodiment shown in FIG. 5, the feature information of the face infrared image is updated: the target face feature information is used as the new reference face feature information. For the embodiment shown in FIG. 6, the 3D texture image of the face is updated: the texture information of the target face two-dimensional image replaces the corresponding texture information in the original reference face 3D texture image.
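The update rule itself is simple; a sketch with illustrative threshold values (the patent only requires the update threshold to be higher than the authentication threshold) could be:

```python
AUTH_THRESHOLD = 0.80     # first threshold: accept/reject the authentication
UPDATE_THRESHOLD = 0.90   # second threshold: only very confident matches refresh the template

def authenticate_and_update(similarity: float, is_real_face: bool,
                            target_template, reference_store: dict) -> bool:
    """Accept when similarity and liveness pass; refresh the stored reference template
    (feature vector or 3D texture, depending on the embodiment) on high-confidence matches."""
    accepted = is_real_face and similarity > AUTH_THRESHOLD
    if accepted and similarity > UPDATE_THRESHOLD:
        reference_store["template"] = target_template   # update the reference face information
    return accepted
```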
  • FIG. 7 is a schematic diagram of a 3D face identity authentication apparatus according to an implementation of the present invention.
  • The device 11 includes a projection module 702 for projecting an infrared structured-light pattern into the target space and an acquisition module 707 for capturing the structured-light image; the device 11 further includes a processor (not shown in the figure) that computes the depth image of the target after receiving the structured-light image. In some embodiments, the structured-light image contains face texture information in addition to the structured-light information, so the structured-light image can itself take part in face identity entry and authentication as the face infrared image, together with the depth image. In this case, the acquisition module 707 is part of the depth camera 111 of FIG. 1 and is also the infrared camera 112; in other words, the depth camera and the infrared camera can here be regarded as the same camera.
  • In one embodiment, the device 11 further includes an infrared floodlight 706 that emits infrared light of the same wavelength as the structured light emitted by the projection module 702. The projection module 702 and the infrared floodlight 706 are switched on in a time-multiplexed manner to acquire the depth image and the infrared image of the target, respectively. The infrared image acquired in this way is a pure infrared image; compared with the structured-light image, its face feature information is more distinct and the face authentication accuracy is higher.
  • In one embodiment, a depth camera based on TOF (time-of-flight) technology may be used, where the projection module 702 emits light pulses and the acquisition module 707 receives the reflected pulses; the processor records the time between pulse emission and reception and computes the depth image of the target from that time. In this case, the acquisition module 707 can acquire the depth image and the infrared image of the target simultaneously, and there is no parallax between the two.
  • In one embodiment, an additional infrared camera 703 may be provided for acquiring the infrared image; the acquisition module 707 and the infrared camera 703 can then be used synchronously to acquire the depth image and the infrared image of the target. This arrangement differs from the previous ones in that, because the depth image and the infrared image come from different cameras, there is parallax between them; when the subsequent face authentication processing requires parallax-free images, the depth image must be registered with the infrared image in advance.
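Registering the depth image to the infrared camera's viewpoint is a standard reprojection; the sketch below assumes calibrated intrinsics (K_depth, K_ir) and extrinsics (R, t) are known, depth values are in meters, and occlusion handling and hole filling are omitted.

```python
import numpy as np

def register_depth_to_ir(depth, K_depth, K_ir, R, t, height, width):
    """Warp a depth map from the depth camera's viewpoint into the IR camera's viewpoint.

    depth: HxW depth in meters; K_depth, K_ir: 3x3 intrinsics; R, t: extrinsics mapping
    depth-camera coordinates into IR-camera coordinates (assumed known from calibration)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.ravel()
    valid = z > 0
    pix = np.stack([u.ravel()[valid] * z[valid], v.ravel()[valid] * z[valid], z[valid]])
    pts = np.linalg.inv(K_depth) @ pix                  # back-project to 3D in the depth frame
    pts_ir = R @ pts + t.reshape(3, 1)                  # move into the IR camera frame
    proj = K_ir @ pts_ir
    u_ir = np.round(proj[0] / proj[2]).astype(int)
    v_ir = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros((height, width), dtype=depth.dtype)
    keep = (u_ir >= 0) & (u_ir < width) & (v_ir >= 0) & (v_ir < height)
    out[v_ir[keep], u_ir[keep]] = pts_ir[2][keep]       # depth now expressed in the IR view
    return out
```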
  • The device 11 may also include a receiver (earpiece) 704, an ambient light/proximity sensor 705, and so on, to provide more functions. In one embodiment, the proximity sensor 705 can detect how close the face is and, when the face is too close, turn off the projection of the projection module 702 or reduce its projection power. In one embodiment, face authentication can be combined with the receiver to implement automatic call answering: when the device is a communication device and receives an incoming call, the face authentication application is launched, the depth camera and the infrared camera are activated to capture the depth image and the infrared image, and when authentication passes, the call is connected and devices such as the receiver are turned on to carry the call.
  • The device 11 may also include a screen 701, which can display image content and support touch interaction. In one embodiment, the face authentication method can be used to implement screen unlocking: when the user picks up the device 11, the inertial measurement unit in the device 11 recognizes the pick-up acceleration and lights up the screen, which then displays an unlock prompt; the device turns on the depth camera and the infrared camera to capture the depth image and/or the infrared image and launches the face authentication application. In the gaze detection part of this face authentication process, the preset gaze direction may be set to the direction in which the eyes look at the screen 701, that is, face authentication and unlocking proceed only when the eyes are looking at the screen.
  • The device 11 also includes a memory (not shown) for storing the feature information entered during the entry phase, as well as applications, instructions, and the like. In one embodiment, the 3D face identity entry and authentication methods described above are saved in the memory as software programs, and the processor calls the instructions in the memory to perform the entry and authentication methods. In another embodiment, the 3D face identity entry and authentication methods may be written directly into the processor in the form of instruction code to improve execution efficiency. In other words, the 3D face identity entry and authentication methods described in the present invention may be implemented in the form of software or hardware.

Abstract

Provided are a 3D face identity authentication method and apparatus. The method comprises the following steps: obtaining a depth image and a two-dimensional image which include a target face; registering the depth image with a reference face 3D texture image to obtain pose information of the target face; projecting a reference face two-dimensional image from the reference face 3D texture image according to the pose information; and comparing the similarity of a target face two-dimensional image with the reference face two-dimensional image. The method combines 3D information and projection to obtain a complete reference face two-dimensional image in order to increase recognition accuracy. The method also comprises steps of eye gaze detection, liveness detection, and data updating in order to improve the user experience, reduce false recognition rates, and cope with problems such as facial changes.

Description

3D face identity authentication method and apparatus

Technical Field

The invention belongs to the field of computer technology, and more particularly relates to a 3D face identity authentication method and apparatus.

Background Art

The human body has many unique features, such as the face, fingerprints, irises, and ears, which are collectively referred to as biometrics. Biometric recognition is widely used in security, home, intelligent hardware, and many other fields; relatively mature biometrics such as fingerprint recognition and iris recognition are already common in mobile phones, computers, and other terminals. For features such as the face, although the related research is very advanced, recognition is still not widespread, mainly because existing recognition methods have limitations that lead to low recognition rates and poor recognition stability. These limitations mainly include sensitivity to ambient light intensity and illumination direction, the effect of facial expressions on the recognition rate, and vulnerability to spoofing by artificial features.

Existing recognition of features such as the face is mainly based on two-dimensional color images of the face. When the ambient light is weak, recognition is seriously affected. When the illumination direction differs, shadows appear on the face image, which also degrades recognition. When the reference face image was captured with a neutral expression but the current image is captured while smiling, the recognition performance also drops. In addition, if the recognized object is not a real face but a two-dimensional face picture, it can often still pass recognition.

To address these problems, biometric recognition based on near-infrared or thermal infrared images is commonly used. Near-infrared images are not disturbed by ambient light, which improves recognition stability, but they remain vulnerable to spoofing with artificial features. Thermal infrared images only image real faces and therefore can solve the spoofing problem, but their low resolution seriously degrades the recognition result.

Based on the above, a comprehensive biometric solution for performing tasks such as unlocking and payment is still lacking.
Summary of the Invention

In order to solve the problem that the prior art lacks a comprehensive face recognition scheme, the present invention provides a task execution method based on face recognition.

In order to solve the above problems, the technical solution adopted by the present invention is as follows:

The present invention provides a 3D face identity authentication method and apparatus, including the steps of: acquiring a depth image and a two-dimensional image containing a target face; registering the depth image with a reference face 3D texture image to obtain pose information of the target face; projecting the reference face 3D texture image into a reference face two-dimensional image according to the pose information; and comparing the similarity of the target face two-dimensional image with the reference face two-dimensional image.

In one embodiment, the method further includes the step of: independently of steps (b)-(d), detecting the eye gaze of the target face using the depth image and/or the two-dimensional image, and continuing with step (b), (c), or (d) when the gaze direction coincides with a preset direction.

In one embodiment, the method further includes the step of: independently of steps (b)-(d), detecting whether the target face is a real face using the depth image and/or the two-dimensional image, and, if it is a real face, continuing with step (b), (c), or (d), or passing the authentication when the similarity exceeds a preset first threshold.

In one embodiment, the method further includes the step of: updating the corresponding texture information in the reference face 3D texture image with the target face two-dimensional image when the similarity exceeds a preset second threshold.

In one embodiment, the reference face 3D texture image is obtained by acquiring a depth image sequence and a two-dimensional image sequence containing the reference face and computing the reference face 3D texture image from them. The 3D texture image includes a 3D point cloud or a 3D mesh carrying texture information.

In one embodiment, the projection refers to projecting the 3D texture image onto a 2D plane to form a two-dimensional face image.

In one embodiment, the face pose in the reference face two-dimensional image is consistent with the face pose in the target face two-dimensional image.

In one embodiment, the two-dimensional image comprises an infrared image.

In one embodiment, the two-dimensional image comprises a structured light image.

The present invention also provides a 3D face identity authentication apparatus, comprising: a depth camera for acquiring a depth image containing a target face; a plane camera for acquiring a two-dimensional image containing the target face; and a processor that receives the depth image and the two-dimensional image and performs the following operations: registering the depth image with a reference face 3D texture image to obtain pose information of the target face; projecting the reference face 3D texture image into a reference face two-dimensional image according to the pose information; and comparing the similarity of the target face two-dimensional image with the reference face two-dimensional image.

In one embodiment, the processor further detects the eye gaze of the target face using the depth image and/or the two-dimensional image and continues with the other operations when the gaze direction coincides with a preset direction.

In one embodiment, the processor further detects whether the target face is a real face using the depth image and/or the two-dimensional image and, if it is a real face, continues with the other operations, or passes the authentication when the similarity exceeds a preset first threshold.

In one embodiment, the processor further updates the corresponding texture information in the reference face 3D texture image with the target face two-dimensional image when the similarity exceeds a preset second threshold.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of a 3D face identity authentication scenario according to an embodiment of the present invention.

FIG. 2 is a schematic diagram of a 3D face identity entry method according to an embodiment of the present invention.

FIG. 3 is a schematic diagram of a 3D face identity entry and authentication method according to an embodiment of the present invention.

FIG. 4 is a schematic diagram of a 3D face identity authentication method according to still another embodiment of the present invention.

FIG. 5 is a schematic diagram of a 3D face identity entry and authentication method according to still another embodiment of the present invention.

FIG. 6 is a schematic diagram of a 3D face identity entry and authentication method according to another embodiment of the present invention.

FIG. 7 is a schematic diagram of a 3D face identity authentication apparatus according to an implementation of the present invention.

Detailed Description

In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the embodiments of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the invention and are not intended to limit it.

It should be noted that when an element is described as being "fixed to" or "disposed on" another element, it may be directly on the other element or indirectly on it. When an element is described as being "connected to" another element, it may be directly or indirectly connected to the other element. In addition, a connection may serve a fixing function or provide electrical communication between circuits.

It should be understood that terms indicating orientation or positional relationships, such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer", are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the embodiments of the present invention, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the invention.

Moreover, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of the present invention, "a plurality" means two or more, unless specifically defined otherwise.
人脸身份认证技术可以被用于安检、监控,目前随着智能终端比如手机、平板的普及,人脸身份也会被应用于解锁、支付,甚至娱乐游戏等多个方面。智能终端设备,比如手机、平板、电脑、电视等大都配备了彩色相机,利用彩色相机采集包含人脸的图像后,利用该图像进行人脸检测及识别,从而进一步利用识别的结果执行其他相关的应用。然而,对于像手机、平板等移动终端设备而言,其应用环境常常变化,环境变化会影响彩色相机的成像,比如光线较弱时则无法对人脸进行很好的成像。另一方面,彩色相机无法识别被识别对象是否为真实人脸。Face authentication technology can be used for security check and monitoring. At present, with the popularity of smart terminals such as mobile phones and tablets, face identity can also be applied to unlocking, paying, and even entertainment games. Intelligent terminal devices, such as mobile phones, tablets, computers, televisions, etc., are mostly equipped with color cameras. After capturing images containing faces using color cameras, the images are used for face detection and recognition, thereby further utilizing the results of the recognition to perform other related application. However, for mobile terminal devices such as mobile phones and tablets, the application environment often changes, and environmental changes may affect the imaging of color cameras. For example, when the light is weak, the face cannot be well imaged. On the other hand, the color camera cannot recognize whether the recognized object is a real face.
本发明提供了一种3D人脸身份认证方法与装置。将利用对环境光照不敏感的深度图像以及二维图像实现对人脸身份的录入、检测、识别等功能,同时结合基于深度图像以及二维图像的活体检测来避免虚假人脸的身份误识别现象。这里的二维图像可以是红外图像、紫外图像等,相对应的采集相机可以为红外相机、紫外相机等平面相机。在以下的说明中以红外图像为例进行说明。The invention provides a 3D face identity authentication method and device. The depth image and the two-dimensional image that are insensitive to ambient illumination will be used to realize the functions of inputting, detecting and recognizing the face identity, and the biometric detection based on the depth image and the two-dimensional image is used to avoid the false recognition of the false face. . The two-dimensional image here may be an infrared image, an ultraviolet image, etc., and the corresponding acquisition camera may be a flat camera such as an infrared camera or an ultraviolet camera. In the following description, an infrared image will be described as an example.
图1是根据本发明一个实施例的3D人脸身份认证场景示意图。用户10手持人脸身份认证装置11(移动终端,如手机、平板等),装置11内部前置了一个深度相机111以及红外相机112,当装置11朝向人脸时,深度相机111用于获取包含目标人脸的深度图像,红外相机112用于采集包含目标人脸的红外图像。装置11在进行人脸身份认证之前需要将参考人脸的信息录入并保存到装置11中,以便于后续的认证比对;在人脸身份认证阶段,装置11将采集当前目标人脸的深度图像与红外图像,并基于深度图像与红外图像提取当前目标人脸的特征信息,当该特征信息与录入的参考人脸信息相匹配时,人脸身份认证成功,否则失败。以上所说的“参考人脸”以及“目标人脸”是相对于人脸身份录入以及认证两个不同的阶段而言,仅以示区别,抽象的说,人脸身份认证的本质即是验证目标人脸与参考人脸是否相同,FIG. 1 is a schematic diagram of a 3D face identity authentication scenario according to an embodiment of the present invention. The user 10 holds the face authentication device 11 (mobile terminal, such as a mobile phone, a tablet, etc.), and the device 11 is internally provided with a depth camera 111 and an infrared camera 112. When the device 11 faces the face, the depth camera 111 is used to acquire the The depth image of the target face, the infrared camera 112 is used to acquire an infrared image containing the target face. The device 11 needs to record and save the reference face information into the device 11 for subsequent authentication comparison before performing the face identity authentication; in the face identity authentication phase, the device 11 will collect the depth image of the current target face. And the infrared image, and extracting feature information of the current target face based on the depth image and the infrared image, when the feature information matches the entered reference face information, the face identity authentication succeeds, otherwise it fails. The "reference face" and the "target face" mentioned above are relative to the two different stages of face identity entry and authentication. The difference is only to show that the essence of face identity authentication is verification. Whether the target face is the same as the reference face,
3D人脸身份录入与认证3D face identity entry and authentication
图2是根据本发明一个实施例的3D人脸身份录入方法示意图。方法包括以下步骤:2 is a schematic diagram of a 3D face identity entry method in accordance with one embodiment of the present invention. The method includes the following steps:
201:获取深度图像序列以及红外图像序列。201: Acquire a depth image sequence and an infrared image sequence.
这一步骤中即是通过深度相机111以及红外相机112采集包含参考人脸的深度图像序列与红外图像序列,之所以是序列图像是因为单幅图像无法包含整体人脸的信息,因此需要采集包含人脸所有部位信息的序列图像,深度图像与红外图像的采集即可以同步采集也可以分时采集。在采集时,一种方式是装置11不动,人脸不断改变方向从而采集到包含所有人脸部位信息的序列图像;另一种方式是人脸不动,装置11通过移动来采集包含所有人脸部位信息的序列图像,可以理解的是,其他任何方式均可以应用到本发明中。In this step, the depth image sequence and the infrared image sequence including the reference face are acquired by the depth camera 111 and the infrared camera 112. The sequence image is because the single image cannot contain the information of the entire face, so the collection needs to be included. The sequence image of all parts of the face, the depth image and the infrared image can be collected simultaneously or in time. In the acquisition, one way is that the device 11 does not move, the face constantly changes direction to collect a sequence image containing all the face parts information; the other way is that the face is not moving, and the device 11 collects all the movements by including A sequence image of face part information, it is understood that any other means can be applied to the present invention.
When collecting the depth image sequence and the infrared image sequence containing the reference face, the face region contained in any one image of the sequence preferably at least partially overlaps with that of at least one other image; the overlapping portions facilitate subsequent image fusion. For example, three images may be captured of the left, middle, and right sides of the face, where the middle image shares part of the face region with each of the left and right images.
Generally, a captured depth image or infrared image contains both the face and the background, so this step also requires detecting and segmenting the face. For the depth image, the face can be segmented according to the depth information; for the infrared image, a contour-based method, a machine-learning method such as the Adaboost algorithm, or a neural-network-based detection method may be used. It should be understood that any suitable face detection method can be applied to the present invention.
In this embodiment, the depth image and the infrared image are registered images (described in detail later), so face detection may be performed on only one of the images, and the face in the other image can be obtained directly from the correspondence between them. For example, a trained neural network model performs face detection and segmentation on the infrared image to obtain a new infrared image with part or all of the background removed, and a new depth image is then obtained according to the correspondence between the depth image and the infrared image. In one embodiment, a detection method that combines the two images more efficiently is adopted: first, the depth value of the corresponding pixel is read from the depth image; second, the size of the face region at that depth is estimated from the depth value and the lens parameters of the infrared camera; finally, an infrared image region of that size, centered on the pixel, is selected on the infrared image as the candidate for face determination. In conventional face detection on infrared images, a number of iterations over candidate face-region sizes is needed to reach the best result, whereas this method determines the size directly from the depth information, thereby speeding up face detection.
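By way of a non-limiting illustration of the depth-guided sizing described above, the following Python sketch estimates the pixel extent of a candidate face window from a depth value and the infrared camera's focal length, then crops that window from the registered infrared image. The function and parameter names, and the assumed average face width of 0.16 m, are illustrative only and are not part of the disclosed method.

```python
import numpy as np

def face_window_from_depth(ir_image, depth_m, center_xy, focal_px,
                           face_width_m=0.16):
    """Estimate a candidate face window size from a depth value.

    ir_image    : 2D numpy array, infrared image registered with the depth map
    depth_m     : depth of the candidate pixel, in meters
    center_xy   : (x, y) pixel coordinates of the candidate center
    focal_px    : infrared camera focal length in pixels (from calibration)
    face_width_m: assumed physical face width; 0.16 m is a rough average
    """
    # Pinhole model: pixel extent = focal length * physical size / depth
    win = int(round(focal_px * face_width_m / depth_m))
    x, y = center_xy
    h, w = ir_image.shape[:2]
    # Clamp the window to the image boundaries
    x0, x1 = max(0, x - win // 2), min(w, x + win // 2)
    y0, y1 = max(0, y - win // 2), min(h, y + win // 2)
    return ir_image[y0:y1, x0:x1]

# Example: a pixel at 0.5 m with a 580-pixel focal length gives a ~186-pixel window.
```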
202: Compute a 3D texture image of the face.
In this step, the face depth image sequence acquired in the previous step is first fused into an overall 3D point cloud model of the face. In one embodiment, the overlapping face portions in different depth images of the sequence are used to fuse the sequence into a single 3D image, i.e., a face 3D point cloud model, with the ICP (Iterative Closest Point) algorithm; the KinectFusion method described in the paper "KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera" can be used in the present invention. In some embodiments, considering that facial expressions may change while the image sequence is being captured, a dynamic fusion algorithm may be used to obtain the 3D point cloud model of the face, for example the DynamicFusion algorithm in the paper "DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time". In some embodiments, because the 3D point cloud model is noisy and contains a large amount of data, it may further be converted into a 3D mesh model; any suitable mesh generation algorithm can be applied to the present invention. In the following description, the 3D point cloud model or the 3D mesh model is referred to uniformly as a 3D image.
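As a rough illustration of fusing a depth image sequence into a single face point cloud with ICP, the following sketch back-projects each depth frame through a pinhole model and aligns it to the growing model. It assumes the Open3D library (version 0.10 or later) purely for its ICP routine; it is not the KinectFusion or DynamicFusion implementation referenced above, and the intrinsics and distance threshold are placeholders.

```python
import numpy as np
import open3d as o3d  # assumed available; any rigid ICP implementation would do

def depth_to_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into a 3D point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(pts)
    return cloud

def fuse_sequence(depth_frames, intrinsics, icp_dist=0.01):
    """Pairwise ICP fusion of a depth-image sequence into one face point cloud."""
    fx, fy, cx, cy = intrinsics
    merged = depth_to_cloud(depth_frames[0], fx, fy, cx, cy)
    for d in depth_frames[1:]:
        cloud = depth_to_cloud(d, fx, fy, cx, cy)
        # Align the new frame to the merged model using point-to-point ICP
        reg = o3d.pipelines.registration.registration_icp(
            cloud, merged, icp_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        merged += cloud.transform(reg.transformation)
    # Thin out duplicated points accumulated from overlapping views
    return merged.voxel_down_sample(voxel_size=0.002)
```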
Second, the texture information contained in the infrared images is assigned to the 3D image to obtain a 3D texture image. When the infrared image and the depth image are registered, each pixel in the depth image carries not only a value representing depth but also a corresponding value representing texture; therefore, after the 3D image is obtained, the 3D texture image is obtained by assigning to each point (node) of the 3D image the pixel value representing its texture information.
203: Project a face infrared image.
In this step, a two-dimensional face infrared image is obtained by projecting the 3D texture image onto a two-dimensional plane. Because the frontal view of the face contains the most feature information, in one embodiment, to obtain a complete frontal face infrared image, the frontal orientation of the face is first obtained from the 3D information in the 3D texture image, and the 3D texture image is then projected onto a two-dimensional plane perpendicular to that orientation, yielding a complete frontal face infrared image. It should be understood that, once the 3D texture image is available, a complete face infrared image at any viewing angle can be obtained by projection onto a two-dimensional plane. It should be noted that, to distinguish an originally captured infrared image from a projected or otherwise transformed one, the latter is referred to throughout this invention as a "face infrared image" as opposed to an "infrared image".
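The projection of a textured 3D point cloud onto a plane perpendicular to the estimated frontal orientation can be illustrated by the simplified orthographic sketch below. The image size, scale factor, and z-buffer handling are illustrative assumptions rather than part of the disclosed method.

```python
import numpy as np

def project_textured_cloud(points, textures, normal, img_size=256, scale=800.0):
    """Orthographically project a textured 3D face cloud onto a plane
    perpendicular to `normal` (the estimated frontal direction).

    points   : (N, 3) array of 3D coordinates (meters)
    textures : (N,) array of infrared intensity values, one per point
    normal   : (3,) vector giving the face's frontal orientation
    """
    normal = normal / np.linalg.norm(normal)
    # Build an orthonormal basis (u, v) spanning the projection plane
    helper = np.array([0.0, 1.0, 0.0])
    if abs(np.dot(helper, normal)) > 0.9:
        helper = np.array([1.0, 0.0, 0.0])
    u = np.cross(helper, normal); u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    centered = points - points.mean(axis=0)
    px = (centered @ u) * scale + img_size / 2
    py = (centered @ v) * scale + img_size / 2
    depth = centered @ normal
    image = np.zeros((img_size, img_size), dtype=np.float32)
    zbuf = np.full((img_size, img_size), -np.inf)
    for x, y, d, t in zip(px.astype(int), py.astype(int), depth, textures):
        if 0 <= x < img_size and 0 <= y < img_size and d > zbuf[y, x]:
            zbuf[y, x] = d          # keep the point nearest to the viewer
            image[y, x] = t
    return image
```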
204: Extract face feature information.
Using the frontal face infrared image obtained in the previous step, face feature information is extracted with a feature extraction algorithm. In one embodiment, the face infrared image is fed into a pre-trained neural network (for example a convolutional neural network, CNN), which outputs the feature information of the face.
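Purely as an illustration of extracting a feature vector from a face infrared image with a convolutional network, the toy PyTorch sketch below maps a single-channel image to an L2-normalized embedding. The architecture and embedding size are arbitrary placeholders, not the network actually used by the method.

```python
import torch
import torch.nn as nn

class FaceEmbeddingNet(nn.Module):
    """Toy CNN mapping a 1-channel face infrared image to a feature vector."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(128, embedding_dim)

    def forward(self, x):
        feat = self.backbone(x).flatten(1)
        # L2-normalize so embeddings can be compared by cosine similarity
        return nn.functional.normalize(self.head(feat), dim=1)

# Usage: embedding = FaceEmbeddingNet()(ir_face_tensor)  # ir_face_tensor: (1, 1, H, W)
```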
205: Enter the face feature information.
The extracted face feature information is saved to the apparatus 11 as the identity authentication feature of the reference face, to be used in the subsequent target face identity authentication comparison.
It should be understood that the above face identity entry method can also be used for face identity authentication. FIG. 3 is a schematic diagram of a 3D face identity entry and authentication method according to an embodiment of the present invention. The authentication steps include: acquiring a depth image sequence and an infrared image sequence of the target face; computing a 3D texture image of the target face based on these sequences; projecting the 3D texture image of the target face into a frontal face infrared image and extracting feature information of the target face from that image; and, unlike the feature entry phase, finally comparing the feature information of the target face with the feature information of the reference face to determine whether they belong to the same face. In the projection step it is not strictly necessary to project a frontal face infrared image, as long as the entry and authentication phases both project onto the same plane (direction).
However, performing face authentication with this method requires capturing multiple images of the current user, which consumes more time and degrades the user experience. Therefore, the present invention also provides a faster face identity authentication method with a better user experience.
FIG. 4 and FIG. 5 are schematic diagrams of a 3D face identity entry and authentication method according to an embodiment of the present invention. The face identity entry method corresponding to the 3D face identity authentication method shown in FIG. 4 is the same as the embodiment shown in FIG. 2; see FIG. 5 for details. This authentication method includes the following steps:
401: Acquire a depth image and an infrared image.
In this step, the depth camera 111 and the infrared camera 112 capture a depth image and an infrared image containing the target face. Unlike the face entry method, multiple images do not need to be captured in the face authentication phase; in one embodiment, only one depth image and one infrared image are captured, which also speeds up face identity authentication and gives the user a better experience. It should be understood that multiple images may be captured in other embodiments, but the number of images is still relatively small compared with the sequence that covers all the face information in the entry phase. The following description assumes a single depth image and a single infrared image.
After the depth image and the infrared image containing the face are acquired, similarly to step 201, this step generally also includes face detection and segmentation, finally yielding a depth image and an infrared image with part or all of the background removed.
402: Detect the gaze of the human eyes.
The gaze of the human eyes indicates where the current subject's attention is focused, and gaze detection is increasingly used in many applications. In this embodiment, gaze detection is also performed in order to improve the user experience. It should be understood that the gaze detection step need not be applied to 3D face identity authentication; moreover, it may also be placed between other steps of this embodiment. In other words, the gaze detection step is relatively independent of the other steps, and it can be executed, and its result obtained, according to the needs of different applications.
The face depth image, the face infrared image, or a combination of both can be used to detect the gaze of the human eyes. In this embodiment, a combination of the depth image and the infrared image is preferably used. First, the 3D information of the face (for example a 3D point cloud) is computed from the depth image, and information such as the face orientation and the 3D coordinates of key points is obtained from it. Second, detail features of the eyes are identified from the infrared image, such as the pupil center, the glint (the fixed spot formed in the infrared camera by infrared light reflected from the cornea), the pupil, and the iris; then, based on the 3D information of the face and the relationship between the infrared image and the depth image (they coincide, or a per-pixel correspondence exists after registration), the 3D coordinates of these eye detail features are obtained. Finally, the gaze direction is computed by combining the 3D coordinates of one or more of the eye detail features.
Gaze detection methods known in the art can also be applied to the present invention; for example, the gaze may be detected using only the infrared image.
Gaze detection can further improve the experience of face identity authentication. For example, in the embodiment shown in FIG. 1, if the eyes are not looking at the apparatus 11 while the face happens to be captured by the depth camera 111 and the infrared camera 112, the authentication performed at that moment is often not the user's intention but a false authentication. Therefore, in some applications, gaze detection may be performed as an independent step, and the other steps may decide, based on its result, whether further execution is needed and which method to execute.
In this embodiment, the next step is executed when the detected gaze is found to coincide with a preset gaze direction. The preset gaze direction generally refers to the direction in which the eyes look at, or the attention is directed to, the current 3D face identity authentication application, such as a face authentication application displayed on the screen of the apparatus 11 (unlocking, payment, and the like). In some embodiments, the preset gaze direction may also refer to another direction, for example the direction pointing toward the apparatus 11.
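A minimal sketch of the gaze check, assuming the 3D pupil center and an estimated 3D eyeball center are already available from the steps above, is given below. The 10-degree tolerance and the names pupil_3d, eye_center_3d, screen_normal, and run_next_step are illustrative assumptions.

```python
import numpy as np

def gaze_direction(pupil_center_3d, eyeball_center_3d):
    """Approximate the gaze as the ray from the eyeball center through the pupil."""
    g = np.asarray(pupil_center_3d) - np.asarray(eyeball_center_3d)
    return g / np.linalg.norm(g)

def gaze_matches_preset(gaze_vec, preset_vec, max_angle_deg=10.0):
    """True if the gaze deviates from the preset direction by less than max_angle_deg."""
    gaze_vec = gaze_vec / np.linalg.norm(gaze_vec)
    preset_vec = preset_vec / np.linalg.norm(preset_vec)
    cos_angle = np.clip(np.dot(gaze_vec, preset_vec), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) < max_angle_deg

# Example: proceed to the next step only when the user looks toward the screen
# if gaze_matches_preset(gaze_direction(pupil_3d, eye_center_3d), screen_normal):
#     run_next_step()
```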
It should be understood that the gaze detection step can also be applied to the embodiment shown in FIG. 3.
In this embodiment, because only a single capture of the current face is made, a single depth image or infrared image often contains only part of the face information, and directly extracting features from the single image and matching them against the reference face feature information is often not very accurate. Therefore, in this embodiment, the depth image is used to obtain the pose information of the current target face; the target face infrared image is then aligned and corrected based on the pose information, the purpose of the correction being to obtain a current face infrared image whose pose is the same as that of the reference face, thereby eliminating as far as possible the face recognition error caused by pose differences; finally, face features are extracted from the corrected target face image and compared with the features of the reference face image for authentication. These steps are described in detail below:
403: Compute the face pose information.
A 3D image of the reference face (for example a 3D point cloud or a 3D mesh) has already been saved in the face entry phase. In this step, the target face depth image acquired in step 401 is aligned with the 3D image of the reference face; in one embodiment, the ICP algorithm is used to perform the alignment, after which the pose information of the current target face relative to the reference face is obtained.
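The pose computation can be illustrated by the following sketch, which aligns the target face point cloud to the saved reference model with a generic ICP routine (Open3D is assumed here only for convenience) and, optionally, decomposes the resulting rotation into approximate yaw/pitch/roll angles under an assumed intrinsic Y-X-Z rotation order.

```python
import numpy as np
import open3d as o3d  # assumed; any rigid-registration routine would work

def estimate_pose(target_cloud, reference_cloud, icp_dist=0.02):
    """Align the target face cloud to the saved reference model with ICP and
    return the 4x4 rigid transform, i.e. the target face's pose relative to
    the reference face."""
    reg = o3d.pipelines.registration.registration_icp(
        target_cloud, reference_cloud, icp_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation  # rotation + translation of target w.r.t. reference

def rotation_angles_deg(transform):
    """Rough yaw/pitch/roll (degrees), assuming an intrinsic Y-X-Z rotation order."""
    R = transform[:3, :3]
    yaw = np.degrees(np.arctan2(R[0, 2], R[2, 2]))
    pitch = np.degrees(np.arcsin(-np.clip(R[1, 2], -1.0, 1.0)))
    roll = np.degrees(np.arctan2(R[1, 0], R[1, 1]))
    return yaw, pitch, roll
```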
In some embodiments, a 3D image of a standard face may also be used; that is, the 3D image of the standard face serves as the 3D image of the reference face for computing the pose information of the target face.
404: Align the face infrared image.
After the pose information of the current target face is obtained, the target face infrared image is corrected based on the pose information to obtain a current target face infrared image whose pose is the same as that of the reference face infrared image obtained in step 203. Preferably, the reference face infrared image is a frontal face image, so the purpose of the correction is to obtain a frontal face infrared image of the current target face. Existing pose-based face image alignment algorithms can be applied to the present invention, such as the method described in the paper "DeepFace: Closing the Gap to Human-Level Performance in Face Verification".
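When a depth image is available, one simple way to realize this pose-based correction is to rotate the target's textured 3D points into the reference pose and re-render them, as in the hypothetical sketch below. Here render_fn stands for any projection routine such as the earlier orthographic sketch, and the choice of +Z as the reference viewing direction is an assumption.

```python
import numpy as np

def frontalize_ir(points, ir_values, pose_T, render_fn):
    """Rotate the target's textured 3D points into the reference pose and
    re-render them as a 2D face infrared image.

    points    : (N, 3) 3D points back-projected from the target depth image
    ir_values : (N,) infrared intensities registered with those points
    pose_T    : 4x4 transform of the target face relative to the reference
                (e.g. the ICP result from step 403)
    render_fn : projection routine, e.g. the project_textured_cloud sketch above
    """
    # Applying pose_T maps the target points into the reference frame, so
    # rendering them along the reference viewing direction yields an aligned image.
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    aligned = (pose_T @ homo.T).T[:, :3]
    return render_fn(aligned, ir_values, normal=np.array([0.0, 0.0, 1.0]))
```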
405: Extract face feature information.
For the target face infrared image obtained in the previous step, a feature extraction algorithm is used to extract its facial feature information. In one embodiment, the target face infrared image is fed into the same neural network as used in the entry phase to output feature information with a similar structure.
406: Face authentication.
The feature information of the current target face obtained in the previous step is compared with the feature information of the reference face obtained in the entry phase to determine whether the two belong to the same face. The comparison generally outputs a similarity; when the similarity exceeds a preset threshold, for example 80%, the two faces are considered to be the same, otherwise they are different.
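The similarity comparison is not limited to any particular metric. As one common choice, the sketch below uses the cosine similarity between two feature vectors mapped to the range [0, 1], with the 0.8 threshold simply mirroring the 80% example above.

```python
import numpy as np

def face_similarity(target_feat, reference_feat):
    """Cosine similarity between two face feature vectors, mapped to [0, 1]."""
    t = target_feat / np.linalg.norm(target_feat)
    r = reference_feat / np.linalg.norm(reference_feat)
    return 0.5 * (1.0 + float(np.dot(t, r)))

def authenticate(target_feat, reference_feat, threshold=0.8):
    """Accept the target face when the similarity exceeds the preset threshold."""
    return face_similarity(target_feat, reference_feat) > threshold
```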
FIG. 5 is a schematic diagram of the 3D face identity entry and authentication method described above. It should be noted that, in the entry phase, in addition to entering and saving the obtained reference face feature information, the 3D image of the reference face (3D point cloud/mesh) also needs to be entered and saved, so that it can be retrieved when computing the pose of the target face in the authentication phase.
In the embodiments corresponding to FIG. 4 and FIG. 5, the accuracy of the 3D face identity authentication depends to a large extent on the alignment and correction accuracy of the face infrared image. Because only a single infrared image, or a small number of them, is acquired in the authentication phase, when the captured infrared image has a relatively extreme pose, for example when the head is raised or turned well to the side, the alignment and correction algorithm cannot recover the features lost due to the side view, even if the image is converted into a target face infrared image with the same pose as the reference face infrared image of the entry phase. For this reason, the present invention also provides a more accurate 3D face identity entry and authentication method.
FIG. 6 is a schematic diagram of a 3D face identity entry and authentication method according to yet another embodiment of the present invention.
In the entry phase, the depth image sequence and the infrared image sequence of the reference face are first acquired; a 3D texture image containing the 3D point cloud/mesh and the infrared texture information is then computed; finally, the 3D texture image is entered and saved in the memory of the apparatus.
In the authentication phase, the depth image and the infrared image of the target face are first acquired; this step usually also requires face detection and image segmentation to obtain the face image. Gaze detection is then performed, and when the detected gaze direction satisfies the preset condition the next step is entered: the depth image is matched (or aligned, registered) with the saved 3D point cloud/mesh of the reference face to obtain the pose information of the target face; the 3D texture image of the reference face is then projected according to this pose information to obtain a reference face infrared image with the same pose as the target face; next, the reference face infrared image and the target face infrared image are both fed into the neural network to extract their respective face feature information; finally, the face features are compared and the comparison result is output.
In this embodiment, the pose information of the target face is obtained and the 3D texture image of the reference face is projected according to that pose, thereby obtaining the face infrared image closest to the pose of the target face. Compared with the method shown in FIG. 4 and FIG. 5, this method does not need to change the pose of the infrared image through alignment and correction algorithms; since the 3D texture image contains all the information of the reference face, the reference face infrared image obtained by projection guarantees the highest similarity with the target face infrared image, which helps improve the authentication accuracy.
In addition, in this embodiment, since feature extraction and comparison are performed on the infrared images of both the reference face and the target face during the authentication phase, a deep-learning algorithm for similarity judgment can also be trained as the algorithm choice: given the two images as input, the algorithm directly outputs their similarity, which can speed up authentication.
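A hypothetical sketch of this projection step is given below: the inverse of the pose transform obtained in the matching step carries the saved reference model into the target's pose, after which it is rendered along the camera's viewing axis. Here render_fn again stands for any projection routine, and the +Z viewing axis is an assumption.

```python
import numpy as np

def render_reference_at_target_pose(ref_points, ref_ir, pose_T, render_fn):
    """Project the saved reference 3D texture model into a 2D face infrared
    image whose pose matches the current target face.

    pose_T maps target-face coordinates into the reference frame (step 403),
    so its inverse carries the reference model into the target's pose.
    """
    T = np.linalg.inv(pose_T)
    homo = np.hstack([ref_points, np.ones((ref_points.shape[0], 1))])
    posed = (T @ homo.T).T[:, :3]
    # Render along the camera's viewing axis (assumed here to be +Z)
    return render_fn(posed, ref_ir, normal=np.array([0.0, 0.0, 1.0]))
```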
Liveness Detection
The face authentication methods described above are often easy to "spoof": for example, a 2D image or a three-dimensional model of someone's face may be used as the target face, and authentication with the above methods may still succeed, which is unacceptable in applications such as face-based unlocking and payment. To address this problem, the 3D face identity authentication method provided by the present invention may further include a liveness detection step for determining whether the current target face is a real face; authentication passes only when the similarity between the target face and the reference face exceeds the preset threshold and the target face is a real face, otherwise it fails.
There are many liveness detection methods. In one embodiment, whether the target is a three-dimensional object can be determined from the acquired target face depth image, which defeats the "spoofing" caused by 2D images; in another embodiment, the skin characteristics of the face implied in the infrared image can be used to perform a skin determination, which defeats the "spoofing" caused by ordinary three-dimensional models. Nevertheless, an effective liveness detection method that can cope with multiple kinds of "spoofing" is still lacking. The present invention provides an algorithm to solve this problem.
The liveness detection method in the present invention is based on a deep learning algorithm. In one embodiment, a neural network model is built and trained with a large amount of data, including depth images and infrared images of real people, 2D photos, realistic masks, three-dimensional models, and so on; the larger the amount of data, the more accurate the trained neural network model. The trained neural network can very accurately distinguish a real face from various fake faces, thereby implementing liveness detection. In one embodiment, the acquired target face depth image and infrared image are both input into the neural network, which outputs whether the target is a real face; in another embodiment, only the depth image or only the infrared image may be input into the neural network to output this result.
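As a toy illustration only, and not the trained model contemplated above, the following PyTorch sketch shows one possible two-branch network that consumes registered depth and infrared face crops and outputs a probability that the target is a real face.

```python
import torch
import torch.nn as nn

class LivenessNet(nn.Module):
    """Toy two-branch CNN: depth and infrared face crops in, real/fake score out."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.depth_branch = branch()
        self.ir_branch = branch()
        self.classifier = nn.Linear(64, 1)

    def forward(self, depth_crop, ir_crop):
        fused = torch.cat([self.depth_branch(depth_crop),
                           self.ir_branch(ir_crop)], dim=1)
        return torch.sigmoid(self.classifier(fused))  # probability of "real face"

# Training would use depth/IR pairs of real faces, printed photos, masks and
# 3D models, labeled real vs. fake, with a binary cross-entropy loss.
```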
Therefore, in the 3D face identity authentication process, authentication succeeds only when the similarity between the current target face and the reference face exceeds the preset threshold and the liveness detection passes; otherwise it fails. In some embodiments, the liveness detection step may also be performed right after the depth image and the infrared image are acquired, with the similarity-related steps carried out only after the liveness detection passes. The liveness detection step is therefore relatively independent of the steps other than acquiring the depth image and the infrared image: it can be executed before any of them, and whether to proceed to the next step is decided according to its result.
It should be understood that, when the similarity is below the preset threshold, the liveness detection step may also be skipped. In addition, the liveness detection step may be executed before the feature extraction and comparison steps, i.e., the similarity detection of the target face is performed only after the liveness detection passes.
Data Update
A face may change over time, for example due to growth or illness, and such changes become more noticeable as time accumulates, which can affect the accuracy of the 3D face identity authentication algorithm. To cope with these changes, in the present invention the 3D face identity authentication algorithm may further include a data update step.
In the embodiments described above, authentication passes when the similarity between the target face and the reference face exceeds a certain threshold and the liveness detection passes. It is conceivable that, if the entered reference face information never changes, then as the target face changes more and more over time, the similarity will become lower and lower until misrecognition occurs, i.e., the current target face can no longer be recognized as the original reference face. To cope with this, after 3D face authentication passes, when the similarity is higher than another threshold, the current target face information is taken as the new reference face information. Because the reference face information is continually updated, the face can still be accurately authenticated even after it changes considerably over time. It should be noted that the threshold used in this update step should generally be higher than the threshold used in the face authentication decision step.
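The update rule can be summarized by the short sketch below, in which the thresholds 0.8 and 0.9 and the store.save_reference call are illustrative placeholders for the authentication threshold, the stricter update threshold, and whatever persistence mechanism the apparatus uses.

```python
def authenticate_and_update(similarity, is_live, target_info, store,
                            auth_threshold=0.8, update_threshold=0.9):
    """Pass authentication above auth_threshold; refresh the stored reference
    face information above the stricter update_threshold."""
    passed = similarity > auth_threshold and is_live
    if passed and similarity > update_threshold:
        # The current capture is close enough to serve as the new reference
        store.save_reference(target_info)
    return passed
```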
What "updating the reference face information" means depends on the authentication method. For the embodiment shown in FIG. 3, in addition to updating the face infrared image feature information, the 3D texture image of the face may also be updated; for the embodiment shown in FIG. 5, the feature information of the face infrared image is updated, i.e., the target face feature information becomes the new reference face feature information; and for the embodiment shown in FIG. 6, the 3D texture image of the face is updated, i.e., the texture information in the two-dimensional image of the target face replaces the corresponding texture information in the original reference face 3D texture image.
3D Face Identity Authentication Apparatus
FIG. 7 is a schematic diagram of a 3D face identity authentication apparatus according to an embodiment of the present invention. The apparatus 11 includes a projection module 702 and an acquisition module 707: the projection module 702 projects an infrared structured-light image into the target space, and the acquisition module 707 captures the structured-light image. The apparatus 11 further includes a processor (not shown) which, after receiving the structured-light image, computes the depth image of the target. Because the structured-light image contains face texture information in addition to the structured-light information, the structured-light image can also serve as the face infrared image and participate, together with the depth image, in face identity entry and authentication. In this case, the acquisition module 707 is both part of the depth camera 111 in FIG. 1 and the infrared camera 112; in other words, the depth camera and the infrared camera here can be regarded as the same camera.
In some embodiments, the apparatus 11 further includes an infrared floodlight 706 that emits infrared light of the same wavelength as the structured light emitted by the projection module 702. During face entry and authentication, the projection module 702 and the infrared floodlight 706 can be switched on and off in a time-shared manner to acquire the depth image and the infrared image of the target, respectively. The infrared image acquired in this way is a pure infrared image which, compared with the structured-light image, contains more distinct face feature information, so the face authentication accuracy will be higher.
In some embodiments, a depth camera based on TOF (time-of-flight) technology may be used. In this case, the projection module 702 emits light pulses, the acquisition module 707 receives the light pulses, and the processor records the times of pulse emission and reception and computes the depth image of the target from them. The acquisition module 707 can then acquire the depth image and the infrared image of the target simultaneously, with no parallax between the two.
In some embodiments, an additional infrared camera 703 may be provided to acquire the infrared image. When the wavelength of the light emitted by the infrared floodlight 706 differs from that of the light emitted by the projection module 702, the acquisition module 707 and the infrared camera 703 can acquire the depth image and the infrared image of the target synchronously. The difference between this arrangement and the previous ones is that, because the depth image and the infrared image are acquired by different cameras, there is parallax between them; when the subsequent face authentication computations require images without parallax, the depth image and the infrared image need to be registered in advance.
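One straightforward form of this registration, when the intrinsics of both cameras and the extrinsic rotation and translation between them are known from calibration, is to warp the depth map into the infrared camera's view, as in the simplified sketch below (occlusions and hole filling are deliberately ignored, and all names are illustrative).

```python
import numpy as np

def register_depth_to_ir(depth_m, K_depth, K_ir, R, t, ir_shape):
    """Warp a depth map from the depth camera into the infrared camera's view.

    K_depth, K_ir : 3x3 intrinsic matrices of the two cameras
    R, t          : rotation (3x3) and translation (3,) from depth to IR camera
    ir_shape      : (height, width) of the infrared image
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.ravel()
    valid = z > 0
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)], axis=0)[:, valid]
    # Back-project to 3D in the depth camera frame, move to the IR camera frame
    pts = np.linalg.inv(K_depth) @ pix * z[valid]
    pts_ir = R @ pts + t.reshape(3, 1)
    # Project into the IR image plane
    proj = K_ir @ pts_ir
    x = np.round(proj[0] / proj[2]).astype(int)
    y = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros(ir_shape, dtype=np.float32)
    keep = (x >= 0) & (x < ir_shape[1]) & (y >= 0) & (y < ir_shape[0])
    out[y[keep], x[keep]] = pts_ir[2, keep]  # depth now expressed in the IR view
    return out
```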
The apparatus 11 may further include an earpiece 704, an ambient light/proximity sensor 705, and other components to implement more functions. For example, in some embodiments, considering the potential harm of infrared light to the human body, the proximity of the face can be detected by the proximity sensor 705; when the face is too close, the projection of the projection module 702 is switched off or its projection power is reduced. In some embodiments, face authentication and the earpiece can be combined to implement automatic call answering: when the apparatus is a communication device and receives an incoming call, it starts the face authentication application and turns on the required depth camera and infrared camera to acquire the depth image and the infrared image; once authentication passes, the call is connected and the earpiece and related components are turned on to conduct the call.
The apparatus 11 may further include a screen 701, which can be used to display image content and for touch interaction. The face authentication method can be used to implement functions such as unlocking the screen of the apparatus. In one embodiment, when the apparatus 11 is in a sleep state and the user picks it up, the inertial measurement unit in the apparatus 11 recognizes the acceleration caused by picking it up and lights up the screen, on which an unlock prompt appears; the apparatus then turns on the depth camera and the infrared camera to acquire depth images and/or infrared images, and when a face is detected in the captured images the face authentication application is started. In the gaze detection performed during face authentication, the preset gaze direction can be set to the direction in which the eyes look at the screen 701; that is, face authentication and unlocking proceed only when the eyes are looking at the screen.
The apparatus 11 further includes a memory (not shown) for storing the feature information entered in the entry phase, as well as applications, instructions, and the like. For example, the 3D face identity entry and authentication methods described above may be stored in the memory in the form of software programs; when an application needs them, the processor calls the instructions in the memory and executes the entry and authentication methods. It should be understood that the 3D face identity entry and authentication methods may also be written directly into the processor in the form of instruction code, thereby improving execution efficiency. Moreover, as technology develops, the boundary between software and hardware gradually disappears, so the 3D face identity entry and authentication methods described in the present invention may be configured in the apparatus 11 either in software form or in hardware form.
The above is a further detailed description of the present invention with reference to specific preferred embodiments, and the specific implementation of the present invention should not be considered limited to these descriptions. Those skilled in the art to which the present invention belongs may make several equivalent substitutions or obvious variations without departing from the concept of the present invention, and such substitutions or variations with the same performance or purpose shall all be regarded as falling within the protection scope of the present invention.

Claims (14)

  1. A 3D face identity authentication method, characterized by comprising the following steps:
    (a) acquiring a depth image and a two-dimensional image containing a target face;
    (b) registering the depth image with a reference face 3D texture image to obtain pose information of the target face;
    (c) projecting the reference face 3D texture image according to the pose information to obtain a reference face two-dimensional image;
    (d) comparing the similarity between a target face two-dimensional image and the reference face two-dimensional image.
  2. The method according to claim 1, characterized by further comprising the step of:
    (e1) independently of steps (b) to (d), detecting, using the depth image and/or the two-dimensional image, the gaze of the target face, and when the gaze direction coincides with a preset direction, continuing to perform step (b), (c), or (d).
  3. The method according to claim 1, characterized by further comprising the step of:
    (e2) independently of steps (b) to (d), detecting, using the depth image and/or the two-dimensional image, whether the target face is a real face, and if it is a real face:
    continuing to perform step (b), (c), or (d), or
    passing the authentication when the similarity exceeds a preset first threshold.
  4. The method according to claim 1, characterized by further comprising the step of:
    (e3) when the similarity exceeds a preset second threshold, updating the corresponding texture information in the reference face 3D texture image with the target face two-dimensional image.
  5. The method according to any one of claims 1 to 4, characterized in that the reference face 3D texture image is obtained by:
    (1) acquiring a depth image sequence and a two-dimensional image sequence containing a reference face;
    (2) computing the reference face 3D texture image.
  6. The method according to claim 5, characterized in that the 3D texture image comprises a 3D point cloud or a 3D mesh carrying texture information.
  7. The method according to claim 1, characterized in that the projecting refers to projecting the 3D texture image onto a 2D plane to form a two-dimensional face image.
  8. The method according to claim 1, characterized in that the face pose in the reference face two-dimensional image is consistent with the face pose in the target face two-dimensional image.
  9. The method according to any one of claims 1 to 4, characterized in that the two-dimensional image comprises an infrared image.
  10. The method according to any one of claims 1 to 4, characterized in that the two-dimensional image comprises a structured-light image.
  11. A 3D face identity authentication apparatus, characterized by comprising:
    a depth camera configured to acquire a depth image containing a target face;
    a planar camera configured to acquire a two-dimensional image containing the target face;
    a processor configured to receive the depth image and the two-dimensional image and to perform the following operations:
    registering the depth image with a reference face 3D texture image to obtain pose information of the target face;
    projecting the reference face 3D texture image according to the pose information to obtain a reference face two-dimensional image;
    comparing the similarity between a target face two-dimensional image and the reference face two-dimensional image.
  12. The apparatus according to claim 11, characterized in that the processor further performs the following operation:
    detecting, using the depth image and/or the two-dimensional image, the gaze of the target face, and continuing the other operations when the gaze direction coincides with a preset direction.
  13. The apparatus according to claim 11, characterized in that the processor further performs the following operations:
    detecting, using the depth image and/or the two-dimensional image, whether the target face is a real face, and if it is a real face:
    continuing to perform the other operations, or
    passing the authentication when the similarity exceeds a preset first threshold.
  14. The apparatus according to claim 11, characterized in that the processor further performs the following operation:
    when the similarity exceeds a preset second threshold, updating the corresponding texture information in the reference face 3D texture image with the target face two-dimensional image.
PCT/CN2018/098442 2017-10-26 2018-08-03 3d face identity authentication method and apparatus WO2019080579A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711021418.0 2017-10-26
CN201711021418.0A CN107633165B (en) 2017-10-26 2017-10-26 3D face identity authentication method and device

Publications (1)

Publication Number Publication Date
WO2019080579A1 true WO2019080579A1 (en) 2019-05-02

Family

ID=61105967

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098442 WO2019080579A1 (en) 2017-10-26 2018-08-03 3d face identity authentication method and apparatus

Country Status (2)

Country Link
CN (1) CN107633165B (en)
WO (1) WO2019080579A1 (en)


Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633165B (en) * 2017-10-26 2021-11-19 奥比中光科技集团股份有限公司 3D face identity authentication method and device
CN107748869B (en) 2017-10-26 2021-01-22 奥比中光科技集团股份有限公司 3D face identity authentication method and device
CN108416323B (en) * 2018-03-27 2023-06-30 百度在线网络技术(北京)有限公司 Method and device for recognizing human face
CN110378994B (en) * 2018-04-12 2021-05-28 Oppo广东移动通信有限公司 Face modeling method and related product
CN108701228A (en) * 2018-04-18 2018-10-23 深圳阜时科技有限公司 Identification authentication method, identification authentication device and electronic equipment
CN108566777A (en) * 2018-04-18 2018-09-21 深圳阜时科技有限公司 Identification authentication method, identification authentication device and electronic equipment
CN108513661A (en) * 2018-04-18 2018-09-07 深圳阜时科技有限公司 Identification authentication method, identification authentication device and electronic equipment
WO2019218274A1 (en) * 2018-05-16 2019-11-21 深圳阜时科技有限公司 Light source module, image acquisition apparatus, identity recognition apparatus, and electronic device
CN108629333A (en) * 2018-05-25 2018-10-09 厦门市美亚柏科信息股份有限公司 A kind of face image processing process of low-light (level), device, equipment and readable medium
CN108763903B (en) * 2018-05-29 2020-02-11 Oppo广东移动通信有限公司 Authentication device and electronic apparatus
WO2019228097A1 (en) 2018-05-29 2019-12-05 Oppo广东移动通信有限公司 Verification system, electronic device, verification method, computer-readable storage medium, and computer apparatus
CN110852134A (en) 2018-07-27 2020-02-28 北京市商汤科技开发有限公司 Living body detection method, living body detection device, living body detection system, electronic device, and storage medium
CN109584352B (en) 2018-08-21 2021-01-12 先临三维科技股份有限公司 Three-dimensional scanning image acquisition and processing method and device and three-dimensional scanning equipment
CN109345621A (en) * 2018-08-28 2019-02-15 广州智美科技有限公司 Interactive face three-dimensional modeling method and device
CN109753871A (en) * 2018-11-21 2019-05-14 惠州Tcl移动通信有限公司 A kind of method and mobile terminal of recognition of face
CN109684925B (en) * 2018-11-21 2023-10-27 奥比中光科技集团股份有限公司 Depth image-based human face living body detection method and device
CN110188616B (en) * 2019-05-05 2023-02-28 上海盎维信息技术有限公司 Space modeling method and device based on 2D and 3D images
CN110163164B (en) * 2019-05-24 2021-04-02 Oppo广东移动通信有限公司 Fingerprint detection method and device
CN114170640B (en) * 2020-08-19 2024-02-02 腾讯科技(深圳)有限公司 Face image processing method, device, computer readable medium and equipment
CN113743220A (en) * 2021-08-04 2021-12-03 深圳商周智联科技有限公司 Biological characteristic in-vivo detection method and device and computer equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375970A (en) * 2010-08-13 2012-03-14 北京中星微电子有限公司 Identity authentication method based on face and authentication apparatus thereof
US8824749B2 (en) * 2011-04-05 2014-09-02 Microsoft Corporation Biometric recognition
CN105022994A (en) * 2015-06-30 2015-11-04 国网山东省电力公司日照供电公司 Identity authentication method of network safety access of power system
CN105513221A (en) * 2015-12-30 2016-04-20 四川川大智胜软件股份有限公司 ATM (Automatic Teller Machine) cheat-proof device and system based on three-dimensional human face identification
CN105654048A (en) * 2015-12-30 2016-06-08 四川川大智胜软件股份有限公司 Multi-visual-angle face comparison method
CN107633165A (en) * 2017-10-26 2018-01-26 深圳奥比中光科技有限公司 3D face identity authentications and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050712B (en) * 2013-03-15 2018-06-05 索尼公司 The method for building up and device of threedimensional model
CN105335722B (en) * 2015-10-30 2021-02-02 商汤集团有限公司 Detection system and method based on depth image information
CN107169483A (en) * 2017-07-12 2017-09-15 深圳奥比中光科技有限公司 Tasks carrying based on recognition of face


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160278A (en) * 2019-12-31 2020-05-15 河南中原大数据研究院有限公司 Face texture structure data acquisition method based on single image sensor
CN111160278B (en) * 2019-12-31 2023-04-07 陕西西图数联科技有限公司 Face texture structure data acquisition method based on single image sensor
CN111222485A (en) * 2020-01-15 2020-06-02 北京明略软件系统有限公司 3D face recognition method and device, electronic equipment and storage medium
CN111242097A (en) * 2020-02-27 2020-06-05 腾讯科技(深圳)有限公司 Face recognition method and device, computer readable medium and electronic equipment
CN111242097B (en) * 2020-02-27 2023-04-18 腾讯科技(深圳)有限公司 Face recognition method and device, computer readable medium and electronic equipment
CN115131880A (en) * 2022-05-30 2022-09-30 上海大学 Multi-scale attention fusion double-supervision human face in-vivo detection method

Also Published As

Publication number Publication date
CN107633165A (en) 2018-01-26
CN107633165B (en) 2021-11-19

Similar Documents

Publication Publication Date Title
US11238270B2 (en) 3D face identity authentication method and apparatus
WO2019080580A1 (en) 3d face identity authentication method and apparatus
WO2019080579A1 (en) 3d face identity authentication method and apparatus
US10339402B2 (en) Method and apparatus for liveness detection
US8406484B2 (en) Facial recognition apparatus, method and computer-readable medium
CN109271950B (en) Face living body detection method based on mobile phone forward-looking camera
WO2019056988A1 (en) Face recognition method and apparatus, and computer device
KR20170134356A (en) System and method for performing fingerprint-based user authentication using images photographed using a mobile device
KR101640014B1 (en) Iris recognition apparatus for detecting false face image
US11756338B2 (en) Authentication device, authentication method, and recording medium
US11651624B2 (en) Iris authentication device, iris authentication method, and recording medium
CN111445640A (en) Express delivery pickup method, device, equipment and storage medium based on iris recognition
KR101053253B1 (en) Apparatus and method for face recognition using 3D information
CN106156739B (en) A kind of certificate photo ear detection and extracting method based on face mask analysis
Bastias et al. A method for 3D iris reconstruction from multiple 2D near-infrared images
KR101561817B1 (en) Method and apparatus for authenticating biometric by using face/hand recognizing
KR101718244B1 (en) Apparatus and method of processing wide angle image for recognizing face
CN113128320A (en) Face living body detection method and device based on TOF camera and electronic equipment
US11948402B2 (en) Spoof detection using intraocular reflection correspondences
US11335123B2 (en) Live facial recognition system and method
CN113807248A (en) Face recognition or registration method, device, chip, terminal or storage medium
Bhanu Human recognition at a distance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18869643

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18869643

Country of ref document: EP

Kind code of ref document: A1