WO2021012647A1 - Face verification method, device, server, and readable storage medium - Google Patents

Face verification method, device, server, and readable storage medium

Info

Publication number
WO2021012647A1
WO2021012647A1 (application PCT/CN2020/071702)
Authority
WO
WIPO (PCT)
Prior art keywords
similarity
dimensional
face image
face
image
Prior art date
Application number
PCT/CN2020/071702
Other languages
English (en)
French (fr)
Inventor
赵豪
Original Assignee
创新先进技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 创新先进技术有限公司
Priority to US16/875,121 (US10853631B2)
Publication of WO2021012647A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Definitions

  • The embodiments of this specification relate to the field of data processing technology, and in particular to a face verification method, device, server, and readable storage medium.
  • Face recognition technology is used in scenarios such as face-scanning entry at stations, face-scanning payment at supermarkets, and face-scanning login in mobile apps.
  • In the prior art, face recognition devices such as IoT face-scanning terminals face face-forgery attacks, i.e., attacks that use generic masks, photos, or videos to pass face verification. Such attacks are usually defended against by introducing a structured-light 3D camera; that is, liveness detection performed on the collected 3D images can determine whether the user is a live person.
  • The embodiments of this specification provide a face verification method, device, server, and readable storage medium, which improve the accuracy of face verification and, on that basis, effectively improve resistance to face-forgery attacks.
  • The first aspect of the embodiments of this specification provides a face verification method, including:
  • performing face recognition on a collected two-dimensional face image to obtain a face recognition result;
  • if the face recognition result indicates that face recognition is successful, performing three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image;
  • performing liveness detection on a collected original three-dimensional face image to obtain a liveness detection result;
  • if the liveness detection result indicates that the user in the original three-dimensional face image is a live person, comparing the reconstructed three-dimensional face image with the original three-dimensional face image for similarity to obtain a comparison result;
  • determining, according to the comparison result, whether the user in the two-dimensional face image is a target user.
  • The second aspect of the embodiments of this specification provides a face verification device, including:
  • a face recognition unit, configured to perform face recognition on a collected two-dimensional face image to obtain a face recognition result;
  • a three-dimensional reconstruction unit, configured to perform three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image if the face recognition result indicates that face recognition is successful;
  • a liveness detection unit, configured to perform liveness detection on a collected original three-dimensional face image to obtain a liveness detection result;
  • a similarity comparison unit, configured to compare the reconstructed three-dimensional face image with the original three-dimensional face image for similarity to obtain a comparison result if the liveness detection result indicates that the user in the original three-dimensional face image is a live person;
  • a face verification unit, configured to determine, according to the comparison result, whether the user in the two-dimensional face image is a target user.
  • The third aspect of the embodiments of this specification further provides a server, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, the steps of the foregoing face verification method are implemented.
  • The fourth aspect of the embodiments of this specification further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the foregoing face verification method are implemented.
  • Based on the above technical solution, after both face recognition and liveness detection succeed, the reconstructed three-dimensional face image and the original three-dimensional face image are compared for similarity, and according to the obtained comparison result it is determined whether the user in the two-dimensional face image is the target user.
  • In this way, the two conditions of successful face recognition and successful liveness detection serve as constraints that ensure the accuracy of the face data subsequently compared for similarity; that is, the reconstructed three-dimensional face image and the original three-dimensional face image are both of high accuracy. Using the reconstructed three-dimensional face image as a comparison sample and then performing the similarity comparison improves the accuracy of the obtained comparison result accordingly; and on the basis of a more accurate comparison result, the accuracy of determining whether the user in the two-dimensional face image is the target user also improves, thereby improving the accuracy of face verification and, on that basis, effectively improving performance against face-forgery attacks.
  • Moreover, because the similarity comparison is performed on three-dimensional face images, the data carried by the images has more dimensions, and the data of every dimension must be compared during the similarity comparison. The more dimensions the data has, the more accurate the comparison result obtained from the similarity comparison, so the accuracy of the comparison result is further improved.
  • On that basis, the accuracy of determining whether the user in the two-dimensional face image is the target user is further improved; that is, the accuracy of face verification is further improved, which in turn further improves performance against face-forgery attacks.
  • FIG. 1 is a flowchart of the face verification method in an embodiment of this specification;
  • FIG. 2 is a schematic structural diagram of the face verification device in an embodiment of this specification;
  • FIG. 3 is a schematic structural diagram of the server in an embodiment of this specification.
  • In a first aspect, as shown in FIG. 1, an embodiment of this specification provides a face verification method, including:
  • S102: Perform face recognition on a collected two-dimensional face image to obtain a face recognition result;
  • S104: If the face recognition result indicates that face recognition is successful, perform three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image;
  • S106: Perform liveness detection on a collected original three-dimensional face image to obtain a liveness detection result;
  • S108: If the liveness detection result indicates that the user in the original three-dimensional face image is a live person, compare the reconstructed three-dimensional face image with the original three-dimensional face image for similarity to obtain a comparison result;
  • S110: Determine, according to the comparison result, whether the user in the two-dimensional face image is a target user.
  • In step S102, image collection may be performed by a two-dimensional camera device to collect the two-dimensional face image, and face recognition is then performed on the two-dimensional face image through a face recognition algorithm to obtain the face recognition result; the camera device may be a camera, a pan-tilt camera, a video camera, a digital camera, or the like.
  • Specifically, face recognition may be performed on the two-dimensional face image to obtain a face recognition value; whether the face recognition value is not less than a set face threshold is determined to obtain a face judgment result; and the face recognition result is determined according to the face judgment result.
  • In the process of performing face recognition, the two-dimensional face image may first be preprocessed to obtain a preprocessed two-dimensional face image, and the face recognition algorithm then performs face recognition on the preprocessed two-dimensional face image to obtain the face recognition result.
  • The set face threshold can be configured according to actual conditions, either manually or by the device; it can be a value not less than 80% and less than 1, such as 80%, 85%, or 90%. Of course, it can also be set to a value less than 80%, which is not specifically limited in this specification.
  • The face recognition algorithm includes feature-based recognition algorithms (based on facial feature points), appearance-based recognition algorithms (based on the entire face image), template-based recognition algorithms, recognition algorithms using neural networks, and recognition algorithms using support vector machines, which are not specifically limited in this specification.
  • Preprocessing the two-dimensional face image improves the recognition accuracy of face recognition.
  • When the two-dimensional face image undergoes image preprocessing, because the two-dimensional face image is an original image, it contains the face area, background, and noise.
  • Therefore, the two-dimensional face image may first undergo face detection, face calibration, and image-background removal in sequence to obtain a face-processed image, so as to reduce the influence of background and noise data in the two-dimensional face image on the recognition algorithm and improve recognition accuracy when face recognition is performed on the preprocessed image.
  • Specifically, the average pixel value of the face-processed image can be determined from all of the pixel values in the face-processed image, and the face-processed image can then be normalized according to this pixel average.
  • one or more of the above-mentioned methods can be used to process the two-dimensional face image to obtain the pre-processed two-dimensional face image.
  • For example, the mean and variance are computed on the face-processed image: the mean m is computed over all pixels in the face-processed image, and the variance s corresponding to the pixels is computed; a mean-variance normalization is then performed on each pixel in the face-processed image to obtain normalized data for each pixel. The normalization removes the average brightness value from the two-dimensional face image, reduces the influence of illumination on the algorithm, and improves the calculation accuracy of face recognition performed on the preprocessed two-dimensional face image.
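The mean-variance normalization just described can be sketched in a few lines (a minimal illustration in plain Python over a flattened pixel list; the function name and the small epsilon guard against zero variance are our own additions, not the patent's):

```python
import math

def mean_variance_normalize(pixels, eps=1e-8):
    """Mean-variance normalization: subtract the mean m computed over all
    pixels and divide by the standard deviation s, removing the average
    brightness and reducing the influence of illumination."""
    n = len(pixels)
    m = sum(pixels) / n                                    # mean over all pixels
    s = math.sqrt(sum((p - m) ** 2 for p in pixels) / n)   # standard deviation
    return [(p - m) / (s + eps) for p in pixels]

# A toy 4-pixel face-processed image; after normalization the mean is ~0.
normalized = mean_variance_normalize([10.0, 20.0, 30.0, 40.0])
```

After this step, the normalized data would feed the face recognition algorithm in place of the raw pixel values.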
  • If the obtained face judgment result indicates that the face recognition value is not less than the set face threshold, it is determined that the face recognition result indicates successful recognition, and step S104 is executed; if the obtained face judgment result indicates that the face recognition value is less than the set face threshold, it is determined that the face recognition result indicates that face recognition has failed, i.e., the user in the two-dimensional face image could not be recognized, and no further operations are performed.
  • Taking a two-dimensional face image a1 as an example: if the face recognition algorithm calculates that the probability that the user in a1 is a11 is 85%, and the set face threshold is 90%, then since 85% < 90%, the face recognition algorithm fails to recognize a11; that is, it is determined that the face recognition result of a1 indicates that face recognition failed, and no other operations are performed.
  • If the face recognition result indicates that face recognition is successful, step S104 is executed.
  • In step S104, a three-dimensional reconstruction algorithm may be used to perform three-dimensional reconstruction on the two-dimensional face image to obtain the reconstructed three-dimensional face image.
  • The three-dimensional reconstruction algorithm includes the single-image color 3D reconstruction (Im2Avatar) algorithm, the 3-SWEEP algorithm, the 3D-GAN algorithm, and other TensorFlow-based algorithms, which are not specifically limited in this specification.
  • Specifically, the two-dimensional face image can be encoded and decoded, the encoded and decoded data can then undergo shape learning, surface-color learning, and detail structuring, and finally the learned data and the detail-structured data are combined to obtain the reconstructed three-dimensional face data.
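The learning steps above require a trained encoder-decoder model, so a self-contained example can only illustrate the data flow from a 2D image to 3D face data. The sketch below uses intensity-as-depth as an admittedly crude placeholder for the learned shape inference; the function and variable names are ours, not the patent's:

```python
def reconstruct_3d(image_2d):
    """Toy stand-in for learned 3D reconstruction: lift each 2D pixel
    (x, y, intensity) to a 3D point (x, y, z), where z is a placeholder
    'depth' derived from intensity. A real Im2Avatar-style model would
    instead predict shape, surface color, and detail structure."""
    points = []
    for y, row in enumerate(image_2d):
        for x, intensity in enumerate(row):
            z = intensity / 255.0          # placeholder depth in [0, 1]
            points.append((x, y, z))
    return points

face_2d = [[0, 128], [255, 64]]            # tiny 2x2 grayscale "face image"
cloud = reconstruct_3d(face_2d)
```

The resulting point list stands in for the reconstructed three-dimensional face data that is later compared against the original 3D image.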
  • step S106 is executed.
  • In step S106, a three-dimensional camera device can be used to perform image collection to obtain the original three-dimensional face image; liveness detection is performed on the original three-dimensional face image to obtain the liveness detection result, wherein:
  • the three-dimensional camera device may be a 3D camera, a 3D video camera, or the like.
  • Step S106 can be performed simultaneously with step S102, or before or after step S102; further, the two-dimensional camera device and the three-dimensional camera device can be installed on the same IoT device, or on two connected IoT devices, which is not specifically limited in this specification.
  • Specifically, the two-dimensional face image and the original three-dimensional face image can be collected at a set time and in a set area.
  • For example, the 2D and 3D cameras on the face-scanning device collect two-dimensional and three-dimensional face images in a set area in real time.
  • The set area can be configured according to actual conditions, either by the device or manually; similarly, the set time can be configured according to actual conditions, either by the device or manually.
  • The set time can be, for example, 1 second (s), 2s, or 4s, which is not specifically limited in this specification.
  • Specifically, image preprocessing may first be performed on the original three-dimensional face image to obtain a preprocessed three-dimensional face image, and the liveness detection algorithm then performs liveness detection on the preprocessed three-dimensional face image to obtain the liveness detection result.
  • The preprocessing is similar to that described for step S102 and, for the sake of brevity, is not repeated here.
  • In the process of liveness detection, the original three-dimensional face image can undergo liveness detection to obtain a liveness detection value; whether the liveness detection value is less than a set liveness threshold is detected to obtain a detection result; and the liveness detection result is determined according to the detection result. When the original three-dimensional face image is subjected to liveness detection, a liveness detection algorithm may be used.
  • If the detection result indicates that the liveness detection value is not less than the set liveness threshold, it is determined that the liveness detection result is that the user in the original three-dimensional face image is a live person; if the detection result indicates that the liveness detection value is less than the set liveness threshold, it is determined that the liveness detection result is that the user in the original three-dimensional face image is not a live person.
  • If the liveness detection result is that the user in the original 3D face image is a live person, step S108 is executed; if the liveness detection result is that the user is not a live person, no further operation is performed for this face verification.
  • The original three-dimensional face image may be one or more images, and the number of original three-dimensional face images matches the number of images required by the liveness detection algorithm; that is, the number of original three-dimensional face images is not less than the number of images required by the liveness detection algorithm. For example, if the liveness detection algorithm requires two images, the number of original three-dimensional face images is not less than two.
  • The set liveness threshold can be configured according to actual conditions, either manually or by the device; it can be a value not less than 80% and less than 1, such as 80%, 85%, or 90%. Of course, it can also be set to a value less than 80%, which is not specifically limited in this specification.
  • The liveness detection algorithm may be, for example, an anti-spoofing algorithm, an image-distortion-analysis algorithm, or a color-texture algorithm.
  • For example, with the set liveness threshold denoted T and the liveness detection value obtained by performing liveness detection on the original three-dimensional face image denoted S, whether S is less than T is checked: if S ≥ T, it is determined that the user in the original three-dimensional face image is a live person, and step S108 is executed; if S < T, it is determined that the user in the original three-dimensional face image is not a live person, and no further operation is performed.
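The liveness decision reduces to a single comparison; a minimal sketch, assuming (per the worked example with S and T) that a detection value not less than the threshold indicates a live person (the function name is our own):

```python
def is_live(detection_value, threshold=0.9):
    """Liveness decision: S >= T means the user in the original 3D face
    image is judged to be a live person; otherwise verification stops."""
    return detection_value >= threshold

# S = 0.95 against T = 0.9: judged live, so step S108 would follow.
live = is_live(0.95, threshold=0.9)
```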
  • In step S108, the reconstructed 3D face image and the original 3D face image can be directly compared as wholes to obtain the comparison result; alternatively, the reconstructed three-dimensional structure data of the reconstructed three-dimensional face image and the original three-dimensional structure data of the original three-dimensional face image can be compared for similarity to obtain the comparison result.
  • Specifically, the reconstructed three-dimensional face image and the original three-dimensional face image may be input into a similarity algorithm for similarity calculation to obtain an image similarity; whether the image similarity is not less than a set similarity is determined to obtain a similarity judgment result; and the comparison result is determined according to the similarity judgment result.
  • Likewise, the reconstructed three-dimensional structure data and the original three-dimensional structure data may be input into the similarity algorithm for similarity calculation to obtain the image similarity, and the comparison result is determined in the same way.
  • If the similarity judgment result indicates that the image similarity is not less than the set similarity, it is determined that the comparison result indicates a successful comparison; if the similarity judgment result indicates that the image similarity is less than the set similarity, it is determined that the comparison result indicates a failed comparison.
  • The similarity algorithm may be, for example, a cosine similarity algorithm, a Euclidean distance algorithm, or a perceptual hash algorithm. Further, the set similarity can be configured according to actual conditions, either manually or by the device; it can be a value not less than 75% and less than 1, such as 75%, 80%, or 90%, or it can be set to a value less than 75%, which is not specifically limited in this specification.
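Of the similarity algorithms listed, cosine similarity is the simplest to show end to end. A sketch over flattened 3D-structure vectors, using the 75% set similarity mentioned above (the function names are our own):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flattened structure-data vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def compare(reconstructed, original, set_similarity=0.75):
    """Comparison succeeds when the similarity is not less than the set
    similarity, matching the judgment rule described above."""
    return cosine_similarity(reconstructed, original) >= set_similarity
```

Identical vectors give similarity 1.0 and thus a successful comparison; orthogonal vectors give 0.0 and a failed one.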
  • Then, step S110 is performed: if the comparison result indicates a successful comparison, it is determined that the user in the two-dimensional face image is the target user; if the comparison result indicates a failed comparison, it is determined that the user in the two-dimensional face image is not the target user.
  • The overall flow of the face verification method is as follows. First, step S1 is performed: a face image is obtained through the 2D camera device and face recognition is performed. If face recognition succeeds, step S2 is performed: a deep learning algorithm such as Im2Avatar performs 3D reconstruction on the face image to obtain a reconstructed 3D face image. If face recognition fails, step S3 is performed: the face verification process fails, and no operation is performed for this face verification. In parallel with step S1, step S4 is performed: the original 3D face image is collected by the 3D camera device and liveness detection determines whether the user in the 3D face image is a live person; if not, step S3 is performed. Otherwise, step S5 is performed: the reconstructed 3D face image and the original 3D face image are compared for similarity. If the comparison succeeds, step S6 is performed: the user in the face image is determined to be the target user, i.e., face verification succeeds. If the comparison fails, step S7 is executed: the user is determined not to be the target user.
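The S1-S7 flow can be wired up as a small driver in which the four stages are injected as callables (everything here is a stand-in; a real system would plug in the recognition, reconstruction, liveness, and comparison algorithms described above):

```python
def face_verification(recognize, reconstruct, detect_liveness, compare,
                      image_2d, image_3d):
    """Driver for steps S1-S7. The stages are passed in as functions so
    this wiring stays independent of any concrete algorithm choice."""
    if not recognize(image_2d):          # S1: 2D face recognition
        return False                     # S3: verification fails
    if not detect_liveness(image_3d):    # S4 (parallel to S1 in the patent)
        return False                     # S3
    reconstructed = reconstruct(image_2d)        # S2: 3D reconstruction
    return compare(reconstructed, image_3d)      # S5, then S6 or S7

# Trivial stub stages, only to exercise the wiring:
ok = face_verification(
    recognize=lambda img: True,
    reconstruct=lambda img: img,
    detect_liveness=lambda img: True,
    compare=lambda a, b: a == b,
    image_2d=[1, 2, 3],
    image_3d=[1, 2, 3],
)
```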
  • Taking an original three-dimensional face image a3 as an example: liveness detection is performed on a3 to obtain a liveness detection value S, and S ≥ T (the set liveness threshold), so the user in a3 is determined to be a live person; the original three-dimensional structure data of a3 is then obtained and denoted a3-1. Correspondingly, taking a two-dimensional face image a2 as an example, the face recognition value corresponding to a2 is 95% and the set face threshold is 90%; since 95% > 90%, 3D reconstruction is performed on a2 to obtain a reconstructed 3D face image a22, whose reconstructed 3D structure data is denoted a22-1.
  • Then the perceptual hash algorithm is used for the similarity calculation: the mean-hash algorithm hashes a3-1 and a22-1 to obtain hash values a3-2 and a22-2 in turn, and the similarity between a3-2 and a22-2, denoted S1, is calculated. Whether S1 is not less than the set similarity, denoted S2, is judged: if S1 ≥ S2, it is determined that the comparison result indicates a successful comparison, and a21 in a2 can be determined to be the target user; if S1 < S2, it is determined that the comparison result indicates a failed comparison, and a21 in a2 is not the target user.
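The mean-hash comparison in this example can be sketched as follows (flattened toy data stands in for a3-1 and a22-1; scoring the similarity as the fraction of matching hash bits is one common way to compare perceptual hashes, not necessarily the patent's exact formula):

```python
def average_hash(values):
    """Mean-hash: each element maps to 1 if it is >= the mean, else 0."""
    m = sum(values) / len(values)
    return [1 if v >= m else 0 for v in values]

def hash_similarity(h1, h2):
    """Fraction of matching bits (1 minus the normalized Hamming distance)."""
    matches = sum(1 for a, b in zip(h1, h2) if a == b)
    return matches / len(h1)

# Stand-ins for the flattened 3D structure data a3-1 and a22-1:
a3_1 = [10, 200, 30, 220]
a22_1 = [12, 210, 28, 215]
s1 = hash_similarity(average_hash(a3_1), average_hash(a22_1))
```

Here s1 would then be compared against the set similarity S2 to decide success or failure.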
  • In the prior art, a 3D camera is usually used for defense; that is, liveness detection on the collected 3D image judges whether the user is a live person, and face verification is performed by that liveness judgment alone. In such a face verification process there is no comparison sample, which reduces the accuracy of the face verification.
  • In the embodiments of this specification, after both face recognition and liveness detection succeed, the reconstructed three-dimensional face image and the original three-dimensional face image are compared for similarity, and according to the obtained comparison result it is determined whether the user in the two-dimensional face image is the target user.
  • Because successful face recognition and successful liveness detection serve as constraints, the accuracy of the face data (that is, of the reconstructed three-dimensional face image and the original three-dimensional face image) is high. Using the reconstructed three-dimensional face image as a comparison sample and then performing the similarity comparison improves the accuracy of the obtained comparison result accordingly; and on the basis of a more accurate comparison result, the accuracy of determining whether the user in the two-dimensional face image is the target user also improves, thereby improving the accuracy of face verification and, on that basis, effectively improving performance against face-forgery attacks.
  • Moreover, because the similarity comparison is performed on three-dimensional face images, the data carried by the images has more dimensions, and the data of every dimension must be compared during the similarity comparison. The more dimensions the data has, the more accurate the comparison result obtained from the similarity comparison, so the accuracy of the comparison result is further improved.
  • On that basis, the accuracy of determining whether the user in the two-dimensional face image is the target user is further improved; that is, the accuracy of face verification is further improved, which in turn further improves performance against face-forgery attacks.
  • an embodiment of this specification provides a face verification device, as shown in FIG. 2, including:
  • the face recognition unit 201 is configured to perform face recognition on the collected two-dimensional face image to obtain a face recognition result;
  • the three-dimensional reconstruction unit 202 is configured to perform three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image if the face recognition result indicates that face recognition is successful;
  • the liveness detection unit 203 is configured to perform liveness detection on the collected original three-dimensional face image to obtain a liveness detection result;
  • the similarity comparison unit 204 is configured to compare the reconstructed 3D face image with the original 3D face image for similarity to obtain a comparison result if the liveness detection result indicates that the user in the original 3D face image is a live person;
  • the face verification unit 205 is configured to determine, according to the comparison result, whether the user in the two-dimensional face image is a target user.
  • the similarity comparison unit 204 is configured to compare the reconstructed three-dimensional structure data of the reconstructed three-dimensional face image with the original three-dimensional structure data of the original three-dimensional face image for similarity to obtain the comparison result.
  • the similarity comparison unit 204 is configured to input the reconstructed three-dimensional structure data and the original three-dimensional structure data into a similarity algorithm for similarity calculation to obtain an image similarity; determine whether the image similarity is not less than the set similarity to obtain a similarity judgment result; and determine the comparison result according to the similarity judgment result.
  • the similarity comparison unit 204 is configured to determine that the comparison result indicates a successful comparison if the similarity judgment result indicates that the image similarity is not less than the set similarity, and to determine that the comparison result indicates a failed comparison if the similarity judgment result indicates that the image similarity is less than the set similarity.
  • the face recognition unit 201 is configured to perform face recognition on the two-dimensional face image to obtain a face recognition value; determine whether the face recognition value is not less than a set face threshold to obtain a face judgment result; and determine the face recognition result according to the face judgment result.
  • the liveness detection unit 203 is configured to perform image collection through a three-dimensional camera device to obtain the original three-dimensional face image, and to perform liveness detection on the original three-dimensional face image to obtain the liveness detection result.
  • the liveness detection unit 203 is configured to perform liveness detection on the original three-dimensional face image to obtain a liveness detection value; detect whether the liveness detection value is less than a set liveness threshold to obtain a detection result; and determine the liveness detection result according to the detection result.
  • In a third aspect, an embodiment of this specification further provides a server, as shown in FIG. 3, including a memory 304, a processor 302, and a computer program stored in the memory 304 and runnable on the processor 302; when the processor 302 executes the program, the steps of any one of the foregoing face verification methods are implemented.
  • In FIG. 3, a bus architecture (represented by bus 300) is used; the bus 300 may include any number of interconnected buses and bridges, and links together various circuits, including one or more processors represented by the processor 302 and memory represented by the memory 304.
  • the bus 300 may also link various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are all known in the art, and therefore, will not be further described herein.
  • the bus interface 305 provides an interface between the bus 300 and the receiver 301 and transmitter 303.
  • the receiver 301 and the transmitter 303 may be the same element, namely a transceiver, which provides a unit for communicating with various other devices on the transmission medium.
  • the processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used to store data used by the processor 302 when performing operations.
  • In a fourth aspect, an embodiment of this specification further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any one of the foregoing face verification methods are implemented.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

A face verification method, including: performing face recognition on a two-dimensional face image; if the face recognition result indicates that face recognition is successful, performing three-dimensional reconstruction to obtain a reconstructed three-dimensional face image; performing liveness detection on a collected original three-dimensional face image; and, if a live person is detected, comparing the reconstructed three-dimensional face image with the original three-dimensional face image for similarity, and determining, according to the obtained comparison result, whether the user in the two-dimensional face image is a target user. In this way, with successful face recognition and successful liveness detection as constraints, the accuracy of the face data subsequently compared for similarity can be ensured; using the reconstructed three-dimensional face image as a comparison sample and then performing the similarity comparison can improve the accuracy of the comparison result; and on the basis of a more accurate comparison result, the accuracy of face verification can be effectively improved, so that performance against face-forgery attacks improves accordingly.

Description

Face verification method, device, server, and readable storage medium
Technical Field
The embodiments of this specification relate to the field of data processing technology, and in particular to a face verification method, device, server, and readable storage medium.
Background Art
With the rapid development of face recognition technology, it is increasingly applied in people's daily lives, in scenarios such as face-scanning entry at stations, face-scanning payment at supermarkets, and face-scanning login in mobile apps.
In the prior art, face recognition devices such as IoT face-scanning terminals face face-forgery attacks, i.e., attacks that use generic masks, photos, or videos to pass face-scanning verification. Such attacks are usually defended against by introducing a structured-light 3D camera; that is, liveness detection performed on the collected 3D images can determine whether the user is a live person.
Summary of the Invention
The embodiments of this specification provide a face verification method, device, server, and readable storage medium, which improve the accuracy of face verification and, on that basis, effectively improve performance against face-forgery attacks.
The first aspect of the embodiments of this specification provides a face verification method, including:
performing face recognition on a collected two-dimensional face image to obtain a face recognition result;
if the face recognition result indicates that face recognition is successful, performing three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image;
performing liveness detection on a collected original three-dimensional face image to obtain a liveness detection result;
if the liveness detection result indicates that the user in the original three-dimensional face image is a live person, comparing the reconstructed three-dimensional face image with the original three-dimensional face image for similarity to obtain a comparison result;
determining, according to the comparison result, whether the user in the two-dimensional face image is a target user.
The second aspect of the embodiments of this specification provides a face verification device, including:
a face recognition unit, configured to perform face recognition on a collected two-dimensional face image to obtain a face recognition result;
a three-dimensional reconstruction unit, configured to perform three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image if the face recognition result indicates that face recognition is successful;
a liveness detection unit, configured to perform liveness detection on a collected original three-dimensional face image to obtain a liveness detection result;
a similarity comparison unit, configured to compare the reconstructed three-dimensional face image with the original three-dimensional face image for similarity to obtain a comparison result if the liveness detection result indicates that the user in the original three-dimensional face image is a live person;
a face verification unit, configured to determine, according to the comparison result, whether the user in the two-dimensional face image is a target user.
本说明书实施例第三方面还提供了一种服务器,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现上述人脸校验方法的步骤。
本说明书实施例第四方面还提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时上述人脸校验方法的步骤。
The beneficial effects of the embodiments of this specification are as follows:
Based on the above technical solution, the reconstructed three-dimensional face image and the original three-dimensional face image are compared for similarity only under the two conditions that face recognition succeeded and liveness detection succeeded, and whether the user in the two-dimensional face image is the target user is determined according to the resulting comparison result. Using these two conditions as constraints ensures the accuracy of the face data subsequently used for similarity comparison, that is, both the reconstructed and the original three-dimensional face images are of high accuracy. Using the reconstructed three-dimensional face image as the comparison sample before performing the similarity comparison therefore improves the accuracy of the comparison result; and with a more accurate comparison result, the determination of whether the user in the two-dimensional face image is the target user also becomes more accurate, thereby improving the accuracy of face verification and, on that basis, effectively improving performance against face forgery attacks.
Moreover, because the similarity comparison is performed on three-dimensional face images, which carry data of more dimensions, and every dimension must be compared during the similarity comparison, the more dimensions the data has, the more accurate the resulting comparison result. On that basis, the determination of whether the user in the two-dimensional face image is the target user becomes still more accurate, that is, the accuracy of face verification is further improved, which in turn further improves performance against face forgery attacks.
Brief description of the drawings
Fig. 1 is a flowchart of the face verification method in the embodiments of this specification;
Fig. 2 is a schematic structural diagram of the face verification apparatus in the embodiments of this specification;
Fig. 3 is a schematic structural diagram of the server in the embodiments of this specification.
Detailed description
For a better understanding of the above technical solutions, the technical solutions of the embodiments of this specification are described in detail below with reference to the drawings and specific embodiments. It should be understood that the embodiments of this specification and the specific features therein are a detailed explanation of the technical solutions of this specification rather than a limitation thereof, and, where no conflict arises, the embodiments of this specification and the technical features therein may be combined with one another.
In a first aspect, as shown in Fig. 1, an embodiment of this specification provides a face verification method, comprising:
S102: performing face recognition on a captured two-dimensional face image to obtain a face recognition result;
S104: if the face recognition result indicates that face recognition succeeded, performing three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image;
S106: performing liveness detection on a captured original three-dimensional face image to obtain a liveness detection result;
S108: if the liveness detection result indicates that the user in the original three-dimensional face image is a live subject, comparing the similarity of the reconstructed three-dimensional face image and the original three-dimensional face image to obtain a comparison result;
S110: determining, according to the comparison result, whether the user in the two-dimensional face image is the target user.
In step S102, a two-dimensional imaging device may be used to capture the two-dimensional face image, and a face recognition algorithm is then applied to the two-dimensional face image to obtain the face recognition result. The imaging device may be, for example, a camera, a pan-tilt camera, a video camera, or a digital camera.
Specifically, after the two-dimensional face image is obtained, face recognition may be performed on it to obtain a face recognition value; whether the face recognition value is not less than a set face threshold is judged to obtain a face judgment result; and the face recognition result is determined according to the face judgment result.
In the embodiments of this specification, during face recognition of the two-dimensional face image, image preprocessing may first be performed on the two-dimensional face image to obtain a preprocessed two-dimensional face image, and the face recognition algorithm is then applied to the preprocessed two-dimensional face image to obtain the face recognition result.
In the embodiments of this specification, the set face threshold may be set according to the actual situation, either manually or by the device itself. It may be a value not less than 80% and less than 1, for example 80%, 85%, or 90%; of course, it may also be set to a value less than 80%. This specification imposes no specific limitation.
In the embodiments of this specification, the face recognition algorithm includes feature-based recognition algorithms, appearance-based recognition algorithms, template-based recognition algorithms, recognition algorithms using neural networks, recognition algorithms using support vector machines, and other recognition algorithms. This specification imposes no specific limitation.
Specifically, preprocessing the two-dimensional face image removes its average luminance, reducing the influence of illumination on the face recognition algorithm and improving recognition accuracy when the algorithm is applied to the preprocessed two-dimensional face image.
In the embodiments of this specification, because the two-dimensional face image is a raw image containing the face region, background, and noise, during preprocessing it may first undergo face detection, face alignment, and background removal in sequence to obtain a face-processed image. This reduces the influence of background and noise on the recognition algorithm and improves recognition accuracy on the preprocessed two-dimensional face image.
Further, after the two-dimensional face image is obtained, the pixel mean of the face-processed image may be determined from all of its pixel values; the variance corresponding to each pixel may be determined from the pixel mean; each pixel may be normalized using the pixel mean and its corresponding variance to obtain normalized data for each pixel; and the preprocessed two-dimensional face image is obtained from the normalized data of all pixels.
Of course, one or more of the above processing methods may be applied to the two-dimensional face image to obtain the preprocessed two-dimensional face image.
Specifically, during preprocessing, the mean and variance are computed on the face-processed image: the mean m is computed over all pixels, the variance s corresponding to each pixel is then computed on the basis of m, and a mean-variance normalization is applied to each pixel to obtain its normalized data. The normalization removes the average luminance of the two-dimensional face image, reducing the influence of illumination on the algorithm and improving the accuracy of the face recognition computation on the preprocessed image.
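The mean-variance normalization described above can be sketched as follows. This is an illustrative sketch only, not the specification's implementation; the function name `normalize_face` and the use of a single per-image standard deviation (one common reading of the per-pixel variance s) are assumptions.

```python
import numpy as np

def normalize_face(face_image: np.ndarray) -> np.ndarray:
    """Mean-variance normalization of a face image.

    Subtracting the mean m removes the average luminance, and dividing
    by the spread reduces the influence of illumination on later steps.
    """
    pixels = face_image.astype(np.float64)
    m = pixels.mean()          # mean over all pixels
    s = pixels.std()           # spread of the pixels around m
    if s == 0:                 # flat image: nothing to scale by
        return pixels - m
    return (pixels - m) / s

# Toy 2x2 "image": after normalization the mean is 0 and the spread is 1.
img = np.array([[10.0, 20.0], [30.0, 40.0]])
norm = normalize_face(img)
```

In practice this step would run after face detection, alignment, and background removal, on the face-processed image.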
In the embodiments of this specification, after face recognition is performed on the two-dimensional face image to obtain the face recognition value, whether the value is not less than the set face threshold is judged. If the face judgment result indicates that the face recognition value is not less than the set face threshold, the face recognition result is determined to indicate that face recognition succeeded, that is, the user in the two-dimensional face image can be identified, and step S104 is then executed. If the face judgment result indicates that the face recognition value is less than the set face threshold, the face recognition result is determined to indicate that face recognition failed, that is, the user in the two-dimensional face image cannot be identified, and no further operations are performed.
For example, take a two-dimensional face image a1: if the face recognition algorithm computes an 85% probability that the user in a1 is a11, while the set face threshold is 90%, then since 85% < 90%, the algorithm fails to identify a11, that is, the face recognition result for a1 indicates failure, and no further operations are performed.
As another example, take a two-dimensional face image a2: if the face recognition algorithm computes a 95% probability that the user in a2 is a21, while the set face threshold is 90%, then since 95% > 90%, the algorithm identifies a21, that is, the face recognition result for a2 indicates success, and step S104 is then executed.
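The threshold decision in the two examples above amounts to a single comparison. A minimal sketch, with the function name and the 90% threshold taken from the examples as assumptions:

```python
FACE_THRESHOLD = 0.90  # the set face threshold from the examples (90%)

def face_recognition_succeeded(recognition_value: float,
                               threshold: float = FACE_THRESHOLD) -> bool:
    """Recognition succeeds when the value is not less than the threshold."""
    return recognition_value >= threshold

# Image a1: 85% probability, below the 90% threshold, so recognition fails.
# Image a2: 95% probability, not less than 90%, so recognition succeeds.
```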
If the face recognition result indicates that face recognition succeeded, step S104 is executed, in which a three-dimensional reconstruction algorithm may be used to perform three-dimensional reconstruction on the face image to obtain the reconstructed three-dimensional face image.
In the embodiments of this specification, the three-dimensional reconstruction algorithm includes the TensorFlow-based single-image colored 3D reconstruction (Im2Avatar) algorithm, the 3-SWEEP algorithm, the 3D-GAN algorithm, and other algorithms. This specification imposes no specific limitation.
Specifically, when the Im2Avatar algorithm is used for three-dimensional reconstruction, the two-dimensional face image may be encoded and decoded; the encoded-decoded data then undergoes shape learning, surface color learning, and detail construction; and finally the learned data and the detail-constructed data are combined to obtain the reconstructed three-dimensional face data.
Step S106 is executed next. In this step, a three-dimensional imaging device may be used to capture images to obtain the original three-dimensional face image, and liveness detection is performed on it to obtain the liveness detection result. The three-dimensional imaging device may be a 3D camera, a 3D still camera, or the like.
In the embodiments of this specification, step S106 may be executed simultaneously with step S102, or before or after it. Further, the two-dimensional imaging device and the three-dimensional imaging device may be installed on the same IoT device or on two connected IoT devices. This specification imposes no specific limitation.
In the embodiments of this specification, while executing steps S102 and S106, the two-dimensional face image and the original three-dimensional face image may be captured within a set time and in a set area. For example, the 2D and 3D cameras on a face-scan device at a shop exit may capture two-dimensional and three-dimensional face images of the set area in real time. The set area may be set according to the actual situation, by the device, or manually; likewise, the set time may be set according to the actual situation, by the device, or manually, and may be, for example, 1 second (s), 2 s, or 4 s. This specification imposes no specific limitation.
In the embodiments of this specification, during liveness detection of the original three-dimensional face image, image preprocessing may first be performed on it to obtain a preprocessed three-dimensional face image, and a liveness detection algorithm is then applied to the preprocessed three-dimensional face image to obtain the liveness detection result.
Specifically, for the implementation of preprocessing the original three-dimensional face image, reference may be made to the description of preprocessing the two-dimensional face image in step S102; for brevity, it is not repeated here.
In the embodiments of this specification, during liveness detection, liveness detection may be performed on the original three-dimensional face image to obtain a liveness detection value; whether the liveness detection value is less than a set liveness threshold is checked to obtain a check result; and the liveness detection result is determined according to the check result. When performing liveness detection on the original three-dimensional face image, the liveness detection algorithm may be used.
Specifically, if the check result indicates that the liveness detection value is less than the set liveness threshold, the liveness detection result is determined to be that the user in the original three-dimensional face image is a live subject; if the check result indicates that the liveness detection value is not less than the set liveness threshold, the liveness detection result is determined to be that the user in the original three-dimensional face image is not a live subject.
In the embodiments of this specification, if the liveness detection result is that the user in the original three-dimensional face image is a live subject, step S108 is executed; if the liveness detection result is that the user is not a live subject, no further operations are performed for this face recognition.
In the embodiments of this specification, there may be one or more original three-dimensional face images, and their number matches the number of images required by the liveness detection algorithm, that is, the number of original three-dimensional face images is not less than that required number. For example, if the algorithm requires 2 images, the number of original three-dimensional face images is not less than 2.
In the embodiments of this specification, the set liveness threshold may be set according to the actual situation, either manually or by the device itself. It may be a value not less than 80% and less than 1, for example 80%, 85%, or 90%; of course, it may also be set to a value less than 80%. This specification imposes no specific limitation.
In the embodiments of this specification, the liveness detection algorithm may be, for example, an anti-spoofing algorithm, an image distortion analysis algorithm, or a colour texture algorithm.
Specifically, if the set liveness threshold is denoted T and the liveness detection value obtained by performing liveness detection on the original three-dimensional face image is denoted S, whether S is less than T is checked. If S < T, the user in the original three-dimensional face image is judged to be a live subject, and step S108 is executed; if S ≥ T, the user in the original three-dimensional face image is judged not to be a live subject, and no further operations are performed for this face recognition.
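Note the direction of the inequality: the liveness check passes when the detection value S is below the threshold T, the opposite of the face recognition check. A minimal sketch (function name assumed):

```python
def is_live(liveness_value: float, liveness_threshold: float) -> bool:
    """The user is judged to be a live subject when S < T.

    Smaller detection values indicate a live subject here, which is the
    opposite direction to the face recognition threshold check.
    """
    return liveness_value < liveness_threshold
```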
If the liveness detection result indicates that the user in the original three-dimensional face image is a live subject, step S108 is executed. The reconstructed three-dimensional face image and the original three-dimensional face image may be compared directly as wholes to obtain the comparison result; alternatively, the reconstructed three-dimensional structure data of the reconstructed three-dimensional face image and the original three-dimensional structure data of the original three-dimensional face image may be compared for similarity to obtain the comparison result.
Here, because three-dimensional structure data carries data of more dimensions and every dimension must be compared during the similarity comparison, comparing the reconstructed three-dimensional structure data with the original three-dimensional structure data yields a comparison result of higher accuracy.
Specifically, when comparing the reconstructed and original three-dimensional face images as wholes, the two images may be input into a similarity algorithm to compute an image similarity; whether the image similarity is not less than a set similarity is judged to obtain a similarity judgment result; and the comparison result is determined according to the similarity judgment result.
Likewise, when comparing the reconstructed and original three-dimensional structure data, the two sets of data may be input into the similarity algorithm to compute an image similarity; whether the image similarity is not less than the set similarity is judged to obtain a similarity judgment result; and the comparison result is determined according to the similarity judgment result.
Specifically, if the similarity judgment result indicates that the image similarity is not less than the set similarity, the comparison result is determined to indicate success; if the similarity judgment result indicates that the image similarity is less than the set similarity, the comparison result is determined to indicate failure.
In the embodiments of this specification, the similarity algorithm may be a cosine algorithm, a Euclidean distance algorithm, a perceptual hash algorithm, or another algorithm. Further, the set similarity may be set according to the actual situation, either manually or by the device itself; it may be a value not less than 75% and less than 1, for example 75%, 80%, or 90%; of course, it may also be set to a value less than 75%. This specification imposes no specific limitation.
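Of the similarity algorithms listed, the cosine variant can be sketched on flattened three-dimensional structure data as follows. The function names and the 75% example threshold are assumptions for illustration, not the specification's implementation:

```python
import numpy as np

def cosine_similarity(reconstructed: np.ndarray, original: np.ndarray) -> float:
    """Cosine similarity of two flattened 3D structure arrays, in [-1, 1]."""
    a = reconstructed.ravel().astype(np.float64)
    b = original.ravel().astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def comparison_succeeded(reconstructed: np.ndarray, original: np.ndarray,
                         set_similarity: float = 0.75) -> bool:
    """Success when the image similarity is not less than the set similarity."""
    return cosine_similarity(reconstructed, original) >= set_similarity
```

A Euclidean distance or perceptual hash algorithm would slot into the same success test in place of `cosine_similarity`.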
Step S110 is executed next: if the comparison result indicates success, the user in the two-dimensional face image is determined to be the target user; if the comparison result indicates failure, the user in the two-dimensional face image is determined not to be the target user.
In the embodiments of this specification, the overall flow of the face verification method is as follows. First, S1: acquire a face image through a 2D imaging device and perform face recognition. If face recognition succeeds, execute S2: perform 3D reconstruction on the face image through a deep learning algorithm such as Im2Avatar to obtain a reconstructed 3D face image; if face recognition fails, execute S3: the face verification process fails and no operations are performed for this verification. While executing S1, execute S4: capture an original 3D face image through a 3D imaging device and perform liveness detection to judge whether the user in the 3D face image is a live subject. If not a live subject, execute S3; if a live subject, execute S5: compare the similarity of the reconstructed 3D face image and the original 3D face image. If the comparison succeeds, execute S6: determine that the user in the face image is the target user, that is, face verification succeeds; if the comparison fails, execute S7: determine that the user in the face image is not the target user, that is, face verification fails.
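The flow S1 through S7 above can be sketched as a driver function. Because the specification leaves the concrete algorithms open, the recognizer, reconstructor, liveness detector, and comparator are passed in as callables; all names here are assumptions:

```python
def verify_face(image_2d, image_3d, recognize, reconstruct,
                detect_liveness, compare) -> bool:
    """Driver for the flow S1-S7.

    recognize:       S1, 2D face recognition, returns bool
    reconstruct:     S2, builds a reconstructed 3D image from the 2D image
    detect_liveness: S4, liveness detection on the original 3D image
    compare:         S5, similarity comparison of the two 3D images
    Returns True on S6 (verification succeeds), False on S3 / S7.
    """
    if not recognize(image_2d):        # S1 failed -> S3
        return False
    if not detect_liveness(image_3d):  # S4 failed -> S3
        return False
    reconstructed = reconstruct(image_2d)    # S2
    return compare(reconstructed, image_3d)  # S5 -> S6 or S7

# Toy stand-ins in which every stage passes, so verification succeeds.
ok = verify_face("a2", "a3",
                 recognize=lambda img: True,
                 reconstruct=lambda img: img + "-reconstructed",
                 detect_liveness=lambda img: True,
                 compare=lambda rec, orig: True)
```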
For example, take a captured original three-dimensional face image a3: liveness detection on a3 yields a liveness detection value S with S < T (the set liveness threshold), so the user in a3 is judged to be a live subject, and the original three-dimensional structure data of a3, denoted a3-1, is obtained. Correspondingly, take the two-dimensional face image a2: its face recognition value is 95% and the set face threshold is 90%; since 95% > 90%, three-dimensional reconstruction is performed on a2 to obtain a reconstructed three-dimensional face image a22, whose reconstructed three-dimensional structure data is denoted a22-1. If the perceptual hash algorithm is used for the similarity computation, the average hash algorithm is applied to a3-1 and a22-1 to obtain hash values a3-2 and a22-2 respectively, and the similarity of a3-2 and a22-2, denoted S1, is computed. Whether S1 is not less than the set similarity, denoted S2, is then judged: if S1 ≥ S2, the comparison result is determined to indicate success, and a21 in a2 is determined to be the target user; if S1 < S2, the comparison result is determined to indicate failure, and a21 in a2 is determined not to be the target user.
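The average-hash step in the worked example can be sketched as follows. Reducing the structure data to a 2D array, the block-averaging implementation, and the bit-match similarity used as S1 are all assumptions for illustration:

```python
import numpy as np

def average_hash(data: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Average hash of a 2D array: block-average down to a grid of
    hash_size x hash_size cells, then set each bit to whether its cell
    is above the mean of all cells."""
    arr = data.astype(np.float64)
    h, w = arr.shape
    arr = arr[:h - h % hash_size, :w - w % hash_size]  # crop to multiples
    bh = arr.shape[0] // hash_size
    bw = arr.shape[1] // hash_size
    cells = arr.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (cells > cells.mean()).astype(np.uint8).ravel()

def hash_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """S1 as the fraction of matching hash bits."""
    return float((h1 == h2).mean())

# An 8x8 ramp: the 32 cells above the mean give a 64-bit hash with 32 ones.
h = average_hash(np.arange(64.0).reshape(8, 8))
```

In the example's terms, a3-2 and a22-2 would be the two hashes, and S1 their bit-match fraction, compared against the set similarity S2.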
In the prior art, to improve resistance to face forgery attacks, the defense usually relies on a 3D camera: liveness detection on the captured 3D image determines whether the user is a live subject, and face verification rests on that liveness judgment alone. Because the face verification process has no comparison sample, the accuracy of face verification is reduced.
In the embodiments of this specification, by contrast, the reconstructed three-dimensional face image and the original three-dimensional face image are compared for similarity only under the two conditions that face recognition succeeded and liveness detection succeeded, and whether the user in the two-dimensional face image is the target user is determined according to the resulting comparison result. Using these two conditions as constraints ensures the accuracy of the face data subsequently used for similarity comparison, that is, both the reconstructed and the original three-dimensional face images are of high accuracy. Using the reconstructed three-dimensional face image as the comparison sample before performing the similarity comparison therefore improves the accuracy of the comparison result; and with a more accurate comparison result, the determination of whether the user in the two-dimensional face image is the target user also becomes more accurate, thereby improving the accuracy of face verification and, on that basis, effectively improving performance against face forgery attacks.
Moreover, because the similarity comparison is performed on three-dimensional face images, which carry data of more dimensions, and every dimension must be compared during the similarity comparison, the more dimensions the data has, the more accurate the resulting comparison result. On that basis, the determination of whether the user in the two-dimensional face image is the target user becomes still more accurate, that is, the accuracy of face verification is further improved, which in turn further improves performance against face forgery attacks.
In a second aspect, based on the same inventive concept as the first aspect, an embodiment of this specification provides a face verification apparatus, as shown in Fig. 2, comprising:
a face recognition unit 201 configured to perform face recognition on a captured two-dimensional face image to obtain a face recognition result;
a three-dimensional reconstruction unit 202 configured to, if the face recognition result indicates that face recognition succeeded, perform three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image;
a liveness detection unit 203 configured to perform liveness detection on a captured original three-dimensional face image to obtain a liveness detection result;
a similarity comparison unit 204 configured to, if the liveness detection result indicates that the user in the original three-dimensional face image is a live subject, compare the similarity of the reconstructed three-dimensional face image and the original three-dimensional face image to obtain a comparison result;
a face verification unit 205 configured to determine, according to the comparison result, whether the user in the two-dimensional face image is the target user.
In an optional implementation, the similarity comparison unit 204 is configured to compare the reconstructed three-dimensional structure data of the reconstructed three-dimensional face image and the original three-dimensional structure data of the original three-dimensional face image for similarity to obtain the comparison result.
In an optional implementation, the similarity comparison unit 204 is configured to input the reconstructed three-dimensional structure data and the original three-dimensional structure data into a similarity algorithm for similarity computation to obtain an image similarity; judge whether the image similarity is not less than a set similarity to obtain a similarity judgment result; and determine the comparison result according to the similarity judgment result.
In an optional implementation, the similarity comparison unit 204 is configured to determine that the comparison result indicates success if the similarity judgment result indicates that the image similarity is not less than the set similarity, and to determine that the comparison result indicates failure if the similarity judgment result indicates that the image similarity is less than the set similarity.
In an optional implementation, the face recognition unit 201 is configured to perform face recognition on the two-dimensional face image to obtain a face recognition value; judge whether the face recognition value is not less than a set face threshold to obtain a face judgment result; and determine the face recognition result according to the face judgment result.
In an optional implementation, the liveness detection unit 203 is configured to capture images through a three-dimensional imaging device to obtain the original three-dimensional face image, and to perform liveness detection on the original three-dimensional face image to obtain the liveness detection result.
In an optional implementation, the liveness detection unit 203 is configured to perform liveness detection on the original three-dimensional face image to obtain a liveness detection value; check whether the liveness detection value is less than a set liveness threshold to obtain a check result; and determine the liveness detection result according to the check result.
In a third aspect, based on the same inventive concept as the face verification method in the preceding embodiments, an embodiment of this specification further provides a server, as shown in Fig. 3, comprising a memory 304, a processor 302, and a computer program stored in the memory 304 and executable on the processor 302, wherein the processor 302, when executing the program, implements the steps of any of the face verification methods described above.
In Fig. 3, the bus architecture (represented by bus 300) may include any number of interconnected buses and bridges; bus 300 links together various circuits including one or more processors represented by processor 302 and memory represented by memory 304. Bus 300 may also link together various other circuits such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. Bus interface 305 provides an interface between bus 300 and the receiver 301 and transmitter 303. The receiver 301 and transmitter 303 may be the same element, namely a transceiver, providing a unit for communicating with various other apparatuses over a transmission medium. Processor 302 is responsible for managing bus 300 and general processing, while memory 304 may be used to store data used by processor 302 when performing operations.
In a fourth aspect, based on the inventive concept of the face verification method in the preceding embodiments, an embodiment of this specification further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the face verification methods described above.
This specification is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific way, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of this specification have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of this specification.
Obviously, those skilled in the art can make various changes and variations to this specification without departing from its spirit and scope. If these modifications and variations of this specification fall within the scope of the claims of this specification and their technical equivalents, this specification is also intended to include them.

Claims (16)

  1. A face verification method, comprising:
    performing face recognition on a captured two-dimensional face image to obtain a face recognition result;
    if the face recognition result indicates that face recognition succeeded, performing three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image;
    performing liveness detection on a captured original three-dimensional face image to obtain a liveness detection result;
    if the liveness detection result indicates that the user in the original three-dimensional face image is a live subject, comparing the similarity of the reconstructed three-dimensional face image and the original three-dimensional face image to obtain a comparison result;
    determining, according to the comparison result, whether the user in the two-dimensional face image is the target user.
  2. The method according to claim 1, wherein comparing the similarity of the reconstructed three-dimensional face image and the original three-dimensional face image to obtain a comparison result comprises:
    comparing the reconstructed three-dimensional structure data of the reconstructed three-dimensional face image and the original three-dimensional structure data of the original three-dimensional face image for similarity to obtain the comparison result.
  3. The method according to claim 2, wherein comparing the reconstructed three-dimensional structure data of the reconstructed three-dimensional face image and the original three-dimensional structure data of the original three-dimensional face image for similarity to obtain the comparison result comprises:
    inputting the reconstructed three-dimensional structure data and the original three-dimensional structure data into a similarity algorithm for similarity computation to obtain an image similarity;
    judging whether the image similarity is not less than a set similarity to obtain a similarity judgment result;
    determining the comparison result according to the similarity judgment result.
  4. The method according to claim 3, wherein determining the comparison result according to the similarity judgment result comprises:
    if the similarity judgment result indicates that the image similarity is not less than the set similarity, determining that the comparison result indicates success; if the similarity judgment result indicates that the image similarity is less than the set similarity, determining that the comparison result indicates failure.
  5. The method according to any one of claims 1 to 4, wherein performing face recognition on the captured two-dimensional face image to obtain a face recognition result comprises:
    performing face recognition on the two-dimensional face image to obtain a face recognition value;
    judging whether the face recognition value is not less than a set face threshold to obtain a face judgment result;
    determining the face recognition result according to the face judgment result.
  6. The method according to any one of claims 1 to 4, wherein performing liveness detection on the original three-dimensional face image to obtain the liveness detection result comprises:
    capturing images through a three-dimensional imaging device to obtain the original three-dimensional face image;
    performing liveness detection on the original three-dimensional face image to obtain the liveness detection result.
  7. The method according to claim 6, wherein performing liveness detection on the original three-dimensional face image to obtain the liveness detection result comprises:
    performing liveness detection on the original three-dimensional face image to obtain a liveness detection value;
    checking whether the liveness detection value is less than a set liveness threshold to obtain a check result;
    determining the liveness detection result according to the check result.
  8. A face verification apparatus, comprising:
    a face recognition unit configured to perform face recognition on a captured two-dimensional face image to obtain a face recognition result;
    a three-dimensional reconstruction unit configured to, if the face recognition result indicates that face recognition succeeded, perform three-dimensional reconstruction on the two-dimensional face image to obtain a reconstructed three-dimensional face image;
    a liveness detection unit configured to perform liveness detection on a captured original three-dimensional face image to obtain a liveness detection result;
    a similarity comparison unit configured to, if the liveness detection result indicates that the user in the original three-dimensional face image is a live subject, compare the similarity of the reconstructed three-dimensional face image and the original three-dimensional face image to obtain a comparison result;
    a face verification unit configured to determine, according to the comparison result, whether the user in the two-dimensional face image is the target user.
  9. The apparatus according to claim 8, wherein the similarity comparison unit is configured to compare the reconstructed three-dimensional structure data of the reconstructed three-dimensional face image and the original three-dimensional structure data of the original three-dimensional face image for similarity to obtain the comparison result.
  10. The apparatus according to claim 9, wherein the similarity comparison unit is configured to input the reconstructed three-dimensional structure data and the original three-dimensional structure data into a similarity algorithm for similarity computation to obtain an image similarity; judge whether the image similarity is not less than a set similarity to obtain a similarity judgment result; and determine the comparison result according to the similarity judgment result.
  11. The apparatus according to claim 10, wherein the similarity comparison unit is configured to determine that the comparison result indicates success if the similarity judgment result indicates that the image similarity is not less than the set similarity, and to determine that the comparison result indicates failure if the similarity judgment result indicates that the image similarity is less than the set similarity.
  12. The apparatus according to any one of claims 8 to 11, wherein the face recognition unit is configured to perform face recognition on the two-dimensional face image to obtain a face recognition value; judge whether the face recognition value is not less than a set face threshold to obtain a face judgment result; and determine the face recognition result according to the face judgment result.
  13. The apparatus according to any one of claims 8 to 11, wherein the liveness detection unit is configured to capture images through a three-dimensional imaging device to obtain the original three-dimensional face image, and to perform liveness detection on the original three-dimensional face image to obtain the liveness detection result.
  14. The apparatus according to claim 13, wherein the liveness detection unit is configured to perform liveness detection on the original three-dimensional face image to obtain a liveness detection value; check whether the liveness detection value is less than a set liveness threshold to obtain a check result; and determine the liveness detection result according to the check result.
  15. A server, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 7.
  16. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
PCT/CN2020/071702 2019-07-24 2020-01-13 Face verification method and apparatus, server and readable storage medium WO2021012647A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/875,121 US10853631B2 (en) 2019-07-24 2020-05-15 Face verification method and apparatus, server and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910673271.6A CN110532746B (zh) 2019-07-24 2019-07-24 Face verification method and apparatus, server and readable storage medium
CN201910673271.6 2019-07-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/875,121 Continuation US10853631B2 (en) 2019-07-24 2020-05-15 Face verification method and apparatus, server and readable storage medium

Publications (1)

Publication Number Publication Date
WO2021012647A1 true WO2021012647A1 (zh) 2021-01-28

Family

ID=68661864

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071702 WO2021012647A1 (zh) 2019-07-24 2020-01-13 人脸校验方法、装置、服务器及可读存储介质

Country Status (3)

Country Link
CN (2) CN113705426B (zh)
TW (1) TWI721786B (zh)
WO (1) WO2021012647A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117496019A * 2023-12-29 2024-02-02 Nanchang Xiaohetao Technology Co., Ltd. Image animation processing method and system for driving a static image

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705426B (zh) * 2019-07-24 2023-10-27 Advanced New Technologies Co., Ltd. Face verification method and apparatus, server and readable storage medium
CN111507294B (zh) * 2020-04-22 2023-04-07 Shanghai Polytechnic University Classroom security early-warning system and method based on three-dimensional face reconstruction and intelligent recognition
CN113392763B (zh) * 2021-06-15 2022-11-11 Alipay (Hangzhou) Information Technology Co., Ltd. Face recognition method, apparatus and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077563A * 2014-05-30 2014-10-01 Xiaomi Inc. Face recognition method and apparatus
CN107506696A * 2017-07-29 2017-12-22 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Anti-counterfeiting processing method and related product
US20180276487A1 * 2017-03-24 2018-09-27 Wistron Corporation Method, system, and computer-readable recording medium for long-distance person identification
CN110532746A * 2019-07-24 2019-12-03 Alibaba Group Holding Limited Face verification method and apparatus, server and readable storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6947579B2 (en) * 2002-10-07 2005-09-20 Technion Research & Development Foundation Ltd. Three-dimensional face recognition
CN101561874B (zh) * 2008-07-17 2011-10-26 Tsinghua University Method for generating a virtual face image
CN102254154B (zh) * 2011-07-05 2013-06-12 Nanjing University Face identity authentication method based on three-dimensional model reconstruction
US8457367B1 (en) * 2012-06-26 2013-06-04 Google Inc. Facial recognition
CN103716309B (zh) * 2013-12-17 2017-09-29 Huawei Technologies Co., Ltd. Security authentication method and terminal
CN105912912B (zh) * 2016-05-11 2018-12-18 Qingdao Hisense Electronics Co., Ltd. Terminal user identity login method and system
CN107563304B (zh) * 2017-08-09 2020-10-16 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Terminal device unlocking method and apparatus, and terminal device
CN107480500B (zh) * 2017-08-11 2021-04-27 Vivo Mobile Communication Co., Ltd. Face verification method and mobile terminal
TWI625679B (zh) * 2017-10-16 2018-06-01 Wistron Corporation Live facial recognition method and system
CN107609383B (zh) * 2017-10-26 2021-01-26 Orbbec Inc. 3D face identity authentication method and apparatus
CN107729875A (zh) * 2017-11-09 2018-02-23 Shanghai Kuaishi Information Technology Co., Ltd. Three-dimensional face recognition method and apparatus
CN109948399A (zh) * 2017-12-20 2019-06-28 Ningbo Yingxin Information Technology Co., Ltd. Face payment method and apparatus for a smartphone
CN108062544A (zh) * 2018-01-19 2018-05-22 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for face liveness detection
CN108427871A (zh) * 2018-01-30 2018-08-21 Shenzhen Orbbec Co., Ltd. Rapid 3D face identity authentication method and apparatus
CN108537191B (zh) * 2018-04-17 2020-11-20 CloudWalk Technology Co., Ltd. Three-dimensional face recognition method based on a structured-light camera
CN109670487A (zh) * 2019-01-30 2019-04-23 Hanwang Technology Co., Ltd. Face recognition method and apparatus, and electronic device
CN109978989B (zh) * 2019-02-26 2023-08-01 Tencent Technology (Shenzhen) Co., Ltd. Three-dimensional face model generation method and apparatus, computer device, and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117496019A * 2023-12-29 2024-02-02 Nanchang Xiaohetao Technology Co., Ltd. Image animation processing method and system for driving a static image
CN117496019B * 2023-12-29 2024-04-05 Nanchang Xiaohetao Technology Co., Ltd. Image animation processing method and system for driving a static image

Also Published As

Publication number Publication date
TWI721786B (zh) 2021-03-11
CN110532746B (zh) 2021-07-23
CN113705426A (zh) 2021-11-26
TW202105329A (zh) 2021-02-01
CN113705426B (zh) 2023-10-27
CN110532746A (zh) 2019-12-03

Similar Documents

Publication Publication Date Title
WO2021012647A1 (zh) Face verification method and apparatus, server and readable storage medium
WO2021036436A1 (zh) Face recognition method and apparatus
CN110383288B (zh) Face recognition method and apparatus, and electronic device
WO2019192121A1 (zh) Dual-channel neural network model training and face comparison method, terminal, and medium
US11682232B2 (en) Device and method with image matching
US8675926B2 (en) Distinguishing live faces from flat surfaces
CN110490076B (zh) Liveness detection method and apparatus, computer device, and storage medium
US10853631B2 (en) Face verification method and apparatus, server and readable storage medium
CN112052831B (zh) Face detection method and apparatus, and computer storage medium
WO2020244071A1 (zh) Neural-network-based gesture recognition method and apparatus, storage medium, and device
WO2019033572A1 (zh) Face occlusion detection method and apparatus, and storage medium
JP7191061B2 (ja) Liveness test method and apparatus
CN110287776B (zh) Face recognition method and apparatus, and computer-readable storage medium
US9449217B1 (en) Image authentication
KR102223478B1 (ko) Eye state detection system using a deep learning model for eye state detection, and operating method thereof
CN111339897B (zh) Liveness recognition method and apparatus, computer device, and storage medium
CN112232323A (zh) Face verification method and apparatus, computer device, and storage medium
CN114387548A (zh) Video and liveness detection method, system, device, storage medium, and program product
KR20140074905A (ko) Identification by iris recognition
TWI727514B (zh) Fingerprint identification method and apparatus, storage medium, and terminal
CN112101296A (zh) Face registration method, face verification method, apparatus, and system
CN108875472B (zh) Image capture apparatus and face identity verification method based on the image capture apparatus
CN111723626A (zh) Method, apparatus, and electronic device for liveness detection
CN111274899B (zh) Face matching method and apparatus, electronic device, and storage medium
CN112183454A (zh) Image detection method and apparatus, storage medium, and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20844302

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20844302

Country of ref document: EP

Kind code of ref document: A1